[pytables] 01/11: Imported Upstream version 3.1.0

Antonio Valentino a_valentino-guest@moszumanska.debian.org
Fri Feb 21 07:57:40 UTC 2014


This is an automated email from the git hooks/post-receive script.

a_valentino-guest pushed a commit to branch master
in repository pytables.

commit f4868012ae29efc9308aaa0971668714fe189211
Author: Antonio Valentino <antonio.valentino@tiscali.it>
Date:   Fri Feb 21 07:48:04 2014 +0000

    Imported Upstream version 3.1.0
---
 .travis.yml                                        |    3 +-
 ANNOUNCE.txt.in                                    |   66 +-
 LICENSE.txt                                        |    2 +-
 LICENSE.txt => LICENSES/LZ4.txt                    |   31 +-
 LICENSE.txt => LICENSES/SNAPPY.txt                 |   27 +-
 LICENSES/STDINT.txt                                |    7 +-
 LICENSES/WIN32PTHREADS.txt                         |   19 +
 LICENSES/ZLIB.txt                                  |   22 +
 MANIFEST.in                                        |   12 +-
 Makefile                                           |    2 +
 README.txt                                         |    9 +-
 RELEASE_NOTES.txt                                  |  393 ++--
 VERSION                                            |    2 +-
 bench/LRU-experiments.py                           |   55 +-
 bench/LRU-experiments2.py                          |   30 +-
 bench/LRUcache-node-bench.py                       |   46 +-
 bench/blosc.py                                     |   40 +-
 bench/bsddb-table-bench.py                         |  143 +-
 bench/cacheout.py                                  |    8 +-
 bench/chunkshape-bench.py                          |   58 +-
 bench/chunkshape-testing.py                        |   62 +-
 bench/collations.py                                |   46 +-
 bench/copy-bench.py                                |   15 +-
 bench/create-large-number-objects.py               |    9 +-
 bench/deep-tree-h5py.py                            |   34 +-
 bench/deep-tree.py                                 |   70 +-
 bench/evaluate.py                                  |   43 +-
 bench/expression.py                                |   63 +-
 bench/get-figures-ranges.py                        |   17 +-
 bench/get-figures.py                               |   27 +-
 bench/indexed_search.py                            |  182 +-
 bench/keysort.py                                   |   31 +-
 bench/lookup_bench.py                              |   77 +-
 bench/open_close-bench-gzip.h5                     |  Bin 22472 -> 0 bytes
 bench/open_close-bench.py                          |   76 +-
 bench/optimal-chunksize.py                         |   73 +-
 bench/plot-bar.py                                  |   20 +-
 bench/poly.py                                      |   86 +-
 bench/postgres-search-bench.py                     |   77 +-
 bench/postgres_backend.py                          |   53 +-
 bench/pytables-search-bench.py                     |  130 +-
 bench/pytables_backend.py                          |   65 +-
 bench/recarray2-test.py                            |  114 +-
 bench/search-bench-plot.py                         |   41 +-
 bench/search-bench.py                              |  238 +--
 bench/searchsorted-bench.py                        |  161 +-
 bench/searchsorted-bench2.py                       |  159 +-
 bench/shelve-bench.py                              |  119 +-
 bench/sqlite-search-bench.py                       |  209 ++-
 bench/sqlite3-search-bench.py                      |   68 +-
 bench/stress-test.py                               |  170 +-
 bench/stress-test2.py                              |   95 +-
 bench/stress-test3.py                              |  111 +-
 bench/table-bench.py                               |  203 +-
 bench/table-copy.py                                |  226 +--
 bench/undo_redo.py                                 |   89 +-
 bench/widetree.py                                  |   86 +-
 bench/widetree2.py                                 |   46 +-
 c-blosc/.gitignore                                 |    1 +
 c-blosc/.mailmap                                   |    4 +
 c-blosc/.travis.yml                                |   12 +
 c-blosc/ANNOUNCE.rst                               |   68 +
 c-blosc/CMakeLists.txt                             |  207 ++
 c-blosc/LICENSES/BLOSC.txt                         |   23 +
 c-blosc/LICENSES/FASTLZ.txt                        |   24 +
 LICENSE.txt => c-blosc/LICENSES/H5PY.txt           |   15 +-
 LICENSE.txt => c-blosc/LICENSES/LZ4.txt            |   31 +-
 LICENSE.txt => c-blosc/LICENSES/SNAPPY.txt         |   27 +-
 {LICENSES => c-blosc/LICENSES}/STDINT.txt          |    0
 c-blosc/LICENSES/ZLIB.txt                          |   22 +
 c-blosc/README.rst                                 |  286 +++
 c-blosc/README_HEADER.rst                          |   66 +
 c-blosc/README_THREADED.rst                        |   33 +
 c-blosc/RELEASE_NOTES.rst                          |  318 ++++
 c-blosc/RELEASING.rst                              |  102 +
 c-blosc/bench/CMakeLists.txt                       |   72 +
 c-blosc/bench/Makefile                             |   40 +
 c-blosc/bench/Makefile.mingw                       |   45 +
 c-blosc/bench/bench.c                              |  539 ++++++
 c-blosc/bench/plot-speeds.py                       |  197 ++
 c-blosc/blosc/CMakeLists.txt                       |  104 ++
 {blosc => c-blosc/blosc}/blosc.c                   |  544 +++++-
 {blosc => c-blosc/blosc}/blosc.h                   |  194 +-
 {blosc => c-blosc/blosc}/blosclz.c                 |   19 +-
 {blosc => c-blosc/blosc}/blosclz.h                 |    0
 c-blosc/blosc/config.h.in                          |    9 +
 {blosc => c-blosc/blosc}/shuffle.c                 |   16 +-
 {blosc => c-blosc/blosc}/shuffle.h                 |    0
 {blosc => c-blosc/blosc}/win32/pthread.c           |   20 +-
 {blosc => c-blosc/blosc}/win32/pthread.h           |   26 +-
 {blosc => c-blosc/blosc}/win32/stdint-windows.h    |   22 +-
 c-blosc/cmake/FindLZ4.cmake                        |   10 +
 c-blosc/cmake/FindSnappy.cmake                     |   10 +
 c-blosc/cmake_uninstall.cmake.in                   |   22 +
 c-blosc/hdf5/CMakeLists.txt                        |   38 +
 c-blosc/hdf5/README.rst                            |   62 +
 {blosc => c-blosc/hdf5}/blosc_filter.c             |   46 +-
 {blosc => c-blosc/hdf5}/blosc_filter.h             |    6 +-
 c-blosc/hdf5/example.c                             |  126 ++
 .../internal-complibs/lz4-r110/add-version.patch   |   14 +
 c-blosc/internal-complibs/lz4-r110/lz4.c           |  865 +++++++++
 c-blosc/internal-complibs/lz4-r110/lz4.h           |  252 +++
 c-blosc/internal-complibs/lz4-r110/lz4hc.c         |  856 +++++++++
 c-blosc/internal-complibs/lz4-r110/lz4hc.h         |  157 ++
 .../snappy-1.1.1/add-version.patch                 |   19 +
 c-blosc/internal-complibs/snappy-1.1.1/msvc1.patch |   17 +
 c-blosc/internal-complibs/snappy-1.1.1/msvc2.patch |   27 +
 c-blosc/internal-complibs/snappy-1.1.1/snappy-c.cc |   90 +
 c-blosc/internal-complibs/snappy-1.1.1/snappy-c.h  |  146 ++
 .../snappy-1.1.1/snappy-internal.h                 |  150 ++
 .../snappy-1.1.1/snappy-sinksource.cc              |   71 +
 .../snappy-1.1.1/snappy-sinksource.h               |  137 ++
 .../snappy-1.1.1/snappy-stubs-internal.cc          |   42 +
 .../snappy-1.1.1/snappy-stubs-internal.h           |  491 +++++
 .../snappy-1.1.1/snappy-stubs-public.h             |  111 ++
 c-blosc/internal-complibs/snappy-1.1.1/snappy.cc   | 1306 +++++++++++++
 c-blosc/internal-complibs/snappy-1.1.1/snappy.h    |  192 ++
 c-blosc/internal-complibs/zlib-1.2.8/adler32.c     |  179 ++
 c-blosc/internal-complibs/zlib-1.2.8/compress.c    |   80 +
 c-blosc/internal-complibs/zlib-1.2.8/crc32.c       |  425 +++++
 c-blosc/internal-complibs/zlib-1.2.8/crc32.h       |  441 +++++
 c-blosc/internal-complibs/zlib-1.2.8/deflate.c     | 1967 ++++++++++++++++++++
 c-blosc/internal-complibs/zlib-1.2.8/deflate.h     |  346 ++++
 c-blosc/internal-complibs/zlib-1.2.8/gzclose.c     |   25 +
 c-blosc/internal-complibs/zlib-1.2.8/gzguts.h      |  209 +++
 c-blosc/internal-complibs/zlib-1.2.8/gzlib.c       |  634 +++++++
 c-blosc/internal-complibs/zlib-1.2.8/gzread.c      |  594 ++++++
 c-blosc/internal-complibs/zlib-1.2.8/gzwrite.c     |  577 ++++++
 c-blosc/internal-complibs/zlib-1.2.8/infback.c     |  640 +++++++
 c-blosc/internal-complibs/zlib-1.2.8/inffast.c     |  340 ++++
 c-blosc/internal-complibs/zlib-1.2.8/inffast.h     |   11 +
 c-blosc/internal-complibs/zlib-1.2.8/inffixed.h    |   94 +
 c-blosc/internal-complibs/zlib-1.2.8/inflate.c     | 1512 +++++++++++++++
 c-blosc/internal-complibs/zlib-1.2.8/inflate.h     |  122 ++
 c-blosc/internal-complibs/zlib-1.2.8/inftrees.c    |  306 +++
 c-blosc/internal-complibs/zlib-1.2.8/inftrees.h    |   62 +
 c-blosc/internal-complibs/zlib-1.2.8/trees.c       | 1226 ++++++++++++
 c-blosc/internal-complibs/zlib-1.2.8/trees.h       |  128 ++
 c-blosc/internal-complibs/zlib-1.2.8/uncompr.c     |   59 +
 c-blosc/internal-complibs/zlib-1.2.8/zconf.h       |  511 +++++
 c-blosc/internal-complibs/zlib-1.2.8/zlib.h        | 1768 ++++++++++++++++++
 c-blosc/internal-complibs/zlib-1.2.8/zutil.c       |  324 ++++
 c-blosc/internal-complibs/zlib-1.2.8/zutil.h       |  253 +++
 c-blosc/tests/.gitignore                           |    1 +
 c-blosc/tests/CMakeLists.txt                       |   14 +
 c-blosc/tests/Makefile                             |   46 +
 c-blosc/tests/print_versions.c                     |   32 +
 c-blosc/tests/test_all.sh                          |   14 +
 c-blosc/tests/test_api.c                           |  103 +
 c-blosc/tests/test_basics.c                        |  141 ++
 c-blosc/tests/test_common.h                        |   40 +
 doc/scripts/filenode.py                            |   29 +-
 doc/scripts/pickletrouble.py                       |   18 +-
 doc/scripts/tutorial1.py                           |  104 +-
 doc/source/FAQ.rst                                 |   21 +-
 doc/source/conf.py                                 |    4 +-
 doc/source/cookbook/custom_data_types.rst          |    9 +-
 doc/source/cookbook/hints_for_sql_users.rst        |   75 +-
 doc/source/cookbook/inmemory_hdf5_files.rst        |   32 +-
 doc/source/cookbook/tailoring_atexit_hooks.rst     |   25 +-
 doc/source/project_pointers.rst                    |    4 +-
 doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst  |  304 ++-
 ...E_NOTES_v3.0.x.rst => RELEASE_NOTES_v3.1.x.rst} |    0
 doc/source/release_notes.rst                       |    4 +
 doc/source/usersguide/bibliography.rst             |    2 +-
 doc/source/usersguide/filenode.rst                 |   22 +-
 doc/source/usersguide/index.rst                    |    2 +-
 doc/source/usersguide/installation.rst             |  128 +-
 .../usersguide/libref/homogenous_storage.rst       |    2 +
 .../usersguide/libref/structured_storage.rst       |    2 +
 doc/source/usersguide/optimization.rst             |   39 +-
 doc/source/usersguide/parameter_files.rst          |    3 +
 doc/source/usersguide/tutorials.rst                |  192 +-
 doc/source/usersguide/usersguide.rst               |    2 +-
 doc/source/usersguide/utilities.rst                |    6 +-
 examples/add-column.py                             |   42 +-
 examples/array1.py                                 |   29 +-
 examples/array2.py                                 |   33 +-
 examples/array3.py                                 |   39 +-
 examples/array4.py                                 |   31 +-
 examples/attributes1.py                            |   11 +-
 examples/carray1.py                                |    5 +-
 examples/check_examples.sh                         |   32 +-
 examples/earray1.py                                |    7 +-
 examples/earray2.py                                |   70 +-
 examples/enum.py                                   |   22 +-
 examples/filenodes1.py                             |   24 +-
 examples/index.py                                  |   30 +-
 examples/inmemory.py                               |   52 +
 examples/links.py                                  |   21 +-
 examples/multiprocess_access_benchmarks.py         |   13 +-
 examples/multiprocess_access_queues.py             |   35 +-
 examples/nested-iter.py                            |    9 +-
 examples/nested-tut.py                             |  148 +-
 examples/nested1.py                                |   79 +-
 examples/objecttree.py                             |   24 +-
 examples/particles.py                              |  128 +-
 examples/read_array_out_arg.py                     |    6 +-
 examples/split.py                                  |   38 +
 examples/table-tree.py                             |  266 +--
 examples/table1.py                                 |   69 +-
 examples/table2.py                                 |   58 +-
 examples/table3.py                                 |   55 +-
 examples/tutorial1-1.py                            |  103 +-
 examples/tutorial1-2.py                            |  234 +--
 examples/tutorial2.py                              |   84 +-
 examples/tutorial3-1.py                            |    2 +-
 examples/tutorial3-2.py                            |    2 +-
 examples/undo-redo.py                              |   10 +-
 examples/vlarray1.py                               |   27 +-
 examples/vlarray2.py                               |   82 +-
 examples/vlarray3.py                               |   17 +-
 examples/vlarray4.py                               |   17 +-
 setup.py                                           |  164 +-
 src/H5ARRAY.c                                      |   20 +-
 src/H5TB-opt.c                                     |   18 +-
 src/H5VLARRAY.c                                    |   18 +-
 src/idx-opt.c                                      |   50 +-
 src/utils.c                                        |   93 +-
 subtree-merge-blosc.sh                             |   43 +
 tables/__init__.py                                 |   44 +-
 tables/_comp_bzip2.pyx                             |    3 +-
 tables/_comp_lzo.pyx                               |    3 +-
 tables/_past.py                                    |   15 +-
 tables/array.py                                    |   50 +-
 tables/atom.py                                     |   39 +-
 tables/attributeset.py                             |   19 +-
 tables/carray.py                                   |    6 +-
 tables/conditions.py                               |   49 +-
 tables/definitions.pxd                             |    4 +-
 tables/description.py                              |   68 +-
 tables/earray.py                                   |    2 +-
 tables/exceptions.py                               |   19 +-
 tables/expression.py                               |   18 +-
 tables/file.py                                     |  749 +++++---
 tables/filters.py                                  |  103 +-
 tables/flavor.py                                   |   10 +-
 tables/group.py                                    |  111 +-
 tables/hdf5Extension.py                            |    2 +-
 tables/hdf5extension.pyx                           |   82 +-
 tables/idxutils.py                                 |   23 +-
 tables/index.py                                    |   48 +-
 tables/indexes.py                                  |    6 +-
 tables/indexesExtension.py                         |    2 +-
 tables/indexesextension.pyx                        |  404 ++--
 tables/leaf.py                                     |   26 +-
 tables/link.py                                     |   28 +-
 tables/linkExtension.py                            |    2 +-
 tables/linkextension.pyx                           |    4 +-
 tables/lrucacheExtension.py                        |    2 +-
 tables/lrucacheextension.pxd                       |    3 +-
 tables/lrucacheextension.pyx                       |   27 +-
 tables/misc/enum.py                                |   40 +-
 tables/misc/proxydict.py                           |    2 +-
 tables/node.py                                     |   59 +-
 tables/nodes/filenode.py                           |   36 +-
 tables/nodes/tests/__init__.py                     |    2 +-
 tables/nodes/tests/test_filenode.py                |    8 +-
 tables/parameters.py                               |   49 +-
 tables/path.py                                     |    1 +
 tables/scripts/__init__.py                         |    3 +-
 tables/scripts/pt2to3.py                           |    6 +-
 tables/scripts/ptdump.py                           |  155 +-
 tables/scripts/ptrepack.py                         |  449 ++---
 tables/table.py                                    |  128 +-
 tables/tableExtension.py                           |    2 +-
 tables/tableextension.pyx                          |   45 +-
 tables/tests/__init__.py                           |    8 +-
 tables/tests/check_leaks.py                        |  281 ++-
 tables/tests/common.py                             |   97 +-
 tables/tests/test_all.py                           |   69 +-
 tables/tests/test_array.py                         |  455 ++---
 tables/tests/test_attributes.py                    |  412 ++--
 tables/tests/test_backcompat.py                    |   22 +-
 tables/tests/test_basics.py                        |  618 +++---
 tables/tests/test_carray.py                        |  477 +++--
 tables/tests/test_create.py                        |  618 ++++--
 tables/tests/test_do_undo.py                       |  302 +--
 tables/tests/test_earray.py                        |  448 ++---
 tables/tests/test_enum.py                          |  106 +-
 tables/tests/test_expression.py                    |  247 +--
 tables/tests/test_garbage.py                       |    5 +-
 tables/tests/test_hdf5compat.py                    |   40 +-
 tables/tests/test_index_backcompat.py              |   50 +-
 tables/tests/test_indexes.py                       |  699 +++----
 tables/tests/test_indexvalues.py                   | 1340 +++++++------
 tables/tests/test_links.py                         |  122 +-
 tables/tests/test_lists.py                         |  111 +-
 tables/tests/test_nestedtypes.py                   |  235 +--
 tables/tests/test_numpy.py                         |  371 ++--
 tables/tests/test_queries.py                       |  102 +-
 tables/tests/test_tables.py                        | 1291 +++++++------
 tables/tests/test_tablesMD.py                      |  341 ++--
 tables/tests/test_timetype.py                      |   49 +-
 tables/tests/test_tree.py                          |  206 +-
 tables/tests/test_types.py                         |   96 +-
 tables/tests/test_vlarray.py                       | 1191 ++++++------
 tables/unimplemented.py                            |    4 +-
 tables/utils.py                                    |   51 +-
 tables/utilsExtension.py                           |    2 +-
 tables/utilsextension.pyx                          |  143 +-
 tables/vlarray.py                                  |   35 +-
 302 files changed, 33470 insertions(+), 9458 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 8cea274..e70bd59 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -7,11 +7,12 @@ python:
   - "2.6"
   - "2.7"
   - "3.2"
+  - "3.3"
 
 before_install:
     - sudo apt-get update -qq
     - sudo apt-get install -qq libhdf5-serial-dev liblzo2-dev libbz2-dev python3-numpy
+    - if [[ $TRAVIS_PYTHON_VERSION == '3.3' ]]; then pip install -U "numpy>=1.4.1" --use-mirrors; fi
     - pip install -r requirements.txt --use-mirrors
 
 script: "make check"
-
diff --git a/ANNOUNCE.txt.in b/ANNOUNCE.txt.in
index 83f3b25..fbee1ca 100644
--- a/ANNOUNCE.txt.in
+++ b/ANNOUNCE.txt.in
@@ -4,45 +4,43 @@
 
 We are happy to announce PyTables @VERSION@.
 
-PyTables @VERSION@ comes after about 5 years from the last major release
-(2.0) and 7 months since the last stable release (2.4.0).
-
-This is new major release and an important milestone for the PyTables project
-since it provides the long waited support for Python 3.x, which has been around
-for 4 years.
-
-Almost all of the core numeric/scientific packages for Python already support
-Python 3 so we are very happy that now also PyTables can provide this
-important feature.
+This is a feature release.  Upgrading is recommended for users running
+PyTables in production environments.
 
 
 What's new
 ==========
 
-A short summary of main new features:
-
-- Since this release, PyTables now provides full support to Python 3
-- The entire code base is now more compliant with coding style guidelines
-  described in PEP8.
-- Basic support for HDF5 drivers.  It now is possible to open/create an
-  HDF5 file using one of the SEC2, DIRECT, LOG, WINDOWS, STDIO or CORE
-  drivers.
-- Basic support for in-memory image files.  An HDF5 file can be set from or
-  copied into a memory buffer.
-- Implemented methods to get/set the user block size in a HDF5 file.
-- All read methods now have an optional *out* argument that allows to pass a
-  pre-allocated array to store data.
-- Added support for the floating point data types with extended precision
-  (Float96, Float128, Complex192 and Complex256).
-- Consistent ``create_xxx()`` signatures.  Now it is possible to create all
-  data sets Array, CArray, EArray, VLArray, and Table from existing Python
-  objects.
-- Complete rewrite of the `nodes.filenode` module. Now it is fully
-  compliant with the interfaces defined in the standard `io` module.
-  Only non-buffered binary I/O is supported currently.
-
-Please refer to the RELEASE_NOTES document for a more detailed list of
-changes in this release.
+Probably the most relevant changes in this release are internal: the new node
+cache is now compatible with the upcoming Python 3.4, and the registry of
+open files has been deeply reworked.  The caching of file handles has been
+dropped altogether, so PyTables is now a little more "thread friendly".
+
+New user-visible features include:
+
+- a new lossy filter for HDF5 datasets (EArray, CArray, VLArray and Table
+  objects).  The *quantization* filter truncates floating point data to a
+  specified precision before writing it to disk, which can significantly
+  improve the performance of compressors (see the sketch below; many thanks
+  to Andreas Hilboll).
+- support for the H5FD_SPLIT HDF5 driver (thanks to simleo)
+- all new features introduced in the Blosc_ 1.3.x series, and in particular
+  the ability to leverage different compressors within Blosc_, are now
+  available in PyTables via the blosc filter (a big thank you to Francesc)
+- the ability to save/restore the default value of :class:`EnumAtom` types
+
+Also, installations of the HDF5 library that have broken support for the
+*long double* data type (see the `Issues with H5T_NATIVE_LDOUBLE`_ thread on
+the HDF5 forum) are detected by PyTables @VERSION@ and the corresponding
+features are automatically disabled.
+
+Users that need support for the *long double* data type should make sure to
+build PyTables against an installation of the HDF5 library that is not
+affected by the bug.
+
+.. _`Issues with H5T_NATIVE_LDOUBLE`:
+    http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
 
 As always, a large number of bugs have been addressed and squashed as well.
 
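A minimal sketch of enabling the quantization filter mentioned above, assuming
it is exposed through the ``least_significant_digit`` argument of the
:class:`Filters` class (file and node names here are illustrative)::

    import numpy as np
    import tables

    # Truncate floats to roughly 3 decimal digits of precision before they
    # are written to disk; this is lossy, but usually compresses much better.
    filters = tables.Filters(complevel=1, complib='zlib',
                             least_significant_digit=3)
    with tables.open_file('quantized.h5', mode='w') as h5f:
        h5f.create_carray(h5f.root, 'noisy', obj=np.random.random(1000),
                          filters=filters)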
diff --git a/LICENSE.txt b/LICENSE.txt
index dbd08e5..51d4e29 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -3,7 +3,7 @@ Copyright Notice and Statement for PyTables Software Library and Utilities:
 Copyright (c) 2002-2004 by Francesc Alted
 Copyright (c) 2005-2007 by Carabos Coop. V.
 Copyright (c) 2008-2010 by Francesc Alted
-Copyright (c) 2011-2013 by PyTables maintainers
+Copyright (c) 2011-2014 by PyTables maintainers
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
diff --git a/LICENSE.txt b/LICENSES/LZ4.txt
similarity index 53%
copy from LICENSE.txt
copy to LICENSES/LZ4.txt
index dbd08e5..39784cb 100644
--- a/LICENSE.txt
+++ b/LICENSES/LZ4.txt
@@ -1,26 +1,18 @@
-Copyright Notice and Statement for PyTables Software Library and Utilities:
+LZ4 - Fast LZ compression algorithm
 
-Copyright (c) 2002-2004 by Francesc Alted
-Copyright (c) 2005-2007 by Carabos Coop. V.
-Copyright (c) 2008-2010 by Francesc Alted
-Copyright (c) 2011-2013 by PyTables maintainers
-All rights reserved.
+Copyright (C) 2011-2013, Yann Collet.
+BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are
 met:
 
-a. Redistributions of source code must retain the above copyright
-   notice, this list of conditions and the following disclaimer.
-
-b. Redistributions in binary form must reproduce the above copyright
-   notice, this list of conditions and the following disclaimer in the
-   documentation and/or other materials provided with the
-   distribution.
-
-c. Neither the name of Francesc Alted nor the names of its
-   contributors may be used to endorse or promote products derived
-   from this software without specific prior written permission.
+    * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
 
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
@@ -33,3 +25,8 @@ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+You can contact the author at :
+- LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
+- LZ4 source repository : http://code.google.com/p/lz4/
+
diff --git a/LICENSE.txt b/LICENSES/SNAPPY.txt
similarity index 54%
copy from LICENSE.txt
copy to LICENSES/SNAPPY.txt
index dbd08e5..8d6bd9f 100644
--- a/LICENSE.txt
+++ b/LICENSES/SNAPPY.txt
@@ -1,26 +1,19 @@
-Copyright Notice and Statement for PyTables Software Library and Utilities:
-
-Copyright (c) 2002-2004 by Francesc Alted
-Copyright (c) 2005-2007 by Carabos Coop. V.
-Copyright (c) 2008-2010 by Francesc Alted
-Copyright (c) 2011-2013 by PyTables maintainers
+Copyright 2011, Google Inc.
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are
 met:
 
-a. Redistributions of source code must retain the above copyright
-   notice, this list of conditions and the following disclaimer.
-
-b. Redistributions in binary form must reproduce the above copyright
-   notice, this list of conditions and the following disclaimer in the
-   documentation and/or other materials provided with the
-   distribution.
-
-c. Neither the name of Francesc Alted nor the names of its
-   contributors may be used to endorse or promote products derived
-   from this software without specific prior written permission.
+    * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+    * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
 
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
diff --git a/LICENSES/STDINT.txt b/LICENSES/STDINT.txt
index 7e9941a..486e694 100644
--- a/LICENSES/STDINT.txt
+++ b/LICENSES/STDINT.txt
@@ -1,4 +1,4 @@
-Copyright (c) 2006-2008 Alexander Chemeris
+Copyright (c) 2006-2013 Alexander Chemeris
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
@@ -10,8 +10,9 @@ modification, are permitted provided that the following conditions are met:
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
 
-  3. The name of the author may be used to endorse or promote products
-     derived from this software without specific prior written permission.
+  3. Neither the name of the product nor the names of its contributors may
+     be used to endorse or promote products derived from this software
+     without specific prior written permission.
 
 THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
 WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
diff --git a/LICENSES/WIN32PTHREADS.txt b/LICENSES/WIN32PTHREADS.txt
new file mode 100644
index 0000000..bd5ced5
--- /dev/null
+++ b/LICENSES/WIN32PTHREADS.txt
@@ -0,0 +1,19 @@
+Copyright (C) 2009 Andrzej K. Haczewski <ahaczewski@gmail.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/LICENSES/ZLIB.txt b/LICENSES/ZLIB.txt
new file mode 100644
index 0000000..5d74f5c
--- /dev/null
+++ b/LICENSES/ZLIB.txt
@@ -0,0 +1,22 @@
+Copyright notice:
+
+ (C) 1995-2013 Jean-loup Gailly and Mark Adler
+
+  This software is provided 'as-is', without any express or implied
+  warranty.  In no event will the authors be held liable for any damages
+  arising from the use of this software.
+
+  Permission is granted to anyone to use this software for any purpose,
+  including commercial applications, and to alter it and redistribute it
+  freely, subject to the following restrictions:
+
+  1. The origin of this software must not be misrepresented; you must not
+     claim that you wrote the original software. If you use this software
+     in a product, an acknowledgment in the product documentation would be
+     appreciated but is not required.
+  2. Altered source versions must be plainly marked as such, and must not be
+     misrepresented as being the original software.
+  3. This notice may not be removed or altered from any source distribution.
+
+  Jean-loup Gailly        Mark Adler
+  jloup@gzip.org          madler@alumni.caltech.edu
diff --git a/MANIFEST.in b/MANIFEST.in
index c974641..7763bbf 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -6,12 +6,20 @@ recursive-include tables *.py *.pyx *.pxd *.c
 recursive-include tables/tests *.h5 *.mat
 recursive-include tables/nodes/tests *.h5 *.dat *.xbm
 recursive-include src *.c *.h Makefile
-recursive-include blosc *.c *.h
+
+include c-blosc/hdf5/blosc_filter.?
+recursive-include c-blosc/blosc *.c *.h
+recursive-include c-blosc/internal-complibs *.c *.cc *.h
 
 recursive-include LICENSES *
 recursive-include utils *
-recursive-include doc *.rst *.html *.js *.css *.*_t *.png *.ico *.conf *.py Makefile make.bat
+include doc/Makefile doc/make.bat
 #include doc/*.pdf
+recursive-include doc *.rst *.conf *.py *.*_t
+recursive-include doc *.html *.js *.css *.png *.ico
+recursive-include doc/source *.pdf objecttree.svg
+#recursive-include doc/source *.pdf *.svg
+recursive-include doc/html *.txt *.svg *.gif *.inv
 recursive-include doc/scripts *.py
 recursive-include doc/sphinxext *
 recursive-exclude doc/build *
diff --git a/Makefile b/Makefile
index 5ad6a45..c421bc4 100644
--- a/Makefile
+++ b/Makefile
@@ -25,6 +25,8 @@ dist:		all
 
 clean:
 	rm -rf MANIFEST build dist tmp tables/__pycache__
+	rm -rf bench/*.h5 bench/*.prof
+	rm -rf examples/*.h5 examples/raw
 	rm -f $(GENERATED) tables/*.so a.out
 	find . '(' -name '*.py[co]' -o -name '*~' ')' -exec rm '{}' ';'
 	for srcdir in $(SRCDIRS) ; do $(MAKE) -C $$srcdir $(OPT) $@ ; done
diff --git a/README.txt b/README.txt
index 50a0aee..6722e63 100644
--- a/README.txt
+++ b/README.txt
@@ -84,8 +84,8 @@ and bzip2 compression libraries support you will also need recent
 versions of them. LZO and bzip2 compression libraries are, however,
 optional.
 
-We've tested this PyTables version with HDF5 1.8.4/1.8.10, NumPy 1.4.1
-and Numexpr 2.0, and you *need* to use these versions, or higher, to
+We've tested this PyTables version with HDF5 1.8.11/1.8.12, NumPy 1.7.1/1.8.0
+and Numexpr 2.2.2, and you *need* to use these versions, or higher, to
 make use of PyTables.
 
 Installation
@@ -99,8 +99,9 @@ available in Chapter 2 of the User's Manual (``doc/usersguide.pdf`` or
 http://www.pytables.org/moin/HowToUse).
 
 1. First, make sure that you have HDF5, NumPy and Numexpr installed
-   (you will need at least HDF5 1.8.4, NumPy 1.4.1 and Numexpr
-   2.0). If don't, get them from http://www.hdfgroup.org/HDF5/,
+   (you will need at least HDF5 1.8.4, NumPy 1.4.1 and Numexpr 2.0;
+   HDF5 >= 1.8.7 is strongly recommended).
+   If you don't have them, get them from http://www.hdfgroup.org/HDF5/,
    http://www.numpy.org and http://code.google.com/p/numexpr.
    Compile/install them.
 
diff --git a/RELEASE_NOTES.txt b/RELEASE_NOTES.txt
index 2eaf2a6..0498084 100644
--- a/RELEASE_NOTES.txt
+++ b/RELEASE_NOTES.txt
@@ -1,5 +1,5 @@
 =======================================
- Release notes for PyTables 3.0 series
+ Release notes for PyTables 3.1 series
 =======================================
 
 :Author: PyTables Developers
@@ -8,287 +8,196 @@
 .. py:currentmodule:: tables
 
 
-Changes from 2.4 to 3.0
-=======================
+Changes from 3.0 to 3.1.0
+=========================
 
 New features
 ------------
 
-- Since this release PyTables provides full support to Python_ 3
-  (closes :issue:`188`).
-
-- The entire code base is now more compliant with coding style guidelines
-  describe in the PEP8_ (closes :issue:`103` and :issue:`224`).
-  See `API changes`_ for more details.
-
-- Basic support for HDF5 drivers.  Now it is possible to open/create an
-  HDF5 file using one of the SEC2, DIRECT, LOG, WINDOWS, STDIO or CORE
-  drivers.  Users can also set the main driver parameters (closes
-  :issue:`166`).
-  Thanks to Michal Slonina.
-
-- Basic support for in-memory image files.  An HDF5 file can be set from or
-  copied into a memory buffer (thanks to Michal Slonina).  This feature is
-  only available if PyTables is built against HDF5 1.8.9 or newer.
-  Closes :issue:`165` and :issue:`173`.
-
-- New :meth:`File.get_filesize` method for retrieving the HDF5 file size.
-
-- Implemented methods to get/set the user block size in a HDF5 file
-  (closes :issue:`123`)
-
-- Improved support for PyInstaller_.  Now it is easier to pack frozen
-  applications that use the PyTables package (closes: :issue:`177`).
-  Thanks to Stuart Mentzer and Christoph Gohlke.
-
-- All read methods now have an optional *out* argument that allows to pass a
-  pre-allocated array to store data (closes :issue:`192`)
-
-- Added support for the floating point data types with extended precision
-  (Float96, Float128, Complex192 and Complex256).  This feature is only
-  available if numpy_ provides it as well.
-  Closes :issue:`51` and :issue:`214`.  Many thanks to Andrea Bedini.
-
-- Consistent ``create_xxx()`` signatures.  Now it is possible to create all
-  data sets :class:`Array`, :class:`CArray`, :class:`EArray`,
-  :class:`VLArray`, and :class:`Table` from existing Python objects (closes
-  :issue:`61` and :issue:`249`).  See also the `API changes`_ section.
-
-- Complete rewrite of the :mod:`nodes.filenode` module. Now it is fully
-  compliant with the interfaces defined in the standard :mod:`io` module.
-  Only non-buffered binary I/O is supported currently.
-  See also the `API changes`_ section.  Closes :issue:`244`.
-
-- New :program:`pt2to3` tool is provided to help users to port their
-  applications to the new API (see `API changes`_ section).
+- Now PyTables is able to save/restore the default value of :class:`EnumAtom`
+  types (closes :issue:`234`).
+- Implemented support for the H5FD_SPLIT driver (closes :issue:`288`,
+  :issue:`289` and :issue:`295`). Many thanks to simleo.
+- New quantization filter: the filter truncates floating point data to a
+  specified precision before writing to disk. This can significantly improve
+  the performance of compressors (closes :issue:`261`).
+  Thanks to Andreas Hilboll.
+- Added new :meth:`VLArray.get_row_size` method to :class:`VLArray` for
+  querying the number of atoms of a :class:`VLArray` row.
+  Closes :issue:`24` and :issue:`315`.
+- The internal Blosc_ library has been updated to version 1.3.2.
+  All new features introduced in the Blosc_ 1.3.x series, and in particular
+  the ability to leverage different compressors within Blosc_ (see the `Blosc
+  Release Notes`_), are now available in PyTables via the blosc filter
+  (closes :issue:`324`; a usage sketch follows this list).  A big thank you
+  to Francesc.
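A minimal sketch of the compressor selection mentioned in the last item above,
assuming the ``complib='blosc:<codec>'`` spelling (file and node names are
illustrative)::

    import numpy as np
    import tables

    # Ask the Blosc meta-compressor to use the LZ4 codec internally.
    filters = tables.Filters(complevel=5, complib='blosc:lz4')
    with tables.open_file('blosc-lz4.h5', mode='w') as h5f:
        h5f.create_carray(h5f.root, 'data', obj=np.arange(1000000),
                          filters=filters)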
 
 
 Improvements
 ------------
 
-- Improved runtime checks on dynamic loading of libraries: meaningful error
-  messages are generated in case of failure.
-  Also, now PyTables no more alters the system PATH.
-  Closes :issue:`178` and :issue:`179` (thanks to Christoph Gohlke).
-
-- Improved list of search paths for libraries as suggested by Nicholaus
-  Halecky (see :issue:`219`).
-
-- Removed deprecated Cython_ include (.pxi) files. Contents of
-  :file:`convtypetables.pxi` have been moved in :file:`utilsextension.pyx`.
-  Closes :issue:`217`.
-
-- The internal Blosc_ library has been upgraded to version 1.2.3.
-
-- Pre-load the bzip2_ library on windows (closes :issue:`205`)
-
-- The :meth:`File.get_node` method now accepts unicode paths
-  (closes :issue:`203`)
-
-- Improved compatibility with Cython_ 0.19 (see :issue:`220` and
-  :issue:`221`)
-
-- Improved compatibility with numexpr_ 2.1 (see also :issue:`199` and
-  :issue:`241`)
-
-- Improved compatibility with development versions of numpy_
-  (see :issue:`193`)
-
-- Packaging: since this release the standard tar-ball package no more includes
-  the PDF version of the "PyTables User Guide", so it is a little bit smaller
-  now.  The complete and pre-build version of the documentation both in HTML
-  and PDF format is available on the file `download area`_ on SourceForge.net.
-  Closes: :issue:`172`.
-
-- Now PyTables also uses `Travis-CI`_ as continuous integration service.
-  All branches and all pull requests are automatically tested with different
-  Python_ versions.  Closes :issue:`212`.
-
-
-Other changes
--------------
-
-- PyTables now requires Python 2.6 or newer.
-
-- Minimum supported version of Numexpr_ is now 2.0.
-
-
-API changes
------------
-
-The entire PyTables API as been made more PEP8_ compliant (see :issue:`224`).
-
-This means that many methods, attributes, module global variables and also
-keyword parameters have been renamed to be compliant with PEP8_ style
-guidelines (e.g. the ``tables.hdf5Version`` constant has been renamed into
-``tables.hdf5_version``).
+- The node caching mechanism has been completely redesigned to be simpler and
+  less dependent on specific behaviours of the ``__del__`` method.
+  Now PyTables is compatible with the forthcoming Python 3.4.
+  Closes :issue:`306`.
+- PyTables no longer uses shared/cached file handlers. This change somewhat
+  improves support for concurrent reading, allowing the user to safely open
+  the same file in different threads for reading (requires HDF5 >= 1.8.7).
+  More details about this change can be found in the `Backward incompatible
+  changes`_ section.
+  See also :issue:`130`, :issue:`129`, :issue:`292` and :issue:`216`.
+- PyTables is now able to detect and use external installations of the Blosc_
+  library (closes :issue:`104`).  If Blosc_ is not found on the system and
+  the user does not specify a custom installation directory, an internal copy
+  of the Blosc_ source code is used.
+- Automatically disable extended float support if a buggy version of HDF5
+  is detected (see also `Issues with H5T_NATIVE_LDOUBLE`_).
+  See also :issue:`275`, :issue:`290` and :issue:`300`.
+- Documented an unexpected behaviour with string literals in query conditions
+  on Python 3 (closes :issue:`265`)
+- The deprecated :mod:`getopt` module has been dropped in favour of
+  :mod:`argparse` in all command line utilities (closes :issue:`251`)
+- Improved the installation section of the :doc:`../usersguide/index`.
+
+  * instructions for installing PyTables via pip_ have been added.
+  * added references to Anaconda_, Canopy_ and the `Christoph Gohlke suites`_
+    (closes :issue:`291`)
+
+- Enabled `Travis-CI`_ builds for Python_ 3.3
+- :meth:`Table.read_coordinates` now also accepts boolean indices as input
+  (see the sketch after this list).  Closes :issue:`287` and :issue:`298`.
+- Improved compatibility with numpy_ >= 1.8 (see :issue:`259`)
+- The code of the benchmark programs (bench directory) has been updated.
+  Closes :issue:`114`.
+- Fixed some warnings related to non-unicode file names (the Windows bytes
+  API has been deprecated in Python 3.4)
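A minimal sketch of the boolean indices input to
:meth:`Table.read_coordinates` (file and table names are placeholders)::

    import numpy as np
    import tables

    with tables.open_file('data.h5', mode='r') as h5f:
        tbl = h5f.root.mytable
        mask = np.zeros(tbl.nrows, dtype=bool)
        mask[::2] = True                   # select the even-numbered rows
        rows = tbl.read_coordinates(mask)  # same rows as passing the indices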
 
-We made the best effort to maintain compatibility to the old API for existing
-applications.  In most cases, the old 2.x API is still available and usable
-even if it is now deprecated (see the Deprecations_ section).
 
-The only important backwards incompatible API changes are for names of
-function/methods arguments.  All uses of keyword arguments should be
-checked and fixed to use the new naming convention.
-
-The new :program:`pt2to3` tool can be used to port PyTables based applications
-to the new API.
-
-Many deprecated features and support for obsolete modules has been dropped:
-
-- The deprecated :data:`is_pro` module constant has been removed
-
-- The nra module and support for the obsolete numarray module has been removed.
-  The *numarray* flavor is no more supported as well (closes :issue:`107`).
-
-- Support for the obsolete Numeric module has been removed.
-  The *numeric* flavor is no longer available (closes :issue:`108`).
-
-- The tables.netcdf3 module has been removed (closes :issue:`68`).
-
-- The deprecated :exc:`exceptions.Incompat16Warning` exception has been
-  removed
-
-- The :meth:`File.create_external_link` method no longer has a keyword
-  parameter named *warn16incompat*.  It was deprecated in PyTables 2.4.
-
-Moreover:
-
-- The :meth:`File.create_array`, :meth:`File.create_carray`,
-  :meth:`File.create_earray`, :meth:`File.create_vlarray`, and
-  :meth:`File.create_table` methods of the :class:`File` objects gained a
-  new (optional) keyword argument named ``obj``.  It can be used to initialize
-  the newly created dataset with an existing Python object, though normally
-  these are numpy_ arrays.
-
-  The *atom*/*descriptor* and *shape* parameters are now optional if the
-  *obj* argument is provided.
-
-- The :mod:`nodes.filenode` has been completely rewritten to be fully
-  compliant with the interfaces defined in the :mod:`io` module.
-
-  The FileNode classes currently implemented are intended for binary I/O.
-
-  Main changes:
-
-  * the FileNode base class is no more available,
-  * the new version of :class:`nodes.filenode.ROFileNode` and
-    :class:`nodes.filenode.RAFileNode` objects no more expose the *offset*
-    attribute (the *seek* and *tell* methods can be used instead),
-  * the *lineSeparator* property is no more available end the ``\n``
-    character is always used as line separator.
-
-- The `__version__` module constants has been removed from almost all the
-  modules (it was not used after the switch to Git).  Of course the package
-  level constant (:data:`tables.__version__`) still remains.
-  Closes :issue:`112`.
-
-- The :func:`lrange` has been dropped in favor of xrange (:issue:`181`)
-
-- The :data:`parameters.MAX_THREADS` configuration parameter has been dropped
-  in favor of :data:`parameters.MAX_BLOSC_THREADS` and
-  :data:`parameters.MAX_NUMEXPR_THREADS` (closes :issue:`147`).
-
-- The :func:`conditions.compile_condition` function no more has a *copycols*
-  argument, it was no more necessary since Numexpr_ 1.3.1.
-  Closes :issue:`117`.
+Bugs fixed
+----------
 
-- The *expectedsizeinMB* parameter of the :meth:`File.create_vlarray` and of
-  the :meth:`VLArrsy.__init__` methods has been replaced by *expectedrows*.
-  See also (:issue:`35`).
+- Fixed detection of platforms supporting Blosc_
+- Fixed a crash that occurred when attempting to write a numpy_ array to
+  an :class:`Atom` (closes :issue:`209` and :issue:`296`)
+- Prevented the creation of tables with no columns (closes :issue:`18` and
+  :issue:`299`)
+- Fixed a memory leak that occurred when iterating over
+  :class:`CArray`/:class:`EArray` objects (closes :issue:`308`,
+  see also :issue:`309`).
+  Many thanks to Alistair Muldal.
+- Made NaN values sort to the end. Closes :issue:`282` and :issue:`313`
+- Fixed selection on float columns when NaNs are present (closes :issue:`327`
+  and :issue:`330`)
+- Fixed the computation of the buffer size for iterations on rows.
+  The buffer size was overestimated, resulting in a :exc:`MemoryError`
+  in some cases.
+  Closes :issue:`316`. Thanks to bbudescu.
+- Added a stricter check of the file open mode. Closes :issue:`318`.
+- The Blosc filter now works correctly together with fletcher32.
+  Closes :issue:`21`.
+- The file handle is now closed before trying to delete the corresponding
+  file. This fixes a test failure on Windows.
+- Integer division is now used for computing indices (fixes some warnings
+  on Windows)
 
-- The :meth:`Table.whereAppend` method has been renamed into
-  :meth:`Table.append_where` (closes :issue:`248`).
 
-Please refer to the :doc:`../MIGRATING_TO_3.x` document for more details about
-API changes and for some useful hint about the migration process from the 2.X
-API to the new one.
+Deprecations
+------------
 
+Following the plan for the complete transition to the new (PEP8_ compliant)
+API, all calls to the old API will raise a :exc:`DeprecationWarning`.
 
-Other possibly incompatible changes
------------------------------------
+The new API was introduced in PyTables 3.0 and is backward incompatible.
+To guarantee a smoother transition, the old API is still usable, even though
+it is now deprecated.
 
-- All methods of the :class:`Table` class that take *start*, *stop* and
-  *step* parameters (including :meth:`Table.read`, :meth:`Table.where`,
-  :meth:`Table.iterrows`, etc) have been redesigned to have a consistent
-  behaviour.  The meaning of the *start*, *stop* and *step* and their default
-  values now always work exactly like in the standard :class:`slice` objects.
-  Closes :issue:`44` and :issue:`255`.
+The plan for the complete transition to the new API is outlined in
+:issue:`224`.
 
-- Unicode attributes are not stored in the HDF5 file as pickled string.
-  They are now saved on the HDF5 file as UTF-8 encoded strings.
 
-  Although this does not introduce any API breakage, files produced are
-  different (for unicode attributes) from the ones produced by earlier
-  versions of PyTables.
+Backward incompatible changes
+-----------------------------
 
-- System attributes are now stored in the HDF5 file using the character set
-  that reflects the native string behaviour: ASCII for Python 2 and UTF8 for
-  Python 3.  In any case, system attributes are represented as Python string.
+In PyTables <= 3.0, file handles (the objects returned by the
+:func:`open_file` function) were stored in an internal registry and re-used
+when possible.
 
-- The :meth:`iterrows` method of :class:`*Array` and :class:`Table` as well
-  as the :meth:`Table.itersorted` now behave like functions in the standard
-  :mod:`itertools` module.
-  If the *start* parameter is provided and *stop* is None then the
-  array/table is iterated from *start* to the last line.
-  In PyTables < 3.0 only one element was returned.
+Two subsequent attempts to open the same file (with compatible open mode)
+returned the same file handle in PyTables <= 3.0::
 
+    In [1]: import tables
+    In [2]: print(tables.__version__)
+    3.0.0
+    In [3]: a = tables.open_file('test.h5', 'a')
+    In [4]: b = tables.open_file('test.h5', 'a')
+    In [5]: a is b
+    Out[5]: True
 
-Deprecations
-------------
+All of this was an implementation detail: it happened under the hood, and
+the user had no control over the process.
 
-- As described in `API changes`_, all functions, methods and attribute names
-  that was not compliant with the PEP8_ guidelines have been changed.
-  Old names are still available but they are deprecated.
+This behaviour was considered a feature, since it can speed up the opening
+of files in case of repeated opens, and it also avoids potential problems
+related to multiple opens, a practice that the HDF5 developers recommend
+avoiding (see also the H5Fopen_ reference page).
 
-- The use of upper-case keyword arguments in the :func:`open_file` function
-  and the :class:`File` class initializer is now deprecated.  All parameters
-  defined in the :file:`tables/parameters.py` module can still be passed as
-  keyword argument to the :func:`open_file` function just using a lower-case
-  version of the parameter name.
+The trick, of course, is that files are not opened multiple times at the
+HDF5 level; rather, an open file is referenced several times.
 
+The big drawback of this approach is that it leaves very few chances to use
+PyTables safely in a multi-threaded program.  Several bug reports have been
+filed on this topic.
 
-Bugs fixed
-----------
+After long discussions about the possibility of actually achieving concurrent
+I/O, and about the patterns that should be used for I/O in concurrent
+programs, the PyTables developers decided to remove the *black magic under
+the hood* and let users implement whatever patterns they want.
 
-- Better check access on closed files (closes :issue:`62`)
+Starting from PyTables 3.1, file handles are no longer re-used (*shared*),
+and each call to the :func:`open_file` function returns a new file handle::
 
-- Fix for :meth:`File.renameNode` where in certain cases
-  :meth:`File._g_updateLocation` was wrongly called (closes :issue:`208`).
-  Thanks to Michka Popoff.
+    In [1]: import tables
+    In [2]: print(tables.__version__)
+    3.1.0
+    In [3]: a = tables.open_file('test.h5', 'a')
+    In [4]: b = tables.open_file('test.h5', 'a')
+    In [5]: a is b
+    Out[5]: False
 
-- Fixed ptdump failure on data with nested columns (closes :issue:`213`).
-  Thanks to Alexander Ford.
+It is important to stress that the new implementation still has an internal
+registry (an implementation detail) and is still **not thread safe**.
+However, a sufficiently careful developer should now be able to use PyTables
+in a multi-threaded program without too many headaches.
 
-- Fixed an error in :func:`open_file` when *filename* is a :class:`numpy.str_`
-  (closes :issue:`204`)
+The new implementation behaves differently from the previous one, although
+the API has not been changed.  Users should now pay more attention when they
+open a file multiple times (as recommended in the `HDF5 reference`__) and
+should take care to use the resulting handles appropriately, as sketched
+below.
 
-- Fixed :issue:`119`, :issue:`230` and :issue:`232`, where an index on
-  :class:`Time64Col` (only, :class:`Time32Col` was ok) hides the data on
-  selection from a Tables. Thanks to Jeff Reback.
+__ H5Fopen_
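A minimal sketch of the handle management implied above, assuming
PyTables >= 3.1 and HDF5 >= 1.8.7 ('test.h5' is a placeholder)::

    import tables

    a = tables.open_file('test.h5', mode='a')
    b = tables.open_file('test.h5', mode='a')  # an independent second handle
    assert a is not b
    b.close()        # every handle must be closed explicitly ...
    assert a.isopen  # ... and closing one does not affect the others
    a.close()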
 
-- Fixed ``tables.tests.test_nestedtypes.ColsTestCase.test_00a_repr`` test
-  method.  Now the ``repr`` of of cols on big-endian platforms is correctly
-  handled  (closes :issue:`237`).
+Please note that the :attr:`File.open_count` property was originally intended
+to keep track of the number of references to the same file handle.
+In PyTables >= 3.1, despite the name, it maintains the same semantics; it is
+just that now its value should never be higher than 1.
 
-- Fixes bug with completely sorted indexes where *nrowsinbuf* must be equal
-  to or greater than the *chunksize* (thanks to Thadeus Burgess).
-  Closes :issue:`206` and :issue:`238`.
+.. note::
 
-- Fixed an issue of the :meth:`Table.itersorted` with reverse iteration
-  (closes :issue:`252` and :issue:`253`).
+    HDF5 versions lower than 1.8.7 are not fully compatible with PyTables 3.1.
+    Partial support for HDF5 < 1.8.7 is still provided, but in that case
+    multiple opens of the same file are not allowed at all (even in read-only
+    mode).
 
 
+.. _pip: http://www.pip-installer.org
+.. _Anaconda: https://store.continuum.io/cshop/anaconda
+.. _Canopy: https://www.enthought.com/products/canopy
+.. _`Christoph Gohlke suites`: http://www.lfd.uci.edu/~gohlke/pythonlibs
+.. _`Issues with H5T_NATIVE_LDOUBLE`: http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
 .. _Python: http://www.python.org
-.. _PEP8: http://www.python.org/dev/peps/pep-0008
-.. _PyInstaller: http://www.pyinstaller.org
-.. _Blosc: https://github.com/FrancescAlted/blosc
-.. _bzip2: http://www.bzip.org
-.. _Cython: http://www.cython.org
-.. _Numexpr: http://code.google.com/p/numexpr
+.. _Blosc: http://www.blosc.org
 .. _numpy: http://www.numpy.org
-.. _`download area`: http://sourceforge.net/projects/pytables/files/pytables
 .. _`Travis-CI`: https://travis-ci.org
+.. _PEP8: http://www.python.org/dev/peps/pep-0008
+.. _`Blosc Release Notes`: https://github.com/FrancescAlted/blosc/wiki/Release-notes
+.. _H5Fopen: http://www.hdfgroup.org/HDF5/doc/RM/RM_H5F.html#File-Open
 
 
   **Enjoy data!**
diff --git a/VERSION b/VERSION
index 4a36342..fd2a018 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-3.0.0
+3.1.0
diff --git a/bench/LRU-experiments.py b/bench/LRU-experiments.py
index 1825049..f15e9de 100644
--- a/bench/LRU-experiments.py
+++ b/bench/LRU-experiments.py
@@ -1,38 +1,41 @@
 # Testbed to perform experiments in order to determine best values for
 # the node numbers in LRU cache. Tables version.
 
+from __future__ import print_function
 from time import time
 from tables import *
 import tables
 
-print "PyTables version-->", tables.__version__
+print("PyTables version-->", tables.__version__)
 
 filename = "/tmp/junk-tables-100.h5"
 NLEAVES = 2000
 NROWS = 1000
 
+
 class Particle(IsDescription):
-    name        = StringCol(16, pos=1)   # 16-character String
-    lati        = Int32Col(pos=2)        # integer
-    longi       = Int32Col(pos=3)        # integer
-    pressure    = Float32Col(pos=4)      # float  (single-precision)
-    temperature = Float64Col(pos=5)      # double (double-precision)
+    name = StringCol(16, pos=1)         # 16-character String
+    lati = Int32Col(pos=2)              # integer
+    longi = Int32Col(pos=3)             # integer
+    pressure = Float32Col(pos=4)        # float  (single-precision)
+    temperature = Float64Col(pos=5)     # double (double-precision)
+
 
 def create_junk():
     # Open a file in "w"rite mode
-    fileh = open_file(filename, mode = "w")
+    fileh = open_file(filename, mode="w")
     # Create a new group
     group = fileh.create_group(fileh.root, "newgroup")
 
-    for i in xrange(NLEAVES):
+    for i in range(NLEAVES):
         # Create a new table in newgroup group
-        table = fileh.create_table(group, 'table'+str(i), Particle,
-                                  "A table", Filters(1))
+        table = fileh.create_table(group, 'table' + str(i), Particle,
+                                   "A table", Filters(1))
         particle = table.row
-        print "Creating table-->", table._v_name
+        print("Creating table-->", table._v_name)
 
         # Fill the table with particles
-        for i in xrange(NROWS):
+        for i in range(NROWS):
             # This injects the row values.
             particle.append()
         table.flush()
@@ -40,11 +43,12 @@ def create_junk():
     # Finally, close the file
     fileh.close()
 
+
 def modify_junk_LRU():
     fileh = open_file(filename, 'a')
     group = fileh.root.newgroup
     for j in range(5):
-        print "iter -->", j
+        print("iter -->", j)
         for tt in fileh.walk_nodes(group):
             if isinstance(tt, Table):
                 pass
@@ -52,38 +56,41 @@ def modify_junk_LRU():
 #                     pass
     fileh.close()
 
+
 def modify_junk_LRU2():
     fileh = open_file(filename, 'a')
     group = fileh.root.newgroup
     for j in range(20):
         t1 = time()
         for i in range(100):
-#              print "table-->", tt._v_name
-            tt = getattr(group, "table"+str(i))
-#             for row in tt:
-#                 pass
-        print "iter and time -->", j+1, round(time()-t1, 3)
+            #print("table-->", tt._v_name)
+            tt = getattr(group, "table" + str(i))
+            #for row in tt:
+            #    pass
+        print("iter and time -->", j + 1, round(time() - t1, 3))
     fileh.close()
 
+
 def modify_junk_LRU3():
     fileh = open_file(filename, 'a')
     group = fileh.root.newgroup
     for j in range(3):
         t1 = time()
         for tt in fileh.walk_nodes(group, "Table"):
-            title = tt.attrs.TITLE
+            tt.attrs.TITLE
             for row in tt:
                 pass
-        print "iter and time -->", j+1, round(time()-t1, 3)
+        print("iter and time -->", j + 1, round(time() - t1, 3))
     fileh.close()
 
 if 1:
-    #create_junk()
-    #modify_junk_LRU()    # uses the iterator version (walk_nodes)
-    #modify_junk_LRU2()   # uses a regular loop (getattr)
+    # create_junk()
+    # modify_junk_LRU()    # uses the iterator version (walk_nodes)
+    # modify_junk_LRU2()   # uses a regular loop (getattr)
     modify_junk_LRU3()   # uses the iterator version (walk_nodes)
 else:
-    import profile, pstats
+    import profile
+    import pstats
     profile.run('modify_junk_LRU2()', 'modify.prof')
     stats = pstats.Stats('modify.prof')
     stats.strip_dirs()
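
The modify_junk_LRU* variants above differ only in how nodes are visited:
walk_nodes() streams nodes through the LRU cache, while getattr() fetches
them by name.  A minimal standalone sketch of the iterator form, assuming
the file created by create_junk() exists::

    import tables

    f = tables.open_file("/tmp/junk-tables-100.h5", "r")
    for t in f.walk_nodes("/newgroup", "Table"):  # visit only Table nodes
        print(t._v_pathname)
    f.close()
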
diff --git a/bench/LRU-experiments2.py b/bench/LRU-experiments2.py
index f3be75f..564352b 100644
--- a/bench/LRU-experiments2.py
+++ b/bench/LRU-experiments2.py
@@ -1,24 +1,28 @@
 # Testbed to perform experiments in order to determine best values for
 # the node numbers in LRU cache. Arrays version.
 
+from __future__ import print_function
 from time import time
 import tables
-print "PyTables version-->", tables.__version__
+
+print("PyTables version-->", tables.__version__)
 
 filename = "/tmp/junk-array.h5"
 NOBJS = 1000
 
+
 def create_junk():
-    fileh = tables.open_file(filename, mode = "w")
-    for i in xrange(NOBJS):
-        table = fileh.create_array(fileh.root, 'array'+str(i), [1])
+    fileh = tables.open_file(filename, mode="w")
+    for i in range(NOBJS):
+        fileh.create_array(fileh.root, 'array' + str(i), [1])
     fileh.close()
 
+
 def modify_junk_LRU():
     fileh = tables.open_file(filename, 'a')
     group = fileh.root
     for j in range(5):
-        print "iter -->", j
+        print("iter -->", j)
         for tt in fileh.walk_nodes(group):
             if isinstance(tt, tables.Array):
 #                 d = tt.read()
@@ -26,24 +30,26 @@ def modify_junk_LRU():
 
     fileh.close()
 
+
 def modify_junk_LRU2():
     fileh = tables.open_file(filename, 'a')
     group = fileh.root
     for j in range(5):
         t1 = time()
         for i in range(100):  # The number
-#              print "table-->", tt._v_name
-            tt = getattr(group, "array"+str(i))
-#                 d = tt.read()
-        print "iter and time -->", j+1, round(time()-t1, 3)
+            #print("table-->", tt._v_name)
+            tt = getattr(group, "array" + str(i))
+            #d = tt.read()
+        print("iter and time -->", j + 1, round(time() - t1, 3))
     fileh.close()
 
 if 1:
-    #create_junk()
-    #modify_junk_LRU()    # uses the iterator version (walk_nodes)
+    # create_junk()
+    # modify_junk_LRU()    # uses the iterator version (walk_nodes)
     modify_junk_LRU2()   # uses a regular loop (getattr)
 else:
-    import profile, pstats
+    import profile
+    import pstats
     profile.run('modify_junk_LRU2()', 'modify.prof')
     stats = pstats.Stats('modify.prof')
     stats.strip_dirs()
diff --git a/bench/LRUcache-node-bench.py b/bench/LRUcache-node-bench.py
index 04807b0..eb5e51f 100644
--- a/bench/LRUcache-node-bench.py
+++ b/bench/LRUcache-node-bench.py
@@ -1,37 +1,51 @@
+from __future__ import print_function
+
+import sys
 import numpy
 import tables
 from time import time
-import psyco
+#import psyco
 
 filename = "/tmp/LRU-bench.h5"
 nodespergroup = 250
 niter = 100
 
-f = tables.open_file(filename, "w")
+print('nodespergroup:', nodespergroup)
+print('niter:', niter)
+
+if len(sys.argv) > 1:
+    NODE_CACHE_SLOTS = int(sys.argv[1])
+    print('NODE_CACHE_SLOTS:', NODE_CACHE_SLOTS)
+else:
+    NODE_CACHE_SLOTS = tables.parameters.NODE_CACHE_SLOTS
+f = tables.open_file(filename, "w", node_cache_slots=NODE_CACHE_SLOTS)
 g = f.create_group("/", "NodeContainer")
-print "Creating nodes"
+print("Creating nodes")
 for i in range(nodespergroup):
-    f.create_array(g, "arr%d"%i, [i])
+    f.create_array(g, "arr%d" % i, [i])
 f.close()
 
 f = tables.open_file(filename)
 
+
 def iternodes():
 #     for a in f.root.NodeContainer:
 #         pass
-    indices = numpy.random.randn(nodespergroup*niter)*30+nodespergroup/2.
-    indices = indices.astype('i4').clip(0, nodespergroup-1)
+    indices = numpy.random.randn(nodespergroup * niter) * \
+        30 + nodespergroup / 2.
+    indices = indices.astype('i4').clip(0, nodespergroup - 1)
     g = f.get_node("/", "NodeContainer")
     for i in indices:
-        a = f.get_node(g, "arr%d"%i)
-        #print "a-->", a
+        a = f.get_node(g, "arr%d" % i)
+        # print("a-->", a)
 
-print "reading nodes..."
+print("reading nodes...")
 # First iteration (put in LRU cache)
 t1 = time()
 for a in f.root.NodeContainer:
     pass
-print "time (init cache)-->", round(time()-t1, 3)
+print("time (init cache)-->", round(time() - t1, 3))
+
 
 def timeLRU():
     # Next iterations
@@ -39,7 +53,8 @@ def timeLRU():
 #     for i in range(niter):
 #         iternodes()
     iternodes()
-    print "time (from cache)-->", round((time()-t1)/niter, 3)
+    print("time (from cache)-->", round((time() - t1) / niter, 3))
+
 
 def profile(verbose=False):
     import pstats
@@ -53,8 +68,13 @@ def profile(verbose=False):
     else:
         stats.print_stats(20)
 
-#profile()
-#psyco.bind(timeLRU)
+# profile()
+# psyco.bind(timeLRU)
 timeLRU()
 
 f.close()
+
+# for N in 0 4 8 16 32 64 128 256 512 1024 2048 4096; do
+#     env PYTHONPATH=../build/lib.linux-x86_64-2.7 \
+#     python LRUcache-node-bench.py $N;
+# done
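
The benchmark sizes PyTables' LRU node cache through the node_cache_slots
argument, falling back to the library-wide default in tables.parameters.
A minimal sketch of the same knob (the file name is hypothetical)::

    import tables

    # per-file override of tables.parameters.NODE_CACHE_SLOTS
    f = tables.open_file("cache-demo.h5", "w", node_cache_slots=1024)
    f.close()
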
diff --git a/bench/blosc.py b/bench/blosc.py
index 9508a98..54b2c9b 100644
--- a/bench/blosc.py
+++ b/bench/blosc.py
@@ -1,4 +1,6 @@
-import sys, os
+from __future__ import print_function
+import os
+import sys
 from time import time
 import numpy as np
 import tables as tb
@@ -16,7 +18,7 @@ shuffle = True
 
 def create_file(kind, prec, synth):
     prefix_orig = 'cellzome/cellzome-'
-    iname = dirname+prefix_orig+'none-'+prec+'.h5'
+    iname = dirname + prefix_orig + 'none-' + prec + '.h5'
     f = tb.open_file(iname, "r")
 
     if prec == "single":
@@ -31,13 +33,14 @@ def create_file(kind, prec, synth):
 
     for clevel in range(10):
         oname = '%s/%s-%s%d-%s.h5' % (dirname, prefix, kind, clevel, prec)
-        #print "creating...", iname
+        # print "creating...", iname
         f2 = tb.open_file(oname, "w")
 
         if kind in ["none", "numpy"]:
             filters = None
         else:
-            filters = tb.Filters(complib=kind, complevel=clevel, shuffle=shuffle)
+            filters = tb.Filters(
+                complib=kind, complevel=clevel, shuffle=shuffle)
 
         for name in ['maxarea', 'mascotscore']:
             col = f.get_node('/', name)
@@ -48,7 +51,7 @@ def create_file(kind, prec, synth):
                 r[:] = col[:]
         f2.close()
         if clevel == 0:
-            size = 1.5*float(os.stat(oname)[6])
+            size = 1.5 * float(os.stat(oname)[6])
     f.close()
     return size
 
@@ -56,7 +59,7 @@ def create_file(kind, prec, synth):
 def create_synth(kind, prec):
 
     prefix_orig = 'cellzome/cellzome-'
-    iname = dirname+prefix_orig+'none-'+prec+'.h5'
+    iname = dirname + prefix_orig + 'none-' + prec + '.h5'
     f = tb.open_file(iname, "r")
 
     if prec == "single":
@@ -67,13 +70,14 @@ def create_synth(kind, prec):
     prefix = 'synth/synth-'
     for clevel in range(10):
         oname = '%s/%s-%s%d-%s.h5' % (dirname, prefix, kind, clevel, prec)
-        #print "creating...", iname
+        # print "creating...", iname
         f2 = tb.open_file(oname, "w")
 
         if kind in ["none", "numpy"]:
             filters = None
         else:
-            filters = tb.Filters(complib=kind, complevel=clevel, shuffle=shuffle)
+            filters = tb.Filters(
+                complib=kind, complevel=clevel, shuffle=shuffle)
 
         for name in ['maxarea', 'mascotscore']:
             col = f.get_node('/', name)
@@ -85,7 +89,7 @@ def create_synth(kind, prec):
 
         f2.close()
         if clevel == 0:
-            size = 1.5*float(os.stat(oname)[6])
+            size = 1.5 * float(os.stat(oname)[6])
     f.close()
     return size
 
@@ -120,10 +124,10 @@ def process_file(kind, prec, clevel, synth):
     if kind == "numpy":
         a2, b2 = a_[:], b_[:]
         t0 = time()
-        r = eval(expression, {'a':a2, 'b':b2})
-        print "%5.2f" % round(time()-t0, 3)
+        r = eval(expression, {'a': a2, 'b': b2})
+        print("%5.2f" % round(time() - t0, 3))
     else:
-        expr = tb.Expr(expression, {'a':a_, 'b':b_})
+        expr = tb.Expr(expression, {'a': a_, 'b': b_})
         expr.set_output(r)
         expr.eval()
     f.close()
@@ -141,21 +145,21 @@ if __name__ == '__main__':
         else:
             synth = False
     else:
-        print "3 parameters required"
+        print("3 parameters required")
         sys.exit(1)
 
-    #print "kind, precision, synth:", kind, prec, synth
+    # print "kind, precision, synth:", kind, prec, synth
 
-    #print "Creating input files..."
+    # print "Creating input files..."
     size_orig = create_file(kind, prec, synth)
 
-    #print "Processing files for compression levels in range(10)..."
+    # print "Processing files for compression levels in range(10)..."
     for clevel in range(10):
         t0 = time()
         ts = []
         for i in range(niter):
             size = process_file(kind, prec, clevel, synth)
-            ts.append(time()-t0)
+            ts.append(time() - t0)
             t0 = time()
         ratio = size_orig / size
-        print "%5.2f, %5.2f" % (round(min(ts), 3), ratio)
+        print("%5.2f, %5.2f" % (round(min(ts), 3), ratio))
diff --git a/bench/bsddb-table-bench.py b/bench/bsddb-table-bench.py
index 053b8e2..49c4f80 100644
--- a/bench/bsddb-table-bench.py
+++ b/bench/bsddb-table-bench.py
@@ -1,12 +1,17 @@
 #!/usr/bin/env python
 ###### WARNING #######
 ### This script is obsoleted ###
-### If you get it working again, please drop me a line
-### F. Alted 2004-01-27
-from tables import *
-import numarray as NA
-import struct, sys
+# If you get it working again, please drop me a line
+# F. Alted 2004-01-27
+
+from __future__ import print_function
+import sys
+import struct
 import cPickle
+
+from tables import *
+import numpy as np
+
 try:
     # For Python 2.3
     from bsddb import db
@@ -18,40 +23,48 @@ import psyco
 
 # This class is accessible only for the examples
 class Small(IsDescription):
-    """ A record has several columns. They are represented here as
-    class attributes, whose names are the column names and their
-    values will become their types. The IsColDescr class will take care
-    the user will not add any new variables and that its type is
-    correct."""
+    """Record descriptor.
+
+    A record has several columns. They are represented here as class
+    attributes, whose names are the column names and whose values become
+    the column types. The IsColDescr class ensures that the user does not
+    add any new variables and that their types are correct.
+
+    """
 
     var1 = StringCol(itemsize=16)
     var2 = Int32Col()
     var3 = Float64Col()
 
 # Define a user record to characterize some kind of particles
+
+
 class Medium(IsDescription):
-    name        = StringCol(itemsize=16, pos=0)  # 16-character String
+    name = StringCol(itemsize=16, pos=0)  # 16-character String
     #float1      = Float64Col(shape=2, dflt=2.3)
-    float1      = Float64Col(dflt=1.3, pos=1)
-    float2      = Float64Col(dflt=2.3, pos=2)
-    ADCcount    = Int16Col(pos=3)    # signed short integer
-    grid_i      = Int32Col(pos=4)    # integer
-    grid_j      = Int32Col(pos=5)    # integer
-    pressure    = Float32Col(pos=6)    # float  (single-precision)
-    energy      = Float64Col(pos=7)    # double (double-precision)
+    float1 = Float64Col(dflt=1.3, pos=1)
+    float2 = Float64Col(dflt=2.3, pos=2)
+    ADCcount = Int16Col(pos=3)     # signed short integer
+    grid_i = Int32Col(pos=4)        # integer
+    grid_j = Int32Col(pos=5)        # integer
+    pressure = Float32Col(pos=6)    # float  (single-precision)
+    energy = Float64Col(pos=7)      # double (double-precision)
 
 # Define a user record to characterize some kind of particles
+
+
 class Big(IsDescription):
-    name        = StringCol(itemsize=16)  # 16-character String
-    #float1      = Float64Col(shape=32, dflt=NA.arange(32))
-    #float2      = Float64Col(shape=32, dflt=NA.arange(32))
-    float1      = Float64Col(shape=32, dflt=range(32))
-    float2      = Float64Col(shape=32, dflt=[2.2]*32)
-    ADCcount    = Int16Col()    # signed short integer
-    grid_i      = Int32Col()    # integer
-    grid_j      = Int32Col()    # integer
-    pressure    = Float32Col()    # float  (single-precision)
-    energy      = Float64Col()    # double (double-precision)
+    name = StringCol(itemsize=16)   # 16-character String
+    #float1 = Float64Col(shape=32, dflt=np.arange(32))
+    #float2 = Float64Col(shape=32, dflt=np.arange(32))
+    float1 = Float64Col(shape=32, dflt=range(32))
+    float2 = Float64Col(shape=32, dflt=[2.2] * 32)
+    ADCcount = Int16Col()           # signed short integer
+    grid_i = Int32Col()             # integer
+    grid_j = Int32Col()             # integer
+    pressure = Float32Col()         # float  (single-precision)
+    energy = Float64Col()           # double (double-precision)
+
 
 def createFile(filename, totalrows, recsize, verbose):
 
@@ -63,21 +76,21 @@ def createFile(filename, totalrows, recsize, verbose):
         isrec = Medium()
     else:
         isrec = Description(Small)
-    #dd.set_re_len(struct.calcsize(isrec._v_fmt))  # fixed length records
+    # dd.set_re_len(struct.calcsize(isrec._v_fmt))  # fixed length records
     dd.open(filename, db.DB_RECNO, db.DB_CREATE | db.DB_TRUNCATE)
 
     rowswritten = 0
     # Get the record object associated with the new table
     if recsize == "big":
         isrec = Big()
-        arr = NA.array(NA.arange(32), type=NA.Float64)
-        arr2 = NA.array(NA.arange(32), type=NA.Float64)
+        arr = np.array(np.arange(32), dtype=np.float64)
+        arr2 = np.array(np.arange(32), dtype=np.float64)
     elif recsize == "medium":
         isrec = Medium()
-        arr = NA.array(NA.arange(2), type=NA.Float64)
+        arr = np.array(np.arange(2), dtype=np.float64)
     else:
         isrec = Small()
-    #print d
+    # print d
     # Fill the table
     if recsize == "big" or recsize == "medium":
         d = {"name": " ",
@@ -89,13 +102,13 @@ def createFile(filename, totalrows, recsize, verbose):
              "pressure": 1.9,
              "energy": 1.8,
              }
-        for i in xrange(totalrows):
+        for i in range(totalrows):
             #d['name']  = 'Particle: %6d' % (i)
             #d['TDCcount'] = i % 256
             d['ADCcount'] = (i * 256) % (1 << 16)
             if recsize == "big":
-                #d.float1 = NA.array([i]*32, NA.Float64)
-                #d.float2 = NA.array([i**2]*32, NA.Float64)
+                #d.float1 = np.array([i]*32, np.float64)
+                #d.float2 = np.array([i**2]*32, np.float64)
                 arr[0] = 1.1
                 d['float1'] = arr
                 arr2[0] = 2.2
@@ -106,7 +119,7 @@ def createFile(filename, totalrows, recsize, verbose):
                 d['float2'] = float(i)
             d['grid_i'] = i
             d['grid_j'] = 10 - i
-            d['pressure'] = float(i*i)
+            d['pressure'] = float(i * i)
             d['energy'] = d['pressure']
             dd.append(cPickle.dumps(d))
 #             dd.append(struct.pack(isrec._v_fmt,
@@ -116,20 +129,21 @@ def createFile(filename, totalrows, recsize, verbose):
 #                                   d['pressure'],  d['energy']))
     else:
         d = {"var1": " ", "var2": 1, "var3": 12.1e10}
-        for i in xrange(totalrows):
+        for i in range(totalrows):
             d['var1'] = str(i)
             d['var2'] = i
             d['var3'] = 12.1e10
             dd.append(cPickle.dumps(d))
-            #dd.append(struct.pack(isrec._v_fmt, d['var1'], d['var2'], d['var3']))
+            #dd.append(
+            #    struct.pack(isrec._v_fmt, d['var1'], d['var2'], d['var3']))
 
     rowswritten += totalrows
 
-
     # Close the file
     dd.close()
     return (rowswritten, struct.calcsize(isrec._v_fmt))
 
+
 def readFile(filename, recsize, verbose):
     # Open the HDF5 file in read-only mode
     #fileh = shelve.open(filename, "r")
@@ -140,27 +154,27 @@ def readFile(filename, recsize, verbose):
         isrec = Medium()
     else:
         isrec = Small()
-    #dd.set_re_len(struct.calcsize(isrec._v_fmt))  # fixed length records
-    #dd.set_re_pad('-') # sets the pad character...
-    #dd.set_re_pad(45)  # ...test both int and char
+    # dd.set_re_len(struct.calcsize(isrec._v_fmt))  # fixed length records
+    # dd.set_re_pad('-') # sets the pad character...
+    # dd.set_re_pad(45)  # ...test both int and char
     dd.open(filename, db.DB_RECNO)
     if recsize == "big" or recsize == "medium":
-        print isrec._v_fmt
+        print(isrec._v_fmt)
         c = dd.cursor()
         rec = c.first()
         e = []
         while rec:
             record = cPickle.loads(rec[1])
             #record = struct.unpack(isrec._v_fmt, rec[1])
-            #if verbose:
+            # if verbose:
             #    print record
             if record['grid_i'] < 20:
                 e.append(record['grid_j'])
-            #if record[4] < 20:
+            # if record[4] < 20:
             #    e.append(record[5])
-            rec = c.next()
+            rec = next(c)
     else:
-        print isrec._v_fmt
+        print(isrec._v_fmt)
         #e = [ t[1] for t in fileh[table] if t[1] < 20 ]
         c = dd.cursor()
         rec = c.first()
@@ -168,25 +182,24 @@ def readFile(filename, recsize, verbose):
         while rec:
             record = cPickle.loads(rec[1])
             #record = struct.unpack(isrec._v_fmt, rec[1])
-            #if verbose:
+            # if verbose:
             #    print record
             if record['var2'] < 20:
                 e.append(record['var1'])
-            #if record[1] < 20:
+            # if record[1] < 20:
             #    e.append(record[2])
-            rec = c.next()
+            rec = next(c)
 
-    print "resulting selection list ==>", e
-    print "last record read ==>", record
-    print "Total selected records ==> ", len(e)
+    print("resulting selection list ==>", e)
+    print("last record read ==>", record)
+    print("Total selected records ==> ", len(e))
 
     # Close the file (eventually destroy the extended type)
     dd.close()
 
 
 # Add code to test here
-if __name__=="__main__":
-    import sys
+if __name__ == "__main__":
     import getopt
     import time
 
@@ -230,20 +243,20 @@ if __name__=="__main__":
     psyco.bind(createFile)
     (rowsw, rowsz) = createFile(file, iterations, recsize, verbose)
     t2 = time.clock()
-    tapprows = round(t2-t1, 3)
+    tapprows = round(t2 - t1, 3)
 
     t1 = time.clock()
     psyco.bind(readFile)
     readFile(file, recsize, verbose)
     t2 = time.clock()
-    treadrows = round(t2-t1, 3)
+    treadrows = round(t2 - t1, 3)
 
-    print "Rows written:", rowsw, " Row size:", rowsz
-    print "Time appending rows:", tapprows
+    print("Rows written:", rowsw, " Row size:", rowsz)
+    print("Time appending rows:", tapprows)
     if tapprows > 0.:
-        print "Write rows/sec: ", int(iterations / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
-    print "Time reading rows:", treadrows
+        print("Write rows/sec: ", int(iterations / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
+    print("Time reading rows:", treadrows)
     if treadrows > 0.:
-        print "Read rows/sec: ", int(iterations / float(treadrows))
-        print "Read KB/s :", int(rowsw * rowsz / (treadrows * 1024))
+        print("Read rows/sec: ", int(iterations / float(treadrows)))
+        print("Read KB/s :", int(rowsw * rowsz / (treadrows * 1024)))
diff --git a/bench/cacheout.py b/bench/cacheout.py
index b3510cb..e68cbe5 100644
--- a/bench/cacheout.py
+++ b/bench/cacheout.py
@@ -1,13 +1,13 @@
 # Program to clean out the filesystem cache
 import numpy
 
-a=numpy.arange(1000*100*125, dtype='f8')  # 100 MB of RAM
-b=a*3  # Another 100 MB
+a = numpy.arange(1000 * 100 * 125, dtype='f8')  # 100 MB of RAM
+b = a * 3  # Another 100 MB
 # delete the reference to the booked memory
 del a
 del b
 
 # Do a loop to fully recharge the python interpreter
 j = 2
-for i in range(1000*1000):
-    j+=i*2
+for i in range(1000 * 1000):
+    j += i * 2
diff --git a/bench/chunkshape-bench.py b/bench/chunkshape-bench.py
index ead4b2b..bc4df36 100644
--- a/bench/chunkshape-bench.py
+++ b/bench/chunkshape-bench.py
@@ -3,53 +3,59 @@
 # You need at least PyTables 2.1 to run this!
 # F. Alted
 
-import numpy, tables
+from __future__ import print_function
+import numpy
+import tables
 from time import time
 
 dim1, dim2 = 360, 6109666
 rows_to_read = range(0, 360, 36)
 
-print "="*32
+print("=" * 32)
 # Create the EArray
 f = tables.open_file("/tmp/test.h5", "w")
-a = f.create_earray(f.root, "a", tables.Float64Atom(), shape = (dim1, 0),
-                   expectedrows=dim2)
-print "Chunkshape for original array:", a.chunkshape
+a = f.create_earray(f.root, "a", tables.Float64Atom(), shape=(dim1, 0),
+                    expectedrows=dim2)
+print("Chunkshape for original array:", a.chunkshape)
 
 # Fill the EArray
 t1 = time()
 zeros = numpy.zeros((dim1, 1), dtype="float64")
-for i in xrange(dim2):
+for i in range(dim2):
     a.append(zeros)
-tcre = round(time()-t1, 3)
-thcre = round(dim1*dim2*8 / (tcre * 1024 * 1024), 1)
-print "Time to append %d rows: %s sec (%s MB/s)" % (a.nrows, tcre, thcre)
+tcre = round(time() - t1, 3)
+thcre = round(dim1 * dim2 * 8 / (tcre * 1024 * 1024), 1)
+print("Time to append %d rows: %s sec (%s MB/s)" % (a.nrows, tcre, thcre))
 
 # Read some row vectors from the original array
 t1 = time()
-for i in rows_to_read: r1 = a[i,:]
-tr1 = round(time()-t1, 3)
-thr1 = round(dim2*len(rows_to_read)*8 / (tr1 * 1024 * 1024), 1)
-print "Time to read ten rows in original array: %s sec (%s MB/s)" % (tr1, thr1)
-
-print "="*32
+for i in rows_to_read:
+    r1 = a[i, :]
+tr1 = round(time() - t1, 3)
+thr1 = round(dim2 * len(rows_to_read) * 8 / (tr1 * 1024 * 1024), 1)
+print("Time to read ten rows in original array: %s sec (%s MB/s)" % (tr1,
+                                                                     thr1))
+
+print("=" * 32)
 # Copy the array to another with a row-wise chunkshape
 t1 = time()
 #newchunkshape = (1, a.chunkshape[0]*a.chunkshape[1])
-newchunkshape = (1, a.chunkshape[0]*a.chunkshape[1]*10)  # ten times larger
+newchunkshape = (1, a.chunkshape[0] * a.chunkshape[1] * 10)  # ten times larger
 b = a.copy(f.root, "b", chunkshape=newchunkshape)
-tcpy = round(time()-t1, 3)
-thcpy = round(dim1*dim2*8 / (tcpy * 1024 * 1024), 1)
-print "Chunkshape for row-wise chunkshape array:", b.chunkshape
-print "Time to copy the original array: %s sec (%s MB/s)" % (tcpy, thcpy)
+tcpy = round(time() - t1, 3)
+thcpy = round(dim1 * dim2 * 8 / (tcpy * 1024 * 1024), 1)
+print("Chunkshape for row-wise chunkshape array:", b.chunkshape)
+print("Time to copy the original array: %s sec (%s MB/s)" % (tcpy, thcpy))
 
 # Read the same ten rows from the new copied array
 t1 = time()
-for i in rows_to_read: r2 = b[i,:]
-tr2 = round(time()-t1, 3)
-thr2 = round(dim2*len(rows_to_read)*8 / (tr2 * 1024 * 1024), 1)
-print "Time to read with a row-wise chunkshape: %s sec (%s MB/s)" % (tr2, thr2)
-print "="*32
-print "Speed-up with a row-wise chunkshape:", round(tr1/tr2, 1)
+for i in rows_to_read:
+    r2 = b[i, :]
+tr2 = round(time() - t1, 3)
+thr2 = round(dim2 * len(rows_to_read) * 8 / (tr2 * 1024 * 1024), 1)
+print("Time to read with a row-wise chunkshape: %s sec (%s MB/s)" % (tr2,
+                                                                     thr2))
+print("=" * 32)
+print("Speed-up with a row-wise chunkshape:", round(tr1 / tr2, 1))
 
 f.close()
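
The speed-up measured above comes purely from re-chunking: Leaf.copy()
accepts an explicit chunkshape, so an array can be rewritten with chunks
that match the intended read pattern.  A minimal sketch against the file
built by this benchmark (the node name "b2" is hypothetical)::

    import tables

    f = tables.open_file("/tmp/test.h5", "a")
    a = f.root.a
    # one chunk per row, sized from the original chunkshape
    newchunkshape = (1, a.chunkshape[0] * a.chunkshape[1])
    b2 = a.copy(f.root, "b2", chunkshape=newchunkshape)
    f.close()
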
diff --git a/bench/chunkshape-testing.py b/bench/chunkshape-testing.py
index 4c178f8..1b1082a 100644
--- a/bench/chunkshape-testing.py
+++ b/bench/chunkshape-testing.py
@@ -2,7 +2,9 @@
 
 """Simple benchmark for testing chunkshapes and nrowsinbuf."""
 
-import numpy, tables
+from __future__ import print_function
+import numpy
+import tables
 from time import time
 
 L = 20
@@ -12,7 +14,7 @@ complevel = 1
 
 recarray = numpy.empty(shape=2, dtype='(2,2,2)i4,(2,3,3)f8,i4,i8')
 
-f = tables.open_file("chunkshape.h5", mode = "w")
+f = tables.open_file("chunkshape.h5", mode="w")
 
 # t = f.create_table(f.root, 'table', recarray, "mdim recarray")
 
@@ -26,59 +28,59 @@ f = tables.open_file("chunkshape.h5", mode = "w")
 #                     tables.Float64Atom(), (2,3,3),
 #                     "mdim float64 carray")
 
-f1 = tables.open_file("chunkshape1.h5", mode = "w")
+f1 = tables.open_file("chunkshape1.h5", mode="w")
 c1 = f.create_carray(f1.root, 'cfield1',
-                    tables.Int32Atom(), (L, N, M),
-                    "scalar int32 carray", tables.Filters(complevel=0))
+                     tables.Int32Atom(), (L, N, M),
+                     "scalar int32 carray", tables.Filters(complevel=0))
 
-t1=time()
+t1 = time()
 c1[:] = numpy.empty(shape=(L, 1, 1), dtype="int32")
-print "carray1 populate time:", time()-t1
+print("carray1 populate time:", time() - t1)
 f1.close()
 
 
-f2 = tables.open_file("chunkshape2.h5", mode = "w")
+f2 = tables.open_file("chunkshape2.h5", mode="w")
 c2 = f.create_carray(f2.root, 'cfield2',
-                    tables.Int32Atom(), (L, M, N),
-                    "scalar int32 carray", tables.Filters(complevel))
+                     tables.Int32Atom(), (L, M, N),
+                     "scalar int32 carray", tables.Filters(complevel))
 
-t1=time()
+t1 = time()
 c2[:] = numpy.empty(shape=(L, 1, 1), dtype="int32")
-print "carray2 populate time:", time()-t1
+print("carray2 populate time:", time() - t1)
 f2.close()
 
-f0 = tables.open_file("chunkshape0.h5", mode = "w")
+f0 = tables.open_file("chunkshape0.h5", mode="w")
 e0 = f.create_earray(f0.root, 'efield0',
-                    tables.Int32Atom(), (0, L, M),
-                    "scalar int32 carray", tables.Filters(complevel),
-                    expectedrows=N)
+                     tables.Int32Atom(), (0, L, M),
+                     "scalar int32 carray", tables.Filters(complevel),
+                     expectedrows=N)
 
-t1=time()
+t1 = time()
 e0.append(numpy.empty(shape=(N, L, M), dtype="int32"))
-print "earray0 populate time:", time()-t1
+print("earray0 populate time:", time() - t1)
 f0.close()
 
-f1 = tables.open_file("chunkshape1.h5", mode = "w")
+f1 = tables.open_file("chunkshape1.h5", mode="w")
 e1 = f.create_earray(f1.root, 'efield1',
-                    tables.Int32Atom(), (L, 0, M),
-                    "scalar int32 carray", tables.Filters(complevel),
-                    expectedrows=N)
+                     tables.Int32Atom(), (L, 0, M),
+                     "scalar int32 carray", tables.Filters(complevel),
+                     expectedrows=N)
 
-t1=time()
+t1 = time()
 e1.append(numpy.empty(shape=(L, N, M), dtype="int32"))
-print "earray1 populate time:", time()-t1
+print("earray1 populate time:", time() - t1)
 f1.close()
 
 
-f2 = tables.open_file("chunkshape2.h5", mode = "w")
+f2 = tables.open_file("chunkshape2.h5", mode="w")
 e2 = f.create_earray(f2.root, 'efield2',
-                    tables.Int32Atom(), (L, M, 0),
-                    "scalar int32 carray", tables.Filters(complevel),
-                    expectedrows=N)
+                     tables.Int32Atom(), (L, M, 0),
+                     "scalar int32 carray", tables.Filters(complevel),
+                     expectedrows=N)
 
-t1=time()
+t1 = time()
 e2.append(numpy.empty(shape=(L, M, N), dtype="int32"))
-print "earray2 populate time:", time()-t1
+print("earray2 populate time:", time() - t1)
 f2.close()
 
 # t1=time()
diff --git a/bench/collations.py b/bench/collations.py
index fd6bdac..e9f69b7 100644
--- a/bench/collations.py
+++ b/bench/collations.py
@@ -1,20 +1,23 @@
+from __future__ import print_function
 import numpy as np
 import tables
 from time import time
 
-N = 1000*1000
+N = 1000 * 1000
 NCOLL = 200  # 200 collections maximum
 
 # In order to have reproducible results
 np.random.seed(19)
 
+
 class Energies(tables.IsDescription):
     collection = tables.UInt8Col()
     energy = tables.Float64Col()
 
+
 def fill_bucket(lbucket):
     #c = np.random.normal(NCOLL/2, NCOLL/10, lbucket)
-    c = np.random.normal(NCOLL/2, NCOLL/100, lbucket)
+    c = np.random.normal(NCOLL / 2, NCOLL / 100, lbucket)
     e = np.arange(lbucket, dtype='f8')
     return c, e
 
@@ -24,14 +27,14 @@ f = tables.open_file("data.nobackup/collations.h5", "w")
 table = f.create_table("/", "Energies", Energies, expectedrows=N)
 # Fill the table with values
 lbucket = 1000   # Fill in buckets of 1000 rows, for speed
-for i in xrange(0, N, lbucket):
+for i in range(0, N, lbucket):
     bucket = fill_bucket(lbucket)
     table.append(bucket)
 # Fill the remaining rows
-bucket = fill_bucket(N%lbucket)
+bucket = fill_bucket(N % lbucket)
 table.append(bucket)
 f.close()
-print "Time to create the table with %d entries: %.3f" % (N, time()-t1)
+print("Time to create the table with %d entries: %.3f" % (N, time() - t1))
 
 # Now, read the table and group it by collection
 f = tables.open_file("data.nobackup/collations.h5", "a")
@@ -41,7 +44,7 @@ table = f.root.Energies
 # First solution: load the table completely in memory
 #########################################################
 t1 = time()
-t = table[:] # convert to structured array
+t = table[:]  # convert to structured array
 coll1 = []
 collections = np.unique(t['collection'])
 for c in collections:
@@ -49,9 +52,9 @@ for c in collections:
     energy_this_collection = t['energy'][cond]
     sener = energy_this_collection.sum()
     coll1.append(sener)
-    print c, ' : ', sener
+    print(c, ' : ', sener)
 del collections, energy_this_collection
-print "Time for first solution: %.3f" % (time()-t1)
+print("Time for first solution: %.3f" % (time() - t1))
 
 #########################################################
 # Second solution: load all the collections in memory
@@ -71,47 +74,48 @@ for c in sorted(collections):
     energy_this_collection = np.array(collections[c])
     sener = energy_this_collection.sum()
     coll2.append(sener)
-    print c, ' : ', sener
+    print(c, ' : ', sener)
 del collections, energy_this_collection
-print "Time for second solution: %.3f" % (time()-t1)
+print("Time for second solution: %.3f" % (time() - t1))
 
 t1 = time()
 table.cols.collection.create_csindex()
-#table.cols.collection.reindex()
-print "Time for indexing: %.3f" % (time()-t1)
+# table.cols.collection.reindex()
+print("Time for indexing: %.3f" % (time() - t1))
 
 #########################################################
 # Third solution: load each collection separately
 #########################################################
 t1 = time()
 coll3 = []
-for c in np.unique(table.col('collection')) :
-    energy_this_collection = table.read_where('collection == c', field='energy')
+for c in np.unique(table.col('collection')):
+    energy_this_collection = table.read_where(
+        'collection == c', field='energy')
     sener = energy_this_collection.sum()
     coll3.append(sener)
-    print c, ' : ', sener
+    print(c, ' : ', sener)
 del energy_this_collection
-print "Time for third solution: %.3f" % (time()-t1)
+print("Time for third solution: %.3f" % (time() - t1))
 
 
 t1 = time()
 table2 = table.copy('/', 'EnergySortedByCollation', overwrite=True,
-            sortby="collection", propindexes=True)
-print "Time for sorting: %.3f" % (time()-t1)
+                    sortby="collection", propindexes=True)
+print("Time for sorting: %.3f" % (time() - t1))
 
 #####################################################################
 # Fourth solution: load each collection separately.  Sorted table.
 #####################################################################
 t1 = time()
 coll4 = []
-for c in np.unique(table2.col('collection')) :
+for c in np.unique(table2.col('collection')):
     energy_this_collection = table2.read_where(
         'collection == c', field='energy')
     sener = energy_this_collection.sum()
     coll4.append(sener)
-    print c, ' : ', sener
+    print(c, ' : ', sener)
     del energy_this_collection
-print "Time for fourth solution: %.3f" % (time()-t1)
+print("Time for fourth solution: %.3f" % (time() - t1))
 
 
 # Finally, check that all solutions do match
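
All four solutions group rows by the collection column; the in-kernel ones
rely on Table.read_where(), which compiles the condition and picks up its
variables from the caller's scope.  A minimal sketch against the benchmark
file (the collection id 42 is hypothetical)::

    import tables

    f = tables.open_file("data.nobackup/collations.h5", "r")
    table = f.root.Energies
    c = 42  # referenced by name inside the condition string
    energies = table.read_where('collection == c', field='energy')
    print(energies.sum())
    f.close()
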
diff --git a/bench/copy-bench.py b/bench/copy-bench.py
index b30ad39..6346a82 100644
--- a/bench/copy-bench.py
+++ b/bench/copy-bench.py
@@ -1,9 +1,10 @@
+from __future__ import print_function
 import tables
 import sys
 import time
 
 if len(sys.argv) != 3:
-    print "usage: %s source_file dest_file", sys.argv[0]
+    print("usage: %s source_file dest_file", sys.argv[0])
 filesrc = sys.argv[1]
 filedest = sys.argv[2]
 filehsrc = tables.open_file(filesrc)
@@ -17,16 +18,16 @@ for group in filehsrc.walk_groups():
     else:
         pathname = group._v_parent._v_pathname
         groupdest = filehdest.create_group(pathname, group._v_name,
-                                          title=group._v_title)
+                                           title=group._v_title)
     for table in filehsrc.list_nodes(group, classname='Table'):
-        print "copying table -->", table
+        print("copying table -->", table)
         table.copy(groupdest, table.name)
         ntables += 1
         tsize += table.nrows * table.rowsize
-tsizeMB = tsize / (1024*1024)
+tsizeMB = tsize / (1024 * 1024)
 ttime = round(time.time() - t1, 3)
-speed = round(tsizeMB/ttime, 2)
-print "Copied %s tables for a total of %s MB in %s seconds (%s MB/s)" % \
-      (ntables, tsizeMB, ttime, speed)
+speed = round(tsizeMB / ttime, 2)
+print("Copied %s tables for a total of %s MB in %s seconds (%s MB/s)" %
+      (ntables, tsizeMB, ttime, speed))
 filehsrc.close()
 filehdest.close()
diff --git a/bench/create-large-number-objects.py b/bench/create-large-number-objects.py
index a68fdb3..f70406b 100644
--- a/bench/create-large-number-objects.py
+++ b/bench/create-large-number-objects.py
@@ -7,7 +7,7 @@ import tables
 filename = sys.argv[1]
 
 # Open a new empty HDF5 file
-fileh = tables.open_file(filename, mode = "w")
+fileh = tables.open_file(filename, mode="w")
 
 # nlevels -- Number of levels in hierarchy
 # ngroups -- Number of groups on each level
@@ -29,11 +29,12 @@ for k in range(nlevels):
     for j in range(ngroups):
         for i in range(ndatasets):
             # Save the array on the HDF5 file
-            fileh.create_array(group2, 'array'+str(i), a, "Signed short array")
+            fileh.create_array(group2, 'array' + str(i),
+                               a, "Signed short array")
         # Create a new group
-        group2 = fileh.create_group(group, 'group'+str(j))
+        group2 = fileh.create_group(group, 'group' + str(j))
     # Create a new group
-    group3 = fileh.create_group(group, 'ngroup'+str(k))
+    group3 = fileh.create_group(group, 'ngroup' + str(k))
     # Iterate over this new group (group3)
     group = group3
     group2 = group3
diff --git a/bench/deep-tree-h5py.py b/bench/deep-tree-h5py.py
index 27f966c..2f356e8 100644
--- a/bench/deep-tree-h5py.py
+++ b/bench/deep-tree-h5py.py
@@ -1,4 +1,6 @@
-import os, subprocess, gc
+from __future__ import print_function
+import os
+import subprocess
 from time import time
 import random
 import numpy
@@ -6,6 +8,7 @@ import h5py
 
 random.seed(2)
 
+
 def show_stats(explain, tref):
     "Show the used memory (only works for Linux 2.6.x)."
     # Build the command to obtain memory info
@@ -25,12 +28,12 @@ def show_stats(explain, tref):
         elif line.startswith("VmLib:"):
             vmlib = int(line.split()[1])
     sout.close()
-    print "Memory usage: ******* %s *******" % explain
-    print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
-    print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
-    print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
+    print("Memory usage: ******* %s *******" % explain)
+    print("VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss))
+    print("VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk))
+    print("VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib))
     tnow = time()
-    print "WallClock time:", round(tnow - tref, 3)
+    print("WallClock time:", round(tnow - tref, 3))
     return tnow
 
 
@@ -40,21 +43,22 @@ def populate(f, nlevels):
     for i in range(nlevels):
         g["DS1"] = arr
         g["DS2"] = arr
-        group2 = g.create_group('group2_')
+        g.create_group('group2_')
         g = g.create_group('group')
 
 
 def getnode(f, nlevels, niter, range_):
     for i in range(niter):
-        nlevel = random.randrange((nlevels-range_)/2, (nlevels+range_)/2)
+        nlevel = random.randrange(
+            (nlevels - range_) / 2, (nlevels + range_) / 2)
         groupname = ""
         for i in range(nlevel):
             groupname += "/group"
         groupname += "/DS1"
-        n = f[groupname]
+        f[groupname]
 
 
-if __name__=='__main__':
+if __name__ == '__main__':
     nlevels = 1024
     niter = 1000
     range_ = 256
@@ -66,8 +70,10 @@ if __name__=='__main__':
         import pstats
         import cProfile as prof
 
-    if profile: tref = time()
-    if profile: show_stats("Abans de crear...", tref)
+    if profile:
+        tref = time()
+    if profile:
+        show_stats("Abans de crear...", tref)
     f = h5py.File("/tmp/deep-tree.h5", 'w')
     if doprofile:
         prof.run('populate(f, nlevels)', 'populate.prof')
@@ -81,7 +87,8 @@ if __name__=='__main__':
     else:
         populate(f, nlevels)
     f.close()
-    if profile: show_stats("Despres de crear", tref)
+    if profile:
+        show_stats("Despres de crear", tref)
 
 #     if profile: tref = time()
 #     if profile: show_stats("Abans d'obrir...", tref)
@@ -110,4 +117,3 @@ if __name__=='__main__':
 #         group2 = g['group2_']
 #         g = g['group']
 #     f.close()
-
diff --git a/bench/deep-tree.py b/bench/deep-tree.py
index e5efd9e..1e897b1 100644
--- a/bench/deep-tree.py
+++ b/bench/deep-tree.py
@@ -1,14 +1,17 @@
 # Small benchmark for compare creation times with parameter
 # PYTABLES_SYS_ATTRS active or not.
 
-import os, subprocess, gc
+from __future__ import print_function
+import os
+import subprocess
 from time import time
 import random
-import numpy
+#import numpy
 import tables
 
 random.seed(2)
 
+
 def show_stats(explain, tref):
     "Show the used memory (only works for Linux 2.6.x)."
     # Build the command to obtain memory info
@@ -28,47 +31,47 @@ def show_stats(explain, tref):
         elif line.startswith("VmLib:"):
             vmlib = int(line.split()[1])
     sout.close()
-    print "Memory usage: ******* %s *******" % explain
-    print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
-    print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
-    print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
+    print("Memory usage: ******* %s *******" % explain)
+    print("VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss))
+    print("VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk))
+    print("VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib))
     tnow = time()
-    print "WallClock time:", round(tnow - tref, 3)
+    print("WallClock time:", round(tnow - tref, 3))
     return tnow
 
 
 def populate(f, nlevels):
     g = f.root
-    arr = numpy.zeros((10,), "f4")
-    recarr = numpy.zeros((10,), "i4,f4")
-    descr = {'f0': tables.Int32Col(), 'f1': tables.Float32Col()}
+    #arr = numpy.zeros((10,), "f4")
+    #descr = {'f0': tables.Int32Col(), 'f1': tables.Float32Col()}
     for i in range(nlevels):
         #dset = f.create_array(g, "DS1", arr)
         #dset = f.create_array(g, "DS2", arr)
-        dset = f.create_carray(g, "DS1", tables.IntAtom(), (10,))
-        dset = f.create_carray(g, "DS2", tables.IntAtom(), (10,))
+        f.create_carray(g, "DS1", tables.IntAtom(), (10,))
+        f.create_carray(g, "DS2", tables.IntAtom(), (10,))
         #dset = f.create_table(g, "DS1", descr)
         #dset = f.create_table(g, "DS2", descr)
-        group2 = f.create_group(g, 'group2_')
+        f.create_group(g, 'group2_')
         g = f.create_group(g, 'group')
 
 
 def getnode(f, nlevels, niter, range_):
     for i in range(niter):
-        nlevel = random.randrange((nlevels-range_)/2, (nlevels+range_)/2)
+        nlevel = random.randrange(
+            (nlevels - range_) / 2, (nlevels + range_) / 2)
         groupname = ""
         for i in range(nlevel):
             groupname += "/group"
         groupname += "/DS1"
-        n = f.get_node(groupname)
+        f.get_node(groupname)
 
 
-if __name__=='__main__':
+if __name__ == '__main__':
     nlevels = 1024
     niter = 256
     range_ = 128
     nodeCacheSlots = 64
-    pytablesSysAttrs = True
+    pytables_sys_attrs = True
     profile = True
     doprofile = True
     verbose = False
@@ -77,11 +80,13 @@ if __name__=='__main__':
         import pstats
         import cProfile as prof
 
-    if profile: tref = time()
-    if profile: show_stats("Abans de crear...", tref)
+    if profile:
+        tref = time()
+    if profile:
+        show_stats("Abans de crear...", tref)
     f = tables.open_file("/tmp/PTdeep-tree.h5", 'w',
-                        node_cache_slots=nodeCacheSlots,
-                        pytables_sys_attrs=pytablesSysAttrs)
+                         node_cache_slots=nodeCacheSlots,
+                         pytables_sys_attrs=pytables_sys_attrs)
     if doprofile:
         prof.run('populate(f, nlevels)', 'populate.prof')
         stats = pstats.Stats('populate.prof')
@@ -94,14 +99,18 @@ if __name__=='__main__':
     else:
         populate(f, nlevels)
     f.close()
-    if profile: show_stats("Despres de crear", tref)
+    if profile:
+        show_stats("Despres de crear", tref)
 
-    if profile: tref = time()
-    if profile: show_stats("Abans d'obrir...", tref)
+    if profile:
+        tref = time()
+    if profile:
+        show_stats("Abans d'obrir...", tref)
     f = tables.open_file("/tmp/PTdeep-tree.h5", 'r',
-                        node_cache_slots=nodeCacheSlots,
-                        pytables_sys_attrs=pytablessysattrs)
-    if profile: show_stats("Abans d'accedir...", tref)
+                         node_cache_slots=nodeCacheSlots,
+                         pytables_sys_attrs=pytables_sys_attrs)
+    if profile:
+        show_stats("Abans d'accedir...", tref)
     if doprofile:
         prof.run('getnode(f, nlevels, niter, range_)', 'getnode.prof')
         stats = pstats.Stats('getnode.prof')
@@ -113,7 +122,8 @@ if __name__=='__main__':
             stats.print_stats(20)
     else:
         getnode(f, nlevels, niter, range_)
-    if profile: show_stats("Despres d'accedir", tref)
+    if profile:
+        show_stats("Despres d'accedir", tref)
     f.close()
-    if profile: show_stats("Despres de tancar", tref)
-
+    if profile:
+        show_stats("Despres de tancar", tref)
diff --git a/bench/evaluate.py b/bench/evaluate.py
index f108590..9980c17 100644
--- a/bench/evaluate.py
+++ b/bench/evaluate.py
@@ -1,12 +1,11 @@
+from __future__ import print_function
 import sys
 from time import time
 
 import numpy as np
 import tables as tb
-import tables.numexpr as ne
-from tables.numexpr.necompiler import (
+from numexpr.necompiler import (
     getContext, getExprNames, getType, NumExpr)
-from tables.utilsextension import lrange
 
 
 shape = (1000, 160000)
@@ -20,21 +19,23 @@ ofilters = tb.Filters(complevel=1, complib="blosc", shuffle=0)
 typecode_to_dtype = {'b': 'bool', 'i': 'int32', 'l': 'int64', 'f': 'float32',
                      'd': 'float64', 'c': 'complex128'}
 
+
 def _compute(result, function, arguments,
              start=None, stop=None, step=None):
-    """Compute the `function` over the `arguments` and put the outcome in `result`"""
+    """Compute the `function` over the `arguments` and put the outcome in
+    `result`"""
     arg0 = arguments[0]
     if hasattr(arg0, 'maindim'):
         maindim = arg0.maindim
         (start, stop, step) = arg0._process_range_read(start, stop, step)
         nrowsinbuf = arg0.nrowsinbuf
-        print "nrowsinbuf-->", nrowsinbuf
+        print("nrowsinbuf-->", nrowsinbuf)
     else:
         maindim = 0
         (start, stop, step) = (0, len(arg0), 1)
         nrowsinbuf = len(arg0)
     shape = list(arg0.shape)
-    shape[maindim] = lrange(start, stop, step).length
+    shape[maindim] = len(range(start, stop, step))
 
     # The slices parameter for arg0.__getitem__
     slices = [slice(0, dim, 1) for dim in arg0.shape]
@@ -46,14 +47,14 @@ def _compute(result, function, arguments,
             arg._v_convert = False
 
     # Start the computation itself
-    for start2 in lrange(start, stop, step*nrowsinbuf):
+    for start2 in range(start, stop, step * nrowsinbuf):
         # Save the records on disk
         stop2 = start2 + step * nrowsinbuf
         if stop2 > stop:
             stop2 = stop
         # Set the proper slice in the main dimension
         slices[maindim] = slice(start2, stop2, step)
-        start3 = (start2-start)/step
+        start3 = (start2 - start) / step
         stop3 = start3 + nrowsinbuf
         if stop3 > shape[maindim]:
             stop3 = shape[maindim]
@@ -102,18 +103,18 @@ def evaluate(ex, out=None, local_dict=None, global_dict=None, **kwargs):
 
     # Create a signature
     signature = [(name, getType(type_)) for (name, type_) in zip(names, types)]
-    print "signature-->", signature
+    print("signature-->", signature)
 
     # Compile the expression
     compiled_ex = NumExpr(ex, signature, [], **kwargs)
-    print "fullsig-->", compiled_ex.fullsig
+    print("fullsig-->", compiled_ex.fullsig)
 
     _compute(out, compiled_ex, arguments)
 
     return
 
 
-if __name__=="__main__":
+if __name__ == "__main__":
     iarrays = 0
     oarrays = 0
     doprofile = 1
@@ -124,28 +125,28 @@ if __name__=="__main__":
     # Create some arrays
     if iarrays:
         a = np.ones(shape, dtype='float32')
-        b = np.ones(shape, dtype='float32')*2
-        c = np.ones(shape, dtype='float32')*3
+        b = np.ones(shape, dtype='float32') * 2
+        c = np.ones(shape, dtype='float32') * 3
     else:
         a = f.create_carray(f.root, 'a', tb.Float32Atom(dflt=1.),
-                           shape=shape, filters=filters)
+                            shape=shape, filters=filters)
         a[:] = 1.
         b = f.create_carray(f.root, 'b', tb.Float32Atom(dflt=2.),
-                           shape=shape, filters=filters)
+                            shape=shape, filters=filters)
         b[:] = 2.
         c = f.create_carray(f.root, 'c', tb.Float32Atom(dflt=3.),
-                           shape=shape, filters=filters)
+                            shape=shape, filters=filters)
         c[:] = 3.
     if oarrays:
         out = np.empty(shape, dtype='float32')
     else:
         out = f.create_carray(f.root, 'out', tb.Float32Atom(),
-                             shape=shape, filters=ofilters)
+                              shape=shape, filters=ofilters)
 
     t0 = time()
     if iarrays and oarrays:
         #out = ne.evaluate("a*b+c")
-        out = a*b+c
+        out = a * b + c
     elif doprofile:
         import cProfile as prof
         import pstats
@@ -165,9 +166,9 @@ if __name__=="__main__":
         ofile.close()
     else:
         evaluate("a*b+c", out)
-    print "Time for evaluate-->", round(time()-t0, 3)
+    print("Time for evaluate-->", round(time() - t0, 3))
 
-    #print "out-->", `out`
-    #print `out[:]`
+    # print "out-->", `out`
+    # print `out[:]`
 
     f.close()
diff --git a/bench/expression.py b/bench/expression.py
index 74abdd2..55beade 100644
--- a/bench/expression.py
+++ b/bench/expression.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 from time import time
 import os.path
 
@@ -6,11 +7,12 @@ import tables as tb
 
 OUT_DIR = "/scratch2/faltet/"   # the directory for data output
 
-shape = (1000, 1000*1000)   # shape for input arrays
+shape = (1000, 1000 * 1000)   # shape for input arrays
 expr = "a*b+1"   # Expression to be computed
 
 nrows, ncols = shape
 
+
 def tables(docompute, dowrite, complib, verbose):
 
     # Filenames
@@ -28,7 +30,7 @@ def tables(docompute, dowrite, complib, verbose):
     else:
         filters = tb.Filters(complevel=0, shuffle=False)
     if verbose:
-        print "Will use filters:", filters
+        print("Will use filters:", filters)
 
     if dowrite:
         f = tb.open_file(ifilename, 'w')
@@ -36,18 +38,20 @@ def tables(docompute, dowrite, complib, verbose):
         # Build input arrays
         t0 = time()
         root = f.root
-        a = f.create_carray(root, 'a', tb.Float32Atom(), shape, filters=filters)
-        b = f.create_carray(root, 'b', tb.Float32Atom(), shape, filters=filters)
+        a = f.create_carray(root, 'a', tb.Float32Atom(),
+                            shape, filters=filters)
+        b = f.create_carray(root, 'b', tb.Float32Atom(),
+                            shape, filters=filters)
         if verbose:
-            print "chunkshape:", a.chunkshape
-            print "chunksize:", np.prod(a.chunkshape)*a.dtype.itemsize
+            print("chunkshape:", a.chunkshape)
+            print("chunksize:", np.prod(a.chunkshape) * a.dtype.itemsize)
         #row = np.linspace(0, 1, ncols)
         row = np.arange(0, ncols, dtype='float32')
-        for i in xrange(nrows):
-            a[i] = row*(i+1)
-            b[i] = row*(i+1)*2
+        for i in range(nrows):
+            a[i] = row * (i + 1)
+            b[i] = row * (i + 1) * 2
         f.close()
-        print "[tables.Expr] Time for creating inputs:", round(time()-t0, 3)
+        print("[tables.Expr] Time for creating inputs:", round(time() - t0, 3))
 
     if docompute:
         f = tb.open_file(ifilename, 'r')
@@ -55,17 +59,18 @@ def tables(docompute, dowrite, complib, verbose):
         a = f.root.a
         b = f.root.b
         r1 = f.create_carray(fr.root, 'r1', tb.Float32Atom(), shape,
-                            filters=filters)
+                             filters=filters)
         # The expression
         e = tb.Expr(expr)
         e.set_output(r1)
         t0 = time()
         e.eval()
         if verbose:
-            print "First ten values:", r1[0, :10]
+            print("First ten values:", r1[0, :10])
         f.close()
         fr.close()
-        print "[tables.Expr] Time for computing & save:", round(time()-t0, 3)
+        print("[tables.Expr] Time for computing & save:",
+              round(time() - t0, 3))
 
 
 def memmap(docompute, dowrite, verbose):
@@ -81,11 +86,12 @@ def memmap(docompute, dowrite, verbose):
         # Fill arrays a and b
         #row = np.linspace(0, 1, ncols)
         row = np.arange(0, ncols, dtype='float32')
-        for i in xrange(nrows):
-            a[i] = row*(i+1)
-            b[i] = row*(i+1)*2
+        for i in range(nrows):
+            a[i] = row * (i + 1)
+            b[i] = row * (i + 1) * 2
         del a, b  # flush data
-        print "[numpy.memmap] Time for creating inputs:", round(time()-t0, 3)
+        print("[numpy.memmap] Time for creating inputs:",
+              round(time() - t0, 3))
 
     if docompute:
         t0 = time()
@@ -95,13 +101,13 @@ def memmap(docompute, dowrite, verbose):
         # Create the array output
         r = np.memmap(rfilename, dtype='float32', mode='w+', shape=shape)
         # Do the computation row by row
-        for i in xrange(nrows):
-            r[i] = eval(expr, {'a':a[i], 'b':b[i]})
+        for i in range(nrows):
+            r[i] = eval(expr, {'a': a[i], 'b': b[i]})
         if verbose:
-            print "First ten values:", r[0, :10]
+            print("First ten values:", r[0, :10])
         del a, b
         del r  # flush output data
-        print "[numpy.memmap] Time for compute & save:", round(time()-t0, 3)
+        print("[numpy.memmap] Time for compute & save:", round(time() - t0, 3))
 
 
 def do_bench(what, docompute, dowrite, complib, verbose):
@@ -111,8 +117,9 @@ def do_bench(what, docompute, dowrite, complib, verbose):
         memmap(docompute, dowrite, verbose)
 
 
-if __name__=="__main__":
-    import sys, os
+if __name__ == "__main__":
+    import sys
+    import os
     import getopt
 
     usage = """usage: %s [-T] [-M] [-c] [-w] [-v] [-z complib]
@@ -153,15 +160,15 @@ if __name__=="__main__":
         elif option[0] == '-z':
             complib = option[1]
             if complib not in ('blosc', 'lzo', 'zlib'):
-                print ("complib must be 'lzo' or 'zlib' "
-                       "and you passed: '%s'" % complib)
+                print(("complib must be 'lzo' or 'zlib' "
+                       "and you passed: '%s'" % complib))
                 sys.exit(1)
 
     # If not a backend selected, abort
     if not usepytables and not usememmap:
-        print "Please select a backend:"
-        print "PyTables.Expr: -T"
-        print "NumPy.memmap: -M"
+        print("Please select a backend:")
+        print("PyTables.Expr: -T")
+        print("NumPy.memmap: -M")
         sys.exit(1)
 
     # Select backend and do the benchmark
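
bench/expression.py evaluates "a*b+1" out of core with tables.Expr:
operands and output stay on disk and the kernel streams over them block by
block.  A minimal sketch of the same API (the file name is hypothetical)::

    import tables as tb

    f = tb.open_file("expr-demo.h5", "w")
    a = f.create_carray(f.root, "a", tb.Float32Atom(), (1000, 1000))
    b = f.create_carray(f.root, "b", tb.Float32Atom(), (1000, 1000))
    a[:] = 1.0
    b[:] = 2.0
    out = f.create_carray(f.root, "out", tb.Float32Atom(), (1000, 1000))
    e = tb.Expr("a*b+1")  # operands are picked up from the local scope
    e.set_output(out)     # write results to the on-disk array
    e.eval()
    f.close()
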
diff --git a/bench/get-figures-ranges.py b/bench/get-figures-ranges.py
index c6d7aab..98685a6 100644
--- a/bench/get-figures-ranges.py
+++ b/bench/get-figures-ranges.py
@@ -1,11 +1,13 @@
+from __future__ import print_function
 from pylab import *
 
-linewidth=2
+linewidth = 2
 #markers= ['+', ',', 'o', '.', 's', 'v', 'x', '>', '<', '^']
 #markers= [ 'x', '+', 'o', 's', 'v', '^', '>', '<', ]
-markers= [ 's', 'o', 'v', '^', '+', 'x', '>', '<', ]
+markers = ['s', 'o', 'v', '^', '+', 'x', '>', '<', ]
 markersize = 8
 
+
 def get_values(filename):
     f = open(filename)
     sizes = []
@@ -18,7 +20,7 @@ def get_values(filename):
             tmp = tmp[1:-1]
             lower, upper = int(tmp.split(',')[0]), int(tmp.split(',')[1])
             isize = upper - lower
-            #print "isize-->", isize
+            # print "isize-->", isize
         if isize is None or isize == 0:
             continue
         if insert and line.startswith('Insert time'):
@@ -71,6 +73,7 @@ def get_values(filename):
     f.close()
     return sizes, values
 
+
 def show_plot(plots, yaxis, legends, gtitle):
     xlabel('Number of hits')
     ylabel(yaxis)
@@ -81,8 +84,7 @@ def show_plot(plots, yaxis, legends, gtitle):
 #     legends = [f[f.find('-'):f.index('.out')] for f in filenames]
 #     legends = [l.replace('-', ' ') for l in legends]
     #legend([p[0] for p in plots], legends, loc = "upper left")
-    legend([p[0] for p in plots], legends, loc = "best")
-
+    legend([p[0] for p in plots], legends, loc="best")
 
     #subplots_adjust(bottom=0.2, top=None, wspace=0.2, hspace=0.2)
     if outfile:
@@ -92,7 +94,8 @@ def show_plot(plots, yaxis, legends, gtitle):
 
 if __name__ == '__main__':
 
-    import sys, getopt
+    import sys
+    import getopt
 
     usage = """usage: %s [-o file] [-t title] [--insert] [--create-index] [--create-total] [--table-size] [--indexes-size] [--total-size] [--query=colname]  [--query-cold=colname] [--query-warm=colname] files
  -o filename for output (only .png and .jpg extensions supported)
@@ -205,7 +208,7 @@ if __name__ == '__main__':
         plegend = filename[filename.find('-'):filename.index('.out')]
         plegend = plegend.replace('-', ' ')
         xval, yval = get_values(filename)
-        print "Values for %s --> %s, %s" % (filename, xval, yval)
+        print("Values for %s --> %s, %s" % (filename, xval, yval))
         if "PyTables" in filename or "pytables" in filename:
             plot = loglog(xval, yval, linewidth=2)
             #plot = semilogx(xval, yval, linewidth=2)
diff --git a/bench/get-figures.py b/bench/get-figures.py
index 9df8c87..be0e24b 100644
--- a/bench/get-figures.py
+++ b/bench/get-figures.py
@@ -1,11 +1,13 @@
+from __future__ import print_function
 from pylab import *
 
-linewidth=2
+linewidth = 2
 #markers= ['+', ',', 'o', '.', 's', 'v', 'x', '>', '<', '^']
 #markers= [ 'x', '+', 'o', 's', 'v', '^', '>', '<', ]
-markers= [ 's', 'o', 'v', '^', '+', 'x', '>', '<', ]
+markers = ['s', 'o', 'v', '^', '+', 'x', '>', '<', ]
 markersize = 8
 
+
 def get_values(filename):
     f = open(filename)
     sizes = []
@@ -24,7 +26,7 @@ def get_values(filename):
                     if size[-1] == "m":
                         isize *= 1000
                     elif size[-1] == "g":
-                        isize *= 1000*1000
+                        isize *= 1000 * 1000
         elif insert and line.startswith('Insert time'):
             tmp = line.split(':')[1]
             itime = float(tmp)
@@ -38,9 +40,9 @@ def get_values(filename):
                 values.pop()
             sizes.append(isize)
             if overlaps:
-                values.append(int(e1)+1)
+                values.append(int(e1) + 1)
             else:
-                values.append(float(e2)+1)
+                values.append(float(e2) + 1)
         elif (create_total or create_index) and line.startswith('Index time'):
             tmp = line.split(':')[1]
             xtime = float(tmp)
@@ -96,13 +98,14 @@ def get_values(filename):
     f.close()
     return sizes, values
 
+
 def show_plot(plots, yaxis, legends, gtitle):
     xlabel('Number of rows')
     ylabel(yaxis)
     title(gtitle)
     #xlim(10**3, 10**9)
-    xlim(10**3, 10**10)
-    #ylim(1.0e-5)
+    xlim(10 ** 3, 10 ** 10)
+    # ylim(1.0e-5)
     #ylim(-1e4, 1e5)
     #ylim(-1e3, 1e4)
     #ylim(-1e2, 1e3)
@@ -110,10 +113,9 @@ def show_plot(plots, yaxis, legends, gtitle):
 
 #     legends = [f[f.find('-'):f.index('.out')] for f in filenames]
 #     legends = [l.replace('-', ' ') for l in legends]
-    legend([p[0] for p in plots], legends, loc = "upper left")
+    legend([p[0] for p in plots], legends, loc="upper left")
     #legend([p[0] for p in plots], legends, loc = "center left")
 
-
     #subplots_adjust(bottom=0.2, top=None, wspace=0.2, hspace=0.2)
     if outfile:
         savefig(outfile)
@@ -122,7 +124,8 @@ def show_plot(plots, yaxis, legends, gtitle):
 
 if __name__ == '__main__':
 
-    import sys, getopt
+    import sys
+    import getopt
 
     usage = """usage: %s [-o file] [-t title] [--insert] [--create-index] [--create-total] [--overlaps] [--entropy] [--table-size] [--indexes-size] [--total-size] [--query=colname] [--query-cold=colname] [--query-warm=colname] [--query-repeated=colname] files
  -o filename for output (only .png and .jpg extensions supported)
@@ -263,7 +266,7 @@ if __name__ == '__main__':
         #plegend = plegend.replace('zlib1', '')
         if filename.find('PyTables') != -1:
             xval, yval = get_values(filename)
-            print "Values for %s --> %s, %s" % (filename, xval, yval)
+            print("Values for %s --> %s, %s" % (filename, xval, yval))
             if xval != []:
                 plot = loglog(xval, yval)
                 #plot = semilogx(xval, yval)
@@ -273,7 +276,7 @@ if __name__ == '__main__':
                 legends.append(plegend)
         else:
             xval, yval = get_values(filename)
-            print "Values for %s --> %s, %s" % (filename, xval, yval)
+            print("Values for %s --> %s, %s" % (filename, xval, yval))
             plots.append(loglog(xval, yval, linewidth=3, color='m'))
             #plots.append(semilogx(xval, yval, linewidth=linewidth, color='m'))
             legends.append(plegend)
diff --git a/bench/indexed_search.py b/bench/indexed_search.py
index a91f65b..e7de81f 100644
--- a/bench/indexed_search.py
+++ b/bench/indexed_search.py
@@ -1,19 +1,22 @@
+from __future__ import print_function
 from time import time
 import subprocess
 import random
 import numpy
 
 # Constants
-STEP = 1000*100  # the size of the buffer to fill the table, in rows
-SCALE = 0.1      # standard deviation of the noise compared with actual values
-NI_NTIMES = 1      # The number of queries for doing a mean (non-idx cols)
-#COLDCACHE = 10   # The number of reads where the cache is considered 'cold'
-#WARMCACHE = 50   # The number of reads until the cache is considered 'warmed'
-#READ_TIMES = WARMCACHE+50    # The number of complete calls to DB.query_db()
-#COLDCACHE = 50   # The number of reads where the cache is considered 'cold'
-#WARMCACHE = 50  # The number of reads until the cache is considered 'warmed'
-#READ_TIMES = WARMCACHE+50    # The number of complete calls to DB.query_db()
-MROW = 1000*1000.
+
+STEP = 1000 * 100   # the size of the buffer to fill the table, in rows
+SCALE = 0.1         # standard deviation of the noise compared with actual
+                    # values
+NI_NTIMES = 1       # The number of queries for doing a mean (non-idx cols)
+# COLDCACHE = 10   # The number of reads where the cache is considered 'cold'
+# WARMCACHE = 50   # The number of reads until the cache is considered 'warmed'
+# READ_TIMES = WARMCACHE+50    # The number of complete calls to DB.query_db()
+# COLDCACHE = 50   # The number of reads where the cache is considered 'cold'
+# WARMCACHE = 50   # The number of reads until the cache is considered 'warmed'
+# READ_TIMES = WARMCACHE+50    # The number of complete calls to DB.query_db()
+MROW = 1000 * 1000.
 
 # Test values
 COLDCACHE = 5   # The number of reads where the cache is considered 'cold'
@@ -27,13 +30,15 @@ prec = 6  # precision for printing floats purposes
 
 def get_nrows(nrows_str):
     if nrows_str.endswith("k"):
-        return int(float(nrows_str[:-1])*1000)
+        return int(float(nrows_str[:-1]) * 1000)
     elif nrows_str.endswith("m"):
-        return int(float(nrows_str[:-1])*1000*1000)
+        return int(float(nrows_str[:-1]) * 1000 * 1000)
     elif nrows_str.endswith("g"):
-        return int(float(nrows_str[:-1])*1000*1000*1000)
+        return int(float(nrows_str[:-1]) * 1000 * 1000 * 1000)
     else:
-        raise ValueError("value of nrows must end with either 'k', 'm' or 'g' suffixes.")
+        raise ValueError(
+            "value of nrows must end with either 'k', 'm' or 'g' suffixes.")
+
 
 class DB(object):
 
@@ -53,23 +58,25 @@ class DB(object):
         return int(line.split()[0])
 
     def print_mtime(self, t1, explain):
-        mtime = time()-t1
-        print "%s:" % explain, round(mtime, 6)
-        print "Krows/s:", round((self.nrows/1000.)/mtime, 6)
+        mtime = time() - t1
+        print("%s:" % explain, round(mtime, 6))
+        print("Krows/s:", round((self.nrows / 1000.) / mtime, 6))
 
     def print_qtime(self, colname, ltimes):
-        qtime1 = ltimes[0] # First measured time
+        qtime1 = ltimes[0]  # First measured time
         qtime2 = ltimes[-1]  # Last measured time
-        print "Query time for %s:" % colname, round(qtime1, 6)
-        print "Mrows/s:", round((self.nrows/(MROW))/qtime1, 6)
-        print "Query time for %s (cached):" % colname, round(qtime2, 6)
-        print "Mrows/s (cached):", round((self.nrows/(MROW))/qtime2, 6)
+        print("Query time for %s:" % colname, round(qtime1, 6))
+        print("Mrows/s:", round((self.nrows / (MROW)) / qtime1, 6))
+        print("Query time for %s (cached):" % colname, round(qtime2, 6))
+        print("Mrows/s (cached):", round((self.nrows / (MROW)) / qtime2, 6))
 
     def norm_times(self, ltimes):
         "Get the mean and stddev of ltimes, avoiding the extreme values."
-        lmean = ltimes.mean(); lstd = ltimes.std()
-        ntimes = ltimes[ltimes < lmean+lstd]
-        nmean = ntimes.mean(); nstd = ntimes.std()
+        lmean = ltimes.mean()
+        lstd = ltimes.std()
+        ntimes = ltimes[ltimes < lmean + lstd]
+        nmean = ntimes.mean()
+        nstd = ntimes.std()
         return nmean, nstd
 
     def print_qtime_idx(self, colname, ltimes, repeated, verbose):
@@ -79,36 +86,36 @@ class DB(object):
             r = "[NOREP] "
         ltimes = numpy.array(ltimes)
         ntimes = len(ltimes)
-        qtime1 = ltimes[0] # First measured time
+        qtime1 = ltimes[0]  # First measured time
         ctimes = ltimes[1:COLDCACHE]
         cmean, cstd = self.norm_times(ctimes)
         wtimes = ltimes[WARMCACHE:]
         wmean, wstd = self.norm_times(wtimes)
         if verbose:
-            print "Times for cold cache:\n", ctimes
-            #print "Times for warm cache:\n", wtimes
-            print "Histogram for warm cache: %s\n%s" % \
-                  numpy.histogram(wtimes)
-        print "%s1st query time for %s:" % (r, colname), \
-              round(qtime1, prec)
-        print "%sQuery time for %s (cold cache):" % (r, colname), \
-              round(cmean, prec), "+-", round(cstd, prec)
-        print "%sQuery time for %s (warm cache):" % (r, colname), \
-              round(wmean, prec), "+-", round(wstd, prec)
+            print("Times for cold cache:\n", ctimes)
+            # print "Times for warm cache:\n", wtimes
+            print("Histogram for warm cache: %s\n%s" %
+                  numpy.histogram(wtimes))
+        print("%s1st query time for %s:" % (r, colname),
+              round(qtime1, prec))
+        print("%sQuery time for %s (cold cache):" % (r, colname),
+              round(cmean, prec), "+-", round(cstd, prec))
+        print("%sQuery time for %s (warm cache):" % (r, colname),
+              round(wmean, prec), "+-", round(wstd, prec))
 
     def print_db_sizes(self, init, filled, indexed):
-        table_size = (filled-init)/1024.
-        indexes_size = (indexed-filled)/1024.
-        print "Table size (MB):", round(table_size, 3)
-        print "Indexes size (MB):", round(indexes_size, 3)
-        print "Full size (MB):", round(table_size+indexes_size, 3)
+        table_size = (filled - init) / 1024.
+        indexes_size = (indexed - filled) / 1024.
+        print("Table size (MB):", round(table_size, 3))
+        print("Indexes size (MB):", round(indexes_size, 3))
+        print("Full size (MB):", round(table_size + indexes_size, 3))
 
     def fill_arrays(self, start, stop):
         arr_f8 = numpy.arange(start, stop, dtype='float64')
         arr_i4 = numpy.arange(start, stop, dtype='int32')
         if self.userandom:
-            arr_f8 += numpy.random.normal(0, stop*self.scale,
-                                          size=stop-start)
+            arr_f8 += numpy.random.normal(0, stop * self.scale,
+                                          size=stop - start)
             arr_i4 = numpy.array(arr_f8, dtype='int32')
         return arr_i4, arr_f8
 
@@ -116,7 +123,7 @@ class DB(object):
         self.con = self.open_db(remove=1)
         self.create_table(self.con)
         init_size = self.get_db_size()
-        t1=time()
+        t1 = time()
         self.fill_table(self.con)
         table_size = self.get_db_size()
         self.print_mtime(t1, 'Insert time')
@@ -133,7 +140,7 @@ class DB(object):
         else:
             idx_cols = ['col2', 'col4']
         for colname in idx_cols:
-            t1=time()
+            t1 = time()
             self.index_col(self.con, colname, kind, optlevel, verbose)
             self.print_mtime(t1, 'Index time (%s)' % colname)
 
@@ -161,11 +168,11 @@ class DB(object):
                 ltimes = []
                 random.seed(rseed)
                 for i in range(NI_NTIMES):
-                    t1=time()
+                    t1 = time()
                     results = self.do_query(self.con, colname, base, inkernel)
-                    ltimes.append(time()-t1)
+                    ltimes.append(time() - t1)
                 if verbose:
-                    print "Results len:", results
+                    print("Results len:", results)
                 self.print_qtime(colname, ltimes)
             # Always reopen the file after *every* query loop.
             # Necessary to make the benchmark run correctly.
@@ -180,23 +187,25 @@ class DB(object):
                 # First, non-repeated queries
                 for i in range(niter):
                     base = rndbase[i]
-                    t1=time()
+                    t1 = time()
                     results = self.do_query(self.con, colname, base, inkernel)
-                    #results, tprof = self.do_query(self.con, colname, base, inkernel)
-                    ltimes.append(time()-t1)
+                    #results, tprof = self.do_query(
+                    #    self.con, colname, base, inkernel)
+                    ltimes.append(time() - t1)
                 if verbose:
-                    print "Results len:", results
+                    print("Results len:", results)
                 self.print_qtime_idx(colname, ltimes, False, verbose)
                 # Always reopen the file after *every* query loop.
                 # Necessary to make the benchmark run correctly.
                 self.close_db(self.con)
                 self.con = self.open_db()
                 ltimes = []
-#                 # Second, repeated queries
+# Second, repeated queries
 #                 for i in range(niter):
 #                     t1=time()
-#                     results = self.do_query(self.con, colname, base, inkernel)
-#                     #results, tprof = self.do_query(self.con, colname, base, inkernel)
+#                     results = self.do_query(
+#                         self.con, colname, base, inkernel)
+# results, tprof = self.do_query(self.con, colname, base, inkernel)
 #                     ltimes.append(time()-t1)
 #                 if verbose:
 #                     print "Results len:", results
@@ -204,10 +213,10 @@ class DB(object):
                 # Print internal PyTables index tprof statistics
                 #tprof = numpy.array(tprof)
                 #tmean, tstd = self.norm_times(tprof)
-                #print "tprof-->", round(tmean, prec), "+-", round(tstd, prec)
-                #print "tprof hist-->", \
+                # print "tprof-->", round(tmean, prec), "+-", round(tstd, prec)
+                # print "tprof hist-->", \
                 #    numpy.histogram(tprof)
-                #print "tprof raw-->", tprof
+                # print "tprof raw-->", tprof
                 # Always reopen the file after *every* query loop.
                 # Necessary to make the benchmark run correctly.
                 self.close_db(self.con)
@@ -219,8 +228,8 @@ class DB(object):
         con.close()
 
 
-if __name__=="__main__":
-    import sys, os
+if __name__ == "__main__":
+    import sys
     import getopt
 
     try:
@@ -256,7 +265,8 @@ if __name__=="__main__":
             \n""" % sys.argv[0]
 
     try:
-        opts, pargs = getopt.getopt(sys.argv[1:], 'TPvfkpmcqiISxz:l:R:N:n:d:O:t:s:Q:')
+        opts, pargs = getopt.getopt(
+            sys.argv[1:], 'TPvfkpmcqiISxz:l:R:N:n:d:O:t:s:Q:')
     except:
         sys.stderr.write(usage)
         sys.exit(1)
@@ -338,13 +348,14 @@ if __name__=="__main__":
             if option[1] in ('full', 'medium', 'light', 'ultralight'):
                 kind = option[1]
             else:
-                print "kind should be either 'full', 'medium', 'light' or 'ultralight'"
+                print("kind should be either 'full', 'medium', 'light' or "
+                      "'ultralight'")
                 sys.exit(1)
         elif option[0] == '-s':
             if option[1] in ('int', 'float'):
                 dtype = option[1]
             else:
-                print "column should be either 'int' or 'float'"
+                print("column should be either 'int' or 'float'")
                 sys.exit(1)
         elif option[0] == '-Q':
             repeatquery = 1
@@ -352,9 +363,9 @@ if __name__=="__main__":
 
     # If no database backend is selected, abort
     if not usepytables and not usepostgres:
-        print "Please select a backend:"
-        print "PyTables: -T"
-        print "Postgres: -P"
+        print("Please select a backend:")
+        print("PyTables: -T")
+        print("Postgres: -P")
         sys.exit(1)
 
     # Create the class for the database
@@ -372,9 +383,9 @@ if __name__=="__main__":
 
     if verbose:
         if userandom:
-            print "using random values"
+            print("using random values")
         if onlyidxquery:
-            print "doing indexed queries only"
+            print("doing indexed queries only")
 
     if psyco_imported and usepsyco:
         psyco.bind(db.create_db)
@@ -382,15 +393,18 @@ if __name__=="__main__":
 
     if docreate:
         if verbose:
-            print "writing %s rows" % krows
+            print("writing %s rows" % krows)
         db.create_db(dtype, kind, optlevel, verbose)
 
     if doquery:
-        print "Calling query_db() %s times" % niter
+        print("Calling query_db() %s times" % niter)
         if doprofile:
             import pstats
             import cProfile as prof
-            prof.run('db.query_db(niter, dtype, onlyidxquery, onlynonidxquery, avoidfscache, verbose, inkernel)', 'indexed_search.prof')
+            prof.run(
+                'db.query_db(niter, dtype, onlyidxquery, onlynonidxquery, '
+                'avoidfscache, verbose, inkernel)',
+                'indexed_search.prof')
             stats = pstats.Stats('indexed_search.prof')
             stats.strip_dirs()
             stats.sort_stats('time', 'calls')
@@ -402,15 +416,20 @@ if __name__=="__main__":
             from cProfile import Profile
             import lsprofcalltree
             prof = Profile()
-            prof.run('db.query_db(niter, dtype, onlyidxquery, onlynonidxquery, avoidfscache, verbose, inkernel)')
+            prof.run(
+                'db.query_db(niter, dtype, onlyidxquery, onlynonidxquery, '
+                'avoidfscache, verbose, inkernel)')
             kcg = lsprofcalltree.KCacheGrind(prof)
             ofile = open('indexed_search.kcg', 'w')
             kcg.output(ofile)
             ofile.close()
         elif doprofile:
-            import hotshot, hotshot.stats
+            import hotshot
+            import hotshot.stats
             prof = hotshot.Profile("indexed_search.prof")
-            benchtime, stones = prof.run('db.query_db(niter, dtype, onlyidxquery, onlynonidxquery, avoidfscache, verbose, inkernel)')
+            benchtime, stones = prof.run(
+                'db.query_db(niter, dtype, onlyidxquery, onlynonidxquery, '
+                'avoidfscache, verbose, inkernel)')
             prof.close()
             stats = hotshot.stats.load("indexed_search.prof")
             stats.strip_dirs()
@@ -424,19 +443,20 @@ if __name__=="__main__":
         # Start by a range which is almost None
         db.rng = [1, 1]
         if verbose:
-            print "range:", db.rng
+            print("range:", db.rng)
         db.query_db(niter, dtype, onlyidxquery, onlynonidxquery,
                     avoidfscache, verbose, inkernel)
-        for i in xrange(repeatvalue):
+        for i in range(repeatvalue):
             for j in (1, 2, 5):
-                rng = j*10**i
-                db.rng = [-rng/2, rng/2]
+                rng = j * 10 ** i
+                db.rng = [-rng / 2, rng / 2]
                 if verbose:
-                    print "range:", db.rng
+                    print("range:", db.rng)
 #                 if usepostgres:
-#                     os.system("echo 1 > /proc/sys/vm/drop_caches; /etc/init.d/postgresql restart")
+#                     os.system(
+#                         "echo 1 > /proc/sys/vm/drop_caches;"
+#                         " /etc/init.d/postgresql restart")
 #                 else:
 #                     os.system("echo 1 > /proc/sys/vm/drop_caches")
                 db.query_db(niter, dtype, onlyidxquery, onlynonidxquery,
                             avoidfscache, verbose, inkernel)
-
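
A recurring detail in indexed_search.py is how query times are summarized:
norm_times() above discards measurements larger than mean + std before
averaging, so one slow outlier (e.g. a cold file-system cache) does not skew
the reported warm-cache figure. A self-contained sketch of that
normalization:

    import numpy as np

    def norm_times(ltimes):
        "Mean and stddev of ltimes, ignoring values above mean + std."
        ltimes = np.asarray(ltimes)
        ntimes = ltimes[ltimes < ltimes.mean() + ltimes.std()]
        return ntimes.mean(), ntimes.std()

    # the one slow outlier among otherwise stable timings gets dropped
    print(norm_times([0.011, 0.010, 0.012, 0.25, 0.011]))
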
diff --git a/bench/keysort.py b/bench/keysort.py
index 5a60fe1..b641641 100644
--- a/bench/keysort.py
+++ b/bench/keysort.py
@@ -1,32 +1,33 @@
+from __future__ import print_function
 from tables.indexesextension import keysort
 import numpy
 from time import time
 
-N = 1000*1000
-rnd=numpy.random.randint(N, size=N)
+N = 1000 * 1000
+rnd = numpy.random.randint(N, size=N)
 
 for dtype1 in ('S6', 'b1',
                'i1', 'i2', 'i4', 'i8',
                'u1', 'u2', 'u4', 'u8', 'f4', 'f8'):
     for dtype2 in ('u4', 'i8'):
-        print "dtype array1, array2-->", dtype1, dtype2
-        a=numpy.array(rnd, dtype1)
-        b=numpy.arange(N, dtype=dtype2)
-        c=a.copy()
+        print("dtype array1, array2-->", dtype1, dtype2)
+        a = numpy.array(rnd, dtype1)
+        b = numpy.arange(N, dtype=dtype2)
+        c = a.copy()
 
-        t1=time()
-        d=c.argsort()
+        t1 = time()
+        d = c.argsort()
         # c.sort()
         # e=c
-        e=c[d]
-        f=b[d]
-        tref = time()-t1
-        print "normal sort time-->", tref
+        e = c[d]
+        f = b[d]
+        tref = time() - t1
+        print("normal sort time-->", tref)
 
-        t1=time()
+        t1 = time()
         keysort(a, b)
-        tks = time()-t1
-        print "keysort time-->", tks, "    %.2fx" % (tref/tks,)
+        tks = time() - t1
+        print("keysort time-->", tks, "    %.2fx" % (tref / tks,))
         assert numpy.alltrue(a == e)
         #assert numpy.alltrue(b == d)
         assert numpy.alltrue(f == d)
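
keysort(a, b) sorts a in place and applies the same permutation to b, which
is exactly what the argsort-based reference path above computes as a[d] and
b[d]. The equivalence in pure NumPy (without tables.indexesextension):

    import numpy as np

    a = np.array([3, 1, 2], dtype='i4')
    b = np.arange(3, dtype='i8')

    d = a.argsort()
    # [1 2 3] [1 2 0]: sorted keys plus permuted payload; since b is an
    # arange, the permuted payload equals the argsort indices themselves,
    # which is what the `assert numpy.alltrue(f == d)` above checks.
    print(a[d], b[d])
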
diff --git a/bench/lookup_bench.py b/bench/lookup_bench.py
index d2dec49..49c35db 100644
--- a/bench/lookup_bench.py
+++ b/bench/lookup_bench.py
@@ -1,8 +1,7 @@
-"""
-Benchmark to help choosing the best chunksize so as to optimize the
-access time in random lookups.
-"""
+"""Benchmark to help choosing the best chunksize so as to optimize the access
+time in random lookups."""
 
+from __future__ import print_function
 from time import time
 import os
 import subprocess
@@ -14,15 +13,17 @@ NOISE = 1e-15    # standard deviation of the noise compared with actual values
 
 rdm_cod = ['lin', 'rnd']
 
+
 def get_nrows(nrows_str):
     if nrows_str.endswith("k"):
-        return int(float(nrows_str[:-1])*1000)
+        return int(float(nrows_str[:-1]) * 1000)
     elif nrows_str.endswith("m"):
-        return int(float(nrows_str[:-1])*1000*1000)
+        return int(float(nrows_str[:-1]) * 1000 * 1000)
     elif nrows_str.endswith("g"):
-        return int(float(nrows_str[:-1])*1000*1000*1000)
+        return int(float(nrows_str[:-1]) * 1000 * 1000 * 1000)
     else:
-        raise ValueError("value of nrows must end with either 'k', 'm' or 'g' suffixes.")
+        raise ValueError(
+            "value of nrows must end with either 'k', 'm' or 'g' suffixes.")
 
 
 class DB(object):
@@ -33,13 +34,13 @@ class DB(object):
         self.docompress = docompress
         self.complib = complib
         self.filename = '-'.join([rdm_cod[userandom],
-                                  "n"+nrows, "s"+chunksize, dtype])
+                                  "n" + nrows, "s" + chunksize, dtype])
         # Complete the filename
         self.filename = "lookup-" + self.filename
         if docompress:
             self.filename += '-' + complib + str(docompress)
         self.filename = datadir + '/' + self.filename + '.h5'
-        print "Processing database:", self.filename
+        print("Processing database:", self.filename)
         self.userandom = userandom
         self.nrows = get_nrows(nrows)
         self.chunksize = get_nrows(chunksize)
@@ -53,13 +54,13 @@ class DB(object):
         return int(line.split()[0])
 
     def print_mtime(self, t1, explain):
-        mtime = time()-t1
-        print "%s:" % explain, round(mtime, 6)
-        print "Krows/s:", round((self.nrows/1000.)/mtime, 6)
+        mtime = time() - t1
+        print("%s:" % explain, round(mtime, 6))
+        print("Krows/s:", round((self.nrows / 1000.) / mtime, 6))
 
     def print_db_sizes(self, init, filled):
-        array_size = (filled-init)/1024.
-        print "Array size (MB):", round(array_size, 3)
+        array_size = (filled - init) / 1024.
+        print("Array size (MB):", round(array_size, 3))
 
     def open_db(self, remove=0):
         if remove and os.path.exists(self.filename):
@@ -71,7 +72,7 @@ class DB(object):
         self.con = self.open_db(remove=1)
         self.create_array()
         init_size = self.get_db_size()
-        t1=time()
+        t1 = time()
         self.fill_array()
         array_size = self.get_db_size()
         self.print_mtime(t1, 'Insert time')
@@ -83,18 +84,18 @@ class DB(object):
         filters = tables.Filters(complevel=self.docompress,
                                  complib=self.complib)
         atom = tables.Atom.from_kind(self.dtype)
-        earray = self.con.create_earray(self.con.root, 'earray', atom, (0,),
-                                       filters=filters,
-                                       expectedrows=self.nrows,
-                                       chunkshape=(self.chunksize,))
+        self.con.create_earray(self.con.root, 'earray', atom, (0,),
+                               filters=filters,
+                               expectedrows=self.nrows,
+                               chunkshape=(self.chunksize,))
 
     def fill_array(self):
         "Fills the array"
         earray = self.con.root.earray
         j = 0
         arr = self.get_array(0, self.step)
-        for i in xrange(0, self.nrows, self.step):
-            stop = (j+1)*self.step
+        for i in range(0, self.nrows, self.step):
+            stop = (j + 1) * self.step
             if stop > self.nrows:
                 stop = self.nrows
             ###arr = self.get_array(i, stop, dtype)
@@ -105,24 +106,24 @@ class DB(object):
     def get_array(self, start, stop):
         arr = numpy.arange(start, stop, dtype='float')
         if self.userandom:
-            arr += numpy.random.normal(0, stop*self.scale, size=stop-start)
+            arr += numpy.random.normal(0, stop * self.scale, size=stop - start)
         arr = arr.astype(self.dtype)
         return arr
 
     def print_qtime(self, ltimes):
         ltimes = numpy.array(ltimes)
-        print "Raw query times:\n", ltimes
-        print "Histogram times:\n", numpy.histogram(ltimes[1:])
+        print("Raw query times:\n", ltimes)
+        print("Histogram times:\n", numpy.histogram(ltimes[1:]))
         ntimes = len(ltimes)
-        qtime1 = ltimes[0] # First measured time
+        qtime1 = ltimes[0]  # First measured time
         if ntimes > 5:
             # Skip the first 5 measurements so as to ensure that
             # the index is effectively cached before taking times
-            qtime2 = sum(ltimes[5:])/(ntimes-5)
+            qtime2 = sum(ltimes[5:]) / (ntimes - 5)
         else:
             qtime2 = ltimes[-1]  # Last measured time
-        print "1st query time:", round(qtime1, 3)
-        print "Mean (skipping the first 5 meas.):", round(qtime2, 3)
+        print("1st query time:", round(qtime1, 3))
+        print("Mean (skipping the first 5 meas.):", round(qtime2, 3))
 
     def query_db(self, niter, avoidfscache, verbose):
         self.con = self.open_db()
@@ -132,12 +133,12 @@ class DB(object):
         else:
             rseed = 19
         numpy.random.seed(rseed)
-        base = numpy.random.randint(self.nrows)
+        numpy.random.randint(self.nrows)
         ltimes = []
         for i in range(niter):
-            t1=time()
-            results = self.do_query(earray, numpy.random.randint(self.nrows))
-            ltimes.append(time()-t1)
+            t1 = time()
+            self.do_query(earray, numpy.random.randint(self.nrows))
+            ltimes.append(time() - t1)
         self.print_qtime(ltimes)
         self.close_db()
 
@@ -148,7 +149,7 @@ class DB(object):
         self.con.close()
 
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
 
@@ -215,7 +216,7 @@ if __name__=="__main__":
             if option[1] in ('int', 'float'):
                 dtype = option[1]
             else:
-                print "type should be either 'int' or 'float'"
+                print("type should be either 'int' or 'float'")
                 sys.exit(0)
         elif option[0] == '-s':
             chunksize = option[1]
@@ -226,15 +227,15 @@ if __name__=="__main__":
 
     if verbose:
         if userandom:
-            print "using random values"
+            print("using random values")
 
     db = DB(krows, dtype, chunksize, userandom, datadir, docompress, complib)
 
     if docreate:
         if verbose:
-            print "writing %s rows" % krows
+            print("writing %s rows" % krows)
         db.create_db(verbose)
 
     if doquery:
-        print "Calling query_db() %s times" % niter
+        print("Calling query_db() %s times" % niter)
         db.query_db(niter, avoidfscache, verbose)
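
For reference, the chunked EArray that lookup_bench.py probes boils down to
the following (a minimal sketch; the file name, row count and chunkshape are
illustrative, not the benchmark's defaults):

    import numpy as np
    import tables

    with tables.open_file('lookup-sketch.h5', 'w') as f:
        ea = f.create_earray(f.root, 'earray', tables.Float64Atom(), (0,),
                             filters=tables.Filters(complevel=1,
                                                    complib='zlib'),
                             expectedrows=1000 * 1000,
                             chunkshape=(32 * 1024,))
        ea.append(np.arange(100000, dtype='float64'))
        print(ea[np.random.randint(100000)])  # one random lookup
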
diff --git a/bench/open_close-bench-gzip.h5 b/bench/open_close-bench-gzip.h5
deleted file mode 100644
index 54c7823..0000000
Binary files a/bench/open_close-bench-gzip.h5 and /dev/null differ
diff --git a/bench/open_close-bench.py b/bench/open_close-bench.py
index 0dbb18a..08df89b 100644
--- a/bench/open_close-bench.py
+++ b/bench/open_close-bench.py
@@ -1,6 +1,14 @@
-"""Testbed for open/close PyTables files. This uses the HotShot profiler."""
+"""Testbed for open/close PyTables files.
 
-import sys, os, getopt, pstats
+This uses the HotShot profiler.
+
+"""
+
+from __future__ import print_function
+import os
+import sys
+import getopt
+import pstats
 import cProfile as prof
 import time
 import subprocess  # From Python 2.4 on
@@ -9,6 +17,7 @@ import tables
 filename = None
 niter = 1
 
+
 def show_stats(explain, tref):
     "Show the used memory"
     # Build the command to obtain memory info (only for Linux 2.6.x)
@@ -28,86 +37,95 @@ def show_stats(explain, tref):
         elif line.startswith("VmLib:"):
             vmlib = int(line.split()[1])
     sout.close()
-    print "WallClock time:", time.time() - tref
-    print "Memory usage: ******* %s *******" % explain
-    print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
-    print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
-    print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
+    print("WallClock time:", time.time() - tref)
+    print("Memory usage: ******* %s *******" % explain)
+    print("VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss))
+    print("VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk))
+    print("VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib))
+
 
 def check_open_close():
     for i in range(niter):
-        print "------------------ open_close #%s -------------------------" % i
+        print(
+            "------------------ open_close #%s -------------------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         fileh.close()
         show_stats("After closing file", tref)
 
+
 def check_only_open():
     for i in range(niter):
-        print "------------------ only_open #%s -------------------------" % i
+        print("------------------ only_open #%s -------------------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         show_stats("Before closing file", tref)
         fileh.close()
 
+
 def check_full_browse():
     for i in range(niter):
-        print "------------------ full_browse #%s -----------------------" % i
+        print("------------------ full_browse #%s -----------------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         for node in fileh:
             pass
         fileh.close()
         show_stats("After full browse", tref)
 
+
 def check_partial_browse():
     for i in range(niter):
-        print "------------------ partial_browse #%s --------------------" % i
+        print("------------------ partial_browse #%s --------------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         for node in fileh.root.ngroup0.ngroup1:
             pass
         fileh.close()
         show_stats("After closing file", tref)
 
+
 def check_full_browse_attrs():
     for i in range(niter):
-        print "------------------ full_browse_attrs #%s -----------------" % i
+        print("------------------ full_browse_attrs #%s -----------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         for node in fileh:
             # Access to an attribute
             klass = node._v_attrs.CLASS
         fileh.close()
         show_stats("After full browse", tref)
 
+
 def check_partial_browse_attrs():
     for i in range(niter):
-        print "------------------ partial_browse_attrs #%s --------------" % i
+        print("------------------ partial_browse_attrs #%s --------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         for node in fileh.root.ngroup0.ngroup1:
             # Access to an attribute
             klass = node._v_attrs.CLASS
         fileh.close()
         show_stats("After closing file", tref)
 
+
 def check_open_group():
     for i in range(niter):
-        print "------------------ open_group #%s ------------------------" % i
+        print("------------------ open_group #%s ------------------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         group = fileh.root.ngroup0.ngroup1
         # Access to an attribute
         klass = group._v_attrs.CLASS
         fileh.close()
         show_stats("After closing file", tref)
 
+
 def check_open_leaf():
     for i in range(niter):
-        print "------------------ open_leaf #%s -----------------------" % i
+        print("------------------ open_leaf #%s -----------------------" % i)
         tref = time.time()
-        fileh=tables.open_file(filename)
+        fileh = tables.open_file(filename)
         leaf = fileh.root.ngroup0.ngroup1.array9
         # Access to an attribute
         klass = leaf._v_attrs.CLASS
@@ -168,7 +186,7 @@ if __name__ == '__main__':
         '-a': 'check_partial_browse_attrs',
         '-g': 'check_open_group',
         '-l': 'check_open_leaf',
-        }
+    }
 
     # Get the options
     for option in opts:
@@ -196,13 +214,13 @@ if __name__ == '__main__':
         args.remove('-S')  # We don't want -S in the options list again
         for opt in options:
             opts = "%s \-s %s %s" % (progname, opt, " ".join(args))
-            #print "opts-->", opts
+            # print "opts-->", opts
             os.system("python2.4 %s" % opts)
     else:
         if profile:
             for ifunc in func:
-                prof.run(ifunc+'()', ifunc+'.prof')
-                stats = pstats.Stats(ifunc+'.prof')
+                prof.run(ifunc + '()', ifunc + '.prof')
+                stats = pstats.Stats(ifunc + '.prof')
                 stats.strip_dirs()
                 stats.sort_stats('time', 'calls')
                 if verbose:
@@ -211,8 +229,8 @@ if __name__ == '__main__':
                     stats.print_stats(20)
         else:
             for ifunc in func:
-                eval(ifunc+'()')
+                eval(ifunc + '()')
 
     if not silent:
-        print "------------------ End of run -------------------------"
+        print("------------------ End of run -------------------------")
         show_stats("Final statistics (after closing everything)", tref)
diff --git a/bench/optimal-chunksize.py b/bench/optimal-chunksize.py
index 902548f..e0660be 100644
--- a/bench/optimal-chunksize.py
+++ b/bench/optimal-chunksize.py
@@ -2,37 +2,45 @@
 
 Francesc Alted
 2007-11-25
+
 """
 
-import os, math, subprocess, tempfile
+from __future__ import print_function
+import os
+import math
+import subprocess
+import tempfile
 from time import time
 import numpy
 import tables
 
 # Size of dataset
-#N, M = 512, 2**16     # 256 MB
-#N, M = 512, 2**18     # 1 GB
-#N, M = 512, 2**19     # 2 GB
+# N, M = 512, 2**16     # 256 MB
+# N, M = 512, 2**18     # 1 GB
+# N, M = 512, 2**19     # 2 GB
 N, M = 2000, 1000000  # 15 GB
-#N, M = 4000, 1000000  # 30 GB
+# N, M = 4000, 1000000  # 30 GB
 datom = tables.Float64Atom()   # elements are double precision
 
+
 def quantize(data, least_significant_digit):
-    """quantize data to improve compression.
+    """Quantize data to improve compression.
 
     data is quantized using around(scale*data)/scale, where scale is
     2**bits, and bits is determined from the least_significant_digit.
-    For example, if least_significant_digit=1, bits will be 4."""
+    For example, if least_significant_digit=1, bits will be 4.
+
+    """
 
-    precision = 10.**-least_significant_digit
+    precision = 10. ** -least_significant_digit
     exp = math.log(precision, 10)
     if exp < 0:
         exp = int(math.floor(exp))
     else:
         exp = int(math.ceil(exp))
-    bits = math.ceil(math.log(10.**-exp, 2))
-    scale = 2.**bits
-    return numpy.around(scale*data)/scale
+    bits = math.ceil(math.log(10. ** -exp, 2))
+    scale = 2. ** bits
+    return numpy.around(scale * data) / scale
 
 
 def get_db_size(filename):
@@ -45,22 +53,22 @@ def get_db_size(filename):
 def bench(chunkshape, filters):
     numpy.random.seed(1)   # to have reproducible results
     filename = tempfile.mktemp(suffix='.h5')
-    print "Doing test on the file system represented by:", filename
+    print("Doing test on the file system represented by:", filename)
 
     f = tables.open_file(filename, 'w')
     e = f.create_earray(f.root, 'earray', datom, shape=(0, M),
-                       filters = filters,
-                       chunkshape = chunkshape)
+                        filters=filters,
+                        chunkshape=chunkshape)
     # Fill the array
     t1 = time()
-    for i in xrange(N):
-        #e.append([numpy.random.rand(M)])  # use this for less compressibility
+    for i in range(N):
+        # e.append([numpy.random.rand(M)])  # use this for less compressibility
         e.append([quantize(numpy.random.rand(M), 6)])
-    #os.system("sync")
-    print "Creation time:", round(time()-t1, 3),
+    # os.system("sync")
+    print("Creation time:", round(time() - t1, 3), end=' ')
     filesize = get_db_size(filename)
     filesize_bytes = os.stat(filename)[6]
-    print "\t\tFile size: %d -- (%s)" % (filesize_bytes, filesize)
+    print("\t\tFile size: %d -- (%s)" % (filesize_bytes, filesize))
 
     # Read in sequential mode:
     e = f.root.earray
@@ -69,10 +77,10 @@ def bench(chunkshape, filters):
     #os.system("sync; echo 1 > /proc/sys/vm/drop_caches")
     for row in e:
         t = row
-    print "Sequential read time:", round(time()-t1, 3),
+    print("Sequential read time:", round(time() - t1, 3), end=' ')
 
-    #f.close()
-    #return
+    # f.close()
+    # return
 
     # Read in random mode:
     i_index = numpy.random.randint(0, N, 128)
@@ -81,7 +89,8 @@ def bench(chunkshape, filters):
     #os.system("sync; echo 1 > /proc/sys/vm/drop_caches")
 
     # Protection against too large chunksizes
-    if 0 and filters.complevel and chunkshape[0]*chunkshape[1]*8 > 2**22:  # 4 MB
+    # 4 MB
+    if 0 and filters.complevel and chunkshape[0] * chunkshape[1] * 8 > 2 ** 22:
         f.close()
         return
 
@@ -89,29 +98,29 @@ def bench(chunkshape, filters):
     for i in i_index:
         for j in j_index:
             t = e[i, j]
-    print "\tRandom read time:", round(time()-t1, 3)
+    print("\tRandom read time:", round(time() - t1, 3))
 
     f.close()
 
 # Benchmark with different chunksizes and filters
-#for complevel in (0, 1, 3, 6, 9):
+# for complevel in (0, 1, 3, 6, 9):
 for complib in (None, 'zlib', 'lzo', 'blosc'):
-#for complib in ('blosc',):
+# for complib in ('blosc',):
     if complib:
         filters = tables.Filters(complevel=5, complib=complib)
     else:
         filters = tables.Filters(complevel=0)
-    print "8<--"*20, "\nFilters:", filters, "\n"+"-"*80
-    #for ecs in (11, 14, 17, 20, 21, 22):
+    print("8<--" * 20, "\nFilters:", filters, "\n" + "-" * 80)
+    # for ecs in (11, 14, 17, 20, 21, 22):
     for ecs in range(10, 24):
-    #for ecs in (19,):
-        chunksize = 2**ecs
+    # for ecs in (19,):
+        chunksize = 2 ** ecs
         chunk1 = 1
-        chunk2 = chunksize/datom.itemsize
+        chunk2 = chunksize // datom.itemsize
         if chunk2 > M:
             chunk1 = chunk2 // M
             chunk2 = M
         chunkshape = (chunk1, chunk2)
         cs_str = str(chunksize // 1024) + " KB"
-        print "***** Chunksize:", cs_str, "/ Chunkshape:", chunkshape, "*****"
+        print("***** Chunksize:", cs_str, "/ Chunkshape:", chunkshape, "*****")
         bench(chunkshape, filters)
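
Following the quantize() docstring above: with least_significant_digit=1,
precision is 0.1, exp is -1, bits = ceil(log2(10)) = 4 and scale = 2**4 =
16, so values are rounded to multiples of 1/16 (a maximum error of 1/32,
well within the requested 0.1). A quick check for positive
least_significant_digit:

    import math
    import numpy

    def quantize(data, least_significant_digit):
        # same computation as in bench/optimal-chunksize.py above
        exp = int(math.floor(math.log10(10. ** -least_significant_digit)))
        bits = math.ceil(math.log(10. ** -exp, 2))
        scale = 2. ** bits
        return numpy.around(scale * data) / scale

    x = numpy.array([0.123456, 0.654321])
    print(quantize(x, 1))  # multiples of 1/16: [ 0.125  0.625]
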
diff --git a/bench/plot-bar.py b/bench/plot-bar.py
index ce5d4ca..4c57fcb 100644
--- a/bench/plot-bar.py
+++ b/bench/plot-bar.py
@@ -1,5 +1,7 @@
 #!/usr/bin/env python
 # a stacked bar plot with errorbars
+
+from __future__ import print_function
 from pylab import *
 
 checks = ['open_close', 'only_open',
@@ -11,19 +13,21 @@ width = 0.15       # the width of the bars: can also be len(x) sequence
 colors = ['r', 'm', 'g', 'y', 'b']
 ind = arange(len(checks))    # the x locations for the groups
 
+
 def get_values(filename):
     values = []
     f = open(filename)
     for line in f:
         if show_memory:
             if line.startswith('VmData:'):
-                values.append(float(line.split()[1])/1024.)
+                values.append(float(line.split()[1]) / 1024.)
         else:
             if line.startswith('WallClock time:'):
                 values.append(float(line.split(':')[1]))
     f.close()
     return values
 
+
 def plot_bar(values, n):
     global ind
     if not gtotal:
@@ -32,9 +36,10 @@ def plot_bar(values, n):
         if n == 0:
             checks.pop()
             ind = arange(len(checks))
-    p = bar(ind+width*n, values, width, color=colors[n])
+    p = bar(ind + width * n, values, width, color=colors[n])
     return p
 
+
 def show_plot(bars, filenames, tit):
     if show_memory:
         ylabel('Memory (MB)')
@@ -42,8 +47,8 @@ def show_plot(bars, filenames, tit):
         ylabel('Time (s)')
     title(tit)
     n = len(filenames)
-    xticks(ind+width*n/2., checks, rotation = 45,
-           horizontalalignment = 'right', fontsize=8)
+    xticks(ind + width * n / 2., checks, rotation=45,
+           horizontalalignment='right', fontsize=8)
     if not gtotal:
         #loc = 'center right'
         loc = 'upper left'
@@ -52,7 +57,7 @@ def show_plot(bars, filenames, tit):
 
     legends = [f[:f.index('_')] for f in filenames]
     legends = [l.replace('-', ' ') for l in legends]
-    legend([p[0] for p in bars], legends, loc = loc)
+    legend([p[0] for p in bars], legends, loc=loc)
 
     subplots_adjust(bottom=0.2, top=None, wspace=0.2, hspace=0.2)
     if outfile:
@@ -62,7 +67,8 @@ def show_plot(bars, filenames, tit):
 
 if __name__ == '__main__':
 
-    import sys, getopt
+    import sys
+    import getopt
 
     usage = """usage: %s [-g] [-m] [-o file] [-t title] files
             -g grand total
@@ -107,7 +113,7 @@ if __name__ == '__main__':
     n = 0
     for filename in filenames:
         values = get_values(filename)
-        print "Values-->", values
+        print("Values-->", values)
         bars.append(plot_bar(values, n))
         n += 1
     show_plot(bars, filenames, tit)
diff --git a/bench/poly.py b/bench/poly.py
index dccfa99..08752ad 100644
--- a/bench/poly.py
+++ b/bench/poly.py
@@ -6,17 +6,17 @@
 # Date: 2010-02-24
 #######################################################################
 
+from __future__ import print_function
 import os
-import sys
 from time import time
 import numpy as np
 import tables as tb
 import numexpr as ne
 
 expr = ".25*x**3 + .75*x**2 - 1.5*x - 2"  # the polynomial to compute
-N = 10*1000*1000          # the number of points to compute expression (80 MB)
-step = 100*1000           # perform calculation in slices of `step` elements
-dtype = np.dtype('f8')    # the datatype
+N = 10 * 1000 * 1000    # the number of points to compute expression (80 MB)
+step = 100 * 1000       # perform calculation in slices of `step` elements
+dtype = np.dtype('f8')  # the datatype
 #CHUNKSHAPE = (2**17,)
 CHUNKSHAPE = None
 
@@ -27,30 +27,31 @@ x = None
 
 # Filenames for numpy.memmap
 fprefix = "numpy.memmap"             # the I/O file prefix
-mpfnames = [fprefix+"-x.bin", fprefix+"-r.bin"]
+mpfnames = [fprefix + "-x.bin", fprefix + "-r.bin"]
 
 # Filename for tables.Expr
 h5fname = "tablesExpr.h5"     # the I/O file
 
-MB = 1024*1024.               # a MegaByte
+MB = 1024 * 1024.               # a MegaByte
 
 
 def print_filesize(filename, clib=None, clevel=0):
     """Print some statistics about file sizes."""
 
-    #os.system("sync")    # make sure that all data has been flushed to disk
+    # os.system("sync")    # make sure that all data has been flushed to disk
     if isinstance(filename, list):
         filesize_bytes = 0
         for fname in filename:
             filesize_bytes += os.stat(fname)[6]
     else:
         filesize_bytes = os.stat(filename)[6]
-    filesize_MB  = round(filesize_bytes / MB, 1)
-    print "\t\tTotal file sizes: %d -- (%s MB)" % (filesize_bytes, filesize_MB),
+    filesize_MB = round(filesize_bytes / MB, 1)
+    print("\t\tTotal file sizes: %d -- (%s MB)" % (
+        filesize_bytes, filesize_MB), end=' ')
     if clevel > 0:
-        print "(using %s lvl%s)" % (clib, clevel)
+        print("(using %s lvl%s)" % (clib, clevel))
     else:
-        print
+        print()
 
 
 def populate_x_numpy():
@@ -66,9 +67,10 @@ def populate_x_memmap():
     x = np.memmap(mpfnames[0], dtype=dtype, mode="w+", shape=(N,))
 
     # Populate x in range [-1, 1]
-    for i in xrange(0, N, step):
-        chunk = np.linspace((2*i-N)/float(N), (2*(i+step)-N)/float(N), step)
-        x[i:i+step] = chunk
+    for i in range(0, N, step):
+        chunk = np.linspace((2 * i - N) / float(N),
+                            (2 * (i + step) - N) / float(N), step)
+        x[i:i + step] = chunk
     del x        # close x memmap
 
 
@@ -80,14 +82,15 @@ def populate_x_tables(clib, clevel):
     atom = tb.Atom.from_dtype(dtype)
     filters = tb.Filters(complib=clib, complevel=clevel)
     x = f.create_carray(f.root, "x", atom=atom, shape=(N,),
-                       filters=filters,
-                       chunkshape=CHUNKSHAPE,
-                       )
+                        filters=filters,
+                        chunkshape=CHUNKSHAPE,
+                        )
 
     # Populate x in range [-1, 1]
-    for i in xrange(0, N, step):
-        chunk = np.linspace((2*i-N)/float(N), (2*(i+step)-N)/float(N), step)
-        x[i:i+step] = chunk
+    for i in range(0, N, step):
+        chunk = np.linspace((2 * i - N) / float(N),
+                            (2 * (i + step) - N) / float(N), step)
+        x[i:i + step] = chunk
     f.close()
 
 
@@ -110,7 +113,7 @@ def compute_memmap():
 
     # Do the computation by chunks and store in output
     r[:] = eval(expr)          # where is the result stored?
-    #r = eval(expr)            # result is stored in-memory
+    # r = eval(expr)            # result is stored in-memory
 
     del x, r                   # close x and r memmap arrays
     print_filesize(mpfnames)
@@ -124,9 +127,9 @@ def compute_tables(clib, clevel):
     atom = tb.Atom.from_dtype(dtype)
     filters = tb.Filters(complib=clib, complevel=clevel)
     r = f.create_carray(f.root, "r", atom=atom, shape=(N,),
-                       filters=filters,
-                       chunkshape=CHUNKSHAPE,
-                       )
+                        filters=filters,
+                        chunkshape=CHUNKSHAPE,
+                        )
 
     # Do the actual computation and store in output
     ex = tb.Expr(expr)         # parse the expression
@@ -142,19 +145,20 @@ if __name__ == '__main__':
 
     tb.print_versions()
 
-    print "Total size for datasets:", round(2*N*dtype.itemsize/MB, 1), "MB"
+    print("Total size for datasets:",
+          round(2 * N * dtype.itemsize / MB, 1), "MB")
 
     # Get the compression libraries supported
-    #supported_clibs = [clib for clib in ("zlib", "lzo", "bzip2", "blosc")
-    #supported_clibs = [clib for clib in ("zlib", "lzo", "blosc")
+    # supported_clibs = [clib for clib in ("zlib", "lzo", "bzip2", "blosc")
+    # supported_clibs = [clib for clib in ("zlib", "lzo", "blosc")
     supported_clibs = [clib for clib in ("blosc",)
                        if tb.which_lib_version(clib)]
 
     # Initialization code
-    #for what in ["numpy", "numpy.memmap", "numexpr"]:
+    # for what in ["numpy", "numpy.memmap", "numexpr"]:
     for what in ["numpy", "numexpr"]:
-        #break
-        print "Populating x using %s with %d points..." % (what, N)
+        # break
+        print("Populating x using %s with %d points..." % (what, N))
         t0 = time()
         if what == "numpy":
             populate_x_numpy()
@@ -165,28 +169,28 @@ if __name__ == '__main__':
         elif what == "numpy.memmap":
             populate_x_memmap()
             compute = compute_memmap
-        print "*** Time elapsed populating:", round(time() - t0, 3)
-        print "Computing: '%s' using %s" % (expr, what)
+        print("*** Time elapsed populating:", round(time() - t0, 3))
+        print("Computing: '%s' using %s" % (expr, what))
         t0 = time()
         compute()
-        print "**************** Time elapsed computing:", round(time() - t0, 3)
+        print("**************** Time elapsed computing:",
+              round(time() - t0, 3))
 
     for what in ["tables.Expr"]:
         t0 = time()
         first = True    # Sentinel
         for clib in supported_clibs:
-            #for clevel in (0, 1, 3, 6, 9):
+            # for clevel in (0, 1, 3, 6, 9):
             for clevel in range(10):
-            #for clevel in (1,):
+            # for clevel in (1,):
                 if not first and clevel == 0:
                     continue
-                print "Populating x using %s with %d points..." % (what, N)
+                print("Populating x using %s with %d points..." % (what, N))
                 populate_x_tables(clib, clevel)
-                print "*** Time elapsed populating:", round(time() - t0, 3)
-                print "Computing: '%s' using %s" % (expr, what)
+                print("*** Time elapsed populating:", round(time() - t0, 3))
+                print("Computing: '%s' using %s" % (expr, what))
                 t0 = time()
                 compute_tables(clib, clevel)
-                print "**************** Time elapsed computing:", \
-                      round(time() - t0, 3)
+                print("**************** Time elapsed computing:",
+                      round(time() - t0, 3))
                 first = False
-
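
The tables.Expr branch above evaluates the polynomial chunk by chunk,
reading x from and writing r to (optionally compressed) CArrays on disk, so
the dataset never has to fit in memory at once. A minimal sketch of that
API with a small, illustrative N and file name:

    import numpy as np
    import tables as tb

    N = 1000
    with tb.open_file('expr-sketch.h5', 'w') as f:
        x = f.create_carray(f.root, 'x', atom=tb.Float64Atom(), shape=(N,))
        x[:] = np.linspace(-1, 1, N)
        r = f.create_carray(f.root, 'r', atom=tb.Float64Atom(), shape=(N,))
        ex = tb.Expr(".25*x**3 + .75*x**2 - 1.5*x - 2")  # finds `x` in scope
        ex.set_output(r)
        ex.eval()  # chunked, out-of-core evaluation
        print(r[:5])
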
diff --git a/bench/postgres-search-bench.py b/bench/postgres-search-bench.py
index 6973550..d2c9f4f 100644
--- a/bench/postgres-search-bench.py
+++ b/bench/postgres-search-bench.py
@@ -1,4 +1,4 @@
-import os, os.path
+from __future__ import print_function
 from time import time
 import numpy
 import random
@@ -8,25 +8,29 @@ DSN = "dbname=test port = 5435"
 # in order to always generate the same random sequence
 random.seed(19)
 
+
 def flatten(l):
     """Flattens list of tuples l."""
     return [x[0] for x in l]
 
+
 def fill_arrays(start, stop):
     col_i = numpy.arange(start, stop, dtype=numpy.int32)
     if userandom:
-        col_j = numpy.random.uniform(0, nrows, size=[stop-start])
+        col_j = numpy.random.uniform(0, nrows, size=[stop - start])
     else:
         col_j = numpy.array(col_i, dtype=numpy.float64)
     return col_i, col_j
 
 # Generator to ensure pytables benchmark compatibility
+
+
 def int_generator(nrows):
-    step = 1000*100
+    step = 1000 * 100
     j = 0
-    for i in xrange(nrows):
-        if i >= step*j:
-            stop = (j+1)*step
+    for i in range(nrows):
+        if i >= step * j:
+            stop = (j + 1) * step
             if stop > nrows:  # Seems unnecessary
                 stop = nrows
             col_i, col_j = fill_arrays(i, stop)
@@ -35,14 +39,17 @@ def int_generator(nrows):
         yield (col_i[k], col_j[k])
         k += 1
 
+
 def int_generator_slow(nrows):
-    for i in xrange(nrows):
+    for i in range(nrows):
         if userandom:
             yield (i, float(random.randint(0, nrows)))
         else:
             yield (i, float(i))
 
+
 class Stream32(object):
+
     "Object simulating a file for reading"
 
     def __init__(self):
@@ -54,8 +61,8 @@ class Stream32(object):
         for tup in int_generator(nrows):
             sout = "%s\t%s\n" % tup
             if n is not None and len(sout) > n:
-                for i in xrange(0, len(sout), n):
-                    yield sout[i:i+n]
+                for i in range(0, len(sout), n):
+                    yield sout[i:i + n]
             else:
                 yield sout
 
@@ -65,7 +72,7 @@ class Stream32(object):
         for tup in int_generator(nrows):
             sout += "%s\t%s\n" % tup
             if n is not None and len(sout) > n:
-                for i in xrange(n, len(sout), n):
+                for i in range(n, len(sout), n):
                     rout = sout[:n]
                     sout = sout[n:]
                     yield rout
@@ -74,11 +81,12 @@ class Stream32(object):
     def read(self, n=None):
         self.n = n
         try:
-            str = self.read_it.next()
+            str = next(self.read_it)
         except StopIteration:
             str = ""
         return str
 
+
 def open_db(filename, remove=0):
     if not filename:
         con = sqlite.connect(DSN)
@@ -87,6 +95,7 @@ def open_db(filename, remove=0):
     cur = con.cursor()
     return con, cur
 
+
 def create_db(filename, nrows):
     con, cur = open_db(filename, remove=1)
     try:
@@ -97,7 +106,7 @@ def create_db(filename, nrows):
         cur.execute("create table ints(i integer, j double precision)")
     con.commit()
     con.set_isolation_level(2)
-    t1=time()
+    t1 = time()
     st = Stream32()
     cur.copy_from(st, "ints")
     # In case of postgres, the speeds of generator and loop are similar
@@ -105,49 +114,53 @@ def create_db(filename, nrows):
 #     for i in xrange(nrows):
 #         cur.execute("insert into ints values (%s,%s)", (i, float(i)))
     con.commit()
-    ctime = time()-t1
+    ctime = time() - t1
     if verbose:
-        print "insert time:", round(ctime, 5)
-        print "Krows/s:", round((nrows/1000.)/ctime, 5)
+        print("insert time:", round(ctime, 5))
+        print("Krows/s:", round((nrows / 1000.) / ctime, 5))
     close_db(con, cur)
 
+
 def index_db(filename):
     con, cur = open_db(filename)
-    t1=time()
+    t1 = time()
     cur.execute("create index ij on ints(j)")
     con.commit()
-    itime = time()-t1
+    itime = time() - t1
     if verbose:
-        print "index time:", round(itime, 5)
-        print "Krows/s:", round(nrows/itime, 5)
+        print("index time:", round(itime, 5))
+        print("Krows/s:", round(nrows / itime, 5))
     # Close the DB
     close_db(con, cur)
 
+
 def query_db(filename, rng):
     con, cur = open_db(filename)
-    t1=time()
+    t1 = time()
     ntimes = 10
     for i in range(ntimes):
         # between clause does not seem to take advantage of indexes
-        #cur.execute("select j from ints where j between %s and %s" % \
-        cur.execute("select i from ints where j >= %s and j <= %s" % \
-        #cur.execute("select i from ints where i >= %s and i <= %s" % \
-                    (rng[0]+i, rng[1]+i))
+        # cur.execute("select j from ints where j between %s and %s" % \
+        cur.execute("select i from ints where j >= %s and j <= %s" %
+                    # cur.execute("select i from ints where i >= %s and i <=
+                    # %s" %
+                    (rng[0] + i, rng[1] + i))
         results = cur.fetchall()
     con.commit()
-    qtime = (time()-t1)/ntimes
+    qtime = (time() - t1) / ntimes
     if verbose:
-        print "query time:", round(qtime, 5)
-        print "Mrows/s:", round((nrows/1000.)/qtime, 5)
+        print("query time:", round(qtime, 5))
+        print("Mrows/s:", round((nrows / 1000.) / qtime, 5))
         results = sorted(flatten(results))
-        print results
+        print(results)
     close_db(con, cur)
 
+
 def close_db(con, cur):
     cur.close()
     con.close()
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
     try:
@@ -216,13 +229,13 @@ if __name__=="__main__":
     import psycopg2 as sqlite
 
     if verbose:
-        #print "pysqlite version:", sqlite.version
+        # print "pysqlite version:", sqlite.version
         if userandom:
-            print "using random values"
+            print("using random values")
 
     if docreate:
         if verbose:
-            print "writing %s krows" % nrows
+            print("writing %s krows" % nrows)
         if psyco_imported and usepsyco:
             psyco.bind(create_db)
         nrows *= 1000
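
A note on the pattern being ported here: psycopg2's copy_from() drives PostgreSQL's COPY protocol from any file-like object exposing read() and readline(), which is what Stream32 simulates so that rows can be generated on the fly. A minimal self-contained sketch of the same idea, assuming psycopg2 and a reachable "test" database (table and column names are illustrative):

    import io
    import psycopg2

    con = psycopg2.connect("dbname=test")   # assumed DSN; adjust as needed
    cur = con.cursor()
    cur.execute("create table if not exists ints (i integer, j double precision)")

    # copy_from() pulls tab-separated rows from a file-like object; a
    # StringIO behaves like the Stream32 class above, minus the laziness.
    rows = "".join("%s\t%s\n" % (i, float(i)) for i in range(1000))
    cur.copy_from(io.StringIO(rows), "ints")
    con.commit()

    cur.close()
    con.close()
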
diff --git a/bench/postgres_backend.py b/bench/postgres_backend.py
index ff4ce62..cae5f7b 100644
--- a/bench/postgres_backend.py
+++ b/bench/postgres_backend.py
@@ -1,5 +1,4 @@
-import os, os.path
-from time import sleep
+from __future__ import print_function
 import subprocess  # Needs Python 2.4
 from indexed_search import DB
 import psycopg2 as db2
@@ -13,6 +12,7 @@ DROP_DB = "dropdb %s"
 TABLE_NAME = "intsfloats"
 PORT = 5432
 
+
 class StreamChar(object):
     "Object simulating a file for reading"
 
@@ -24,9 +24,9 @@ class StreamChar(object):
 
     def values_generator(self):
         j = 0
-        for i in xrange(self.nrows):
-            if i >= j*self.step:
-                stop = (j+1)*self.step
+        for i in range(self.nrows):
+            if i >= j * self.step:
+                stop = (j + 1) * self.step
                 if stop > self.nrows:
                     stop = self.nrows
                 arr_i4, arr_f8 = self.db.fill_arrays(i, stop)
@@ -41,7 +41,7 @@ class StreamChar(object):
         for tup in self.values_generator():
             sout += "%s\t%s\t%s\t%s\n" % tup
             if n is not None and len(sout) > n:
-                for i in xrange(n, len(sout), n):
+                for i in range(n, len(sout), n):
                     rout = sout[:n]
                     sout = sout[n:]
                     yield rout
@@ -50,7 +50,7 @@ class StreamChar(object):
     def read(self, n=None):
         self.nbytes = n
         try:
-            str = self.read_it.next()
+            str = next(self.read_it)
         except StopIteration:
             str = ""
         return str
@@ -69,7 +69,7 @@ class Postgres_DB(DB):
     def flatten(self, l):
         """Flattens list of tuples l."""
         return [x[0] for x in l]
-        #return map(lambda x: x[col], l)
+        # return map(lambda x: x[col], l)
 
     # Overloads the method in DB class
     def get_db_size(self):
@@ -83,12 +83,14 @@ class Postgres_DB(DB):
         if remove:
             sout = subprocess.Popen(DROP_DB % self.filename, shell=True,
                                     stdout=subprocess.PIPE).stdout
-            for line in sout: print line
+            for line in sout:
+                print(line)
             sout = subprocess.Popen(CREATE_DB % self.filename, shell=True,
                                     stdout=subprocess.PIPE).stdout
-            for line in sout: print line
+            for line in sout:
+                print(line)
 
-        print "Processing database:", self.filename
+        print("Processing database:", self.filename)
         con = db2.connect(DSN % (self.filename, self.port))
         self.cur = con.cursor()
         return con
@@ -107,16 +109,16 @@ class Postgres_DB(DB):
         con.commit()
 
     def index_col(self, con, colname, optlevel, idxtype, verbose):
-        self.cur.execute("create index %s on %s(%s)" % \
-                         (colname+'_idx', TABLE_NAME, colname))
+        self.cur.execute("create index %s on %s(%s)" %
+                         (colname + '_idx', TABLE_NAME, colname))
         con.commit()
 
     def do_query_simple(self, con, column, base):
         self.cur.execute(
-            "select sum(%s) from %s where %s >= %s and %s <= %s" % \
+            "select sum(%s) from %s where %s >= %s and %s <= %s" %
             (column, TABLE_NAME,
-             column, base+self.rng[0],
-             column, base+self.rng[1]))
+             column, base + self.rng[0],
+             column, base + self.rng[1]))
 #             "select * from %s where %s >= %s and %s <= %s" % \
 #             (TABLE_NAME,
 #              column, base+self.rng[0],
@@ -127,27 +129,28 @@ class Postgres_DB(DB):
 
     def do_query(self, con, column, base, *unused):
         d = (self.rng[1] - self.rng[0]) / 2.
-        inf1 = int(self.rng[0]+base);  sup1 = int(self.rng[0]+d+base)
-        inf2 = self.rng[0]+base*2;  sup2 = self.rng[0]+d+base*2
-        #print "lims-->", inf1, inf2, sup1, sup2
+        inf1 = int(self.rng[0] + base)
+        sup1 = int(self.rng[0] + d + base)
+        inf2 = self.rng[0] + base * 2
+        sup2 = self.rng[0] + d + base * 2
+        # print "lims-->", inf1, inf2, sup1, sup2
         condition = "((%s>=%s) and (%s<%s)) or ((col2>%s) and (col2<%s))"
         #condition = "((col3>=%s) and (col3<%s)) or ((col1>%s) and (col1<%s))"
         condition += " and ((col1+3.1*col2+col3*col4) > 3)"
         #condition += " and (sqrt(col1^2+col2^2+col3^2+col4^2) > .1)"
         condition = condition % (column, inf2, column, sup2, inf1, sup1)
-        #print "condition-->", condition
+        # print "condition-->", condition
         self.cur.execute(
-#            "select sum(%s) from %s where %s" % \
-            "select %s from %s where %s" % \
+            # "select sum(%s) from %s where %s" %
+            "select %s from %s where %s" %
             (column, TABLE_NAME, condition))
         #results = self.flatten(self.cur.fetchall())
         results = self.cur.fetchall()
         #results = self.cur.fetchall()
-        #print "results-->", results
-        #return results
+        # print "results-->", results
+        # return results
         return len(results)
 
-
     def close_db(self, con):
         self.cur.close()
         con.close()
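
The open_db() hunk above also shows the idiom for streaming a helper command's output line by line. Reduced to a hedged standalone sketch (an echo stands in for the createdb/dropdb utilities, which may not be installed):

    import subprocess

    # Any shell command whose output should be echoed as it arrives works
    # the same way as the CREATE_DB/DROP_DB calls above.
    proc = subprocess.Popen("echo database created", shell=True,
                            stdout=subprocess.PIPE)
    for line in proc.stdout:
        print(line)   # note: bytes under Python 3; use line.decode() for text
    proc.wait()
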
diff --git a/bench/pytables-search-bench.py b/bench/pytables-search-bench.py
index 991329f..726d30b 100644
--- a/bench/pytables-search-bench.py
+++ b/bench/pytables-search-bench.py
@@ -1,14 +1,14 @@
-import tables
-import os, os.path
+from __future__ import print_function
+import os
 from time import time
 import random
-import numarray
-from numarray import random_array
-from numarray import records
+import numpy as np
+import tables
 
 # in order to always generate the same random sequence
 random.seed(19)
-random_array.seed(19, 20)
+np.random.seed((19, 20))
+
 
 def open_db(filename, remove=0):
     if remove and os.path.exists(filename):
@@ -16,6 +16,7 @@ def open_db(filename, remove=0):
     con = tables.open_file(filename, 'a')
     return con
 
+
 def create_db(filename, nrows):
 
     class Record(tables.IsDescription):
@@ -26,45 +27,47 @@ def create_db(filename, nrows):
 
     con = open_db(filename, remove=1)
     table = con.create_table(con.root, 'table', Record,
-                            filters=filters, expectedrows=nrows)
+                             filters=filters, expectedrows=nrows)
     table.indexFilters = filters
-    step = 1000*100
+    step = 1000 * 100
     scale = 0.1
-    t1=time()
+    t1 = time()
     j = 0
-    for i in xrange(0, nrows, step):
-        stop = (j+1)*step
+    for i in range(0, nrows, step):
+        stop = (j + 1) * step
         if stop > nrows:
             stop = nrows
-        arr_f8 = numarray.arange(i, stop, type=numarray.Float64)
-        arr_i4 = numarray.arange(i, stop, type=numarray.Int32)
+        arr_f8 = np.arange(i, stop, dtype=np.float64)
+        arr_i4 = np.arange(i, stop, dtype=np.int32)
         if userandom:
-            arr_f8 += random_array.normal(0, stop*scale, shape=[stop-i])
-            arr_i4 = numarray.array(arr_f8, type=numarray.Int32)
-        recarr = records.fromarrays([arr_i4, arr_i4, arr_f8, arr_f8])
+            arr_f8 += np.random.normal(0, stop * scale, size=stop - i)
+            arr_i4 = np.array(arr_f8, dtype=np.int32)
+        recarr = np.rec.fromarrays([arr_i4, arr_i4, arr_f8, arr_f8])
         table.append(recarr)
         j += 1
     table.flush()
-    ctime = time()-t1
+    ctime = time() - t1
     if verbose:
-        print "insert time:", round(ctime, 5)
-        print "Krows/s:", round((nrows/1000.)/ctime, 5)
+        print("insert time:", round(ctime, 5))
+        print("Krows/s:", round((nrows / 1000.) / ctime, 5))
     index_db(table)
     close_db(con)
 
+
 def index_db(table):
-    t1=time()
+    t1 = time()
     table.cols.col2.create_index()
-    itime = time()-t1
+    itime = time() - t1
     if verbose:
-        print "index time (int):", round(itime, 5)
-        print "Krows/s:", round((nrows/1000.)/itime, 5)
-    t1=time()
+        print("index time (int):", round(itime, 5))
+        print("Krows/s:", round((nrows / 1000.) / itime, 5))
+    t1 = time()
     table.cols.col4.create_index()
-    itime = time()-t1
+    itime = time() - t1
     if verbose:
-        print "index time (float):", round(itime, 5)
-        print "Krows/s:", round((nrows/1000.)/itime, 5)
+        print("index time (float):", round(itime, 5))
+        print("Krows/s:", round((nrows / 1000.) / itime, 5))
+
 
 def query_db(filename, rng):
     con = open_db(filename)
@@ -72,57 +75,64 @@ def query_db(filename, rng):
     # Query for integer columns
     # Query for non-indexed column
     if not doqueryidx:
-        t1=time()
+        t1 = time()
         ntimes = 10
         for i in range(ntimes):
-            results = [ r['col1'] for r in
-                        table.where(rng[0]+i <= table.cols.col1 <= rng[1]+i) ]
-        qtime = (time()-t1)/ntimes
+            results = [
+                r['col1'] for r in table.where(
+                    '(%s <= col1) & (col1 <= %s)' % (rng[0] + i, rng[1] + i))
+            ]
+        qtime = (time() - t1) / ntimes
         if verbose:
-            print "query time (int, not indexed):", round(qtime, 5)
-            print "Mrows/s:", round((nrows/1000.)/qtime, 5)
-            print results
+            print("query time (int, not indexed):", round(qtime, 5))
+            print("Mrows/s:", round((nrows / 1000.) / qtime, 5))
+            print(results)
     # Query for indexed column
-    t1=time()
+    t1 = time()
     ntimes = 10
     for i in range(ntimes):
-        results = [ r['col1'] for r in
-                    table.where(rng[0]+i <= table.cols.col2 <= rng[1]+i) ]
-    qtime = (time()-t1)/ntimes
+        results = [
+            r['col1'] for r in table.where(
+                '(%s <= col2) & (col2 <= %s)' % (rng[0] + i, rng[1] + i))
+        ]
+    qtime = (time() - t1) / ntimes
     if verbose:
-        print "query time (int, indexed):", round(qtime, 5)
-        print "Mrows/s:", round((nrows/1000.)/qtime, 5)
-        print results
+        print("query time (int, indexed):", round(qtime, 5))
+        print("Mrows/s:", round((nrows / 1000.) / qtime, 5))
+        print(results)
     # Query for floating columns
     # Query for non-indexed column
     if not doqueryidx:
-        t1=time()
+        t1 = time()
         ntimes = 10
         for i in range(ntimes):
-            results = [ r['col3'] for r in
-                        table.where(rng[0]+i <= table.cols.col3 <= rng[1]+i) ]
-        qtime = (time()-t1)/ntimes
+            results = [
+                r['col3'] for r in table.where(
+                    '(%s <= col3) & (col3 <= %s)' % (rng[0] + i, rng[1] + i))
+            ]
+        qtime = (time() - t1) / ntimes
         if verbose:
-            print "query time (float, not indexed):", round(qtime, 5)
-            print "Mrows/s:", round((nrows/1000.)/qtime, 5)
-            print results
+            print("query time (float, not indexed):", round(qtime, 5))
+            print("Mrows/s:", round((nrows / 1000.) / qtime, 5))
+            print(results)
     # Query for indexed column
-    t1=time()
+    t1 = time()
     ntimes = 10
     for i in range(ntimes):
-        results = [ r['col3'] for r in
-                    table.where(rng[0]+i <= table.cols.col4 <= rng[1]+i) ]
-    qtime = (time()-t1)/ntimes
+        results = [r['col3'] for r in
+                   table.where('(%s <= col4) & (col4 <= %s)'
+                               % (rng[0] + i, rng[1] + i))]
+    qtime = (time() - t1) / ntimes
     if verbose:
-        print "query time (float, indexed):", round(qtime, 5)
-        print "Mrows/s:", round((nrows/1000.)/qtime, 5)
-        print results
+        print("query time (float, indexed):", round(qtime, 5))
+        print("Mrows/s:", round((nrows / 1000.) / qtime, 5))
+        print(results)
     close_db(con)
 
+
 def close_db(con):
     con.close()
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
     try:
@@ -193,15 +203,15 @@ if __name__=="__main__":
     filters = tables.Filters(complevel=docompress, complib=complib)
 
     if verbose:
-        print "pytables version:", tables.__version__
+        print("pytables version:", tables.__version__)
         if userandom:
-            print "using random values"
+            print("using random values")
         if doqueryidx:
-            print "doing indexed queries only"
+            print("doing indexed queries only")
 
     if docreate:
         if verbose:
-            print "writing %s krows" % nrows
+            print("writing %s krows" % nrows)
         if psyco_imported and usepsyco:
             psyco.bind(create_db)
         nrows *= 1000
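
Note that numarray's type= and shape= keywords have no numpy counterparts; the corrected spellings used above are dtype= and size=. A minimal sketch of the corrected fill-and-append pattern, assuming PyTables is installed (file and column names are illustrative):

    import numpy as np
    import tables

    class Record(tables.IsDescription):
        col1 = tables.Int32Col()
        col3 = tables.Float64Col()

    with tables.open_file("demo.h5", "w") as f:
        t = f.create_table(f.root, "table", Record)
        arr_i4 = np.arange(1000, dtype=np.int32)     # numarray Int32 -> np.int32
        arr_f8 = np.arange(1000, dtype=np.float64)   # numarray Float64 -> np.float64
        t.append(np.rec.fromarrays([arr_i4, arr_f8], names="col1,col3"))
        t.flush()
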
diff --git a/bench/pytables_backend.py b/bench/pytables_backend.py
index dbd1605..e03ef3b 100644
--- a/bench/pytables_backend.py
+++ b/bench/pytables_backend.py
@@ -1,7 +1,8 @@
-import os, os.path
+from __future__ import print_function
+import os
 import tables
 from indexed_search import DB
-from time import time
+
 
 class PyTables_DB(DB):
 
@@ -22,8 +23,8 @@ class PyTables_DB(DB):
         # The chosen filters
         self.filters = tables.Filters(complevel=self.docompress,
                                       complib=self.complib,
-                                      shuffle = 1)
-        print "Processing database:", self.filename
+                                      shuffle=1)
+        print("Processing database:", self.filename)
 
     def open_db(self, remove=0):
         if remove and os.path.exists(self.filename):
@@ -44,15 +45,15 @@ class PyTables_DB(DB):
             col3 = tables.Float64Col()
             col4 = tables.Float64Col()
 
-        table = con.create_table(con.root, 'table', Record,
-                                filters=self.filters, expectedrows=self.nrows)
+        con.create_table(con.root, 'table', Record,
+                         filters=self.filters, expectedrows=self.nrows)
 
     def fill_table(self, con):
         "Fills the table"
         table = con.root.table
         j = 0
-        for i in xrange(0, self.nrows, self.step):
-            stop = (j+1)*self.step
+        for i in range(0, self.nrows, self.step):
+            stop = (j + 1) * self.step
             if stop > self.nrows:
                 stop = self.nrows
             arr_i4, arr_f8 = self.fill_arrays(i, stop)
@@ -65,8 +66,8 @@ class PyTables_DB(DB):
     def index_col(self, con, column, kind, optlevel, verbose):
         col = getattr(con.root.table.cols, column)
         col.create_index(kind=kind, optlevel=optlevel, filters=self.filters,
-                        tmp_dir="/scratch2/faltet",
-                        _verbose=verbose, _blocksizes=None)
+                         tmp_dir="/scratch2/faltet",
+                         _verbose=verbose, _blocksizes=None)
 #                       _blocksizes=(2**27, 2**22, 2**15, 2**7))
 #                       _blocksizes=(2**27, 2**22, 2**14, 2**6))
 #                       _blocksizes=(2**27, 2**20, 2**13, 2**5),
@@ -95,14 +96,14 @@ class PyTables_DB(DB):
                              "col3": table.cols.col3,
                              "col4": table.cols.col4,
                              }
-        self.condvars['inf'] = self.rng[0]+base
-        self.condvars['sup'] = self.rng[1]+base
+        self.condvars['inf'] = self.rng[0] + base
+        self.condvars['sup'] = self.rng[1] + base
         # For queries that can use two indexes instead of just one
         d = (self.rng[1] - self.rng[0]) / 2.
-        inf1 = int(self.rng[0]+base)
-        sup1 = int(self.rng[0]+d+base)
-        inf2 = self.rng[0]+base*2
-        sup2 = self.rng[0]+d+base*2
+        inf1 = int(self.rng[0] + base)
+        sup1 = int(self.rng[0] + d + base)
+        inf2 = self.rng[0] + base * 2
+        sup2 = self.rng[0] + d + base * 2
         self.condvars['inf1'] = inf1
         self.condvars['sup1'] = sup1
         self.condvars['inf2'] = inf2
@@ -123,7 +124,7 @@ class PyTables_DB(DB):
         #condition = "((inf<=col4) & (col4<sup)) | ((inf<col2) & (col2<sup))"
         #condition = "((inf<=col4) & (col4<sup)) & ((inf<col2) & (col2<sup))"
         #condition = "((inf2<=col3) & (col3<sup2)) | ((inf1<col1) & (col1<sup1))"
-        #print "lims-->", inf1, inf2, sup1, sup2
+        # print "lims-->", inf1, inf2, sup1, sup2
         condition = "((inf2<=col) & (col<sup2)) | ((inf1<col2) & (col2<sup1))"
         condition += " & ((col1+3.1*col2+col3*col4) > 3)"
         #condition += " & (col2*(col3+3.1)+col3*col4 > col1)"
@@ -137,18 +138,20 @@ class PyTables_DB(DB):
         # condition = "(col==%s)" % (self.rng[0]+base)
         # condvars = {"col": colobj}
         #c = self.condvars
-        #print "condvars-->", c['inf'], c['sup'], c['inf2'], c['sup2']
+        # print "condvars-->", c['inf'], c['sup'], c['inf2'], c['sup2']
         ncoords = 0
         if colobj.is_indexed:
-            results = [r[column] for r in table.where(condition, self.condvars)]
+            results = [r[column]
+                       for r in table.where(condition, self.condvars)]
 #             coords = table.get_where_list(condition, self.condvars)
 #             results = table.read_coordinates(coords, field=column)
 
 #            results = table.read_where(condition, self.condvars, field=column)
 
         elif inkernel:
-            print "Performing in-kernel query"
-            results = [r[column] for r in table.where(condition, self.condvars)]
+            print("Performing in-kernel query")
+            results = [r[column]
+                       for r in table.where(condition, self.condvars)]
             #coords = [r.nrow for r in table.where(condition, self.condvars)]
             #results = table.read_coordinates(coords)
 #             for r in table.where(condition, self.condvars):
@@ -158,17 +161,19 @@ class PyTables_DB(DB):
 #             coords = [r.nrow for r in table
 #                       if (self.rng[0]+base <= r[column] <= self.rng[1]+base)]
 #             results = table.read_coordinates(coords)
-            print "Performing regular query"
-            results = [ r[column] for r in table if
-                        (((inf2<=r['col4']) and (r['col4']<sup2)) or
-                         ((inf1<r['col2']) and (r['col2']<sup1)) and
-                         ((r['col1']+3.1*r['col2']+r['col3']*r['col4']) > 3))]
+            print("Performing regular query")
+            results = [
+                r[column] for r in table if ((
+                    (inf2 <= r['col4']) and (r['col4'] < sup2)) or
+                    ((inf1 < r['col2']) and (r['col2'] < sup1)) and
+                    ((r['col1'] + 3.1 * r['col2'] + r['col3'] * r['col4']) > 3)
+                )]
 
         ncoords = len(results)
 
-        #return coords
-        #print "results-->", results
-        #return results
+        # return coords
+        # print "results-->", results
+        # return results
         return ncoords
         #self.tprof.append( self.colobj.index.tprof )
-        #return ncoords, self.tprof
+        # return ncoords, self.tprof
diff --git a/bench/recarray2-test.py b/bench/recarray2-test.py
index 793555e..e4affe2 100644
--- a/bench/recarray2-test.py
+++ b/bench/recarray2-test.py
@@ -1,102 +1,106 @@
-import sys, time, os
-import numarray as num
+from __future__ import print_function
+import os
+import sys
+import time
+import numpy as np
 import chararray
 import recarray
 import recarray2  # This is my modified version
 
-usage = \
-"""usage: %s recordlength
+usage = """usage: %s recordlength
      Set recordlength to 1000 at least to obtain decent figures!
 """ % sys.argv[0]
 
 try:
     reclen = int(sys.argv[1])
 except:
-    print usage
+    print(usage)
     sys.exit()
 
 delta = 0.000001
 
 # Creation of recarrays objects for test
-x1=num.array(num.arange(reclen))
-x2=chararray.array(None, itemsize=7, shape=reclen)
-x3=num.array(num.arange(reclen, reclen*3, 2), num.Float64)
-r1=recarray.fromarrays([x1, x2, x3], names='a,b,c')
-r2=recarray2.fromarrays([x1, x2, x3], names='a,b,c')
+x1 = np.array(np.arange(reclen))
+x2 = chararray.array(None, itemsize=7, shape=reclen)
+x3 = np.array(np.arange(reclen, reclen * 3, 2), np.float64)
+r1 = recarray.fromarrays([x1, x2, x3], names='a,b,c')
+r2 = recarray2.fromarrays([x1, x2, x3], names='a,b,c')
 
-print "recarray shape in test ==>", r2.shape
+print("recarray shape in test ==>", r2.shape)
 
-print "Assignment in recarray original"
-print "-------------------------------"
+print("Assignment in recarray original")
+print("-------------------------------")
 t1 = time.clock()
-for row in xrange(reclen):
+for row in range(reclen):
     #r1.field("b")[row] = "changed"
-    r1.field("c")[row] = float(row**2)
+    r1.field("c")[row] = float(row ** 2)
 t2 = time.clock()
-origtime = round(t2-t1, 3)
-print "Assign time:", origtime, " Rows/s:", int(reclen/(origtime+delta))
-#print "Field b on row 2 after re-assign:", r1.field("c")[2]
-print
+origtime = round(t2 - t1, 3)
+print("Assign time:", origtime, " Rows/s:", int(reclen / (origtime + delta)))
+# print "Field b on row 2 after re-assign:", r1.field("c")[2]
+print()
 
-print "Assignment in recarray modified"
-print "-------------------------------"
+print("Assignment in recarray modified")
+print("-------------------------------")
 t1 = time.clock()
-for row in xrange(reclen):
+for row in range(reclen):
     rec = r2._row(row)  # select the row to be changed
-    #rec.b = "changed"      # change the "b" field
-    rec.c = float(row**2)  # Change the "c" field
+    # rec.b = "changed"      # change the "b" field
+    rec.c = float(row ** 2)  # Change the "c" field
 t2 = time.clock()
-ttime = round(t2-t1, 3)
-print "Assign time:", ttime, " Rows/s:", int(reclen/(ttime+delta)),
-print " Speed-up:", round(origtime/ttime, 3)
-#print "Field b on row 2 after re-assign:", r2.field("c")[2]
-print
+ttime = round(t2 - t1, 3)
+print("Assign time:", ttime, " Rows/s:", int(reclen / (ttime + delta)),
+      end=' ')
+print(" Speed-up:", round(origtime / ttime, 3))
+# print "Field b on row 2 after re-assign:", r2.field("c")[2]
+print()
 
-print "Selection in recarray original"
-print "------------------------------"
+print("Selection in recarray original")
+print("------------------------------")
 t1 = time.clock()
-for row in xrange(reclen):
+for row in range(reclen):
     rec = r1[row]
     if rec.field("a") < 3:
-        print "This record pass the cut ==>", rec.field("c"), "(row", row, ")"
+        print("This record pass the cut ==>", rec.field("c"), "(row", row, ")")
 t2 = time.clock()
-origtime = round(t2-t1, 3)
-print "Select time:", origtime, " Rows/s:", int(reclen/(origtime+delta))
-print
+origtime = round(t2 - t1, 3)
+print("Select time:", origtime, " Rows/s:", int(reclen / (origtime + delta)))
+print()
 
-print "Selection in recarray modified"
-print "------------------------------"
+print("Selection in recarray modified")
+print("------------------------------")
 t1 = time.clock()
-for row in xrange(reclen):
+for row in range(reclen):
     rec = r2._row(row)
     if rec.a < 3:
-        print "This record pass the cut ==>", rec.c, "(row", row, ")"
+        print("This record pass the cut ==>", rec.c, "(row", row, ")")
 t2 = time.clock()
-ttime = round(t2-t1, 3)
-print "Select time:", ttime, " Rows/s:", int(reclen/(ttime+delta)),
-print " Speed-up:", round(origtime/ttime, 3)
-print
+ttime = round(t2 - t1, 3)
+print("Select time:", ttime, " Rows/s:", int(reclen / (ttime + delta)),
+      end=' ')
+print(" Speed-up:", round(origtime / ttime, 3))
+print()
 
-print "Printing in recarray original"
-print "------------------------------"
+print("Printing in recarray original")
+print("------------------------------")
 f = open("test.out", "w")
 t1 = time.clock()
 f.write(str(r1))
 t2 = time.clock()
-origtime = round(t2-t1, 3)
+origtime = round(t2 - t1, 3)
 f.close()
 os.unlink("test.out")
-print "Print time:", origtime, " Rows/s:", int(reclen/(origtime+delta))
-print
-print "Printing in recarray modified"
-print "------------------------------"
+print("Print time:", origtime, " Rows/s:", int(reclen / (origtime + delta)))
+print()
+print("Printing in recarray modified")
+print("------------------------------")
 f = open("test2.out", "w")
 t1 = time.clock()
 f.write(str(r2))
 t2 = time.clock()
-ttime = round(t2-t1, 3)
+ttime = round(t2 - t1, 3)
 f.close()
 os.unlink("test2.out")
-print "Print time:", ttime, " Rows/s:", int(reclen/(ttime+delta)),
-print " Speed-up:", round(origtime/ttime, 3)
-print
+print("Print time:", ttime, " Rows/s:", int(reclen / (ttime + delta)), end=' ')
+print(" Speed-up:", round(origtime / ttime, 3))
+print()
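
One porting gap the hunk above leaves open: time.clock() survives the print() conversion but was deprecated in Python 3.3 and removed in 3.8. A hedged sketch of the modern replacement for these timings:

    from time import perf_counter

    # perf_counter() is the portable substitute for the time.clock()
    # calls kept in recarray2-test.py above.
    t1 = perf_counter()
    data = [float(row ** 2) for row in range(10 ** 6)]
    ttime = round(perf_counter() - t1, 3)
    print("Assign time:", ttime, " Rows/s:", int(len(data) / (ttime + 1e-6)))
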
diff --git a/bench/search-bench-plot.py b/bench/search-bench-plot.py
index b80c5d7..9dc4a87 100644
--- a/bench/search-bench-plot.py
+++ b/bench/search-bench-plot.py
@@ -1,38 +1,42 @@
+from __future__ import print_function
 import tables
 from pylab import *
 
+
 def get_values(filename, complib=''):
     f = tables.open_file(filename)
     nrows = f.root.small.create_best.cols.nrows[:]
-    corrected_sizes = nrows/10.**6
+    corrected_sizes = nrows / 10. ** 6
     if mb_units:
-        corrected_sizes = 16*nrows/10.**6
+        corrected_sizes = 16 * nrows / 10. ** 6
     if insert:
-        values = corrected_sizes/f.root.small.create_best.cols.tfill[:]
+        values = corrected_sizes / f.root.small.create_best.cols.tfill[:]
     if table_size:
-        values = f.root.small.create_best.cols.fsize[:]/nrows
+        values = f.root.small.create_best.cols.fsize[:] / nrows
     if query:
-        values = corrected_sizes/f.root.small.search_best.inkernel.int.cols.time1[:]
+        values = corrected_sizes / \
+            f.root.small.search_best.inkernel.int.cols.time1[:]
     if query_cache:
-        values = corrected_sizes/f.root.small.search_best.inkernel.int.cols.time2[:]
+        values = corrected_sizes / \
+            f.root.small.search_best.inkernel.int.cols.time2[:]
 
     f.close()
     return nrows, values
 
+
 def show_plot(plots, yaxis, legends, gtitle):
     xlabel('Number of rows')
     ylabel(yaxis)
-    xlim(10**3, 10**8)
+    xlim(10 ** 3, 10 ** 8)
     title(gtitle)
     grid(True)
 
 #     legends = [f[f.find('-'):f.index('.out')] for f in filenames]
 #     legends = [l.replace('-', ' ') for l in legends]
     if table_size:
-        legend([p[0] for p in plots], legends, loc = "upper right")
+        legend([p[0] for p in plots], legends, loc="upper right")
     else:
-        legend([p[0] for p in plots], legends, loc = "upper left")
-
+        legend([p[0] for p in plots], legends, loc="upper left")
 
     #subplots_adjust(bottom=0.2, top=None, wspace=0.2, hspace=0.2)
     if outfile:
@@ -42,7 +46,8 @@ def show_plot(plots, yaxis, legends, gtitle):
 
 if __name__ == '__main__':
 
-    import sys, getopt
+    import sys
+    import getopt
 
     usage = """usage: %s [-o file] [-t title] [--insert] [--table-size] [--query] [--query-cache] [--MB-units] files
  -o filename for output (only .png and .jpg extensions supported)
@@ -98,15 +103,18 @@ if __name__ == '__main__':
         elif option[0] == '--table-size':
             table_size = 1
             yaxis = "Bytes/row"
-            gtitle = "Disk space taken by a record (original record size: 16 bytes)"
+            gtitle = ("Disk space taken by a record (original record size: "
+                      "16 bytes)")
         elif option[0] == '--query':
             query = 1
             yaxis = "MRows/s"
-            gtitle = "Selecting with small (16 bytes) record size (file not in cache)"
+            gtitle = ("Selecting with small (16 bytes) record size (file not "
+                      "in cache)")
         elif option[0] == '--query-cache':
             query_cache = 1
             yaxis = "MRows/s"
-            gtitle = "Selecting with small (16 bytes) record size (file in cache)"
+            gtitle = ("Selecting with small (16 bytes) record size (file in "
+                      "cache)")
         elif option[0] == '--MB-units':
             mb_units = 1
 
@@ -118,14 +126,13 @@ if __name__ == '__main__':
     if tit:
         gtitle = tit
 
-
     plots = []
     legends = []
     for filename in filenames:
-        plegend = filename[filename.find('cl-')+3:filename.index('.h5')]
+        plegend = filename[filename.find('cl-') + 3:filename.index('.h5')]
         plegend = plegend.replace('-', ' ')
         xval, yval = get_values(filename, '')
-        print "Values for %s --> %s, %s" % (filename, xval, yval)
+        print("Values for %s --> %s, %s" % (filename, xval, yval))
         #plots.append(loglog(xval, yval, linewidth=5))
         plots.append(semilogx(xval, yval, linewidth=4))
         legends.append(plegend)
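
For readers without the benchmark .h5 files at hand, the plotting half of this script boils down to one semilogx() call per input file. A minimal sketch with made-up sample values, assuming matplotlib (which provides pylab):

    from pylab import grid, legend, semilogx, show, xlabel, xlim, ylabel

    nrows = [10 ** 3, 10 ** 5, 10 ** 7]   # made-up sample points
    mrows_s = [0.8, 2.4, 3.1]
    p = semilogx(nrows, mrows_s, linewidth=4)
    xlabel('Number of rows')
    ylabel('MRows/s')
    xlim(10 ** 3, 10 ** 8)
    grid(True)
    legend([p[0]], ['example run'], loc='upper left')
    show()
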
diff --git a/bench/search-bench.py b/bench/search-bench.py
index 346f7f6..3f9f3c9 100644
--- a/bench/search-bench.py
+++ b/bench/search-bench.py
@@ -1,62 +1,65 @@
 #!/usr/bin/env python
 
+from __future__ import print_function
 import sys
-
+import math
+import os
 import time
-from tables import *
 import random
-import math
 import warnings
+
 import numpy
 
+from tables import *
+
 # Initialize the random generator always with the same integer
 # in order to have reproducible results
 random.seed(19)
 numpy.random.seed(19)
 
 randomvalues = 0
-worst=0
+worst = 0
 
 Small = {
     "var1": StringCol(itemsize=4, dflt="Hi!", pos=2),
     "var2": Int32Col(pos=1),
     "var3": Float64Col(pos=0),
     #"var4" : BoolCol(),
-    }
+}
+
 
 def createNewBenchFile(bfile, verbose):
 
     class Create(IsDescription):
-        nrows   = Int32Col(pos=0)
-        irows   = Int32Col(pos=1)
-        tfill   = Float64Col(pos=2)
-        tidx    = Float64Col(pos=3)
-        tcfill  = Float64Col(pos=4)
-        tcidx   = Float64Col(pos=5)
+        nrows = Int32Col(pos=0)
+        irows = Int32Col(pos=1)
+        tfill = Float64Col(pos=2)
+        tidx = Float64Col(pos=3)
+        tcfill = Float64Col(pos=4)
+        tcidx = Float64Col(pos=5)
         rowsecf = Float64Col(pos=6)
         rowseci = Float64Col(pos=7)
-        fsize   = Float64Col(pos=8)
-        isize   = Float64Col(pos=9)
-        psyco   = BoolCol(pos=10)
+        fsize = Float64Col(pos=8)
+        isize = Float64Col(pos=9)
+        psyco = BoolCol(pos=10)
 
     class Search(IsDescription):
-        nrows   = Int32Col(pos=0)
-        rowsel  = Int32Col(pos=1)
-        time1   = Float64Col(pos=2)
-        time2   = Float64Col(pos=3)
-        tcpu1   = Float64Col(pos=4)
-        tcpu2   = Float64Col(pos=5)
+        nrows = Int32Col(pos=0)
+        rowsel = Int32Col(pos=1)
+        time1 = Float64Col(pos=2)
+        time2 = Float64Col(pos=3)
+        tcpu1 = Float64Col(pos=4)
+        tcpu2 = Float64Col(pos=5)
         rowsec1 = Float64Col(pos=6)
         rowsec2 = Float64Col(pos=7)
-        psyco   = BoolCol(pos=8)
+        psyco = BoolCol(pos=8)
 
     if verbose:
-        print "Creating a new benchfile:", bfile
+        print("Creating a new benchfile:", bfile)
     # Open the benchmarking file
     bf = open_file(bfile, "w")
     # Create groups
     for recsize in ["small"]:
-        group = bf.create_group("/", recsize, recsize+" Group")
+        group = bf.create_group("/", recsize, recsize + " Group")
         # Attach the row size of table as attribute
         if recsize == "small":
             group._v_attrs.rowsize = 16
@@ -65,59 +68,60 @@ def createNewBenchFile(bfile, verbose):
         bf.create_table(group, "create_worst", Create, "worst case")
         for case in ["best", "worst"]:
             # create a group for searching bench (best case)
-            groupS = bf.create_group(group, "search_"+case, "Search Group")
+            groupS = bf.create_group(group, "search_" + case, "Search Group")
             # Create Tables for searching
             for mode in ["indexed", "inkernel", "standard"]:
-                groupM = bf.create_group(groupS, mode, mode+" Group")
+                groupM = bf.create_group(groupS, mode, mode + " Group")
                 # for searching bench
-                #for atom in ["string", "int", "float", "bool"]:
+                # for atom in ["string", "int", "float", "bool"]:
                 for atom in ["string", "int", "float"]:
-                    bf.create_table(groupM, atom, Search, atom+" bench")
+                    bf.create_table(groupM, atom, Search, atom + " bench")
     bf.close()
 
+
 def createFile(filename, nrows, filters, index, heavy, noise, verbose):
 
     # Open a file in "w"rite mode
-    fileh = open_file(filename, mode = "w", title="Searchsorted Benchmark",
-                     filters=filters)
+    fileh = open_file(filename, mode="w", title="Searchsorted Benchmark",
+                      filters=filters)
     rowswritten = 0
 
     # Create the test table
     table = fileh.create_table(fileh.root, 'table', Small, "test table",
-                              None, nrows)
+                               None, nrows)
 
     t1 = time.time()
     cpu1 = time.clock()
     nrowsbuf = table.nrowsinbuf
     minimum = 0
     maximum = nrows
-    for i in xrange(0, nrows, nrowsbuf):
-        if i+nrowsbuf > nrows:
+    for i in range(0, nrows, nrowsbuf):
+        if i + nrowsbuf > nrows:
             j = nrows
         else:
-            j = i+nrowsbuf
+            j = i + nrowsbuf
         if randomvalues:
-            var3 = numpy.random.uniform(minimum, maximum, size=j-i)
+            var3 = numpy.random.uniform(minimum, maximum, size=j - i)
         else:
             var3 = numpy.arange(i, j, dtype=numpy.float64)
             if noise > 0:
-                var3 += numpy.random.uniform(-noise, noise, size=j-i)
+                var3 += numpy.random.uniform(-noise, noise, size=j - i)
         var2 = numpy.array(var3, dtype=numpy.int32)
-        var1 = numpy.empty(shape=[j-i], dtype="S4")
+        var1 = numpy.empty(shape=[j - i], dtype="S4")
         if not heavy:
             var1[:] = var2
         table.append([var3, var2, var1])
     table.flush()
     rowswritten += nrows
-    time1 = time.time()-t1
-    tcpu1 = time.clock()-cpu1
-    print "Time for filling:", round(time1, 3),\
-          "Krows/s:", round(nrows/1000./time1, 3),
+    time1 = time.time() - t1
+    tcpu1 = time.clock() - cpu1
+    print("Time for filling:", round(time1, 3),
+          "Krows/s:", round(nrows / 1000. / time1, 3), end=' ')
     fileh.close()
     size1 = os.stat(filename)[6]
-    print ", File size:", round(size1/(1024.*1024.), 3), "MB"
-    fileh = open_file(filename, mode = "a", title="Searchsorted Benchmark",
-                     filters=filters)
+    print(", File size:", round(size1 / (1024. * 1024.), 3), "MB")
+    fileh = open_file(filename, mode="a", title="Searchsorted Benchmark",
+                      filters=filters)
     table = fileh.root.table
     rowsize = table.rowsize
     if index:
@@ -128,10 +132,10 @@ def createFile(filename, nrows, filters, index, heavy, noise, verbose):
             indexrows = table.cols.var1.create_index(filters=filters)
         for colname in ['var2', 'var3']:
             table.colinstances[colname].create_index(filters=filters)
-        time2 = time.time()-t1
-        tcpu2 = time.clock()-cpu1
-        print "Time for indexing:", round(time2, 3), \
-              "iKrows/s:", round(indexrows/1000./time2, 3),
+        time2 = time.time() - t1
+        tcpu2 = time.clock() - cpu1
+        print("Time for indexing:", round(time2, 3),
+              "iKrows/s:", round(indexrows / 1000. / time2, 3), end=' ')
     else:
         indexrows = 0
         time2 = 0.0000000001  # an ugly hack
@@ -140,18 +144,19 @@ def createFile(filename, nrows, filters, index, heavy, noise, verbose):
     if verbose:
         if index:
             idx = table.cols.var1.index
-            print "Index parameters:", repr(idx)
+            print("Index parameters:", repr(idx))
         else:
-            print "NOT indexing rows"
+            print("NOT indexing rows")
     # Close the file
     fileh.close()
 
     size2 = os.stat(filename)[6] - size1
     if index:
-        print ", Index size:", round(size2/(1024.*1024.), 3), "MB"
+        print(", Index size:", round(size2 / (1024. * 1024.), 3), "MB")
     return (rowswritten, indexrows, rowsize, time1, time2,
             tcpu1, tcpu2, size1, size2)
 
+
 def benchCreate(file, nrows, filters, index, bfile, heavy,
                 psyco, noise, verbose):
 
@@ -159,17 +164,17 @@ def benchCreate(file, nrows, filters, index, bfile, heavy,
     bf = open_file(bfile, "a")
     recsize = "small"
     if worst:
-        table = bf.get_node("/"+recsize+"/create_worst")
+        table = bf.get_node("/" + recsize + "/create_worst")
     else:
-        table = bf.get_node("/"+recsize+"/create_best")
+        table = bf.get_node("/" + recsize + "/create_best")
 
     (rowsw, irows, rowsz, time1, time2, tcpu1, tcpu2, size1, size2) = \
-          createFile(file, nrows, filters, index, heavy, noise, verbose)
+        createFile(file, nrows, filters, index, heavy, noise, verbose)
     # Collect data
     table.row["nrows"] = rowsw
     table.row["irows"] = irows
     table.row["tfill"] = time1
-    table.row["tidx"]  = time2
+    table.row["tidx"] = time2
     table.row["tcfill"] = tcpu1
     table.row["tcidx"] = tcpu2
     table.row["fsize"] = size1
@@ -177,31 +182,33 @@ def benchCreate(file, nrows, filters, index, bfile, heavy,
     table.row["psyco"] = psyco
     tapprows = round(time1, 3)
     cpuapprows = round(tcpu1, 3)
-    tpercent = int(round(cpuapprows/tapprows, 2)*100)
-    print "Rows written:", rowsw, " Row size:", rowsz
-    print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-          (tapprows, cpuapprows, tpercent)
+    tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+    print("Rows written:", rowsw, " Row size:", rowsz)
+    print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+          (tapprows, cpuapprows, tpercent))
     rowsecf = rowsw / tapprows
     table.row["rowsecf"] = rowsecf
-    #print "Write rows/sec: ", rowsecf
-    print "Total file size:", round((size1+size2)/(1024.*1024.), 3), "MB",
-    print ", Write KB/s (pure data):", int(rowsw * rowsz / (tapprows * 1024))
-    #print "Write KB/s :", int((size1+size2) / ((time1+time2) * 1024))
+    # print "Write rows/sec: ", rowsecf
+    print("Total file size:",
+          round((size1 + size2) / (1024. * 1024.), 3), "MB", end=' ')
+    print(", Write KB/s (pure data):", int(rowsw * rowsz / (tapprows * 1024)))
+    # print "Write KB/s :", int((size1+size2) / ((time1+time2) * 1024))
     tidxrows = time2
     cpuidxrows = round(tcpu2, 3)
-    tpercent = int(round(cpuidxrows/tidxrows, 2)*100)
-    print "Rows indexed:", irows, " (IMRows):", irows / float(10**6)
-    print "Time indexing rows: %s s (real) %s s (cpu)  %s%%" % \
-          (round(tidxrows, 3), cpuidxrows, tpercent)
+    tpercent = int(round(cpuidxrows / tidxrows, 2) * 100)
+    print("Rows indexed:", irows, " (IMRows):", irows / float(10 ** 6))
+    print("Time indexing rows: %s s (real) %s s (cpu)  %s%%" %
+          (round(tidxrows, 3), cpuidxrows, tpercent))
     rowseci = irows / tidxrows
     table.row["rowseci"] = rowseci
     table.row.append()
     bf.close()
 
+
 def readFile(filename, atom, riter, indexmode, dselect, verbose):
     # Open the HDF5 file in read-only mode
 
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     table = fileh.root.table
     var1 = table.cols.var1
     var2 = table.cols.var2
@@ -210,33 +217,35 @@ def readFile(filename, atom, riter, indexmode, dselect, verbose):
         if var2.index.nelements > 0:
             where = table._whereIndexed
         else:
-            warnings.warn("Not indexed table or empty index. Defaulting to in-kernel selection")
+            warnings.warn(
+                "Table is not indexed or the index is empty; defaulting to "
+                "in-kernel selection")
             indexmode = "inkernel"
             where = table._whereInRange
     elif indexmode == "inkernel":
         where = table.where
     if verbose:
-        print "Max rows in buf:", table.nrowsinbuf
-        print "Rows in", table._v_pathname, ":", table.nrows
-        print "Buffersize:", table.rowsize * table.nrowsinbuf
-        print "MaxTuples:", table.nrowsinbuf
+        print("Max rows in buf:", table.nrowsinbuf)
+        print("Rows in", table._v_pathname, ":", table.nrows)
+        print("Buffersize:", table.rowsize * table.nrowsinbuf)
+        print("MaxTuples:", table.nrowsinbuf)
         if indexmode == "indexed":
-            print "Chunk size:", var2.index.sorted.chunksize
-            print "Number of elements per slice:", var2.index.nelemslice
-            print "Slice number in", table._v_pathname, ":", var2.index.nrows
+            print("Chunk size:", var2.index.sorted.chunksize)
+            print("Number of elements per slice:", var2.index.nelemslice)
+            print("Slice number in", table._v_pathname, ":", var2.index.nrows)
 
     #table.nrowsinbuf = 10
-    #print "nrowsinbuf-->", table.nrowsinbuf
+    # print "nrowsinbuf-->", table.nrowsinbuf
     rowselected = 0
     time2 = 0.
     tcpu2 = 0.
     results = []
-    print "Select mode:", indexmode, ". Selecting for type:", atom
+    print("Select mode:", indexmode, ". Selecting for type:", atom)
     # Initialize the random generator always with the same integer
     # in order to have reproducible results on each read iteration
     random.seed(19)
     numpy.random.seed(19)
-    for i in xrange(riter):
+    for i in range(riter):
         # The interval to look for values in. This is approximately equivalent
         # the number of elements to select
         rnd = numpy.random.randint(table.nrows)
@@ -251,7 +260,7 @@ def readFile(filename, atom, riter, indexmode, dselect, verbose):
                 results = [p.nrow for p in table
                            if p["var1"] == val]
         elif atom == "int":
-            val = rnd+dselect
+            val = rnd + dselect
             if indexmode in ["indexed", "inkernel"]:
                 results = [p.nrow
                            for p in where('(rnd <= var3) & (var3 < val)')]
@@ -259,9 +268,9 @@ def readFile(filename, atom, riter, indexmode, dselect, verbose):
                 results = [p.nrow for p in table
                            if rnd <= p["var2"] < val]
         elif atom == "float":
-            val = rnd+dselect
+            val = rnd + dselect
             if indexmode in ["indexed", "inkernel"]:
-                t1=time.time()
+                t1 = time.time()
                 results = [p.nrow
                            for p in where('(rnd <= var3) & (var3 < val)')]
             else:
@@ -270,7 +279,7 @@ def readFile(filename, atom, riter, indexmode, dselect, verbose):
         else:
             raise ValueError("Value for atom '%s' not supported." % atom)
         rowselected += len(results)
-        #print "selected values-->", results
+        # print "selected values-->", results
         if i == 0:
             # First iteration
             time1 = time.time() - t1
@@ -294,8 +303,8 @@ def readFile(filename, atom, riter, indexmode, dselect, verbose):
         time2 = time2 / (riter - correction)
         tcpu2 = tcpu2 / (riter - correction)
     if verbose and 1:
-        print "Values that fullfill the conditions:"
-        print results
+        print("Values that fullfill the conditions:")
+        print(results)
 
     #rowsread = table.nrows * riter
     rowsread = table.nrows
@@ -306,15 +315,16 @@ def readFile(filename, atom, riter, indexmode, dselect, verbose):
 
     return (rowsread, rowselected, rowsize, time1, time2, tcpu1, tcpu2)
 
+
 def benchSearch(file, riter, indexmode, bfile, heavy, psyco, dselect, verbose):
 
     # Open the benchfile in append mode
     bf = open_file(bfile, "a")
     recsize = "small"
     if worst:
-        tableparent = "/"+recsize+"/search_worst/"+indexmode+"/"
+        tableparent = "/" + recsize + "/search_worst/" + indexmode + "/"
     else:
-        tableparent = "/"+recsize+"/search_best/"+indexmode+"/"
+        tableparent = "/" + recsize + "/search_best/" + indexmode + "/"
 
     # Do the benchmarks
     if not heavy:
@@ -327,7 +337,7 @@ def benchSearch(file, riter, indexmode, bfile, heavy, psyco, dselect, verbose):
         tablepath = tableparent + atom
         table = bf.get_node(tablepath)
         (rowsr, rowsel, rowssz, time1, time2, tcpu1, tcpu2) = \
-                readFile(file, atom, riter, indexmode, dselect, verbose)
+            readFile(file, atom, riter, indexmode, dselect, verbose)
         row = table.row
         row["nrows"] = rowsr
         row["rowsel"] = rowsel
@@ -340,41 +350,40 @@ def benchSearch(file, riter, indexmode, bfile, heavy, psyco, dselect, verbose):
         cpureadrows2 = round(tcpu2, 6)
         row["tcpu2"] = tcpu2
         row["psyco"] = psyco
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
         if riter > 1:
-            tpercent2 = int(round(cpureadrows2/treadrows2, 2)*100)
+            tpercent2 = int(round(cpureadrows2 / treadrows2, 2) * 100)
         else:
             tpercent2 = 0.
-        tMrows = rowsr / (1000*1000.)
+        tMrows = rowsr / (1000 * 1000.)
         sKrows = rowsel / 1000.
-        if atom == "string": # just to print once
-            print "Rows read:", rowsr, "Mread:", round(tMrows, 6), "Mrows"
-        print "Rows selected:", rowsel, "Ksel:", round(sKrows, 6), "Krows"
-        print "Time selecting (1st time): %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
+        if atom == "string":  # just to print once
+            print("Rows read:", rowsr, "Mread:", round(tMrows, 6), "Mrows")
+        print("Rows selected:", rowsel, "Ksel:", round(sKrows, 6), "Krows")
+        print("Time selecting (1st time): %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
         if riter > 1:
-            print "Time selecting (cached): %s s (real) %s s (cpu)  %s%%" % \
-                  (treadrows2, cpureadrows2, tpercent2)
+            print("Time selecting (cached): %s s (real) %s s (cpu)  %s%%" %
+                  (treadrows2, cpureadrows2, tpercent2))
         #rowsec1 = round(rowsr / float(treadrows), 6)/10**6
         rowsec1 = rowsr / treadrows
         row["rowsec1"] = rowsec1
-        print "Read Mrows/sec: ",
-        print round(rowsec1 / 10.**6, 6), "(first time)",
+        print("Read Mrows/sec: ", end=' ')
+        print(round(rowsec1 / 10. ** 6, 6), "(first time)", end=' ')
         if riter > 1:
             rowsec2 = rowsr / treadrows2
             row["rowsec2"] = rowsec2
-            print round(rowsec2 / 10.**6, 6), "(cache time)"
+            print(round(rowsec2 / 10. ** 6, 6), "(cache time)")
         else:
-            print
+            print()
         # Append the info to the table
         row.append()
         table.flush()
     # Close the benchmark file
     bf.close()
 
-if __name__=="__main__":
-    import sys
-    import os.path
+
+if __name__ == "__main__":
     import getopt
     try:
         import psyco
@@ -382,8 +391,6 @@ if __name__=="__main__":
     except:
         psyco_imported = 0
 
-    import time
-
     usage = """usage: %s [-v] [-p] [-R] [-r] [-w] [-c level] [-l complib] [-S] [-F] [-n nrows] [-x] [-b file] [-t] [-h] [-k riter] [-m indexmode] [-N range] [-d range] datafile
             -v verbose
             -p use "psyco" if available
@@ -405,7 +412,8 @@ if __name__=="__main__":
             -k number of iterations for reading\n""" % sys.argv[0]
 
     try:
-        opts, pargs = getopt.getopt(sys.argv[1:], 'vpSFRrowxthk:b:c:l:n:m:N:d:')
+        opts, pargs = getopt.getopt(
+            sys.argv[1:], 'vpSFRrowxthk:b:c:l:n:m:N:d:')
     except:
         sys.stderr.write(usage)
         sys.exit(0)
@@ -466,9 +474,11 @@ if __name__=="__main__":
         elif option[0] == '-m':
             indexmode = option[1]
             if indexmode not in supported_imodes:
-                raise ValueError("Indexmode should be any of '%s' and you passed '%s'" % (supported_imodes, indexmode))
+                raise ValueError(
+                    "Indexmode must be one of '%s'; you passed '%s'" %
+                    (supported_imodes, indexmode))
         elif option[0] == '-n':
-            nrows = int(float(option[1])*1000)
+            nrows = int(float(option[1]) * 1000)
         elif option[0] == '-N':
             noise = float(option[1])
         elif option[0] == '-d':
@@ -481,8 +491,8 @@ if __name__=="__main__":
 
     if complib == "none":
         # This means no compression at all
-        complib="zlib"  # just to make PyTables not complaining
-        complevel=0
+        complib = "zlib"  # just to make PyTables not complaining
+        complevel = 0
 
     # Catch the hdf5 file passed as the last argument
     file = pargs[0]
@@ -497,11 +507,11 @@ if __name__=="__main__":
 
     if testwrite:
         if verbose:
-            print "Compression level:", complevel
+            print("Compression level:", complevel)
             if complevel > 0:
-                print "Compression library:", complib
+                print("Compression library:", complib)
                 if shuffle:
-                    print "Suffling..."
+                    print("Suffling...")
         if psyco_imported and usepsyco:
             psyco.bind(createFile)
         benchCreate(file, nrows, filters, index, bfile, heavy,
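
The indexed-search machinery exercised by search-bench.py reduces to Column.create_index() plus a string condition handed to Table.where(). A self-contained sketch, assuming a recent PyTables (names and sizes are illustrative):

    import numpy as np
    import tables

    with tables.open_file("idx-demo.h5", "w") as f:
        t = f.create_table("/", "t", {"var2": tables.Int32Col(),
                                      "var3": tables.Float64Col()})
        arr = np.zeros(10000, dtype=[("var2", np.int32), ("var3", np.float64)])
        arr["var2"] = np.arange(10000)
        arr["var3"] = arr["var2"] * 0.5
        t.append(arr)
        t.flush()
        t.cols.var2.create_index()    # what the -x option turns on above
        hits = [r["var3"] for r in t.where("(var2 >= 10) & (var2 < 20)")]
        print(len(hits), "rows selected")
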
diff --git a/bench/searchsorted-bench.py b/bench/searchsorted-bench.py
index e06942b..f595de5 100644
--- a/bench/searchsorted-bench.py
+++ b/bench/searchsorted-bench.py
@@ -1,8 +1,10 @@
 #!/usr/bin/env python
 
+from __future__ import print_function
 import time
 from tables import *
 
+
 class Small(IsDescription):
     var1 = StringCol(itemsize=4)
     var2 = Int32Col()
@@ -10,31 +12,34 @@ class Small(IsDescription):
     var4 = BoolCol()
 
 # Define a user record to characterize some kind of particles
+
+
 class Medium(IsDescription):
-    var1        = StringCol(itemsize=16)  # 16-character String
-    #float1      = Float64Col(dflt=2.3)
-    #float2      = Float64Col(dflt=2.3)
-    #zADCcount    = Int16Col()    # signed short integer
-    var2        = Int32Col()    # signed short integer
-    var3        = Float64Col()
-    grid_i      = Int32Col()    # integer
-    grid_j      = Int32Col()    # integer
-    pressure    = Float32Col()    # float  (single-precision)
-    energy      = Float64Col(shape=2)    # double (double-precision)
+    var1 = StringCol(itemsize=16)   # 16-character String
+    #float1 = Float64Col(dflt=2.3)
+    #float2 = Float64Col(dflt=2.3)
+    # zADCcount    = Int16Col()      # signed short integer
+    var2 = Int32Col()               # signed short integer
+    var3 = Float64Col()
+    grid_i = Int32Col()             # integer
+    grid_j = Int32Col()             # integer
+    pressure = Float32Col()         # float  (single-precision)
+    energy = Float64Col(shape=2)    # double (double-precision)
+
 
 def createFile(filename, nrows, filters, atom, recsize, index, verbose):
 
     # Open a file in "w"rite mode
-    fileh = open_file(filename, mode = "w", title="Searchsorted Benchmark",
-                     filters=filters)
+    fileh = open_file(filename, mode="w", title="Searchsorted Benchmark",
+                      filters=filters)
     title = "This is the IndexArray title"
     # Create an IndexArray instance
     rowswritten = 0
     # Create an entry
-    klass = {"small":Small, "medium":Medium}
+    klass = {"small": Small, "medium": Medium}
     table = fileh.create_table(fileh.root, 'table', klass[recsize], title,
-                              None, nrows)
-    for i in xrange(nrows):
+                               None, nrows)
+    for i in range(nrows):
         #table.row['var1'] = str(i)
         #table.row['var2'] = random.randrange(nrows)
         table.row['var2'] = i
@@ -60,18 +65,19 @@ def createFile(filename, nrows, filters, atom, recsize, index, verbose):
         else:
             raise ValueError("Index type not supported yet")
         if verbose:
-            print "Number of indexed rows:", indexrows
+            print("Number of indexed rows:", indexrows)
     # Close the file (eventually destroy the extended type)
     fileh.close()
 
     return (rowswritten, rowsize)
 
+
 def readFile(filename, atom, niter, verbose):
     # Open the HDF5 file in read-only mode
 
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     table = fileh.root.table
-    print "reading", table
+    print("reading", table)
     if atom == "string":
         idxcol = table.cols.var1.index
     elif atom == "bool":
@@ -81,45 +87,48 @@ def readFile(filename, atom, niter, verbose):
     else:
         idxcol = table.cols.var3.index
     if verbose:
-        print "Max rows in buf:", table.nrowsinbuf
-        print "Rows in", table._v_pathname, ":", table.nrows
-        print "Buffersize:", table.rowsize * table.nrowsinbuf
-        print "MaxTuples:", table.nrowsinbuf
-        print "Chunk size:", idxcol.sorted.chunksize
-        print "Number of elements per slice:", idxcol.nelemslice
-        print "Slice number in", table._v_pathname, ":", idxcol.nrows
+        print("Max rows in buf:", table.nrowsinbuf)
+        print("Rows in", table._v_pathname, ":", table.nrows)
+        print("Buffersize:", table.rowsize * table.nrowsinbuf)
+        print("MaxTuples:", table.nrowsinbuf)
+        print("Chunk size:", idxcol.sorted.chunksize)
+        print("Number of elements per slice:", idxcol.nelemslice)
+        print("Slice number in", table._v_pathname, ":", idxcol.nrows)
 
     rowselected = 0
     if atom == "string":
-        for i in xrange(niter):
+        for i in range(niter):
             #results = [table.row["var3"] for i in table.where(2+i<=table.cols.var2 < 10+i)]
-#             results = [table.row.nrow() for i in table.where(2<=table.cols.var2 < 10)]
-            results = [p["var1"] #p.nrow()
+            #results = [table.row.nrow() for i in table.where(2<=table.cols.var2 < 10)]
+            results = [p["var1"]  # p.nrow()
                        for p in table.where(table.cols.var1 == "1111")]
 #                      for p in table.where("1000"<=table.cols.var1<="1010")]
             rowselected += len(results)
     elif atom == "bool":
-        for i in xrange(niter):
-            results = [p["var2"] #p.nrow()
-                       for p in table.where(table.cols.var4==0)]
+        for i in range(niter):
+            results = [p["var2"]  # p.nrow()
+                       for p in table.where(table.cols.var4 == 0)]
             rowselected += len(results)
     elif atom == "int":
-        for i in xrange(niter):
+        for i in range(niter):
             #results = [table.row["var3"] for i in table.where(2+i<=table.cols.var2 < 10+i)]
-#             results = [table.row.nrow() for i in table.where(2<=table.cols.var2 < 10)]
-            results = [p["var2"] #p.nrow()
-#                        for p in table.where(110*i<=table.cols.var2<110*(i+1))]
-#                       for p in table.where(1000-30<table.cols.var2<1000+60)]
-                       for p in table.where(table.cols.var2<=400)]
+            #results = [table.row.nrow() for i in table.where(2<=table.cols.var2 < 10)]
+            results = [p["var2"]  # p.nrow()
+                       # for p in table.where(110*i<=table.cols.var2<110*(i+1))]
+                       # for p in table.where(1000-30<table.cols.var2<1000+60)]
+                       for p in table.where(table.cols.var2 <= 400)]
             rowselected += len(results)
     elif atom == "float":
-        for i in xrange(niter):
+        for i in range(niter):
 #         results = [(table.row.nrow(), table.row["var3"])
 #                    for i in table.where(3<=table.cols.var3 < 5.)]
 #             results = [(p.nrow(), p["var3"])
-#                        for p in table.where(1000.-i<=table.cols.var3<1000.+i)]
-            results = [p["var3"] # (p.nrow(), p["var3"])
-                       for p in table.where(100*i<=table.cols.var3<100*(i+1))]
+            # for p in table.where(1000.-i<=table.cols.var3<1000.+i)]
+            results = [
+                p["var3"]  # (p.nrow(), p["var3"])
+                for p in table.where(
+                    100 * i <= table.cols.var3 < 100 * (i + 1))
+            ]
 #                        for p in table
 #                        if 100*i<=p["var3"]<100*(i+1)]
 #             results = [ (p.nrow(), p["var3"]) for p in table
@@ -128,8 +137,8 @@ def readFile(filename, atom, niter, verbose):
         else:
             raise ValueError("Unsuported atom value")
     if verbose and 1:
-        print "Values that fullfill the conditions:"
-        print results
+        print("Values that fullfill the conditions:")
+        print(results)
 
     rowsread = table.nrows * niter
     rowsize = table.rowsize
@@ -143,7 +152,7 @@ def readFile(filename, atom, niter, verbose):
 def searchFile(filename, atom, verbose, item):
     # Open the HDF5 file in read-only mode
 
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     rowsread = 0
     uncomprBytes = 0
     table = fileh.root.table
@@ -153,25 +162,25 @@ def searchFile(filename, atom, verbose, item):
         idxcol = table.cols.var3.index
     else:
         raise ValueError("Unsuported atom value")
-    print "Searching", table, "..."
+    print("Searching", table, "...")
     if verbose:
-        print "Chunk size:", idxcol.sorted.chunksize
-        print "Number of elements per slice:", idxcol.sorted.nelemslice
-        print "Slice number in", table._v_pathname, ":", idxcol.sorted.nrows
+        print("Chunk size:", idxcol.sorted.chunksize)
+        print("Number of elements per slice:", idxcol.sorted.nelemslice)
+        print("Slice number in", table._v_pathname, ":", idxcol.sorted.nrows)
 
     (positions, niter) = idxcol.search(item)
     if verbose:
-        print "Positions for item", item, "==>", positions
-        print "Total iterations in search:", niter
+        print("Positions for item", item, "==>", positions)
+        print("Total iterations in search:", niter)
 
     rowsread += table.nrows
     uncomprBytes += idxcol.sorted.chunksize * niter * idxcol.sorted.itemsize
 
     results = table.read(coords=positions)
-    print "results length:", len(results)
+    print("results length:", len(results))
     if verbose:
-        print "Values that fullfill the conditions:"
-        print results
+        print("Values that fullfill the conditions:")
+        print(results)
 
     # Close the file (eventually destroy the extended type)
     fileh.close()
@@ -179,7 +188,7 @@ def searchFile(filename, atom, verbose, item):
     return (rowsread, uncomprBytes, niter)
 
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
     try:
@@ -280,11 +289,11 @@ if __name__=="__main__":
     file = pargs[0]
 
     if testwrite:
-        print "Compression level:", complevel
+        print("Compression level:", complevel)
         if complevel > 0:
-            print "Compression library:", complib
+            print("Compression library:", complib)
             if shuffle:
-                print "Suffling..."
+                print("Suffling...")
         t1 = time.time()
         cpu1 = time.clock()
         if psyco_imported and usepsyco:
@@ -293,14 +302,14 @@ if __name__=="__main__":
                                     atom, recsize, index, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        tapprows = round(t2-t1, 3)
-        cpuapprows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpuapprows/tapprows, 2)*100)
-        print "Rows written:", rowsw, " Row size:", rowsz
-        print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-              (tapprows, cpuapprows, tpercent)
-        print "Write rows/sec: ", int(rowsw / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
+        tapprows = round(t2 - t1, 3)
+        cpuapprows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+        print("Rows written:", rowsw, " Row size:", rowsz)
+        print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+              (tapprows, cpuapprows, tpercent))
+        print("Write rows/sec: ", int(rowsw / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
 
     if testread:
         if psyco_imported and usepsyco:
@@ -315,17 +324,17 @@ if __name__=="__main__":
                 (rowsr, rowsel, rowsz) = readFile(file, atom, niter, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        treadrows = round(t2-t1, 3)
-        cpureadrows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
-        tMrows = rowsr/(1000*1000.)
-        sKrows = rowsel/1000.
-        print "Rows read:", rowsr, "Mread:", round(tMrows, 3), "Mrows"
-        print "Rows selected:", rowsel, "Ksel:", round(sKrows, 3), "Krows"
-        print "Time reading rows: %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
-        print "Read Mrows/sec: ", round(tMrows / float(treadrows), 3)
-        #print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
+        treadrows = round(t2 - t1, 3)
+        cpureadrows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
+        tMrows = rowsr / (1000 * 1000.)
+        sKrows = rowsel / 1000.
+        print("Rows read:", rowsr, "Mread:", round(tMrows, 3), "Mrows")
+        print("Rows selected:", rowsel, "Ksel:", round(sKrows, 3), "Krows")
+        print("Time reading rows: %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
+        print("Read Mrows/sec: ", round(tMrows / float(treadrows), 3))
+        # print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
 #       print "Uncompr MB :", int(uncomprB / (1024 * 1024))
 #       print "Uncompr MB/s :", int(uncomprB / (treadrows * 1024 * 1024))
 #       print "Total chunks uncompr :", int(niter)
diff --git a/bench/searchsorted-bench2.py b/bench/searchsorted-bench2.py
index 3d8e3a2..f8446ae 100644
--- a/bench/searchsorted-bench2.py
+++ b/bench/searchsorted-bench2.py
@@ -1,8 +1,10 @@
 #!/usr/bin/env python
 
+from __future__ import print_function
 import time
 from tables import *
 
+
 class Small(IsDescription):
     var1 = StringCol(itemsize=4)
     var2 = Int32Col()
@@ -10,30 +12,33 @@ class Small(IsDescription):
     var4 = BoolCol()
 
 # Define a user record to characterize some kind of particles
+
+
 class Medium(IsDescription):
-    var1        = StringCol(itemsize=16, dflt="")  # 16-character String
-    #float1      = Float64Col(dflt=2.3)
-    #float2      = Float64Col(dflt=2.3)
-    #zADCcount    = Int16Col()    # signed short integer
-    var2        = Int32Col()    # signed short integer
-    var3        = Float64Col()
-    grid_i      = Int32Col()    # integer
-    grid_j      = Int32Col()    # integer
-    pressure    = Float32Col()    # float  (single-precision)
-    energy      = Float64Col(shape=2)    # double (double-precision)
+    var1 = StringCol(itemsize=16, dflt="")  # 16-character String
+    #float1 = Float64Col(dflt=2.3)
+    #float2 = Float64Col(dflt=2.3)
+    # zADCcount = Int16Col()          # signed short integer
+    var2 = Int32Col()               # signed short integer
+    var3 = Float64Col()
+    grid_i = Int32Col()             # integer
+    grid_j = Int32Col()             # integer
+    pressure = Float32Col()         # float  (single-precision)
+    energy = Float64Col(shape=2)    # double (double-precision)
+
 
 def createFile(filename, nrows, filters, atom, recsize, index, verbose):
 
     # Open a file in "w"rite mode
-    fileh = open_file(filename, mode = "w", title="Searchsorted Benchmark",
-                     filters=filters)
+    fileh = open_file(filename, mode="w", title="Searchsorted Benchmark",
+                      filters=filters)
     title = "This is the IndexArray title"
     # Create an IndexArray instance
     rowswritten = 0
     # Create an entry
-    klass = {"small":Small, "medium":Medium}
+    klass = {"small": Small, "medium": Medium}
     table = fileh.create_table(fileh.root, 'table', klass[recsize], title,
-                              None, nrows)
+                               None, nrows)
     for i in range(nrows):
         #table.row['var1'] = str(i)
         #table.row['var2'] = random.randrange(nrows)
@@ -60,18 +65,19 @@ def createFile(filename, nrows, filters, atom, recsize, index, verbose):
         else:
             raise ValueError("Index type not supported yet")
         if verbose:
-            print "Number of indexed rows:", indexrows
+            print("Number of indexed rows:", indexrows)
     # Close the file (eventually destroy the extended type)
     fileh.close()
 
     return (rowswritten, rowsize)
 
+
 def readFile(filename, atom, niter, verbose):
     # Open the HDF5 file in read-only mode
 
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     table = fileh.root.table
-    print "reading", table
+    print("reading", table)
     if atom == "string":
         idxcol = table.cols.var1.index
     elif atom == "bool":
@@ -81,45 +87,48 @@ def readFile(filename, atom, niter, verbose):
     else:
         idxcol = table.cols.var3.index
     if verbose:
-        print "Max rows in buf:", table.nrowsinbuf
-        print "Rows in", table._v_pathname, ":", table.nrows
-        print "Buffersize:", table.rowsize * table.nrowsinbuf
-        print "MaxTuples:", table.nrowsinbuf
-        print "Chunk size:", idxcol.sorted.chunksize
-        print "Number of elements per slice:", idxcol.nelemslice
-        print "Slice number in", table._v_pathname, ":", idxcol.nrows
+        print("Max rows in buf:", table.nrowsinbuf)
+        print("Rows in", table._v_pathname, ":", table.nrows)
+        print("Buffersize:", table.rowsize * table.nrowsinbuf)
+        print("MaxTuples:", table.nrowsinbuf)
+        print("Chunk size:", idxcol.sorted.chunksize)
+        print("Number of elements per slice:", idxcol.nelemslice)
+        print("Slice number in", table._v_pathname, ":", idxcol.nrows)
 
     rowselected = 0
     if atom == "string":
-        for i in xrange(niter):
+        for i in range(niter):
             #results = [table.row["var3"] for i in table(where=2+i<=table.cols.var2 < 10+i)]
-#             results = [table.row.nrow() for i in table(where=2<=table.cols.var2 < 10)]
-            results = [p["var1"] #p.nrow()
+            #results = [table.row.nrow() for i in table(where=2<=table.cols.var2 < 10)]
+            results = [p["var1"]  # p.nrow()
                        for p in table(where=table.cols.var1 == "1111")]
 #                      for p in table(where="1000"<=table.cols.var1<="1010")]
             rowselected += len(results)
     elif atom == "bool":
-        for i in xrange(niter):
-            results = [p["var2"] #p.nrow()
-                       for p in table(where=table.cols.var4==0)]
+        for i in range(niter):
+            results = [p["var2"]  # p.nrow()
+                       for p in table(where=table.cols.var4 == 0)]
             rowselected += len(results)
     elif atom == "int":
-        for i in xrange(niter):
+        for i in range(niter):
             #results = [table.row["var3"] for i in table(where=2+i<=table.cols.var2 < 10+i)]
-#             results = [table.row.nrow() for i in table(where=2<=table.cols.var2 < 10)]
-            results = [p["var2"] #p.nrow()
-#                        for p in table(where=110*i<=table.cols.var2<110*(i+1))]
-#                       for p in table(where=1000-30<table.cols.var2<1000+60)]
-                       for p in table(where=table.cols.var2<=400)]
+            #results = [table.row.nrow() for i in table(where=2<=table.cols.var2 < 10)]
+            results = [p["var2"]  # p.nrow()
+                       # for p in table(where=110*i<=table.cols.var2<110*(i+1))]
+                       # for p in table(where=1000-30<table.cols.var2<1000+60)]
+                       for p in table(where=table.cols.var2 <= 400)]
             rowselected += len(results)
     elif atom == "float":
-        for i in xrange(niter):
+        for i in range(niter):
 #         results = [(table.row.nrow(), table.row["var3"])
 #                    for i in table(where=3<=table.cols.var3 < 5.)]
 #             results = [(p.nrow(), p["var3"])
-#                        for p in table(where=1000.-i<=table.cols.var3<1000.+i)]
-            results = [p["var3"] # (p.nrow(), p["var3"])
-                       for p in table(where=100*i<=table.cols.var3<100*(i+1))]
+            # for p in table(where=1000.-i<=table.cols.var3<1000.+i)]
+            results = [
+                p["var3"]  # (p.nrow(), p["var3"])
+                for p in table(
+                    where=100 * i <= table.cols.var3 < 100 * (i + 1))
+            ]
 #                        for p in table
 #                        if 100*i<=p["var3"]<100*(i+1)]
 #             results = [ (p.nrow(), p["var3"]) for p in table
@@ -128,8 +137,8 @@ def readFile(filename, atom, niter, verbose):
         else:
             raise ValueError("Unsuported atom value")
     if verbose and 1:
-        print "Values that fullfill the conditions:"
-        print results
+        print("Values that fullfill the conditions:")
+        print(results)
 
     rowsread = table.nrows * niter
     rowsize = table.rowsize
@@ -143,7 +152,7 @@ def readFile(filename, atom, niter, verbose):
 def searchFile(filename, atom, verbose, item):
     # Open the HDF5 file in read-only mode
 
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     rowsread = 0
     uncomprBytes = 0
     table = fileh.root.table
@@ -153,25 +162,25 @@ def searchFile(filename, atom, verbose, item):
         idxcol = table.cols.var3.index
     else:
         raise ValueError("Unsuported atom value")
-    print "Searching", table, "..."
+    print("Searching", table, "...")
     if verbose:
-        print "Chunk size:", idxcol.sorted.chunksize
-        print "Number of elements per slice:", idxcol.sorted.nelemslice
-        print "Slice number in", table._v_pathname, ":", idxcol.sorted.nrows
+        print("Chunk size:", idxcol.sorted.chunksize)
+        print("Number of elements per slice:", idxcol.sorted.nelemslice)
+        print("Slice number in", table._v_pathname, ":", idxcol.sorted.nrows)
 
     (positions, niter) = idxcol.search(item)
     if verbose:
-        print "Positions for item", item, "==>", positions
-        print "Total iterations in search:", niter
+        print("Positions for item", item, "==>", positions)
+        print("Total iterations in search:", niter)
 
     rowsread += table.nrows
     uncomprBytes += idxcol.sorted.chunksize * niter * idxcol.sorted.itemsize
 
     results = table.read(coords=positions)
-    print "results length:", len(results)
+    print("results length:", len(results))
     if verbose:
-        print "Values that fullfill the conditions:"
-        print results
+        print("Values that fullfill the conditions:")
+        print(results)
 
     # Close the file (eventually destroy the extended type)
     fileh.close()
@@ -179,7 +188,7 @@ def searchFile(filename, atom, verbose, item):
     return (rowsread, uncomprBytes, niter)
 
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
     try:
@@ -280,11 +289,11 @@ if __name__=="__main__":
     file = pargs[0]
 
     if testwrite:
-        print "Compression level:", complevel
+        print("Compression level:", complevel)
         if complevel > 0:
-            print "Compression library:", complib
+            print("Compression library:", complib)
             if shuffle:
-                print "Suffling..."
+                print("Suffling...")
         t1 = time.time()
         cpu1 = time.clock()
         if psyco_imported and usepsyco:
@@ -293,14 +302,14 @@ if __name__=="__main__":
                                     atom, recsize, index, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        tapprows = round(t2-t1, 3)
-        cpuapprows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpuapprows/tapprows, 2)*100)
-        print "Rows written:", rowsw, " Row size:", rowsz
-        print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-              (tapprows, cpuapprows, tpercent)
-        print "Write rows/sec: ", int(rowsw / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
+        tapprows = round(t2 - t1, 3)
+        cpuapprows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+        print("Rows written:", rowsw, " Row size:", rowsz)
+        print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+              (tapprows, cpuapprows, tpercent))
+        print("Write rows/sec: ", int(rowsw / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
 
     if testread:
         if psyco_imported and usepsyco:
@@ -315,17 +324,17 @@ if __name__=="__main__":
                 (rowsr, rowsel, rowsz) = readFile(file, atom, niter, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        treadrows = round(t2-t1, 3)
-        cpureadrows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
-        tMrows = rowsr/(1000*1000.)
-        sKrows = rowsel/1000.
-        print "Rows read:", rowsr, "Mread:", round(tMrows, 3), "Mrows"
-        print "Rows selected:", rowsel, "Ksel:", round(sKrows, 3), "Krows"
-        print "Time reading rows: %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
-        print "Read Mrows/sec: ", round(tMrows / float(treadrows), 3)
-        #print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
+        treadrows = round(t2 - t1, 3)
+        cpureadrows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
+        tMrows = rowsr / (1000 * 1000.)
+        sKrows = rowsel / 1000.
+        print("Rows read:", rowsr, "Mread:", round(tMrows, 3), "Mrows")
+        print("Rows selected:", rowsel, "Ksel:", round(sKrows, 3), "Krows")
+        print("Time reading rows: %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
+        print("Read Mrows/sec: ", round(tMrows / float(treadrows), 3))
+        # print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
 #       print "Uncompr MB :", int(uncomprB / (1024 * 1024))
 #       print "Uncompr MB/s :", int(uncomprB / (treadrows * 1024 * 1024))
 #       print "Total chunks uncompr :", int(niter)
diff --git a/bench/shelve-bench.py b/bench/shelve-bench.py
index b0d0db7..4a891c0 100644
--- a/bench/shelve-bench.py
+++ b/bench/shelve-bench.py
@@ -1,52 +1,65 @@
 #!/usr/bin/env python
 
+from __future__ import print_function
 from tables import *
-import numarray as NA
-import struct, sys
+import numpy as NA
+import struct
+import sys
 import shelve
 import psyco
 
 # This class is accessible only for the examples
+
+
 class Small(IsDescription):
-    """ A record has several columns. They are represented here as
-    class attributes, whose names are the column names and their
-    values will become their types. The IsDescription class will take care
-    the user will not add any new variables and that its type is
-    correct."""
+
+    """Record descriptor.
+
+    A record has several columns. They are represented here as class
+    attributes, whose names are the column names and their values will
+    become their types. The IsDescription class ensures that the user
+    does not add any new variables and that their types are correct.
+
+    """
 
     var1 = StringCol(itemsize=4)
     var2 = Int32Col()
     var3 = Float64Col()
 
 # Define a user record to characterize some kind of particles
+
+
 class Medium(IsDescription):
-    name        = StringCol(itemsize=16)  # 16-character String
-    float1      = Float64Col(shape=2, dflt=2.3)
-    #float1      = Float64Col(dflt=1.3)
-    #float2      = Float64Col(dflt=2.3)
-    ADCcount    = Int16Col()    # signed short integer
-    grid_i      = Int32Col()    # integer
-    grid_j      = Int32Col()    # integer
-    pressure    = Float32Col()    # float  (single-precision)
-    energy      = Flaot64Col()    # double (double-precision)
+    name = StringCol(itemsize=16)   # 16-character String
+    float1 = Float64Col(shape=2, dflt=2.3)
+    #float1 = Float64Col(dflt=1.3)
+    #float2 = Float64Col(dflt=2.3)
+    ADCcount = Int16Col()           # signed short integer
+    grid_i = Int32Col()             # integer
+    grid_j = Int32Col()             # integer
+    pressure = Float32Col()         # float  (single-precision)
+    energy = Float64Col()           # double (double-precision)
 
 # Define a user record to characterize some kind of particles
+
+
 class Big(IsDescription):
-    name        = StringCol(itemsize=16)  # 16-character String
-    #float1      = Float64Col(shape=32, dflt=NA.arange(32))
-    #float2      = Float64Col(shape=32, dflt=NA.arange(32))
-    float1      = Float64Col(shape=32, dflt=range(32))
-    float2      = Float64Col(shape=32, dflt=[2.2]*32)
-    ADCcount    = Int16Col()    # signed short integer
-    grid_i      = Int32Col()    # integer
-    grid_j      = Int32Col()    # integer
-    pressure    = Float32Col()    # float  (single-precision)
-    energy      = Float64Col()    # double (double-precision)
+    name = StringCol(itemsize=16)   # 16-character String
+    #float1 = Float64Col(shape=32, dflt=NA.arange(32))
+    #float2 = Float64Col(shape=32, dflt=NA.arange(32))
+    float1 = Float64Col(shape=32, dflt=range(32))
+    float2 = Float64Col(shape=32, dflt=[2.2] * 32)
+    ADCcount = Int16Col()           # signed short integer
+    grid_i = Int32Col()             # integer
+    grid_j = Int32Col()             # integer
+    pressure = Float32Col()         # float  (single-precision)
+    energy = Float64Col()           # double (double-precision)
+
 
 def createFile(filename, totalrows, recsize):
 
     # Open a 'n'ew file
-    fileh = shelve.open(filename, flag = "n")
+    fileh = shelve.open(filename, flag="n")
 
     rowswritten = 0
     # Get the record object associated with the new table
@@ -58,19 +71,19 @@ def createFile(filename, totalrows, recsize):
         d = Medium()
     else:
         d = Small()
-    #print d
-    #sys.exit(0)
+    # print d
+    # sys.exit(0)
     for j in range(3):
         # Create a table
-        #table = fileh.create_table(group, 'tuple'+str(j), Record(), title,
+        # table = fileh.create_table(group, 'tuple'+str(j), Record(), title,
         #                          compress = 6, expectedrows = totalrows)
         # Create a Table instance
-        tablename = 'tuple'+str(j)
+        tablename = 'tuple' + str(j)
         table = []
         # Fill the table
         if recsize == "big" or recsize == "medium":
-            for i in xrange(totalrows):
-                d.name  = 'Particle: %6d' % (i)
+            for i in range(totalrows):
+                d.name = 'Particle: %6d' % (i)
                 #d.TDCcount = i % 256
                 d.ADCcount = (i * 256) % (1 << 16)
                 if recsize == "big":
@@ -82,20 +95,20 @@ def createFile(filename, totalrows, recsize):
                     d.float2 = arr2
                     pass
                 else:
-                    d.float1 = NA.array([i**2]*2, NA.Float64)
+                    d.float1 = NA.array([i ** 2] * 2, NA.float64)
                     #d.float1 = float(i)
                     #d.float2 = float(i)
                 d.grid_i = i
                 d.grid_j = 10 - i
-                d.pressure = float(i*i)
+                d.pressure = float(i * i)
                 d.energy = float(d.pressure ** 4)
                 table.append((d.ADCcount, d.energy, d.float1, d.float2,
                               d.grid_i, d.grid_j, d.name, d.pressure))
                 # Only on float case
-                #table.append((d.ADCcount, d.energy, d.float1,
+                # table.append((d.ADCcount, d.energy, d.float1,
                 #              d.grid_i, d.grid_j, d.name, d.pressure))
         else:
-            for i in xrange(totalrows):
+            for i in range(totalrows):
                 d.var1 = str(i)
                 d.var2 = i
                 d.var3 = 12.1e10
@@ -105,31 +118,31 @@ def createFile(filename, totalrows, recsize):
         fileh[tablename] = table
         rowswritten += totalrows
 
-
     # Close the file
     fileh.close()
     return (rowswritten, struct.calcsize(d._v_fmt))
 
+
 def readFile(filename, recsize):
     # Open the HDF5 file in read-only mode
     fileh = shelve.open(filename, "r")
     for table in ['tuple0', 'tuple1', 'tuple2']:
         if recsize == "big" or recsize == "medium":
-            e = [ t[2] for t in fileh[table] if t[4] < 20 ]
+            e = [t[2] for t in fileh[table] if t[4] < 20]
             # if there is only one float (array)
             #e = [ t[1] for t in fileh[table] if t[3] < 20 ]
         else:
-            e = [ t[1] for t in fileh[table] if t[1] < 20 ]
+            e = [t[1] for t in fileh[table] if t[1] < 20]
 
-        print "resulting selection list ==>", e
-        print "Total selected records ==> ", len(e)
+        print("resulting selection list ==>", e)
+        print("Total selected records ==> ", len(e))
 
     # Close the file (eventually destroy the extended type)
     fileh.close()
 
 
 # Add code to test here
-if __name__=="__main__":
+if __name__ == "__main__":
     import getopt
     import time
 
@@ -169,18 +182,18 @@ if __name__=="__main__":
     psyco.bind(createFile)
     (rowsw, rowsz) = createFile(file, iterations, recsize)
     t2 = time.clock()
-    tapprows = round(t2-t1, 3)
+    tapprows = round(t2 - t1, 3)
 
     t1 = time.clock()
     psyco.bind(readFile)
     readFile(file, recsize)
     t2 = time.clock()
-    treadrows = round(t2-t1, 3)
-
-    print "Rows written:", rowsw, " Row size:", rowsz
-    print "Time appending rows:", tapprows
-    print "Write rows/sec: ", int(iterations * 3/ float(tapprows))
-    print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
-    print "Time reading rows:", treadrows
-    print "Read rows/sec: ", int(iterations * 3/ float(treadrows))
-    print "Read KB/s :", int(rowsw * rowsz / (treadrows * 1024))
+    treadrows = round(t2 - t1, 3)
+
+    print("Rows written:", rowsw, " Row size:", rowsz)
+    print("Time appending rows:", tapprows)
+    print("Write rows/sec: ", int(iterations * 3 / float(tapprows)))
+    print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
+    print("Time reading rows:", treadrows)
+    print("Read rows/sec: ", int(iterations * 3 / float(treadrows)))
+    print("Read KB/s :", int(rowsw * rowsz / (treadrows * 1024)))
diff --git a/bench/sqlite-search-bench.py b/bench/sqlite-search-bench.py
index e113fca..995fe0e 100644
--- a/bench/sqlite-search-bench.py
+++ b/bench/sqlite-search-bench.py
@@ -1,4 +1,6 @@
 #!/usr/bin/python
+
+from __future__ import print_function
 import sqlite
 import random
 import time
@@ -6,54 +8,53 @@ import sys
 import os
 import os.path
 from tables import *
-import numarray
-from numarray import strings
-from numarray import random_array
+import numpy as np
 
 randomvalues = 0
 standarddeviation = 10000
 # Initialize the random generator always with the same integer
 # in order to have reproductible results
 random.seed(19)
-random_array.seed(19, 20)
+np.random.seed((19, 20))
 
 # defaults
 psycon = 0
 worst = 0
 
+
 def createNewBenchFile(bfile, verbose):
 
     class Create(IsDescription):
-        nrows   = Int32Col(pos=0)
-        irows   = Int32Col(pos=1)
-        tfill   = Float64Col(pos=2)
-        tidx    = Float64Col(pos=3)
-        tcfill  = Float64Col(pos=4)
-        tcidx   = Float64Col(pos=5)
+        nrows = Int32Col(pos=0)
+        irows = Int32Col(pos=1)
+        tfill = Float64Col(pos=2)
+        tidx = Float64Col(pos=3)
+        tcfill = Float64Col(pos=4)
+        tcidx = Float64Col(pos=5)
         rowsecf = Float64Col(pos=6)
         rowseci = Float64Col(pos=7)
-        fsize   = Float64Col(pos=8)
-        isize   = Float64Col(pos=9)
-        psyco   = BoolCol(pos=10)
+        fsize = Float64Col(pos=8)
+        isize = Float64Col(pos=9)
+        psyco = BoolCol(pos=10)
 
     class Search(IsDescription):
-        nrows   = Int32Col(pos=0)
-        rowsel  = Int32Col(pos=1)
-        time1   = Float64Col(pos=2)
-        time2   = Float64Col(pos=3)
-        tcpu1   = Float64Col(pos=4)
-        tcpu2   = Float64Col(pos=5)
+        nrows = Int32Col(pos=0)
+        rowsel = Int32Col(pos=1)
+        time1 = Float64Col(pos=2)
+        time2 = Float64Col(pos=3)
+        tcpu1 = Float64Col(pos=4)
+        tcpu2 = Float64Col(pos=5)
         rowsec1 = Float64Col(pos=6)
         rowsec2 = Float64Col(pos=7)
-        psyco   = BoolCol(pos=8)
+        psyco = BoolCol(pos=8)
 
     if verbose:
-        print "Creating a new benchfile:", bfile
+        print("Creating a new benchfile:", bfile)
     # Open the benchmarking file
     bf = open_file(bfile, "w")
     # Create groups
     for recsize in ["sqlite_small"]:
-        group = bf.create_group("/", recsize, recsize+" Group")
+        group = bf.create_group("/", recsize, recsize + " Group")
         # Attach the row size of table as attribute
         if recsize == "small":
             group._v_attrs.rowsize = 16
@@ -64,27 +65,31 @@ def createNewBenchFile(bfile, verbose):
         groupS = bf.create_group(group, "search", "Search Group")
         # Create Tables for searching
         for mode in ["indexed", "standard"]:
-            group = bf.create_group(groupS, mode, mode+" Group")
+            group = bf.create_group(groupS, mode, mode + " Group")
             # for searching bench
-            #for atom in ["string", "int", "float", "bool"]:
+            # for atom in ["string", "int", "float", "bool"]:
             for atom in ["string", "int", "float"]:
-                bf.create_table(group, atom, Search, atom+" bench")
+                bf.create_table(group, atom, Search, atom + " bench")
     bf.close()
 
+
 def createFile(filename, nrows, filters, indexmode, heavy, noise, bfile,
                verbose):
 
     # Initialize some variables
-    t1      = 0.; t2      = 0.
-    tcpu1   = 0.; tcpu2   = 0.
-    rowsecf = 0.; rowseci = 0.
-    size1   = 0.; size2   = 0.
-
+    t1 = 0.
+    t2 = 0.
+    tcpu1 = 0.
+    tcpu2 = 0.
+    rowsecf = 0.
+    rowseci = 0.
+    size1 = 0.
+    size2 = 0.
 
     if indexmode == "standard":
-        print "Creating a new database:", dbfile
-        instd=os.popen("/usr/local/bin/sqlite "+dbfile, "w")
-        CREATESTD="""
+        print("Creating a new database:", dbfile)
+        instd = os.popen("/usr/local/bin/sqlite " + dbfile, "w")
+        CREATESTD = """
 CREATE TABLE small (
 -- Name         Type            -- Example
 ---------------------------------------
@@ -94,7 +99,7 @@ var2            INTEGER,        -- 111
 var3            FLOAT        --  12.32
 );
 """
-        CREATEIDX="""
+        CREATEIDX = """
 CREATE TABLE small (
 -- Name         Type            -- Example
 ---------------------------------------
@@ -114,7 +119,7 @@ CREATE INDEX ivar3 ON small(var3);
     conn = sqlite.connect(dbfile)
     cursor = conn.cursor()
     if indexmode == "standard":
-        place_holders = ",".join(['%s']*3)
+        place_holders = ",".join(['%s'] * 3)
         # Insert rows
         SQL = "insert into small values(NULL, %s)" % place_holders
         time1 = time.time()
@@ -123,33 +128,34 @@ CREATE INDEX ivar3 ON small(var3);
         nrowsbuf = 1000
         minimum = 0
         maximum = nrows
-        for i in xrange(0, nrows, nrowsbuf):
-            if i+nrowsbuf > nrows:
+        for i in range(0, nrows, nrowsbuf):
+            if i + nrowsbuf > nrows:
                 j = nrows
             else:
-                j = i+nrowsbuf
+                j = i + nrowsbuf
             if randomvalues:
-                var3 = random_array.uniform(minimum, maximum, shape=[j-i])
+                var3 = np.random.uniform(minimum, maximum, size=j - i)
             else:
-                var3 = numarray.arange(i, j, type=numarray.Float64)
+                var3 = np.arange(i, j, dtype=np.float64)
                 if noise:
-                    var3 += random_array.uniform(-3, 3, shape=[j-i])
-            var2 = numarray.array(var3, type=numarray.Int32)
-            var1 = strings.array(None, shape=[j-i], itemsize=4)
+                    var3 += np.random.uniform(-3, 3, size=j - i)
+            var2 = np.array(var3, dtype=np.int32)
+            var1 = np.empty(j - i, dtype='S4')
             if not heavy:
-                for n in xrange(j-i):
+                for n in range(j - i):
                     var1[n] = str("%.4s" % var2[n])
-            for n in xrange(j-i):
+            for n in range(j - i):
                 fields = (var1[n], var2[n], var3[n])
                 cursor.execute(SQL, fields)
             conn.commit()
-        t1 = round(time.time()-time1, 5)
-        tcpu1 = round(time.clock()-cpu1, 5)
-        rowsecf = nrows/t1
+        t1 = round(time.time() - time1, 5)
+        tcpu1 = round(time.clock() - cpu1, 5)
+        rowsecf = nrows / t1
         size1 = os.stat(dbfile)[6]
-        print "******** Results for writing nrows = %s" % (nrows), "*********"
-        print "Insert time:", t1, ", KRows/s:", round((nrows/10.**3)/t1, 3),
-        print ", File size:", round(size1/(1024.*1024.), 3), "MB"
+        print("******** Results for writing nrows = %s" % (nrows), "*********")
+        print(("Insert time:", t1, ", KRows/s:",
+              round((nrows / 10. ** 3) / t1, 3),))
+        print(", File size:", round(size1 / (1024. * 1024.), 3), "MB")
 
     # Indexem
     if indexmode == "indexed":
@@ -162,12 +168,14 @@ CREATE INDEX ivar3 ON small(var3);
         conn.commit()
         cursor.execute("CREATE INDEX ivar3 ON small(var3)")
         conn.commit()
-        t2 = round(time.time()-time1, 5)
-        tcpu2 = round(time.clock()-cpu1, 5)
-        rowseci = nrows/t2
-        print "Index time:", t2, ", IKRows/s:", round((nrows/10.**3)/t2, 3),
+        t2 = round(time.time() - time1, 5)
+        tcpu2 = round(time.clock() - cpu1, 5)
+        rowseci = nrows / t2
+        print(("Index time:", t2, ", IKRows/s:",
+              round((nrows / 10. ** 3) / t2, 3),))
         size2 = os.stat(dbfile)[6] - size1
-        print ", Final size with index:", round(size2/(1024.*1024), 3), "MB"
+        print((", Final size with index:",
+              round(size2 / (1024. * 1024), 3), "MB"))
 
     conn.close()
 
@@ -175,13 +183,13 @@ CREATE INDEX ivar3 ON small(var3);
     bf = open_file(bfile, "a")
     recsize = "sqlite_small"
     if indexmode == "indexed":
-        table = bf.get_node("/"+recsize+"/create_indexed")
+        table = bf.get_node("/" + recsize + "/create_indexed")
     else:
-        table = bf.get_node("/"+recsize+"/create_standard")
+        table = bf.get_node("/" + recsize + "/create_standard")
     table.row["nrows"] = nrows
     table.row["irows"] = nrows
     table.row["tfill"] = t1
-    table.row["tidx"]  = t2
+    table.row["tidx"] = t2
     table.row["tcfill"] = tcpu1
     table.row["tcidx"] = tcpu2
     table.row["psyco"] = psycon
@@ -194,6 +202,7 @@ CREATE INDEX ivar3 ON small(var3);
 
     return
 
+
 def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
     # Connect to the database.
     conn = sqlite.connect(db=dbfile, mode=755)
@@ -216,50 +225,50 @@ def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
 
     # Open the benchmark database
     bf = open_file(bfile, "a")
-    #default values for the case that columns are not indexed
+    # default values for the case that columns are not indexed
     t2 = 0.
     tcpu2 = 0.
     # Some previous computations for the case of random values
     if randomvalues:
         # algorithm to choose a value separated from mean
-#         # If want to select fewer values, select this
+#         # If we want to select fewer values, select this
 #         if nrows/2 > standarddeviation*3:
-#             # Choose five standard deviations away from mean value
+#             # Choose five standard deviations away from the mean value
 #             dev = standarddeviation*5
-#             #dev = standarddeviation*math.log10(nrows/1000.)
+#             # dev = standarddeviation*math.log10(nrows/1000.)
 
         # This algorithm give place to too asymmetric result values
 #         if standarddeviation*10 < nrows/2:
-#             # Choose four standard deviations away from mean value
+#             # Choose four standard deviations away from the mean value
 #             dev = standarddeviation*4
 #         else:
 #             dev = 100
         # Yet Another Algorithm
-        if nrows/2 > standarddeviation*10:
-            dev = standarddeviation*4.
-        elif nrows/2 > standarddeviation:
-            dev = standarddeviation*2.
-        elif nrows/2 > standarddeviation/10.:
-            dev = standarddeviation/10.
+        if nrows / 2 > standarddeviation * 10:
+            dev = standarddeviation * 4.
+        elif nrows / 2 > standarddeviation:
+            dev = standarddeviation * 2.
+        elif nrows / 2 > standarddeviation / 10.:
+            dev = standarddeviation / 10.
         else:
-            dev = standarddeviation/100.
+            dev = standarddeviation / 100.
 
-        valmax = int(round((nrows/2.)-dev))
+        valmax = int(round((nrows / 2.) - dev))
         # split the selection range in regular chunks
-        if riter > valmax*2:
-            riter = valmax*2
-        chunksize = (valmax*2/riter)*10
+        if riter > valmax * 2:
+            riter = valmax * 2
+        chunksize = (valmax * 2 / riter) * 10
         # Get a list of integers for the intervals
         randlist = range(0, valmax, chunksize)
-        randlist.extend(range(nrows-valmax, nrows, chunksize))
+        randlist.extend(range(nrows - valmax, nrows, chunksize))
         # expand the list ten times so as to use the cache
-        randlist = randlist*10
+        randlist = randlist * 10
         # shuffle the list
         random.shuffle(randlist)
         # reset the value of chunksize
-        chunksize = chunksize/10
-        #print "chunksize-->", chunksize
-        #randlist.sort();print "randlist-->", randlist
+        chunksize = chunksize / 10
+        # print "chunksize-->", chunksize
+        # randlist.sort();print "randlist-->", randlist
     else:
         chunksize = 3
     if heavy:
@@ -272,7 +281,7 @@ def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
         time2 = 0
         cpu2 = 0
         rowsel = 0
-        for i in xrange(riter):
+        for i in range(riter):
             rnd = random.randrange(nrows)
             time1 = time.time()
             cpu1 = time.clock()
@@ -281,12 +290,13 @@ def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
                 cursor.execute(SQL1, str(rnd)[-4:])
             elif atom == "int":
                 #cursor.execute(SQL2 % (rnd, rnd+3))
-                cursor.execute(SQL2 % (rnd, rnd+dselect))
+                cursor.execute(SQL2 % (rnd, rnd + dselect))
             elif atom == "float":
                 #cursor.execute(SQL3 % (float(rnd), float(rnd+3)))
-                cursor.execute(SQL3 % (float(rnd), float(rnd+dselect)))
+                cursor.execute(SQL3 % (float(rnd), float(rnd + dselect)))
             else:
-                raise ValueError("atom must take a value in ['string','int','float']")
+                raise ValueError(
+                    "atom must take a value in ['string','int','float']")
             if i == 0:
                 t1 = time.time() - time1
                 tcpu1 = time.clock() - cpu1
@@ -306,20 +316,21 @@ def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
                 correction = 5
             else:
                 correction = 1
-            t2 = time2/(riter-correction)
-            tcpu2 = cpu2/(riter-correction)
+            t2 = time2 / (riter - correction)
+            tcpu2 = cpu2 / (riter - correction)
 
-        print "*** Query results for atom = %s, nrows = %s, indexmode = %s ***" % (atom, nrows, indexmode)
-        print "Query time:", round(t1, 5), ", cached time:", round(t2, 5)
-        print "MRows/s:", round((nrows/10.**6)/t1, 3),
+        print(("*** Query results for atom = %s, nrows = %s, "
+              "indexmode = %s ***" % (atom, nrows, indexmode)))
+        print("Query time:", round(t1, 5), ", cached time:", round(t2, 5))
+        print("MRows/s:", round((nrows / 10. ** 6) / t1, 3), end=' ')
         if t2 > 0:
-            print ", cached MRows/s:", round((nrows/10.**6)/t2, 3)
+            print(", cached MRows/s:", round((nrows / 10. ** 6) / t2, 3))
         else:
-            print
+            print()
 
         # Collect benchmark data
         recsize = "sqlite_small"
-        tablepath = "/"+recsize+"/search/"+indexmode+"/"+atom
+        tablepath = "/" + recsize + "/search/" + indexmode + "/" + atom
         table = bf.get_node(tablepath)
         table.row["nrows"] = nrows
         table.row["rowsel"] = rowsel
@@ -328,9 +339,9 @@ def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
         table.row["tcpu1"] = tcpu1
         table.row["tcpu2"] = tcpu2
         table.row["psyco"] = psycon
-        table.row["rowsec1"] = nrows/t1
+        table.row["rowsec1"] = nrows / t1
         if t2 > 0:
-            table.row["rowsec2"] = nrows/t2
+            table.row["rowsec2"] = nrows / t2
         table.row.append()
         table.flush()  # Flush the data
 
@@ -340,7 +351,7 @@ def readFile(dbfile, nrows, indexmode, heavy, dselect, bfile, riter):
 
     return
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import getopt
     try:
         import psyco
@@ -410,16 +421,18 @@ if __name__=="__main__":
         elif option[0] == '-m':
             indexmode = option[1]
             if indexmode not in supported_imodes:
-                raise ValueError("Indexmode should be any of '%s' and you passed '%s'" % (supported_imodes, indexmode))
+                raise ValueError(
+                    "Indexmode should be any of '%s' and you passed '%s'" %
+                    (supported_imodes, indexmode))
         elif option[0] == '-n':
-            nrows = int(float(option[1])*1000)
+            nrows = int(float(option[1]) * 1000)
         elif option[0] == '-d':
             dselect = float(option[1])
         elif option[0] == '-k':
             riter = int(option[1])
 
     # remaining parameters
-    dbfile=pargs[0]
+    dbfile = pargs[0]
 
     if worst:
         nrows -= 1  # the worst case
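
The delicate part of the port above is numarray to numpy: the two
libraries spell their array-creation keywords differently, which is why
the replacement lines change more than the module name.  A hypothetical
side-by-side of the renames this file relies on (the numpy spellings
are the ones that exist today):

    import numpy as np

    n = 10
    var3 = np.random.uniform(0, n, size=n)     # numarray: uniform(..., shape=[n])
    base = np.arange(0, n, dtype=np.float64)   # numarray: type=numarray.Float64
    var2 = np.array(var3, dtype=np.int32)      # numarray: type=numarray.Int32
    var1 = np.empty(n, dtype='S4')             # numarray: strings.array(itemsize=4)
    for k in range(n):
        var1[k] = str(var2[k])[:4]             # same per-row fill as the script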
diff --git a/bench/sqlite3-search-bench.py b/bench/sqlite3-search-bench.py
index 8dc1aee..60c418a 100644
--- a/bench/sqlite3-search-bench.py
+++ b/bench/sqlite3-search-bench.py
@@ -1,4 +1,6 @@
-import os, os.path
+from __future__ import print_function
+import os
+import os.path
 from time import time
 import numpy
 import random
@@ -6,21 +8,24 @@ import random
 # in order to always generate the same random sequence
 random.seed(19)
 
+
 def fill_arrays(start, stop):
     col_i = numpy.arange(start, stop, dtype=numpy.int32)
     if userandom:
-        col_j = numpy.random.uniform(0, nrows, stop-start)
+        col_j = numpy.random.uniform(0, nrows, stop - start)
     else:
         col_j = numpy.array(col_i, dtype=numpy.float64)
     return col_i, col_j
 
 # Generator for ensure pytables benchmark compatibility
+
+
 def int_generator(nrows):
-    step = 1000*100
+    step = 1000 * 100
     j = 0
-    for i in xrange(nrows):
-        if i >= step*j:
-            stop = (j+1)*step
+    for i in range(nrows):
+        if i >= step * j:
+            stop = (j + 1) * step
             if stop > nrows:  # Seems unnecessary
                 stop = nrows
             col_i, col_j = fill_arrays(i, stop)
@@ -29,13 +34,15 @@ def int_generator(nrows):
         yield (col_i[k], col_j[k])
         k += 1
 
+
 def int_generator_slow(nrows):
-    for i in xrange(nrows):
+    for i in range(nrows):
         if userandom:
             yield (i, float(random.randint(0, nrows)))
         else:
             yield (i, float(i))
 
+
 def open_db(filename, remove=0):
     if remove and os.path.exists(filename):
         os.remove(filename)
@@ -43,55 +50,60 @@ def open_db(filename, remove=0):
     cur = con.cursor()
     return con, cur
 
+
 def create_db(filename, nrows):
     con, cur = open_db(filename, remove=1)
     cur.execute("create table ints(i integer, j real)")
-    t1=time()
+    t1 = time()
     # This is twice as fast as a plain loop
     cur.executemany("insert into ints(i,j) values (?,?)", int_generator(nrows))
     con.commit()
-    ctime = time()-t1
+    ctime = time() - t1
     if verbose:
-        print "insert time:", round(ctime, 5)
-        print "Krows/s:", round((nrows/1000.)/ctime, 5)
+        print("insert time:", round(ctime, 5))
+        print("Krows/s:", round((nrows / 1000.) / ctime, 5))
     close_db(con, cur)
 
+
 def index_db(filename):
     con, cur = open_db(filename)
-    t1=time()
+    t1 = time()
     cur.execute("create index ij on ints(j)")
     con.commit()
-    itime = time()-t1
+    itime = time() - t1
     if verbose:
-        print "index time:", round(itime, 5)
-        print "Krows/s:", round(nrows/itime, 5)
+        print("index time:", round(itime, 5))
+        print("Krows/s:", round(nrows / itime, 5))
     # Close the DB
     close_db(con, cur)
 
+
 def query_db(filename, rng):
     con, cur = open_db(filename)
-    t1=time()
+    t1 = time()
     ntimes = 10
     for i in range(ntimes):
         # between clause does not seem to take advantage of indexes
-        #cur.execute("select j from ints where j between %s and %s" % \
-        cur.execute("select i from ints where j >= %s and j <= %s" % \
-        #cur.execute("select i from ints where i >= %s and i <= %s" % \
-                    (rng[0]+i, rng[1]+i))
+        # cur.execute("select j from ints where j between %s and %s" % \
+        cur.execute("select i from ints where j >= %s and j <= %s" %
+                    # cur.execute("select i from ints where i >= %s and i <=
+                    # %s" %
+                    (rng[0] + i, rng[1] + i))
         results = cur.fetchall()
     con.commit()
-    qtime = (time()-t1)/ntimes
+    qtime = (time() - t1) / ntimes
     if verbose:
-        print "query time:", round(qtime, 5)
-        print "Mrows/s:", round((nrows/1000.)/qtime, 5)
-        print results
+        print("query time:", round(qtime, 5))
+        print("Mrows/s:", round((nrows / 1000.) / qtime, 5))
+        print(results)
     close_db(con, cur)
 
+
 def close_db(con, cur):
     cur.close()
     con.close()
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
     try:
@@ -159,13 +171,13 @@ if __name__=="__main__":
         from pysqlite2 import dbapi2 as sqlite
 
     if verbose:
-        print "pysqlite version:", sqlite.version
+        print("pysqlite version:", sqlite.version)
         if userandom:
-            print "using random values"
+            print("using random values")
 
     if docreate:
         if verbose:
-            print "writing %s krows" % nrows
+            print("writing %s krows" % nrows)
         if psyco_imported and usepsyco:
             psyco.bind(create_db)
         nrows *= 1000
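
sqlite3-search-bench.py mixes two query styles: create_db() binds
values through "?" placeholders, letting the driver escape values and
reuse the statement, while query_db() builds the range query with
Python %-formatting.  A minimal sketch of the placeholder style against
the standard sqlite3 module (an assumption here; the script falls back
to pysqlite2 when needed):

    import sqlite3

    con = sqlite3.connect(':memory:')
    cur = con.cursor()
    cur.execute("create table ints(i integer, j real)")
    cur.executemany("insert into ints(i,j) values (?,?)",
                    ((i, float(i)) for i in range(100)))   # bound parameters
    cur.execute("select i from ints where j >= ? and j <= ?", (10.0, 15.0))
    print(cur.fetchall())
    con.close()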
diff --git a/bench/stress-test.py b/bench/stress-test.py
index c2df2dd..109e7a4 100644
--- a/bench/stress-test.py
+++ b/bench/stress-test.py
@@ -1,8 +1,13 @@
-import sys, time, gc  # , types
+from __future__ import print_function
+import gc
+import sys
+import time
+#import types
 import numpy
 from tables import Group  # , MetaIsDescription
 from tables import *
 
+
 class Test(IsDescription):
     ngroup = Int32Col(pos=1)
     ntable = Int32Col(pos=2)
@@ -13,7 +18,8 @@ TestDict = {
     "ngroup": Int32Col(pos=1),
     "ntable": Int32Col(pos=2),
     "nrow": Int32Col(pos=3),
-    }
+}
+
 
 def createFileArr(filename, ngroups, ntables, nrows):
 
@@ -24,35 +30,36 @@ def createFileArr(filename, ngroups, ntables, nrows):
 
     for k in range(ngroups):
         # Create the group
-        fileh.create_group("/", 'group%04d'% k, "Group %d" % k)
+        fileh.create_group("/", 'group%04d' % k, "Group %d" % k)
 
     fileh.close()
 
     # Now, create the arrays
     arr = numpy.arange(nrows)
     for k in range(ngroups):
-        fileh = open_file(filename, mode="a", root_uep='group%04d'% k)
+        fileh = open_file(filename, mode="a", root_uep='group%04d' % k)
         for j in range(ntables):
             # Create the array
-            fileh.create_array("/", 'array%04d'% j, arr, "Array %d" % j)
+            fileh.create_array("/", 'array%04d' % j, arr, "Array %d" % j)
         fileh.close()
 
-    return (ngroups*ntables*nrows, 4)
+    return (ngroups * ntables * nrows, 4)
+
 
 def readFileArr(filename, ngroups, recsize, verbose):
 
     rowsread = 0
     for ngroup in range(ngroups):
-        fileh = open_file(filename, mode="r", root_uep='group%04d'% ngroup)
+        fileh = open_file(filename, mode="r", root_uep='group%04d' % ngroup)
         # Get the group
         group = fileh.root
         narrai = 0
         if verbose:
-            print "Group ==>", group
+            print("Group ==>", group)
         for arrai in fileh.list_nodes(group, 'Array'):
             if verbose > 1:
-                print "Array ==>", arrai
-                print "Rows in", arrai._v_pathname, ":", arrai.shape
+                print("Array ==>", arrai)
+                print("Rows in", arrai._v_pathname, ":", arrai.shape)
 
             arr = arrai.read()
 
@@ -62,7 +69,8 @@ def readFileArr(filename, ngroups, recsize, verbose):
         # Close the file (eventually destroy the extended type)
         fileh.close()
 
-    return (rowsread, 4, rowsread*4)
+    return (rowsread, 4, rowsread * 4)
+
 
 def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
@@ -73,7 +81,7 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     for k in range(ngroups):
         # Create the group
-        group = fileh.create_group("/", 'group%04d'% k, "Group %d" % k)
+        group = fileh.create_group("/", 'group%04d' % k, "Group %d" % k)
 
     fileh.close()
 
@@ -83,21 +91,21 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
         rowsize = 0
 
     for k in range(ngroups):
-        print "Filling tables in group:", k
-        fileh = open_file(filename, mode="a", root_uep='group%04d'% k)
+        print("Filling tables in group:", k)
+        fileh = open_file(filename, mode="a", root_uep='group%04d' % k)
         # Get the group
         group = fileh.root
         for j in range(ntables):
             # Create a table
-            #table = fileh.create_table(group, 'table%04d'% j, Test,
-            table = fileh.create_table(group, 'table%04d'% j, TestDict,
-                                      'Table%04d'%j,
-                                      complevel, complib, nrows)
+            # table = fileh.create_table(group, 'table%04d'% j, Test,
+            table = fileh.create_table(group, 'table%04d' % j, TestDict,
+                                       'Table%04d' % j,
+                                       complevel, complib, nrows)
             rowsize = table.rowsize
             # Get the row object associated with the new table
             row = table.row
             # Fill the table
-            for i in xrange(nrows):
+            for i in range(nrows):
                 row['ngroup'] = k
                 row['ntable'] = j
                 row['nrow'] = i
@@ -111,6 +119,7 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     return (rowswritten, rowsize)
 
+
 def readFile(filename, ngroups, recsize, verbose):
     # Open the HDF5 file in read-only mode
 
@@ -118,21 +127,21 @@ def readFile(filename, ngroups, recsize, verbose):
     buffersize = 0
     rowsread = 0
     for ngroup in range(ngroups):
-        fileh = open_file(filename, mode="r", root_uep='group%04d'% ngroup)
+        fileh = open_file(filename, mode="r", root_uep='group%04d' % ngroup)
         # Get the group
         group = fileh.root
         ntable = 0
         if verbose:
-            print "Group ==>", group
+            print("Group ==>", group)
         for table in fileh.list_nodes(group, 'Table'):
             rowsize = table.rowsize
-            buffersize=table.rowsize * table.nrowsinbuf
+            buffersize = table.rowsize * table.nrowsinbuf
             if verbose > 1:
-                print "Table ==>", table
-                print "Max rows in buf:", table.nrowsinbuf
-                print "Rows in", table._v_pathname, ":", table.nrows
-                print "Buffersize:", table.rowsize * table.nrowsinbuf
-                print "MaxTuples:", table.nrowsinbuf
+                print("Table ==>", table)
+                print("Max rows in buf:", table.nrowsinbuf)
+                print("Rows in", table._v_pathname, ":", table.nrows)
+                print("Buffersize:", table.rowsize * table.nrowsinbuf)
+                print("MaxTuples:", table.nrowsinbuf)
 
             nrow = 0
             if table.nrows > 0:  # only read if we have rows in tables
@@ -142,9 +151,9 @@ def readFile(filename, ngroups, recsize, verbose):
                         assert row["ntable"] == ntable
                         assert row["nrow"] == nrow
                     except:
-                        print "Error in group: %d, table: %d, row: %d" % \
-                              (ngroup, ntable, nrow)
-                        print "Record ==>", row
+                        print("Error in group: %d, table: %d, row: %d" %
+                              (ngroup, ntable, nrow))
+                        print("Record ==>", row)
                     nrow += 1
 
             assert nrow == table.nrows
@@ -156,7 +165,9 @@ def readFile(filename, ngroups, recsize, verbose):
 
     return (rowsread, rowsize, buffersize)
 
+
 class TrackRefs:
+
     """Object to track reference counts across test runs."""
 
     def __init__(self, verbose=0):
@@ -172,17 +183,17 @@ class TrackRefs:
             all = sys.getrefcount(o)
             t = type(o)
             if verbose:
-                #if t == types.TupleType:
+                # if t == types.TupleType:
                 if isinstance(o, Group):
-                #if isinstance(o, MetaIsDescription):
-                    print "-->", o, "refs:", all
+                # if isinstance(o, MetaIsDescription):
+                    print("-->", o, "refs:", all)
                     refrs = gc.get_referrers(o)
                     trefrs = []
                     for refr in refrs:
                         trefrs.append(type(refr))
-                    print "Referrers -->", refrs
-                    print "Referrers types -->", trefrs
-            #if t == types.StringType: print "-->",o
+                    print("Referrers -->", refrs)
+                    print("Referrers types -->", trefrs)
+            # if t == types.StringType: print "-->",o
             if t in type2count:
                 type2count[t] += 1
                 type2all[t] += all
@@ -191,64 +202,67 @@ class TrackRefs:
                 type2all[t] = all
 
         ct = sorted([(type2count[t] - self.type2count.get(t, 0),
-               type2all[t] - self.type2all.get(t, 0),
-               t)
-              for t in type2count.iterkeys()])
+                      type2all[t] - self.type2all.get(t, 0),
+                      t)
+                     for t in type2count.keys()])
         ct.reverse()
         for delta1, delta2, t in ct:
             if delta1 or delta2:
-                print "%-55s %8d %8d" % (t, delta1, delta2)
+                print("%-55s %8d %8d" % (t, delta1, delta2))
 
         self.type2count = type2count
         self.type2all = type2all
 
+
 def dump_refs(preheat=10, iter1=10, iter2=10, *testargs):
 
     rc1 = rc2 = None
-    #testMethod()
-    for i in xrange(preheat):
+    # testMethod()
+    for i in range(preheat):
         testMethod(*testargs)
     gc.collect()
     rc1 = sys.gettotalrefcount()
     track = TrackRefs()
-    for i in xrange(iter1):
+    for i in range(iter1):
         testMethod(*testargs)
-    print "First output of TrackRefs:"
+    print("First output of TrackRefs:")
     gc.collect()
     rc2 = sys.gettotalrefcount()
     track.update()
-    print >>sys.stderr, "Inc refs in function testMethod --> %5d" % (rc2-rc1)
-    for i in xrange(iter2):
+    print("Inc refs in function testMethod --> %5d" % (rc2 - rc1),
+          file=sys.stderr)
+    for i in range(iter2):
         testMethod(*testargs)
         track.update(verbose=1)
-    print "Second output of TrackRefs:"
+    print("Second output of TrackRefs:")
     gc.collect()
     rc3 = sys.gettotalrefcount()
 
-    print >>sys.stderr, "Inc refs in function testMethod --> %5d" % (rc3-rc2)
+    print("Inc refs in function testMethod --> %5d" % (rc3 - rc2),
+          file=sys.stderr)
+
 
 def dump_garbage():
-    """
-    show us waht the garbage is about
-    """
+    """show us waht the garbage is about."""
     # Force collection
-    print "\nGARBAGE:"
+    print("\nGARBAGE:")
     gc.collect()
 
-    print "\nGARBAGE OBJECTS:"
+    print("\nGARBAGE OBJECTS:")
     for x in gc.garbage:
         s = str(x)
         #if len(s) > 80: s = s[:77] + "..."
-        print type(x), "\n   ", s
+        print(type(x), "\n   ", s)
+
+    # print "\nTRACKED OBJECTS:"
+    # reportLoggedInstances("*")
 
-    #print "\nTRACKED OBJECTS:"
-    #reportLoggedInstances("*")
 
 def testMethod(file, usearray, testwrite, testread, complib, complevel,
                ngroups, ntables, nrows):
 
     if complevel > 0:
-        print "Compression library:", complib
+        print("Compression library:", complib)
     if testwrite:
         t1 = time.time()
         cpu1 = time.clock()
@@ -259,34 +273,35 @@ def testMethod(file, usearray, testwrite, testread, complib, complevel,
                                         complevel, complib, recsize)
         t2 = time.time()
         cpu2 = time.clock()
-        tapprows = round(t2-t1, 3)
-        cpuapprows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpuapprows/tapprows, 2)*100)
-        print "Rows written:", rowsw, " Row size:", rowsz
-        print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-              (tapprows, cpuapprows, tpercent)
-        print "Write rows/sec: ", int(rowsw / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
+        tapprows = round(t2 - t1, 3)
+        cpuapprows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+        print("Rows written:", rowsw, " Row size:", rowsz)
+        print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+              (tapprows, cpuapprows, tpercent))
+        print("Write rows/sec: ", int(rowsw / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
 
     if testread:
         t1 = time.time()
         cpu1 = time.clock()
         if usearray:
-            (rowsr, rowsz, bufsz)=readFileArr(file, ngroups, recsize, verbose)
+            (rowsr, rowsz, bufsz) = readFileArr(file,
+                                                ngroups, recsize, verbose)
         else:
             (rowsr, rowsz, bufsz) = readFile(file, ngroups, recsize, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        treadrows = round(t2-t1, 3)
-        cpureadrows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
-        print "Rows read:", rowsr, " Row size:", rowsz, "Buf size:", bufsz
-        print "Time reading rows: %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
-        print "Read rows/sec: ", int(rowsr / float(treadrows))
-        print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
-
-if __name__=="__main__":
+        treadrows = round(t2 - t1, 3)
+        cpureadrows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
+        print("Rows read:", rowsr, " Row size:", rowsz, "Buf size:", bufsz)
+        print("Time reading rows: %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
+        print("Read rows/sec: ", int(rowsr / float(treadrows)))
+        print("Read KB/s :", int(rowsr * rowsz / (treadrows * 1024)))
+
+if __name__ == "__main__":
     import getopt
     import profile
     try:
@@ -295,7 +310,6 @@ if __name__=="__main__":
     except:
         psyco_imported = 0
 
-
     usage = """usage: %s [-d debug] [-v level] [-p] [-r] [-w] [-l complib] [-c complevel] [-g ngroups] [-t ntables] [-i nrows] file
     -d debugging level
     -v verbosity level
@@ -379,7 +393,7 @@ if __name__=="__main__":
     else:
 #         testMethod(file, usearray, testwrite, testread, complib, complevel,
 #                    ngroups, ntables, nrows)
-        profile.run("testMethod(file, usearray, testwrite, testread, " + \
+        profile.run("testMethod(file, usearray, testwrite, testread, " +
                     "complib, complevel, ngroups, ntables, nrows)")
 
     # Show the dirt
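
The print statements rewritten throughout this file all follow the same
few Python 2 -> 3 equivalences. A minimal standalone sketch of the
pattern (illustrative only, not part of the patch):

    from __future__ import print_function
    import sys

    value = 42
    print("value ==>", value)           # was: print "value ==>", value
    print("progress", end=' ')          # was: print "progress",
    print()                             # was: print
    print("refs --> %5d" % value,       # was: print >>sys.stderr, ...
          file=sys.stderr)
    for i in range(3):                  # was: xrange(3)
        pass
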
diff --git a/bench/stress-test2.py b/bench/stress-test2.py
index f6fe7bc..1dc91fe 100644
--- a/bench/stress-test2.py
+++ b/bench/stress-test2.py
@@ -1,6 +1,11 @@
-import sys, time, random, gc
+from __future__ import print_function
+import gc
+import sys
+import time
+import random
 from tables import *
 
+
 class Test(IsDescription):
     ngroup = Int32Col(pos=1)
     ntable = Int32Col(pos=2)
@@ -8,6 +13,7 @@ class Test(IsDescription):
     time = Float64Col(pos=5)
     random = Float32Col(pos=4)
 
+
 def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     # First, create the groups
@@ -17,27 +23,27 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     for k in range(ngroups):
         # Create the group
-        group = fileh.create_group("/", 'group%04d'% k, "Group %d" % k)
+        group = fileh.create_group("/", 'group%04d' % k, "Group %d" % k)
 
     fileh.close()
 
     # Now, create the tables
     rowswritten = 0
     for k in range(ngroups):
-        fileh = open_file(filename, mode="a", root_uep='group%04d'% k)
+        fileh = open_file(filename, mode="a", root_uep='group%04d' % k)
         # Get the group
         group = fileh.root
         for j in range(ntables):
             # Create a table
-            table = fileh.create_table(group, 'table%04d'% j, Test,
-                                      'Table%04d'%j,
-                                      complevel, complib, nrows)
+            table = fileh.create_table(group, 'table%04d' % j, Test,
+                                       'Table%04d' % j,
+                                       complevel, complib, nrows)
             # Get the row object associated with the new table
             row = table.row
             # Fill the table
-            for i in xrange(nrows):
+            for i in range(nrows):
                 row['time'] = time.time()
-                row['random'] = random.random()*40+100
+                row['random'] = random.random() * 40 + 100
                 row['ngroup'] = k
                 row['ntable'] = j
                 row['nrow'] = i
@@ -51,26 +57,27 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     return (rowswritten, table.rowsize)
 
+
 def readFile(filename, ngroups, recsize, verbose):
     # Open the HDF5 file in read-only mode
 
     rowsread = 0
     for ngroup in range(ngroups):
-        fileh = open_file(filename, mode="r", root_uep='group%04d'% ngroup)
+        fileh = open_file(filename, mode="r", root_uep='group%04d' % ngroup)
         # Get the group
         group = fileh.root
         ntable = 0
         if verbose:
-            print "Group ==>", group
+            print("Group ==>", group)
         for table in fileh.list_nodes(group, 'Table'):
             rowsize = table.rowsize
-            buffersize=table.rowsize * table.nrowsinbuf
+            buffersize = table.rowsize * table.nrowsinbuf
             if verbose > 1:
-                print "Table ==>", table
-                print "Max rows in buf:", table.nrowsinbuf
-                print "Rows in", table._v_pathname, ":", table.nrows
-                print "Buffersize:", table.rowsize * table.nrowsinbuf
-                print "MaxTuples:", table.nrowsinbuf
+                print("Table ==>", table)
+                print("Max rows in buf:", table.nrowsinbuf)
+                print("Rows in", table._v_pathname, ":", table.nrows)
+                print("Buffersize:", table.rowsize * table.nrowsinbuf)
+                print("MaxTuples:", table.nrowsinbuf)
 
             nrow = 0
             time_1 = 0.0
@@ -85,9 +92,9 @@ def readFile(filename, ngroups, recsize, verbose):
                     #assert 100 <= row["random"] <= 139.999
                     assert 100 <= row["random"] <= 140
                 except:
-                    print "Error in group: %d, table: %d, row: %d" % \
-                          (ngroup, ntable, nrow)
-                    print "Record ==>", row
+                    print("Error in group: %d, table: %d, row: %d" %
+                          (ngroup, ntable, nrow))
+                    print("Record ==>", row)
                 time_1 = row["time"]
                 nrow += 1
 
@@ -100,21 +107,20 @@ def readFile(filename, ngroups, recsize, verbose):
 
     return (rowsread, rowsize, buffersize)
 
+
 def dump_garbage():
-    """
-    show us waht the garbage is about
-    """
+    """show us waht the garbage is about."""
     # Force collection
-    print "\nGARBAGE:"
+    print("\nGARBAGE:")
     gc.collect()
 
-    print "\nGARBAGE OBJECTS:"
+    print("\nGARBAGE OBJECTS:")
     for x in gc.garbage:
         s = str(x)
         #if len(s) > 80: s = s[:77] + "..."
-        print type(x), "\n   ", s
+        print(type(x), "\n   ", s)
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import getopt
     try:
         import psyco
@@ -122,7 +128,6 @@ if __name__=="__main__":
     except:
         psyco_imported = 0
 
-
     usage = """usage: %s [-d debug] [-v level] [-p] [-r] [-w] [-l complib] [-c complevel] [-g ngroups] [-t ntables] [-i nrows] file
     -d debugging level
     -v verbosity level
@@ -190,9 +195,9 @@ if __name__=="__main__":
     # Catch the hdf5 file passed as the last argument
     file = pargs[0]
 
-    print "Compression level:", complevel
+    print("Compression level:", complevel)
     if complevel > 0:
-        print "Compression library:", complib
+        print("Compression library:", complib)
     if testwrite:
         t1 = time.time()
         cpu1 = time.clock()
@@ -202,14 +207,14 @@ if __name__=="__main__":
                                     complevel, complib, recsize)
         t2 = time.time()
         cpu2 = time.clock()
-        tapprows = round(t2-t1, 3)
-        cpuapprows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpuapprows/tapprows, 2)*100)
-        print "Rows written:", rowsw, " Row size:", rowsz
-        print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-              (tapprows, cpuapprows, tpercent)
-        print "Write rows/sec: ", int(rowsw / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
+        tapprows = round(t2 - t1, 3)
+        cpuapprows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+        print("Rows written:", rowsw, " Row size:", rowsz)
+        print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+              (tapprows, cpuapprows, tpercent))
+        print("Write rows/sec: ", int(rowsw / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
 
     if testread:
         t1 = time.time()
@@ -219,14 +224,14 @@ if __name__=="__main__":
         (rowsr, rowsz, bufsz) = readFile(file, ngroups, recsize, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        treadrows = round(t2-t1, 3)
-        cpureadrows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
-        print "Rows read:", rowsr, " Row size:", rowsz, "Buf size:", bufsz
-        print "Time reading rows: %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
-        print "Read rows/sec: ", int(rowsr / float(treadrows))
-        print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
+        treadrows = round(t2 - t1, 3)
+        cpureadrows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
+        print("Rows read:", rowsr, " Row size:", rowsz, "Buf size:", bufsz)
+        print("Time reading rows: %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
+        print("Read rows/sec: ", int(rowsr / float(treadrows)))
+        print("Read KB/s :", int(rowsr * rowsz / (treadrows * 1024)))
 
     # Show the dirt
     if debug > 1:
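
Like the other stress scripts, stress-test2.py reopens the file once per
group with root_uep so that fileh.root points at that group rather than
at the file root. A minimal sketch of the idiom, assuming a file test.h5
that already contains /group0000 as created above:

    from __future__ import print_function
    import tables

    # With root_uep, fileh.root is /group0000, not the file root.
    fileh = tables.open_file("test.h5", mode="r", root_uep="/group0000")
    for table in fileh.list_nodes(fileh.root, "Table"):
        print(table._v_pathname, table.nrows)
    fileh.close()
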
diff --git a/bench/stress-test3.py b/bench/stress-test3.py
index b2b4a4c..42630f5 100644
--- a/bench/stress-test3.py
+++ b/bench/stress-test3.py
@@ -1,20 +1,25 @@
 #!/usr/bin/env python
 
-""" This script allows to create arbitrarily large files with the desired
+"""This script allows to create arbitrarily large files with the desired
 combination of groups, tables per group and rows per table.
 
 Issue "python stress-test3.py" without parameters for a help on usage.
 
 """
 
-import sys, time, gc
+from __future__ import print_function
+import gc
+import sys
+import time
 from tables import *
 
+
 class Test(IsDescription):
     ngroup = Int32Col(pos=1)
     ntable = Int32Col(pos=2)
     nrow = Int32Col(pos=3)
-    string = StringCol(length=500, pos = 4)
+    string = StringCol(500, pos=4)
+
 
 def createFileArr(filename, ngroups, ntables, nrows):
 
@@ -25,26 +30,27 @@ def createFileArr(filename, ngroups, ntables, nrows):
 
     for k in range(ngroups):
         # Create the group
-        fileh.create_group("/", 'group%04d'% k, "Group %d" % k)
+        fileh.create_group("/", 'group%04d' % k, "Group %d" % k)
 
     fileh.close()
 
     return (0, 4)
 
+
 def readFileArr(filename, ngroups, recsize, verbose):
 
     rowsread = 0
     for ngroup in range(ngroups):
-        fileh = open_file(filename, mode="r", root_uep='group%04d'% ngroup)
+        fileh = open_file(filename, mode="r", root_uep='group%04d' % ngroup)
         # Get the group
         group = fileh.root
         ntable = 0
         if verbose:
-            print "Group ==>", group
+            print("Group ==>", group)
         for table in fileh.list_nodes(group, 'Array'):
             if verbose > 1:
-                print "Array ==>", table
-                print "Rows in", table._v_pathname, ":", table.shape
+                print("Array ==>", table)
+                print("Rows in", table._v_pathname, ":", table.shape)
 
             arr = table.read()
 
@@ -56,6 +62,7 @@ def readFileArr(filename, ngroups, recsize, verbose):
 
     return (rowsread, 4, 0)
 
+
 def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     # First, create the groups
@@ -65,26 +72,26 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     for k in range(ngroups):
         # Create the group
-        group = fileh.create_group("/", 'group%04d'% k, "Group %d" % k)
+        group = fileh.create_group("/", 'group%04d' % k, "Group %d" % k)
 
     fileh.close()
 
     # Now, create the tables
     rowswritten = 0
     for k in range(ngroups):
-        fileh = open_file(filename, mode="a", root_uep='group%04d'% k)
+        fileh = open_file(filename, mode="a", root_uep='group%04d' % k)
         # Get the group
         group = fileh.root
         for j in range(ntables):
             # Create a table
-            table = fileh.create_table(group, 'table%04d'% j, Test,
-                                      'Table%04d'%j,
-                                      Filters(complevel, complib), nrows)
+            table = fileh.create_table(group, 'table%04d' % j, Test,
+                                       'Table%04d' % j,
+                                       Filters(complevel, complib), nrows)
             rowsize = table.rowsize
             # Get the row object associated with the new table
             row = table.row
             # Fill the table
-            for i in xrange(nrows):
+            for i in range(nrows):
                 row['ngroup'] = k
                 row['ntable'] = j
                 row['nrow'] = i
@@ -98,26 +105,27 @@ def createFile(filename, ngroups, ntables, nrows, complevel, complib, recsize):
 
     return (rowswritten, rowsize)
 
+
 def readFile(filename, ngroups, recsize, verbose):
     # Open the HDF5 file in read-only mode
 
     rowsread = 0
     for ngroup in range(ngroups):
-        fileh = open_file(filename, mode="r", root_uep='group%04d'% ngroup)
+        fileh = open_file(filename, mode="r", root_uep='group%04d' % ngroup)
         # Get the group
         group = fileh.root
         ntable = 0
         if verbose:
-            print "Group ==>", group
+            print("Group ==>", group)
         for table in fileh.list_nodes(group, 'Table'):
             rowsize = table.rowsize
-            buffersize=table.rowsize * table.nrowsinbuf
+            buffersize = table.rowsize * table.nrowsinbuf
             if verbose > 1:
-                print "Table ==>", table
-                print "Max rows in buf:", table.nrowsinbuf
-                print "Rows in", table._v_pathname, ":", table.nrows
-                print "Buffersize:", table.rowsize * table.nrowsinbuf
-                print "MaxTuples:", table.nrowsinbuf
+                print("Table ==>", table)
+                print("Max rows in buf:", table.nrowsinbuf)
+                print("Rows in", table._v_pathname, ":", table.nrows)
+                print("Buffersize:", table.rowsize * table.nrowsinbuf)
+                print("MaxTuples:", table.nrowsinbuf)
 
             nrow = 0
             for row in table:
@@ -126,9 +134,9 @@ def readFile(filename, ngroups, recsize, verbose):
                     assert row["ntable"] == ntable
                     assert row["nrow"] == nrow
                 except:
-                    print "Error in group: %d, table: %d, row: %d" % \
-                          (ngroup, ntable, nrow)
-                    print "Record ==>", row
+                    print("Error in group: %d, table: %d, row: %d" %
+                          (ngroup, ntable, nrow))
+                    print("Record ==>", row)
                 nrow += 1
 
             assert nrow == table.nrows
@@ -140,21 +148,20 @@ def readFile(filename, ngroups, recsize, verbose):
 
     return (rowsread, rowsize, buffersize)
 
+
 def dump_garbage():
-    """
-    show us waht the garbage is about
-    """
+    """show us waht the garbage is about."""
     # Force collection
-    print "\nGARBAGE:"
+    print("\nGARBAGE:")
     gc.collect()
 
-    print "\nGARBAGE OBJECTS:"
+    print("\nGARBAGE OBJECTS:")
     for x in gc.garbage:
         s = str(x)
         #if len(s) > 80: s = s[:77] + "..."
-        print type(x), "\n   ", s
+        print(type(x), "\n   ", s)
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import getopt
     try:
         import psyco
@@ -162,7 +169,6 @@ if __name__=="__main__":
     except:
         psyco_imported = 0
 
-
     usage = """usage: %s [-d debug] [-v level] [-p] [-r] [-w] [-l complib] [-c complevel] [-g ngroups] [-t ntables] [-i nrows] file
     -d debugging level
     -v verbosity level
@@ -234,9 +240,9 @@ if __name__=="__main__":
     # Catch the hdf5 file passed as the last argument
     file = pargs[0]
 
-    print "Compression level:", complevel
+    print("Compression level:", complevel)
     if complevel > 0:
-        print "Compression library:", complib
+        print("Compression library:", complib)
     if testwrite:
         t1 = time.time()
         cpu1 = time.clock()
@@ -249,14 +255,14 @@ if __name__=="__main__":
                                         complevel, complib, recsize)
         t2 = time.time()
         cpu2 = time.clock()
-        tapprows = round(t2-t1, 3)
-        cpuapprows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpuapprows/tapprows, 2)*100)
-        print "Rows written:", rowsw, " Row size:", rowsz
-        print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-              (tapprows, cpuapprows, tpercent)
-        print "Write rows/sec: ", int(rowsw / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
+        tapprows = round(t2 - t1, 3)
+        cpuapprows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+        print("Rows written:", rowsw, " Row size:", rowsz)
+        print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+              (tapprows, cpuapprows, tpercent))
+        print("Write rows/sec: ", int(rowsw / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
 
     if testread:
         t1 = time.time()
@@ -264,19 +270,20 @@ if __name__=="__main__":
         if psyco_imported and usepsyco:
             psyco.bind(readFile)
         if usearray:
-            (rowsr, rowsz, bufsz)=readFileArr(file, ngroups, recsize, verbose)
+            (rowsr, rowsz, bufsz) = readFileArr(file,
+                                                ngroups, recsize, verbose)
         else:
             (rowsr, rowsz, bufsz) = readFile(file, ngroups, recsize, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        treadrows = round(t2-t1, 3)
-        cpureadrows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
-        print "Rows read:", rowsr, " Row size:", rowsz, "Buf size:", bufsz
-        print "Time reading rows: %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
-        print "Read rows/sec: ", int(rowsr / float(treadrows))
-        print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
+        treadrows = round(t2 - t1, 3)
+        cpureadrows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
+        print("Rows read:", rowsr, " Row size:", rowsz, "Buf size:", bufsz)
+        print("Time reading rows: %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
+        print("Read rows/sec: ", int(rowsr / float(treadrows)))
+        print("Read KB/s :", int(rowsr * rowsz / (treadrows * 1024)))
 
     # Show the dirt
     if debug > 1:
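
Each of these benchmarks reports the same three timing figures:
wall-clock seconds, CPU seconds, and their ratio as a percentage. A
minimal sketch of that pattern (time.clock() matches these scripts; note
it measures CPU time on Unix but wall-clock time on Windows):

    from __future__ import print_function
    import time

    t1, cpu1 = time.time(), time.clock()
    sum(i * i for i in range(1000000))   # stand-in for the benchmarked work
    t2, cpu2 = time.time(), time.clock()

    treal = round(t2 - t1, 3)            # assumes the work takes long
    tcpu = round(cpu2 - cpu1, 3)         # enough that treal is non-zero
    tpercent = int(round(tcpu / treal, 2) * 100)
    print("%s s (real) %s s (cpu)  %s%%" % (treal, tcpu, tpercent))
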
diff --git a/bench/table-bench.py b/bench/table-bench.py
index 4cfcae1..64d304b 100644
--- a/bench/table-bench.py
+++ b/bench/table-bench.py
@@ -1,46 +1,54 @@
 #!/usr/bin/env python
 
+from __future__ import print_function
 import numpy as NP
 from tables import *
 
 # This class is accessible only for the examples
+
+
 class Small(IsDescription):
     var1 = StringCol(itemsize=4, pos=2)
     var2 = Int32Col(pos=1)
     var3 = Float64Col(pos=0)
 
 # Define a user record to characterize some kind of particles
+
+
 class Medium(IsDescription):
-    name        = StringCol(itemsize=16, pos=0)  # 16-character String
-    float1      = Float64Col(shape=2, dflt=NP.arange(2), pos=1)
-    #float1      = Float64Col(dflt=2.3)
-    #float2      = Float64Col(dflt=2.3)
-    #zADCcount    = Int16Col()    # signed short integer
-    ADCcount    = Int32Col(pos=6)    # signed short integer
-    grid_i      = Int32Col(pos=7)    # integer
-    grid_j      = Int32Col(pos=8)    # integer
-    pressure    = Float32Col(pos=9)    # float  (single-precision)
-    energy      = Float64Col(pos=2)    # double (double-precision)
-    #unalig      = Int8Col()          # just to unalign data
+    name = StringCol(itemsize=16, pos=0)    # 16-character String
+    float1 = Float64Col(shape=2, dflt=NP.arange(2), pos=1)
+    #float1 = Float64Col(dflt=2.3)
+    #float2 = Float64Col(dflt=2.3)
+    # zADCcount    = Int16Col()               # signed short integer
+    ADCcount = Int32Col(pos=6)              # signed short integer
+    grid_i = Int32Col(pos=7)                # integer
+    grid_j = Int32Col(pos=8)                # integer
+    pressure = Float32Col(pos=9)            # float  (single-precision)
+    energy = Float64Col(pos=2)              # double (double-precision)
+    # unalig      = Int8Col()                 # just to unalign data
 
 # Define a user record to characterize some kind of particles
+
+
 class Big(IsDescription):
-    name        = StringCol(itemsize=16)  # 16-character String
-    float1      = Float64Col(shape=32, dflt=NP.arange(32))
-    float2      = Float64Col(shape=32, dflt=2.2)
-    TDCcount    = Int8Col()    # signed short integer
+    name = StringCol(itemsize=16)           # 16-character String
+    float1 = Float64Col(shape=32, dflt=NP.arange(32))
+    float2 = Float64Col(shape=32, dflt=2.2)
+    TDCcount = Int8Col()                    # signed short integer
     #ADCcount    = Int32Col()
-    #ADCcount    = Int16Col()    # signed short integer
-    grid_i      = Int32Col()    # integer
-    grid_j      = Int32Col()    # integer
-    pressure    = Float32Col()    # float  (single-precision)
-    energy      = Float64Col()    # double (double-precision)
+    # ADCcount = Int16Col()                   # signed short integer
+    grid_i = Int32Col()                       # integer
+    grid_j = Int32Col()                       # integer
+    pressure = Float32Col()                   # float  (single-precision)
+    energy = Float64Col()                     # double (double-precision)
+
 
 def createFile(filename, totalrows, filters, recsize):
 
     # Open a file in "w"rite mode
-    fileh = open_file(filename, mode = "w", title="Table Benchmark",
-                     filters=filters)
+    fileh = open_file(filename, mode="w", title="Table Benchmark",
+                      filters=filters)
 
     # Table title
     title = "This is the table title"
@@ -51,17 +59,17 @@ def createFile(filename, totalrows, filters, recsize):
     for j in range(3):
         # Create a table
         if recsize == "big":
-            table = fileh.create_table(group, 'tuple'+str(j), Big, title,
-                                      None,
-                                      totalrows)
+            table = fileh.create_table(group, 'tuple' + str(j), Big, title,
+                                       None,
+                                       totalrows)
         elif recsize == "medium":
-            table = fileh.create_table(group, 'tuple'+str(j), Medium, title,
-                                      None,
-                                      totalrows)
+            table = fileh.create_table(group, 'tuple' + str(j), Medium, title,
+                                       None,
+                                       totalrows)
         elif recsize == "small":
-            table = fileh.create_table(group, 'tuple'+str(j), Small, title,
-                                      None,
-                                      totalrows)
+            table = fileh.create_table(group, 'tuple' + str(j), Small, title,
+                                       None,
+                                       totalrows)
         else:
             raise RuntimeError("This should never happen")
 
@@ -71,7 +79,7 @@ def createFile(filename, totalrows, filters, recsize):
         d = table.row
         # Fill the table
         if recsize == "big":
-            for i in xrange(totalrows):
+            for i in range(totalrows):
                 # d['name']  = 'Part: %6d' % (i)
                 d['TDCcount'] = i % 256
                 #d['float1'] = NP.array([i]*32, NP.float64)
@@ -81,13 +89,13 @@ def createFile(filename, totalrows, filters, recsize):
                 # Common part with medium
                 d['grid_i'] = i
                 d['grid_j'] = 10 - i
-                d['pressure'] = float(i*i)
+                d['pressure'] = float(i * i)
                 # d['energy'] = float(d['pressure'] ** 4)
                 d['energy'] = d['pressure']
                 # d['idnumber'] = i * (2 ** 34)
                 d.append()
         elif recsize == "medium":
-            for i in xrange(totalrows):
+            for i in range(totalrows):
                 #d['name']  = 'Part: %6d' % (i)
                 #d['float1'] = NP.array([i]*2, NP.float64)
                 #d['float1'] = arr
@@ -96,19 +104,19 @@ def createFile(filename, totalrows, filters, recsize):
                 # Common part with big:
                 d['grid_i'] = i
                 d['grid_j'] = 10 - i
-                d['pressure'] = i*2
+                d['pressure'] = i * 2
                 # d['energy'] = float(d['pressure'] ** 4)
                 d['energy'] = d['pressure']
                 d.append()
-        else: # Small record
-            for i in xrange(totalrows):
+        else:  # Small record
+            for i in range(totalrows):
                 #d['var1'] = str(random.randrange(1000000))
                 #d['var3'] = random.randrange(10000000)
                 d['var1'] = str(i)
                 #d['var2'] = random.randrange(totalrows)
                 d['var2'] = i
                 #d['var3'] = 12.1e10
-                d['var3'] = totalrows-i
+                d['var3'] = totalrows - i
                 d.append()  # This is a 10% faster than table.append()
         rowswritten += totalrows
 
@@ -117,10 +125,10 @@ def createFile(filename, totalrows, filters, recsize):
             pass
 #            table._createIndex("var3", Filters(1,"zlib",shuffle=1))
 
-        #table.flush()
+        # table.flush()
         group._v_attrs.test2 = "just a test"
         # Create a new group
-        group2 = fileh.create_group(group, 'group'+str(j))
+        group2 = fileh.create_group(group, 'group' + str(j))
         # Iterate over this new group (group2)
         group = group2
         table.flush()
@@ -129,25 +137,26 @@ def createFile(filename, totalrows, filters, recsize):
     fileh.close()
     return (rowswritten, rowsize)
 
+
 def readFile(filename, recsize, verbose):
     # Open the HDF5 file in read-only mode
 
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     rowsread = 0
     for groupobj in fileh.walk_groups(fileh.root):
-        #print "Group pathname:", groupobj._v_pathname
+        # print "Group pathname:", groupobj._v_pathname
         row = 0
         for table in fileh.list_nodes(groupobj, 'Table'):
             rowsize = table.rowsize
-            print "reading", table
+            print("reading", table)
             if verbose:
-                print "Max rows in buf:", table.nrowsinbuf
-                print "Rows in", table._v_pathname, ":", table.nrows
-                print "Buffersize:", table.rowsize * table.nrowsinbuf
-                print "MaxTuples:", table.nrowsinbuf
+                print("Max rows in buf:", table.nrowsinbuf)
+                print("Rows in", table._v_pathname, ":", table.nrows)
+                print("Buffersize:", table.rowsize * table.nrowsinbuf)
+                print("MaxTuples:", table.nrowsinbuf)
 
             if recsize == "big" or recsize == "medium":
-                #e = [ p.float1 for p in table.iterrows()
+                # e = [ p.float1 for p in table.iterrows()
                 #      if p.grid_i < 2 ]
                 #e = [ str(p) for p in table.iterrows() ]
                 #      if p.grid_i < 2 ]
@@ -158,8 +167,8 @@ def readFile(filename, recsize, verbose):
 #                e = [ p['grid_i'] for p in table.where("grid_i<=20")]
 #                 e = [ p['grid_i'] for p in
 #                       table.where('grid_i <= 20')]
-                e = [ p['grid_i'] for p in
-                      table.where('(grid_i <= 20) & (grid_j == 20)')]
+                e = [p['grid_i'] for p in
+                     table.where('(grid_i <= 20) & (grid_j == 20)')]
 #                 e = [ p['grid_i'] for p in table.iterrows()
 #                       if p.nrow() == 20 ]
 #                 e = [ table.delrow(p.nrow()) for p in table.iterrows()
@@ -167,7 +176,7 @@ def readFile(filename, recsize, verbose):
                # The version with a for loop is only 1% better than a
                # list comprehension
                 #e = []
-                #for p in table.iterrows():
+                # for p in table.iterrows():
                 #    if p.grid_i < 20:
                 #        e.append(p.grid_j)
             else:  # small record case
@@ -177,11 +186,11 @@ def readFile(filename, recsize, verbose):
 #                      if p['var2'] < 20 ]
 #               e = [ p['var3'] for p in table.where("var3 <= 20")]
# Cuts 1) and 2) issue the same results, but 2) is about 10 times faster
-# ######## Cut 1)
+# Cut 1)
 #                e = [ p.nrow() for p in
 #                      table.where(table.cols.var2 > 5)
 #                      if p["var2"] < 10]
-# ######## Cut 2)
+# Cut 2)
 #                 e = [ p.nrow() for p in
 #                       table.where(table.cols.var2 < 10)
 #                       if p["var2"] > 5]
@@ -192,12 +201,13 @@ def readFile(filename, recsize, verbose):
 #                      table if p["var3"] <= 10]
 #               e = [ p['var3'] for p in table.where("var3 <= 20")]
 #                e = [ p['var3'] for p in
-#                      table.where(table.cols.var1 == "10")]  # More
+# table.where(table.cols.var1 == "10")]  # More
                      # than ten times faster than the next one
 #                e = [ p['var3'] for p in table
 #                      if p['var1'] == "10"]
 #                e = [ p['var3'] for p in table.where('var2 <= 20')]
-                e = [ p['var3'] for p in table.where('(var2 <= 20) & (var2 >= 3)')]
+                e = [p['var3']
+                     for p in table.where('(var2 <= 20) & (var2 >= 3)')]
                 # e = [ p[0] for p in table.where('var2 <= 20')]
                 #e = [ p['var3'] for p in table if p['var2'] <= 20 ]
                 # e = [ p[:] for p in table if p[1] <= 20 ]
@@ -207,28 +217,29 @@ def readFile(filename, recsize, verbose):
 #                       if p.nrow() <= 20 ]
                 #e = [ p['var3'] for p in table.iterrows(1,0,1000)]
                 #e = [ p['var3'] for p in table.iterrows(1,100)]
-                #e = [ p['var3'] for p in table.iterrows(step=2)
+                # e = [ p['var3'] for p in table.iterrows(step=2)
                 #      if p.nrow() < 20 ]
-                #e = [ p['var2'] for p in table.iterrows()
+                # e = [ p['var2'] for p in table.iterrows()
                 #      if p['var2'] < 20 ]
-                #for p in table.iterrows():
+                # for p in table.iterrows():
                 #      pass
             if verbose:
-                #print "Last record read:", p
-                print "resulting selection list ==>", e
+                # print "Last record read:", p
+                print("resulting selection list ==>", e)
 
             rowsread += table.nrows
             row += 1
             if verbose:
-                print "Total selected records ==> ", len(e)
+                print("Total selected records ==> ", len(e))
 
     # Close the file (eventually destroy the extended type)
     fileh.close()
 
     return (rowsread, rowsize)
 
+
 def readField(filename, field, rng, verbose):
-    fileh = open_file(filename, mode = "r")
+    fileh = open_file(filename, mode="r")
     rowsread = 0
     if rng is None:
         rng = [0, -1, 1]
@@ -237,26 +248,27 @@ def readField(filename, field, rng, verbose):
     for groupobj in fileh.walk_groups(fileh.root):
         for table in fileh.list_nodes(groupobj, 'Table'):
             rowsize = table.rowsize
-            #table.nrowsinbuf = 3 # For testing purposes
+            # table.nrowsinbuf = 3 # For testing purposes
             if verbose:
-                print "Max rows in buf:", table.nrowsinbuf
-                print "Rows in", table._v_pathname, ":", table.nrows
-                print "Buffersize:", table.rowsize * table.nrowsinbuf
-                print "MaxTuples:", table.nrowsinbuf
-                print "(field, start, stop, step) ==>", (field, rng[0], rng[1], rng[2])
+                print("Max rows in buf:", table.nrowsinbuf)
+                print("Rows in", table._v_pathname, ":", table.nrows)
+                print("Buffersize:", table.rowsize * table.nrowsinbuf)
+                print("MaxTuples:", table.nrowsinbuf)
+                print("(field, start, stop, step) ==>", (field, rng[0], rng[1],
+                                                         rng[2]))
 
             e = table.read(rng[0], rng[1], rng[2], field)
 
             rowsread += table.nrows
             if verbose:
-                print "Selected rows ==> ", e
-                print "Total selected rows ==> ", len(e)
+                print("Selected rows ==> ", e)
+                print("Total selected rows ==> ", len(e))
 
     # Close the file (eventually destroy the extended type)
     fileh.close()
     return (rowsread, rowsize)
 
-if __name__=="__main__":
+if __name__ == "__main__":
     import sys
     import getopt
 
@@ -349,16 +361,16 @@ if __name__=="__main__":
     file = pargs[0]
 
     if verbose:
-        print "numpy version:", NP.__version__
+        print("numpy version:", NP.__version__)
         if psyco_imported and usepsyco:
-            print "Using psyco version:", psyco.version_info
+            print("Using psyco version:", psyco.version_info)
 
     if testwrite:
-        print "Compression level:", complevel
+        print("Compression level:", complevel)
         if complevel > 0:
-            print "Compression library:", complib
+            print("Compression library:", complib)
             if shuffle:
-                print "Suffling..."
+                print("Suffling...")
         t1 = time.time()
         cpu1 = time.clock()
         if psyco_imported and usepsyco:
@@ -366,7 +378,10 @@ if __name__=="__main__":
         if profile:
             import profile as prof
             import pstats
-            prof.run('(rowsw, rowsz) = createFile(file, iterations, filters, recsize)', 'table-bench.prof')
+            prof.run(
+                '(rowsw, rowsz) = createFile(file, iterations, filters, '
+                'recsize)',
+                'table-bench.prof')
             stats = pstats.Stats('table-bench.prof')
             stats.strip_dirs()
             stats.sort_stats('time', 'calls')
@@ -375,21 +390,21 @@ if __name__=="__main__":
             (rowsw, rowsz) = createFile(file, iterations, filters, recsize)
         t2 = time.time()
         cpu2 = time.clock()
-        tapprows = round(t2-t1, 3)
-        cpuapprows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpuapprows/tapprows, 2)*100)
-        print "Rows written:", rowsw, " Row size:", rowsz
-        print "Time writing rows: %s s (real) %s s (cpu)  %s%%" % \
-              (tapprows, cpuapprows, tpercent)
-        print "Write rows/sec: ", int(rowsw / float(tapprows))
-        print "Write KB/s :", int(rowsw * rowsz / (tapprows * 1024))
+        tapprows = round(t2 - t1, 3)
+        cpuapprows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpuapprows / tapprows, 2) * 100)
+        print("Rows written:", rowsw, " Row size:", rowsz)
+        print("Time writing rows: %s s (real) %s s (cpu)  %s%%" %
+              (tapprows, cpuapprows, tpercent))
+        print("Write rows/sec: ", int(rowsw / float(tapprows)))
+        print("Write KB/s :", int(rowsw * rowsz / (tapprows * 1024)))
 
     if testread:
         t1 = time.time()
         cpu1 = time.clock()
         if psyco_imported and usepsyco:
             psyco.bind(readFile)
-            #psyco.bind(readField)
+            # psyco.bind(readField)
             pass
         if rng or fieldName:
             (rowsr, rowsz) = readField(file, fieldName, rng, verbose)
@@ -399,11 +414,11 @@ if __name__=="__main__":
                 (rowsr, rowsz) = readFile(file, recsize, verbose)
         t2 = time.time()
         cpu2 = time.clock()
-        treadrows = round(t2-t1, 3)
-        cpureadrows = round(cpu2-cpu1, 3)
-        tpercent = int(round(cpureadrows/treadrows, 2)*100)
-        print "Rows read:", rowsr, " Row size:", rowsz
-        print "Time reading rows: %s s (real) %s s (cpu)  %s%%" % \
-              (treadrows, cpureadrows, tpercent)
-        print "Read rows/sec: ", int(rowsr / float(treadrows))
-        print "Read KB/s :", int(rowsr * rowsz / (treadrows * 1024))
+        treadrows = round(t2 - t1, 3)
+        cpureadrows = round(cpu2 - cpu1, 3)
+        tpercent = int(round(cpureadrows / treadrows, 2) * 100)
+        print("Rows read:", rowsr, " Row size:", rowsz)
+        print("Time reading rows: %s s (real) %s s (cpu)  %s%%" %
+              (treadrows, cpureadrows, tpercent))
+        print("Read rows/sec: ", int(rowsr / float(treadrows)))
+        print("Read KB/s :", int(rowsr * rowsz / (treadrows * 1024)))
diff --git a/bench/table-copy.py b/bench/table-copy.py
index 3d9c13e..c249cdf 100644
--- a/bench/table-copy.py
+++ b/bench/table-copy.py
@@ -1,110 +1,116 @@
-import time
-
-import numpy as np
-import tables
-
-N = 144000
-#N = 144
-
-def timed(func, *args, **kwargs):
-    start = time.time()
-    res = func(*args, **kwargs)
-    print "%fs elapsed." % (time.time() - start)
-    return res
-
-def create_table(output_path):
-    print "creating array...",
-    dt = np.dtype([('field%d' % i, int) for i in range(320)])
-    a = np.zeros(N, dtype=dt)
-    print "done."
-
-    output_file = tables.open_file(output_path, mode="w")
-    table = output_file.create_table("/", "test", dt) #, filters=blosc4)
-    print "appending data...",
-    table.append(a)
-    print "flushing...",
-    table.flush()
-    print "done."
-    output_file.close()
-
-def copy1(input_path, output_path):
-    print "copying data from %s to %s..." % (input_path, output_path)
-    input_file = tables.open_file(input_path, mode="r")
-    output_file = tables.open_file(output_path, mode="w")
-
-    # copy nodes as a batch
-    input_file.copy_node("/", output_file.root, recursive=True)
-    output_file.close()
-    input_file.close()
-
-def copy2(input_path, output_path):
-    print "copying data from %s to %s..." % (input_path, output_path)
-    input_file = tables.open_file(input_path, mode="r")
-    input_file.copy_file(output_path, overwrite=True)
-    input_file.close()
-
-def copy3(input_path, output_path):
-    print "copying data from %s to %s..." % (input_path, output_path)
-    input_file = tables.open_file(input_path, mode="r")
-    output_file = tables.open_file(output_path, mode="w")
-    table = input_file.root.test
-    table.copy(output_file.root)
-    output_file.close()
-    input_file.close()
-
-def copy4(input_path, output_path, complib='zlib', complevel=0):
-    print "copying data from %s to %s..." % (input_path, output_path)
-    input_file = tables.open_file(input_path, mode="r")
-    output_file = tables.open_file(output_path, mode="w")
-
-    input_table = input_file.root.test
-    print "reading data...",
-    data = input_file.root.test.read()
-    print "done."
-
-    filter = tables.Filters(complevel=complevel, complib=complib)
-    output_table = output_file.create_table("/", "test", input_table.dtype,
-                                           filters=filter)
-    print "appending data...",
-    output_table.append(data)
-    print "flushing...",
-    output_table.flush()
-    print "done."
-
-    input_file.close()
-    output_file.close()
-
-
-def copy5(input_path, output_path, complib='zlib', complevel=0):
-    print "copying data from %s to %s..." % (input_path, output_path)
-    input_file = tables.open_file(input_path, mode="r")
-    output_file = tables.open_file(output_path, mode="w")
-
-    input_table = input_file.root.test
-
-    filter = tables.Filters(complevel=complevel, complib=complib)
-    output_table = output_file.create_table("/", "test", input_table.dtype,
-                                           filters=filter)
-    chunksize = 10000
-    rowsleft = len(input_table)
-    start = 0
-    for chunk in range((len(input_table) / chunksize) + 1):
-        stop = start + min(chunksize, rowsleft)
-        data = input_table.read(start, stop)
-        output_table.append(data)
-        output_table.flush()
-        rowsleft -= chunksize
-        start = stop
-
-    input_file.close()
-    output_file.close()
-
-
-
-if __name__ == '__main__':
-    timed(create_table, 'tmp.h5')
-#    timed(copy1, 'tmp.h5', 'test1.h5')
-    timed(copy2, 'tmp.h5', 'test2.h5')
-#    timed(copy3, 'tmp.h5', 'test3.h5')
-    timed(copy4, 'tmp.h5', 'test4.h5')
-    timed(copy5, 'tmp.h5', 'test5.h5')
+from __future__ import print_function
+import time
+
+import numpy as np
+import tables
+
+N = 144000
+#N = 144
+
+
+def timed(func, *args, **kwargs):
+    start = time.time()
+    res = func(*args, **kwargs)
+    print("%fs elapsed." % (time.time() - start))
+    return res
+
+
+def create_table(output_path):
+    print("creating array...", end=' ')
+    dt = np.dtype([('field%d' % i, int) for i in range(320)])
+    a = np.zeros(N, dtype=dt)
+    print("done.")
+
+    output_file = tables.open_file(output_path, mode="w")
+    table = output_file.create_table("/", "test", dt)  # , filters=blosc4)
+    print("appending data...", end=' ')
+    table.append(a)
+    print("flushing...", end=' ')
+    table.flush()
+    print("done.")
+    output_file.close()
+
+
+def copy1(input_path, output_path):
+    print("copying data from %s to %s..." % (input_path, output_path))
+    input_file = tables.open_file(input_path, mode="r")
+    output_file = tables.open_file(output_path, mode="w")
+
+    # copy nodes as a batch
+    input_file.copy_node("/", output_file.root, recursive=True)
+    output_file.close()
+    input_file.close()
+
+
+def copy2(input_path, output_path):
+    print("copying data from %s to %s..." % (input_path, output_path))
+    input_file = tables.open_file(input_path, mode="r")
+    input_file.copy_file(output_path, overwrite=True)
+    input_file.close()
+
+
+def copy3(input_path, output_path):
+    print("copying data from %s to %s..." % (input_path, output_path))
+    input_file = tables.open_file(input_path, mode="r")
+    output_file = tables.open_file(output_path, mode="w")
+    table = input_file.root.test
+    table.copy(output_file.root)
+    output_file.close()
+    input_file.close()
+
+
+def copy4(input_path, output_path, complib='zlib', complevel=0):
+    print("copying data from %s to %s..." % (input_path, output_path))
+    input_file = tables.open_file(input_path, mode="r")
+    output_file = tables.open_file(output_path, mode="w")
+
+    input_table = input_file.root.test
+    print("reading data...", end=' ')
+    data = input_file.root.test.read()
+    print("done.")
+
+    filter = tables.Filters(complevel=complevel, complib=complib)
+    output_table = output_file.create_table("/", "test", input_table.dtype,
+                                            filters=filter)
+    print("appending data...", end=' ')
+    output_table.append(data)
+    print("flushing...", end=' ')
+    output_table.flush()
+    print("done.")
+
+    input_file.close()
+    output_file.close()
+
+
+def copy5(input_path, output_path, complib='zlib', complevel=0):
+    print("copying data from %s to %s..." % (input_path, output_path))
+    input_file = tables.open_file(input_path, mode="r")
+    output_file = tables.open_file(output_path, mode="w")
+
+    input_table = input_file.root.test
+
+    filter = tables.Filters(complevel=complevel, complib=complib)
+    output_table = output_file.create_table("/", "test", input_table.dtype,
+                                            filters=filter)
+    chunksize = 10000
+    rowsleft = len(input_table)
+    start = 0
+    # Floor division: "/" yields a float under Python 3 and range() needs an int.
+    for chunk in range((len(input_table) // chunksize) + 1):
+        stop = start + min(chunksize, rowsleft)
+        data = input_table.read(start, stop)
+        output_table.append(data)
+        output_table.flush()
+        rowsleft -= chunksize
+        start = stop
+
+    input_file.close()
+    output_file.close()
+
+
+if __name__ == '__main__':
+    timed(create_table, 'tmp.h5')
+#    timed(copy1, 'tmp.h5', 'test1.h5')
+    timed(copy2, 'tmp.h5', 'test2.h5')
+#    timed(copy3, 'tmp.h5', 'test3.h5')
+    timed(copy4, 'tmp.h5', 'test4.h5')
+    timed(copy5, 'tmp.h5', 'test5.h5')
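
copy5() above walks the source table in fixed-size chunks; note the //
fix in its loop, since plain / would hand range() a float under
Python 3. A minimal sketch of the chunk arithmetic, using ceiling
division so a partial final chunk is still visited (the numbers mirror N
and chunksize above):

    nrows = 144000
    chunksize = 10000

    # -(-a // b) is ceiling division: one extra iteration when nrows is
    # not a multiple of chunksize, and always an int for range().
    nchunks = -(-nrows // chunksize)
    for chunk in range(nchunks):
        start = chunk * chunksize
        stop = min(start + chunksize, nrows)
        # data = input_table.read(start, stop); output_table.append(data)
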
diff --git a/bench/undo_redo.py b/bench/undo_redo.py
index 274f6a6..e1a1932 100644
--- a/bench/undo_redo.py
+++ b/bench/undo_redo.py
@@ -6,12 +6,14 @@
 # 2005-03-09
 ###########################################################################
 
+from __future__ import print_function
 import numpy
 from time import time
 import tables
 
 verbose = 0
 
+
 class BasicBenchmark(object):
 
     def __init__(self, filename, testname, vecsize, nobjects, niter):
@@ -23,14 +25,14 @@ class BasicBenchmark(object):
         self.niter = niter
 
         # Initialize the arrays
-        self.a1 = numpy.arange(0, 1*self.vecsize)
-        self.a2 = numpy.arange(1*self.vecsize, 2*self.vecsize)
-        self.a3 = numpy.arange(2*self.vecsize, 3*self.vecsize)
+        self.a1 = numpy.arange(0, 1 * self.vecsize)
+        self.a2 = numpy.arange(1 * self.vecsize, 2 * self.vecsize)
+        self.a3 = numpy.arange(2 * self.vecsize, 3 * self.vecsize)
 
     def setUp(self):
 
         # Create an HDF5 file
-        self.fileh = tables.open_file(self.file, mode = "w")
+        self.fileh = tables.open_file(self.file, mode="w")
         # open the do/undo
         self.fileh.enable_undo()
 
@@ -38,14 +40,14 @@ class BasicBenchmark(object):
         self.fileh.disable_undo()
         self.fileh.close()
         # Remove the temporary file
-        #os.remove(self.file)
+        # os.remove(self.file)
 
     def createNode(self):
-        """Checking a undo/redo create_array"""
+        """Checking a undo/redo create_array."""
 
         for i in range(self.nobjects):
             # Create a new array
-            self.fileh.create_array('/', 'array'+str(i), self.a1)
+            self.fileh.create_array('/', 'array' + str(i), self.a1)
             # Put a mark
             self.fileh.mark()
         # Unwind all marks sequentially
@@ -53,34 +55,38 @@ class BasicBenchmark(object):
             t1 = time()
             for i in range(self.nobjects):
                 self.fileh.undo()
-                if verbose: print "u",
-            if verbose: print
+                if verbose:
+                    print("u", end=' ')
+            if verbose:
+                print()
             undo = time() - t1
             # Rewind all marks sequentially
             t1 = time()
             for i in range(self.nobjects):
                 self.fileh.redo()
-                if verbose: print "r",
-            if verbose: print
+                if verbose:
+                    print("r", end=' ')
+            if verbose:
+                print()
             redo = time() - t1
 
-            print "Time for Undo, Redo (createNode):", undo, "s, ", redo, "s"
+            print("Time for Undo, Redo (createNode):", undo, "s, ", redo, "s")
 
     def copy_children(self):
-        """Checking a undo/redo copy_children"""
+        """Checking a undo/redo copy_children."""
 
         # Create a group
         self.fileh.create_group('/', 'agroup')
         # Create several objects there
         for i in range(10):
             # Create a new array
-            self.fileh.create_array('/agroup', 'array'+str(i), self.a1)
+            self.fileh.create_array('/agroup', 'array' + str(i), self.a1)
+        # Exercise copy_children
         for i in range(self.nobjects):
             # Create another group for destination
-            self.fileh.create_group('/', 'anothergroup'+str(i))
+            self.fileh.create_group('/', 'anothergroup' + str(i))
             # Copy children from /agroup to /anothergroup+i
-            self.fileh.copy_children('/agroup', '/anothergroup'+str(i))
+            self.fileh.copy_children('/agroup', '/anothergroup' + str(i))
             # Put a mark
             self.fileh.mark()
         # Unwind all marks sequentially
@@ -88,28 +94,32 @@ class BasicBenchmark(object):
             t1 = time()
             for i in range(self.nobjects):
                 self.fileh.undo()
-                if verbose: print "u",
-            if verbose: print
+                if verbose:
+                    print("u", end=' ')
+            if verbose:
+                print()
             undo = time() - t1
             # Rewind all marks sequentially
             t1 = time()
             for i in range(self.nobjects):
                 self.fileh.redo()
-                if verbose: print "r",
-            if verbose: print
+                if verbose:
+                    print("r", end=' ')
+            if verbose:
+                print()
             redo = time() - t1
 
-            print "Time for Undo, Redo (copy_children):", undo, "s, ", redo, "s"
-
+            print(("Time for Undo, Redo (copy_children):", undo, "s, ",
+                  redo, "s"))
 
     def set_attr(self):
-        """Checking a undo/redo for setting attributes"""
+        """Checking a undo/redo for setting attributes."""
 
         # Create a new array
         self.fileh.create_array('/', 'array', self.a1)
         for i in range(self.nobjects):
             # Set an attribute
-            setattr(self.fileh.root.array.attrs, "attr"+str(i), str(self.a1))
+            setattr(self.fileh.root.array.attrs, "attr" + str(i), str(self.a1))
             # Put a mark
             self.fileh.mark()
         # Unwind all marks sequentially
@@ -117,18 +127,22 @@ class BasicBenchmark(object):
             t1 = time()
             for i in range(self.nobjects):
                 self.fileh.undo()
-                if verbose: print "u",
-            if verbose: print
+                if verbose:
+                    print("u", end=' ')
+            if verbose:
+                print()
             undo = time() - t1
             # Rewind all marks sequentially
             t1 = time()
             for i in range(self.nobjects):
                 self.fileh.redo()
-                if verbose: print "r",
-            if verbose: print
+                if verbose:
+                    print("r", end=' ')
+            if verbose:
+                print()
             redo = time() - t1
 
-            print "Time for Undo, Redo (set_attr):", undo, "s, ", redo, "s"
+            print("Time for Undo, Redo (set_attr):", undo, "s, ", redo, "s")
 
     def runall(self):
 
@@ -147,7 +161,8 @@ class BasicBenchmark(object):
 
 
 if __name__ == '__main__':
-    import sys, getopt
+    import sys
+    import getopt
 
     usage = """usage: %s [-v] [-p] [-t test] [-s vecsize] [-n niter] datafile
               -v verbose  (total dump of profiling)
@@ -176,7 +191,6 @@ if __name__ == '__main__':
     nobjects = 1
     niter = 1
 
-
     # Get the options
     for option in opts:
         if option[0] == '-v':
@@ -185,7 +199,8 @@ if __name__ == '__main__':
             profile = 1
         elif option[0] == '-t':
             testname = option[1]
-            if testname not in ['createNode', 'copy_children', 'set_attr', 'all']:
+            if testname not in ['createNode', 'copy_children', 'set_attr',
+                                'all']:
                 sys.stderr.write(usage)
                 sys.exit(0)
         elif option[0] == '-s':
@@ -197,10 +212,10 @@ if __name__ == '__main__':
 
     filename = pargs[0]
 
-
     bench = BasicBenchmark(filename, testname, vecsize, nobjects, niter)
     if profile:
-        import hotshot, hotshot.stats
+        import hotshot
+        import hotshot.stats
         prof = hotshot.Profile("do_undo.prof")
         prof.runcall(bench.runall)
         prof.close()
@@ -214,6 +229,6 @@ if __name__ == '__main__':
     else:
         bench.runall()
 
-## Local Variables:
-## mode: python
-## End:
+# Local Variables:
+# mode: python
+# End:
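
undo_redo.py drives PyTables' do/undo machinery: enable_undo() starts
recording actions, mark() sets a point to unwind to, and undo()/redo()
move back and forth between marks. A minimal sketch (the file name is
illustrative):

    from __future__ import print_function
    import tables

    fileh = tables.open_file("undo-demo.h5", mode="w")
    fileh.enable_undo()                  # start recording actions

    fileh.create_array('/', 'array0', [1, 2, 3])
    fileh.mark()                         # a point we can return to

    fileh.undo()                         # /array0 disappears
    print('/array0' in fileh)            # False
    fileh.redo()                         # ...and it comes back
    print('/array0' in fileh)            # True

    fileh.disable_undo()
    fileh.close()
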
diff --git a/bench/widetree.py b/bench/widetree.py
index 609348f..c5f6651 100644
--- a/bench/widetree.py
+++ b/bench/widetree.py
@@ -1,4 +1,6 @@
-import hotshot, hotshot.stats
+from __future__ import print_function
+import hotshot
+import hotshot.stats
 
 import unittest
 import os
@@ -8,101 +10,98 @@ from tables import *
 
 verbose = 0
 
-class WideTreeTestCase(unittest.TestCase):
-    """Checks for maximum number of childs for a Group.
 
+class WideTreeTestCase(unittest.TestCase):
+    """Checks for maximum number of childs for a Group."""
 
-    """
     def test00_Leafs(self):
-        """Checking creation of large number of leafs (1024) per group
+        """Checking creation of large number of leafs (1024) per group.
 
-        Variable 'maxchilds' controls this check. PyTables support
-        up to 4096 childs per group, but this would take too much
-        memory (up to 64 MB) for testing purposes (may be we can add a
-        test for big platforms). A 1024 childs run takes up to 30 MB.
-        A 512 childs test takes around 25 MB.
+        Variable 'maxchilds' controls this check. PyTables supports up
+        to 4096 children per group, but this would take too much memory
+        (up to 64 MB) for testing purposes (maybe we can add a test for
+        big platforms). A 1024-children run takes up to 30 MB. A
+        512-children test takes around 25 MB.
 
         """
 
         import time
         maxchilds = 1000
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_wideTree..." % \
-                  self.__class__.__name__
-            print "Maximum number of childs tested :", maxchilds
+            print('\n', '-=' * 30)
+            print("Running %s.test00_wideTree..." % self.__class__.__name__)
+            print("Maximum number of childs tested :", maxchilds)
         # Open a new empty HDF5 file
         #file = tempfile.mktemp(".h5")
         file = "test_widetree.h5"
 
-        fileh = open_file(file, mode = "w")
+        fileh = open_file(file, mode="w")
         if verbose:
-            print "Children writing progress: ",
+            print("Children writing progress: ", end=' ')
         for child in range(maxchilds):
             if verbose:
-                print "%3d," % (child),
+                print("%3d," % (child), end=' ')
             a = [1, 1]
             fileh.create_group(fileh.root, 'group' + str(child),
-                              "child: %d" % child)
+                               "child: %d" % child)
             fileh.create_array("/group" + str(child), 'array' + str(child),
-                              a, "child: %d" % child)
+                               a, "child: %d" % child)
         if verbose:
-            print
+            print()
         # Close the file
         fileh.close()
 
         t1 = time.time()
         # Open the previous HDF5 file in read-only mode
-        fileh = open_file(file, mode = "r")
-        print "\nTime spent opening a file with %d groups + %d arrays: %s s" % \
-              (maxchilds, maxchilds, time.time()-t1)
+        fileh = open_file(file, mode="r")
+        print(("\nTime spent opening a file with %d groups + %d arrays: "
+              "%s s" % (maxchilds, maxchilds, time.time() - t1)))
         if verbose:
-            print "\nChildren reading progress: ",
+            print("\nChildren reading progress: ", end=' ')
         # Close the file
         fileh.close()
         # Then, delete the file
-        #os.remove(file)
+        # os.remove(file)
 
     def test01_wideTree(self):
-        """Checking creation of large number of groups (1024) per group
+        """Checking creation of large number of groups (1024) per group.
 
-        Variable 'maxchilds' controls this check. PyTables support
-        up to 4096 childs per group, but this would take too much
-        memory (up to 64 MB) for testing purposes (may be we can add a
-        test for big platforms). A 1024 childs run takes up to 30 MB.
-        A 512 childs test takes around 25 MB.
+        Variable 'maxchilds' controls this check. PyTables supports up
+        to 4096 children per group, but this would take too much memory
+        (up to 64 MB) for testing purposes (maybe we can add a test for
+        big platforms). A 1024-children run takes up to 30 MB. A
+        512-children test takes around 25 MB.
 
         """
 
         import time
         maxchilds = 1000
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_wideTree..." % \
-                  self.__class__.__name__
-            print "Maximum number of childs tested :", maxchilds
+            print('\n', '-=' * 30)
+            print("Running %s.test00_wideTree..." % self.__class__.__name__)
+            print("Maximum number of childs tested :", maxchilds)
         # Open a new empty HDF5 file
         file = tempfile.mktemp(".h5")
         #file = "test_widetree.h5"
 
-        fileh = open_file(file, mode = "w")
+        fileh = open_file(file, mode="w")
         if verbose:
-            print "Children writing progress: ",
+            print("Children writing progress: ", end=' ')
         for child in range(maxchilds):
             if verbose:
-                print "%3d," % (child),
+                print("%3d," % (child), end=' ')
             fileh.create_group(fileh.root, 'group' + str(child),
-                              "child: %d" % child)
+                               "child: %d" % child)
         if verbose:
-            print
+            print()
         # Close the file
         fileh.close()
 
         t1 = time.time()
         # Open the previous HDF5 file in read-only mode
-        fileh = open_file(file, mode = "r")
-        print "\nTime spent opening a file with %d groups: %s s" % \
-              (maxchilds, time.time()-t1)
+        fileh = open_file(file, mode="r")
+        print("\nTime spent opening a file with %d groups: %s s" %
+              (maxchilds, time.time() - t1))
         # Close the file
         fileh.close()
         # Then, delete the file
@@ -110,6 +109,7 @@ class WideTreeTestCase(unittest.TestCase):
 
 #----------------------------------------------------------------------
 
+
 def suite():
     theSuite = unittest.TestSuite()
     theSuite.addTest(unittest.makeSuite(WideTreeTestCase))
diff --git a/bench/widetree2.py b/bench/widetree2.py
index b4fa72c..34f044e 100644
--- a/bench/widetree2.py
+++ b/bench/widetree2.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import unittest
 
 from tables import *
@@ -6,13 +7,16 @@ from tables import *
 
 verbose = 0
 
+
 class Test(IsDescription):
     ngroup = Int32Col(pos=1)
     ntable = Int32Col(pos=2)
     nrow = Int32Col(pos=3)
     #string = StringCol(itemsize=500, pos=4)
 
+
 class WideTreeTestCase(unittest.TestCase):
+
     def test00_Leafs(self):
 
         # Open a new empty HDF5 file
@@ -23,32 +27,32 @@ class WideTreeTestCase(unittest.TestCase):
         complevel = 0
         complib = "lzo"
 
-        print "Writing..."
+        print("Writing...")
         # Open a file in "w"rite mode
         fileh = open_file(filename, mode="w", title="PyTables Stress Test")
 
         for k in range(ngroups):
             # Create the group
-            group = fileh.create_group("/", 'group%04d'% k, "Group %d" % k)
+            group = fileh.create_group("/", 'group%04d' % k, "Group %d" % k)
 
         fileh.close()
 
         # Now, create the tables
         rowswritten = 0
         for k in range(ngroups):
-            print "Filling tables in group:", k
-            fileh = open_file(filename, mode="a", root_uep='group%04d'% k)
+            print("Filling tables in group:", k)
+            fileh = open_file(filename, mode="a", root_uep='group%04d' % k)
             # Get the group
             group = fileh.root
             for j in range(ntables):
                 # Create a table
-                table = fileh.create_table(group, 'table%04d'% j, Test,
-                                          'Table%04d'%j,
-                                          Filters(complevel, complib), nrows)
+                table = fileh.create_table(group, 'table%04d' % j, Test,
+                                           'Table%04d' % j,
+                                           Filters(complevel, complib), nrows)
                 # Get the row object associated with the new table
                 row = table.row
                 # Fill the table
-                for i in xrange(nrows):
+                for i in range(nrows):
                     row['ngroup'] = k
                     row['ntable'] = j
                     row['nrow'] = i
@@ -60,24 +64,24 @@ class WideTreeTestCase(unittest.TestCase):
             # Close the file
             fileh.close()
 
-
         # read the file
-        print "Reading..."
+        print("Reading...")
         rowsread = 0
         for ngroup in range(ngroups):
-            fileh = open_file(filename, mode="r", root_uep='group%04d'% ngroup)
+            fileh = open_file(filename, mode="r",
+                              root_uep='group%04d' % ngroup)
             # Get the group
             group = fileh.root
             ntable = 0
             if verbose:
-                print "Group ==>", group
+                print("Group ==>", group)
             for table in fileh.list_nodes(group, 'Table'):
                 if verbose > 1:
-                    print "Table ==>", table
-                    print "Max rows in buf:", table.nrowsinbuf
-                    print "Rows in", table._v_pathname, ":", table.nrows
-                    print "Buffersize:", table.rowsize * table.nrowsinbuf
-                    print "MaxTuples:", table.nrowsinbuf
+                    print("Table ==>", table)
+                    print("Max rows in buf:", table.nrowsinbuf)
+                    print("Rows in", table._v_pathname, ":", table.nrows)
+                    print("Buffersize:", table.rowsize * table.nrowsinbuf)
+                    print("MaxTuples:", table.nrowsinbuf)
 
                 nrow = 0
                 for row in table:
@@ -86,9 +90,9 @@ class WideTreeTestCase(unittest.TestCase):
                         assert row["ntable"] == ntable
                         assert row["nrow"] == nrow
                     except:
-                        print "Error in group: %d, table: %d, row: %d" % \
-                              (ngroup, ntable, nrow)
-                        print "Record ==>", row
+                        print("Error in group: %d, table: %d, row: %d" %
+                              (ngroup, ntable, nrow))
+                        print("Record ==>", row)
                     nrow += 1
 
                 assert nrow == table.nrows
@@ -99,9 +103,7 @@ class WideTreeTestCase(unittest.TestCase):
             fileh.close()
 
 
-
 #----------------------------------------------------------------------
-
 def suite():
     theSuite = unittest.TestSuite()
     theSuite.addTest(unittest.makeSuite(WideTreeTestCase))
diff --git a/c-blosc/.gitignore b/c-blosc/.gitignore
new file mode 100644
index 0000000..faf2156
--- /dev/null
+++ b/c-blosc/.gitignore
@@ -0,0 +1 @@
+bench/bench
diff --git a/c-blosc/.mailmap b/c-blosc/.mailmap
new file mode 100644
index 0000000..19ca67c
--- /dev/null
+++ b/c-blosc/.mailmap
@@ -0,0 +1,4 @@
+Francesc Alted <francesc at continuum.io> FrancescAlted <faltet at gmail.com>
+Francesc Alted <francesc at continuum.io> FrancescAlted <francesc at continuum.io>
+Francesc Alted <francesc at continuum.io> FrancescAlted <faltet at pytables.org>
+
diff --git a/c-blosc/.travis.yml b/c-blosc/.travis.yml
new file mode 100644
index 0000000..5ba27cd
--- /dev/null
+++ b/c-blosc/.travis.yml
@@ -0,0 +1,12 @@
+language: c
+compiler:
+  - gcc
+  - clang
+install: sudo apt-get install libhdf5-serial-dev
+#install: sudo apt-get install libsnappy-dev zlib1g-dev libhdf5-serial-dev
+#install: sudo apt-get install liblz4-dev libsnappy-dev zlib1g-dev libhdf5-dev
+before_script:
+  - mkdir build
+  - cd build
+  - cmake -DBUILD_HDF5_FILTER=TRUE ..
+script: make && make test
diff --git a/c-blosc/ANNOUNCE.rst b/c-blosc/ANNOUNCE.rst
new file mode 100644
index 0000000..507305d
--- /dev/null
+++ b/c-blosc/ANNOUNCE.rst
@@ -0,0 +1,68 @@
+===============================================================
+ Announcing Blosc 1.3.2
+ A blocking, shuffling and lossless compression library
+===============================================================
+
+What is new?
+============
+
+This is a maintenance release that adds support for compiling the
+internal Snappy sources with MSVC 2008 and includes versioning symbols
+in the internal sources.
+
+For more info, please see the release notes in:
+
+https://github.com/FrancescAlted/blosc/wiki/Release-notes
+
+
+What is it?
+===========
+
+Blosc (http://www.blosc.org) is a high performance compressor
+optimized for binary data.  It has been designed to transmit data to
+the processor cache faster than the traditional, non-compressed,
+direct memory fetch approach via a memcpy() OS call.
+
+Blosc is the first compressor (that I'm aware of) that is meant not
+only to reduce the size of large datasets on-disk or in-memory, but
+also to accelerate object manipulations that are memory-bound.
+
+There is also a handy command-line tool for Blosc called Bloscpack
+(https://github.com/esc/bloscpack) that allows you to compress large
+binary datafiles on-disk.  Although the format for Bloscpack has not
+stabilized yet, it allows you to use Blosc effectively from your
+favorite shell.
+
+
+Download sources
+================
+
+Please go to main web site:
+
+http://www.blosc.org/
+
+and proceed from there.  The github repository is over here:
+
+https://github.com/FrancescAlted/blosc
+
+Blosc is distributed using the MIT license, see LICENSES/BLOSC.txt for
+details.
+
+
+Mailing list
+============
+
+There is an official Blosc mailing list at:
+
+blosc at googlegroups.com
+http://groups.google.es/group/blosc
+
+
+Enjoy Data!
+
+
+.. Local Variables:
+.. mode: rst
+.. coding: utf-8
+.. fill-column: 70
+.. End:
diff --git a/c-blosc/CMakeLists.txt b/c-blosc/CMakeLists.txt
new file mode 100644
index 0000000..b1296a8
--- /dev/null
+++ b/c-blosc/CMakeLists.txt
@@ -0,0 +1,207 @@
+# CMake build system for Blosc
+# ============================
+#
+# Available options:
+#
+#   BUILD_STATIC: default ON
+#       build the static version of the Blosc library
+#   BUILD_HDF5_FILTER: default OFF
+#       build the compression filter for the HDF5 library
+#   BUILD_TESTS: default ON
+#       build test programs and generates the "test" target
+#   BUILD_BENCHMARKS: default ON
+#       build the benchmark program
+#   DEACTIVATE_LZ4: default OFF
+#       do not include support for the LZ4 library
+#   DEACTIVATE_SNAPPY: default OFF
+#       do not include support for the Snappy library
+#   DEACTIVATE_ZLIB: default OFF
+#       do not include support for the Zlib library
+#   PREFER_EXTERNAL_COMPLIBS: default ON
+#       when found, use the installed compression libs instead of included sources
+#   TEST_INCLUDE_BENCH_SINGLE_1: default ON
+#       add a test that runs the benchmark program passing "single" with 1 thread
+#       as first parameter
+#   TEST_INCLUDE_BENCH_SINGLE_N: default ON
+#       add a test that runs the benchmark program passing "single" with all threads
+#       as first parameter
+#   TEST_INCLUDE_BENCH_SUITE: default OFF
+#       add a test that runs the benchmark program passing "suite"
+#       as first parameter
+#   TEST_INCLUDE_BENCH_SUITE_PARALLEL: default OFF
+#       add a test that runs the benchmark program passing "parallel"
+#       as first parameter
+#   TEST_INCLUDE_BENCH_HARDSUITE: default OFF
+#       add a test that runs the benchmark program passing "hardsuite"
+#       as first parameter
+#   TEST_INCLUDE_BENCH_EXTREMESUITE: default OFF
+#       add a test that runs the benchmark program passing "extremesuite"
+#       as first parameter
+#   TEST_INCLUDE_BENCH_DEBUGSUITE: default OFF
+#       add a test that runs the benchmark program passing "debugsuite"
+#       as first parameter
+#
+# Components:
+#
+#    LIB: includes blosc.so
+#    DEV: static includes blosc.a and blosc.h
+#    HDF5_FILTER: includes blosc_filter.so
+#    HDF5_FILTER_DEV: includes blosc_filter.h
+
+
+cmake_minimum_required(VERSION 2.8)
+project(blosc)
+
+# parse the full version numbers from blosc.h
+file(READ ${CMAKE_CURRENT_SOURCE_DIR}/blosc/blosc.h _blosc_h_contents)
+string(REGEX REPLACE ".*#define[ \t]+BLOSC_VERSION_MAJOR[ \t]+([0-9]+).*"
+     "\\1" BLOSC_VERSION_MAJOR ${_blosc_h_contents})
+string(REGEX REPLACE ".*#define[ \t]+BLOSC_VERSION_MINOR[ \t]+([0-9]+).*"
+    "\\1" BLOSC_VERSION_MINOR ${_blosc_h_contents})
+string(REGEX REPLACE ".*#define[ \t]+BLOSC_VERSION_RELEASE[ \t]+([0-9]+).*"
+    "\\1" BLOSC_VERSION_PATCH ${_blosc_h_contents})
+string(REGEX REPLACE ".*#define[ \t]+BLOSC_VERSION_STRING[ \t]+\"([-0-9A-Za-z.]+)\".*"
+    "\\1" BLOSC_VERSION_STRING ${_blosc_h_contents})
+
+message("Configuring for Blosc version: " ${BLOSC_VERSION_STRING})
+
+# options
+option(BUILD_STATIC
+    "Build a static version of the blosc library." ON)
+option(BUILD_HDF5_FILTER
+    "Build a blosc based compression filter for the HDF5 library" OFF)
+option(BUILD_TESTS
+    "Build test programs for the blosc compression library" ON)
+option(BUILD_BENCHMARKS
+    "Build benchmark programs for the blosc compression library" ON)
+option(DEACTIVATE_LZ4
+    "Do not include support for the LZ4 library." OFF)
+option(DEACTIVATE_SNAPPY
+    "Do not include support for the SNAPPY library." OFF)
+option(DEACTIVATE_ZLIB
+    "Do not include support for the ZLIB library." OFF)
+option(PREFER_EXTERNAL_COMPLIBS
+    "When found, use the installed compression libs instead of included sources." ON)
+
+set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
+
+
+if(NOT PREFER_EXTERNAL_COMPLIBS)
+    message(STATUS "Finding external libraries disabled.  Using internal sources.")
+endif(NOT PREFER_EXTERNAL_COMPLIBS)
+
+
+if(NOT DEACTIVATE_LZ4)
+    if(PREFER_EXTERNAL_COMPLIBS)
+        find_package(LZ4)
+    endif(PREFER_EXTERNAL_COMPLIBS)
+    # HAVE_LZ4 will be set to true because even if the library is
+    # not found, we will use the included sources for it
+    set(HAVE_LZ4 TRUE)
+endif(NOT DEACTIVATE_LZ4)
+
+if(NOT DEACTIVATE_SNAPPY)
+    if(PREFER_EXTERNAL_COMPLIBS)
+        find_package(Snappy)
+    endif(PREFER_EXTERNAL_COMPLIBS)
+    # HAVE_SNAPPY will be set to true because even if the library is not found,
+    # we will use the included sources for it
+    set(HAVE_SNAPPY TRUE)
+endif(NOT DEACTIVATE_SNAPPY)
+
+if(NOT DEACTIVATE_ZLIB)
+    # import the ZLIB_ROOT environment variable to help find the zlib library
+    if(PREFER_EXTERNAL_COMPLIBS)
+        set(ZLIB_ROOT $ENV{ZLIB_ROOT})
+        find_package( ZLIB )
+        if (NOT ZLIB_FOUND )
+            message(STATUS "No zlib found.  Using internal sources.")
+        endif (NOT ZLIB_FOUND )
+    endif(PREFER_EXTERNAL_COMPLIBS)
+    # HAVE_ZLIB will be set to true because even if the library is not found,
+    # we will use the included sources for it
+    set(HAVE_ZLIB TRUE)
+endif(NOT DEACTIVATE_ZLIB)
+
+# create the config.h file
+configure_file ("blosc/config.h.in"  "blosc/config.h" )
+# now make sure that you set the build directory on your "Include" path when compiling
+include_directories("${PROJECT_BINARY_DIR}/blosc/")
+
+# force the default build type to Release.
+if(NOT CMAKE_BUILD_TYPE)
+    set(CMAKE_BUILD_TYPE "Release" CACHE STRING
+        "Choose the type of build, options are: Debug Release RelWithDebInfo MinSizeRel."
+        FORCE)
+endif(NOT CMAKE_BUILD_TYPE)
+
+
+# flags
+# @TODO: set -Wall
+# @NOTE: -O3 is enabled in Release mode (CMAKE_BUILD_TYPE="Release")
+
+# Set the "-msse2" build flag only if the CMAKE_C_FLAGS is not already set.
+# Probably "-msse2" should be appended to CMAKE_C_FLAGS_RELEASE.
+if(CMAKE_C_COMPILER_ID STREQUAL GNU OR CMAKE_C_COMPILER_ID STREQUAL Clang)
+     if(NOT CMAKE_C_FLAGS)
+         set(CMAKE_C_FLAGS -msse2 CACHE STRING "C flags." FORCE)
+     endif(NOT CMAKE_C_FLAGS)
+endif(CMAKE_C_COMPILER_ID STREQUAL GNU OR CMAKE_C_COMPILER_ID STREQUAL Clang)
+
+if(MSVC)
+    if(NOT CMAKE_C_FLAGS)
+        set(CMAKE_C_FLAGS "/Ox" CACHE STRING "C flags." FORCE)
+    endif(NOT CMAKE_C_FLAGS)
+endif(MSVC)
+
+if(WIN32)
+    # For some supporting headers
+    include_directories("${CMAKE_CURRENT_SOURCE_DIR}/blosc")
+endif(WIN32)
+
+
+# subdirectories
+add_subdirectory(blosc)
+
+if(BUILD_TESTS)
+    enable_testing()
+    add_subdirectory(tests)
+endif(BUILD_TESTS)
+
+if(BUILD_HDF5_FILTER)
+    add_subdirectory(hdf5)
+endif(BUILD_HDF5_FILTER)
+
+if(BUILD_BENCHMARKS)
+    add_subdirectory(bench)
+endif(BUILD_BENCHMARKS)
+
+
+# uninstall target
+configure_file(
+    "${CMAKE_CURRENT_SOURCE_DIR}/cmake_uninstall.cmake.in"
+    "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake"
+    IMMEDIATE @ONLY)
+
+add_custom_target(uninstall
+    COMMAND ${CMAKE_COMMAND} -P ${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake)
+
+
+# packaging
+include(InstallRequiredSystemLibraries)
+
+set(CPACK_GENERATOR TGZ ZIP)
+set(CPACK_SOURCE_GENERATOR TGZ ZIP)
+set(CPACK_PACKAGE_VERSION_MAJOR ${BLOSC_VERSION_MAJOR})
+set(CPACK_PACKAGE_VERSION_MINOR ${BLOSC_VERSION_MINOR})
+set(CPACK_PACKAGE_VERSION_PATCH ${BLOSC_VERSION_PATCH})
+set(CPACK_PACKAGE_VERSION ${BLOSC_VERSION_STRING})
+set(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_CURRENT_SOURCE_DIR}/README.rst")
+set(CPACK_PACKAGE_DESCRIPTION_SUMMARY
+    "A blocking, shuffling and lossless compression library")
+set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/LICENSES/BLOSC.txt")
+set(CPACK_SOURCE_IGNORE_FILES "/build.*;.*~;\\\\.git.*;\\\\.DS_Store")
+set(CPACK_STRIP_FILES TRUE)
+set(CPACK_SOURCE_STRIP_FILES TRUE)
+
+include(CPack)
diff --git a/c-blosc/LICENSES/BLOSC.txt b/c-blosc/LICENSES/BLOSC.txt
new file mode 100644
index 0000000..5b0feb7
--- /dev/null
+++ b/c-blosc/LICENSES/BLOSC.txt
@@ -0,0 +1,23 @@
+Blosc - A blocking, shuffling and lossless compression library
+
+Copyright (C) 2009-2012 Francesc Alted <faltet at gmail.com>
+Copyright (C) 2013      Francesc Alted <faltet at gmail.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
diff --git a/c-blosc/LICENSES/FASTLZ.txt b/c-blosc/LICENSES/FASTLZ.txt
new file mode 100644
index 0000000..4a6abd6
--- /dev/null
+++ b/c-blosc/LICENSES/FASTLZ.txt
@@ -0,0 +1,24 @@
+FastLZ - lightning-fast lossless compression library
+
+Copyright (C) 2007 Ariya Hidayat (ariya at kde.org)
+Copyright (C) 2006 Ariya Hidayat (ariya at kde.org)
+Copyright (C) 2005 Ariya Hidayat (ariya at kde.org)
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
diff --git a/LICENSE.txt b/c-blosc/LICENSES/H5PY.txt
similarity index 74%
copy from LICENSE.txt
copy to c-blosc/LICENSES/H5PY.txt
index dbd08e5..15b30f2 100644
--- a/LICENSE.txt
+++ b/c-blosc/LICENSES/H5PY.txt
@@ -1,9 +1,7 @@
-Copyright Notice and Statement for PyTables Software Library and Utilities:
+Copyright Notice and Statement for the h5py Project
 
-Copyright (c) 2002-2004 by Francesc Alted
-Copyright (c) 2005-2007 by Carabos Coop. V.
-Copyright (c) 2008-2010 by Francesc Alted
-Copyright (c) 2011-2013 by PyTables maintainers
+Copyright (c) 2008 Andrew Collette
+http://h5py.alfven.org
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
@@ -18,9 +16,9 @@ b. Redistributions in binary form must reproduce the above copyright
    documentation and/or other materials provided with the
    distribution.
 
-c. Neither the name of Francesc Alted nor the names of its
-   contributors may be used to endorse or promote products derived
-   from this software without specific prior written permission.
+c. Neither the name of the author nor the names of contributors may 
+   be used to endorse or promote products derived from this software 
+   without specific prior written permission.
 
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
@@ -33,3 +31,4 @@ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
diff --git a/LICENSE.txt b/c-blosc/LICENSES/LZ4.txt
similarity index 53%
copy from LICENSE.txt
copy to c-blosc/LICENSES/LZ4.txt
index dbd08e5..39784cb 100644
--- a/LICENSE.txt
+++ b/c-blosc/LICENSES/LZ4.txt
@@ -1,26 +1,18 @@
-Copyright Notice and Statement for PyTables Software Library and Utilities:
+LZ4 - Fast LZ compression algorithm
 
-Copyright (c) 2002-2004 by Francesc Alted
-Copyright (c) 2005-2007 by Carabos Coop. V.
-Copyright (c) 2008-2010 by Francesc Alted
-Copyright (c) 2011-2013 by PyTables maintainers
-All rights reserved.
+Copyright (C) 2011-2013, Yann Collet.
+BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are
 met:
 
-a. Redistributions of source code must retain the above copyright
-   notice, this list of conditions and the following disclaimer.
-
-b. Redistributions in binary form must reproduce the above copyright
-   notice, this list of conditions and the following disclaimer in the
-   documentation and/or other materials provided with the
-   distribution.
-
-c. Neither the name of Francesc Alted nor the names of its
-   contributors may be used to endorse or promote products derived
-   from this software without specific prior written permission.
+    * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
 
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
@@ -33,3 +25,8 @@ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+You can contact the author at :
+- LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
+- LZ4 source repository : http://code.google.com/p/lz4/
+
diff --git a/LICENSE.txt b/c-blosc/LICENSES/SNAPPY.txt
similarity index 54%
copy from LICENSE.txt
copy to c-blosc/LICENSES/SNAPPY.txt
index dbd08e5..8d6bd9f 100644
--- a/LICENSE.txt
+++ b/c-blosc/LICENSES/SNAPPY.txt
@@ -1,26 +1,19 @@
-Copyright Notice and Statement for PyTables Software Library and Utilities:
-
-Copyright (c) 2002-2004 by Francesc Alted
-Copyright (c) 2005-2007 by Carabos Coop. V.
-Copyright (c) 2008-2010 by Francesc Alted
-Copyright (c) 2011-2013 by PyTables maintainers
+Copyright 2011, Google Inc.
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are
 met:
 
-a. Redistributions of source code must retain the above copyright
-   notice, this list of conditions and the following disclaimer.
-
-b. Redistributions in binary form must reproduce the above copyright
-   notice, this list of conditions and the following disclaimer in the
-   documentation and/or other materials provided with the
-   distribution.
-
-c. Neither the name of Francesc Alted nor the names of its
-   contributors may be used to endorse or promote products derived
-   from this software without specific prior written permission.
+    * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+    * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
 
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
diff --git a/LICENSES/STDINT.txt b/c-blosc/LICENSES/STDINT.txt
similarity index 100%
copy from LICENSES/STDINT.txt
copy to c-blosc/LICENSES/STDINT.txt
diff --git a/c-blosc/LICENSES/ZLIB.txt b/c-blosc/LICENSES/ZLIB.txt
new file mode 100644
index 0000000..5d74f5c
--- /dev/null
+++ b/c-blosc/LICENSES/ZLIB.txt
@@ -0,0 +1,22 @@
+Copyright notice:
+
+ (C) 1995-2013 Jean-loup Gailly and Mark Adler
+
+  This software is provided 'as-is', without any express or implied
+  warranty.  In no event will the authors be held liable for any damages
+  arising from the use of this software.
+
+  Permission is granted to anyone to use this software for any purpose,
+  including commercial applications, and to alter it and redistribute it
+  freely, subject to the following restrictions:
+
+  1. The origin of this software must not be misrepresented; you must not
+     claim that you wrote the original software. If you use this software
+     in a product, an acknowledgment in the product documentation would be
+     appreciated but is not required.
+  2. Altered source versions must be plainly marked as such, and must not be
+     misrepresented as being the original software.
+  3. This notice may not be removed or altered from any source distribution.
+
+  Jean-loup Gailly        Mark Adler
+  jloup at gzip.org          madler at alumni.caltech.edu
diff --git a/c-blosc/README.rst b/c-blosc/README.rst
new file mode 100644
index 0000000..f7f3a96
--- /dev/null
+++ b/c-blosc/README.rst
@@ -0,0 +1,286 @@
+===============================================================
+ Blosc: A blocking, shuffling and lossless compression library
+===============================================================
+
+:Author: Francesc Alted
+:Contact: faltet at gmail.com
+:URL: http://www.blosc.org
+
+What is it?
+===========
+
+Blosc [1]_ is a high performance compressor optimized for binary data.
+It has been designed to transmit data to the processor cache faster
+than the traditional, non-compressed, direct memory fetch approach via
+a memcpy() OS call.  Blosc is the first compressor (that I'm aware of)
+that is meant not only to reduce the size of large datasets on-disk or
+in-memory, but also to accelerate memory-bound computations.
+
+It uses the blocking technique (as described in [2]_) to reduce
+activity on the memory bus as much as possible. In short, this
+technique works by dividing datasets into blocks that are small enough
+to fit in the caches of modern processors and performing compression /
+decompression there.  It also leverages, if available, SIMD
+instructions (SSE2) and multi-threading capabilities of CPUs, in order
+to accelerate the compression / decompression process to a maximum.
+
+Blosc is actually a metacompressor, meaning that it can use a range
+of compression libraries for performing the actual
+compression/decompression. Right now, it comes with integrated support
+for BloscLZ (the original one), LZ4, LZ4HC, Snappy and Zlib. Blosc ships
+with full sources for all compressors, so if it does not find the
+libraries installed on your system, it will compile them from the
+included sources and integrate them into the Blosc library anyway. That
+means you can trust that all supported compressors are integrated in
+Blosc on all supported platforms.
+
+You can see some benchmarks of Blosc performance in [3]_.
+
+Blosc is distributed using the MIT license, see LICENSES/BLOSC.txt for
+details.
+
+.. [1] http://www.blosc.org
+.. [2] http://blosc.org/docs/StarvingCPUs-CISE-2010.pdf
+.. [3] http://blosc.org/trac/wiki/SyntheticBenchmarks
+
+Meta-compression and other advantages over existing compressors
+===============================================================
+
+Blosc is not like other compressors: it should rather be called a
+meta-compressor.  This is so because it can use different compressors
+and pre-conditioners (programs that generally improve compression
+ratio).  At any rate, it can also be called a compressor because it
+happens that it already integrates one compressor and one
+pre-conditioner, so it can actually work like so.
+
+Currently it uses BloscLZ, a compressor heavily based on FastLZ
+(http://fastlz.org/), and a highly optimized (it can use SSE2
+instructions, if available) Shuffle pre-conditioner. However,
+different compressors or pre-conditioners may be added in the future.
+
+Blosc is in charge of coordinating the compressor and pre-conditioners
+so that they can leverage the blocking technique (described above) as
+well as multi-threaded execution (if several cores are available)
+automatically. That makes that every compressor and pre-conditioner
+will work at very high speeds, even if it was not initially designed
+for doing blocking or multi-threading.
+
+Other advantages of Blosc are:
+
+* Meant for binary data: can take advantage of the type size
+  meta-information for improved compression ratio (using the
+  integrated shuffle pre-conditioner).
+
+* Small overhead on non-compressible data: only a maximum of 16
+  additional bytes over the source buffer length are needed to
+  compress *every* input.
+
+* Maximum destination length: contrary to many other
+  compressors, both compression and decompression routines support
+  a maximum size for the destination buffer.
+
+* Replacement for memcpy(): it supports a 0 compression level that
+  does not compress at all and only adds 16 bytes of overhead. In
+  this mode Blosc can copy memory usually faster than a plain
+  memcpy().
+
+When taken together, all these features set Blosc apart from other
+similar solutions.
+
+Compiling your application with a minimalistic Blosc
+====================================================
+
+The minimal Blosc consists of the following files (in the blosc/ directory)::
+
+    blosc.h and blosc.c      -- the main routines
+    shuffle.h and shuffle.c  -- the shuffle code
+    blosclz.h and blosclz.c  -- the blosclz compressor
+
+Just add these files to your project in order to use Blosc.  For
+information on compression and decompression routines, see blosc.h.
+
+To compile using GCC (4.4 or higher recommended) on Unix:
+
+.. code-block:: console
+
+   $ gcc -O3 -msse2 -o myprog myprog.c blosc/*.c -lpthread
+
+Using Windows and MINGW:
+
+.. code-block:: console
+
+   $ gcc -O3 -msse2 -o myprog myprog.c blosc\*.c
+
+Using Windows and MSVC (2010 or higher recommended):
+
+.. code-block:: console
+
+  $ cl /Ox /Femyprog.exe myprog.c blosc\*.c
+
+A simple usage example is the benchmark in the bench/bench.c file.
+Another example for using Blosc as a generic HDF5 filter is in the
+hdf5/ directory.
+
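+For a quick, self-contained illustration, a round-trip through the two
+basic calls might look like the following sketch (a sketch only, not
+part of the library; error handling is minimal and the use of
+blosc_init()/blosc_destroy() follows the recommendation in the release
+notes)::
+
+    #include <stdio.h>
+    #include <string.h>
+    #include "blosc.h"
+
+    int main(void) {
+      static float data[1000], out[1000];
+      /* destination must allow for BLOSC_MAX_OVERHEAD extra bytes */
+      static char dest[sizeof(data) + BLOSC_MAX_OVERHEAD];
+      size_t nbytes = sizeof(data);
+      int i, csize, dsize = 0;
+
+      for (i = 0; i < 1000; i++) data[i] = (float)i;
+      blosc_init();
+      /* clevel 5, shuffle on, typesize = sizeof(float) */
+      csize = blosc_compress(5, 1, sizeof(float), nbytes, data,
+                             dest, sizeof(dest));
+      if (csize > 0)                       /* <= 0 means error / no gain */
+        dsize = blosc_decompress(dest, out, sizeof(out));
+      printf("%d -> %d bytes, ok=%d\n", (int)nbytes, csize,
+             csize > 0 && dsize == (int)nbytes &&
+             memcmp(data, out, nbytes) == 0);
+      blosc_destroy();
+      return 0;
+    }
+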
+I have not tried to compile this with compilers other than GCC, clang,
+MINGW, Intel ICC or MSVC yet. Please report your experiences with your
+own platforms.
+
+Adding support for other compressors (LZ4, LZ4HC, Snappy, Zlib)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you want to add support for the LZ4, LZ4HC, Snappy or Zlib
+compressors, just add the symbols HAVE_LZ4 (will include both LZ4 and
+LZ4HC), HAVE_SNAPPY and HAVE_ZLIB during compilation and add the
+libraries. For example, for compiling Blosc with Zlib support do:
+
+.. code-block:: console
+
+   $ gcc -O3 -msse2 -o myprog myprog.c blosc/*.c -lpthread -DHAVE_ZLIB -lz
+
+In the bench/ directory there are a couple of Makefiles (one for UNIX
+and the other for MinGW) with more complete build examples, like
+selecting between libraries or internal sources for the compressors.
+
+Compiling the Blosc library with CMake
+======================================
+
+Blosc can also be built, tested and installed using CMake_. Although
+this procedure is a bit more involved than the one described above, it
+is the most general because it allows integrating compressors other
+than BloscLZ, either from libraries or from internal sources. Hence,
+serious library developers should use this approach.
+
+The following procedure describes the "out of source" build.
+
+Create the build directory and move into it:
+
+.. code-block:: console
+
+  $ mkdir build
+  $ cd build
+
+Now run CMake configuration and optionally specify the installation
+directory (e.g. '/usr' or '/usr/local'):
+
+.. code-block:: console
+
+  $ cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix_directory ..
+
+CMake lets you configure Blosc in many different ways, like preferring
+internal or external sources for the compressors or enabling/disabling
+them.  Please note that configuration can also be performed using the UI
+tools provided by CMake_ (ccmake or cmake-gui):
+
+.. code-block:: console
+
+  $ ccmake ..      # run a curses-based interface
+  $ cmake-gui ..   # run a graphical interface
+
+Build, test and install Blosc:
+
+.. code-block:: console
+
+  $ make
+  $ make test
+  $ make install
+
+The static and dynamic versions of the Blosc library, together with
+the header files, will be installed into the specified
+CMAKE_INSTALL_PREFIX.
+
+.. _CMake: http://www.cmake.org
+
+Adding support for other compressors (LZ4, LZ4HC, Snappy, Zlib) with CMake
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The CMake files in Blosc are configured to automatically detect other
+compressors like LZ4, LZ4HC, Snappy or Zlib by default.  So as long as
+the libraries and the header files for these libraries are accessible,
+these will be used by default.
+
+*Note on Zlib*: the library should be easily found on UNIX systems,
+although on Windows you can help CMake find it by setting the
+environment variable 'ZLIB_ROOT' to where the zlib 'include' and 'lib'
+directories are. Also, make sure that the Zlib DLL is in your
+'\Windows' directory.
+
+However, the full sources for LZ4, LZ4HC, Snappy and Zlib are included
+in Blosc too. So, in general, you should not worry about not having
+(or CMake not finding) the libraries on your system: in that case,
+their sources will be automatically compiled for you. That
+effectively means you can count on complete support for all the
+supported compression libraries on all supported platforms.
+
+If you want to force Blosc to use the included compression sources
+instead of trying to find the libraries in the system first, you can
+switch off the PREFER_EXTERNAL_COMPLIBS CMake option:
+
+.. code-block:: console
+
+  $ cmake -DPREFER_EXTERNAL_COMPLIBS=OFF ..
+
+You can also disable support for some compression libraries:
+
+.. code-block:: console
+
+  $ cmake -DDEACTIVATE_SNAPPY=ON ..
+
+Mac OSX troubleshooting
+=======================
+
+If you run into compilation troubles when using Mac OSX, please make
+sure that you have installed the command line developer tools.  You
+can always install them with:
+
+.. code-block:: console
+
+  $ xcode-select --install
+
+Wrapper for Python
+==================
+
+Blosc has an official wrapper for Python.  See:
+
+http://blosc.pydata.org
+https://github.com/FrancescAlted/python-blosc
+
+Filter for HDF5
+===============
+
+For those that want to use Blosc as a filter in the HDF5 library,
+there is a sample implementation in the hdf5/ directory.
+
+Mailing list
+============
+
+There is an official mailing list for Blosc at:
+
+blosc at googlegroups.com
+http://groups.google.es/group/blosc
+
+Acknowledgments
+===============
+
+I'd like to thank the PyTables community that has collaborated in the
+exhaustive testing of Blosc.  With an aggregate amount of more than
+300 TB of different datasets compressed *and* decompressed
+successfully, I can say that Blosc is pretty safe now and ready for
+production purposes.
+
+Other important contributions:
+
+* Valentin Haenel did terrific work implementing the support for the
+  Snappy compression, fixing typos and improving docs and the plotting
+  script.
+
+* Thibault North, with ideas from Oscar Villellas, contributed a way
+  to call Blosc from different threads in a safe way.
+
+* The CMake support was initially contributed by Thibault North, and
+  Antonio Valentino and Mark Wiebe made great enhancements to it.
+
+
+----
+
+  **Enjoy data!**
diff --git a/c-blosc/README_HEADER.rst b/c-blosc/README_HEADER.rst
new file mode 100644
index 0000000..96172d5
--- /dev/null
+++ b/c-blosc/README_HEADER.rst
@@ -0,0 +1,66 @@
+Blosc Header Format
+===================
+
+Blosc (as of Version 1.0.0) has the following 16 byte header that stores
+information about the compressed buffer::
+
+    |-0-|-1-|-2-|-3-|-4-|-5-|-6-|-7-|-8-|-9-|-A-|-B-|-C-|-D-|-E-|-F-|
+      ^   ^   ^   ^ |     nbytes    |   blocksize   |    ctbytes    |
+      |   |   |   |
+      |   |   |   +--typesize
+      |   |   +------flags
+      |   +----------versionlz
+      +--------------version
+
+Datatypes of the Header Entries
+-------------------------------
+
+All entries are little endian.
+
+:version:
+    (``uint8``) Blosc format version.
+:versionlz:
+    (``uint8``) Blosclz format version (internal Lempel-Ziv algorithm).
+:flags and compressor enumeration:
+    (``bitfield``) The flags of the buffer.
+
+    :bit 0 (``0x01``):
+        Whether the shuffle filter has been applied or not.
+    :bit 1 (``0x02``):
+        Whether the internal buffer is a pure memcpy or not.
+    :bit 2 (``0x04``):
+        Reserved
+    :bit 3 (``0x08``):
+        Reserved
+    :bit 4 (``0x10``):
+        Reserved
+    :bit 5 (``0x20``):
+        Part of the enumeration for compressors.
+    :bit 6 (``0x40``):
+        Part of the enumeration for compressors.
+    :bit 7 (``0x80``):
+        Part of the enumeration for compressors.
+
+    The last three bits form an enumeration that allows the use of
+    alternative compressors.
+
+    :``0``:
+        ``blosclz``
+    :``1``:
+        ``lz4``
+    :``2``:
+        ``lz4hc``
+    :``3``:
+        ``snappy``
+    :``4``:
+        ``zlib``
+
+:typesize:
+    (``uint8``) Number of bytes for the atomic type.
+:nbytes:
+    (``uint32``) Uncompressed size of the buffer.
+:blocksize:
+    (``uint32``) Size of internal blocks.
+:ctbytes:
+    (``uint32``) Compressed size of the buffer.
+
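+As an illustration, a reader could decode these fields with a small
+hypothetical helper like the following (this is not part of the Blosc
+API; a little-endian host is assumed for the 32-bit fields)::
+
+    #include <stdint.h>
+    #include <string.h>
+
+    /* Decode the 16-byte Blosc header laid out above. */
+    static void decode_header(const uint8_t *buf, uint32_t *nbytes,
+                              uint32_t *blocksize, uint32_t *ctbytes,
+                              int *compressor) {
+      uint8_t flags = buf[2];            /* byte 2: flags bitfield */
+      *compressor = (flags >> 5) & 0x7;  /* bits 5-7: compressor enum */
+      memcpy(nbytes, buf + 4, 4);        /* bytes 4-7: nbytes */
+      memcpy(blocksize, buf + 8, 4);     /* bytes 8-11: blocksize */
+      memcpy(ctbytes, buf + 12, 4);      /* bytes 12-15: ctbytes */
+    }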
diff --git a/c-blosc/README_THREADED.rst b/c-blosc/README_THREADED.rst
new file mode 100644
index 0000000..4d427f9
--- /dev/null
+++ b/c-blosc/README_THREADED.rst
@@ -0,0 +1,33 @@
+Blosc supports threading
+========================
+
+Threads are the most efficient way to program parallel code for
+multi-core processors, but also the most difficult to program well.
+They also have a non-negligible start-up time that does not fit well
+with a high-performance compressor such as Blosc tries to be.
+
+In order to reduce the overhead of threads as much as possible, I've
+decided to implement a pool of threads (the workers) that are waiting
+for the main process (the master) to send them jobs (basically,
+compressing and decompressing small blocks of the initial buffer).
+
+Despite this and many other internal optimizations in the threaded
+code, it does not work faster than the serial version for buffer sizes
+around 64/128 KB or less.  This is for Intel Quad Core2 (Q8400 @ 2.66
+GHz) / Linux (openSUSE 11.2, 64 bit), but your mileage may vary (and
+will vary!) for other processors / operating systems.
+
+In contrast, for buffers larger than 64/128 KB, the threaded version
+starts to perform significantly better, with the sweet spot at 1 MB
+(again, this is with my setup).  For buffer sizes larger than 1 MB,
+the threaded code slows down again, probably due to a cache size
+issue, but it is still considerably faster than the serial code.
+
+This is why Blosc falls back to the serial version for such 'small'
+buffers.  So you don't have to worry too much about deciding whether
+to set the number of threads to 1 (serial) or more (parallel).  Just
+set it to the number of cores in your processor and you are done!
+
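+A minimal sketch of this recommended usage (function names as in
+blosc.h; the thread count is just an example)::
+
+    #include "blosc.h"
+
+    int main(void) {
+      blosc_init();
+      blosc_set_nthreads(4);   /* e.g. a quad-core processor */
+      /* ... blosc_compress() / blosc_decompress() calls ... */
+      blosc_destroy();
+      return 0;
+    }
+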
+Francesc Alted
diff --git a/c-blosc/RELEASE_NOTES.rst b/c-blosc/RELEASE_NOTES.rst
new file mode 100644
index 0000000..baf3901
--- /dev/null
+++ b/c-blosc/RELEASE_NOTES.rst
@@ -0,0 +1,318 @@
+===============================
+ Release notes for Blosc 1.3.2
+===============================
+
+:Author: Francesc Alted
+:Contact: faltet at gmail.com
+:URL: http://www.blosc.org
+
+
+Changes from 1.3.1 to 1.3.2
+===========================
+
+* Fix for compiling Snappy sources against MSVC 2008.  Thanks to Mark
+  Wiebe!
+
+* Versions for the internal LZ4 and Snappy are now supported.  When
+  compiled against the external libraries, this info is not available
+  because they do not provide the symbols (yet).
+
+
+Changes from 1.3.0 to 1.3.1
+===========================
+
+* Fixes for a series of issues with the filter for HDF5 and, in
+  particular, a problem in the decompression buffer size that made it
+  impossible to use the blosc_filter in combination with other ones
+  (e.g. fletcher32).  See
+  https://github.com/PyTables/PyTables/issues/21.
+
+  Thanks to Antonio Valentino for the fix!
+
+
+Changes from 1.2.4 to 1.3.0
+===========================
+
+A nice handful of compressors have been added to Blosc:
+
+* LZ4 (http://code.google.com/p/lz4/): A very fast
+  compressor/decompressor.  Could be thought of as a replacement for
+  the original BloscLZ, but it can behave better in some scenarios.
+
+* LZ4HC (http://code.google.com/p/lz4/): This is a variation of LZ4
+  that achieves a much better compression ratio at the cost of being
+  much slower at compressing.  Decompression speed is unaffected (and
+  sometimes better than when using LZ4 itself!), so this is very good
+  for read-only datasets.
+
+* Snappy (http://code.google.com/p/snappy/): A very fast
+  compressor/decompressor.  Could be thought of as a replacement for
+  the original BloscLZ, but it can behave better in some scenarios.
+
+* Zlib (http://www.zlib.net/): This is a classic.  It achieves very
+  good compression ratios, at the cost of speed.  However,
+  decompression speed is still pretty good, so it is a good candidate
+  for read-only datasets.
+
+With this, you can select the compression library with the new
+function::
+
+  int blosc_set_complib(char* complib);
+
+where you pass the library that you want to use (currently "blosclz",
+"lz4", "lz4hc", "snappy" and "zlib", but the list can grow in the
+future).
+
+You can get more info about compressor support in your Blosc build by
+using these functions::
+
+  char* blosc_list_compressors(void);
+  int blosc_get_complib_info(char *compressor, char **complib, char **version);
+
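+As an illustration, a sketch that selects LZ4 and inspects the build
+(a sketch only, assuming exactly the signatures listed above)::
+
+    #include <stdio.h>
+    #include "blosc.h"
+
+    int main(void) {
+      char *complib, *version;
+      printf("available: %s\n", blosc_list_compressors());
+      if (blosc_set_complib("lz4") < 0)
+        return 1;                        /* LZ4 not in this build */
+      if (blosc_get_complib_info("lz4", &complib, &version) >= 0)
+        printf("using %s %s\n", complib, version);
+      return 0;
+    }
+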
+
+Changes from 1.2.2 to 1.2.3
+===========================
+
+- Added a `blosc_init()` and `blosc_destroy()` so that the global lock
+  can be initialized safely.  These new functions will also allow other
+  kinds of initializations/destructions in the future.
+
+  Existing applications using Blosc do not need to start using the new
+  functions right away, as long as they call `blosc_set_nthreads()`
+  before anything else.  However, using them is highly recommended.
+
+  Thanks to Oscar Villellas for the init/destroy suggestion, it is a
+  nice idea!
+
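+  As an illustration, the intended calling pattern might look like this
+  minimal sketch (names as above)::
+
+      #include "blosc.h"
+
+      int main(void) {
+        blosc_init();      /* initialize the global lock safely */
+        /* ... blosc_set_nthreads(), blosc_compress(), ... */
+        blosc_destroy();   /* release global resources */
+        return 0;
+      }
+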
+
+Changes from 1.2.1 to 1.2.2
+===========================
+
+- All important warnings removed for all tested platforms.  This allows
+  for less intrusive compilation of applications that include the Blosc
+  source code.
+
+- The `bench/bench.c` has been updated so that it can be compiled on
+  Windows again.
+
+- The new web site has been set to: http://www.blosc.org
+
+
+Changes from 1.2 to 1.2.1
+=========================
+
+- Fixed a problem with the global lock not being initialized.  This
+  mostly affected Windows platforms.  Thanks to Christoph Gohlke for
+  finding the cure!
+
+
+Changes from 1.1.5 to 1.2
+=========================
+
+- Now it is possible to call Blosc simultaneously from a parent threaded
+  application without problems.  This has been solved by setting a
+  global lock so that the different calling threads do not execute Blosc
+  routines at the same time.  Of course, real threading work is still
+  available *inside* Blosc itself.  Thanks to Thibault North.
+
+- Support for cmake is now included.  Linux, Mac OSX and Windows
+  platforms are supported.  Thanks to Thibault North, Antonio Valentino
+  and Mark Wiebe.
+
+- Fixed many compiler warnings (especially about unused variables).
+
+- As a consequence of the above, a minimal change in the API has been
+  introduced.  That is, the previous API::
+
+    void blosc_free_resources(void)
+
+  has changed to::
+
+    int blosc_free_resources(void)
+
+  Now, a return value of 0 means that the resources have been released
+  successfully.  If the return value is negative, then it is not
+  guaranteed that all the resources have been freed.
+
+- Many typos were fixed and docs have been improved.  The script for
+  generating nice plots for the included benchmarks has been improved
+  too.  Thanks to Valentin Haenel.
+
+
+Changes from 1.1.4 to 1.1.5
+===========================
+
+- Fix compile error with msvc compilers (Christoph Gohlke)
+
+
+Changes from 1.1.3 to 1.1.4
+===========================
+
+- Redefinition of the BLOSC_MAX_BUFFERSIZE constant as (INT_MAX -
+  BLOSC_MAX_OVERHEAD) instead of just INT_MAX.  This prevents producing
+  outputs larger than INT_MAX, which is not supported.
+
+- The `exit()` call has been replaced by a ``return -1`` in blosc_compress()
+  when checking for buffer sizes.  Now programs will not just exit when
+  the buffer is too large, but return a negative code.
+
+- Improvements in explicit casts.  Blosc compiles without warnings
+  (with GCC) now.
+
+- Lots of improvements in docs, in particular a nice ascii-art diagram
+  of the Blosc format (Valentin Haenel).
+
+- Improvements to the plot-speeds.py (Valentin Haenel).
+
+- [HDF5 filter] Adapted HDF5 filter to use HDF5 1.8 by default
+  (Antonio Valentino).
+
+- [HDF5 filter] New version of H5Z_class_t definition (Antonio Valentino).
+
+
+Changes from 1.1.2 to 1.1.3
+===========================
+
+- Much improved compression ratio when using large blocks (> 64 KB) and
+  high compression levels (> 6) under some circumstances (special data
+  distribution).  Closes #7.
+
+
+Changes from 1.1.1 to 1.1.2
+===========================
+
+- Fixes for small typesizes (#6 and #1 of python-blosc).
+
+
+Changes from 1.1 to 1.1.1
+=========================
+
+- Added code to avoid calling blosc_set_nthreads more than necessary.
+  That will improve performance by up to 3x or more, especially for small
+  chunksizes (< 1 MB).
+
+
+Changes from 1.0 to 1.1
+=======================
+
+- Added code for emulating pthreads API on Windows.  No need to link
+  explicitly with pthreads lib on Windows anymore.  However, performance
+  is somewhat worse because the new emulation layer does not support
+  the `pthread_barrier_wait()` call natively.  But the big improvement
+  in ease of installation is worth this penalty (most especially on
+  64-bit Windows, where pthreads-win32 support is flaky).
+
+- New BLOSC_MAX_BUFFERSIZE, BLOSC_MAX_TYPESIZE and BLOSC_MAX_THREADS
+  symbols are available in blosc.h.  These can be useful for validating
+  parameters in clients.  Thanks to Robert Smallshire for suggesting
+  that.
+
+- A new BLOSC_MIN_HEADER_LENGTH symbol in blosc.h tells the minimum
+  length (in bytes) of a Blosc header.  `blosc_cbuffer_sizes()`
+  only needs these bytes to be passed to work correctly.
+
+- Removed many warnings (related with potentially dangerous type-casting
+  code) issued by MSVC 2008 in 64-bit mode.
+
+- Fixed a problem with the computation of the blocksize in the Blosc
+  filter for HDF5.
+
+- Fixed a problem with large datatypes.  See
+  http://www.pytables.org/trac/ticket/288 for more info.
+
+- Now Blosc is able to work well even if you fork an existing process
+  with a pool of threads.  Bug discovered when PyTables runs in
+  multiprocess environments.  See http://pytables.org/trac/ticket/295
+  for details.
+
+- Added a new `blosc_getitem()` call to allow the retrieval of items in
+  sizes smaller than the complete buffer.  That is useful for the carray
+  project, but certainly for others too.
+
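+  A short usage sketch (signature as in blosc.h; the wrapper and its
+  arguments are illustrative only)::
+
+      #include "blosc.h"
+
+      /* Read `nitems` doubles starting at item `start` from a buffer
+         previously produced by blosc_compress(), without decompressing
+         the whole buffer. */
+      static int read_doubles(const void *cbuf, int start, int nitems,
+                              double *out) {
+        return blosc_getitem(cbuf, start, nitems, out);
+      }
+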
+
+Changes from 0.9.5 to 1.0
+=========================
+
+- Added a filter for HDF5 so that people can use Blosc outside PyTables,
+  if they want to.
+
+- Many small improvements, specially in README files.
+
+- Do not assume that size_t is uint32_t for every platform.
+
+- Added more protection for large buffers and in memory allocation
+  routines.
+
+- The src/ directory has been renamed to blosc/.
+
+- The `maxbytes` parameter in `blosc_compress()` has been renamed to
+  `destsize`.  This is for consistency with the `blosc_decompress()`
+  parameters.
+
+
+Changes from 0.9.4 to 0.9.5
+===========================
+
+- Now, compression level 0 is allowed, meaning no compression at all.
+  The overhead of this mode will always be BLOSC_MAX_OVERHEAD (16)
+  bytes.  This mode actually represents using Blosc as a basic memory
+  container.
+
+- Added support for a new `maxbytes` parameter in ``blosc_compress()``.
+  It represents the maximum number of output bytes.  Unit tests added too.
+
+- Added 3 new functions for querying different metadata on compressed
+  buffers.  A test suite for testing the new API has been added too.
+
+
+Changes from 0.9.3 to 0.9.4
+===========================
+
+- Support for cross-platform big/little endian compatibility in Blosc
+  headers has been added.
+
+- Fixed several failures exposed by the extremesuite.  The problem was a
+  bad check for limits in the buffer size while compressing.
+
+- Added a new suite in bench.c called ``debugsuite`` that is
+  appropriate for debugging purposes.  Now, the ``extremesuite`` can be
+  used for running the complete (and extremely long) suite.
+
+
+Changes from 0.9.0 to 0.9.3
+===========================
+
+- Fixed several nasty bugs uncovered by the new suites in bench.c.
+  Thanks to Tony Theodore and Gabriel Beckers for their (very)
+  responsive beta testing and feedback.
+
+- Added several modes (suites), namely ``suite``, ``hardsuite`` and
+  ``extremehardsuite`` in bench.c so as to allow different levels of
+  testing.
+
+
+Changes from 0.8.0 to 0.9
+=========================
+
+- Internal format version bumped to 2 in order to allow an easy way to
+  indicate that a buffer is being saved uncompressed.  This is not
+  supported yet, but it might be in the future.
+
+- Blosc can now use threads for leveraging the increasing number of
+  multi-core processors out there.  See README-threaded.txt for more
+  info, and the sketch below.
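+
+  For example (a minimal sketch)::
+
+    blosc_set_nthreads(4);    /* use up to 4 threads from now on */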
+
+- Added a protection for MacOSX so that it does not link against the
+  posix_memalign() function, which seems not available in old versions
+  of MacOSX (for example, Tiger).  At any rate, posix_memalign() is not
+  necessary on Mac because 16 bytes alignment is ensured by default.
+  Thanks to Ivan Vilata.  Fixes #3.
+
+
+
+
+.. Local Variables:
+.. mode: rst
+.. coding: utf-8
+.. fill-column: 72
+.. End:
diff --git a/c-blosc/RELEASING.rst b/c-blosc/RELEASING.rst
new file mode 100644
index 0000000..514fb6d
--- /dev/null
+++ b/c-blosc/RELEASING.rst
@@ -0,0 +1,102 @@
+================
+Releasing Blosc
+================
+
+:Author: Francesc Alted
+:Contact: faltet at gmail.com
+:Date: 2014-01-15
+
+
+Preliminaries
+-------------
+
+- Make sure that ``RELEASE_NOTES.rst`` and ``ANNOUNCE.rst`` are up to
+  date with the latest news in the release.
+
+- Check that the *VERSION* symbols in blosc/blosc.h contain the correct
+  info, as in the illustrative snippet below.
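+
+  From memory, the symbols look like the following (the numbers here
+  are illustrative only; blosc.h is the authoritative source)::
+
+    #define BLOSC_VERSION_MAJOR    1    /* major version number */
+    #define BLOSC_VERSION_MINOR    3    /* minor version number */
+    #define BLOSC_VERSION_RELEASE  0    /* release (patch) number */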
+
+Testing
+-------
+
+Create a new build/ directory, change into it and issue::
+
+  $ cmake ..
+  $ make
+  $ make test
+
+To actually test Blosc the hard way, look at the end of:
+
+http://blosc.org/trac/wiki/SyntheticBenchmarks
+
+where instructions on how to intensively test (and benchmark) Blosc
+are given.
+
+Packaging
+---------
+
+- Unpack the archive of the repository in a temporary directory::
+
+  $ export VERSION="the version number"
+  $ mkdir /tmp/blosc-$VERSION
+  # IMPORTANT: make sure that you are at the root of the repo now!
+  $ git archive master | tar -x -C /tmp/blosc-$VERSION
+
+- And package the repo::
+
+  $ cd /tmp
+  $ tar cvfz blosc-$VERSION.tar.gz blosc-$VERSION
+
+Do a quick check that the tarball is sane.
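+
+One possible spot check (an illustrative example, not the only way)::
+
+  $ tar tzf blosc-$VERSION.tar.gz | head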
+
+
+Uploading
+---------
+
+- Go to the downloads section in blosc.org and upload the source
+  tarball.
+
+
+Tagging
+-------
+
+- Create a tag ``vX.Y.Z`` from ``master``.  Use the following message::
+
+    $ git tag -a vX.Y.Z -m "Tagging version X.Y.Z"
+
+- Push the tag to the github repo::
+
+    $ git push --tags
+
+
+Announcing
+----------
+
+- Update the release notes in the github wiki:
+
+https://github.com/FrancescAlted/blosc/wiki/Release-notes
+
+- Send an announcement to the blosc, pytables, carray and
+  comp.compression lists.  Use the ``ANNOUNCE.rst`` file as a skeleton
+  (possibly as the definitive version).
+
+Post-release actions
+--------------------
+
+- Edit the *VERSION* symbols in blosc/blosc.h in master to increment
+  the version to the next development one (i.e. X.Y.Z --> X.Y.(Z+1).dev).
+
+- Create new headers in ``RELEASE_NOTES.rst`` for adding new features,
+  empty the release-specific information in ``ANNOUNCE.rst``, and add
+  this place-holder instead:
+
+  #XXX version-specific blurb XXX#
+
+
+That's all folks!
+
+
+.. Local Variables:
+.. mode: rst
+.. coding: utf-8
+.. fill-column: 70
+.. End:
diff --git a/c-blosc/bench/CMakeLists.txt b/c-blosc/bench/CMakeLists.txt
new file mode 100644
index 0000000..b79e623
--- /dev/null
+++ b/c-blosc/bench/CMakeLists.txt
@@ -0,0 +1,72 @@
+# sources
+set(SOURCES bench.c)
+
+
+# targets
+add_executable(bench ${SOURCES})
+target_link_libraries(bench blosc_shared)
+
+
+# tests
+if(BUILD_TESTS)
+
+    option(TEST_INCLUDE_BENCH_SINGLE_1 "Include bench single (1 thread) in the tests" ON)
+    if(TEST_INCLUDE_BENCH_SINGLE_1)
+        add_test(test_blosclz_1 bench blosclz single 1)
+        if (HAVE_LZ4)
+            add_test(test_lz4_1     bench lz4     single 1)
+            add_test(test_lz4hc_1   bench lz4hc   single 1)
+        endif (HAVE_LZ4)
+        if (HAVE_SNAPPY)
+            add_test(test_snappy_1  bench snappy  single 1)
+        endif (HAVE_SNAPPY)
+        if (HAVE_ZLIB)
+            add_test(test_zlib_1    bench zlib    single 1)
+        endif (HAVE_ZLIB)
+    endif(TEST_INCLUDE_BENCH_SINGLE_1)
+
+    option(TEST_INCLUDE_BENCH_SINGLE_N "Include bench single (multithread) in the tests" ON)
+    if(TEST_INCLUDE_BENCH_SINGLE_N)
+        add_test(test_blosclz_n bench blosclz single)
+        if (HAVE_LZ4)
+            add_test(test_lz4_n     bench lz4     single)
+            add_test(test_lz4hc_n   bench lz4hc   single)
+        endif (HAVE_LZ4)
+        if (HAVE_SNAPPY)
+            add_test(test_snappy_n  bench snappy  single)
+        endif (HAVE_SNAPPY)
+        if (HAVE_ZLIB)
+            add_test(test_zlib_n    bench zlib    single)
+        endif (HAVE_ZLIB)
+    endif(TEST_INCLUDE_BENCH_SINGLE_N)
+
+    option(TEST_INCLUDE_BENCH_SUITE "Include bench suite in the tests" OFF)
+    if(TEST_INCLUDE_BENCH_SUITE)
+        add_test(test_blosclz bench blosclz suite)
+        if (HAVE_LZ4)
+            add_test(test_lz4     bench lz4     suite)
+            add_test(test_lz4hc   bench lz4hc   suite)
+        endif (HAVE_LZ4)
+        if (HAVE_SNAPPY)
+            add_test(test_snappy  bench snappy  suite)
+        endif (HAVE_SNAPPY)
+        if (HAVE_ZLIB)
+            add_test(test_zlib    bench zlib    suite)
+        endif (HAVE_ZLIB)
+    endif(TEST_INCLUDE_BENCH_SUITE)
+
+    option(TEST_INCLUDE_BENCH_HARDSUITE "Include bench hardsuite in the tests" OFF)
+    if(TEST_INCLUDE_BENCH_HARDSUITE)
+        add_test(test_hardsuite blosc blosclz hardsuite)
+    endif(TEST_INCLUDE_BENCH_HARDSUITE)
+
+    option(TEST_INCLUDE_BENCH_EXTREMESUITE "Include bench extremesuite in the tests" OFF)
+    if(TEST_INCLUDE_BENCH_EXTREMESUITE)
+        add_test(test_extremesuite bench blosclz extremesuite)
+    endif(TEST_INCLUDE_BENCH_EXTREMESUITE)
+
+    option(TEST_INCLUDE_BENCH_DEBUGSUITE "Include bench debugsuite in the tests" OFF)
+    if(TEST_INCLUDE_BENCH_DEBUGSUITE)
+        add_test(test_debugsuite bench debugsuite)
+    endif(TEST_INCLUDE_BENCH_DEBUGSUITE)
+endif(BUILD_TESTS)
diff --git a/c-blosc/bench/Makefile b/c-blosc/bench/Makefile
new file mode 100644
index 0000000..54cb820
--- /dev/null
+++ b/c-blosc/bench/Makefile
@@ -0,0 +1,40 @@
+CC = gcc  # use clang++ or g++ when compiling Snappy (C++ code); plain gcc otherwise
+CFLAGS = -O3 -g -msse2 -Wall
+LDFLAGS = -lpthread  # for UNIX or for Windows with pthread installed
+#LDFLAGS = -static  # for mingw
+SOURCES = $(wildcard ../blosc/*.c)
+EXECUTABLE = bench
+
+# Support for internal LZ4 and LZ4HC
+LZ4_DIR = ../internal-complibs/lz4-r110
+CFLAGS += -DHAVE_LZ4 -I$(LZ4_DIR)
+SOURCES += $(wildcard $(LZ4_DIR)/*.c)
+
+# Support for external LZ4 and LZ4HC
+#LDFLAGS += -DHAVE_LZ4 -llz4
+
+# Support for internal Snappy
+#SNAPPY_DIR = ../internal-complibs/snappy-1.1.1
+#CFLAGS += -DHAVE_SNAPPY -I$(SNAPPY_DIR)
+#SOURCES += $(wildcard $(SNAPPY_DIR)/*.cc)
+
+# Support for external Snappy
+LDFLAGS += -DHAVE_SNAPPY -lsnappy
+
+# Support for external Zlib
+LDFLAGS += -DHAVE_ZLIB -lz
+
+# Support for internal Zlib
+#ZLIB_DIR = ../internal-complibs/zlib-1.2.8
+#CFLAGS += -DHAVE_ZLIB -I$(ZLIB_DIR)
+#SOURCES += $(wildcard $(ZLIB_DIR)/*.c)
+
+SOURCES += bench.c
+
+all: $(SOURCES) $(EXECUTABLE)
+
+$(EXECUTABLE): $(SOURCES)
+	$(CC) $(CFLAGS) $(SOURCES) -o $@ $(LDFLAGS)
+
+clean:
+	rm -rf $(EXECUTABLE)
diff --git a/c-blosc/bench/Makefile.mingw b/c-blosc/bench/Makefile.mingw
new file mode 100644
index 0000000..3a0f858
--- /dev/null
+++ b/c-blosc/bench/Makefile.mingw
@@ -0,0 +1,45 @@
+# Makefile for the MinGW suite for Windows
+CC = g++  # use clang++ or g++ when compiling Snappy (C++ code); plain gcc otherwise
+CFLAGS = -O3 -g -msse2 -Wall
+#LDFLAGS = -lpthread  # for UNIX or for Windows with pthread installed
+LDFLAGS = -static  # for mingw
+SOURCES = $(wildcard ../blosc/*.c)
+EXECUTABLE = bench
+
+# Support for internal LZ4
+LZ4_DIR = ../internal-complibs/lz4-r110
+CFLAGS += -DHAVE_LZ4 -I$(LZ4_DIR)
+SOURCES += $(wildcard $(LZ4_DIR)/*.c)
+
+# Support for external LZ4
+#LDFLAGS += -DHAVE_LZ4 -llz4
+
+# Support for internal Snappy
+SNAPPY_DIR = ../internal-complibs/snappy-1.1.1
+CFLAGS += -DHAVE_SNAPPY -I$(SNAPPY_DIR)
+SOURCES += $(wildcard $(SNAPPY_DIR)/*.cc)
+
+# Support for external Snappy
+#LDFLAGS += -DHAVE_SNAPPY -lsnappy
+
+# Support for the msvc zlib:
+ZLIB_ROOT=/libs/zlib128
+LDFLAGS=-DHAVE_ZLIB -I$(ZLIB_ROOT)/include -lzdll -L$(ZLIB_ROOT)/lib
+
+# Support for the mingw zlib:
+#ZLIB_ROOT=/libs/libz-1.2.8
+#LDFLAGS=-DHAVE_ZLIB -I$(ZLIB_ROOT)/include -lz -L$(ZLIB_ROOT)/lib
+
+# Support for internal Zlib
+#ZLIB_DIR = ../internal-complibs/zlib-1.2.8
+#CFLAGS += -DHAVE_ZLIB -I$(ZLIB_DIR)
+#SOURCES += $(wildcard $(ZLIB_DIR)/*.c)
+
+
+all: $(SOURCES) $(EXECUTABLE)
+
+$(EXECUTABLE): $(SOURCES)
+	$(CC) $(CFLAGS) bench.c $(SOURCES) -o $@ $(LDFLAGS)
+
+clean:
+	rm -rf $(EXECUTABLE)
diff --git a/c-blosc/bench/bench.c b/c-blosc/bench/bench.c
new file mode 100644
index 0000000..a990b6f
--- /dev/null
+++ b/c-blosc/bench/bench.c
@@ -0,0 +1,539 @@
+/*********************************************************************
+  Small benchmark for testing basic capabilities of Blosc.
+
+  You can select different degrees of 'randomness' in the input buffer,
+  as well as use external datafiles (uncomment the lines after the "For
+  data coming from a file" comment).
+
+  For usage instructions of this benchmark, please see:
+
+    http://blosc.pytables.org/trac/wiki/SyntheticBenchmarks
+
+  I'm collecting speeds for different machines, so the output of your
+  benchmarks and your processor specifications are welcome!
+
+  Author: Francesc Alted <faltet at gmail.com>
+
+  See LICENSES/BLOSC.txt for details about copyright and rights to use.
+**********************************************************************/
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#if defined(_WIN32) && !defined(__MINGW32__)
+  #include <time.h>
+#else
+  #include <unistd.h>
+  #include <sys/time.h>
+#endif
+#include <math.h>
+
+
+struct bench_wrap_args
+{
+  char *compressor;
+  int nthreads;
+  int size;
+  int elsize;
+  int rshift;
+  FILE * output_file;
+};
+
+void *bench_wrap(void * args);
+
+#include "../blosc/blosc.h"
+
+#define KB  1024
+#define MB  (1024*KB)
+#define GB  (1024*MB)
+
+#define NCHUNKS (32*1024)       /* maximum number of chunks */
+#define MAX_THREADS 16
+
+
+int nchunks = NCHUNKS;
+int niter = 3;                  /* default number of iterations */
+double totalsize = 0.;          /* total compressed/decompressed size */
+
+#if defined(_WIN32) && !defined(__MINGW32__)
+#include <windows.h>
+#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
+  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000Ui64
+#else
+  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000ULL
+#endif
+
+struct timezone
+{
+  int  tz_minuteswest; /* minutes W of Greenwich */
+  int  tz_dsttime;     /* type of dst correction */
+};
+
+int gettimeofday(struct timeval *tv, struct timezone *tz)
+{
+  FILETIME ft;
+  unsigned __int64 tmpres = 0;
+  static int tzflag;
+
+  if (NULL != tv)
+  {
+    GetSystemTimeAsFileTime(&ft);
+
+    tmpres |= ft.dwHighDateTime;
+    tmpres <<= 32;
+    tmpres |= ft.dwLowDateTime;
+
+    /*converting file time to unix epoch*/
+    tmpres -= DELTA_EPOCH_IN_MICROSECS;
+    tmpres /= 10;  /*convert into microseconds*/
+    tv->tv_sec = (long)(tmpres / 1000000UL);
+    tv->tv_usec = (long)(tmpres % 1000000UL);
+  }
+
+  if (NULL != tz)
+  {
+    if (!tzflag)
+    {
+      _tzset();
+      tzflag++;
+    }
+    tz->tz_minuteswest = _timezone / 60;
+    tz->tz_dsttime = _daylight;
+  }
+
+  return 0;
+}
+#endif   /* _WIN32 */
+
+
+/* Given two timeval stamps, return the difference in seconds */
+float getseconds(struct timeval last, struct timeval current) {
+  int sec, usec;
+
+  sec = current.tv_sec - last.tv_sec;
+  usec = current.tv_usec - last.tv_usec;
+  return (float)(((double)sec + usec*1e-6));
+}
+
+/* Given two timeval stamps, return the time per chunk in usec */
+float get_usec_chunk(struct timeval last, struct timeval current) {
+  return (float)(getseconds(last, current)/(niter*nchunks)*1e6);
+}
+
+
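+/* Deterministic pseudo-random value for index i; only the `rshift`
+   least-significant bits are kept, so the entropy (and hence the
+   compressibility) of the generated buffer can be tuned. */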
+int get_value(int i, int rshift) {
+  int v;
+
+  v = (i<<26)^(i<<18)^(i<<11)^(i<<3)^i;
+  if (rshift < 32) {
+    v &= (1 << rshift) - 1;
+  }
+  return v;
+}
+
+
+void init_buffer(void *src, int size, int rshift) {
+  unsigned int i;
+  int *_src = (int *)src;
+
+  /* To have reproducible results */
+  srand(1);
+
+  /* Initialize the original buffer */
+  for (i = 0; i < size/sizeof(int); ++i) {
+    /* Choose one below */
+    /* _src[i] = 0;
+     * _src[i] = 0x01010101;
+     * _src[i] = 0x01020304;
+     * _src[i] = i * 1/.3;
+     * _src[i] = i;
+     * _src[i] = rand() >> (32-rshift); */
+    _src[i] = get_value(i, rshift);
+  }
+}
+
+
+void do_bench(char *compressor, int nthreads, int size, int elsize,
+              int rshift, FILE * ofile) {
+  void *src, *srccpy;
+  void *dest[NCHUNKS], *dest2;
+  int nbytes = 0, cbytes = 0;
+  int i, j;
+  struct timeval last, current;
+  float tmemcpy, tshuf, tunshuf;
+  int clevel, doshuffle=1;
+  unsigned char *orig, *round;
+
+  blosc_set_nthreads(nthreads);
+  if(blosc_set_compressor(compressor) < 0){
+    printf("Compiled w/o support for compressor: '%s', so sorry.\n",
+           compressor);
+    exit(1);
+  }
+
+  /* Initialize buffers */
+  src = malloc(size);
+  srccpy = malloc(size);
+  dest2 = malloc(size);
+  /* zero src so that every byte in it is initialized, not only multiples of 4 */
+  memset(src, 0, size);
+  init_buffer(src, size, rshift);
+  memcpy(srccpy, src, size);
+  for (j = 0; j < nchunks; j++) {
+    dest[j] = malloc(size+BLOSC_MAX_OVERHEAD);
+  }
+
+  /* Warm destination memory (memcpy() will go a bit faster later on) */
+  for (j = 0; j < nchunks; j++) {
+    memcpy(dest[j], src, size);
+  }
+
+  fprintf(ofile, "--> %d, %d, %d, %d, %s\n", nthreads, size, elsize, rshift, compressor);
+  fprintf(ofile, "********************** Run info ******************************\n");
+  fprintf(ofile, "Blosc version: %s (%s)\n", BLOSC_VERSION_STRING, BLOSC_VERSION_DATE);
+  fprintf(ofile, "Using synthetic data with %d significant bits (out of 32)\n", rshift);
+  fprintf(ofile, "Dataset size: %d bytes\tType size: %d bytes\n", size, elsize);
+  fprintf(ofile, "Working set: %.1f MB\t\t", (size*nchunks) / (float)MB);
+  fprintf(ofile, "Number of threads: %d\n", nthreads);
+  fprintf(ofile, "********************** Running benchmarks *********************\n");
+
+  gettimeofday(&last, NULL);
+  for (i = 0; i < niter; i++) {
+    for (j = 0; j < nchunks; j++) {
+      memcpy(dest[j], src, size);
+    }
+  }
+  gettimeofday(&current, NULL);
+  tmemcpy = get_usec_chunk(last, current);
+  fprintf(ofile, "memcpy(write):\t\t %6.1f us, %.1f MB/s\n",
+         tmemcpy, size/(tmemcpy*MB/1e6));
+
+  gettimeofday(&last, NULL);
+  for (i = 0; i < niter; i++) {
+    for (j = 0; j < nchunks; j++) {
+      memcpy(dest2, dest[j], size);
+    }
+  }
+  gettimeofday(&current, NULL);
+  tmemcpy = get_usec_chunk(last, current);
+  fprintf(ofile, "memcpy(read):\t\t %6.1f us, %.1f MB/s\n",
+         tmemcpy, size/(tmemcpy*MB/1e6));
+
+  for (clevel=0; clevel<10; clevel++) {
+
+    fprintf(ofile, "Compression level: %d\n", clevel);
+
+    gettimeofday(&last, NULL);
+    for (i = 0; i < niter; i++) {
+      for (j = 0; j < nchunks; j++) {
+        cbytes = blosc_compress(clevel, doshuffle, elsize, size, src,
+                                dest[j], size+BLOSC_MAX_OVERHEAD);
+      }
+    }
+    gettimeofday(&current, NULL);
+    tshuf = get_usec_chunk(last, current);
+    fprintf(ofile, "comp(write):\t %6.1f us, %.1f MB/s\t  ",
+           tshuf, size/(tshuf*MB/1e6));
+    fprintf(ofile, "Final bytes: %d  ", cbytes);
+    if (cbytes > 0) {
+      fprintf(ofile, "Ratio: %3.2f", size/(float)cbytes);
+    }
+    fprintf(ofile, "\n");
+
+    /* Compressor was unable to compress.  Copy the buffer manually. */
+    if (cbytes == 0) {
+      for (j = 0; j < nchunks; j++) {
+        memcpy(dest[j], src, size);
+      }
+    }
+
+    gettimeofday(&last, NULL);
+    for (i = 0; i < niter; i++) {
+      for (j = 0; j < nchunks; j++) {
+        if (cbytes == 0) {
+          memcpy(dest2, dest[j], size);
+          nbytes = size;
+        }
+        else {
+          nbytes = blosc_decompress(dest[j], dest2, size);
+        }
+      }
+    }
+    gettimeofday(&current, NULL);
+    tunshuf = get_usec_chunk(last, current);
+    fprintf(ofile, "decomp(read):\t %6.1f us, %.1f MB/s\t  ",
+           tunshuf, nbytes/(tunshuf*MB/1e6));
+    if (nbytes < 0) {
+      fprintf(ofile, "FAILED.  Error code: %d\n", nbytes);
+    }
+    /* fprintf(ofile, "Orig bytes: %d\tFinal bytes: %d\n", cbytes, nbytes); */
+
+    /* Check if data has had a good roundtrip */
+    orig = (unsigned char *)srccpy;
+    round = (unsigned char *)dest2;
+    for(i = 0; i<size; ++i){
+      if (orig[i] != round[i]) {
+        fprintf(ofile, "\nError: Original data and round-trip do not match in pos %d\n",
+               (int)i);
+        fprintf(ofile, "Orig--> %x, round-trip--> %x\n", orig[i], round[i]);
+        break;
+      }
+    }
+
+    if (i == size) fprintf(ofile, "OK\n");
+
+  } /* End clevel loop */
+
+
+  /* To compute the totalsize, we should take into account the 10
+     compression levels */
+  totalsize += (size * nchunks * niter * 10.);
+
+  free(src); free(srccpy); free(dest2);
+  for (i = 0; i < nchunks; i++) {
+    free(dest[i]);
+  }
+
+}
+
+
+/* Compute a sensible value for nchunks */
+int get_nchunks(int size_, int ws) {
+  int nchunks;
+
+  nchunks = ws / size_;
+  if (nchunks > NCHUNKS) nchunks = NCHUNKS;
+  if (nchunks < 1) nchunks = 1;
+  return nchunks;
+}
+
+void *bench_wrap(void * args)
+{
+    struct bench_wrap_args * arg = (struct bench_wrap_args *) args;
+    do_bench(arg->compressor, arg->nthreads, arg->size, arg->elsize,
+             arg->rshift, arg->output_file);
+    return 0;
+}
+
+void print_compress_info(void)
+{
+  char *name = NULL, *version = NULL;
+  int ret;
+
+  printf("Blosc version: %s (%s)\n", BLOSC_VERSION_STRING, BLOSC_VERSION_DATE);
+
+  printf("List of supported compressors in this build: %s\n",
+         blosc_list_compressors());
+
+  printf("Supported compression libraries:\n");
+  ret = blosc_get_complib_info("blosclz", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+  ret = blosc_get_complib_info("lz4", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+  ret = blosc_get_complib_info("snappy", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+  ret = blosc_get_complib_info("zlib", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+
+}
+
+
+int main(int argc, char *argv[]) {
+  char compressor[32];
+  char bsuite[32];
+  int single = 1;
+  int suite = 0;
+  int hard_suite = 0;
+  int extreme_suite = 0;
+  int debug_suite = 0;
+  int nthreads = 4;                     /* The number of threads */
+  int size = 2*MB;                      /* Buffer size */
+  int elsize = 8;                       /* Datatype size */
+  int rshift = 19;                      /* Significant bits */
+  int workingset = 256*MB;              /* The maximum allocated memory */
+  int nthreads_, size_, elsize_, rshift_, i;
+  FILE * output_file = stdout;
+  struct timeval last, current;
+  float totaltime;
+  char usage[256];
+
+  print_compress_info();
+
+  strncpy(usage, "Usage: bench [blosclz | lz4 | lz4hc | snappy | zlib] "
+          "[[single | suite | hardsuite | extremesuite | debugsuite] "
+          "[nthreads [bufsize(bytes) [typesize [sbits ]]]]]", 255);
+
+  if (argc < 2) {
+    printf("%s\n", usage);
+    exit(1);
+  }
+
+  strcpy(compressor, argv[1]);
+
+  if (strcmp(compressor, "blosclz") != 0 &&
+      strcmp(compressor, "lz4") != 0 &&
+      strcmp(compressor, "lz4hc") != 0 &&
+      strcmp(compressor, "snappy") != 0 &&
+      strcmp(compressor, "zlib") != 0) {
+    printf("No such compressor: '%s'\n", compressor);
+    exit(2);
+  }
+
+  if (argc < 3)
+    strcpy(bsuite, "single");
+  else
+    strcpy(bsuite, argv[2]);
+
+  if (strcmp(bsuite, "single") == 0) {
+    single = 1;
+  }
+  else if (strcmp(bsuite, "suite") == 0) {
+    suite = 1;
+  }
+  else if (strcmp(bsuite, "hardsuite") == 0) {
+    hard_suite = 1;
+    workingset = 64*MB;
+    /* Values here are ending points for loops */
+    nthreads = 2;
+    size = 8*MB;
+    elsize = 32;
+    rshift = 32;
+  }
+  else if (strcmp(bsuite, "extremesuite") == 0) {
+    extreme_suite = 1;
+    workingset = 32*MB;
+    niter = 1;
+    /* Values here are ending points for loops */
+    nthreads = 4;
+    size = 16*MB;
+    elsize = 32;
+    rshift = 32;
+  }
+  else if (strcmp(bsuite, "debugsuite") == 0) {
+    debug_suite = 1;
+    workingset = 32*MB;
+    niter = 1;
+    /* Warning: values here are starting points for loops.  This is
+       useful for debugging. */
+    nthreads = 1;
+    size = 16*KB;
+    elsize = 1;
+    rshift = 0;
+  }
+  else {
+    printf("%s\n", usage);
+    exit(1);
+  }
+
+  printf("Using compressor: %s\n", compressor);
+  printf("Running suite: %s\n", bsuite);
+
+  if (argc >= 4) {
+    nthreads = atoi(argv[3]);
+  }
+  if (argc >= 5) {
+    size = atoi(argv[4]);
+  }
+  if (argc >= 6) {
+    elsize = atoi(argv[5]);
+  }
+  if (argc >= 7) {
+    rshift = atoi(argv[6]);
+  }
+
+  if ((argc >= 8) || !(single || suite || hard_suite || extreme_suite)) {
+    printf("%s\n", usage);
+    exit(1);
+  }
+
+  nchunks = get_nchunks(size, workingset);
+  gettimeofday(&last, NULL);
+
+  blosc_init();
+
+  if (suite) {
+    for (nthreads_=1; nthreads_ <= nthreads; nthreads_++) {
+      do_bench(compressor, nthreads_, size, elsize, rshift, output_file);
+    }
+  }
+  else if (hard_suite) {
+    /* Let's start the rshift loop at 4 so that 19 is visited.  This
+       is to allow a direct comparison with the plain suite, which runs
+       precisely at 19 significant bits. */
+    for (rshift_ = 4; rshift_ <= rshift; rshift_ += 5) {
+      for (elsize_ = 1; elsize_ <= elsize; elsize_ *= 2) {
+        /* The next loop is for getting sizes that are not power of 2 */
+        for (i = -elsize_; i <= elsize_; i += elsize_) {
+          for (size_ = 32*KB; size_ <= size; size_ *= 2) {
+            nchunks = get_nchunks(size_+i, workingset);
+            niter = 1;
+            for (nthreads_ = 1; nthreads_ <= nthreads; nthreads_++) {
+              do_bench(compressor, nthreads_, size_+i, elsize_, rshift_, output_file);
+              gettimeofday(&current, NULL);
+              totaltime = getseconds(last, current);
+              printf("Elapsed time:\t %6.1f s.  Processed data: %.1f GB\n",
+                     totaltime, totalsize / GB);
+            }
+          }
+        }
+      }
+    }
+  }
+  else if (extreme_suite) {
+    for (rshift_ = 0; rshift_ <= rshift; rshift_++) {
+      for (elsize_ = 1; elsize_ <= elsize; elsize_++) {
+        /* The next loop is for getting sizes that are not power of 2 */
+        for (i = -elsize_*2; i <= elsize_*2; i += elsize_) {
+          for (size_ = 32*KB; size_ <= size; size_ *= 2) {
+            nchunks = get_nchunks(size_+i, workingset);
+            for (nthreads_ = 1; nthreads_ <= nthreads; nthreads_++) {
+              do_bench(compressor, nthreads_, size_+i, elsize_, rshift_, output_file);
+              gettimeofday(&current, NULL);
+              totaltime = getseconds(last, current);
+              printf("Elapsed time:\t %6.1f s.  Processed data: %.1f GB\n",
+                     totaltime, totalsize / GB);
+            }
+          }
+        }
+      }
+    }
+  }
+  else if (debug_suite) {
+    for (rshift_ = rshift; rshift_ <= 32; rshift_++) {
+      for (elsize_ = elsize; elsize_ <= 32; elsize_++) {
+        /* The next loop is for getting sizes that are not power of 2 */
+        for (i = -elsize_*2; i <= elsize_*2; i += elsize_) {
+          for (size_ = size; size_ <= 16*MB; size_ *= 2) {
+            nchunks = get_nchunks(size_+i, workingset);
+            for (nthreads_ = nthreads; nthreads_ <= 6; nthreads_++) {
+              do_bench(compressor, nthreads_, size_+i, elsize_, rshift_, output_file);
+              gettimeofday(&current, NULL);
+              totaltime = getseconds(last, current);
+              printf("Elapsed time:\t %6.1f s.  Processed data: %.1f GB\n",
+                     totaltime, totalsize / GB);
+            }
+          }
+        }
+      }
+    }
+  }
+  /* Single mode */
+  else {
+    do_bench(compressor, nthreads, size, elsize, rshift, output_file);
+  }
+
+  /* Print out some statistics */
+  gettimeofday(&current, NULL);
+  totaltime = getseconds(last, current);
+  printf("\nRound-trip compr/decompr on %.1f GB\n", totalsize / GB);
+  printf("Elapsed time:\t %6.1f s, %.1f MB/s\n",
+         totaltime, totalsize*2*1.1/(MB*totaltime));
+
+  /* Free blosc resources */
+  blosc_free_resources();
+  blosc_destroy();
+  return 0;
+}
diff --git a/c-blosc/bench/plot-speeds.py b/c-blosc/bench/plot-speeds.py
new file mode 100644
index 0000000..90f79c1
--- /dev/null
+++ b/c-blosc/bench/plot-speeds.py
@@ -0,0 +1,197 @@
+"""Script for plotting the results of the 'suite' benchmark.
+Invoke without parameters for usage hints.
+
+:Author: Francesc Alted
+:Date: 2010-06-01
+"""
+
+import matplotlib as mpl
+from pylab import *
+
+KB_ = 1024
+MB_ = 1024*KB_
+GB_ = 1024*MB_
+NCHUNKS = 128    # keep in sync with bench.c
+
+linewidth=2
+#markers= ['+', ',', 'o', '.', 's', 'v', 'x', '>', '<', '^']
+#markers= [ 'x', '+', 'o', 's', 'v', '^', '>', '<', ]
+markers= [ 's', 'o', 'v', '^', '+', 'x', '>', '<', '.', ',' ]
+markersize = 8
+
+def get_values(filename):
+    f = open(filename)
+    values = {"memcpyw": [], "memcpyr": []}
+
+    for line in f:
+        if line.startswith('-->'):
+            tmp = line.split('-->')[1]
+            # bench.c may append the compressor name to this line;
+            # keep only the four numeric fields
+            fields = tmp.split(', ')
+            nthreads, size, elsize, sbits = [int(i) for i in fields[:4]]
+            values["size"] = size * NCHUNKS / MB_;
+            values["elsize"] = elsize;
+            values["sbits"] = sbits;
+            # New run for nthreads
+            (ratios, speedsw, speedsr) = ([], [], [])
+            # Add a new entry for (ratios, speedw, speedr)
+            values[nthreads] = (ratios, speedsw, speedsr)
+            #print "-->", nthreads, size, elsize, sbits
+        elif line.startswith('memcpy(write):'):
+            tmp = line.split(',')[1]
+            memcpyw = float(tmp.split(' ')[1])
+            values["memcpyw"].append(memcpyw)
+        elif line.startswith('memcpy(read):'):
+            tmp = line.split(',')[1]
+            memcpyr = float(tmp.split(' ')[1])
+            values["memcpyr"].append(memcpyr)
+        elif line.startswith('comp(write):'):
+            tmp = line.split(',')[1]
+            speedw = float(tmp.split(' ')[1])
+            ratio = float(line.split(':')[-1])
+            speedsw.append(speedw)
+            ratios.append(ratio)
+        elif line.startswith('decomp(read):'):
+            tmp = line.split(',')[1]
+            speedr = float(tmp.split(' ')[1])
+            speedsr.append(speedr)
+            if "OK" not in line:
+                print "WARNING!  OK not found in decomp line!"
+
+    f.close()
+    return nthreads, values
+
+
+def show_plot(plots, yaxis, legends, gtitle, xmax=None):
+    xlabel('Compression ratio')
+    ylabel('Speed (MB/s)')
+    title(gtitle)
+    xlim(0, xmax)
+    #ylim(0, 10000)
+    ylim(0, None)
+    grid(True)
+
+#     legends = [f[f.find('-'):f.index('.out')] for f in filenames]
+#     legends = [l.replace('-', ' ') for l in legends]
+    #legend([p[0] for p in plots], legends, loc = "upper left")
+    legend([p[0] for p in plots
+            if not isinstance(p, mpl.lines.Line2D)],
+           legends, loc = "best")
+
+
+    #subplots_adjust(bottom=0.2, top=None, wspace=0.2, hspace=0.2)
+    if outfile:
+        print "Saving plot to:", outfile
+        savefig(outfile)
+    else:
+        show()
+
+if __name__ == '__main__':
+
+    from optparse import OptionParser
+
+    usage = "usage: %prog [-o outfile] [-t title] [-d|-c] filename"
+    compress_title = 'Compression speed'
+    decompress_title = 'Decompression speed'
+    yaxis = 'No axis name'
+
+    parser = OptionParser(usage=usage)
+    parser.add_option('-o',
+                      '--outfile',
+                      dest='outfile',
+                      help='filename for output ' + \
+                      '(many extensions supported, e.g. .png, .jpg, .pdf)')
+
+    parser.add_option('-t',
+                      '--title',
+                      dest='title',
+                      help='title of the plot',)
+
+    parser.add_option('-l',
+                      '--limit',
+                      dest='limit',
+                      help='expression to limit number of threads shown',)
+
+    parser.add_option('-x',
+                      '--xmax',
+                      dest='xmax',
+                      help='limit the x-axis',
+                      default=None)
+
+    parser.add_option('-d', '--decompress', action='store_true',
+            dest='dspeed',
+            help='plot decompression data',
+            default=False)
+    parser.add_option('-c', '--compress', action='store_true',
+            dest='cspeed',
+            help='plot compression data',
+            default=False)
+
+    (options, args) = parser.parse_args()
+    if len(args) == 0:
+        parser.error("No input arguments")
+    elif len(args) > 1:
+        parser.error("Too many input arguments")
+    else:
+        pass
+
+    if options.dspeed and options.cspeed:
+        parser.error("Can only select one of [-d, -c]")
+    elif options.cspeed:
+        options.dspeed = False
+        plot_title = compress_title
+    else: # either neither or dspeed
+        options.dspeed = True
+        plot_title = decompress_title
+
+    filename = args[0]
+    outfile = options.outfile
+    cspeed = options.cspeed
+    dspeed = options.dspeed
+
+    plots = []
+    legends = []
+    nthreads, values = get_values(filename)
+    #print "Values:", values
+
+    if options.limit:
+        thread_range = eval(options.limit)
+    else:
+        thread_range = range(1, nthreads+1)
+
+    if options.title:
+        plot_title = options.title
+    else:
+        plot_title += " (%(size).1f MB, %(elsize)d bytes, %(sbits)d bits)" % values
+
+    gtitle = plot_title
+
+    for nt in thread_range:
+        #print "Values for %s threads --> %s" % (nt, values[nt])
+        (ratios, speedw, speedr) = values[nt]
+        if cspeed:
+            speed = speedw
+        else:
+            speed = speedr
+        #plot_ = semilogx(ratios, speed, linewidth=2)
+        plot_ = plot(ratios, speed, linewidth=2)
+        plots.append(plot_)
+        nmarker = nt
+        if nt >= len(markers):
+            nmarker = nt%len(markers)
+        setp(plot_, marker=markers[nmarker], markersize=markersize,
+             linewidth=linewidth)
+        legends.append("%d threads" % nt)
+
+    # Add memcpy lines
+    if cspeed:
+        mean = sum(values["memcpyw"]) / nthreads
+        message = "memcpy (write to memory)"
+    else:
+        mean = sum(values["memcpyr"]) / nthreads
+        message = "memcpy (read from memory)"
+    plot_ = axhline(mean, linewidth=3, linestyle='-.', color='black')
+    text(4.0, mean+50, message)
+    plots.append(plot_)
+    show_plot(plots, yaxis, legends, gtitle, xmax=int(options.xmax) if
+            options.xmax else None)
+
+
diff --git a/c-blosc/blosc/CMakeLists.txt b/c-blosc/blosc/CMakeLists.txt
new file mode 100644
index 0000000..2ce9cd5
--- /dev/null
+++ b/c-blosc/blosc/CMakeLists.txt
@@ -0,0 +1,104 @@
+# a simple way to detect that we are using CMAKE
+add_definitions(-DUSING_CMAKE)
+
+set(INTERNAL_LIBS ${CMAKE_SOURCE_DIR}/internal-complibs)
+
+# includes
+if(NOT DEACTIVATE_LZ4)
+    if (LZ4_FOUND)
+        include_directories( ${LZ4_INCLUDE_DIR} )
+    else(LZ4_FOUND)
+        set(LZ4_LOCAL_DIR ${INTERNAL_LIBS}/lz4-r110)
+        include_directories( ${LZ4_LOCAL_DIR} )
+    endif(LZ4_FOUND)
+endif(NOT DEACTIVATE_LZ4)
+
+if(NOT DEACTIVATE_SNAPPY)
+    if (SNAPPY_FOUND)
+        include_directories( ${SNAPPY_INCLUDE_DIR} )
+    else(SNAPPY_FOUND)
+        set(SNAPPY_LOCAL_DIR ${INTERNAL_LIBS}/snappy-1.1.1)
+        include_directories( ${SNAPPY_LOCAL_DIR} )
+    endif(SNAPPY_FOUND)
+endif(NOT DEACTIVATE_SNAPPY)
+
+if(NOT DEACTIVATE_ZLIB)
+    if (ZLIB_FOUND)
+        include_directories( ${ZLIB_INCLUDE_DIR} )
+    else(ZLIB_FOUND)
+        set(ZLIB_LOCAL_DIR ${INTERNAL_LIBS}/zlib-1.2.8)
+        include_directories( ${ZLIB_LOCAL_DIR} )
+    endif(ZLIB_FOUND)
+endif(NOT DEACTIVATE_ZLIB)
+
+# library sources
+set(SOURCES blosc.c blosclz.c shuffle.c)
+# library install directory
+set(lib_dir lib${LIB_SUFFIX})
+set(version_string ${BLOSC_VERSION_MAJOR}.${BLOSC_VERSION_MINOR}.${BLOSC_VERSION_PATCH})
+
+set(CMAKE_THREAD_PREFER_PTHREAD TRUE)
+if(WIN32)
+    # try to use the system library
+    find_package(Threads)
+    if(NOT Threads_FOUND)
+        message(STATUS "using the internal pthread library for win32 systems.")
+        set(SOURCES ${SOURCES} win32/pthread.c)
+    else(NOT Threads_FOUND)
+        set(LIBS ${LIBS} ${CMAKE_THREAD_LIBS_INIT})
+    endif(NOT Threads_FOUND)
+else(WIN32)
+    find_package(Threads REQUIRED)
+    set(LIBS ${LIBS} ${CMAKE_THREAD_LIBS_INIT})
+endif(WIN32)
+
+if(NOT DEACTIVATE_LZ4)
+    if(LZ4_FOUND)
+        set(LIBS ${LIBS} ${LZ4_LIBRARY})
+    else(LZ4_FOUND)
+        file(GLOB LZ4_FILES ${LZ4_LOCAL_DIR}/*.c)
+        set(SOURCES ${SOURCES} ${LZ4_FILES})
+    endif(LZ4_FOUND)
+endif(NOT DEACTIVATE_LZ4)
+
+if(NOT DEACTIVATE_SNAPPY)
+    if(SNAPPY_FOUND)
+        set(LIBS ${LIBS} ${SNAPPY_LIBRARY})
+    else(SNAPPY_FOUND)
+        file(GLOB SNAPPY_FILES ${SNAPPY_LOCAL_DIR}/*.cc)
+        set(SOURCES ${SOURCES} ${SNAPPY_FILES})
+    endif(SNAPPY_FOUND)
+endif(NOT DEACTIVATE_SNAPPY)
+
+if(NOT DEACTIVATE_ZLIB)
+    if(ZLIB_FOUND)
+        set(LIBS ${LIBS} ${ZLIB_LIBRARY})
+    else(ZLIB_FOUND)
+        file(GLOB ZLIB_FILES ${ZLIB_LOCAL_DIR}/*.c)
+        set(SOURCES ${SOURCES} ${ZLIB_FILES})
+    endif(ZLIB_FOUND)
+endif(NOT DEACTIVATE_ZLIB)
+
+
+# targets
+add_library(blosc_shared SHARED ${SOURCES})
+set_target_properties(blosc_shared PROPERTIES OUTPUT_NAME blosc)
+set_target_properties(blosc_shared PROPERTIES
+        VERSION ${version_string}
+        SOVERSION ${version_string}
+    )
+target_link_libraries(blosc_shared ${LIBS})
+
+if(BUILD_STATIC)
+    add_library(blosc_static STATIC ${SOURCES})
+    set_target_properties(blosc_static PROPERTIES OUTPUT_NAME blosc)
+    target_link_libraries(blosc_static ${LIBS})
+endif(BUILD_STATIC)
+
+
+# install
+install(FILES blosc.h DESTINATION include COMPONENT DEV)
+install(TARGETS blosc_shared DESTINATION ${lib_dir} COMPONENT LIB)
+if(BUILD_STATIC)
+    install(TARGETS blosc_static DESTINATION ${lib_dir} COMPONENT DEV)
+endif(BUILD_STATIC)
diff --git a/blosc/blosc.c b/c-blosc/blosc/blosc.c
similarity index 71%
rename from blosc/blosc.c
rename to c-blosc/blosc/blosc.c
index fdc82b5..36f9243 100644
--- a/blosc/blosc.c
+++ b/c-blosc/blosc/blosc.c
@@ -14,9 +14,22 @@
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <assert.h>
+#if defined(USING_CMAKE)
+  #include "config.h"
+#endif /*  USING_CMAKE */
 #include "blosc.h"
-#include "blosclz.h"
 #include "shuffle.h"
+#include "blosclz.h"
+#if defined(HAVE_LZ4)
+  #include "lz4.h"
+  #include "lz4hc.h"
+#endif /*  HAVE_LZ4 */
+#if defined(HAVE_SNAPPY)
+  #include "snappy-c.h"
+#endif /*  HAVE_SNAPPY */
+#if defined(HAVE_ZLIB)
+  #include "zlib.h"
+#endif /*  HAVE_ZLIB */
 
 #if defined(_WIN32) && !defined(__MINGW32__)
   #include <windows.h>
@@ -60,16 +73,17 @@ static int pid = 0;                    /* the PID for this process */
 static int init_lib = 0;               /* is library initalized? */
 
 /* Global variables for threads */
-static int32_t nthreads = 1;            /* number of desired threads in pool */
-static int32_t init_threads_done = 0;   /* pool of threads initialized? */
-static int32_t end_threads = 0;         /* should exisiting threads end? */
-static int32_t init_sentinels_done = 0; /* sentinels initialized? */
-static int32_t giveup_code;             /* error code when give up */
-static int32_t nblock;                  /* block counter */
+static int32_t nthreads = 1;              /* number of desired threads in pool */
+static int32_t compressor = BLOSC_BLOSCLZ;  /* the compressor to use by default */
+static int32_t init_threads_done = 0;     /* pool of threads initialized? */
+static int32_t end_threads = 0;           /* should existing threads end? */
+static int32_t init_sentinels_done = 0;   /* sentinels initialized? */
+static int32_t giveup_code;               /* error code when give up */
+static int32_t nblock;                    /* block counter */
 static pthread_t threads[BLOSC_MAX_THREADS];  /* opaque structure for threads */
 static int32_t tids[BLOSC_MAX_THREADS];       /* ID per each thread */
 #if !defined(_WIN32)
-static pthread_attr_t ct_attr;          /* creation time attrs for threads */
+static pthread_attr_t ct_attr;            /* creation time attrs for threads */
 #endif
 
 /* Have problems using posix barriers when symbol value is 200112L */
@@ -230,6 +244,204 @@ static int32_t sw32(int32_t a)
 }
 
 
+/*
+ * Conversion routines between compressor and compression libraries
+ */
+
+/* Return the library code associated with the compressor name */
+static int compname_to_clibcode(const char *compname)
+{
+  if (strcmp(compname, BLOSC_BLOSCLZ_COMPNAME) == 0)
+    return BLOSC_BLOSCLZ_LIB;
+  if (strcmp(compname, BLOSC_LZ4_COMPNAME) == 0)
+    return BLOSC_LZ4_LIB;
+  if (strcmp(compname, BLOSC_LZ4HC_COMPNAME) == 0)
+    return BLOSC_LZ4_LIB;
+  if (strcmp(compname, BLOSC_SNAPPY_COMPNAME) == 0)
+    return BLOSC_SNAPPY_LIB;
+  if (strcmp(compname, BLOSC_ZLIB_COMPNAME) == 0)
+    return BLOSC_ZLIB_LIB;
+  return -1;
+}
+
+/* Return the library name associated with the compressor code */
+static char *clibcode_to_clibname(int clibcode)
+{
+  if (clibcode == BLOSC_BLOSCLZ_LIB) return BLOSC_BLOSCLZ_LIBNAME;
+  if (clibcode == BLOSC_LZ4_LIB) return BLOSC_LZ4_LIBNAME;
+  if (clibcode == BLOSC_SNAPPY_LIB) return BLOSC_SNAPPY_LIBNAME;
+  if (clibcode == BLOSC_ZLIB_LIB) return BLOSC_ZLIB_LIBNAME;
+  return NULL;			/* should never happen */
+}
+
+
+/*
+ * Conversion routines between compressor names and compressor codes
+ */
+
+/* Get the compressor name associated with the compressor code */
+int blosc_compcode_to_compname(int compcode, char **compname)
+{
+  int code = -1;    /* -1 means non-existent compressor code */
+  char *name = NULL;
+
+  /* Map the compressor code */
+  if (compcode == BLOSC_BLOSCLZ)
+    name = BLOSC_BLOSCLZ_COMPNAME;
+  else if (compcode == BLOSC_LZ4)
+    name = BLOSC_LZ4_COMPNAME;
+  else if (compcode == BLOSC_LZ4HC)
+    name = BLOSC_LZ4HC_COMPNAME;
+  else if (compcode == BLOSC_SNAPPY)
+    name = BLOSC_SNAPPY_COMPNAME;
+  else if (compcode == BLOSC_ZLIB)
+    name = BLOSC_ZLIB_COMPNAME;
+
+  *compname = name;
+
+  /* Guess if there is support for this code */
+  if (compcode == BLOSC_BLOSCLZ)
+    code = BLOSC_BLOSCLZ;
+#if defined(HAVE_LZ4)
+  else if (compcode == BLOSC_LZ4)
+    code = BLOSC_LZ4;
+  else if (compcode == BLOSC_LZ4HC)
+    code = BLOSC_LZ4HC;
+#endif /*  HAVE_LZ4 */
+#if defined(HAVE_SNAPPY)
+  else if (compcode == BLOSC_SNAPPY)
+    code = BLOSC_SNAPPY;
+#endif /*  HAVE_SNAPPY */
+#if defined(HAVE_ZLIB)
+  else if (compcode == BLOSC_ZLIB)
+    code = BLOSC_ZLIB;
+#endif /*  HAVE_ZLIB */
+
+  return code;
+}
+
+/* Get the compressor code for the compressor name. -1 if it is not available */
+int blosc_compname_to_compcode(const char *compname)
+{
+  int code = -1;  /* -1 means non-existent compressor code */
+
+  if (strcmp(compname, BLOSC_BLOSCLZ_COMPNAME) == 0) {
+    code = BLOSC_BLOSCLZ;
+  }
+#if defined(HAVE_LZ4)
+  else if (strcmp(compname, BLOSC_LZ4_COMPNAME) == 0) {
+    code = BLOSC_LZ4;
+  }
+  else if (strcmp(compname, BLOSC_LZ4HC_COMPNAME) == 0) {
+    code = BLOSC_LZ4HC;
+  }
+#endif /*  HAVE_LZ4 */
+#if defined(HAVE_SNAPPY)
+  else if (strcmp(compname, BLOSC_SNAPPY_COMPNAME) == 0) {
+    code = BLOSC_SNAPPY;
+  }
+#endif /*  HAVE_SNAPPY */
+#if defined(HAVE_ZLIB)
+  else if (strcmp(compname, BLOSC_ZLIB_COMPNAME) == 0) {
+    code = BLOSC_ZLIB;
+  }
+#endif /*  HAVE_ZLIB */
+
+  return code;
+}
+
+
+#if defined(HAVE_LZ4)
+static int lz4_wrap_compress(const char* input, size_t input_length,
+                             char* output, size_t maxout)
+{
+  int cbytes;
+  cbytes = LZ4_compress_limitedOutput(input, output, (int)input_length,
+                                      (int)maxout);
+  return cbytes;
+}
+
+static int lz4hc_wrap_compress(const char* input, size_t input_length,
+                               char* output, size_t maxout)
+{
+  int cbytes;
+  if (input_length > (size_t)(2<<30))
+    return -1;   /* input larger than 2 GB is not supported */
+  cbytes = LZ4_compressHC_limitedOutput(input, output, (int)input_length,
+					(int)maxout);
+  return cbytes;
+}
+
+static int lz4_wrap_decompress(const char* input, size_t compressed_length,
+                               char* output, size_t maxout)
+{
+  size_t cbytes;
+  cbytes = LZ4_decompress_fast(input, output, (int)maxout);
+  if (cbytes != compressed_length) {
+    return 0;
+  }
+  return (int)maxout;
+}
+
+#endif /* HAVE_LZ4 */
+
+#if defined(HAVE_SNAPPY)
+static int snappy_wrap_compress(const char* input, size_t input_length,
+                                char* output, size_t maxout)
+{
+  snappy_status status;
+  size_t cl = maxout;
+  status = snappy_compress(input, input_length, output, &cl);
+  if (status != SNAPPY_OK){
+    return 0;
+  }
+  return (int)cl;
+}
+
+static int snappy_wrap_decompress(const char* input, size_t compressed_length,
+                                  char* output, size_t maxout)
+{
+  snappy_status status;
+  size_t ul = maxout;
+  status = snappy_uncompress(input, compressed_length, output, &ul);
+  if (status != SNAPPY_OK){
+    return 0;
+  }
+  return (int)ul;
+}
+#endif /* HAVE_SNAPPY */
+
+#if defined(HAVE_ZLIB)
+/* zlib is not very respectful about sharing its name space with others.
+ Fortunately, its names do not collide with those already in blosc. */
+static int zlib_wrap_compress(const char* input, size_t input_length,
+                              char* output, size_t maxout, int clevel)
+{
+  int status;
+  uLongf cl = maxout;
+  status = compress2(
+	     (Bytef*)output, &cl, (Bytef*)input, (uLong)input_length, clevel);
+  if (status != Z_OK){
+    return 0;
+  }
+  return (int)cl;
+}
+
+static int zlib_wrap_decompress(const char* input, size_t compressed_length,
+                                char* output, size_t maxout)
+{
+  int status;
+  uLongf ul = maxout;
+  status = uncompress(
+             (Bytef*)output, &ul, (Bytef*)input, (uLong)compressed_length);
+  if (status != Z_OK){
+    return 0;
+  }
+  return (int)ul;
+}
+
+#endif /*  HAVE_ZLIB */
+
 /* Shuffle & compress a single block */
 static int blosc_c(int32_t blocksize, int32_t leftoverblock,
                    int32_t ntbytes, int32_t maxbytes,
@@ -241,6 +453,7 @@ static int blosc_c(int32_t blocksize, int32_t leftoverblock,
   int32_t maxout;
   int32_t typesize = params.typesize;
   uint8_t *_tmp;
+  char *compname;
 
   if ((params.flags & BLOSC_DOSHUFFLE) && (typesize > 1)) {
     /* Shuffle this block (this makes sense only if typesize > 1) */
@@ -267,16 +480,54 @@ static int blosc_c(int32_t blocksize, int32_t leftoverblock,
     ntbytes += (int32_t)sizeof(int32_t);
     ctbytes += (int32_t)sizeof(int32_t);
     maxout = neblock;
+    #if defined(HAVE_SNAPPY)
+    if (compressor == BLOSC_SNAPPY) {
+      /* TODO perhaps refactor this to keep the value stashed somewhere */
+      maxout = snappy_max_compressed_length(neblock);
+    }
+    #endif /*  HAVE_SNAPPY */
     if (ntbytes+maxout > maxbytes) {
       maxout = maxbytes - ntbytes;   /* avoid buffer overrun */
       if (maxout <= 0) {
         return 0;                  /* non-compressible block */
       }
     }
-    cbytes = blosclz_compress(params.clevel, _tmp+j*neblock, neblock,
-                              dest, maxout);
+    if (compressor == BLOSC_BLOSCLZ) {
+      cbytes = blosclz_compress(params.clevel, _tmp+j*neblock, neblock,
+                                dest, maxout);
+    }
+    #if defined(HAVE_LZ4)
+    else if (compressor == BLOSC_LZ4) {
+      cbytes = lz4_wrap_compress((char *)_tmp+j*neblock, (size_t)neblock,
+                                 (char *)dest, (size_t)maxout);
+    }
+    else if (compressor == BLOSC_LZ4HC) {
+      cbytes = lz4hc_wrap_compress((char *)_tmp+j*neblock, (size_t)neblock,
+                                   (char *)dest, (size_t)maxout);
+    }
+    #endif /*  HAVE_LZ4 */
+    #if defined(HAVE_SNAPPY)
+    else if (compressor == BLOSC_SNAPPY) {
+      cbytes = snappy_wrap_compress((char *)_tmp+j*neblock, (size_t)neblock,
+                                    (char *)dest, (size_t)maxout);
+    }
+    #endif /*  HAVE_SNAPPY */
+    #if defined(HAVE_ZLIB)
+    else if (compressor == BLOSC_ZLIB) {
+      cbytes = zlib_wrap_compress((char *)_tmp+j*neblock, (size_t)neblock,
+                                  (char *)dest, (size_t)maxout, params.clevel);
+    }
+    #endif /*  HAVE_ZLIB */
+
+    else {
+      blosc_compcode_to_compname(compressor, &compname);
+      fprintf(stderr, "Blosc has not been compiled with '%s' ", compname);
+      fprintf(stderr, "compression support.  Please use one having it.");
+      return -5;    /* signals no compression support */
+    }
+
     if (cbytes >= maxout) {
-      /* Buffer overrun caused by blosclz_compress (should never happen) */
+      /* Buffer overrun caused by compression (should never happen) */
       return -1;
     }
     else if (cbytes < 0) {
@@ -302,7 +553,6 @@ static int blosc_c(int32_t blocksize, int32_t leftoverblock,
   return ctbytes;
 }
 
-
 /* Decompress & unshuffle a single block */
 static int blosc_d(int32_t blocksize, int32_t leftoverblock,
                    uint8_t *src, uint8_t *dest, uint8_t *tmp, uint8_t *tmp2)
@@ -314,6 +564,8 @@ static int blosc_d(int32_t blocksize, int32_t leftoverblock,
   int32_t ntbytes = 0;           /* number of uncompressed bytes in block */
   uint8_t *_tmp;
   int32_t typesize = params.typesize;
+  int compressor_format;
+  char *compname;
 
   if ((params.flags & BLOSC_DOSHUFFLE) && (typesize > 1)) {
     _tmp = tmp;
@@ -322,6 +574,8 @@ static int blosc_d(int32_t blocksize, int32_t leftoverblock,
     _tmp = dest;
   }
 
+  /* bits 5-7 of the flags byte encode the compressor format */
+  compressor_format = (params.flags & 0xe0) >> 5;
+
   /* Compress for each shuffled slice split for this block. */
   if ((typesize <= MAX_SPLITS) && (blocksize/typesize) >= MIN_BUFFERSIZE &&
       (!leftoverblock)) {
@@ -341,10 +595,41 @@ static int blosc_d(int32_t blocksize, int32_t leftoverblock,
       nbytes = neblock;
     }
     else {
-      nbytes = blosclz_decompress(src, cbytes, _tmp, neblock);
+      if (compressor_format == BLOSC_BLOSCLZ_FORMAT) {
+        nbytes = blosclz_decompress(src, cbytes, _tmp, neblock);
+      }
+      #if defined(HAVE_LZ4)
+      else if (compressor_format == BLOSC_LZ4_FORMAT) {
+        nbytes = lz4_wrap_decompress((char *)src, (size_t)cbytes,
+                                     (char*)_tmp, (size_t)neblock);
+      }
+      #endif /*  HAVE_LZ4 */
+      #if defined(HAVE_SNAPPY)
+      else if (compressor_format == BLOSC_SNAPPY_FORMAT) {
+        nbytes = snappy_wrap_decompress((char *)src, (size_t)cbytes,
+                                        (char*)_tmp, (size_t)neblock);
+      }
+      #endif /*  HAVE_SNAPPY */
+      #if defined(HAVE_ZLIB)
+      else if (compressor_format == BLOSC_ZLIB_FORMAT) {
+        nbytes = zlib_wrap_decompress((char *)src, (size_t)cbytes,
+                                      (char*)_tmp, (size_t)neblock);
+      }
+      #endif /*  HAVE_ZLIB */
+      else {
+        blosc_compcode_to_compname(compressor_format, &compname);
+        fprintf(stderr,
+                "Blosc has not been compiled with decompression "
+                "support for '%s' format. ", compname);
+        fprintf(stderr, "Please recompile for adding this support.\n");
+        return -5;    /* signals no decompression support */
+      }
+
+      /* Check that the number of decompressed bytes is correct */
       if (nbytes != neblock) {
-        return -2;
+	return -2;
       }
+
     }
     src += cbytes;
     ctbytes += cbytes;
@@ -467,13 +752,14 @@ static int parallel_blosc(void)
 /* Convenience functions for creating and releasing temporaries */
 static int create_temporaries(void)
 {
-  int32_t tid;
+  int32_t tid, ebsize;
   int32_t typesize = params.typesize;
   int32_t blocksize = params.blocksize;
+
   /* Extended blocksize for temporary destination.  Extended blocksize
    is only useful for compression in parallel mode, but it doesn't
    hurt serial mode either. */
-  int32_t ebsize = blocksize + typesize*(int32_t)sizeof(int32_t);
+  ebsize = blocksize + typesize * (int32_t)sizeof(int32_t);
 
   /* Create temporary area for each thread */
   for (tid = 0; tid < nthreads; tid++) {
@@ -572,6 +858,21 @@ static int32_t compute_blocksize(int32_t clevel, int32_t typesize,
   }
   else if (nbytes >= L1*4) {
     blocksize = L1 * 4;
+
+    /* For Zlib, increase the block sizes by a factor of 8 because it
+       is meant for compressing large blocks (it shows a big overhead
+       when compressing small ones). */
+    if (compressor == BLOSC_ZLIB) {
+      blocksize *= 8;
+    }
+
+    /* For LZ4HC, increase the block sizes by a factor of 8 because it
+       is meant for compressing large blocks (it shows a big overhead
+       when compressing small ones). */
+    if (compressor == BLOSC_LZ4HC) {
+      blocksize *= 8;
+    }
+
     if (clevel == 0) {
       blocksize /= 16;
     }
@@ -591,6 +892,21 @@ static int32_t compute_blocksize(int32_t clevel, int32_t typesize,
       blocksize *= 2;
     }
   }
+  else if (nbytes > (16 * 16))  {
+      /* align to typesize to make use of vectorized shuffles */
+      if (typesize == 2) {
+          blocksize -= blocksize % (16 * 2);
+      }
+      else if (typesize == 4) {
+          blocksize -= blocksize % (16 * 4);
+      }
+      else if (typesize == 8) {
+          blocksize -= blocksize % (16 * 8);
+      }
+      else if (typesize == 16) {
+          blocksize -= blocksize % (16 * 16);
+      }
+  }
 
   /* Check that blocksize is not too large */
   if (blocksize > (int32_t)nbytes) {
@@ -605,17 +921,19 @@ static int32_t compute_blocksize(int32_t clevel, int32_t typesize,
   /* blocksize must not exceed (64 KB * typesize) in order to allow
      BloscLZ to achieve better compression ratios (the ultimate reason
      for this is that hash_log in BloscLZ cannot be larger than 15) */
-  if ((blocksize / typesize) > 64*KB) {
+  if ((compressor == BLOSC_BLOSCLZ) && (blocksize / typesize) > 64*KB) {
     blocksize = 64 * KB * typesize;
   }
 
   return blocksize;
 }
 
+#define BLOSC_UNLOCK_RETURN(val) \
+  return (pthread_mutex_unlock(&global_comp_mutex), val)
 
 /* The public routine for compression.  See blosc.h for docstrings. */
 int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
-      const void *src, void *dest, size_t destsize)
+                   const void *src, void *dest, size_t destsize)
 {
   uint8_t *_dest=NULL;         /* current pos for destination buffer */
   uint8_t *flags;              /* flags for header.  Currently booked:
@@ -629,6 +947,7 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
   int32_t ntbytes = 0;        /* the number of compressed bytes */
   int32_t *ntbytes_;          /* placeholder for bytes in output buffer */
   int32_t maxbytes = (int32_t)destsize;  /* maximum size for dest buffer */
+  int compressor_format = -1; /* the format for compressor */
 
   /* Check buffer size limits */
   if (nbytes > BLOSC_MAX_BUFFERSIZE) {
@@ -670,18 +989,44 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
 
   _dest = (uint8_t *)(dest);
   /* Write header for this block */
-  _dest[0] = BLOSC_VERSION_FORMAT;         /* blosc format version */
-  _dest[1] = BLOSCLZ_VERSION_FORMAT;       /* blosclz format version */
-  flags = _dest+2;                         /* flags */
-  _dest[2] = 0;                            /* zeroes flags */
-  _dest[3] = (uint8_t)typesize;            /* type size */
+  _dest[0] = BLOSC_VERSION_FORMAT;              /* blosc format version */
+  if (compressor == BLOSC_BLOSCLZ) {
+    compressor_format = BLOSC_BLOSCLZ_FORMAT;
+    _dest[1] = BLOSC_BLOSCLZ_VERSION_FORMAT;    /* blosclz format version */
+  }
+  #if defined(HAVE_LZ4)
+  else if (compressor == BLOSC_LZ4) {
+    compressor_format = BLOSC_LZ4_FORMAT;
+    _dest[1] = BLOSC_LZ4_VERSION_FORMAT;       /* lz4 format version */
+  }
+  else if (compressor == BLOSC_LZ4HC) {
+    compressor_format = BLOSC_LZ4_FORMAT;
+    _dest[1] = BLOSC_LZ4_VERSION_FORMAT;       /* lz4hc uses the same format as lz4 */
+  }
+  #endif /*  HAVE_LZ4 */
+  #if defined(HAVE_SNAPPY)
+  else if (compressor == BLOSC_SNAPPY) {
+    compressor_format = BLOSC_SNAPPY_FORMAT;
+    _dest[1] = BLOSC_SNAPPY_VERSION_FORMAT;    /* snappy format version */
+  }
+  #endif /*  HAVE_SNAPPY */
+  #if defined(HAVE_ZLIB)
+  else if (compressor == BLOSC_ZLIB) {
+    compressor_format = BLOSC_ZLIB_FORMAT;
+    _dest[1] = BLOSC_ZLIB_VERSION_FORMAT;      /* zlib format version */
+  }
+  #endif /*  HAVE_ZLIB */
+
+  flags = _dest+2;                             /* flags */
+  _dest[2] = 0;                                /* zeroes flags */
+  _dest[3] = (uint8_t)typesize;                /* type size */
   _dest += 4;
-  ((int32_t *)_dest)[0] = sw32(nbytes_);  /* size of the buffer */
-  ((int32_t *)_dest)[1] = sw32(blocksize);/* block size */
-  ntbytes_ = (int32_t *)(_dest+8);        /* compressed buffer size */
+  ((int32_t *)_dest)[0] = sw32(nbytes_);       /* size of the buffer */
+  ((int32_t *)_dest)[1] = sw32(blocksize);     /* block size */
+  ntbytes_ = (int32_t *)(_dest+8);             /* compressed buffer size */
   _dest += sizeof(int32_t)*3;
-  bstarts = (int32_t *)_dest;             /* starts for every block */
-  _dest += sizeof(int32_t)*nblocks;        /* space for pointers to blocks */
+  bstarts = (int32_t *)_dest;                  /* starts for every block */
+  _dest += sizeof(int32_t)*nblocks;          /* space for pointers to blocks */
   ntbytes = (int32_t)(_dest - (uint8_t *)dest);
 
   if (clevel == 0) {
@@ -696,9 +1041,11 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
 
   if (doshuffle == 1) {
     /* Shuffle is active */
-    *flags |= BLOSC_DOSHUFFLE;              /* bit 0 set to one in flags */
+    *flags |= BLOSC_DOSHUFFLE;          /* bit 0 set to one in flags */
   }
 
+  *flags |= compressor_format << 5;        /* compressor format starts at bit 5 */
+
   /* Take global lock for the time of compression */
   pthread_mutex_lock(&global_comp_mutex);
   /* Populate parameters for compression routines */
@@ -720,7 +1067,7 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
     /* Do the actual compression */
     ntbytes = do_job();
     if (ntbytes < 0) {
-      return -1;
+      BLOSC_UNLOCK_RETURN(-1);
     }
     if ((ntbytes == 0) && (nbytes_+BLOSC_MAX_OVERHEAD <= maxbytes)) {
       /* Last chance for fitting `src` buffer in `dest`.  Update flags
@@ -741,7 +1088,7 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
       params.ntbytes = BLOSC_MAX_OVERHEAD;
       ntbytes = do_job();
       if (ntbytes < 0) {
-	return -1;
+        BLOSC_UNLOCK_RETURN(-1);
       }
     }
     else {
@@ -755,7 +1102,7 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
 
   /* Release global lock */
   pthread_mutex_unlock(&global_comp_mutex);
-  
+
   assert((int32_t)ntbytes <= (int32_t)maxbytes);
   return ntbytes;
 }
@@ -776,9 +1123,9 @@ int blosc_decompress(const void *src, void *dest, size_t destsize)
   _src = (uint8_t *)(src);
 
   /* Read the header block */
-  version = _src[0];                         /* blosc format version */
-  versionlz = _src[1];                       /* blosclz format version */
-  flags = _src[2];                           /* flags */
+  version = _src[0];                        /* blosc format version */
+  versionlz = _src[1];                      /* blosclz format version */
+  flags = _src[2];                          /* flags */
   typesize = (int32_t)_src[3];              /* typesize */
   _src += 4;
   nbytes = sw32(((int32_t *)_src)[0]);      /* buffer size */
@@ -805,7 +1152,7 @@ int blosc_decompress(const void *src, void *dest, size_t destsize)
 
   /* Take global lock for the time of decompression */
   pthread_mutex_lock(&global_comp_mutex);
-  
+
   /* Populate parameters for decompression routines */
   params.compress = 0;
   params.clevel = 0;            /* specific for compression */
@@ -827,7 +1174,7 @@ int blosc_decompress(const void *src, void *dest, size_t destsize)
        cache size or multi-cores */
       ntbytes = do_job();
       if (ntbytes < 0) {
-	return -1;
+        BLOSC_UNLOCK_RETURN(-1);
       }
     }
     else {
@@ -839,12 +1186,12 @@ int blosc_decompress(const void *src, void *dest, size_t destsize)
     /* Do the actual decompression */
     ntbytes = do_job();
     if (ntbytes < 0) {
-      return -1;
+      BLOSC_UNLOCK_RETURN(-1);
     }
   }
   /* Release global lock */
   pthread_mutex_unlock(&global_comp_mutex);
-  
+
   assert(ntbytes <= (int32_t)destsize);
   return ntbytes;
 }
@@ -874,7 +1221,7 @@ int blosc_getitem(const void *src, int start, int nitems, void *dest)
 
   /* Take global lock  */
   pthread_mutex_lock(&global_comp_mutex);
-  
+
   /* Read the header block */
   version = _src[0];                         /* blosc format version */
   versionlz = _src[1];                       /* blosclz format version */
@@ -901,12 +1248,12 @@ int blosc_getitem(const void *src, int start, int nitems, void *dest)
   /* Check region boundaries */
   if ((start < 0) || (start*typesize > nbytes)) {
     fprintf(stderr, "`start` out of bounds");
-    return (-1);
+    BLOSC_UNLOCK_RETURN(-1);
   }
 
   if ((stop < 0) || (stop*typesize > nbytes)) {
     fprintf(stderr, "`start`+`nitems` out of bounds");
-    return (-1);
+    BLOSC_UNLOCK_RETURN(-1);
   }
 
   /* Parameters needed by blosc_d */
@@ -917,11 +1264,11 @@ int blosc_getitem(const void *src, int start, int nitems, void *dest)
   if (tmp == NULL || tmp2 == NULL || current_temp.blocksize < blocksize) {
     tmp = my_malloc(blocksize);
     if (tmp == NULL) {
-      return -1;
+      BLOSC_UNLOCK_RETURN(-1);
     }
     tmp2 = my_malloc(blocksize);
     if (tmp2 == NULL) {
-      return -1;
+      BLOSC_UNLOCK_RETURN(-1);
     }
     tmp_init = 1;
   }
@@ -970,7 +1317,7 @@ int blosc_getitem(const void *src, int start, int nitems, void *dest)
     }
     ntbytes += cbytes;
   }
-  
+
   /* Release global lock */
   pthread_mutex_unlock(&global_comp_mutex);
 
@@ -984,7 +1331,7 @@ int blosc_getitem(const void *src, int start, int nitems, void *dest)
 
 
 /* Decompress & unshuffle several blocks in a single thread */
-static int t_blosc(void *tids)
+static void *t_blosc(void *tids)
 {
   int32_t tid = *(int32_t *)tids;
   int32_t cbytes, ntdest;
@@ -1017,7 +1364,7 @@ static int t_blosc(void *tids)
 
     /* Check if thread has been asked to return */
     if (end_threads) {
-      return(0);
+      return(NULL);
     }
 
     pthread_mutex_lock(&count_mutex);
@@ -1031,7 +1378,7 @@ static int t_blosc(void *tids)
 
     /* Get parameters for this thread before entering the main loop */
     blocksize = params.blocksize;
-    ebsize = blocksize + params.typesize*(int32_t)sizeof(int32_t);
+    ebsize = blocksize + params.typesize * (int32_t)sizeof(int32_t);
     compress = params.compress;
     flags = params.flags;
     maxbytes = params.maxbytes;
@@ -1159,7 +1506,7 @@ static int t_blosc(void *tids)
   }  /* closes while(1) */
 
   /* This should never be reached, but anyway */
-  return(0);
+  return(NULL);
 }
 
 
@@ -1191,11 +1538,9 @@ static int init_threads(void)
   for (tid = 0; tid < nthreads; tid++) {
     tids[tid] = tid;
 #if !defined(_WIN32)
-    rc2 = pthread_create(&threads[tid], &ct_attr, (void*)t_blosc,
-			(void *)&tids[tid]);
+    rc2 = pthread_create(&threads[tid], &ct_attr, t_blosc, (void *)&tids[tid]);
 #else
-    rc2 = pthread_create(&threads[tid], NULL, (void*)t_blosc,
-			(void *)&tids[tid]);
+    rc2 = pthread_create(&threads[tid], NULL, t_blosc, (void *)&tids[tid]);
 #endif
     if (rc2) {
       fprintf(stderr, "ERROR; return code from pthread_create() is %d\n", rc2);
@@ -1216,7 +1561,7 @@ void blosc_init(void) {
   init_lib = 1;
 }
 
-int blosc_set_nthreads(int nthreads_new) 
+int blosc_set_nthreads(int nthreads_new)
 {
   int ret;
 
@@ -1226,11 +1571,11 @@ int blosc_set_nthreads(int nthreads_new)
 
   /* Take global lock  */
   pthread_mutex_lock(&global_comp_mutex);
-  
+
   ret = blosc_set_nthreads_(nthreads_new);
   /* Release global lock  */
   pthread_mutex_unlock(&global_comp_mutex);
-  
+
   return ret;
 }
 
@@ -1282,6 +1627,87 @@ int blosc_set_nthreads_(int nthreads_new)
   return nthreads_old;
 }
 
+int blosc_set_compressor(const char *compname)
+{
+  int code;
+
+  /* Check if we should initialize */
+  if (!init_lib) blosc_init();
+
+  code = blosc_compname_to_compcode(compname);
+
+  /* Take global lock  */
+  pthread_mutex_lock(&global_comp_mutex);
+
+  compressor = code;
+
+  /* Release global lock  */
+  pthread_mutex_unlock(&global_comp_mutex);
+
+  return code;
+}
+
+char* blosc_list_compressors(void)
+{
+  static int compressors_list_done = 0;
+  static char ret[256];
+
+  if (compressors_list_done) return ret;
+  ret[0] = '\0';
+  strcat(ret, BLOSC_BLOSCLZ_COMPNAME);
+#if defined(HAVE_LZ4)
+  strcat(ret, ","); strcat(ret, BLOSC_LZ4_COMPNAME);
+  strcat(ret, ","); strcat(ret, BLOSC_LZ4HC_COMPNAME);
+#endif /*  HAVE_LZ4 */
+#if defined(HAVE_SNAPPY)
+  strcat(ret, ","); strcat(ret, BLOSC_SNAPPY_COMPNAME);
+#endif /*  HAVE_SNAPPY */
+#if defined(HAVE_ZLIB)
+  strcat(ret, ","); strcat(ret, BLOSC_ZLIB_COMPNAME);
+#endif /*  HAVE_ZLIB */
+  compressors_list_done = 1;
+  return ret;
+}
+
+int blosc_get_complib_info(char *compname, char **complib, char **version)
+{
+  int clibcode;
+  char *clibname;
+  char *clibversion = "unknown";
+  char sbuffer[256];       /* scratch space for building the version string */
+
+  clibcode = compname_to_clibcode(compname);
+  clibname = clibcode_to_clibname(clibcode);
+
+  /* complib version */
+  if (clibcode == BLOSC_BLOSCLZ_LIB) {
+    clibversion = BLOSCLZ_VERSION_STRING;
+  }
+#if defined(HAVE_LZ4)
+  else if (clibcode == BLOSC_LZ4_LIB) {
+#if defined(LZ4_VERSION_STRING)
+    clibversion = LZ4_VERSION_STRING;
+#endif /* LZ4_VERSION_STRING */
+  }
+#endif /*  HAVE_LZ4 */
+#if defined(HAVE_SNAPPY)
+  else if (clibcode == BLOSC_SNAPPY_LIB) {
+#if defined(SNAPPY_VERSION)
+    sprintf(sbuffer, "%d.%d.%d", SNAPPY_MAJOR, SNAPPY_MINOR, SNAPPY_PATCHLEVEL);
+    clibversion = sbuffer;
+#endif /*  SNAPPY_VERSION */
+  }
+#endif /*  HAVE_SNAPPY */
+#if defined(HAVE_ZLIB)
+  else if (clibcode == BLOSC_ZLIB_LIB) {
+    clibversion = ZLIB_VERSION;
+  }
+#endif /*  HAVE_ZLIB */
+
+  *complib = strdup(clibname);
+  *version = strdup(clibversion);
+  return clibcode;
+}
 
 /* Free possible memory temporaries and thread resources */
 int blosc_free_resources(void)
@@ -1289,7 +1715,7 @@ int blosc_free_resources(void)
   int32_t t;
   int rc2;
   void *status;
- 
+
    /* Take global lock  */
   pthread_mutex_lock(&global_comp_mutex);
 
@@ -1407,9 +1833,9 @@ void blosc_set_blocksize(size_t size)
 {
   /* Take global lock  */
   pthread_mutex_lock(&global_comp_mutex);
-  
+
   force_blocksize = (int32_t)size;
-  
+
    /* Release global lock  */
   pthread_mutex_unlock(&global_comp_mutex);
 }
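
The blosc.c changes above extend the 16-byte buffer header so that the
flags byte (byte 2) also records which compressor produced the stream,
in bits 5-7.  As a minimal sketch (not part of the patch; the helper
name is illustrative), a reader can recover that code like this:

    #include <stdint.h>

    /* The compressor format code (always < 8) sits in bits 5-7 of the
       flags byte, which is byte 2 of the Blosc header. */
    static int header_compressor_format(const uint8_t *cbuffer)
    {
      return (cbuffer[2] >> 5) & 0x07;
    }
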
diff --git a/blosc/blosc.h b/c-blosc/blosc/blosc.h
similarity index 56%
rename from blosc/blosc.h
rename to c-blosc/blosc/blosc.h
index 6a7129a..36fc819 100644
--- a/blosc/blosc.h
+++ b/c-blosc/blosc/blosc.h
@@ -13,19 +13,17 @@
 
 /* Version numbers */
 #define BLOSC_VERSION_MAJOR    1    /* for major interface/format changes  */
-#define BLOSC_VERSION_MINOR    2    /* for minor interface/format changes  */
-#define BLOSC_VERSION_RELEASE  3    /* for tweaks, bug-fixes, or development */
+#define BLOSC_VERSION_MINOR    3    /* for minor interface/format changes  */
+#define BLOSC_VERSION_RELEASE  2    /* for tweaks, bug-fixes, or development */
 
-#define BLOSC_VERSION_STRING   "1.2.3"  /* string version.  Sync with above! */
+#define BLOSC_VERSION_STRING   "1.3.2"  /* string version.  Sync with above! */
 #define BLOSC_VERSION_REVISION "$Rev$"   /* revision version */
-#define BLOSC_VERSION_DATE     "$Date:: 2013-05-17 #$"    /* date version */
+#define BLOSC_VERSION_DATE     "$Date:: 2014-01-17 #$"    /* date version */
+
+#define BLOSCLZ_VERSION_STRING "1.0.1"   /* the internal compressor version */
 
 /* The *_VERS_FORMAT should be just 1-byte long */
 #define BLOSC_VERSION_FORMAT    2   /* Blosc format version, starting at 1 */
-#define BLOSCLZ_VERSION_FORMAT  1   /* Blosclz format version, starting at 1 */
-
-/* The combined blosc and blosclz formats */
-#define BLOSC_VERSION_CFORMAT (BLOSC_VERSION_FORMAT << 8) & (BLOSCLZ_VERSION_FORMAT)
 
 /* Minimum header length */
 #define BLOSC_MIN_HEADER_LENGTH 16
@@ -48,26 +46,64 @@
 #define BLOSC_DOSHUFFLE 0x1
 #define BLOSC_MEMCPYED  0x2
 
+/* Codes for the different compressors shipped with Blosc */
+#define BLOSC_BLOSCLZ   0
+#define BLOSC_LZ4       1
+#define BLOSC_LZ4HC     2
+#define BLOSC_SNAPPY    3
+#define BLOSC_ZLIB      4
+
+/* Names for the different compressors shipped with Blosc */
+#define BLOSC_BLOSCLZ_COMPNAME   "blosclz"
+#define BLOSC_LZ4_COMPNAME       "lz4"
+#define BLOSC_LZ4HC_COMPNAME     "lz4hc"
+#define BLOSC_SNAPPY_COMPNAME    "snappy"
+#define BLOSC_ZLIB_COMPNAME      "zlib"
+
+/* Codes for the different compression libraries shipped with Blosc */
+#define BLOSC_BLOSCLZ_LIB   0
+#define BLOSC_LZ4_LIB       1
+#define BLOSC_SNAPPY_LIB    2
+#define BLOSC_ZLIB_LIB      3
+
+/* Names for the different compression libraries shipped with Blosc */
+#define BLOSC_BLOSCLZ_LIBNAME   "BloscLZ"
+#define BLOSC_LZ4_LIBNAME       "LZ4"
+#define BLOSC_SNAPPY_LIBNAME    "Snappy"
+#define BLOSC_ZLIB_LIBNAME      "Zlib"
+
+/* The codes for compressor formats shipped with Blosc (code must be < 8) */
+#define BLOSC_BLOSCLZ_FORMAT  BLOSC_BLOSCLZ_LIB
+#define BLOSC_LZ4_FORMAT      BLOSC_LZ4_LIB
+    /* LZ4HC and LZ4 share the same format */
+#define BLOSC_LZ4HC_FORMAT    BLOSC_LZ4_LIB
+#define BLOSC_SNAPPY_FORMAT   BLOSC_SNAPPY_LIB
+#define BLOSC_ZLIB_FORMAT     BLOSC_ZLIB_LIB
+
+
+/* The version formats for compressors shipped with Blosc */
+/* All versions here start at 1 */
+#define BLOSC_BLOSCLZ_VERSION_FORMAT  1
+#define BLOSC_LZ4_VERSION_FORMAT      1
+#define BLOSC_LZ4HC_VERSION_FORMAT    1  /* LZ4HC and LZ4 share the same format */
+#define BLOSC_SNAPPY_VERSION_FORMAT   1
+#define BLOSC_ZLIB_VERSION_FORMAT     1
 
 
 /**
-  Initialize the Blosc library. You must call this previous to any other
-  Blosc call, and make sure that you call this in a non-threaded environment.
-  Other Blosc calls can be called in a threaded environment, if desired.
-
- */
-
+  Initialize the Blosc library.  You must call this before any other
+  Blosc call, and make sure that you call it in a non-threaded
+  environment.  Other Blosc calls can be made in a threaded
+  environment, if desired.
+  */
 void blosc_init(void);
 
 
 /**
-
-  Destroy the Blosc library environment. You must call this after to you are
-  done with all the Blosc calls, and make sure that you call this in a
-  non-threaded environment.
-
- */
-
+  Destroy the Blosc library environment.  You must call this after you
+  are done with all the Blosc calls, and make sure that you call it in
+  a non-threaded environment.
+  */
 void blosc_destroy(void);
 
 
@@ -92,6 +128,11 @@ void blosc_destroy(void);
   (`nbytes`+BLOSC_MAX_OVERHEAD), the compression will always succeed.
   The `src` buffer and the `dest` buffer can not overlap.
 
+  Compression is memory safe and guaranteed not to write more than
+  `destsize` bytes into the `dest` buffer.  However, it is not
+  re-entrant and not thread-safe (despite the fact that it uses
+  threads internally).
+
   If `src` buffer cannot be compressed into `destsize`, the return
   value is zero and you should discard the contents of the `dest`
   buffer.
@@ -99,23 +140,14 @@ void blosc_destroy(void);
   A negative return value means that an internal error happened.  This
   should never happen.  If you see this, please report it back
   together with the buffer data causing this and compression settings.
-
-  Compression is memory safe and guaranteed not to write the `dest`
-  buffer more than what is specified in `destsize`.  However, it is
-  not re-entrant and not thread-safe (despite the fact that it uses
-  threads internally).
- */
-
+  */
 int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
                    const void *src, void *dest, size_t destsize);
 
 
 /**
   Decompress a block of compressed data in `src`, put the result in
-  `dest` and returns the size of the decompressed block. If error
-  occurs, e.g. the compressed data is corrupted or the output buffer
-  is not large enough, then 0 (zero) or a negative value will be
-  returned instead.
+  `dest` and returns the size of the decompressed block.
 
   The `src` buffer and the `dest` buffer can not overlap.
 
@@ -123,37 +155,102 @@ int blosc_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes,
   buffer more than what is specified in `destsize`.  However, it is
   not re-entrant and not thread-safe (despite the fact that it uses
   threads internally).
-*/
 
+  If an error occurs, e.g. the compressed data is corrupted or the
+  output buffer is not large enough, then 0 (zero) or a negative value
+  will be returned instead.
+  */
 int blosc_decompress(const void *src, void *dest, size_t destsize);
 
 
 /**
   Get `nitems` (of typesize size) in `src` buffer starting in `start`.
   The items are returned in `dest` buffer, which has to have enough
-  space for storing all items.  Returns the number of bytes copied to
-  `dest` or a negative value if some error happens.
- */
+  space for storing all items.
 
+  Returns the number of bytes copied to `dest` or a negative value if
+  some error happens.
+  */
 int blosc_getitem(const void *src, int start, int nitems, void *dest);
 
 
 /**
   Initialize a pool of threads for compression/decompression.  If
   `nthreads` is 1, then the serial version is chosen and a possible
-  previous existing pool is ended.  Returns the previous number of
-  threads.  If this is not called, `nthreads` is set to 1 internally.
-*/
+  previous existing pool is ended.  If this is not called, `nthreads`
+  is set to 1 internally.
 
+  Returns the previous number of threads.
+  */
 int blosc_set_nthreads(int nthreads);
 
 
 /**
+  Select the compressor to be used.  The supported ones are "blosclz",
+  "lz4", "lz4hc", "snappy" and "zlib".  If this function is not
+  called, then "blosclz" will be used.
+
+  In case the compressor is not recognized, or there is no support
+  for it in this build, it returns -1.  Else it returns the code for
+  the compressor (>=0).
+  */
+int blosc_set_compressor(const char* compname);
+
+
+/**
+  Get the `compname` associated with the `compcode`.
+
+  If the compressor code is not recognized, or there is no support
+  for it in this build, -1 is returned.  Else, the compressor code is
+  returned.
+ */
+int blosc_compcode_to_compname(int compcode, char **compname);
+
+
+/**
+  Return the compressor code associated with the compressor name.
+
+  If the compressor name is not recognized, or there is no support
+  for it in this build, -1 is returned instead.
+ */
+int blosc_compname_to_compcode(const char *compname);
+
+
+/**
+  Get a list of compressors supported in the current build.  The
+  returned value is a string with a concatenation of "blosclz", "lz4",
+  "lz4hc", "snappy" or "zlib" separated by commas, depending on which
+  ones are present in the build.
+
+  The returned string points to an internal static buffer, so you
+  should not free() it.
+
+  This function should always succeed.
+  */
+char* blosc_list_compressors(void);
+
+
+/**
+  Get info from the compression library associated with `compname`,
+  which must be one of the supported compressor names.
+
+  In `complib` and `version` you get back newly allocated strings with
+  the compression library name and its version (if available),
+  respectively.  After using them, you should free() both so as to
+  avoid leaks.
+
+  If the compressor is supported, it returns the code for the library
+  (>=0).  If it is not supported, this function returns -1.
+  */
+int blosc_get_complib_info(char *compname, char **complib, char **version);
+
+
+/**
   Free possible memory temporaries and thread resources.  Use this when you
   are not going to use Blosc for a long while.  In case of problems releasing
   the resources, it returns a negative number, else it returns 0.
-*/
-
+  */
 int blosc_free_resources(void);
 
 
@@ -167,8 +264,7 @@ int blosc_free_resources(void);
   compressed buffer for this call to work.
 
   This function should always succeed.
-*/
-
+  */
 void blosc_cbuffer_sizes(const void *cbuffer, size_t *nbytes,
                          size_t *cbytes, size_t *blocksize);
 
@@ -186,8 +282,7 @@ void blosc_cbuffer_sizes(const void *cbuffer, size_t *nbytes,
   says whether the buffer is shuffled or not).
 
   This function should always succeed.
-*/
-
+  */
 void blosc_cbuffer_metainfo(const void *cbuffer, size_t *typesize,
                             int *flags);
 
@@ -195,10 +290,10 @@ void blosc_cbuffer_metainfo(const void *cbuffer, size_t *typesize,
 /**
   Return information about a compressed buffer, namely the internal
   Blosc format version (`version`) and the format for the internal
-  Lempel-Ziv algorithm (`versionlz`).  This function should always
-  succeed.
-*/
+  Lempel-Ziv algorithm (`versionlz`).
 
+  This function should always succeed.
+  */
 void blosc_cbuffer_versions(const void *cbuffer, int *version,
                             int *versionlz);
 
@@ -214,8 +309,7 @@ void blosc_cbuffer_versions(const void *cbuffer, int *version,
 /**
   Force the use of a specific blocksize.  If 0, an automatic
   blocksize will be used (the default).
-*/
-
+  */
 void blosc_set_blocksize(size_t blocksize);
 
 
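The compressor-selection API introduced above composes with the
existing entry points.  A minimal usage sketch, assuming blosc.h is on
the include path and the library is linked (this program is
illustrative and not shipped with the patch):

    #include <stdio.h>
    #include <string.h>
    #include "blosc.h"

    int main(void)
    {
      static float data[1000], roundtrip[1000];
      char comp[sizeof(data) + BLOSC_MAX_OVERHEAD];
      int csize, dsize, i;

      for (i = 0; i < 1000; i++) data[i] = (float)i;

      blosc_init();
      printf("compressors in this build: %s\n", blosc_list_compressors());
      if (blosc_set_compressor("lz4") < 0)   /* -1: not in this build */
        blosc_set_compressor("blosclz");     /* always available */

      csize = blosc_compress(5, 1, sizeof(float), sizeof(data),
                             data, comp, sizeof(comp));
      if (csize <= 0) return 1;   /* 0: does not fit, < 0: internal error */

      dsize = blosc_decompress(comp, roundtrip, sizeof(roundtrip));
      if (dsize < 0 || memcmp(data, roundtrip, sizeof(data)) != 0)
        return 1;

      blosc_destroy();
      return 0;
    }
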
diff --git a/blosc/blosclz.c b/c-blosc/blosc/blosclz.c
similarity index 95%
rename from blosc/blosclz.c
rename to c-blosc/blosc/blosclz.c
index 5f0ac5f..b750b73 100644
--- a/blosc/blosclz.c
+++ b/c-blosc/blosc/blosclz.c
@@ -66,12 +66,8 @@
 /*
  * Use inlined functions for supported systems.
  */
-#if defined(__GNUC__) || defined(__DMC__) || defined(__POCC__) || defined(__WATCOMC__) || defined(__SUNPRO_C)
-#define BLOSCLZ_INLINE inline
-#elif defined(__BORLANDC__) || defined(_MSC_VER) || defined(__LCC__)
-#define BLOSCLZ_INLINE __inline
-#else
-#define BLOSCLZ_INLINE
+#if defined(_MSC_VER) && !defined(__cplusplus)   /* Visual Studio */
+#define inline __inline  /* Visual C is not C99, but supports some kind of inline */
 #endif
 
 #define MAX_COPY       32
@@ -86,7 +82,7 @@
 #endif
 
 
-static BLOSCLZ_INLINE int32_t hash_function(uint8_t* p, uint8_t hash_log)
+static inline int32_t hash_function(uint8_t* p, uint8_t hash_log)
 {
   int32_t v;
 
@@ -109,13 +105,12 @@ int blosclz_compress(int opt_level, const void* input,
   uint8_t* op = (uint8_t*) output;
 
   /* Hash table depends on the opt level.  Hash_log cannot be larger than 15. */
-  uint8_t hash_log_[10] = {-1, 8, 9, 9, 11, 11, 12, 13, 14, 15};
+  int8_t hash_log_[10] = {-1, 8, 9, 9, 11, 11, 12, 13, 14, 15};
   uint8_t hash_log = hash_log_[opt_level];
   uint16_t hash_size = 1 << hash_log;
   uint16_t *htab;
   uint8_t* op_limit;
 
-  int32_t hslot;
   int32_t hval;
   uint8_t copy;
 
@@ -132,7 +127,7 @@ int blosclz_compress(int opt_level, const void* input,
     return 0;                   /* Mark this as uncompressible */
   }
 
-  htab = (uint16_t *) malloc(hash_size*sizeof(uint16_t));
+  htab = (uint16_t *) calloc(hash_size, sizeof(uint16_t));
 
   /* sanity check */
   if(BLOSCLZ_UNEXPECT_CONDITIONAL(length < 4)) {
@@ -148,10 +143,6 @@ int blosclz_compress(int opt_level, const void* input,
     else goto out;
   }
 
-  /* initializes hash table */
-  for (hslot = 0; hslot < hash_size; hslot++)
-    htab[hslot] = 0;
-
   /* we start with literal copy */
   copy = 2;
   *op++ = MAX_COPY-1;
diff --git a/blosc/blosclz.h b/c-blosc/blosc/blosclz.h
similarity index 100%
rename from blosc/blosclz.h
rename to c-blosc/blosc/blosclz.h
diff --git a/c-blosc/blosc/config.h.in b/c-blosc/blosc/config.h.in
new file mode 100644
index 0000000..6689769
--- /dev/null
+++ b/c-blosc/blosc/config.h.in
@@ -0,0 +1,9 @@
+#ifndef _CONFIGURATION_HEADER_GUARD_H_
+#define _CONFIGURATION_HEADER_GUARD_H_
+
+#cmakedefine HAVE_LZ4 @HAVE_LZ4@
+#cmakedefine HAVE_SNAPPY @HAVE_SNAPPY@
+#cmakedefine HAVE_ZLIB @HAVE_ZLIB@
+
+
+#endif
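
For reference, a build where CMake finds LZ4 and Zlib but not Snappy
would expand the #cmakedefine lines into a generated config.h along
these lines (illustrative output, not a file in the patch):

    #ifndef _CONFIGURATION_HEADER_GUARD_H_
    #define _CONFIGURATION_HEADER_GUARD_H_

    #define HAVE_LZ4 1
    #define HAVE_ZLIB 1
    /* #undef HAVE_SNAPPY */

    #endif
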
diff --git a/blosc/shuffle.c b/c-blosc/blosc/shuffle.c
similarity index 96%
rename from blosc/shuffle.c
rename to c-blosc/blosc/shuffle.c
index cbc6701..c909a7b 100644
--- a/blosc/shuffle.c
+++ b/c-blosc/blosc/shuffle.c
@@ -242,12 +242,12 @@ shuffle16(uint8_t* dest, uint8_t* src, size_t size)
 void shuffle(size_t bytesoftype, size_t blocksize,
              uint8_t* _src, uint8_t* _dest) {
   int unaligned_dest = (int)((uintptr_t)_dest % 16);
-  int power_of_two = (blocksize & (blocksize - 1)) == 0;
+  int multiple_of_block = (blocksize % (16 * bytesoftype)) == 0;
   int too_small = (blocksize < 256);
 
-  if (unaligned_dest || !power_of_two || too_small) {
-    /* _dest buffer is not aligned, not a power of two or is too
-       small.  Call the non-sse2 version. */
+  if (unaligned_dest || !multiple_of_block || too_small) {
+    /* _dest buffer is not aligned, not a multiple of the vectorization
+     * size or is too small.  Call the non-sse2 version. */
     _shuffle(bytesoftype, blocksize, _src, _dest);
     return;
   }
@@ -456,12 +456,12 @@ void unshuffle(size_t bytesoftype, size_t blocksize,
                uint8_t* _src, uint8_t* _dest) {
   int unaligned_src = (int)((uintptr_t)_src % 16);
   int unaligned_dest = (int)((uintptr_t)_dest % 16);
-  int power_of_two = (blocksize & (blocksize - 1)) == 0;
+  int multiple_of_block = (blocksize % (16 * bytesoftype)) == 0;
   int too_small = (blocksize < 256);
 
-  if (unaligned_src || unaligned_dest || !power_of_two || too_small) {
-    /* _src or _dest buffer is not aligned, not a power of two or is
-       too small.  Call the non-sse2 version. */
+  if (unaligned_src || unaligned_dest || !multiple_of_block || too_small) {
+    /* _src or _dest buffer is not aligned, not a multiple of the
+     * vectorization size or is too small.  Call the non-sse2 version. */
     _unshuffle(bytesoftype, blocksize, _src, _dest);
     return;
   }
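
In both routines the old power-of-two test is replaced by a stricter
requirement: the block must hold a whole number of 16-element vectors
for the given type size.  A small sketch of the new eligibility test
(illustrative; unshuffle() additionally checks the source alignment):

    #include <stddef.h>
    #include <stdint.h>

    static int sse2_eligible(const uint8_t *dest, size_t bytesoftype,
                             size_t blocksize)
    {
      int aligned  = ((uintptr_t)dest % 16) == 0;
      /* e.g. bytesoftype = 4 requires blocksize % 64 == 0 */
      int multiple = (blocksize % (16 * bytesoftype)) == 0;
      int big      = blocksize >= 256;
      return aligned && multiple && big;
    }
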
diff --git a/blosc/shuffle.h b/c-blosc/blosc/shuffle.h
similarity index 100%
rename from blosc/shuffle.h
rename to c-blosc/blosc/shuffle.h
diff --git a/blosc/win32/pthread.c b/c-blosc/blosc/win32/pthread.c
similarity index 82%
rename from blosc/win32/pthread.c
rename to c-blosc/blosc/win32/pthread.c
index d1a9d91..28c81e0 100644
--- a/blosc/win32/pthread.c
+++ b/c-blosc/blosc/win32/pthread.c
@@ -4,6 +4,24 @@
  *
  * Copyright (C) 2009 Andrzej K. Haczewski <ahaczewski at gmail.com>
  *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ *
  * DISCLAIMER: The implementation is Git-specific, it is subset of original
  * Pthreads API, without lots of other features that Git doesn't use.
  * Git also makes sure that the passed arguments are valid, so there's
@@ -27,7 +45,7 @@ void die(const char *err, ...)
 
 static unsigned __stdcall win32_start_routine(void *arg)
 {
-	pthread_t *thread = arg;
+	pthread_t *thread = (pthread_t*)arg;
 	thread->arg = thread->start_routine(thread->arg);
 	return 0;
 }
diff --git a/blosc/win32/pthread.h b/c-blosc/blosc/win32/pthread.h
similarity index 55%
rename from blosc/win32/pthread.h
rename to c-blosc/blosc/win32/pthread.h
index c72f100..a95f90e 100644
--- a/blosc/win32/pthread.h
+++ b/c-blosc/blosc/win32/pthread.h
@@ -1,7 +1,31 @@
 /*
- * Header used to adapt pthread-based POSIX code to Windows API threads.
+ * Code for simulating pthreads API on Windows.  This is Git-specific,
+ * but it is enough for Numexpr's needs too.
  *
  * Copyright (C) 2009 Andrzej K. Haczewski <ahaczewski at gmail.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ *
+ * DISCLAIMER: The implementation is Git-specific, it is subset of original
+ * Pthreads API, without lots of other features that Git doesn't use.
+ * Git also makes sure that the passed arguments are valid, so there's
+ * no need for double-checking.
  */
 
 #ifndef PTHREAD_H
diff --git a/blosc/win32/stdint-windows.h b/c-blosc/blosc/win32/stdint-windows.h
similarity index 92%
rename from blosc/win32/stdint-windows.h
rename to c-blosc/blosc/win32/stdint-windows.h
index d02608a..4fe0ef9 100644
--- a/blosc/win32/stdint-windows.h
+++ b/c-blosc/blosc/win32/stdint-windows.h
@@ -1,7 +1,7 @@
 // ISO C9x  compliant stdint.h for Microsoft Visual Studio
 // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 
 // 
-//  Copyright (c) 2006-2008 Alexander Chemeris
+//  Copyright (c) 2006-2013 Alexander Chemeris
 // 
 // Redistribution and use in source and binary forms, with or without
 // modification, are permitted provided that the following conditions are met:
@@ -13,8 +13,9 @@
 //      notice, this list of conditions and the following disclaimer in the
 //      documentation and/or other materials provided with the distribution.
 // 
-//   3. The name of the author may be used to endorse or promote products
-//      derived from this software without specific prior written permission.
+//   3. Neither the name of the product nor the names of its contributors may
+//      be used to endorse or promote products derived from this software
+//      without specific prior written permission.
 // 
 // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
 // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
@@ -40,6 +41,10 @@
 #pragma once
 #endif
 
+#if _MSC_VER >= 1600 // [
+#include <stdint.h>
+#else // ] _MSC_VER >= 1600 [
+
 #include <limits.h>
 
 // For Visual Studio 6 in C++ mode and for many Visual Studio versions when
@@ -238,10 +243,17 @@ typedef uint64_t  uintmax_t;
 #define UINT64_C(val) val##ui64
 
 // 7.18.4.2 Macros for greatest-width integer constants
-#define INTMAX_C   INT64_C
-#define UINTMAX_C  UINT64_C
+// These #ifndef's are needed to prevent collisions with <boost/cstdint.hpp>.
+// Check out Issue 9 for the details.
+#ifndef INTMAX_C //   [
+#  define INTMAX_C   INT64_C
+#endif // INTMAX_C    ]
+#ifndef UINTMAX_C //  [
+#  define UINTMAX_C  UINT64_C
+#endif // UINTMAX_C   ]
 
 #endif // __STDC_CONSTANT_MACROS ]
 
+#endif // _MSC_VER >= 1600 ]
 
 #endif // _MSC_STDINT_H_ ]
diff --git a/c-blosc/cmake/FindLZ4.cmake b/c-blosc/cmake/FindLZ4.cmake
new file mode 100644
index 0000000..84d057f
--- /dev/null
+++ b/c-blosc/cmake/FindLZ4.cmake
@@ -0,0 +1,10 @@
+find_path(LZ4_INCLUDE_DIR lz4.h)
+
+find_library(LZ4_LIBRARY NAMES lz4)
+
+if (LZ4_INCLUDE_DIR AND LZ4_LIBRARY)
+    set(LZ4_FOUND TRUE)
+    message(STATUS "Found LZ4 library: ${LZ4_LIBRARY}")
+else ()
+    message(STATUS "No lz4 found.  Using internal sources.")
+endif ()
diff --git a/c-blosc/cmake/FindSnappy.cmake b/c-blosc/cmake/FindSnappy.cmake
new file mode 100644
index 0000000..688d4d5
--- /dev/null
+++ b/c-blosc/cmake/FindSnappy.cmake
@@ -0,0 +1,10 @@
+find_path(SNAPPY_INCLUDE_DIR snappy-c.h)
+
+find_library(SNAPPY_LIBRARY NAMES snappy)
+
+if (SNAPPY_INCLUDE_DIR AND SNAPPY_LIBRARY)
+    set(SNAPPY_FOUND TRUE)
+    message(STATUS "Found SNAPPY library: ${SNAPPY_LIBRARY}")
+else ()
+    message(STATUS "No snappy found.  Using internal sources.")
+endif ()
diff --git a/c-blosc/cmake_uninstall.cmake.in b/c-blosc/cmake_uninstall.cmake.in
new file mode 100644
index 0000000..c6d8094
--- /dev/null
+++ b/c-blosc/cmake_uninstall.cmake.in
@@ -0,0 +1,22 @@
+if (NOT EXISTS "@CMAKE_CURRENT_BINARY_DIR@/install_manifest.txt")
+    message(FATAL_ERROR "Cannot find install manifest: \"@CMAKE_CURRENT_BINARY_DIR@/install_manifest.txt\"")
+endif(NOT EXISTS "@CMAKE_CURRENT_BINARY_DIR@/install_manifest.txt")
+
+file(READ "@CMAKE_CURRENT_BINARY_DIR@/install_manifest.txt" files)
+string(REGEX REPLACE "\n" ";" files "${files}")
+list(REVERSE files)
+foreach (file ${files})
+    message(STATUS "Uninstalling \"$ENV{DESTDIR}${file}\"")
+    if (EXISTS "$ENV{DESTDIR}${file}")
+        execute_process(
+            COMMAND @CMAKE_COMMAND@ -E remove "$ENV{DESTDIR}${file}"
+            OUTPUT_VARIABLE rm_out
+            RESULT_VARIABLE rm_retval
+        )
+        if(NOT ${rm_retval} EQUAL 0)
+            message(FATAL_ERROR "Problem when removing \"$ENV{DESTDIR}${file}\"")
+        endif (NOT ${rm_retval} EQUAL 0)
+    else (EXISTS "$ENV{DESTDIR}${file}")
+        message(STATUS "File \"$ENV{DESTDIR}${file}\" does not exist.")
+    endif (EXISTS "$ENV{DESTDIR}${file}")
+endforeach(file)
diff --git a/c-blosc/hdf5/CMakeLists.txt b/c-blosc/hdf5/CMakeLists.txt
new file mode 100644
index 0000000..d9cd847
--- /dev/null
+++ b/c-blosc/hdf5/CMakeLists.txt
@@ -0,0 +1,38 @@
+# sources
+set(SOURCES blosc_filter.c)
+
+include_directories("${PROJECT_SOURCE_DIR}/blosc")
+
+# dependencies
+find_package(HDF5 REQUIRED)
+include_directories(${HDF5_INCLUDE_DIRS})
+
+
+# targets
+add_library(blosc_filter_shared SHARED ${SOURCES})
+set_target_properties(blosc_filter_shared PROPERTIES OUTPUT_NAME blosc_filter)
+target_link_libraries(blosc_filter_shared blosc_shared ${HDF5_LIBRARIES})
+
+if(BUILD_STATIC)
+    add_library(blosc_filter_static ${SOURCES})
+    set_target_properties(
+        blosc_filter_static PROPERTIES OUTPUT_NAME blosc_filter)
+    target_link_libraries(blosc_filter_static blosc_static)
+endif(BUILD_STATIC)
+
+
+# install
+install(FILES blosc_filter.h DESTINATION include COMPONENT HDF5_FILTER_DEV)
+install(TARGETS blosc_filter_shared DESTINATION lib COMPONENT HDF5_FILTER)
+if(BUILD_STATIC)
+    install(
+        TARGETS blosc_filter_static DESTINATION lib COMPONENT HDF5_FILTER_DEV)
+endif(BUILD_STATIC)
+
+
+# test
+if(BUILD_TESTS)
+    add_executable(example example.c)
+    target_link_libraries(example blosc_filter_static ${HDF5_LIBRARIES})
+    add_test(test_hdf5_filter example)
+endif(BUILD_TESTS)
diff --git a/c-blosc/hdf5/README.rst b/c-blosc/hdf5/README.rst
new file mode 100644
index 0000000..15c6b35
--- /dev/null
+++ b/c-blosc/hdf5/README.rst
@@ -0,0 +1,62 @@
+Using the Blosc filter from HDF5
+================================
+
+In order to register Blosc into your HDF5 application, you only need
+to call a function in blosc_filter.h, with the following signature:
+
+    int register_blosc(char **version, char **date)
+
+Calling this will register the filter with the HDF5 library and will
+return info about the Blosc release in the `version` and `date` char
+pointers.
+
+A non-negative return value indicates success.  If the registration
+fails, an error is pushed onto the current error stack and a negative
+value is returned.
+
+An example C program ("example.c") is included which demonstrates the
+proper use of the filter.
+
+This filter has been tested against HDF5 versions 1.6.5 through
+1.8.10.  It is released under the MIT license (see LICENSE.txt for
+details).
+
+
+Compiling
+=========
+
+The filter consists of a single '.c' source file and '.h' header,
+along with an embedded version of the BLOSC compression library.
+Also, as Blosc uses SSE2 and multithreading, remember to pass the
+appropriate compiler flags and link the required libraries (only
+necessary when compiling Blosc from sources).
+
+To compile using GCC on UNIX:
+
+  gcc -O3 -msse2 -lhdf5 ../blosc/*.c blosc_filter.c \
+        example.c -o example -lpthread
+
+or, if you have the Blosc library already installed (recommended):
+
+  gcc -O3 -lhdf5 -lblosc blosc_filter.c example.c -o example -lpthread
+
+Using MINGW on Windows:
+
+  gcc -O3 -lhdf5 -lblosc blosc_filter.c example.c -o example
+
+Using Windows and MSVC (2008 or higher recommended):
+
+  cl /Ox /Feexample.exe example.c ..\blosc\*.c blosc_filter.c
+
+Intel ICC compilers should work too.
+
+To activate support for compressors other than the integrated BloscLZ
+(LZ4, LZ4HC, Snappy or Zlib), see the README file in the main Blosc
+directory.
+
+
+Acknowledgments
+===============
+
+This HDF5 filter interface and its example are based on the LZF
+interface (http://h5py.alfven.org) by Andrew Collette.
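
The updated filter also honors a seventh cd_values slot that selects
the compressor per dataset: slots 0-3 are reserved, slot 4 is the
compression level, slot 5 toggles shuffle and slot 6 picks the
compressor.  A fragment, under the same assumptions as the shipped
example.c:

    unsigned int cd_values[7];
    cd_values[4] = 5;            /* compression level */
    cd_values[5] = 1;            /* shuffle active */
    cd_values[6] = BLOSC_LZ4;    /* compressor code from blosc.h */
    H5Pset_filter(plist, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 7, cd_values);
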
diff --git a/blosc/blosc_filter.c b/c-blosc/hdf5/blosc_filter.c
similarity index 81%
rename from blosc/blosc_filter.c
rename to c-blosc/hdf5/blosc_filter.c
index 363b03c..cf3f9c6 100644
--- a/blosc/blosc_filter.c
+++ b/c-blosc/hdf5/blosc_filter.c
@@ -16,7 +16,6 @@
 #include <string.h>
 #include <errno.h>
 #include "hdf5.h"
-#include "../blosc/blosc.h"
 #include "blosc_filter.h"
 
 #if H5Epush_vers == 2
@@ -169,12 +168,16 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
                     size_t *buf_size, void **buf){
 
     void* outbuf = NULL;
-    int status = 0;              /* Return code from Blosc routines */
+    int status = 0;                /* Return code from Blosc routines */
     size_t typesize;
     size_t outbuf_size;
-    int clevel = 5;              /* Compression level default */
-    int doshuffle = 1;           /* Shuffle default */
-
+    int clevel = 5;                /* Compression level default */
+    int doshuffle = 1;             /* Shuffle default */
+    int compcode;                  /* Blosc compressor */
+    int code;
+    char *compname = NULL;
+    char *complist;
+    char errmsg[256];
 
     /* Filter params that are always set */
     typesize = cd_values[2];      /* The datatype size */
@@ -186,12 +189,26 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
     if (cd_nelmts >= 6) {
         doshuffle = cd_values[5];     /* Shuffle? */
     }
+    if (cd_nelmts >= 7) {
+        compcode = cd_values[6];     /* The Blosc compressor used */
+	/* Check that we actually have support for the compressor code */
+        complist = blosc_list_compressors();
+	code = blosc_compcode_to_compname(compcode, &compname);
+	if (code == -1) {
+	    sprintf(errmsg, "this Blosc library does not have support for "
+                    "the '%s' compressor, but only for: %s",
+		    compname, complist);
+            PUSH_ERR("blosc_filter", H5E_CALLBACK, errmsg);
+            goto failed;
+	}
+    }
 
     /* We're compressing */
     if(!(flags & H5Z_FLAG_REVERSE)){
 
 #ifdef BLOSC_DEBUG
-        fprintf(stderr, "Blosc: Compress %zd chunk w/buffer %zd\n", nbytes, outbuf_size);
+        fprintf(stderr, "Blosc: Compress %zd chunk w/buffer %zd\n",
+		nbytes, outbuf_size);
 #endif
 
         /* Allocate an output buffer exactly as long as the input data; if
@@ -209,6 +226,10 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
             goto failed;
         }
 
+	/* Select the correct compressor to use */
+        if (compname != NULL)
+	  blosc_set_compressor(compname);
+
         status = blosc_compress(clevel, doshuffle, typesize, nbytes,
                                 *buf, outbuf, nbytes);
         if (status < 0) {
@@ -218,13 +239,24 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
 
     /* We're decompressing */
     } else {
-
+        /* declare dummy variables */
+        size_t cbytes, blocksize;
 
 #ifdef BLOSC_DEBUG
         fprintf(stderr, "Blosc: Decompress %zd chunk w/buffer %zd\n", nbytes, outbuf_size);
 #endif
 
         free(outbuf);
+
+        /* Extract the exact outbuf_size from the buffer header.
+         *
+         * NOTE: the guess value obtained from "cd_values" corresponds to
+         * the uncompressed chunk size, but it should not be used in the
+         * general case since other filters in the pipeline can modify
+         * the buffer size.
+         */
+        blosc_cbuffer_sizes(*buf, &outbuf_size, &cbytes, &blocksize);
+
         outbuf = malloc(outbuf_size);
 
         if(outbuf == NULL){
diff --git a/blosc/blosc_filter.h b/c-blosc/hdf5/blosc_filter.h
similarity index 72%
rename from blosc/blosc_filter.h
rename to c-blosc/hdf5/blosc_filter.h
index 8bf560b..4c3b386 100644
--- a/blosc/blosc_filter.h
+++ b/c-blosc/hdf5/blosc_filter.h
@@ -5,8 +5,11 @@
 extern "C" {
 #endif
 
+#include "blosc.h"
+
 /* Filter revision number, starting at 1 */
-#define FILTER_BLOSC_VERSION 1
+/* #define FILTER_BLOSC_VERSION 1 */
+#define FILTER_BLOSC_VERSION 2	/* multiple compressors since Blosc 1.3 */
 
 /* Filter ID registered with the HDF Group */
 #define FILTER_BLOSC 32001
@@ -19,4 +22,3 @@ int register_blosc(char **version, char **date);
 #endif
 
 #endif
-
diff --git a/c-blosc/hdf5/example.c b/c-blosc/hdf5/example.c
new file mode 100644
index 0000000..3b386e3
--- /dev/null
+++ b/c-blosc/hdf5/example.c
@@ -0,0 +1,126 @@
+/*
+    Copyright (C) 2010  Francesc Alted
+    http://blosc.pytables.org
+    License: MIT (see LICENSE.txt)
+
+    Example program demonstrating use of the Blosc filter from C code.
+    This is based on the LZF example (http://h5py.alfven.org) by
+    Andrew Collette.
+
+    To compile this program:
+
+    h5cc [-DH5_USE_16_API] -lblosc blosc_filter.c example.c \
+         -o example -lpthread
+
+    To run:
+
+    $ ./example
+    Blosc version info: 1.3.0 ($Date:: 2014-01-11 #$)
+    Success!
+    $ h5ls -v example.h5
+    Opened "example.h5" with sec2 driver.
+    dset                     Dataset {100/100, 100/100, 100/100}
+        Location:  1:800
+        Links:     1
+        Chunks:    {1, 100, 100} 40000 bytes
+        Storage:   4000000 logical bytes, 126002 allocated bytes, 3174.55% utilization
+        Filter-0:  blosc-32001 OPT {2, 2, 4, 40000, 4, 1, 2}
+        Type:      native float
+
+*/
+
+#include <stdio.h>
+#include "hdf5.h"
+#include "blosc_filter.h"
+
+#define SIZE 100*100*100
+#define SHAPE {100,100,100}
+#define CHUNKSHAPE {1,100,100}
+
+int main(){
+
+    static float data[SIZE];
+    static float data_out[SIZE];
+    const hsize_t shape[] = SHAPE;
+    const hsize_t chunkshape[] = CHUNKSHAPE;
+    char *version, *date;
+    int r, i;
+    unsigned int cd_values[7];
+    int return_code = 1;
+
+    hid_t fid = 0, sid = 0, dset = 0, plist = 0;  /* zero so cleanup is safe */
+
+    for(i=0; i<SIZE; i++){
+        data[i] = i;
+    }
+
+    /* Register the filter with the library */
+    r = register_blosc(&version, &date);
+    printf("Blosc version info: %s (%s)\n", version, date);
+
+    if(r<0) goto failed;
+
+    sid = H5Screate_simple(3, shape, NULL);
+    if(sid<0) goto failed;
+
+    fid = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+    if(fid<0) goto failed;
+
+    plist = H5Pcreate(H5P_DATASET_CREATE);
+    if(plist<0) goto failed;
+
+    /* Chunked layout required for filters */
+    r = H5Pset_chunk(plist, 3, chunkshape);
+    if(r<0) goto failed;
+
+    /* Using the blosc filter in combination with other ones also works */
+    /*
+    r = H5Pset_fletcher32(plist);
+    if(r<0) goto failed;
+    */
+
+    /* This is the easiest way to call Blosc with default values:
+     compression level 5, the BloscLZ compressor and shuffle active. */
+    /* r = H5Pset_filter(plist, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 0, NULL);  */
+
+    /* But you can also tailor Blosc parameters to your needs */
+    /* 0 to 3 (inclusive) param slots are reserved. */
+    cd_values[4] = 4;       /* compression level */
+    cd_values[5] = 1;       /* 0: shuffle not active, 1: shuffle active */
+    cd_values[6] = BLOSC_LZ4HC; /* the actual compressor to use */
+
+    /* Set the filter with 7 params */
+    r = H5Pset_filter(plist, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 7, cd_values);
+
+    if(r<0) goto failed;
+
+#if H5_USE_16_API
+    dset = H5Dcreate(fid, "dset", H5T_NATIVE_FLOAT, sid, plist);
+#else
+    dset = H5Dcreate(fid, "dset", H5T_NATIVE_FLOAT, sid, H5P_DEFAULT, plist, H5P_DEFAULT);
+#endif
+    if(dset<0) goto failed;
+
+    r = H5Dwrite(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT, &data);
+    if(r<0) goto failed;
+
+    r = H5Dread(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT, &data_out);
+    if(r<0) goto failed;
+
+    for(i=0;i<SIZE;i++){
+        if(data[i] != data_out[i]) goto failed;
+    }
+
+    fprintf(stdout, "Success!\n");
+
+    return_code = 0;
+
+    failed:
+
+    if(dset>0)  H5Dclose(dset);
+    if(sid>0)   H5Sclose(sid);
+    if(plist>0) H5Pclose(plist);
+    if(fid>0)   H5Fclose(fid);
+
+    return return_code;
+}
diff --git a/c-blosc/internal-complibs/lz4-r110/add-version.patch b/c-blosc/internal-complibs/lz4-r110/add-version.patch
new file mode 100644
index 0000000..fef243d
--- /dev/null
+++ b/c-blosc/internal-complibs/lz4-r110/add-version.patch
@@ -0,0 +1,14 @@
+diff --git a/internal-complibs/lz4-r110/lz4.h b/internal-complibs/lz4-r110/lz4.h
+index af05dbc..33fcbe4 100644
+--- a/internal-complibs/lz4-r110/lz4.h
++++ b/internal-complibs/lz4-r110/lz4.h
+@@ -37,6 +37,9 @@
+ extern "C" {
+ #endif
+ 
++// The next is for getting the LZ4 version.
++// Please note that this is only defined in the Blosc sources of LZ4.
++#define LZ4_VERSION_STRING "r110"
+ 
+ //**************************************
+ // Compiler Options
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4.c b/c-blosc/internal-complibs/lz4-r110/lz4.c
new file mode 100644
index 0000000..f521b0f
--- /dev/null
+++ b/c-blosc/internal-complibs/lz4-r110/lz4.c
@@ -0,0 +1,865 @@
+/*
+   LZ4 - Fast LZ compression algorithm
+   Copyright (C) 2011-2013, Yann Collet.
+   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions are
+   met:
+
+       * Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+       * Redistributions in binary form must reproduce the above
+   copyright notice, this list of conditions and the following disclaimer
+   in the documentation and/or other materials provided with the
+   distribution.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+   You can contact the author at :
+   - LZ4 source repository : http://code.google.com/p/lz4/
+   - LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
+*/
+
+//**************************************
+// Tuning parameters
+//**************************************
+// MEMORY_USAGE :
+// Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
+// Increasing memory usage improves compression ratio
+// Reduced memory usage can improve speed, due to cache effect
+// Default value is 14, for 16KB, which nicely fits into Intel x86 L1 cache
+#define MEMORY_USAGE 14
+
+// HEAPMODE :
+// Select how default compression functions will allocate memory for their hash table,
+// in memory stack (0:default, fastest), or in memory heap (1:requires memory allocation (malloc)).
+#define HEAPMODE 0
+
+
+//**************************************
+// CPU Feature Detection
+//**************************************
+// 32 or 64 bits ?
+#if (defined(__x86_64__) || defined(_M_X64) || defined(_WIN64) \
+  || defined(__powerpc64__) || defined(__ppc64__) || defined(__PPC64__) \
+  || defined(__64BIT__) || defined(_LP64) || defined(__LP64__) \
+  || defined(__ia64) || defined(__itanium__) || defined(_M_IA64) )   // Detects 64 bits mode
+#  define LZ4_ARCH64 1
+#else
+#  define LZ4_ARCH64 0
+#endif
+
+// Little Endian or Big Endian ?
+// Overwrite the #define below if you know your architecture endianness
+#if defined (__GLIBC__)
+#  include <endian.h>
+#  if (__BYTE_ORDER == __BIG_ENDIAN)
+#     define LZ4_BIG_ENDIAN 1
+#  endif
+#elif (defined(__BIG_ENDIAN__) || defined(__BIG_ENDIAN) || defined(_BIG_ENDIAN)) && !(defined(__LITTLE_ENDIAN__) || defined(__LITTLE_ENDIAN) || defined(_LITTLE_ENDIAN))
+#  define LZ4_BIG_ENDIAN 1
+#elif defined(__sparc) || defined(__sparc__) \
+   || defined(__powerpc__) || defined(__ppc__) || defined(__PPC__) \
+   || defined(__hpux)  || defined(__hppa) \
+   || defined(_MIPSEB) || defined(__s390__)
+#  define LZ4_BIG_ENDIAN 1
+#else
+// Little Endian assumed. PDP Endian and other very rare endian formats are unsupported.
+#endif
+
+// Unaligned memory access is automatically enabled for "common" CPU, such as x86.
+// For other CPUs, such as ARM, the compiler may be more cautious, inserting unnecessary extra code to ensure aligned access
+// If you know your target CPU supports unaligned memory access, you may want to force this option manually to improve performance
+#if defined(__ARM_FEATURE_UNALIGNED)
+#  define LZ4_FORCE_UNALIGNED_ACCESS 1
+#endif
+
+// Define this parameter if your target system or compiler does not support hardware bit count
+#if defined(_MSC_VER) && defined(_WIN32_WCE)            // Visual Studio for Windows CE does not support Hardware bit count
+#  define LZ4_FORCE_SW_BITCOUNT
+#endif
+
+// BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE :
+// This option may provide a small boost to performance for some big endian cpu, although probably modest.
+// You may set this option to 1 if data will remain within closed environment.
+// This option is useless on Little_Endian CPU (such as x86)
+//#define BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE 1
+
+
+//**************************************
+// Compiler Options
+//**************************************
+#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)   // C99
+/* "restrict" is a known keyword */
+#else
+#  define restrict // Disable restrict
+#endif
+
+#ifdef _MSC_VER    // Visual Studio
+#  define FORCE_INLINE static __forceinline
+#  include <intrin.h>                    // For Visual 2005
+#  if LZ4_ARCH64   // 64-bits
+#    pragma intrinsic(_BitScanForward64) // For Visual 2005
+#    pragma intrinsic(_BitScanReverse64) // For Visual 2005
+#  else            // 32-bits
+#    pragma intrinsic(_BitScanForward)   // For Visual 2005
+#    pragma intrinsic(_BitScanReverse)   // For Visual 2005
+#  endif
+#  pragma warning(disable : 4127)        // disable: C4127: conditional expression is constant
+#else
+#  ifdef __GNUC__
+#    define FORCE_INLINE static inline __attribute__((always_inline))
+#  else
+#    define FORCE_INLINE static inline
+#  endif
+#endif
+
+#ifdef _MSC_VER
+#  define lz4_bswap16(x) _byteswap_ushort(x)
+#else
+#  define lz4_bswap16(x) ((unsigned short int) ((((x) >> 8) & 0xffu) | (((x) & 0xffu) << 8)))
+#endif
+
+#define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
+
+#if (GCC_VERSION >= 302) || (__INTEL_COMPILER >= 800) || defined(__clang__)
+#  define expect(expr,value)    (__builtin_expect ((expr),(value)) )
+#else
+#  define expect(expr,value)    (expr)
+#endif
+
+#define likely(expr)     expect((expr) != 0, 1)
+#define unlikely(expr)   expect((expr) != 0, 0)
+
+
+//**************************************
+// Memory routines
+//**************************************
+#include <stdlib.h>   // malloc, calloc, free
+#define ALLOCATOR(n,s) calloc(n,s)
+#define FREEMEM        free
+#include <string.h>   // memset, memcpy
+#define MEM_INIT       memset
+
+
+//**************************************
+// Includes
+//**************************************
+#include "lz4.h"
+
+
+//**************************************
+// Basic Types
+//**************************************
+#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   // C99
+# include <stdint.h>
+  typedef  uint8_t BYTE;
+  typedef uint16_t U16;
+  typedef uint32_t U32;
+  typedef  int32_t S32;
+  typedef uint64_t U64;
+#else
+  typedef unsigned char       BYTE;
+  typedef unsigned short      U16;
+  typedef unsigned int        U32;
+  typedef   signed int        S32;
+  typedef unsigned long long  U64;
+#endif
+
+#if defined(__GNUC__)  && !defined(LZ4_FORCE_UNALIGNED_ACCESS)
+#  define _PACKED __attribute__ ((packed))
+#else
+#  define _PACKED
+#endif
+
+#if !defined(LZ4_FORCE_UNALIGNED_ACCESS) && !defined(__GNUC__)
+#  if defined(__IBMC__) || defined(__SUNPRO_C) || defined(__SUNPRO_CC)
+#    pragma pack(1)
+#  else
+#    pragma pack(push, 1)
+#  endif
+#endif
+
+typedef struct { U16 v; }  _PACKED U16_S;
+typedef struct { U32 v; }  _PACKED U32_S;
+typedef struct { U64 v; }  _PACKED U64_S;
+typedef struct {size_t v;} _PACKED size_t_S;
+
+#if !defined(LZ4_FORCE_UNALIGNED_ACCESS) && !defined(__GNUC__)
+#  if defined(__SUNPRO_C) || defined(__SUNPRO_CC)
+#    pragma pack(0)
+#  else
+#    pragma pack(pop)
+#  endif
+#endif
+
+#define A16(x)   (((U16_S *)(x))->v)
+#define A32(x)   (((U32_S *)(x))->v)
+#define A64(x)   (((U64_S *)(x))->v)
+#define AARCH(x) (((size_t_S *)(x))->v)
+
+
+//**************************************
+// Constants
+//**************************************
+#define LZ4_HASHLOG   (MEMORY_USAGE-2)
+#define HASHTABLESIZE (1 << MEMORY_USAGE)
+#define HASHNBCELLS4  (1 << LZ4_HASHLOG)
+
+#define MINMATCH 4
+
+#define COPYLENGTH 8
+#define LASTLITERALS 5
+#define MFLIMIT (COPYLENGTH+MINMATCH)
+const int LZ4_minLength = (MFLIMIT+1);
+
+#define LZ4_64KLIMIT ((1<<16) + (MFLIMIT-1))
+#define SKIPSTRENGTH 6     // Increasing this value will make the compression run slower on incompressible data
+
+#define MAXD_LOG 16
+#define MAX_DISTANCE ((1 << MAXD_LOG) - 1)
+
+#define ML_BITS  4
+#define ML_MASK  ((1U<<ML_BITS)-1)
+#define RUN_BITS (8-ML_BITS)
+#define RUN_MASK ((1U<<RUN_BITS)-1)
+
+#define KB *(1U<<10)
+#define MB *(1U<<20)
+#define GB *(1U<<30)
+
+
+//**************************************
+// Structures and local types
+//**************************************
+
+typedef struct {
+    U32 hashTable[HASHNBCELLS4];
+    const BYTE* bufferStart;
+    const BYTE* base;
+    const BYTE* nextBlock;
+} LZ4_Data_Structure;
+
+typedef enum { notLimited = 0, limited = 1 } limitedOutput_directive;
+typedef enum { byPtr, byU32, byU16 } tableType_t;
+
+typedef enum { noPrefix = 0, withPrefix = 1 } prefix64k_directive;
+
+typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive;
+typedef enum { full = 0, partial = 1 } earlyEnd_directive;
+
+
+//**************************************
+// Architecture-specific macros
+//**************************************
+#define STEPSIZE                  sizeof(size_t)
+#define LZ4_COPYSTEP(d,s)         { AARCH(d) = AARCH(s); d+=STEPSIZE; s+=STEPSIZE; }
+#define LZ4_COPY8(d,s)            { LZ4_COPYSTEP(d,s); if (STEPSIZE<8) LZ4_COPYSTEP(d,s); }
+#define LZ4_SECURECOPY(d,s,e)     { if ((STEPSIZE==4)||(d<e)) LZ4_WILDCOPY(d,s,e); }
+
+#if LZ4_ARCH64   // 64-bit
+#  define HTYPE                   U32
+#  define INITBASE(base)          const BYTE* const base = ip
+#else            // 32-bit
+#  define HTYPE                   const BYTE*
+#  define INITBASE(base)          const int base = 0
+#endif
+
+#if (defined(LZ4_BIG_ENDIAN) && !defined(BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE))
+#  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { U16 v = A16(p); v = lz4_bswap16(v); d = (s) - v; }
+#  define LZ4_WRITE_LITTLEENDIAN_16(p,i)  { U16 v = (U16)(i); v = lz4_bswap16(v); A16(p) = v; p+=2; }
+#else      // Little Endian
+#  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { d = (s) - A16(p); }
+#  define LZ4_WRITE_LITTLEENDIAN_16(p,v)  { A16(p) = v; p+=2; }
+#endif
+
+
+//**************************************
+// Macros
+//**************************************
+#define LZ4_WILDCOPY(d,s,e)     { do { LZ4_COPY8(d,s) } while (d<e); }           // at the end, d>=e;
+
+
+//****************************
+// Private functions
+//****************************
+#if LZ4_ARCH64
+
+FORCE_INLINE int LZ4_NbCommonBytes (register U64 val)
+{
+# if defined(LZ4_BIG_ENDIAN)
+#   if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r = 0;
+    _BitScanReverse64( &r, val );
+    return (int)(r>>3);
+#   elif defined(__GNUC__) && (GCC_VERSION >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_clzll(val) >> 3);
+#   else
+    int r;
+    if (!(val>>32)) { r=4; } else { r=0; val>>=32; }
+    if (!(val>>16)) { r+=2; val>>=8; } else { val>>=24; }
+    r += (!val);
+    return r;
+#   endif
+# else
+#   if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r = 0;
+    _BitScanForward64( &r, val );
+    return (int)(r>>3);
+#   elif defined(__GNUC__) && (GCC_VERSION >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_ctzll(val) >> 3);
+#   else
+    static const int DeBruijnBytePos[64] = { 0, 0, 0, 0, 0, 1, 1, 2, 0, 3, 1, 3, 1, 4, 2, 7, 0, 2, 3, 6, 1, 5, 3, 5, 1, 3, 4, 4, 2, 5, 6, 7, 7, 0, 1, 2, 3, 3, 4, 6, 2, 6, 5, 5, 3, 4, 5, 6, 7, 1, 2, 4, 6, 4, 4, 5, 7, 2, 6, 5, 7, 6, 7, 7 };
+    return DeBruijnBytePos[((U64)((val & -(long long)val) * 0x0218A392CDABBD3FULL)) >> 58];
+#   endif
+# endif
+}
+
+#else
+
+FORCE_INLINE int LZ4_NbCommonBytes (register U32 val)
+{
+# if defined(LZ4_BIG_ENDIAN)
+#   if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r = 0;
+    _BitScanReverse( &r, val );
+    return (int)(r>>3);
+#   elif defined(__GNUC__) && (GCC_VERSION >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_clz(val) >> 3);
+#   else
+    int r;
+    if (!(val>>16)) { r=2; val>>=8; } else { r=0; val>>=24; }
+    r += (!val);
+    return r;
+#   endif
+# else
+#   if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r;
+    _BitScanForward( &r, val );
+    return (int)(r>>3);
+#   elif defined(__GNUC__) && (GCC_VERSION >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_ctz(val) >> 3);
+#   else
+    static const int DeBruijnBytePos[32] = { 0, 0, 3, 0, 3, 1, 3, 0, 3, 2, 2, 1, 3, 2, 0, 1, 3, 3, 1, 2, 2, 2, 2, 0, 3, 1, 2, 0, 1, 0, 1, 1 };
+    return DeBruijnBytePos[((U32)((val & -(S32)val) * 0x077CB531U)) >> 27];
+#   endif
+# endif
+}
+
+#endif
+
+
+//****************************
+// Compression functions
+//****************************
+FORCE_INLINE int LZ4_hashSequence(U32 sequence, tableType_t tableType)
+{
+    if (tableType == byU16)
+        return (((sequence) * 2654435761U) >> ((MINMATCH*8)-(LZ4_HASHLOG+1)));
+    else
+        return (((sequence) * 2654435761U) >> ((MINMATCH*8)-LZ4_HASHLOG));
+}
+
+FORCE_INLINE int LZ4_hashPosition(const BYTE* p, tableType_t tableType) { return LZ4_hashSequence(A32(p), tableType); }
+
+FORCE_INLINE void LZ4_putPositionOnHash(const BYTE* p, U32 h, void* tableBase, tableType_t tableType, const BYTE* srcBase)
+{
+    switch (tableType)
+    {
+    case byPtr: { const BYTE** hashTable = (const BYTE**) tableBase; hashTable[h] = p; break; }
+    case byU32: { U32* hashTable = (U32*) tableBase; hashTable[h] = (U32)(p-srcBase); break; }
+    case byU16: { U16* hashTable = (U16*) tableBase; hashTable[h] = (U16)(p-srcBase); break; }
+    }
+}
+
+FORCE_INLINE void LZ4_putPosition(const BYTE* p, void* tableBase, tableType_t tableType, const BYTE* srcBase)
+{
+    U32 h = LZ4_hashPosition(p, tableType);
+    LZ4_putPositionOnHash(p, h, tableBase, tableType, srcBase);
+}
+
+FORCE_INLINE const BYTE* LZ4_getPositionOnHash(U32 h, void* tableBase, tableType_t tableType, const BYTE* srcBase)
+{
+    if (tableType == byPtr) { const BYTE** hashTable = (const BYTE**) tableBase; return hashTable[h]; }
+    if (tableType == byU32) { U32* hashTable = (U32*) tableBase; return hashTable[h] + srcBase; }
+    { U16* hashTable = (U16*) tableBase; return hashTable[h] + srcBase; }   // default, to ensure a return
+}
+
+FORCE_INLINE const BYTE* LZ4_getPosition(const BYTE* p, void* tableBase, tableType_t tableType, const BYTE* srcBase)
+{
+    U32 h = LZ4_hashPosition(p, tableType);
+    return LZ4_getPositionOnHash(h, tableBase, tableType, srcBase);
+}
+
+
+FORCE_INLINE int LZ4_compress_generic(
+                 void* ctx,
+                 const char* source,
+                 char* dest,
+                 int inputSize,
+                 int maxOutputSize,
+
+                 limitedOutput_directive limitedOutput,
+                 tableType_t tableType,
+                 prefix64k_directive prefix)
+{
+    const BYTE* ip = (const BYTE*) source;
+    const BYTE* const base = (prefix==withPrefix) ? ((LZ4_Data_Structure*)ctx)->base : (const BYTE*) source;
+    const BYTE* const lowLimit = ((prefix==withPrefix) ? ((LZ4_Data_Structure*)ctx)->bufferStart : (const BYTE*)source);
+    const BYTE* anchor = (const BYTE*) source;
+    const BYTE* const iend = ip + inputSize;
+    const BYTE* const mflimit = iend - MFLIMIT;
+    const BYTE* const matchlimit = iend - LASTLITERALS;
+
+    BYTE* op = (BYTE*) dest;
+    BYTE* const oend = op + maxOutputSize;
+
+    int length;
+    const int skipStrength = SKIPSTRENGTH;
+    U32 forwardH;
+
+    // Init conditions
+    if ((U32)inputSize > (U32)LZ4_MAX_INPUT_SIZE) return 0;                                // Unsupported input size, too large (or negative)
+    if ((prefix==withPrefix) && (ip != ((LZ4_Data_Structure*)ctx)->nextBlock)) return 0;   // must continue from end of previous block
+    if (prefix==withPrefix) ((LZ4_Data_Structure*)ctx)->nextBlock=iend;                    // do it now, due to potential early exit
+    if ((tableType == byU16) && (inputSize>=LZ4_64KLIMIT)) return 0;                       // Size too large (not within 64K limit)
+    if (inputSize<LZ4_minLength) goto _last_literals;                                      // Input too small, no compression (all literals)
+
+    // First Byte
+    LZ4_putPosition(ip, ctx, tableType, base);
+    ip++; forwardH = LZ4_hashPosition(ip, tableType);
+
+    // Main Loop
+    for ( ; ; )
+    {
+        int findMatchAttempts = (1U << skipStrength) + 3;
+        const BYTE* forwardIp = ip;
+        const BYTE* ref;
+        BYTE* token;
+
+        // Find a match
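+        // (skip heuristic: 'step' grows by 1 every 2^skipStrength failed
+        // attempts, so incompressible regions are scanned increasingly fast)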
+        do {
+            U32 h = forwardH;
+            int step = findMatchAttempts++ >> skipStrength;
+            ip = forwardIp;
+            forwardIp = ip + step;
+
+            if unlikely(forwardIp > mflimit) { goto _last_literals; }
+
+            forwardH = LZ4_hashPosition(forwardIp, tableType);
+            ref = LZ4_getPositionOnHash(h, ctx, tableType, base);
+            LZ4_putPositionOnHash(ip, h, ctx, tableType, base);
+
+        } while ((ref + MAX_DISTANCE < ip) || (A32(ref) != A32(ip)));
+
+        // Catch up
+        while ((ip>anchor) && (ref > lowLimit) && unlikely(ip[-1]==ref[-1])) { ip--; ref--; }
+
+        // Encode Literal length
+        length = (int)(ip - anchor);
+        token = op++;
+        if ((limitedOutput) && unlikely(op + length + (2 + 1 + LASTLITERALS) + (length/255) > oend)) return 0;   // Check output limit
+        if (length>=(int)RUN_MASK)
+        {
+            int len = length-RUN_MASK;
+            *token=(RUN_MASK<<ML_BITS);
+            for(; len >= 255 ; len-=255) *op++ = 255;
+            *op++ = (BYTE)len;
+        }
+        else *token = (BYTE)(length<<ML_BITS);
+
+        // Copy Literals
+        { BYTE* end=(op)+(length); LZ4_WILDCOPY(op,anchor,end); op=end; }
+
+_next_match:
+        // Encode Offset
+        LZ4_WRITE_LITTLEENDIAN_16(op,(U16)(ip-ref));
+
+        // Start Counting
+        ip+=MINMATCH; ref+=MINMATCH;    // MinMatch already verified
+        anchor = ip;
+        while likely(ip<matchlimit-(STEPSIZE-1))
+        {
+            size_t diff = AARCH(ref) ^ AARCH(ip);
+            if (!diff) { ip+=STEPSIZE; ref+=STEPSIZE; continue; }
+            ip += LZ4_NbCommonBytes(diff);
+            goto _endCount;
+        }
+        if (LZ4_ARCH64) if ((ip<(matchlimit-3)) && (A32(ref) == A32(ip))) { ip+=4; ref+=4; }
+        if ((ip<(matchlimit-1)) && (A16(ref) == A16(ip))) { ip+=2; ref+=2; }
+        if ((ip<matchlimit) && (*ref == *ip)) ip++;
+_endCount:
+
+        // Encode MatchLength
+        length = (int)(ip - anchor);
+        if ((limitedOutput) && unlikely(op + (1 + LASTLITERALS) + (length>>8) > oend)) return 0;    // Check output limit
+        if (length>=(int)ML_MASK)
+        {
+            *token += ML_MASK;
+            length -= ML_MASK;
+            for (; length > 509 ; length-=510) { *op++ = 255; *op++ = 255; }
+            if (length >= 255) { length-=255; *op++ = 255; }
+            *op++ = (BYTE)length;
+        }
+        else *token += (BYTE)(length);
+
+        // Test end of chunk
+        if (ip > mflimit) { anchor = ip;  break; }
+
+        // Fill table
+        LZ4_putPosition(ip-2, ctx, tableType, base);
+
+        // Test next position
+        ref = LZ4_getPosition(ip, ctx, tableType, base);
+        LZ4_putPosition(ip, ctx, tableType, base);
+        if ((ref + MAX_DISTANCE >= ip) && (A32(ref) == A32(ip))) { token = op++; *token=0; goto _next_match; }
+
+        // Prepare next loop
+        anchor = ip++;
+        forwardH = LZ4_hashPosition(ip, tableType);
+    }
+
+_last_literals:
+    // Encode Last Literals
+    {
+        int lastRun = (int)(iend - anchor);
+        if ((limitedOutput) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0;   // Check output limit
+        if (lastRun>=(int)RUN_MASK) { *op++=(RUN_MASK<<ML_BITS); lastRun-=RUN_MASK; for(; lastRun >= 255 ; lastRun-=255) *op++ = 255; *op++ = (BYTE) lastRun; }
+        else *op++ = (BYTE)(lastRun<<ML_BITS);
+        memcpy(op, anchor, iend - anchor);
+        op += iend-anchor;
+    }
+
+    // End
+    return (int) (((char*)op)-dest);
+}
+
+
+int LZ4_compress(const char* source, char* dest, int inputSize)
+{
+#if (HEAPMODE)
+    void* ctx = ALLOCATOR(HASHNBCELLS4, 4);   // Aligned on 4-byte boundaries
+#else
+    U32 ctx[1U<<(MEMORY_USAGE-2)] = {0};      // Ensure data is aligned on 4-byte boundaries
+#endif
+    int result;
+
+    if (inputSize < (int)LZ4_64KLIMIT)
+        result = LZ4_compress_generic((void*)ctx, source, dest, inputSize, 0, notLimited, byU16, noPrefix);
+    else
+        result = LZ4_compress_generic((void*)ctx, source, dest, inputSize, 0, notLimited, (sizeof(void*)==8) ? byU32 : byPtr, noPrefix);
+
+#if (HEAPMODE)
+    FREEMEM(ctx);
+#endif
+    return result;
+}
+
+int LZ4_compress_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+#if (HEAPMODE)
+    void* ctx = ALLOCATOR(HASHNBCELLS4, 4);   // Aligned on 4-byte boundaries
+#else
+    U32 ctx[1U<<(MEMORY_USAGE-2)] = {0};      // Ensure data is aligned on 4-byte boundaries
+#endif
+    int result;
+
+    if (inputSize < (int)LZ4_64KLIMIT)
+        result = LZ4_compress_generic((void*)ctx, source, dest, inputSize, maxOutputSize, limited, byU16, noPrefix);
+    else
+        result = LZ4_compress_generic((void*)ctx, source, dest, inputSize, maxOutputSize, limited, (sizeof(void*)==8) ? byU32 : byPtr, noPrefix);
+
+#if (HEAPMODE)
+    FREEMEM(ctx);
+#endif
+    return result;
+}
+
+
+//*****************************
+// Using an external allocation
+//*****************************
+
+int LZ4_sizeofState() { return 1 << MEMORY_USAGE; }
+
+
+int LZ4_compress_withState (void* state, const char* source, char* dest, int inputSize)
+{
+    if (((size_t)(state)&3) != 0) return 0;   // Error : state is not aligned on a 4-byte boundary
+    MEM_INIT(state, 0, LZ4_sizeofState());
+
+    if (inputSize < (int)LZ4_64KLIMIT)
+        return LZ4_compress_generic(state, source, dest, inputSize, 0, notLimited, byU16, noPrefix);
+    else
+        return LZ4_compress_generic(state, source, dest, inputSize, 0, notLimited, (sizeof(void*)==8) ? byU32 : byPtr, noPrefix);
+}
+
+
+int LZ4_compress_limitedOutput_withState (void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    if (((size_t)(state)&3) != 0) return 0;   // Error : state is not aligned on a 4-byte boundary
+    MEM_INIT(state, 0, LZ4_sizeofState());
+
+    if (inputSize < (int)LZ4_64KLIMIT)
+        return LZ4_compress_generic(state, source, dest, inputSize, maxOutputSize, limited, byU16, noPrefix);
+    else
+        return LZ4_compress_generic(state, source, dest, inputSize, maxOutputSize, limited, (sizeof(void*)==8) ? byU32 : byPtr, noPrefix);
+}
+
+
+//****************************
+// Stream functions
+//****************************
+
+int LZ4_sizeofStreamState()
+{
+    return sizeof(LZ4_Data_Structure);
+}
+
+FORCE_INLINE void LZ4_init(LZ4_Data_Structure* lz4ds, const BYTE* base)
+{
+    MEM_INIT(lz4ds->hashTable, 0, sizeof(lz4ds->hashTable));
+    lz4ds->bufferStart = base;
+    lz4ds->base = base;
+    lz4ds->nextBlock = base;
+}
+
+int LZ4_resetStreamState(void* state, const char* inputBuffer)
+{
+    if ((((size_t)state) & 3) != 0) return 1;   // Error : pointer is not aligned on a 4-byte boundary
+    LZ4_init((LZ4_Data_Structure*)state, (const BYTE*)inputBuffer);
+    return 0;
+}
+
+void* LZ4_create (const char* inputBuffer)
+{
+    void* lz4ds = ALLOCATOR(1, sizeof(LZ4_Data_Structure));
+    LZ4_init ((LZ4_Data_Structure*)lz4ds, (const BYTE*)inputBuffer);
+    return lz4ds;
+}
+
+
+int LZ4_free (void* LZ4_Data)
+{
+    FREEMEM(LZ4_Data);
+    return (0);
+}
+
+
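+// Moves the most recent 64 KB of input back to the start of the user's buffer and
+// rebases the tracking pointers (and, when nearing the 32-bit limit, the hash table),
+// so that the next block can keep using that history as a compression dictionary.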
+char* LZ4_slideInputBuffer (void* LZ4_Data)
+{
+    LZ4_Data_Structure* lz4ds = (LZ4_Data_Structure*)LZ4_Data;
+    size_t delta = lz4ds->nextBlock - (lz4ds->bufferStart + 64 KB);
+
+    if ( (lz4ds->base - delta > lz4ds->base)                          // underflow control
+       || ((size_t)(lz4ds->nextBlock - lz4ds->base) > 0xE0000000) )   // close to 32-bits limit
+    {
+        size_t deltaLimit = (lz4ds->nextBlock - 64 KB) - lz4ds->base;
+        int nH;
+
+        for (nH=0; nH < HASHNBCELLS4; nH++)
+        {
+            if ((size_t)(lz4ds->hashTable[nH]) < deltaLimit) lz4ds->hashTable[nH] = 0;
+            else lz4ds->hashTable[nH] -= (U32)deltaLimit;
+        }
+        memcpy((void*)(lz4ds->bufferStart), (const void*)(lz4ds->nextBlock - 64 KB), 64 KB);
+        lz4ds->base = lz4ds->bufferStart;
+        lz4ds->nextBlock = lz4ds->base + 64 KB;
+    }
+    else
+    {
+        memcpy((void*)(lz4ds->bufferStart), (const void*)(lz4ds->nextBlock - 64 KB), 64 KB);
+        lz4ds->nextBlock -= delta;
+        lz4ds->base -= delta;
+    }
+
+    return (char*)(lz4ds->nextBlock);
+}
+
+
+int LZ4_compress_continue (void* LZ4_Data, const char* source, char* dest, int inputSize)
+{
+    return LZ4_compress_generic(LZ4_Data, source, dest, inputSize, 0, notLimited, byU32, withPrefix);
+}
+
+
+int LZ4_compress_limitedOutput_continue (void* LZ4_Data, const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    return LZ4_compress_generic(LZ4_Data, source, dest, inputSize, maxOutputSize, limited, byU32, withPrefix);
+}
+
+
+//****************************
+// Decompression functions
+//****************************
+
+// This generic decompression function covers all use cases.
+// It shall be instantiated several times, using different sets of directives.
+// Note that it is essential this generic function really gets inlined,
+// so that useless branches can be removed during compilation optimisation.
+FORCE_INLINE int LZ4_decompress_generic(
+                 const char* source,
+                 char* dest,
+                 int inputSize,          // Exact size of the compressed input (unused when endOnInput==endOnOutputSize; callers pass 0)
+                 int outputSize,         // If endOnInput==endOnInputSize, this value is the max size of Output Buffer.
+
+                 int endOnInput,         // endOnOutputSize, endOnInputSize
+                 int prefix64k,          // noPrefix, withPrefix
+                 int partialDecoding,    // full, partial
+                 int targetOutputSize    // only used if partialDecoding==partial
+                 )
+{
+    // Local Variables
+    const BYTE* restrict ip = (const BYTE*) source;
+    const BYTE* ref;
+    const BYTE* const iend = ip + inputSize;
+
+    BYTE* op = (BYTE*) dest;
+    BYTE* const oend = op + outputSize;
+    BYTE* cpy;
+    BYTE* oexit = op + targetOutputSize;
+
+    const size_t dec32table[] = {0, 3, 2, 3, 0, 0, 0, 0};   // static reduces speed for LZ4_decompress_safe() on GCC64
+    static const size_t dec64table[] = {0, 0, 0, (size_t)-1, 0, 1, 2, 3};
+
+
+    // Special cases
+    if ((partialDecoding) && (oexit> oend-MFLIMIT)) oexit = oend-MFLIMIT;                        // targetOutputSize too high => decode everything
+    if ((endOnInput) && unlikely(outputSize==0)) return ((inputSize==1) && (*ip==0)) ? 0 : -1;   // Empty output buffer
+    if ((!endOnInput) && unlikely(outputSize==0)) return (*ip==0?1:-1);
+
+
+    // Main Loop
+    while (1)
+    {
+        unsigned token;
+        size_t length;
+
+        // get runlength
+        token = *ip++;
+        if ((length=(token>>ML_BITS)) == RUN_MASK)
+        {
+            unsigned s=255;
+            while (((endOnInput)?ip<iend:1) && (s==255))
+            {
+                s = *ip++;
+                length += s;
+            }
+        }
+
+        // copy literals
+        cpy = op+length;
+        if (((endOnInput) && ((cpy>(partialDecoding?oexit:oend-MFLIMIT)) || (ip+length>iend-(2+1+LASTLITERALS))) )
+            || ((!endOnInput) && (cpy>oend-COPYLENGTH)))
+        {
+            if (partialDecoding)
+            {
+                if (cpy > oend) goto _output_error;                           // Error : write attempt beyond end of output buffer
+                if ((endOnInput) && (ip+length > iend)) goto _output_error;   // Error : read attempt beyond end of input buffer
+            }
+            else
+            {
+                if ((!endOnInput) && (cpy != oend)) goto _output_error;       // Error : block decoding must stop exactly there
+                if ((endOnInput) && ((ip+length != iend) || (cpy > oend))) goto _output_error;   // Error : input must be consumed
+            }
+            memcpy(op, ip, length);
+            ip += length;
+            op += length;
+            break;                                       // Necessarily EOF, due to parsing restrictions
+        }
+        LZ4_WILDCOPY(op, ip, cpy); ip -= (op-cpy); op = cpy;
+
+        // get offset
+        LZ4_READ_LITTLEENDIAN_16(ref,cpy,ip); ip+=2;
+        if ((prefix64k==noPrefix) && unlikely(ref < (BYTE* const)dest)) goto _output_error;   // Error : offset outside destination buffer
+
+        // get matchlength
+        if ((length=(token&ML_MASK)) == ML_MASK)
+        {
+            while ((!endOnInput) || (ip<iend-(LASTLITERALS+1)))   // Ensure enough bytes remain for LASTLITERALS + token
+            {
+                unsigned s = *ip++;
+                length += s;
+                if (s==255) continue;
+                break;
+            }
+        }
+
+        // copy repeated sequence
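+        // (matches with offset < STEPSIZE overlap their own output; dec32table /
+        // dec64table nudge 'ref' so the wide copies still replicate the pattern)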
+        if unlikely((op-ref)<(int)STEPSIZE)
+        {
+            const size_t dec64 = dec64table[(sizeof(void*)==4) ? 0 : op-ref];
+            op[0] = ref[0];
+            op[1] = ref[1];
+            op[2] = ref[2];
+            op[3] = ref[3];
+            op += 4, ref += 4; ref -= dec32table[op-ref];
+            A32(op) = A32(ref);
+            op += STEPSIZE-4; ref -= dec64;
+        } else { LZ4_COPYSTEP(op,ref); }
+        cpy = op + length - (STEPSIZE-4);
+
+        if unlikely(cpy>oend-COPYLENGTH-(STEPSIZE-4))
+        {
+            if (cpy > oend-LASTLITERALS) goto _output_error;    // Error : last 5 bytes must be literals
+            LZ4_SECURECOPY(op, ref, (oend-COPYLENGTH));
+            while(op<cpy) *op++=*ref++;
+            op=cpy;
+            continue;
+        }
+        LZ4_WILDCOPY(op, ref, cpy);
+        op=cpy;   // correction
+    }
+
+    // end of decoding
+    if (endOnInput)
+       return (int) (((char*)op)-dest);     // Nb of output bytes decoded
+    else
+       return (int) (((char*)ip)-source);   // Nb of input bytes read
+
+    // Overflow error detected
+_output_error:
+    return (int) (-(((char*)ip)-source))-1;
+}
+
+
+int LZ4_decompress_safe(const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    return LZ4_decompress_generic(source, dest, inputSize, maxOutputSize, endOnInputSize, noPrefix, full, 0);
+}
+
+int LZ4_decompress_safe_withPrefix64k(const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    return LZ4_decompress_generic(source, dest, inputSize, maxOutputSize, endOnInputSize, withPrefix, full, 0);
+}
+
+int LZ4_decompress_safe_partial(const char* source, char* dest, int inputSize, int targetOutputSize, int maxOutputSize)
+{
+    return LZ4_decompress_generic(source, dest, inputSize, maxOutputSize, endOnInputSize, noPrefix, partial, targetOutputSize);
+}
+
+int LZ4_decompress_fast_withPrefix64k(const char* source, char* dest, int outputSize)
+{
+    return LZ4_decompress_generic(source, dest, 0, outputSize, endOnOutputSize, withPrefix, full, 0);
+}
+
+int LZ4_decompress_fast(const char* source, char* dest, int outputSize)
+{
+#ifdef _MSC_VER   // This version is faster with Visual
+    return LZ4_decompress_generic(source, dest, 0, outputSize, endOnOutputSize, noPrefix, full, 0);
+#else
+    return LZ4_decompress_generic(source, dest, 0, outputSize, endOnOutputSize, withPrefix, full, 0);
+#endif
+}
+
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4.h b/c-blosc/internal-complibs/lz4-r110/lz4.h
new file mode 100644
index 0000000..33fcbe4
--- /dev/null
+++ b/c-blosc/internal-complibs/lz4-r110/lz4.h
@@ -0,0 +1,252 @@
+/*
+   LZ4 - Fast LZ compression algorithm
+   Header File
+   Copyright (C) 2011-2013, Yann Collet.
+   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions are
+   met:
+
+       * Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+       * Redistributions in binary form must reproduce the above
+   copyright notice, this list of conditions and the following disclaimer
+   in the documentation and/or other materials provided with the
+   distribution.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+   You can contact the author at :
+   - LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
+   - LZ4 source repository : http://code.google.com/p/lz4/
+*/
+#pragma once
+
+#if defined (__cplusplus)
+extern "C" {
+#endif
+
+// The following define is used to obtain the LZ4 version.
+// Please note that this is only defined in the Blosc sources of LZ4.
+#define LZ4_VERSION_STRING "r110"
+
+//**************************************
+// Compiler Options
+//**************************************
+#if defined(_MSC_VER) && !defined(__cplusplus)   // Visual Studio
+#  define inline __inline           // Visual C is not C99, but supports some kind of inline
+#endif
+
+
+//****************************
+// Simple Functions
+//****************************
+
+int LZ4_compress        (const char* source, char* dest, int inputSize);
+int LZ4_decompress_safe (const char* source, char* dest, int inputSize, int maxOutputSize);
+
+/*
+LZ4_compress() :
+    Compresses 'inputSize' bytes from 'source' into 'dest'.
+    Destination buffer must be already allocated,
+    and must be sized to handle worst-case situations (input data not compressible).
+    Worst-case size evaluation is provided by the function LZ4_compressBound().
+    inputSize : Max supported value is LZ4_MAX_INPUT_SIZE
+    return : the number of bytes written in buffer dest
+             or 0 if the compression fails
+
+LZ4_decompress_safe() :
+    maxOutputSize : is the size of the destination buffer (which must be already allocated)
+    return : the number of bytes decoded in the destination buffer (necessarily <= maxOutputSize)
+             If the source stream is detected malformed, the function will stop decoding and return a negative result.
+             This function is protected against buffer overflow exploits: it never writes outside the output buffer, and never reads outside the input buffer. It is therefore protected against malicious data packets.
+*/
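+/*
+A minimal round-trip sketch (illustrative only; SRC_SIZE and the buffers are
+hypothetical, not part of this API):
+
+    char compressed[LZ4_COMPRESSBOUND(SRC_SIZE)];
+    char restored[SRC_SIZE];
+    int csize = LZ4_compress(src, compressed, SRC_SIZE);                    // 0 => failure
+    int dsize = LZ4_decompress_safe(compressed, restored, csize, SRC_SIZE); // <0 => malformed
+*/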
+
+
+//****************************
+// Advanced Functions
+//****************************
+#define LZ4_MAX_INPUT_SIZE        0x7E000000   // 2 113 929 216 bytes
+#define LZ4_COMPRESSBOUND(isize)  ((unsigned int)(isize) > (unsigned int)LZ4_MAX_INPUT_SIZE ? 0 : (isize) + ((isize)/255) + 16)
+static inline int LZ4_compressBound(int isize)  { return LZ4_COMPRESSBOUND(isize); }
+
+/*
+LZ4_compressBound() :
+    Provides the maximum size that LZ4 may output in a "worst case" scenario (input data not compressible)
+    primarily useful for memory allocation of the output buffer.
+    The inline function is recommended for the general case;
+    the macro is provided for when the result must be evaluated at compile time (such as stack memory allocation).
+
+    isize  : is the input size. Max supported value is LZ4_MAX_INPUT_SIZE
+    return : maximum output size in a "worst case" scenario
+             or 0, if input size is too large ( > LZ4_MAX_INPUT_SIZE)
+*/
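+/*
+For example (a sketch; 'srcSize' is hypothetical): 'malloc(LZ4_compressBound(srcSize))'
+uses the inline function at run time, while LZ4_COMPRESSBOUND() suits stack arrays,
+whose size must be a compile-time constant.
+*/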
+
+
+int LZ4_compress_limitedOutput (const char* source, char* dest, int inputSize, int maxOutputSize);
+
+/*
+LZ4_compress_limitedOutput() :
+    Compress 'inputSize' bytes from 'source' into an output buffer 'dest' of maximum size 'maxOutputSize'.
+    If it cannot, compression stops, and the result of the function is zero.
+    This function never writes outside of the provided output buffer.
+
+    inputSize  : Max supported value is LZ4_MAX_INPUT_SIZE
+    maxOutputSize : is the size of the destination buffer (which must be already allocated)
+    return : the number of bytes written in buffer 'dest'
+             or 0 if the compression fails
+*/
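+/*
+Sketch (hypothetical names): write into a fixed 4 KB page, falling back to storing
+the input uncompressed when it does not fit:
+
+    int csize = LZ4_compress_limitedOutput(src, page, srcSize, 4096);
+    if (csize == 0) store_uncompressed(src, srcSize);   // hypothetical fallback
+*/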
+
+
+int LZ4_decompress_fast (const char* source, char* dest, int outputSize);
+
+/*
+LZ4_decompress_fast() :
+    outputSize : is the original (uncompressed) size
+    return : the number of bytes read from the source buffer (in other words, the compressed size)
+             If the source stream is malformed, the function will stop decoding and return a negative result.
+    note : This function is a bit faster than LZ4_decompress_safe()
+           This function never writes outside of the output buffer, but may read beyond the input buffer in case of a malicious data packet.
+           Use this function preferably in a trusted environment (data to decode comes from a trusted source).
+           Destination buffer must be already allocated. Its size must be a minimum of 'outputSize' bytes.
+*/
+
+int LZ4_decompress_safe_partial (const char* source, char* dest, int inputSize, int targetOutputSize, int maxOutputSize);
+
+/*
+LZ4_decompress_safe_partial() :
+    This function decompresses a compressed block of size 'inputSize' at position 'source'
+    into output buffer 'dest' of size 'maxOutputSize'.
+    The function tries to stop the decompression operation as soon as 'targetOutputSize' has been reached,
+    reducing decompression time.
+    return : the number of bytes decoded in the destination buffer (necessarily <= maxOutputSize)
+       Note : this number can be < 'targetOutputSize' if the compressed block decodes to fewer bytes.
+             Always check how many bytes were actually decoded.
+             If the source stream is detected malformed, the function will stop decoding and return a negative result.
+             This function never writes outside of the output buffer, and never reads outside of the input buffer. It is therefore protected against malicious data packets.
+*/
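+/*
+Sketch (hypothetical names): recover only the first 'want' bytes of a block:
+
+    int got = LZ4_decompress_safe_partial(src, dst, csize, want, dstCapacity);
+    // 'got' may be < 'want' if the block decodes to fewer bytes; < 0 means malformed input
+*/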
+
+
+//*****************************
+// Using an external allocation
+//*****************************
+int LZ4_sizeofState();
+int LZ4_compress_withState               (void* state, const char* source, char* dest, int inputSize);
+int LZ4_compress_limitedOutput_withState (void* state, const char* source, char* dest, int inputSize, int maxOutputSize);
+
+/*
+These functions are provided should you prefer to allocate memory for compression tables with your own allocation methods.
+To know how much memory must be allocated for the compression tables, use :
+int LZ4_sizeofState();
+
+Note that the tables must be aligned on a 4-byte boundary, otherwise compression will fail (return code 0).
+
+The allocated memory can be provided to the compression functions using the 'void* state' parameter.
+LZ4_compress_withState() and LZ4_compress_limitedOutput_withState() are equivalent to previously described functions.
+They just use the externally allocated memory area instead of allocating their own (on stack, or on heap).
+*/
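+/*
+Sketch: one heap allocation reused across many calls (malloc already returns
+memory aligned for any fundamental type, hence at least 4 bytes):
+
+    void* state = malloc(LZ4_sizeofState());
+    int csize = LZ4_compress_withState(state, src, dst, srcSize);
+    ... further calls reusing 'state' ...
+    free(state);
+*/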
+
+
+//****************************
+// Streaming Functions
+//****************************
+
+void* LZ4_create (const char* inputBuffer);
+int   LZ4_compress_continue (void* LZ4_Data, const char* source, char* dest, int inputSize);
+int   LZ4_compress_limitedOutput_continue (void* LZ4_Data, const char* source, char* dest, int inputSize, int maxOutputSize);
+char* LZ4_slideInputBuffer (void* LZ4_Data);
+int   LZ4_free (void* LZ4_Data);
+
+/*
+These functions allow the compression of dependent blocks, where each block benefits from prior 64 KB within preceding blocks.
+To achieve this, start by creating the LZ4 Data Structure with the function :
+
+void* LZ4_create (const char* inputBuffer);
+The function returns a (void*) pointer to the LZ4 Data Structure.
+This pointer will be needed in all other functions.
+If the pointer returned is NULL, then the allocation has failed, and compression must be aborted.
+The only parameter 'const char* inputBuffer' must, obviously, point at the beginning of the input buffer.
+The input buffer must be already allocated, and sized to at least 192KB.
+'inputBuffer' will also be the 'const char* source' of the first block.
+
+All blocks are expected to lie next to each other within the input buffer, starting from 'inputBuffer'.
+To compress each block, use either LZ4_compress_continue() or LZ4_compress_limitedOutput_continue().
+Their behavior is identical to LZ4_compress() or LZ4_compress_limitedOutput(),
+but they require the LZ4 Data Structure as their first argument, and check that each block starts right after the previous one.
+If the next block does not begin immediately after the previous one, the compression will fail (return 0).
+
+When it's no longer possible to lay the next block after the previous one (not enough space left in the input buffer), a call to :
+char* LZ4_slideInputBuffer(void* LZ4_Data);
+must be performed. It will typically copy the latest 64KB of input at the beginning of input buffer.
+Note that, for this function to work properly, the minimum size of the input buffer is 192KB.
+==> The memory position where the next input data block must start is provided as the result of the function.
+
+Compression can then resume, using LZ4_compress_continue() or LZ4_compress_limitedOutput_continue(), as usual.
+
+When compression is completed, a call to LZ4_free() will release the memory used by the LZ4 Data Structure.
+*/
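+/*
+Sketch of the intended loop (names are hypothetical; 'ring' is a buffer of at
+least 192KB, RING_SIZE its size, BLOCK_SIZE the per-block input size):
+
+    void* lz4ds = LZ4_create(ring);
+    char* in = ring;
+    while (more_input()) {                                   // hypothetical producer
+        int n = read_block(in, BLOCK_SIZE);                  // hypothetical reader
+        int c = LZ4_compress_continue(lz4ds, in, out, n);
+        in += n;
+        if (in + BLOCK_SIZE > ring + RING_SIZE)
+            in = LZ4_slideInputBuffer(lz4ds);
+    }
+    LZ4_free(lz4ds);
+*/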
+
+int LZ4_sizeofStreamState();
+int LZ4_resetStreamState(void* state, const char* inputBuffer);
+
+/*
+These functions achieve the same result as :
+void* LZ4_create (const char* inputBuffer);
+
+They are provided here to allow the user program to allocate memory using its own routines.
+
+To know how much space must be allocated, use LZ4_sizeofStreamState();
+Note also that the space must be aligned on a 4-byte boundary.
+
+Once space is allocated, you must initialize it using : LZ4_resetStreamState(void* state, const char* inputBuffer);
+void* state is a pointer to the space allocated.
+It must be aligned on a 4-byte boundary, and be large enough.
+The parameter 'const char* inputBuffer' must, obviously, point at the beginning of the input buffer.
+The input buffer must be already allocated, and sized to at least 192KB.
+'inputBuffer' will also be the 'const char* source' of the first block.
+
+The same space can be re-used multiple times, just by initializing it each time with LZ4_resetStreamState().
+The return value of LZ4_resetStreamState() is 0 when initialization succeeds.
+Any other value means there was an error (typically, the pointer is not aligned on a 4-byte boundary).
+*/
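+/*
+Sketch: caller-owned stream state (names hypothetical):
+
+    void* st = malloc(LZ4_sizeofStreamState());
+    if (LZ4_resetStreamState(st, ring) != 0) handle_error();   // non-zero => misaligned pointer
+*/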
+
+
+int LZ4_decompress_safe_withPrefix64k (const char* source, char* dest, int inputSize, int maxOutputSize);
+int LZ4_decompress_fast_withPrefix64k (const char* source, char* dest, int outputSize);
+
+/*
+*_withPrefix64k() :
+    These decoding functions work the same as their "normal name" versions,
+    but can use up to 64KB of data in front of 'char* dest'.
+    These functions are necessary to decode inter-dependent blocks.
+*/
+
+
+//****************************
+// Obsolete Functions
+//****************************
+
+static inline int LZ4_uncompress (const char* source, char* dest, int outputSize) { return LZ4_decompress_fast(source, dest, outputSize); }
+static inline int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize) { return LZ4_decompress_safe(source, dest, isize, maxOutputSize); }
+
+/*
+These functions are deprecated and should no longer be used.
+They are provided here for compatibility with existing user programs.
+*/
+
+
+
+#if defined (__cplusplus)
+}
+#endif
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4hc.c b/c-blosc/internal-complibs/lz4-r110/lz4hc.c
new file mode 100644
index 0000000..f28283f
--- /dev/null
+++ b/c-blosc/internal-complibs/lz4-r110/lz4hc.c
@@ -0,0 +1,856 @@
+/*
+   LZ4 HC - High Compression Mode of LZ4
+   Copyright (C) 2011-2013, Yann Collet.
+   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions are
+   met:
+
+       * Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+       * Redistributions in binary form must reproduce the above
+   copyright notice, this list of conditions and the following disclaimer
+   in the documentation and/or other materials provided with the
+   distribution.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+   You can contact the author at :
+   - LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
+   - LZ4 source repository : http://code.google.com/p/lz4/
+*/
+
+//**************************************
+// Memory routines
+//**************************************
+#include <stdlib.h>   // calloc, free
+#define ALLOCATOR(s)  calloc(1,s)
+#define FREEMEM       free
+#include <string.h>   // memset, memcpy
+#define MEM_INIT      memset
+
+
+//**************************************
+// CPU Feature Detection
+//**************************************
+// 32 or 64 bits ?
+#if (defined(__x86_64__) || defined(_M_X64) || defined(_WIN64) \
+  || defined(__powerpc64__) || defined(__ppc64__) || defined(__PPC64__) \
+  || defined(__64BIT__) || defined(_LP64) || defined(__LP64__) \
+  || defined(__ia64) || defined(__itanium__) || defined(_M_IA64) )   // Detects 64 bits mode
+#  define LZ4_ARCH64 1
+#else
+#  define LZ4_ARCH64 0
+#endif
+
+// Little Endian or Big Endian ?
+// Override the #define below if you know your architecture's endianness
+#if defined (__GLIBC__)
+#  include <endian.h>
+#  if (__BYTE_ORDER == __BIG_ENDIAN)
+#     define LZ4_BIG_ENDIAN 1
+#  endif
+#elif (defined(__BIG_ENDIAN__) || defined(__BIG_ENDIAN) || defined(_BIG_ENDIAN)) && !(defined(__LITTLE_ENDIAN__) || defined(__LITTLE_ENDIAN) || defined(_LITTLE_ENDIAN))
+#  define LZ4_BIG_ENDIAN 1
+#elif defined(__sparc) || defined(__sparc__) \
+   || defined(__powerpc__) || defined(__ppc__) || defined(__PPC__) \
+   || defined(__hpux)  || defined(__hppa) \
+   || defined(_MIPSEB) || defined(__s390__)
+#  define LZ4_BIG_ENDIAN 1
+#else
+// Little Endian assumed. PDP Endian and other very rare endian formats are unsupported.
+#endif
+
+// Unaligned memory access is automatically enabled for "common" CPUs, such as x86.
+// For other CPUs, the compiler will be more cautious, and insert extra code to ensure aligned access is respected.
+// If you know your target CPU supports unaligned memory access, you may want to force this option manually to improve performance.
+#if defined(__ARM_FEATURE_UNALIGNED)
+#  define LZ4_FORCE_UNALIGNED_ACCESS 1
+#endif
+
+// Define this parameter if your target system or compiler does not support hardware bit count
+#if defined(_MSC_VER) && defined(_WIN32_WCE)            // Visual Studio for Windows CE does not support Hardware bit count
+#  define LZ4_FORCE_SW_BITCOUNT
+#endif
+
+
+//**************************************
+// Compiler Options
+//**************************************
+#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   // C99
+  /* "restrict" is a known keyword */
+#else
+#  define restrict  // Disable restrict
+#endif
+
+#ifdef _MSC_VER    // Visual Studio
+#  define FORCE_INLINE static __forceinline
+#  include <intrin.h>                    // For Visual 2005
+#  if LZ4_ARCH64   // 64-bits
+#    pragma intrinsic(_BitScanForward64) // For Visual 2005
+#    pragma intrinsic(_BitScanReverse64) // For Visual 2005
+#  else            // 32-bits
+#    pragma intrinsic(_BitScanForward)   // For Visual 2005
+#    pragma intrinsic(_BitScanReverse)   // For Visual 2005
+#  endif
+#  pragma warning(disable : 4127)        // disable: C4127: conditional expression is constant
+#  pragma warning(disable : 4701)        // disable: C4701: potentially uninitialized local variable used
+#else
+#  ifdef __GNUC__
+#    define FORCE_INLINE static inline __attribute__((always_inline))
+#  else
+#    define FORCE_INLINE static inline
+#  endif
+#endif
+
+#ifdef _MSC_VER  // Visual Studio
+#  define lz4_bswap16(x) _byteswap_ushort(x)
+#else
+#  define lz4_bswap16(x)  ((unsigned short int) ((((x) >> 8) & 0xffu) | (((x) & 0xffu) << 8)))
+#endif
+
+
+//**************************************
+// Includes
+//**************************************
+#include "lz4hc.h"
+#include "lz4.h"
+
+
+//**************************************
+// Basic Types
+//**************************************
+#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   // C99
+# include <stdint.h>
+  typedef uint8_t  BYTE;
+  typedef uint16_t U16;
+  typedef uint32_t U32;
+  typedef  int32_t S32;
+  typedef uint64_t U64;
+#else
+  typedef unsigned char       BYTE;
+  typedef unsigned short      U16;
+  typedef unsigned int        U32;
+  typedef   signed int        S32;
+  typedef unsigned long long  U64;
+#endif
+
+#if defined(__GNUC__)  && !defined(LZ4_FORCE_UNALIGNED_ACCESS)
+#  define _PACKED __attribute__ ((packed))
+#else
+#  define _PACKED
+#endif
+
+#if !defined(LZ4_FORCE_UNALIGNED_ACCESS) && !defined(__GNUC__)
+#  ifdef __IBMC__
+#    pragma pack(1)
+#  else
+#    pragma pack(push, 1)
+#  endif
+#endif
+
+typedef struct _U16_S { U16 v; } _PACKED U16_S;
+typedef struct _U32_S { U32 v; } _PACKED U32_S;
+typedef struct _U64_S { U64 v; } _PACKED U64_S;
+
+#if !defined(LZ4_FORCE_UNALIGNED_ACCESS) && !defined(__GNUC__)
+#  pragma pack(pop)
+#endif
+
+#define A64(x) (((U64_S *)(x))->v)
+#define A32(x) (((U32_S *)(x))->v)
+#define A16(x) (((U16_S *)(x))->v)
+
+
+//**************************************
+// Constants
+//**************************************
+#define MINMATCH 4
+
+#define DICTIONARY_LOGSIZE 16
+#define MAXD (1<<DICTIONARY_LOGSIZE)
+#define MAXD_MASK ((U32)(MAXD - 1))
+#define MAX_DISTANCE (MAXD - 1)
+
+#define HASH_LOG (DICTIONARY_LOGSIZE-1)
+#define HASHTABLESIZE (1 << HASH_LOG)
+#define HASH_MASK (HASHTABLESIZE - 1)
+
+#define MAX_NB_ATTEMPTS 256
+
+#define ML_BITS  4
+#define ML_MASK  (size_t)((1U<<ML_BITS)-1)
+#define RUN_BITS (8-ML_BITS)
+#define RUN_MASK ((1U<<RUN_BITS)-1)
+
+#define COPYLENGTH 8
+#define LASTLITERALS 5
+#define MFLIMIT (COPYLENGTH+MINMATCH)
+#define MINLENGTH (MFLIMIT+1)
+#define OPTIMAL_ML (int)((ML_MASK-1)+MINMATCH)
+
+#define KB *(1U<<10)
+#define MB *(1U<<20)
+#define GB *(1U<<30)
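+// Note: these expand at the use site, so that e.g. '64 KB' reads as '64 * (1U<<10)'.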
+
+
+//**************************************
+// Architecture-specific macros
+//**************************************
+#if LZ4_ARCH64   // 64-bit
+#  define STEPSIZE 8
+#  define LZ4_COPYSTEP(s,d)     A64(d) = A64(s); d+=8; s+=8;
+#  define LZ4_COPYPACKET(s,d)   LZ4_COPYSTEP(s,d)
+#  define UARCH U64
+#  define AARCH A64
+#  define HTYPE                 U32
+#  define INITBASE(b,s)         const BYTE* const b = s
+#else   // 32-bit
+#  define STEPSIZE 4
+#  define LZ4_COPYSTEP(s,d)     A32(d) = A32(s); d+=4; s+=4;
+#  define LZ4_COPYPACKET(s,d)   LZ4_COPYSTEP(s,d); LZ4_COPYSTEP(s,d);
+#  define UARCH U32
+#  define AARCH A32
+//#  define HTYPE                 const BYTE*
+//#  define INITBASE(b,s)         const int b = 0
+#  define HTYPE                 U32
+#  define INITBASE(b,s)         const BYTE* const b = s
+#endif
+
+#if defined(LZ4_BIG_ENDIAN)
+#  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { U16 v = A16(p); v = lz4_bswap16(v); d = (s) - v; }
+#  define LZ4_WRITE_LITTLEENDIAN_16(p,i)  { U16 v = (U16)(i); v = lz4_bswap16(v); A16(p) = v; p+=2; }
+#else   // Little Endian
+#  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { d = (s) - A16(p); }
+#  define LZ4_WRITE_LITTLEENDIAN_16(p,v)  { A16(p) = v; p+=2; }
+#endif
+
+
+//************************************************************
+// Local Types
+//************************************************************
+typedef struct
+{
+    const BYTE* inputBuffer;
+    const BYTE* base;
+    const BYTE* end;
+    HTYPE hashTable[HASHTABLESIZE];
+    U16 chainTable[MAXD];
+    const BYTE* nextToUpdate;
+} LZ4HC_Data_Structure;
+
+
+//**************************************
+// Macros
+//**************************************
+#define LZ4_WILDCOPY(s,d,e)    do { LZ4_COPYPACKET(s,d) } while (d<e);
+#define LZ4_BLINDCOPY(s,d,l)   { BYTE* e=d+l; LZ4_WILDCOPY(s,d,e); d=e; }
+#define HASH_FUNCTION(i)       (((i) * 2654435761U) >> ((MINMATCH*8)-HASH_LOG))
+#define HASH_VALUE(p)          HASH_FUNCTION(A32(p))
+#define HASH_POINTER(p)        (HashTable[HASH_VALUE(p)] + base)
+#define DELTANEXT(p)           chainTable[(size_t)(p) & MAXD_MASK]
+#define GETNEXT(p)             ((p) - (size_t)DELTANEXT(p))
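+// chainTable[] stores, for each position (mod MAXD), the backward distance to the
+// previous position sharing the same hash; GETNEXT() walks this candidate chain.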
+
+
+//**************************************
+// Private functions
+//**************************************
+#if LZ4_ARCH64
+
+FORCE_INLINE int LZ4_NbCommonBytes (register U64 val)
+{
+#if defined(LZ4_BIG_ENDIAN)
+#  if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r = 0;
+    _BitScanReverse64( &r, val );
+    return (int)(r>>3);
+#  elif defined(__GNUC__) && ((__GNUC__ * 100 + __GNUC_MINOR__) >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_clzll(val) >> 3);
+#  else
+    int r;
+    if (!(val>>32)) { r=4; } else { r=0; val>>=32; }
+    if (!(val>>16)) { r+=2; val>>=8; } else { val>>=24; }
+    r += (!val);
+    return r;
+#  endif
+#else
+#  if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r = 0;
+    _BitScanForward64( &r, val );
+    return (int)(r>>3);
+#  elif defined(__GNUC__) && ((__GNUC__ * 100 + __GNUC_MINOR__) >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_ctzll(val) >> 3);
+#  else
+    static const int DeBruijnBytePos[64] = { 0, 0, 0, 0, 0, 1, 1, 2, 0, 3, 1, 3, 1, 4, 2, 7, 0, 2, 3, 6, 1, 5, 3, 5, 1, 3, 4, 4, 2, 5, 6, 7, 7, 0, 1, 2, 3, 3, 4, 6, 2, 6, 5, 5, 3, 4, 5, 6, 7, 1, 2, 4, 6, 4, 4, 5, 7, 2, 6, 5, 7, 6, 7, 7 };
+    return DeBruijnBytePos[((U64)((val & -val) * 0x0218A392CDABBD3F)) >> 58];
+#  endif
+#endif
+}
+
+#else
+
+FORCE_INLINE int LZ4_NbCommonBytes (register U32 val)
+{
+#if defined(LZ4_BIG_ENDIAN)
+#  if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r;
+    _BitScanReverse( &r, val );
+    return (int)(r>>3);
+#  elif defined(__GNUC__) && ((__GNUC__ * 100 + __GNUC_MINOR__) >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_clz(val) >> 3);
+#  else
+    int r;
+    if (!(val>>16)) { r=2; val>>=8; } else { r=0; val>>=24; }
+    r += (!val);
+    return r;
+#  endif
+#else
+#  if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    unsigned long r;
+    _BitScanForward( &r, val );
+    return (int)(r>>3);
+#  elif defined(__GNUC__) && ((__GNUC__ * 100 + __GNUC_MINOR__) >= 304) && !defined(LZ4_FORCE_SW_BITCOUNT)
+    return (__builtin_ctz(val) >> 3);
+#  else
+    static const int DeBruijnBytePos[32] = { 0, 0, 3, 0, 3, 1, 3, 0, 3, 2, 2, 1, 3, 2, 0, 1, 3, 3, 1, 2, 2, 2, 2, 0, 3, 1, 2, 0, 1, 0, 1, 1 };
+    return DeBruijnBytePos[((U32)((val & -(S32)val) * 0x077CB531U)) >> 27];
+#  endif
+#endif
+}
+
+#endif
+
+
+int LZ4_sizeofStreamStateHC()
+{
+    return sizeof(LZ4HC_Data_Structure);
+}
+
+FORCE_INLINE void LZ4_initHC (LZ4HC_Data_Structure* hc4, const BYTE* base)
+{
+    MEM_INIT((void*)hc4->hashTable, 0, sizeof(hc4->hashTable));
+    MEM_INIT(hc4->chainTable, 0xFF, sizeof(hc4->chainTable));
+    hc4->nextToUpdate = base + 1;
+    hc4->base = base;
+    hc4->inputBuffer = base;
+    hc4->end = base;
+}
+
+int LZ4_resetStreamStateHC(void* state, const char* inputBuffer)
+{
+    if ((((size_t)state) & (sizeof(void*)-1)) != 0) return 1;   // Error : pointer is not aligned on a pointer-size boundary (4 or 8 bytes)
+    LZ4_initHC((LZ4HC_Data_Structure*)state, (const BYTE*)inputBuffer);
+    return 0;
+}
+
+
+void* LZ4_createHC (const char* inputBuffer)
+{
+    void* hc4 = ALLOCATOR(sizeof(LZ4HC_Data_Structure));
+    LZ4_initHC ((LZ4HC_Data_Structure*)hc4, (const BYTE*)inputBuffer);
+    return hc4;
+}
+
+
+int LZ4_freeHC (void* LZ4HC_Data)
+{
+    FREEMEM(LZ4HC_Data);
+    return (0);
+}
+
+
+// Update chains up to ip (excluded)
+FORCE_INLINE void LZ4HC_Insert (LZ4HC_Data_Structure* hc4, const BYTE* ip)
+{
+    U16*   chainTable = hc4->chainTable;
+    HTYPE* HashTable  = hc4->hashTable;
+    INITBASE(base,hc4->base);
+
+    while(hc4->nextToUpdate < ip)
+    {
+        const BYTE* const p = hc4->nextToUpdate;
+        size_t delta = (p) - HASH_POINTER(p);
+        if (delta>MAX_DISTANCE) delta = MAX_DISTANCE;
+        DELTANEXT(p) = (U16)delta;
+        HashTable[HASH_VALUE(p)] = (HTYPE)((p) - base);
+        hc4->nextToUpdate++;
+    }
+}
+
+
+char* LZ4_slideInputBufferHC(void* LZ4HC_Data)
+{
+    LZ4HC_Data_Structure* hc4 = (LZ4HC_Data_Structure*)LZ4HC_Data;
+    U32 distance = (U32)(hc4->end - hc4->inputBuffer) - 64 KB;
+    distance = (distance >> 16) << 16;   // Must be a multiple of 64 KB
+    LZ4HC_Insert(hc4, hc4->end - MINMATCH);
+    memcpy((void*)(hc4->end - 64 KB - distance), (const void*)(hc4->end - 64 KB), 64 KB);
+    hc4->nextToUpdate -= distance;
+    hc4->base -= distance;
+    if ((U32)(hc4->inputBuffer - hc4->base) > 1 GB + 64 KB)   // Avoid overflow
+    {
+        int i;
+        hc4->base += 1 GB;
+        for (i=0; i<HASHTABLESIZE; i++) hc4->hashTable[i] -= 1 GB;
+    }
+    hc4->end -= distance;
+    return (char*)(hc4->end);
+}
+
+
+FORCE_INLINE size_t LZ4HC_CommonLength (const BYTE* p1, const BYTE* p2, const BYTE* const matchlimit)
+{
+    const BYTE* p1t = p1;
+
+    while (p1t<matchlimit-(STEPSIZE-1))
+    {
+        UARCH diff = AARCH(p2) ^ AARCH(p1t);
+        if (!diff) { p1t+=STEPSIZE; p2+=STEPSIZE; continue; }
+        p1t += LZ4_NbCommonBytes(diff);
+        return (p1t - p1);
+    }
+    if (LZ4_ARCH64) if ((p1t<(matchlimit-3)) && (A32(p2) == A32(p1t))) { p1t+=4; p2+=4; }
+    if ((p1t<(matchlimit-1)) && (A16(p2) == A16(p1t))) { p1t+=2; p2+=2; }
+    if ((p1t<matchlimit) && (*p2 == *p1t)) p1t++;
+    return (p1t - p1);
+}
+
+
+FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_Data_Structure* hc4, const BYTE* ip, const BYTE* const matchlimit, const BYTE** matchpos)
+{
+    U16* const chainTable = hc4->chainTable;
+    HTYPE* const HashTable = hc4->hashTable;
+    const BYTE* ref;
+    INITBASE(base,hc4->base);
+    int nbAttempts=MAX_NB_ATTEMPTS;
+    size_t repl=0, ml=0;
+    U16 delta=0;  // dummy assignment, to silence an uninitialized-variable warning
+
+    // HC4 match finder
+    LZ4HC_Insert(hc4, ip);
+    ref = HASH_POINTER(ip);
+
+#define REPEAT_OPTIMIZATION
+#ifdef REPEAT_OPTIMIZATION
+    // Detect repetitive sequences of length <= 4
+    if ((U32)(ip-ref) <= 4)        // potential repetition
+    {
+        if (A32(ref) == A32(ip))   // confirmed
+        {
+            delta = (U16)(ip-ref);
+            repl = ml  = LZ4HC_CommonLength(ip+MINMATCH, ref+MINMATCH, matchlimit) + MINMATCH;
+            *matchpos = ref;
+        }
+        ref = GETNEXT(ref);
+    }
+#endif
+
+    while (((U32)(ip-ref) <= MAX_DISTANCE) && (nbAttempts))
+    {
+        nbAttempts--;
+        if (*(ref+ml) == *(ip+ml))
+        if (A32(ref) == A32(ip))
+        {
+            size_t mlt = LZ4HC_CommonLength(ip+MINMATCH, ref+MINMATCH, matchlimit) + MINMATCH;
+            if (mlt > ml) { ml = mlt; *matchpos = ref; }
+        }
+        ref = GETNEXT(ref);
+    }
+
+#ifdef REPEAT_OPTIMIZATION
+    // Complete table
+    if (repl)
+    {
+        const BYTE* ptr = ip;
+        const BYTE* end;
+
+        end = ip + repl - (MINMATCH-1);
+        while(ptr < end-delta)
+        {
+            DELTANEXT(ptr) = delta;    // Pre-Load
+            ptr++;
+        }
+        do
+        {
+            DELTANEXT(ptr) = delta;
+            HashTable[HASH_VALUE(ptr)] = (HTYPE)((ptr) - base);     // Head of chain
+            ptr++;
+        } while(ptr < end);
+        hc4->nextToUpdate = end;
+    }
+#endif
+
+    return (int)ml;
+}
+
+
+FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch (LZ4HC_Data_Structure* hc4, const BYTE* ip, const BYTE* startLimit, const BYTE* matchlimit, int longest, const BYTE** matchpos, const BYTE** startpos)
+{
+    U16* const  chainTable = hc4->chainTable;
+    HTYPE* const HashTable = hc4->hashTable;
+    INITBASE(base,hc4->base);
+    const BYTE*  ref;
+    int nbAttempts = MAX_NB_ATTEMPTS;
+    int delta = (int)(ip-startLimit);
+
+    // First Match
+    LZ4HC_Insert(hc4, ip);
+    ref = HASH_POINTER(ip);
+
+    while (((U32)(ip-ref) <= MAX_DISTANCE) && (nbAttempts))
+    {
+        nbAttempts--;
+        if (*(startLimit + longest) == *(ref - delta + longest))
+        if (A32(ref) == A32(ip))
+        {
+#if 1
+            const BYTE* reft = ref+MINMATCH;
+            const BYTE* ipt = ip+MINMATCH;
+            const BYTE* startt = ip;
+
+            while (ipt<matchlimit-(STEPSIZE-1))
+            {
+                UARCH diff = AARCH(reft) ^ AARCH(ipt);
+                if (!diff) { ipt+=STEPSIZE; reft+=STEPSIZE; continue; }
+                ipt += LZ4_NbCommonBytes(diff);
+                goto _endCount;
+            }
+            if (LZ4_ARCH64) if ((ipt<(matchlimit-3)) && (A32(reft) == A32(ipt))) { ipt+=4; reft+=4; }
+            if ((ipt<(matchlimit-1)) && (A16(reft) == A16(ipt))) { ipt+=2; reft+=2; }
+            if ((ipt<matchlimit) && (*reft == *ipt)) ipt++;
+_endCount:
+            reft = ref;
+#else
+            // Easier for code maintenance, but unfortunately slower too
+            const BYTE* startt = ip;
+            const BYTE* reft = ref;
+            const BYTE* ipt = ip + MINMATCH + LZ4HC_CommonLength(ip+MINMATCH, ref+MINMATCH, matchlimit);
+#endif
+
+            while ((startt>startLimit) && (reft > hc4->inputBuffer) && (startt[-1] == reft[-1])) {startt--; reft--;}
+
+            if ((ipt-startt) > longest)
+            {
+                longest = (int)(ipt-startt);
+                *matchpos = reft;
+                *startpos = startt;
+            }
+        }
+        ref = GETNEXT(ref);
+    }
+
+    return longest;
+}
+
+
+typedef enum { noLimit = 0, limitedOutput = 1 } limitedOutput_directive;
+
+FORCE_INLINE int LZ4HC_encodeSequence (
+                       const BYTE** ip,
+                       BYTE** op,
+                       const BYTE** anchor,
+                       int matchLength,
+                       const BYTE* ref,
+                       limitedOutput_directive limitedOutputBuffer,
+                       BYTE* oend)
+{
+    int length;
+    BYTE* token;
+
+    // Encode Literal length
+    length = (int)(*ip - *anchor);
+    token = (*op)++;
+    if ((limitedOutputBuffer) && ((*op + length + (2 + 1 + LASTLITERALS) + (length>>8)) > oend)) return 1;   // Check output limit
+    if (length>=(int)RUN_MASK) { int len; *token=(RUN_MASK<<ML_BITS); len = length-RUN_MASK; for(; len > 254 ; len-=255) *(*op)++ = 255;  *(*op)++ = (BYTE)len; }
+    else *token = (BYTE)(length<<ML_BITS);
+
+    // Copy Literals
+    LZ4_BLINDCOPY(*anchor, *op, length);
+
+    // Encode Offset
+    LZ4_WRITE_LITTLEENDIAN_16(*op,(U16)(*ip-ref));
+
+    // Encode MatchLength
+    length = (int)(matchLength-MINMATCH);
+    if ((limitedOutputBuffer) && (*op + (1 + LASTLITERALS) + (length>>8) > oend)) return 1;   // Check output limit
+    if (length>=(int)ML_MASK) { *token+=ML_MASK; length-=ML_MASK; for(; length > 509 ; length-=510) { *(*op)++ = 255; *(*op)++ = 255; } if (length > 254) { length-=255; *(*op)++ = 255; } *(*op)++ = (BYTE)length; }
+    else *token += (BYTE)(length);
+
+    // Prepare next loop
+    *ip += matchLength;
+    *anchor = *ip;
+
+    return 0;
+}
+
+
+static int LZ4HC_compress_generic (
+                 void* ctxvoid,
+                 const char* source,
+                 char* dest,
+                 int inputSize,
+                 int maxOutputSize,
+                 limitedOutput_directive limit
+                )
+{
+    LZ4HC_Data_Structure* ctx = (LZ4HC_Data_Structure*) ctxvoid;
+    const BYTE* ip = (const BYTE*) source;
+    const BYTE* anchor = ip;
+    const BYTE* const iend = ip + inputSize;
+    const BYTE* const mflimit = iend - MFLIMIT;
+    const BYTE* const matchlimit = (iend - LASTLITERALS);
+
+    BYTE* op = (BYTE*) dest;
+    BYTE* const oend = op + maxOutputSize;
+
+    int   ml, ml2, ml3, ml0;
+    const BYTE* ref=NULL;
+    const BYTE* start2=NULL;
+    const BYTE* ref2=NULL;
+    const BYTE* start3=NULL;
+    const BYTE* ref3=NULL;
+    const BYTE* start0;
+    const BYTE* ref0;
+
+
+    // Ensure blocks follow each other
+    if (ip != ctx->end) return 0;
+    ctx->end += inputSize;
+
+    ip++;
+
+    // Main Loop
+    while (ip < mflimit)
+    {
+        ml = LZ4HC_InsertAndFindBestMatch (ctx, ip, matchlimit, (&ref));
+        if (!ml) { ip++; continue; }
+
+        // saved, in case we would skip too much
+        start0 = ip;
+        ref0 = ref;
+        ml0 = ml;
+
+_Search2:
+        if (ip+ml < mflimit)
+            ml2 = LZ4HC_InsertAndGetWiderMatch(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2);
+        else ml2 = ml;
+
+        if (ml2 == ml)  // No better match
+        {
+            if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) return 0;
+            continue;
+        }
+
+        if (start0 < ip)
+        {
+            if (start2 < ip + ml0)   // empirical
+            {
+                ip = start0;
+                ref = ref0;
+                ml = ml0;
+            }
+        }
+
+        // Here, start0==ip
+        if ((start2 - ip) < 3)   // First Match too small : removed
+        {
+            ml = ml2;
+            ip = start2;
+            ref = ref2;
+            goto _Search2;
+        }
+
+_Search3:
+        // Currently we have :
+        // ml2 > ml1, and
+        // ip1+3 <= ip2 (usually < ip1+ml1)
+        if ((start2 - ip) < OPTIMAL_ML)
+        {
+            int correction;
+            int new_ml = ml;
+            if (new_ml > OPTIMAL_ML) new_ml = OPTIMAL_ML;
+            if (ip+new_ml > start2 + ml2 - MINMATCH) new_ml = (int)(start2 - ip) + ml2 - MINMATCH;
+            correction = new_ml - (int)(start2 - ip);
+            if (correction > 0)
+            {
+                start2 += correction;
+                ref2 += correction;
+                ml2 -= correction;
+            }
+        }
+        // Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18)
+
+        if (start2 + ml2 < mflimit)
+            ml3 = LZ4HC_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3);
+        else ml3 = ml2;
+
+        if (ml3 == ml2) // No better match : 2 sequences to encode
+        {
+            // ip & ref are known; Now for ml
+            if (start2 < ip+ml)  ml = (int)(start2 - ip);
+            // Now, encode 2 sequences
+            if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) return 0;
+            ip = start2;
+            if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml2, ref2, limit, oend)) return 0;
+            continue;
+        }
+
+        if (start3 < ip+ml+3) // Not enough space for match 2 : remove it
+        {
+            if (start3 >= (ip+ml)) // can write Seq1 immediately ==> Seq2 is removed, so Seq3 becomes Seq1
+            {
+                if (start2 < ip+ml)
+                {
+                    int correction = (int)(ip+ml - start2);
+                    start2 += correction;
+                    ref2 += correction;
+                    ml2 -= correction;
+                    if (ml2 < MINMATCH)
+                    {
+                        start2 = start3;
+                        ref2 = ref3;
+                        ml2 = ml3;
+                    }
+                }
+
+                if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) return 0;
+                ip  = start3;
+                ref = ref3;
+                ml  = ml3;
+
+                start0 = start2;
+                ref0 = ref2;
+                ml0 = ml2;
+                goto _Search2;
+            }
+
+            start2 = start3;
+            ref2 = ref3;
+            ml2 = ml3;
+            goto _Search3;
+        }
+
+        // OK, now we have 3 ascending matches; let's write at least the first one
+        // ip & ref are known; Now for ml
+        if (start2 < ip+ml)
+        {
+            if ((start2 - ip) < (int)ML_MASK)
+            {
+                int correction;
+                if (ml > OPTIMAL_ML) ml = OPTIMAL_ML;
+                if (ip + ml > start2 + ml2 - MINMATCH) ml = (int)(start2 - ip) + ml2 - MINMATCH;
+                correction = ml - (int)(start2 - ip);
+                if (correction > 0)
+                {
+                    start2 += correction;
+                    ref2 += correction;
+                    ml2 -= correction;
+                }
+            }
+            else
+            {
+                ml = (int)(start2 - ip);
+            }
+        }
+        if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) return 0;
+
+        ip = start2;
+        ref = ref2;
+        ml = ml2;
+
+        start2 = start3;
+        ref2 = ref3;
+        ml2 = ml3;
+
+        goto _Search3;
+
+    }
+
+    // Encode Last Literals
+    {
+        int lastRun = (int)(iend - anchor);
+        if ((limit) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0;  // Check output limit
+        if (lastRun>=(int)RUN_MASK) { *op++=(RUN_MASK<<ML_BITS); lastRun-=RUN_MASK; for(; lastRun > 254 ; lastRun-=255) *op++ = 255; *op++ = (BYTE) lastRun; }
+        else *op++ = (BYTE)(lastRun<<ML_BITS);
+        memcpy(op, anchor, iend - anchor);
+        op += iend-anchor;
+    }
+
+    // End
+    return (int) (((char*)op)-dest);
+}
+
+
+int LZ4_compressHC(const char* source, char* dest, int inputSize)
+{
+    void* ctx = LZ4_createHC(source);
+    int result;
+    if (ctx==NULL) return 0;
+
+    result = LZ4HC_compress_generic (ctx, source, dest, inputSize, 0, noLimit);
+
+    LZ4_freeHC(ctx);
+    return result;
+}
+
+int LZ4_compressHC_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    void* ctx = LZ4_createHC(source);
+    int result;
+    if (ctx==NULL) return 0;
+
+    result = LZ4HC_compress_generic (ctx, source, dest, inputSize, maxOutputSize, limitedOutput);
+
+    LZ4_freeHC(ctx);
+    return result;
+}
+
+
+//*****************************
+// Using an external allocation
+//*****************************
+
+int LZ4_sizeofStateHC() { return sizeof(LZ4HC_Data_Structure); }
+
+
+int LZ4_compressHC_withStateHC (void* state, const char* source, char* dest, int inputSize)
+{
+    if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0;   // Error : state is not aligned for pointers (32 or 64 bits)
+    LZ4_initHC ((LZ4HC_Data_Structure*)state, (const BYTE*)source);
+    return LZ4HC_compress_generic (state, source, dest, inputSize, 0, noLimit);
+}
+
+
+int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0;   // Error : state is not aligned for pointers (32 or 64 bits)
+    LZ4_initHC ((LZ4HC_Data_Structure*)state, (const BYTE*)source);
+    return LZ4HC_compress_generic (state, source, dest, inputSize, maxOutputSize, limitedOutput);
+}
+
+
+//****************************
+// Stream functions
+//****************************
+
+int LZ4_compressHC_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize)
+{
+    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, 0, noLimit);
+}
+
+int LZ4_compressHC_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, maxOutputSize, limitedOutput);
+}
+
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4hc.h b/c-blosc/internal-complibs/lz4-r110/lz4hc.h
new file mode 100644
index 0000000..4fb1916
--- /dev/null
+++ b/c-blosc/internal-complibs/lz4-r110/lz4hc.h
@@ -0,0 +1,157 @@
+/*
+   LZ4 HC - High Compression Mode of LZ4
+   Header File
+   Copyright (C) 2011-2013, Yann Collet.
+   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions are
+   met:
+
+       * Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+       * Redistributions in binary form must reproduce the above
+   copyright notice, this list of conditions and the following disclaimer
+   in the documentation and/or other materials provided with the
+   distribution.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+   You can contact the author at :
+   - LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
+   - LZ4 source repository : http://code.google.com/p/lz4/
+*/
+#pragma once
+
+
+#if defined (__cplusplus)
+extern "C" {
+#endif
+
+
+int LZ4_compressHC (const char* source, char* dest, int inputSize);
+/*
+LZ4_compressHC :
+    return : the number of bytes in compressed buffer dest
+             or 0 if compression fails.
+    note : the destination buffer must be already allocated.
+        To avoid any problem, size it to handle the worst-case situation (incompressible input data).
+        A worst-case size evaluation is provided by the function LZ4_compressBound() (see "lz4.h").
+*/
+
+int LZ4_compressHC_limitedOutput (const char* source, char* dest, int inputSize, int maxOutputSize);
+/*
+LZ4_compressHC_limitedOutput() :
+    Compress 'inputSize' bytes from 'source' into an output buffer 'dest' of maximum size 'maxOutputSize'.
+    If this cannot be achieved, compression stops, and the result of the function is zero.
+    This function never writes outside of the provided output buffer.
+
+    inputSize  : Max supported value is 1 GB
+    maxOutputSize : the maximum allowed size of the destination buffer (which must be already allocated)
+    return : the number of output bytes written in buffer 'dest'
+             or 0 if compression fails.
+*/
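For illustration, a minimal sketch of the one-shot API just described (not part of the
upstream file; it only uses LZ4_compressHC() plus LZ4_compressBound() from "lz4.h", as
the note above suggests):

    #include <stdlib.h>
    #include "lz4.h"     /* for LZ4_compressBound() */
    #include "lz4hc.h"

    /* Compress src into a freshly allocated buffer sized for the worst case.
       Returns the compressed size (0 on failure) and stores the buffer in *out. */
    static int hc_compress_once(const char* src, int srcSize, char** out)
    {
        char* dst = (char*) malloc(LZ4_compressBound(srcSize));
        int cSize;
        if (dst == NULL) return 0;
        cSize = LZ4_compressHC(src, dst, srcSize);
        if (cSize == 0) { free(dst); return 0; }
        *out = dst;
        return cSize;
    }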
+
+
+/* Note :
+Decompression functions are provided within LZ4 source code (see "lz4.h") (BSD license)
+*/
+
+
+//*****************************
+// Using an external allocation
+//*****************************
+int LZ4_sizeofStateHC();
+int LZ4_compressHC_withStateHC               (void* state, const char* source, char* dest, int inputSize);
+int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize);
+
+/*
+These functions are provided should you prefer to allocate memory for the compression tables with your own allocation methods.
+To know how much memory must be allocated for the compression tables, use :
+int LZ4_sizeofStateHC();
+
+Note that the tables must be aligned for pointers (32 or 64 bits), otherwise compression will fail (return code 0).
+
+The allocated memory can be provided to the compression functions using the 'void* state' parameter.
+LZ4_compressHC_withStateHC() and LZ4_compressHC_limitedOutput_withStateHC() are equivalent to the previously described functions.
+They just use the externally allocated memory area instead of allocating their own (on the stack, or on the heap).
+*/
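A minimal sketch of this external-allocation path (not from upstream; it assumes
malloc(), whose result is suitably aligned for pointers):

    #include <stdlib.h>
    #include "lz4hc.h"

    static int hc_compress_with_state(const char* src, char* dst,
                                      int srcSize, int dstCapacity)
    {
        void* state = malloc(LZ4_sizeofStateHC());
        int cSize;
        if (state == NULL) return 0;
        cSize = LZ4_compressHC_limitedOutput_withStateHC(state, src, dst,
                                                         srcSize, dstCapacity);
        free(state);   /* the state owns nothing beyond this allocation */
        return cSize;  /* 0 on failure : misaligned state or dst too small */
    }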
+
+
+//****************************
+// Streaming Functions
+//****************************
+
+void* LZ4_createHC (const char* inputBuffer);
+int   LZ4_compressHC_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize);
+int   LZ4_compressHC_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize);
+char* LZ4_slideInputBufferHC (void* LZ4HC_Data);
+int   LZ4_freeHC (void* LZ4HC_Data);
+
+/*
+These functions allow the compression of dependent blocks, where each block benefits from the prior 64 KB within the preceding blocks.
+To achieve this, start by creating the LZ4HC Data Structure with the function :
+
+void* LZ4_createHC (const char* inputBuffer);
+The result of the function is a (void*) pointer to the LZ4HC Data Structure.
+This pointer will be needed by all the other functions.
+If the returned pointer is NULL, the allocation has failed, and compression must be aborted.
+The only parameter, 'const char* inputBuffer', must point at the beginning of the input buffer.
+The input buffer must be already allocated, and its size must be at least 192KB.
+'inputBuffer' will also be the 'const char* source' of the first block.
+
+All blocks are expected to lie next to each other within the input buffer, starting from 'inputBuffer'.
+To compress each block, use either LZ4_compressHC_continue() or LZ4_compressHC_limitedOutput_continue().
+Their behavior is identical to LZ4_compressHC() or LZ4_compressHC_limitedOutput(),
+but they require the LZ4HC Data Structure as their first argument, and check that each block starts right after the previous one.
+If the next block does not begin immediately after the previous one, compression will fail (return 0).
+
+When it is no longer possible to lay the next block after the previous one (not enough space left in the input buffer), a call to :
+char* LZ4_slideInputBufferHC(void* LZ4HC_Data);
+must be performed. It will typically copy the latest 64KB of input to the beginning of the input buffer.
+Note that, for this function to work properly, the minimum size of the input buffer is 192KB.
+==> The memory position where the next input data block must start is provided as the result of the function.
+
+Compression can then resume, using LZ4_compressHC_continue() or LZ4_compressHC_limitedOutput_continue(), as usual.
+
+When compression is completed, a call to LZ4_freeHC() will release the memory used by the LZ4HC Data Structure.
+*/
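A sketch of the streaming sequence just described (not from upstream; the FILE*-based
framing is an assumption made for illustration, and LZ4_compressBound() comes from
"lz4.h"):

    #include <stdio.h>
    #include <stdlib.h>
    #include "lz4.h"
    #include "lz4hc.h"

    #define IN_BUF_SIZE  (192 * 1024)   /* documented minimum input buffer size */
    #define BLOCK_SIZE   (64 * 1024)

    static int hc_compress_stream(FILE* fin, FILE* fout)
    {
        char* inBuf  = (char*) malloc(IN_BUF_SIZE);
        char* outBuf = (char*) malloc(LZ4_compressBound(BLOCK_SIZE));
        char* inPtr  = inBuf;
        void* ctx;
        size_t readBytes;

        if (inBuf == NULL || outBuf == NULL) { free(inBuf); free(outBuf); return -1; }
        ctx = LZ4_createHC(inBuf);          /* the first block starts at inBuf */
        if (ctx == NULL) { free(inBuf); free(outBuf); return -1; }

        while ((readBytes = fread(inPtr, 1, BLOCK_SIZE, fin)) > 0) {
            int cSize = LZ4_compressHC_continue(ctx, inPtr, outBuf, (int)readBytes);
            if (cSize == 0) break;          /* block did not follow the previous one */
            fwrite(outBuf, 1, (size_t)cSize, fout);
            inPtr += readBytes;
            if (inPtr + BLOCK_SIZE > inBuf + IN_BUF_SIZE)
                inPtr = LZ4_slideInputBufferHC(ctx);  /* keeps the last 64KB of history */
        }
        LZ4_freeHC(ctx);
        free(inBuf);
        free(outBuf);
        return 0;
    }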
+
+int LZ4_sizeofStreamStateHC();
+int LZ4_resetStreamStateHC(void* state, const char* inputBuffer);
+
+/*
+These functions achieve the same result as :
+void* LZ4_createHC (const char* inputBuffer);
+
+They are provided here to allow the user program to allocate memory using its own routines.
+
+To know how much space must be allocated, use LZ4_sizeofStreamStateHC();
+Note also that space must be aligned for pointers (32 or 64 bits).
+
+Once space is allocated, you must initialize it using : LZ4_resetStreamStateHC(void* state, const char* inputBuffer);
+void* state is a pointer to the space allocated.
+It must be aligned for pointers (32 or 64 bits), and be large enough.
+The parameter 'const char* inputBuffer' must, obviously, point at the beginning of the input buffer.
+The input buffer must be already allocated, and its size must be at least 192KB.
+'inputBuffer' will also be the 'const char* source' of the first block.
+
+The same space can be re-used multiple times, just by initializing it each time with LZ4_resetStreamStateHC().
+A return value of 0 from LZ4_resetStreamStateHC() means OK.
+Any other value means there was an error (typically, the state is not aligned for pointers (32 or 64 bits)).
+*/
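A short sketch of re-using one externally allocated state across streams (not from
upstream; malloc() is assumed to provide pointer-aligned memory):

    #include <stdlib.h>
    #include "lz4hc.h"

    /* One state, re-initialized for each new stream; returns 0 on success. */
    static int hc_restart_stream(void** state, const char* inputBuffer)
    {
        if (*state == NULL) *state = malloc(LZ4_sizeofStreamStateHC());
        if (*state == NULL) return -1;
        return LZ4_resetStreamStateHC(*state, inputBuffer);  /* 0 means OK */
    }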
+
+
+#if defined (__cplusplus)
+}
+#endif
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/add-version.patch b/c-blosc/internal-complibs/snappy-1.1.1/add-version.patch
new file mode 100644
index 0000000..d9b9873
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/add-version.patch
@@ -0,0 +1,19 @@
+diff --git a/internal-complibs/snappy-1.1.1/snappy-c.h b/internal-complibs/snappy-1.1.1/snappy-c.h
+index c6c2a86..eabe3ae 100644
+--- a/internal-complibs/snappy-1.1.1/snappy-c.h
++++ b/internal-complibs/snappy-1.1.1/snappy-c.h
+@@ -37,6 +37,14 @@
+ extern "C" {
+ #endif
+ 
++// The next is for getting the Snappy version even when using the C API
++// Please note that this is only defined in the Blosc sources of Snappy.
++#define SNAPPY_MAJOR 1
++#define SNAPPY_MINOR 1
++#define SNAPPY_PATCHLEVEL 1
++#define SNAPPY_VERSION \
++    ((SNAPPY_MAJOR << 16) | (SNAPPY_MINOR << 8) | SNAPPY_PATCHLEVEL)
++
+ #include <stddef.h>
+ 
+ /*
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/msvc1.patch b/c-blosc/internal-complibs/snappy-1.1.1/msvc1.patch
new file mode 100644
index 0000000..21f0aaa
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/msvc1.patch
@@ -0,0 +1,17 @@
+--- a/internal-complibs/snappy-1.1.1/snappy.h
++++ b/internal-complibs/snappy-1.1.1/snappy.h
+@@ -44,6 +44,14 @@
+
+ #include "snappy-stubs-public.h"
+
++// Windows does not define ssize_t by default.  This is a workaround.
++// Please note that this is only defined in the Blosc sources of Snappy.
++#if defined(_WIN32) && !defined(__MINGW32__)
++#include <BaseTsd.h>
++typedef SSIZE_T ssize_t;
++#endif
++
++
+ namespace snappy {
+   class Source;
+   class Sink;
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/msvc2.patch b/c-blosc/internal-complibs/snappy-1.1.1/msvc2.patch
new file mode 100644
index 0000000..ccface4
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/msvc2.patch
@@ -0,0 +1,27 @@
+diff --git a/internal-complibs/snappy-1.1.1/snappy-stubs-public.h b/internal-complibs/snappy-1.1.1/snappy-stubs-public.h
+index ecda439..4cc8965 100644
+--- a/internal-complibs/snappy-1.1.1/snappy-stubs-public.h
++++ b/internal-complibs/snappy-1.1.1/snappy-stubs-public.h
+@@ -36,8 +36,21 @@
+ #ifndef UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_
+ #define UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_
+ 
+-#if 1
++// MSVC 2008 does not include stdint.h.  This is a workaround by Mark W.
++// Please note that this is only defined in the Blosc sources of Snappy.
++#if !defined(_MSC_VER) || _MSC_VER >= 1600
+ #include <stdint.h>
++#else
++typedef signed char int8_t;
++typedef short int16_t;
++typedef int int32_t;
++typedef __int64 int64_t;
++typedef ptrdiff_t intptr_t;
++typedef unsigned char uint8_t;
++typedef unsigned short uint16_t;
++typedef unsigned int uint32_t;
++typedef unsigned __int64 uint64_t;
++typedef size_t uintptr_t;
+ #endif
+ 
+ #if 1
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-c.cc b/c-blosc/internal-complibs/snappy-1.1.1/snappy-c.cc
new file mode 100644
index 0000000..473a0b0
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-c.cc
@@ -0,0 +1,90 @@
+// Copyright 2011 Martin Gieseking <martin.gieseking at uos.de>.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "snappy.h"
+#include "snappy-c.h"
+
+extern "C" {
+
+snappy_status snappy_compress(const char* input,
+                              size_t input_length,
+                              char* compressed,
+                              size_t *compressed_length) {
+  if (*compressed_length < snappy_max_compressed_length(input_length)) {
+    return SNAPPY_BUFFER_TOO_SMALL;
+  }
+  snappy::RawCompress(input, input_length, compressed, compressed_length);
+  return SNAPPY_OK;
+}
+
+snappy_status snappy_uncompress(const char* compressed,
+                                size_t compressed_length,
+                                char* uncompressed,
+                                size_t* uncompressed_length) {
+  size_t real_uncompressed_length;
+  if (!snappy::GetUncompressedLength(compressed,
+                                     compressed_length,
+                                     &real_uncompressed_length)) {
+    return SNAPPY_INVALID_INPUT;
+  }
+  if (*uncompressed_length < real_uncompressed_length) {
+    return SNAPPY_BUFFER_TOO_SMALL;
+  }
+  if (!snappy::RawUncompress(compressed, compressed_length, uncompressed)) {
+    return SNAPPY_INVALID_INPUT;
+  }
+  *uncompressed_length = real_uncompressed_length;
+  return SNAPPY_OK;
+}
+
+size_t snappy_max_compressed_length(size_t source_length) {
+  return snappy::MaxCompressedLength(source_length);
+}
+
+snappy_status snappy_uncompressed_length(const char *compressed,
+                                         size_t compressed_length,
+                                         size_t *result) {
+  if (snappy::GetUncompressedLength(compressed,
+                                    compressed_length,
+                                    result)) {
+    return SNAPPY_OK;
+  } else {
+    return SNAPPY_INVALID_INPUT;
+  }
+}
+
+snappy_status snappy_validate_compressed_buffer(const char *compressed,
+                                                size_t compressed_length) {
+  if (snappy::IsValidCompressedBuffer(compressed, compressed_length)) {
+    return SNAPPY_OK;
+  } else {
+    return SNAPPY_INVALID_INPUT;
+  }
+}
+
+}  // extern "C"
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-c.h b/c-blosc/internal-complibs/snappy-1.1.1/snappy-c.h
new file mode 100644
index 0000000..e463fd4
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-c.h
@@ -0,0 +1,146 @@
+/*
+ * Copyright 2011 Martin Gieseking <martin.gieseking at uos.de>.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Plain C interface (a wrapper around the C++ implementation).
+ */
+
+#ifndef UTIL_SNAPPY_OPENSOURCE_SNAPPY_C_H_
+#define UTIL_SNAPPY_OPENSOURCE_SNAPPY_C_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+// The next is for getting the Snappy version even when using the C API.
+// Please note that this is only defined in the Blosc sources of Snappy.
+#define SNAPPY_MAJOR 1
+#define SNAPPY_MINOR 1
+#define SNAPPY_PATCHLEVEL 1
+#define SNAPPY_VERSION \
+    ((SNAPPY_MAJOR << 16) | (SNAPPY_MINOR << 8) | SNAPPY_PATCHLEVEL)
+
+#include <stddef.h>
+
+/*
+ * Return values; see the documentation for each function to know
+ * what each can return.
+ */
+typedef enum {
+  SNAPPY_OK = 0,
+  SNAPPY_INVALID_INPUT = 1,
+  SNAPPY_BUFFER_TOO_SMALL = 2
+} snappy_status;
+
+/*
+ * Takes the data stored in "input[0..input_length-1]" and stores
+ * it in the array pointed to by "compressed".
+ *
+ * <compressed_length> signals the space available in "compressed".
+ * If it is not at least equal to "snappy_max_compressed_length(input_length)",
+ * SNAPPY_BUFFER_TOO_SMALL is returned. After successful compression,
+ * <compressed_length> contains the true length of the compressed output,
+ * and SNAPPY_OK is returned.
+ *
+ * Example:
+ *   size_t output_length = snappy_max_compressed_length(input_length);
+ *   char* output = (char*)malloc(output_length);
+ *   if (snappy_compress(input, input_length, output, &output_length)
+ *       == SNAPPY_OK) {
+ *     ... Process(output, output_length) ...
+ *   }
+ *   free(output);
+ */
+snappy_status snappy_compress(const char* input,
+                              size_t input_length,
+                              char* compressed,
+                              size_t* compressed_length);
+
+/*
+ * Given data in "compressed[0..compressed_length-1]" generated by
+ * calling the snappy_compress routine, this routine stores
+ * the uncompressed data to
+ *   uncompressed[0..uncompressed_length-1].
+ * Returns failure (a value not equal to SNAPPY_OK) if the message
+ * is corrupted and could not be decompressed.
+ *
+ * <uncompressed_length> signals the space available in "uncompressed".
+ * If it is not at least equal to the value returned by
+ * snappy_uncompressed_length for this stream, SNAPPY_BUFFER_TOO_SMALL
+ * is returned. After successful decompression, <uncompressed_length>
+ * contains the true length of the decompressed output.
+ *
+ * Example:
+ *   size_t output_length;
+ *   if (snappy_uncompressed_length(input, input_length, &output_length)
+ *       != SNAPPY_OK) {
+ *     ... fail ...
+ *   }
+ *   char* output = (char*)malloc(output_length);
+ *   if (snappy_uncompress(input, input_length, output, &output_length)
+ *       == SNAPPY_OK) {
+ *     ... Process(output, output_length) ...
+ *   }
+ *   free(output);
+ */
+snappy_status snappy_uncompress(const char* compressed,
+                                size_t compressed_length,
+                                char* uncompressed,
+                                size_t* uncompressed_length);
+
+/*
+ * Returns the maximal size of the compressed representation of
+ * input data that is "source_length" bytes in length.
+ */
+size_t snappy_max_compressed_length(size_t source_length);
+
+/*
+ * REQUIRES: "compressed[]" was produced by snappy_compress()
+ * Returns SNAPPY_OK and stores the length of the uncompressed data in
+ * *result normally. Returns SNAPPY_INVALID_INPUT on parsing error.
+ * This operation takes O(1) time.
+ */
+snappy_status snappy_uncompressed_length(const char* compressed,
+                                         size_t compressed_length,
+                                         size_t* result);
+
+/*
+ * Check if the contents of "compressed[]" can be uncompressed successfully.
+ * Does not return the uncompressed data; returns SNAPPY_OK if the
+ * buffer is valid, or SNAPPY_INVALID_INPUT if it is not.
+ * Takes time proportional to compressed_length, but is usually at least a
+ * factor of four faster than actual decompression.
+ */
+snappy_status snappy_validate_compressed_buffer(const char* compressed,
+                                                size_t compressed_length);
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  /* UTIL_SNAPPY_OPENSOURCE_SNAPPY_C_H_ */
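A minimal round-trip sketch using only the functions declared in this header (not part
of the upstream file):

    #include <stdlib.h>
    #include <string.h>
    #include "snappy-c.h"

    /* Returns 1 if input survives a compress/uncompress round trip. */
    static int snappy_roundtrip(const char* input, size_t input_length)
    {
        size_t clen = snappy_max_compressed_length(input_length);
        char* cbuf = (char*) malloc(clen);
        char* ubuf = NULL;
        size_t ulen = 0;
        int ok = 0;

        if (cbuf != NULL
            && snappy_compress(input, input_length, cbuf, &clen) == SNAPPY_OK
            && snappy_validate_compressed_buffer(cbuf, clen) == SNAPPY_OK
            && snappy_uncompressed_length(cbuf, clen, &ulen) == SNAPPY_OK
            && (ubuf = (char*) malloc(ulen)) != NULL
            && snappy_uncompress(cbuf, clen, ubuf, &ulen) == SNAPPY_OK)
        {
            ok = (ulen == input_length) && (memcmp(input, ubuf, ulen) == 0);
        }
        free(ubuf);
        free(cbuf);
        return ok;
    }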
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-internal.h b/c-blosc/internal-complibs/snappy-1.1.1/snappy-internal.h
new file mode 100644
index 0000000..c99d331
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-internal.h
@@ -0,0 +1,150 @@
+// Copyright 2008 Google Inc. All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Internals shared between the Snappy implementation and its unittest.
+
+#ifndef UTIL_SNAPPY_SNAPPY_INTERNAL_H_
+#define UTIL_SNAPPY_SNAPPY_INTERNAL_H_
+
+#include "snappy-stubs-internal.h"
+
+namespace snappy {
+namespace internal {
+
+class WorkingMemory {
+ public:
+  WorkingMemory() : large_table_(NULL) { }
+  ~WorkingMemory() { delete[] large_table_; }
+
+  // Allocates and clears a hash table using memory in "*this",
+  // stores the number of buckets in "*table_size" and returns a pointer to
+  // the base of the hash table.
+  uint16* GetHashTable(size_t input_size, int* table_size);
+
+ private:
+  uint16 small_table_[1<<10];    // 2KB
+  uint16* large_table_;          // Allocated only when needed
+
+  DISALLOW_COPY_AND_ASSIGN(WorkingMemory);
+};
+
+// Flat array compression that does not emit the "uncompressed length"
+// prefix. Compresses "input" string to the "*op" buffer.
+//
+// REQUIRES: "input_length <= kBlockSize"
+// REQUIRES: "op" points to an array of memory that is at least
+// "MaxCompressedLength(input_length)" in size.
+// REQUIRES: All elements in "table[0..table_size-1]" are initialized to zero.
+// REQUIRES: "table_size" is a power of two
+//
+// Returns an "end" pointer into "op" buffer.
+// "end - op" is the compressed size of "input".
+char* CompressFragment(const char* input,
+                       size_t input_length,
+                       char* op,
+                       uint16* table,
+                       const int table_size);
+
+// Return the largest n such that
+//
+//   s1[0,n-1] == s2[0,n-1]
+//   and n <= (s2_limit - s2).
+//
+// Does not read *s2_limit or beyond.
+// Does not read *(s1 + (s2_limit - s2)) or beyond.
+// Requires that s2_limit >= s2.
+//
+// Separate implementation for x86_64, for speed.  Uses the fact that
+// x86_64 is little endian.
+#if defined(ARCH_K8)
+static inline int FindMatchLength(const char* s1,
+                                  const char* s2,
+                                  const char* s2_limit) {
+  assert(s2_limit >= s2);
+  int matched = 0;
+
+  // Find out how long the match is. We loop over the data 64 bits at a
+  // time until we find a 64-bit block that doesn't match; then we find
+  // the first non-matching bit and use that to calculate the total
+  // length of the match.
+  while (PREDICT_TRUE(s2 <= s2_limit - 8)) {
+    if (PREDICT_FALSE(UNALIGNED_LOAD64(s2) == UNALIGNED_LOAD64(s1 + matched))) {
+      s2 += 8;
+      matched += 8;
+    } else {
+      // On current (mid-2008) Opteron models there is a 3% more
+      // efficient code sequence to find the first non-matching byte.
+      // However, what follows is ~10% better on Intel Core 2 and newer,
+      // and we expect AMD's bsf instruction to improve.
+      uint64 x = UNALIGNED_LOAD64(s2) ^ UNALIGNED_LOAD64(s1 + matched);
+      int matching_bits = Bits::FindLSBSetNonZero64(x);
+      matched += matching_bits >> 3;
+      return matched;
+    }
+  }
+  while (PREDICT_TRUE(s2 < s2_limit)) {
+    if (PREDICT_TRUE(s1[matched] == *s2)) {
+      ++s2;
+      ++matched;
+    } else {
+      return matched;
+    }
+  }
+  return matched;
+}
+#else
+static inline int FindMatchLength(const char* s1,
+                                  const char* s2,
+                                  const char* s2_limit) {
+  // Implementation based on the x86-64 version, above.
+  assert(s2_limit >= s2);
+  int matched = 0;
+
+  while (s2 <= s2_limit - 4 &&
+         UNALIGNED_LOAD32(s2) == UNALIGNED_LOAD32(s1 + matched)) {
+    s2 += 4;
+    matched += 4;
+  }
+  if (LittleEndian::IsLittleEndian() && s2 <= s2_limit - 4) {
+    uint32 x = UNALIGNED_LOAD32(s2) ^ UNALIGNED_LOAD32(s1 + matched);
+    int matching_bits = Bits::FindLSBSetNonZero(x);
+    matched += matching_bits >> 3;
+  } else {
+    while ((s2 < s2_limit) && (s1[matched] == *s2)) {
+      ++s2;
+      ++matched;
+    }
+  }
+  return matched;
+}
+#endif
+
+}  // end namespace internal
+}  // end namespace snappy
+
+#endif  // UTIL_SNAPPY_SNAPPY_INTERNAL_H_
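The 64-bit fast path above can be shown as a standalone sketch (assumptions: a
little-endian target and the GCC/Clang builtin __builtin_ctzll, i.e. the same
preconditions as the ARCH_K8 branch with HAVE_BUILTIN_CTZ):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Length of the common prefix of s1 and s2, scanning 8 bytes at a time. */
    static int match_length(const char* s1, const char* s2, const char* s2_limit)
    {
        int matched = 0;
        assert(s2_limit >= s2);
        while (s2 + 8 <= s2_limit) {
            uint64_t a, b;
            memcpy(&a, s1 + matched, 8);   /* alignment-safe unaligned loads */
            memcpy(&b, s2, 8);
            if (a != b) {
                /* On little-endian, the lowest set bit of a ^ b lies in the
                   first differing byte; each matching byte adds 8 zero bits. */
                return matched + (__builtin_ctzll(a ^ b) >> 3);
            }
            s2 += 8;
            matched += 8;
        }
        while (s2 < s2_limit && s1[matched] == *s2) { ++s2; ++matched; }
        return matched;
    }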
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-sinksource.cc b/c-blosc/internal-complibs/snappy-1.1.1/snappy-sinksource.cc
new file mode 100644
index 0000000..5844552
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-sinksource.cc
@@ -0,0 +1,71 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include <string.h>
+
+#include "snappy-sinksource.h"
+
+namespace snappy {
+
+Source::~Source() { }
+
+Sink::~Sink() { }
+
+char* Sink::GetAppendBuffer(size_t length, char* scratch) {
+  return scratch;
+}
+
+ByteArraySource::~ByteArraySource() { }
+
+size_t ByteArraySource::Available() const { return left_; }
+
+const char* ByteArraySource::Peek(size_t* len) {
+  *len = left_;
+  return ptr_;
+}
+
+void ByteArraySource::Skip(size_t n) {
+  left_ -= n;
+  ptr_ += n;
+}
+
+UncheckedByteArraySink::~UncheckedByteArraySink() { }
+
+void UncheckedByteArraySink::Append(const char* data, size_t n) {
+  // Do no copying if the caller filled in the result of GetAppendBuffer()
+  if (data != dest_) {
+    memcpy(dest_, data, n);
+  }
+  dest_ += n;
+}
+
+char* UncheckedByteArraySink::GetAppendBuffer(size_t len, char* scratch) {
+  return dest_;
+}
+
+}
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-sinksource.h b/c-blosc/internal-complibs/snappy-1.1.1/snappy-sinksource.h
new file mode 100644
index 0000000..faabfa1
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-sinksource.h
@@ -0,0 +1,137 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#ifndef UTIL_SNAPPY_SNAPPY_SINKSOURCE_H_
+#define UTIL_SNAPPY_SNAPPY_SINKSOURCE_H_
+
+#include <stddef.h>
+
+
+namespace snappy {
+
+// A Sink is an interface that consumes a sequence of bytes.
+class Sink {
+ public:
+  Sink() { }
+  virtual ~Sink();
+
+  // Append "bytes[0,n-1]" to this.
+  virtual void Append(const char* bytes, size_t n) = 0;
+
+  // Returns a writable buffer of the specified length for appending.
+  // May return a pointer to the caller-owned scratch buffer which
+  // must have at least the indicated length.  The returned buffer is
+  // only valid until the next operation on this Sink.
+  //
+  // After writing at most "length" bytes, call Append() with the
+  // pointer returned from this function and the number of bytes
+  // written.  Many Append() implementations will avoid copying
+  // bytes if this function returned an internal buffer.
+  //
+  // If a non-scratch buffer is returned, the caller may only pass a
+  // prefix of it to Append().  That is, it is not correct to pass an
+  // interior pointer of the returned array to Append().
+  //
+  // The default implementation always returns the scratch buffer.
+  virtual char* GetAppendBuffer(size_t length, char* scratch);
+
+
+ private:
+  // No copying
+  Sink(const Sink&);
+  void operator=(const Sink&);
+};
+
+// A Source is an interface that yields a sequence of bytes
+class Source {
+ public:
+  Source() { }
+  virtual ~Source();
+
+  // Return the number of bytes left to read from the source
+  virtual size_t Available() const = 0;
+
+  // Peek at the next flat region of the source.  Does not reposition
+  // the source.  The returned region is empty iff Available()==0.
+  //
+  // Returns a pointer to the beginning of the region and stores its
+  // length in *len.
+  //
+  // The returned region is valid until the next call to Skip() or
+  // until this object is destroyed, whichever occurs first.
+  //
+  // The returned region may be larger than Available() (for example
+  // if this ByteSource is a view on a substring of a larger source).
+  // The caller is responsible for ensuring that it only reads the
+  // Available() bytes.
+  virtual const char* Peek(size_t* len) = 0;
+
+  // Skip the next n bytes.  Invalidates any buffer returned by
+  // a previous call to Peek().
+  // REQUIRES: Available() >= n
+  virtual void Skip(size_t n) = 0;
+
+ private:
+  // No copying
+  Source(const Source&);
+  void operator=(const Source&);
+};
+
+// A Source implementation that yields the contents of a flat array
+class ByteArraySource : public Source {
+ public:
+  ByteArraySource(const char* p, size_t n) : ptr_(p), left_(n) { }
+  virtual ~ByteArraySource();
+  virtual size_t Available() const;
+  virtual const char* Peek(size_t* len);
+  virtual void Skip(size_t n);
+ private:
+  const char* ptr_;
+  size_t left_;
+};
+
+// A Sink implementation that writes to a flat array without any bound checks.
+class UncheckedByteArraySink : public Sink {
+ public:
+  explicit UncheckedByteArraySink(char* dest) : dest_(dest) { }
+  virtual ~UncheckedByteArraySink();
+  virtual void Append(const char* data, size_t n);
+  virtual char* GetAppendBuffer(size_t len, char* scratch);
+
+  // Return the current output pointer so that a caller can see how
+  // many bytes were produced.
+  // Note: this is not a Sink method.
+  char* CurrentDestination() const { return dest_; }
+ private:
+  char* dest_;
+};
+
+
+}
+
+#endif  // UTIL_SNAPPY_SNAPPY_SINKSOURCE_H_
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-internal.cc b/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-internal.cc
new file mode 100644
index 0000000..6ed3343
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-internal.cc
@@ -0,0 +1,42 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include <algorithm>
+#include <string>
+
+#include "snappy-stubs-internal.h"
+
+namespace snappy {
+
+void Varint::Append32(string* s, uint32 value) {
+  char buf[Varint::kMax32];
+  const char* p = Varint::Encode32(buf, value);
+  s->append(buf, p - buf);
+}
+
+}  // namespace snappy
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-internal.h b/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-internal.h
new file mode 100644
index 0000000..12393b6
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-internal.h
@@ -0,0 +1,491 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Various stubs for the open-source version of Snappy.
+
+#ifndef UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_
+#define UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include <string>
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#ifdef HAVE_SYS_MMAN_H
+#include <sys/mman.h>
+#endif
+
+#include "snappy-stubs-public.h"
+
+#if defined(__x86_64__)
+
+// Enable 64-bit optimized versions of some routines.
+#define ARCH_K8 1
+
+#endif
+
+// Needed by OS X, among others.
+#ifndef MAP_ANONYMOUS
+#define MAP_ANONYMOUS MAP_ANON
+#endif
+
+// Pull in std::min, std::ostream, and the likes. This is safe because this
+// header file is never used from any public header files.
+using namespace std;
+
+// The size of an array, if known at compile-time.
+// Will give unexpected results if used on a pointer.
+// We undefine it first, since some compilers already have a definition.
+#ifdef ARRAYSIZE
+#undef ARRAYSIZE
+#endif
+#define ARRAYSIZE(a) (sizeof(a) / sizeof(*(a)))
+
+// Static prediction hints.
+#ifdef HAVE_BUILTIN_EXPECT
+#define PREDICT_FALSE(x) (__builtin_expect(x, 0))
+#define PREDICT_TRUE(x) (__builtin_expect(!!(x), 1))
+#else
+#define PREDICT_FALSE(x) x
+#define PREDICT_TRUE(x) x
+#endif
+
+// This is only used for recomputing the tag byte table used during
+// decompression; for simplicity we just remove it from the open-source
+// version (anyone who wants to regenerate it can just do the call
+// themselves within main()).
+#define DEFINE_bool(flag_name, default_value, description) \
+  bool FLAGS_ ## flag_name = default_value
+#define DECLARE_bool(flag_name) \
+  extern bool FLAGS_ ## flag_name
+
+namespace snappy {
+
+static const uint32 kuint32max = static_cast<uint32>(0xFFFFFFFF);
+static const int64 kint64max = static_cast<int64>(0x7FFFFFFFFFFFFFFFLL);
+
+// Potentially unaligned loads and stores.
+
+// x86 and PowerPC can simply do these loads and stores native.
+
+#if defined(__i386__) || defined(__x86_64__) || defined(__powerpc__)
+
+#define UNALIGNED_LOAD16(_p) (*reinterpret_cast<const uint16 *>(_p))
+#define UNALIGNED_LOAD32(_p) (*reinterpret_cast<const uint32 *>(_p))
+#define UNALIGNED_LOAD64(_p) (*reinterpret_cast<const uint64 *>(_p))
+
+#define UNALIGNED_STORE16(_p, _val) (*reinterpret_cast<uint16 *>(_p) = (_val))
+#define UNALIGNED_STORE32(_p, _val) (*reinterpret_cast<uint32 *>(_p) = (_val))
+#define UNALIGNED_STORE64(_p, _val) (*reinterpret_cast<uint64 *>(_p) = (_val))
+
+// ARMv7 and newer support native unaligned accesses, but only of 16-bit
+// and 32-bit values (not 64-bit); older versions either raise a fatal signal,
+// do an unaligned read and rotate the words around a bit, or do the reads very
+// slowly (trip through kernel mode). There's no simple #define that says just
+// “ARMv7 or higher”, so we have to filter away all ARMv5 and ARMv6
+// sub-architectures.
+//
+// This is a mess, but there's not much we can do about it.
+
+#elif defined(__arm__) && \
+      !defined(__ARM_ARCH_4__) && \
+      !defined(__ARM_ARCH_4T__) && \
+      !defined(__ARM_ARCH_5__) && \
+      !defined(__ARM_ARCH_5T__) && \
+      !defined(__ARM_ARCH_5TE__) && \
+      !defined(__ARM_ARCH_5TEJ__) && \
+      !defined(__ARM_ARCH_6__) && \
+      !defined(__ARM_ARCH_6J__) && \
+      !defined(__ARM_ARCH_6K__) && \
+      !defined(__ARM_ARCH_6Z__) && \
+      !defined(__ARM_ARCH_6ZK__) && \
+      !defined(__ARM_ARCH_6T2__)
+
+#define UNALIGNED_LOAD16(_p) (*reinterpret_cast<const uint16 *>(_p))
+#define UNALIGNED_LOAD32(_p) (*reinterpret_cast<const uint32 *>(_p))
+
+#define UNALIGNED_STORE16(_p, _val) (*reinterpret_cast<uint16 *>(_p) = (_val))
+#define UNALIGNED_STORE32(_p, _val) (*reinterpret_cast<uint32 *>(_p) = (_val))
+
+// TODO(user): NEON supports unaligned 64-bit loads and stores.
+// See if that would be more efficient on platforms supporting it,
+// at least for copies.
+
+inline uint64 UNALIGNED_LOAD64(const void *p) {
+  uint64 t;
+  memcpy(&t, p, sizeof t);
+  return t;
+}
+
+inline void UNALIGNED_STORE64(void *p, uint64 v) {
+  memcpy(p, &v, sizeof v);
+}
+
+#else
+
+// These functions are provided for architectures that don't support
+// unaligned loads and stores.
+
+inline uint16 UNALIGNED_LOAD16(const void *p) {
+  uint16 t;
+  memcpy(&t, p, sizeof t);
+  return t;
+}
+
+inline uint32 UNALIGNED_LOAD32(const void *p) {
+  uint32 t;
+  memcpy(&t, p, sizeof t);
+  return t;
+}
+
+inline uint64 UNALIGNED_LOAD64(const void *p) {
+  uint64 t;
+  memcpy(&t, p, sizeof t);
+  return t;
+}
+
+inline void UNALIGNED_STORE16(void *p, uint16 v) {
+  memcpy(p, &v, sizeof v);
+}
+
+inline void UNALIGNED_STORE32(void *p, uint32 v) {
+  memcpy(p, &v, sizeof v);
+}
+
+inline void UNALIGNED_STORE64(void *p, uint64 v) {
+  memcpy(p, &v, sizeof v);
+}
+
+#endif
+
+// This can be more efficient than UNALIGNED_LOAD64 + UNALIGNED_STORE64
+// on some platforms, in particular ARM.
+inline void UnalignedCopy64(const void *src, void *dst) {
+  if (sizeof(void *) == 8) {
+    UNALIGNED_STORE64(dst, UNALIGNED_LOAD64(src));
+  } else {
+    const char *src_char = reinterpret_cast<const char *>(src);
+    char *dst_char = reinterpret_cast<char *>(dst);
+
+    UNALIGNED_STORE32(dst_char, UNALIGNED_LOAD32(src_char));
+    UNALIGNED_STORE32(dst_char + 4, UNALIGNED_LOAD32(src_char + 4));
+  }
+}
+
+// The following guarantees declaration of the byte swap functions.
+#ifdef WORDS_BIGENDIAN
+
+#ifdef HAVE_SYS_BYTEORDER_H
+#include <sys/byteorder.h>
+#endif
+
+#ifdef HAVE_SYS_ENDIAN_H
+#include <sys/endian.h>
+#endif
+
+#ifdef _MSC_VER
+#include <stdlib.h>
+#define bswap_16(x) _byteswap_ushort(x)
+#define bswap_32(x) _byteswap_ulong(x)
+#define bswap_64(x) _byteswap_uint64(x)
+
+#elif defined(__APPLE__)
+// Mac OS X / Darwin features
+#include <libkern/OSByteOrder.h>
+#define bswap_16(x) OSSwapInt16(x)
+#define bswap_32(x) OSSwapInt32(x)
+#define bswap_64(x) OSSwapInt64(x)
+
+#elif defined(HAVE_BYTESWAP_H)
+#include <byteswap.h>
+
+#elif defined(bswap32)
+// FreeBSD defines bswap{16,32,64} in <sys/endian.h> (already #included).
+#define bswap_16(x) bswap16(x)
+#define bswap_32(x) bswap32(x)
+#define bswap_64(x) bswap64(x)
+
+#elif defined(BSWAP_64)
+// Solaris 10 defines BSWAP_{16,32,64} in <sys/byteorder.h> (already #included).
+#define bswap_16(x) BSWAP_16(x)
+#define bswap_32(x) BSWAP_32(x)
+#define bswap_64(x) BSWAP_64(x)
+
+#else
+
+inline uint16 bswap_16(uint16 x) {
+  return (x << 8) | (x >> 8);
+}
+
+inline uint32 bswap_32(uint32 x) {
+  x = ((x & 0xff00ff00UL) >> 8) | ((x & 0x00ff00ffUL) << 8);
+  return (x >> 16) | (x << 16);
+}
+
+inline uint64 bswap_64(uint64 x) {
+  x = ((x & 0xff00ff00ff00ff00ULL) >> 8) | ((x & 0x00ff00ff00ff00ffULL) << 8);
+  x = ((x & 0xffff0000ffff0000ULL) >> 16) | ((x & 0x0000ffff0000ffffULL) << 16);
+  return (x >> 32) | (x << 32);
+}
+
+#endif
+
+#endif  // WORDS_BIGENDIAN
+
+// Convert to little-endian storage, opposite of network format.
+// Convert x from host to little endian: x = LittleEndian.FromHost(x);
+// convert x from little endian to host: x = LittleEndian.ToHost(x);
+//
+//  Store values into unaligned memory converting to little endian order:
+//    LittleEndian.Store16(p, x);
+//
+//  Load unaligned values stored in little endian converting to host order:
+//    x = LittleEndian.Load16(p);
+class LittleEndian {
+ public:
+  // Conversion functions.
+#ifdef WORDS_BIGENDIAN
+
+  static uint16 FromHost16(uint16 x) { return bswap_16(x); }
+  static uint16 ToHost16(uint16 x) { return bswap_16(x); }
+
+  static uint32 FromHost32(uint32 x) { return bswap_32(x); }
+  static uint32 ToHost32(uint32 x) { return bswap_32(x); }
+
+  static bool IsLittleEndian() { return false; }
+
+#else  // !defined(WORDS_BIGENDIAN)
+
+  static uint16 FromHost16(uint16 x) { return x; }
+  static uint16 ToHost16(uint16 x) { return x; }
+
+  static uint32 FromHost32(uint32 x) { return x; }
+  static uint32 ToHost32(uint32 x) { return x; }
+
+  static bool IsLittleEndian() { return true; }
+
+#endif  // !defined(WORDS_BIGENDIAN)
+
+  // Functions to do unaligned loads and stores in little-endian order.
+  static uint16 Load16(const void *p) {
+    return ToHost16(UNALIGNED_LOAD16(p));
+  }
+
+  static void Store16(void *p, uint16 v) {
+    UNALIGNED_STORE16(p, FromHost16(v));
+  }
+
+  static uint32 Load32(const void *p) {
+    return ToHost32(UNALIGNED_LOAD32(p));
+  }
+
+  static void Store32(void *p, uint32 v) {
+    UNALIGNED_STORE32(p, FromHost32(v));
+  }
+};
+
+// Some bit-manipulation functions.
+class Bits {
+ public:
+  // Return floor(log2(n)) for positive integer n.  Returns -1 iff n == 0.
+  static int Log2Floor(uint32 n);
+
+  // Return the first set least / most significant bit, 0-indexed.  Returns an
+  // undefined value if n == 0.  FindLSBSetNonZero() is similar to ffs() except
+  // that it's 0-indexed.
+  static int FindLSBSetNonZero(uint32 n);
+  static int FindLSBSetNonZero64(uint64 n);
+
+ private:
+  DISALLOW_COPY_AND_ASSIGN(Bits);
+};
+
+#ifdef HAVE_BUILTIN_CTZ
+
+inline int Bits::Log2Floor(uint32 n) {
+  return n == 0 ? -1 : 31 ^ __builtin_clz(n);
+}
+
+inline int Bits::FindLSBSetNonZero(uint32 n) {
+  return __builtin_ctz(n);
+}
+
+inline int Bits::FindLSBSetNonZero64(uint64 n) {
+  return __builtin_ctzll(n);
+}
+
+#else  // Portable versions.
+
+inline int Bits::Log2Floor(uint32 n) {
+  if (n == 0)
+    return -1;
+  int log = 0;
+  uint32 value = n;
+  for (int i = 4; i >= 0; --i) {
+    int shift = (1 << i);
+    uint32 x = value >> shift;
+    if (x != 0) {
+      value = x;
+      log += shift;
+    }
+  }
+  assert(value == 1);
+  return log;
+}
+
+inline int Bits::FindLSBSetNonZero(uint32 n) {
+  int rc = 31;
+  for (int i = 4, shift = 1 << 4; i >= 0; --i) {
+    const uint32 x = n << shift;
+    if (x != 0) {
+      n = x;
+      rc -= shift;
+    }
+    shift >>= 1;
+  }
+  return rc;
+}
+
+// FindLSBSetNonZero64() is defined in terms of FindLSBSetNonZero().
+inline int Bits::FindLSBSetNonZero64(uint64 n) {
+  const uint32 bottombits = static_cast<uint32>(n);
+  if (bottombits == 0) {
+    // Bottom bits are zero, so scan in top bits
+    return 32 + FindLSBSetNonZero(static_cast<uint32>(n >> 32));
+  } else {
+    return FindLSBSetNonZero(bottombits);
+  }
+}
+
+#endif  // End portable versions.
+
+// Variable-length integer encoding.
+class Varint {
+ public:
+  // Maximum length of the varint encoding of a uint32.
+  static const int kMax32 = 5;
+
+  // Attempts to parse a varint32 from a prefix of the bytes in [ptr,limit-1].
+  // Never reads a character at or beyond limit.  If a valid/terminated varint32
+  // was found in the range, stores it in *OUTPUT and returns a pointer just
+  // past the last byte of the varint32. Else returns NULL.  On success,
+  // "result <= limit".
+  static const char* Parse32WithLimit(const char* ptr, const char* limit,
+                                      uint32* OUTPUT);
+
+  // REQUIRES   "ptr" points to a buffer of length sufficient to hold "v".
+  // EFFECTS    Encodes "v" into "ptr" and returns a pointer to the
+  //            byte just past the last encoded byte.
+  static char* Encode32(char* ptr, uint32 v);
+
+  // EFFECTS    Appends the varint representation of "value" to "*s".
+  static void Append32(string* s, uint32 value);
+};
+
+inline const char* Varint::Parse32WithLimit(const char* p,
+                                            const char* l,
+                                            uint32* OUTPUT) {
+  const unsigned char* ptr = reinterpret_cast<const unsigned char*>(p);
+  const unsigned char* limit = reinterpret_cast<const unsigned char*>(l);
+  uint32 b, result;
+  if (ptr >= limit) return NULL;
+  b = *(ptr++); result = b & 127;          if (b < 128) goto done;
+  if (ptr >= limit) return NULL;
+  b = *(ptr++); result |= (b & 127) <<  7; if (b < 128) goto done;
+  if (ptr >= limit) return NULL;
+  b = *(ptr++); result |= (b & 127) << 14; if (b < 128) goto done;
+  if (ptr >= limit) return NULL;
+  b = *(ptr++); result |= (b & 127) << 21; if (b < 128) goto done;
+  if (ptr >= limit) return NULL;
+  b = *(ptr++); result |= (b & 127) << 28; if (b < 16) goto done;
+  return NULL;       // Value is too long to be a varint32
+ done:
+  *OUTPUT = result;
+  return reinterpret_cast<const char*>(ptr);
+}
+
+inline char* Varint::Encode32(char* sptr, uint32 v) {
+  // Operate on characters as unsigneds
+  unsigned char* ptr = reinterpret_cast<unsigned char*>(sptr);
+  static const int B = 128;
+  if (v < (1<<7)) {
+    *(ptr++) = v;
+  } else if (v < (1<<14)) {
+    *(ptr++) = v | B;
+    *(ptr++) = v>>7;
+  } else if (v < (1<<21)) {
+    *(ptr++) = v | B;
+    *(ptr++) = (v>>7) | B;
+    *(ptr++) = v>>14;
+  } else if (v < (1<<28)) {
+    *(ptr++) = v | B;
+    *(ptr++) = (v>>7) | B;
+    *(ptr++) = (v>>14) | B;
+    *(ptr++) = v>>21;
+  } else {
+    *(ptr++) = v | B;
+    *(ptr++) = (v>>7) | B;
+    *(ptr++) = (v>>14) | B;
+    *(ptr++) = (v>>21) | B;
+    *(ptr++) = v>>28;
+  }
+  return reinterpret_cast<char*>(ptr);
+}
+
+// If you know the internal layout of the std::string in use, you can
+// replace this function with one that resizes the string without
+// filling the new space with zeros (if applicable) --
+// it will be non-portable but faster.
+inline void STLStringResizeUninitialized(string* s, size_t new_size) {
+  s->resize(new_size);
+}
+
+// Return a mutable char* pointing to a string's internal buffer,
+// which may not be null-terminated. Writing through this pointer will
+// modify the string.
+//
+// string_as_array(&str)[i] is valid for 0 <= i < str.size() until the
+// next call to a string method that invalidates iterators.
+//
+// As of 2006-04, there is no standard-blessed way of getting a
+// mutable reference to a string's internal buffer. However, issue 530
+// (http://www.open-std.org/JTC1/SC22/WG21/docs/lwg-defects.html#530)
+// proposes this as the method. It will officially be part of the standard
+// for C++0x. This should already work on all current implementations.
+inline char* string_as_array(string* str) {
+  return str->empty() ? NULL : &*str->begin();
+}
+
+}  // namespace snappy
+
+#endif  // UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_
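The varint32 format implemented by Varint::Encode32()/Parse32WithLimit() above — 7
payload bits per byte, with the high bit set on every byte except the last — as a
standalone C sketch (not part of the upstream file):

    #include <stdint.h>

    /* Encode v in 1-5 bytes; returns a pointer just past the last byte written. */
    static unsigned char* varint32_encode(unsigned char* p, uint32_t v)
    {
        while (v >= 128) { *p++ = (unsigned char)(v | 128); v >>= 7; }
        *p++ = (unsigned char)v;
        return p;
    }

    /* Decode from [p, limit); returns a pointer just past the varint, or NULL
       if the bytes do not form a valid varint32 within the range. */
    static const unsigned char* varint32_decode(const unsigned char* p,
                                                const unsigned char* limit,
                                                uint32_t* out)
    {
        uint32_t result = 0;
        int shift;
        for (shift = 0; shift <= 28; shift += 7) {
            uint32_t b;
            if (p >= limit) return NULL;
            b = *p++;
            if (shift == 28 && b >= 16) return NULL;  /* would overflow 32 bits */
            result |= (b & 127) << shift;
            if (b < 128) { *out = result; return p; }
        }
        return NULL;  /* not reached: every 5th-byte case returns above */
    }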
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-public.h b/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-public.h
new file mode 100644
index 0000000..4cc8965
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy-stubs-public.h
@@ -0,0 +1,111 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: sesse at google.com (Steinar H. Gunderson)
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Various type stubs for the open-source version of Snappy.
+//
+// This file cannot include config.h, as it is included from snappy.h,
+// which is a public header. Instead, snappy-stubs-public.h is generated
+// from snappy-stubs-public.h.in at configure time.
+
+#ifndef UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_
+#define UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_
+
+// MSVC 2008 does not include stdint.h.  This is a workaround by Mark W.
+// Please note that this is only defined in the Blosc sources of Snappy.
+#if !defined(_MSC_VER) || _MSC_VER >= 1600
+#include <stdint.h>
+#else
+typedef signed char int8_t;
+typedef short int16_t;
+typedef int int32_t;
+typedef __int64 int64_t;
+typedef ptrdiff_t intptr_t;
+typedef unsigned char uint8_t;
+typedef unsigned short uint16_t;
+typedef unsigned int uint32_t;
+typedef unsigned __int64 uint64_t;
+typedef size_t uintptr_t;
+#endif
+
+#if 1
+#include <stddef.h>
+#endif
+
+#if 0
+#include <sys/uio.h>
+#endif
+
+#define SNAPPY_MAJOR 1
+#define SNAPPY_MINOR 1
+#define SNAPPY_PATCHLEVEL 1
+#define SNAPPY_VERSION \
+    ((SNAPPY_MAJOR << 16) | (SNAPPY_MINOR << 8) | SNAPPY_PATCHLEVEL)
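+
+// For this release the macro expands to (1 << 16) | (1 << 8) | 1 == 0x010101,
+// so a check such as "#if SNAPPY_VERSION >= 0x010101" tests for 1.1.1+.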
+
+#include <string>
+
+namespace snappy {
+
+#if 1
+typedef int8_t int8;
+typedef uint8_t uint8;
+typedef int16_t int16;
+typedef uint16_t uint16;
+typedef int32_t int32;
+typedef uint32_t uint32;
+typedef int64_t int64;
+typedef uint64_t uint64;
+#else
+typedef signed char int8;
+typedef unsigned char uint8;
+typedef short int16;
+typedef unsigned short uint16;
+typedef int int32;
+typedef unsigned int uint32;
+typedef long long int64;
+typedef unsigned long long uint64;
+#endif
+
+typedef std::string string;
+
+#define DISALLOW_COPY_AND_ASSIGN(TypeName) \
+  TypeName(const TypeName&);               \
+  void operator=(const TypeName&)
+
+#if !0
+// Windows does not have an iovec type, yet the concept is universally useful.
+// It is simple to define it ourselves, so we put it inside our own namespace.
+struct iovec {
+	void* iov_base;
+	size_t iov_len;
+};
+#endif
+
+}  // namespace snappy
+
+#endif  // UTIL_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy.cc b/c-blosc/internal-complibs/snappy-1.1.1/snappy.cc
new file mode 100644
index 0000000..f8d0d23
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy.cc
@@ -0,0 +1,1306 @@
+// Copyright 2005 Google Inc. All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "snappy.h"
+#include "snappy-internal.h"
+#include "snappy-sinksource.h"
+
+#include <stdio.h>
+
+#include <algorithm>
+#include <string>
+#include <vector>
+
+
+namespace snappy {
+
+// Any hash function will produce a valid compressed bitstream, but a good
+// hash function reduces the number of collisions and thus yields better
+// compression for compressible input, and more speed for incompressible
+// input. Of course, it doesn't hurt if the hash function is reasonably fast
+// either, as it gets called a lot.
+static inline uint32 HashBytes(uint32 bytes, int shift) {
+  uint32 kMul = 0x1e35a7bd;
+  return (bytes * kMul) >> shift;
+}
+static inline uint32 Hash(const char* p, int shift) {
+  return HashBytes(UNALIGNED_LOAD32(p), shift);
+}
+
+size_t MaxCompressedLength(size_t source_len) {
+  // Compressed data can be defined as:
+  //    compressed := item* literal*
+  //    item       := literal* copy
+  //
+  // The trailing literal sequence has a space blowup of at most 62/60
+  // since a literal of length 60 needs one tag byte + one extra byte
+  // for length information.
+  //
+  // Item blowup is trickier to measure.  Suppose the "copy" op copies
+  // 4 bytes of data.  Because of a special check in the encoding code,
+  // we produce a 4-byte copy only if the offset is < 65536.  Therefore
+  // the copy op takes 3 bytes to encode, and this type of item leads
+  // to at most the 62/60 blowup for representing literals.
+  //
+  // Suppose the "copy" op copies 5 bytes of data.  If the offset is big
+  // enough, it will take 5 bytes to encode the copy op.  Therefore the
+  // worst case here is a one-byte literal followed by a five-byte copy.
+  // I.e., 6 bytes of input turn into 7 bytes of "compressed" data.
+  //
+  // This last factor dominates the blowup, so the final estimate is:
+  return 32 + source_len + source_len/6;
+}
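+
+// Worked example: for a full 64 KiB block (source_len == 65536) the bound is
+// 32 + 65536 + 65536/6 == 76490 bytes, i.e. roughly a 17% worst-case blowup.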
+
+enum {
+  LITERAL = 0,
+  COPY_1_BYTE_OFFSET = 1,  // 3 bit length + 3 bits of offset in opcode
+  COPY_2_BYTE_OFFSET = 2,
+  COPY_4_BYTE_OFFSET = 3
+};
+static const int kMaximumTagLength = 5;  // COPY_4_BYTE_OFFSET plus the actual offset.
+
+// Copy "len" bytes from "src" to "op", one byte at a time.  Used for
+// handling COPY operations where the input and output regions may
+// overlap.  For example, suppose:
+//    src    == "ab"
+//    op     == src + 2
+//    len    == 20
+// After IncrementalCopy(src, op, len), the result will have
+// eleven copies of "ab"
+//    ababababababababababab
+// Note that this does not match the semantics of either memcpy()
+// or memmove().
+static inline void IncrementalCopy(const char* src, char* op, ssize_t len) {
+  assert(len > 0);
+  do {
+    *op++ = *src++;
+  } while (--len > 0);
+}
+
+// Equivalent to IncrementalCopy except that it can write up to ten extra
+// bytes after the end of the copy, and that it is faster.
+//
+// The main part of this loop is a simple copy of eight bytes at a time until
+// we've copied (at least) the requested amount of bytes.  However, if op and
+// src are less than eight bytes apart (indicating a repeating pattern of
+// length < 8), we first need to expand the pattern in order to get the correct
+// results. For instance, if the buffer looks like this, with the eight-byte
+// <src> and <op> patterns marked as intervals:
+//
+//    abxxxxxxxxxxxx
+//    [------]           src
+//      [------]         op
+//
+// a single eight-byte copy from <src> to <op> will repeat the pattern once,
+// after which we can move <op> two bytes without moving <src>:
+//
+//    ababxxxxxxxxxx
+//    [------]           src
+//        [------]       op
+//
+// and repeat the exercise until the two no longer overlap.
+//
+// This allows us to do very well in the special case of one single byte
+// repeated many times, without taking a big hit for more general cases.
+//
+// The worst case of extra writing past the end of the match occurs when
+// op - src == 1 and len == 1; the last copy will read from byte positions
+// [0..7] and write to [4..11], whereas it was only supposed to write to
+// position 1. Thus, ten excess bytes.
+
+namespace {
+
+const int kMaxIncrementCopyOverflow = 10;
+
+inline void IncrementalCopyFastPath(const char* src, char* op, ssize_t len) {
+  while (op - src < 8) {
+    UnalignedCopy64(src, op);
+    len -= op - src;
+    op += op - src;
+  }
+  while (len > 0) {
+    UnalignedCopy64(src, op);
+    src += 8;
+    op += 8;
+    len -= 8;
+  }
+}
+
+}  // namespace
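+
+// Worked example for the pattern-expansion loop above: with op - src == 2 and
+// len == 6, the first UnalignedCopy64 doubles the two-byte pattern (len drops
+// to 4, op advances by 2), the second widens it to eight bytes (len drops to
+// 0, op - src == 8), and the plain eight-byte loop then handles any remaining
+// length.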
+
+static inline char* EmitLiteral(char* op,
+                                const char* literal,
+                                int len,
+                                bool allow_fast_path) {
+  int n = len - 1;      // Zero-length literals are disallowed
+  if (n < 60) {
+    // Fits in tag byte
+    *op++ = LITERAL | (n << 2);
+
+    // The vast majority of copies are below 16 bytes, for which a
+    // call to memcpy is overkill. This fast path can sometimes
+    // copy up to 15 bytes too much, but that is okay in the
+    // main loop, since we have a bit to go on for both sides:
+    //
+    //   - The input will always have kInputMarginBytes = 15 extra
+    //     available bytes, as long as we're in the main loop, and
+    //     if not, allow_fast_path = false.
+    //   - The output will always have 32 spare bytes (see
+    //     MaxCompressedLength).
+    if (allow_fast_path && len <= 16) {
+      UnalignedCopy64(literal, op);
+      UnalignedCopy64(literal + 8, op + 8);
+      return op + len;
+    }
+  } else {
+    // Encode in upcoming bytes
+    char* base = op;
+    int count = 0;
+    op++;
+    while (n > 0) {
+      *op++ = n & 0xff;
+      n >>= 8;
+      count++;
+    }
+    assert(count >= 1);
+    assert(count <= 4);
+    *base = LITERAL | ((59+count) << 2);
+  }
+  memcpy(op, literal, len);
+  return op + len;
+}
+
+static inline char* EmitCopyLessThan64(char* op, size_t offset, int len) {
+  assert(len <= 64);
+  assert(len >= 4);
+  assert(offset < 65536);
+
+  if ((len < 12) && (offset < 2048)) {
+    size_t len_minus_4 = len - 4;
+    assert(len_minus_4 < 8);            // Must fit in 3 bits
+    *op++ = COPY_1_BYTE_OFFSET + ((len_minus_4) << 2) + ((offset >> 8) << 5);
+    *op++ = offset & 0xff;
+  } else {
+    *op++ = COPY_2_BYTE_OFFSET + ((len-1) << 2);
+    LittleEndian::Store16(op, offset);
+    op += 2;
+  }
+  return op;
+}
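+
+// Worked example: EmitCopyLessThan64(op, 300, 7) takes the short form
+// (len < 12, offset < 2048): len_minus_4 == 3 and offset >> 8 == 1, so the
+// tag byte is 0x01 + (3 << 2) + (1 << 5) == 0x2d, followed by the low offset
+// byte 300 & 0xff == 0x2c -- two bytes for the whole copy.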
+
+static inline char* EmitCopy(char* op, size_t offset, int len) {
+  // Emit 64 byte copies but make sure to keep at least four bytes reserved
+  while (len >= 68) {
+    op = EmitCopyLessThan64(op, offset, 64);
+    len -= 64;
+  }
+
+  // Emit an extra 60 byte copy if we have too much data to fit in one copy
+  if (len > 64) {
+    op = EmitCopyLessThan64(op, offset, 60);
+    len -= 60;
+  }
+
+  // Emit remainder
+  op = EmitCopyLessThan64(op, offset, len);
+  return op;
+}
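+
+// Worked example: EmitCopy(op, offset, 150) emits a 64-byte copy (len drops
+// to 86), another 64-byte copy (len drops to 22), and, since 22 <= 64, a
+// final 22-byte copy. The 68/60 split above guarantees that the remainder is
+// always at least 4 bytes, as EmitCopyLessThan64 requires.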
+
+
+bool GetUncompressedLength(const char* start, size_t n, size_t* result) {
+  uint32 v = 0;
+  const char* limit = start + n;
+  if (Varint::Parse32WithLimit(start, limit, &v) != NULL) {
+    *result = v;
+    return true;
+  } else {
+    return false;
+  }
+}
+
+namespace internal {
+uint16* WorkingMemory::GetHashTable(size_t input_size, int* table_size) {
+  // Use smaller hash table when input.size() is smaller, since we
+  // fill the table, incurring O(hash table size) overhead for
+  // compression, and if the input is short, we won't need that
+  // many hash table entries anyway.
+  assert(kMaxHashTableSize >= 256);
+  size_t htsize = 256;
+  while (htsize < kMaxHashTableSize && htsize < input_size) {
+    htsize <<= 1;
+  }
+
+  uint16* table;
+  if (htsize <= ARRAYSIZE(small_table_)) {
+    table = small_table_;
+  } else {
+    if (large_table_ == NULL) {
+      large_table_ = new uint16[kMaxHashTableSize];
+    }
+    table = large_table_;
+  }
+
+  *table_size = htsize;
+  memset(table, 0, htsize * sizeof(*table));
+  return table;
+}
+}  // end namespace internal
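+
+// Worked example: for input_size == 3000 the table doubles 256 -> 512 ->
+// 1024 -> 2048 -> 4096 and stops (4096 >= 3000), so a 4096-entry table is
+// zeroed and used; larger inputs are capped at kMaxHashTableSize entries.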
+
+// For 0 <= offset <= 4, GetUint32AtOffset(GetEightBytesAt(p), offset) will
+// equal UNALIGNED_LOAD32(p + offset).  Motivation: On x86-64 hardware we have
+// empirically found that overlapping loads such as
+//  UNALIGNED_LOAD32(p) ... UNALIGNED_LOAD32(p+1) ... UNALIGNED_LOAD32(p+2)
+// are slower than UNALIGNED_LOAD64(p) followed by shifts and casts to uint32.
+//
+// We have different versions for 64- and 32-bit; ideally we would avoid the
+// two functions and just inline the UNALIGNED_LOAD64 call into
+// GetUint32AtOffset, but GCC (at least not as of 4.6) is seemingly not clever
+// enough to avoid loading the value multiple times then. For 64-bit, the load
+// is done when GetEightBytesAt() is called, whereas for 32-bit, the load is
+// done at GetUint32AtOffset() time.
+
+#ifdef ARCH_K8
+
+typedef uint64 EightBytesReference;
+
+static inline EightBytesReference GetEightBytesAt(const char* ptr) {
+  return UNALIGNED_LOAD64(ptr);
+}
+
+static inline uint32 GetUint32AtOffset(uint64 v, int offset) {
+  assert(offset >= 0);
+  assert(offset <= 4);
+  return v >> (LittleEndian::IsLittleEndian() ? 8 * offset : 32 - 8 * offset);
+}
+
+#else
+
+typedef const char* EightBytesReference;
+
+static inline EightBytesReference GetEightBytesAt(const char* ptr) {
+  return ptr;
+}
+
+static inline uint32 GetUint32AtOffset(const char* v, int offset) {
+  assert(offset >= 0);
+  assert(offset <= 4);
+  return UNALIGNED_LOAD32(v + offset);
+}
+
+#endif
+
+// Flat array compression that does not emit the "uncompressed length"
+// prefix. Compresses "input" string to the "*op" buffer.
+//
+// REQUIRES: "input" is at most "kBlockSize" bytes long.
+// REQUIRES: "op" points to an array of memory that is at least
+// "MaxCompressedLength(input.size())" in size.
+// REQUIRES: All elements in "table[0..table_size-1]" are initialized to zero.
+// REQUIRES: "table_size" is a power of two
+//
+// Returns an "end" pointer into "op" buffer.
+// "end - op" is the compressed size of "input".
+namespace internal {
+char* CompressFragment(const char* input,
+                       size_t input_size,
+                       char* op,
+                       uint16* table,
+                       const int table_size) {
+  // "ip" is the input pointer, and "op" is the output pointer.
+  const char* ip = input;
+  assert(input_size <= kBlockSize);
+  assert((table_size & (table_size - 1)) == 0); // table must be power of two
+  const int shift = 32 - Bits::Log2Floor(table_size);
+  assert(static_cast<int>(kuint32max >> shift) == table_size - 1);
+  const char* ip_end = input + input_size;
+  const char* base_ip = ip;
+  // Bytes in [next_emit, ip) will be emitted as literal bytes.  Or
+  // [next_emit, ip_end) after the main loop.
+  const char* next_emit = ip;
+
+  const size_t kInputMarginBytes = 15;
+  if (PREDICT_TRUE(input_size >= kInputMarginBytes)) {
+    const char* ip_limit = input + input_size - kInputMarginBytes;
+
+    for (uint32 next_hash = Hash(++ip, shift); ; ) {
+      assert(next_emit < ip);
+      // The body of this loop calls EmitLiteral once and then EmitCopy one or
+      // more times.  (The exception is that when we're close to exhausting
+      // the input we goto emit_remainder.)
+      //
+      // In the first iteration of this loop we're just starting, so
+      // there's nothing to copy, so calling EmitLiteral once is
+      // necessary.  And we only start a new iteration when the
+      // current iteration has determined that a call to EmitLiteral will
+      // precede the next call to EmitCopy (if any).
+      //
+      // Step 1: Scan forward in the input looking for a 4-byte-long match.
+      // If we get close to exhausting the input then goto emit_remainder.
+      //
+      // Heuristic match skipping: If 32 bytes are scanned with no matches
+      // found, start looking only at every other byte. If 32 more bytes are
+      // scanned, look at every third byte, etc.. When a match is found,
+      // immediately go back to looking at every byte. This is a small loss
+      // (~5% performance, ~0.1% density) for compressible data due to more
+      // bookkeeping, but for non-compressible data (such as JPEG) it's a huge
+      // win since the compressor quickly "realizes" the data is incompressible
+      // and doesn't bother looking for matches everywhere.
+      //
+      // The "skip" variable keeps track of how many bytes there are since the
+      // last match; dividing it by 32 (i.e. right-shifting by five) gives the
+      // number of bytes to move ahead for each iteration.
+      uint32 skip = 32;
+
+      const char* next_ip = ip;
+      const char* candidate;
+      do {
+        ip = next_ip;
+        uint32 hash = next_hash;
+        assert(hash == Hash(ip, shift));
+        uint32 bytes_between_hash_lookups = skip++ >> 5;
+        next_ip = ip + bytes_between_hash_lookups;
+        if (PREDICT_FALSE(next_ip > ip_limit)) {
+          goto emit_remainder;
+        }
+        next_hash = Hash(next_ip, shift);
+        candidate = base_ip + table[hash];
+        assert(candidate >= base_ip);
+        assert(candidate < ip);
+
+        table[hash] = ip - base_ip;
+      } while (PREDICT_TRUE(UNALIGNED_LOAD32(ip) !=
+                            UNALIGNED_LOAD32(candidate)));
+
+      // Step 2: A 4-byte match has been found.  We'll later see if more
+      // than 4 bytes match.  But, prior to the match, input
+      // bytes [next_emit, ip) are unmatched.  Emit them as "literal bytes."
+      assert(next_emit + 16 <= ip_end);
+      op = EmitLiteral(op, next_emit, ip - next_emit, true);
+
+      // Step 3: Call EmitCopy, and then see if another EmitCopy could
+      // be our next move.  Repeat until we find no match for the
+      // input immediately after what was consumed by the last EmitCopy call.
+      //
+      // If we exit this loop normally then we need to call EmitLiteral next,
+      // though we don't yet know how big the literal will be.  We handle that
+      // by proceeding to the next iteration of the main loop.  We also can exit
+      // this loop via goto if we get close to exhausting the input.
+      EightBytesReference input_bytes;
+      uint32 candidate_bytes = 0;
+
+      do {
+        // We have a 4-byte match at ip, and no need to emit any
+        // "literal bytes" prior to ip.
+        const char* base = ip;
+        int matched = 4 + FindMatchLength(candidate + 4, ip + 4, ip_end);
+        ip += matched;
+        size_t offset = base - candidate;
+        assert(0 == memcmp(base, candidate, matched));
+        op = EmitCopy(op, offset, matched);
+        // We could immediately start working at ip now, but to improve
+        // compression we first update table[Hash(ip - 1, ...)].
+        const char* insert_tail = ip - 1;
+        next_emit = ip;
+        if (PREDICT_FALSE(ip >= ip_limit)) {
+          goto emit_remainder;
+        }
+        input_bytes = GetEightBytesAt(insert_tail);
+        uint32 prev_hash = HashBytes(GetUint32AtOffset(input_bytes, 0), shift);
+        table[prev_hash] = ip - base_ip - 1;
+        uint32 cur_hash = HashBytes(GetUint32AtOffset(input_bytes, 1), shift);
+        candidate = base_ip + table[cur_hash];
+        candidate_bytes = UNALIGNED_LOAD32(candidate);
+        table[cur_hash] = ip - base_ip;
+      } while (GetUint32AtOffset(input_bytes, 1) == candidate_bytes);
+
+      next_hash = HashBytes(GetUint32AtOffset(input_bytes, 2), shift);
+      ++ip;
+    }
+  }
+
+ emit_remainder:
+  // Emit the remaining bytes as a literal
+  if (next_emit < ip_end) {
+    op = EmitLiteral(op, next_emit, ip_end - next_emit, false);
+  }
+
+  return op;
+}
+}  // end namespace internal
+
+// Signature of output types needed by decompression code.
+// The decompression code is templatized on a type that obeys this
+// signature so that we do not pay virtual function call overhead in
+// the middle of a tight decompression loop.
+//
+// class DecompressionWriter {
+//  public:
+//   // Called before decompression
+//   void SetExpectedLength(size_t length);
+//
+//   // Called after decompression
+//   bool CheckLength() const;
+//
+//   // Called repeatedly during decompression
+//   bool Append(const char* ip, size_t length);
+//   bool AppendFromSelf(uint32 offset, size_t length);
+//
+//   // The rules for how TryFastAppend differs from Append are somewhat
+//   // convoluted:
+//   //
+//   //  - TryFastAppend is allowed to decline (return false) at any
+//   //    time, for any reason -- just "return false" would be
+//   //    a perfectly legal implementation of TryFastAppend.
+//   //    The intention is for TryFastAppend to allow a fast path
+//   //    in the common case of a small append.
+//   //  - TryFastAppend is allowed to read up to <available> bytes
+//   //    from the input buffer, whereas Append is allowed to read
+//   //    <length>. However, if it returns true, it must leave
+//   //    at least five (kMaximumTagLength) bytes in the input buffer
+//   //    afterwards, so that there is always enough space to read the
+//   //    next tag without checking for a refill.
+//   //  - TryFastAppend must always return decline (return false)
+//   //    if <length> is 61 or more, as in this case the literal length is not
+//   //    decoded fully. In practice, this should not be a big problem,
+//   //    as it is unlikely that one would implement a fast path accepting
+//   //    this much data.
+//   //
+//   bool TryFastAppend(const char* ip, size_t available, size_t length);
+// };
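+//
+// A minimal conforming writer is only a few lines. This sketch merely counts
+// bytes without storing them (SnappyDecompressionValidator, further down in
+// this file, is the real in-tree equivalent):
+//
+//   class CountingWriter {
+//     size_t expected_, produced_;
+//    public:
+//     void SetExpectedLength(size_t len) { expected_ = len; produced_ = 0; }
+//     bool CheckLength() const { return produced_ == expected_; }
+//     bool Append(const char* ip, size_t n) {
+//       produced_ += n;
+//       return produced_ <= expected_;
+//     }
+//     bool TryFastAppend(const char* ip, size_t avail, size_t n) {
+//       return false;  // always decline; Append handles every request
+//     }
+//     bool AppendFromSelf(size_t offset, size_t n) {
+//       if (offset == 0 || offset > produced_) return false;  // invalid copy
+//       produced_ += n;
+//       return produced_ <= expected_;
+//     }
+//   };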
+
+// -----------------------------------------------------------------------
+// Lookup table for decompression code.  Generated by ComputeTable() below.
+// -----------------------------------------------------------------------
+
+// Mapping from i in range [0,4] to a mask to extract the bottom 8*i bits
+static const uint32 wordmask[] = {
+  0u, 0xffu, 0xffffu, 0xffffffu, 0xffffffffu
+};
+
+// Data stored per entry in lookup table:
+//      Range   Bits-used       Description
+//      ------------------------------------
+//      1..64   0..7            Literal/copy length encoded in opcode byte
+//      0..7    8..10           Copy offset encoded in opcode byte / 256
+//      0..4    11..13          Extra bytes after opcode
+//
+// We use eight bits for the length even though 7 would have sufficed
+// because of efficiency reasons:
+//      (1) Extracting a byte is faster than a bit-field
+//      (2) It properly aligns copy offset so we do not need a <<8
+static const uint16 char_table[256] = {
+  0x0001, 0x0804, 0x1001, 0x2001, 0x0002, 0x0805, 0x1002, 0x2002,
+  0x0003, 0x0806, 0x1003, 0x2003, 0x0004, 0x0807, 0x1004, 0x2004,
+  0x0005, 0x0808, 0x1005, 0x2005, 0x0006, 0x0809, 0x1006, 0x2006,
+  0x0007, 0x080a, 0x1007, 0x2007, 0x0008, 0x080b, 0x1008, 0x2008,
+  0x0009, 0x0904, 0x1009, 0x2009, 0x000a, 0x0905, 0x100a, 0x200a,
+  0x000b, 0x0906, 0x100b, 0x200b, 0x000c, 0x0907, 0x100c, 0x200c,
+  0x000d, 0x0908, 0x100d, 0x200d, 0x000e, 0x0909, 0x100e, 0x200e,
+  0x000f, 0x090a, 0x100f, 0x200f, 0x0010, 0x090b, 0x1010, 0x2010,
+  0x0011, 0x0a04, 0x1011, 0x2011, 0x0012, 0x0a05, 0x1012, 0x2012,
+  0x0013, 0x0a06, 0x1013, 0x2013, 0x0014, 0x0a07, 0x1014, 0x2014,
+  0x0015, 0x0a08, 0x1015, 0x2015, 0x0016, 0x0a09, 0x1016, 0x2016,
+  0x0017, 0x0a0a, 0x1017, 0x2017, 0x0018, 0x0a0b, 0x1018, 0x2018,
+  0x0019, 0x0b04, 0x1019, 0x2019, 0x001a, 0x0b05, 0x101a, 0x201a,
+  0x001b, 0x0b06, 0x101b, 0x201b, 0x001c, 0x0b07, 0x101c, 0x201c,
+  0x001d, 0x0b08, 0x101d, 0x201d, 0x001e, 0x0b09, 0x101e, 0x201e,
+  0x001f, 0x0b0a, 0x101f, 0x201f, 0x0020, 0x0b0b, 0x1020, 0x2020,
+  0x0021, 0x0c04, 0x1021, 0x2021, 0x0022, 0x0c05, 0x1022, 0x2022,
+  0x0023, 0x0c06, 0x1023, 0x2023, 0x0024, 0x0c07, 0x1024, 0x2024,
+  0x0025, 0x0c08, 0x1025, 0x2025, 0x0026, 0x0c09, 0x1026, 0x2026,
+  0x0027, 0x0c0a, 0x1027, 0x2027, 0x0028, 0x0c0b, 0x1028, 0x2028,
+  0x0029, 0x0d04, 0x1029, 0x2029, 0x002a, 0x0d05, 0x102a, 0x202a,
+  0x002b, 0x0d06, 0x102b, 0x202b, 0x002c, 0x0d07, 0x102c, 0x202c,
+  0x002d, 0x0d08, 0x102d, 0x202d, 0x002e, 0x0d09, 0x102e, 0x202e,
+  0x002f, 0x0d0a, 0x102f, 0x202f, 0x0030, 0x0d0b, 0x1030, 0x2030,
+  0x0031, 0x0e04, 0x1031, 0x2031, 0x0032, 0x0e05, 0x1032, 0x2032,
+  0x0033, 0x0e06, 0x1033, 0x2033, 0x0034, 0x0e07, 0x1034, 0x2034,
+  0x0035, 0x0e08, 0x1035, 0x2035, 0x0036, 0x0e09, 0x1036, 0x2036,
+  0x0037, 0x0e0a, 0x1037, 0x2037, 0x0038, 0x0e0b, 0x1038, 0x2038,
+  0x0039, 0x0f04, 0x1039, 0x2039, 0x003a, 0x0f05, 0x103a, 0x203a,
+  0x003b, 0x0f06, 0x103b, 0x203b, 0x003c, 0x0f07, 0x103c, 0x203c,
+  0x0801, 0x0f08, 0x103d, 0x203d, 0x1001, 0x0f09, 0x103e, 0x203e,
+  0x1801, 0x0f0a, 0x103f, 0x203f, 0x2001, 0x0f0b, 0x1040, 0x2040
+};
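+
+// Worked example: char_table[0x01] == 0x0804. Tag byte 0x01 is
+// COPY_1_BYTE_OFFSET with len-4 == 0, and the entry decodes accordingly:
+// length (0x0804 & 0xff) == 4, copy-offset bits (0x0804 & 0x700) >> 8 == 0,
+// and (0x0804 >> 11) == 1 extra byte after the opcode -- exactly what
+// MakeEntry(1, 4, 0) below produces.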
+
+// In debug mode, allow optional computation of the table at startup.
+// Also, check that the decompression table is correct.
+#ifndef NDEBUG
+DEFINE_bool(snappy_dump_decompression_table, false,
+            "If true, we print the decompression table at startup.");
+
+static uint16 MakeEntry(unsigned int extra,
+                        unsigned int len,
+                        unsigned int copy_offset) {
+  // Check that all of the fields fit within the allocated space
+  assert(extra       == (extra & 0x7));          // At most 3 bits
+  assert(copy_offset == (copy_offset & 0x7));    // At most 3 bits
+  assert(len         == (len & 0x7f));           // At most 7 bits
+  return len | (copy_offset << 8) | (extra << 11);
+}
+
+static void ComputeTable() {
+  uint16 dst[256];
+
+  // Place invalid entries in all places to detect missing initialization
+  int assigned = 0;
+  for (int i = 0; i < 256; i++) {
+    dst[i] = 0xffff;
+  }
+
+  // Small LITERAL entries.  We store (len-1) in the top 6 bits.
+  for (unsigned int len = 1; len <= 60; len++) {
+    dst[LITERAL | ((len-1) << 2)] = MakeEntry(0, len, 0);
+    assigned++;
+  }
+
+  // Large LITERAL entries.  We use 60..63 in the high 6 bits to
+  // encode the number of bytes of length info that follow the opcode.
+  for (unsigned int extra_bytes = 1; extra_bytes <= 4; extra_bytes++) {
+    // We set the length field in the lookup table to 1 because extra
+    // bytes encode len-1.
+    dst[LITERAL | ((extra_bytes+59) << 2)] = MakeEntry(extra_bytes, 1, 0);
+    assigned++;
+  }
+
+  // COPY_1_BYTE_OFFSET.
+  //
+  // The tag byte in the compressed data stores len-4 in 3 bits, and
+  // offset/256 in 5 bits.  offset%256 is stored in the next byte.
+  //
+  // This format is used for length in range [4..11] and offset in
+  // range [0..2047]
+  for (unsigned int len = 4; len < 12; len++) {
+    for (unsigned int offset = 0; offset < 2048; offset += 256) {
+      dst[COPY_1_BYTE_OFFSET | ((len-4)<<2) | ((offset>>8)<<5)] =
+        MakeEntry(1, len, offset>>8);
+      assigned++;
+    }
+  }
+
+  // COPY_2_BYTE_OFFSET.
+  // Tag contains len-1 in top 6 bits, and offset in next two bytes.
+  for (unsigned int len = 1; len <= 64; len++) {
+    dst[COPY_2_BYTE_OFFSET | ((len-1)<<2)] = MakeEntry(2, len, 0);
+    assigned++;
+  }
+
+  // COPY_4_BYTE_OFFSET.
+  // Tag contains len-1 in top 6 bits, and offset in next four bytes.
+  for (unsigned int len = 1; len <= 64; len++) {
+    dst[COPY_4_BYTE_OFFSET | ((len-1)<<2)] = MakeEntry(4, len, 0);
+    assigned++;
+  }
+
+  // Check that each entry was initialized exactly once.
+  if (assigned != 256) {
+    fprintf(stderr, "ComputeTable: assigned only %d of 256\n", assigned);
+    abort();
+  }
+  for (int i = 0; i < 256; i++) {
+    if (dst[i] == 0xffff) {
+      fprintf(stderr, "ComputeTable: did not assign byte %d\n", i);
+      abort();
+    }
+  }
+
+  if (FLAGS_snappy_dump_decompression_table) {
+    printf("static const uint16 char_table[256] = {\n  ");
+    for (int i = 0; i < 256; i++) {
+      printf("0x%04x%s",
+             dst[i],
+             ((i == 255) ? "\n" : (((i%8) == 7) ? ",\n  " : ", ")));
+    }
+    printf("};\n");
+  }
+
+  // Check that computed table matched recorded table
+  for (int i = 0; i < 256; i++) {
+    if (dst[i] != char_table[i]) {
+      fprintf(stderr, "ComputeTable: byte %d: computed (%x), expect (%x)\n",
+              i, static_cast<int>(dst[i]), static_cast<int>(char_table[i]));
+      abort();
+    }
+  }
+}
+#endif /* !NDEBUG */
+
+// Helper class for decompression
+class SnappyDecompressor {
+ private:
+  Source*       reader_;         // Underlying source of bytes to decompress
+  const char*   ip_;             // Points to next buffered byte
+  const char*   ip_limit_;       // Points just past buffered bytes
+  uint32        peeked_;         // Bytes peeked from reader (need to skip)
+  bool          eof_;            // Hit end of input without an error?
+  char          scratch_[kMaximumTagLength];  // See RefillTag().
+
+  // Ensure that all of the tag metadata for the next tag is available
+  // in [ip_..ip_limit_-1].  Also ensures that [ip,ip+4] is readable even
+  // if (ip_limit_ - ip_ < 5).
+  //
+  // Returns true on success, false on error or end of input.
+  bool RefillTag();
+
+ public:
+  explicit SnappyDecompressor(Source* reader)
+      : reader_(reader),
+        ip_(NULL),
+        ip_limit_(NULL),
+        peeked_(0),
+        eof_(false) {
+  }
+
+  ~SnappyDecompressor() {
+    // Advance past any bytes we peeked at from the reader
+    reader_->Skip(peeked_);
+  }
+
+  // Returns true iff we have hit the end of the input without an error.
+  bool eof() const {
+    return eof_;
+  }
+
+  // Read the uncompressed length stored at the start of the compressed data.
+  // On success, stores the length in *result and returns true.
+  // On failure, returns false.
+  bool ReadUncompressedLength(uint32* result) {
+    assert(ip_ == NULL);       // Must not have read anything yet
+    // Length is encoded in 1..5 bytes
+    *result = 0;
+    uint32 shift = 0;
+    while (true) {
+      if (shift >= 32) return false;
+      size_t n;
+      const char* ip = reader_->Peek(&n);
+      if (n == 0) return false;
+      const unsigned char c = *(reinterpret_cast<const unsigned char*>(ip));
+      reader_->Skip(1);
+      *result |= static_cast<uint32>(c & 0x7f) << shift;
+      if (c < 128) {
+        break;
+      }
+      shift += 7;
+    }
+    return true;
+  }
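+
+  // Worked example: the prefix bytes { 0xac, 0x02 } decode as
+  // (0xac & 0x7f) << 0 == 44 plus (0x02 & 0x7f) << 7 == 256, i.e. an
+  // uncompressed length of 300; the loop stops because 0x02 < 128.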
+
+  // Process all remaining items in the input.
+  // Returns when the input is exhausted or the writer signals an error;
+  // callers check eof() and the writer's CheckLength() to tell them apart.
+  template <class Writer>
+  void DecompressAllTags(Writer* writer) {
+    const char* ip = ip_;
+
+    // We could have put this refill fragment only at the beginning of the loop.
+    // However, duplicating it at the end of each branch gives the compiler more
+    // scope to optimize the <ip_limit_ - ip> expression based on the local
+    // context, which overall increases speed.
+    #define MAYBE_REFILL() \
+        if (ip_limit_ - ip < kMaximumTagLength) { \
+          ip_ = ip; \
+          if (!RefillTag()) return; \
+          ip = ip_; \
+        }
+
+    MAYBE_REFILL();
+    for ( ;; ) {
+      const unsigned char c = *(reinterpret_cast<const unsigned char*>(ip++));
+
+      if ((c & 0x3) == LITERAL) {
+        size_t literal_length = (c >> 2) + 1u;
+        if (writer->TryFastAppend(ip, ip_limit_ - ip, literal_length)) {
+          assert(literal_length < 61);
+          ip += literal_length;
+          // NOTE(user): There is no MAYBE_REFILL() here, as TryFastAppend()
+          // will not return true unless there's already at least five spare
+          // bytes in addition to the literal.
+          continue;
+        }
+        if (PREDICT_FALSE(literal_length >= 61)) {
+          // Long literal.
+          const size_t literal_length_length = literal_length - 60;
+          literal_length =
+              (LittleEndian::Load32(ip) & wordmask[literal_length_length]) + 1;
+          ip += literal_length_length;
+        }
+
+        size_t avail = ip_limit_ - ip;
+        while (avail < literal_length) {
+          if (!writer->Append(ip, avail)) return;
+          literal_length -= avail;
+          reader_->Skip(peeked_);
+          size_t n;
+          ip = reader_->Peek(&n);
+          avail = n;
+          peeked_ = avail;
+          if (avail == 0) return;  // Premature end of input
+          ip_limit_ = ip + avail;
+        }
+        if (!writer->Append(ip, literal_length)) {
+          return;
+        }
+        ip += literal_length;
+        MAYBE_REFILL();
+      } else {
+        const uint32 entry = char_table[c];
+        const uint32 trailer = LittleEndian::Load32(ip) & wordmask[entry >> 11];
+        const uint32 length = entry & 0xff;
+        ip += entry >> 11;
+
+        // copy_offset/256 is encoded in bits 8..10.  By just fetching
+        // those bits, we get copy_offset (since the bit-field starts at
+        // bit 8).
+        const uint32 copy_offset = entry & 0x700;
+        if (!writer->AppendFromSelf(copy_offset + trailer, length)) {
+          return;
+        }
+        MAYBE_REFILL();
+      }
+    }
+
+#undef MAYBE_REFILL
+  }
+};
+
+bool SnappyDecompressor::RefillTag() {
+  const char* ip = ip_;
+  if (ip == ip_limit_) {
+    // Fetch a new fragment from the reader
+    reader_->Skip(peeked_);   // All peeked bytes are used up
+    size_t n;
+    ip = reader_->Peek(&n);
+    peeked_ = n;
+    if (n == 0) {
+      eof_ = true;
+      return false;
+    }
+    ip_limit_ = ip + n;
+  }
+
+  // Read the tag character
+  assert(ip < ip_limit_);
+  const unsigned char c = *(reinterpret_cast<const unsigned char*>(ip));
+  const uint32 entry = char_table[c];
+  const uint32 needed = (entry >> 11) + 1;  // +1 byte for 'c'
+  assert(needed <= sizeof(scratch_));
+
+  // Read more bytes from reader if needed
+  uint32 nbuf = ip_limit_ - ip;
+  if (nbuf < needed) {
+    // Stitch together bytes from ip and reader to form the word
+    // contents.  We store the needed bytes in "scratch_".  They
+    // will be consumed immediately by the caller since we do not
+    // read more than we need.
+    memmove(scratch_, ip, nbuf);
+    reader_->Skip(peeked_);  // All peeked bytes are used up
+    peeked_ = 0;
+    while (nbuf < needed) {
+      size_t length;
+      const char* src = reader_->Peek(&length);
+      if (length == 0) return false;
+      uint32 to_add = min<uint32>(needed - nbuf, length);
+      memcpy(scratch_ + nbuf, src, to_add);
+      nbuf += to_add;
+      reader_->Skip(to_add);
+    }
+    assert(nbuf == needed);
+    ip_ = scratch_;
+    ip_limit_ = scratch_ + needed;
+  } else if (nbuf < kMaximumTagLength) {
+    // Have enough bytes, but move into scratch_ so that we do not
+    // read past end of input
+    memmove(scratch_, ip, nbuf);
+    reader_->Skip(peeked_);  // All peeked bytes are used up
+    peeked_ = 0;
+    ip_ = scratch_;
+    ip_limit_ = scratch_ + nbuf;
+  } else {
+    // Pass pointer to buffer returned by reader_.
+    ip_ = ip;
+  }
+  return true;
+}
+
+template <typename Writer>
+static bool InternalUncompress(Source* r, Writer* writer) {
+  // Read the uncompressed length from the front of the compressed input
+  SnappyDecompressor decompressor(r);
+  uint32 uncompressed_len = 0;
+  if (!decompressor.ReadUncompressedLength(&uncompressed_len)) return false;
+  return InternalUncompressAllTags(&decompressor, writer, uncompressed_len);
+}
+
+template <typename Writer>
+static bool InternalUncompressAllTags(SnappyDecompressor* decompressor,
+                                      Writer* writer,
+                                      uint32 uncompressed_len) {
+  writer->SetExpectedLength(uncompressed_len);
+
+  // Process the entire input
+  decompressor->DecompressAllTags(writer);
+  return (decompressor->eof() && writer->CheckLength());
+}
+
+bool GetUncompressedLength(Source* source, uint32* result) {
+  SnappyDecompressor decompressor(source);
+  return decompressor.ReadUncompressedLength(result);
+}
+
+size_t Compress(Source* reader, Sink* writer) {
+  size_t written = 0;
+  size_t N = reader->Available();
+  char ulength[Varint::kMax32];
+  char* p = Varint::Encode32(ulength, N);
+  writer->Append(ulength, p-ulength);
+  written += (p - ulength);
+
+  internal::WorkingMemory wmem;
+  char* scratch = NULL;
+  char* scratch_output = NULL;
+
+  while (N > 0) {
+    // Get next block to compress (without copying if possible)
+    size_t fragment_size;
+    const char* fragment = reader->Peek(&fragment_size);
+    assert(fragment_size != 0);  // premature end of input
+    const size_t num_to_read = min(N, kBlockSize);
+    size_t bytes_read = fragment_size;
+
+    size_t pending_advance = 0;
+    if (bytes_read >= num_to_read) {
+      // Buffer returned by reader is large enough
+      pending_advance = num_to_read;
+      fragment_size = num_to_read;
+    } else {
+      // Read into scratch buffer
+      if (scratch == NULL) {
+        // If this is the last iteration, we want to allocate N bytes
+        // of space, otherwise the max possible kBlockSize space.
+        // num_to_read contains exactly the correct value
+        scratch = new char[num_to_read];
+      }
+      memcpy(scratch, fragment, bytes_read);
+      reader->Skip(bytes_read);
+
+      while (bytes_read < num_to_read) {
+        fragment = reader->Peek(&fragment_size);
+        size_t n = min<size_t>(fragment_size, num_to_read - bytes_read);
+        memcpy(scratch + bytes_read, fragment, n);
+        bytes_read += n;
+        reader->Skip(n);
+      }
+      assert(bytes_read == num_to_read);
+      fragment = scratch;
+      fragment_size = num_to_read;
+    }
+    assert(fragment_size == num_to_read);
+
+    // Get encoding table for compression
+    int table_size;
+    uint16* table = wmem.GetHashTable(num_to_read, &table_size);
+
+    // Compress input_fragment and append to dest
+    const int max_output = MaxCompressedLength(num_to_read);
+
+    // Need a scratch buffer for the output, in case the byte sink doesn't
+    // have room for us directly.
+    if (scratch_output == NULL) {
+      scratch_output = new char[max_output];
+    } else {
+      // Since we encode kBlockSize regions followed by a region
+      // which is <= kBlockSize in length, a previously allocated
+      // scratch_output[] region is big enough for this iteration.
+    }
+    char* dest = writer->GetAppendBuffer(max_output, scratch_output);
+    char* end = internal::CompressFragment(fragment, fragment_size,
+                                           dest, table, table_size);
+    writer->Append(dest, end - dest);
+    written += (end - dest);
+
+    N -= num_to_read;
+    reader->Skip(pending_advance);
+  }
+
+  delete[] scratch;
+  delete[] scratch_output;
+
+  return written;
+}
+
+// -----------------------------------------------------------------------
+// IOVec interfaces
+// -----------------------------------------------------------------------
+
+// A type that writes to an iovec.
+// Note that this is not a "ByteSink", but a type that matches the
+// Writer template argument to SnappyDecompressor::DecompressAllTags().
+class SnappyIOVecWriter {
+ private:
+  const struct iovec* output_iov_;
+  const size_t output_iov_count_;
+
+  // We are currently writing into output_iov_[curr_iov_index_].
+  int curr_iov_index_;
+
+  // Bytes written to output_iov_[curr_iov_index_] so far.
+  size_t curr_iov_written_;
+
+  // Total bytes decompressed into output_iov_ so far.
+  size_t total_written_;
+
+  // Maximum number of bytes that will be decompressed into output_iov_.
+  size_t output_limit_;
+
+  inline char* GetIOVecPointer(int index, size_t offset) {
+    return reinterpret_cast<char*>(output_iov_[index].iov_base) +
+        offset;
+  }
+
+ public:
+  // Does not take ownership of iov. iov must be valid during the
+  // entire lifetime of the SnappyIOVecWriter.
+  inline SnappyIOVecWriter(const struct iovec* iov, size_t iov_count)
+      : output_iov_(iov),
+        output_iov_count_(iov_count),
+        curr_iov_index_(0),
+        curr_iov_written_(0),
+        total_written_(0),
+        output_limit_(-1) {
+  }
+
+  inline void SetExpectedLength(size_t len) {
+    output_limit_ = len;
+  }
+
+  inline bool CheckLength() const {
+    return total_written_ == output_limit_;
+  }
+
+  inline bool Append(const char* ip, size_t len) {
+    if (total_written_ + len > output_limit_) {
+      return false;
+    }
+
+    while (len > 0) {
+      assert(curr_iov_written_ <= output_iov_[curr_iov_index_].iov_len);
+      if (curr_iov_written_ >= output_iov_[curr_iov_index_].iov_len) {
+        // This iovec is full. Go to the next one.
+        if (curr_iov_index_ + 1 >= output_iov_count_) {
+          return false;
+        }
+        curr_iov_written_ = 0;
+        ++curr_iov_index_;
+      }
+
+      const size_t to_write = std::min(
+          len, output_iov_[curr_iov_index_].iov_len - curr_iov_written_);
+      memcpy(GetIOVecPointer(curr_iov_index_, curr_iov_written_),
+             ip,
+             to_write);
+      curr_iov_written_ += to_write;
+      total_written_ += to_write;
+      ip += to_write;
+      len -= to_write;
+    }
+
+    return true;
+  }
+
+  inline bool TryFastAppend(const char* ip, size_t available, size_t len) {
+    const size_t space_left = output_limit_ - total_written_;
+    if (len <= 16 && available >= 16 + kMaximumTagLength && space_left >= 16 &&
+        output_iov_[curr_iov_index_].iov_len - curr_iov_written_ >= 16) {
+      // Fast path, used for the majority (about 95%) of invocations.
+      char* ptr = GetIOVecPointer(curr_iov_index_, curr_iov_written_);
+      UnalignedCopy64(ip, ptr);
+      UnalignedCopy64(ip + 8, ptr + 8);
+      curr_iov_written_ += len;
+      total_written_ += len;
+      return true;
+    }
+
+    return false;
+  }
+
+  inline bool AppendFromSelf(size_t offset, size_t len) {
+    if (offset > total_written_ || offset == 0) {
+      return false;
+    }
+    const size_t space_left = output_limit_ - total_written_;
+    if (len > space_left) {
+      return false;
+    }
+
+    // Locate the iovec from which we need to start the copy.
+    int from_iov_index = curr_iov_index_;
+    size_t from_iov_offset = curr_iov_written_;
+    while (offset > 0) {
+      if (from_iov_offset >= offset) {
+        from_iov_offset -= offset;
+        break;
+      }
+
+      offset -= from_iov_offset;
+      --from_iov_index;
+      assert(from_iov_index >= 0);
+      from_iov_offset = output_iov_[from_iov_index].iov_len;
+    }
+
+    // Copy <len> bytes starting from the iovec pointed to by from_iov_index to
+    // the current iovec.
+    while (len > 0) {
+      assert(from_iov_index <= curr_iov_index_);
+      if (from_iov_index != curr_iov_index_) {
+        const size_t to_copy = std::min(
+            output_iov_[from_iov_index].iov_len - from_iov_offset,
+            len);
+        Append(GetIOVecPointer(from_iov_index, from_iov_offset), to_copy);
+        len -= to_copy;
+        if (len > 0) {
+          ++from_iov_index;
+          from_iov_offset = 0;
+        }
+      } else {
+        assert(curr_iov_written_ <= output_iov_[curr_iov_index_].iov_len);
+        size_t to_copy = std::min(output_iov_[curr_iov_index_].iov_len -
+                                      curr_iov_written_,
+                                  len);
+        if (to_copy == 0) {
+          // This iovec is full. Go to the next one.
+          if (curr_iov_index_ + 1 >= output_iov_count_) {
+            return false;
+          }
+          ++curr_iov_index_;
+          curr_iov_written_ = 0;
+          continue;
+        }
+        if (to_copy > len) {
+          to_copy = len;
+        }
+        IncrementalCopy(GetIOVecPointer(from_iov_index, from_iov_offset),
+                        GetIOVecPointer(curr_iov_index_, curr_iov_written_),
+                        to_copy);
+        curr_iov_written_ += to_copy;
+        from_iov_offset += to_copy;
+        total_written_ += to_copy;
+        len -= to_copy;
+      }
+    }
+
+    return true;
+  }
+
+};
+
+bool RawUncompressToIOVec(const char* compressed, size_t compressed_length,
+                          const struct iovec* iov, size_t iov_cnt) {
+  ByteArraySource reader(compressed, compressed_length);
+  return RawUncompressToIOVec(&reader, iov, iov_cnt);
+}
+
+bool RawUncompressToIOVec(Source* compressed, const struct iovec* iov,
+                          size_t iov_cnt) {
+  SnappyIOVecWriter output(iov, iov_cnt);
+  return InternalUncompress(compressed, &output);
+}
+
+// -----------------------------------------------------------------------
+// Flat array interfaces
+// -----------------------------------------------------------------------
+
+// A type that writes to a flat array.
+// Note that this is not a "ByteSink", but a type that matches the
+// Writer template argument to SnappyDecompressor::DecompressAllTags().
+class SnappyArrayWriter {
+ private:
+  char* base_;
+  char* op_;
+  char* op_limit_;
+
+ public:
+  inline explicit SnappyArrayWriter(char* dst)
+      : base_(dst),
+        op_(dst) {
+  }
+
+  inline void SetExpectedLength(size_t len) {
+    op_limit_ = op_ + len;
+  }
+
+  inline bool CheckLength() const {
+    return op_ == op_limit_;
+  }
+
+  inline bool Append(const char* ip, size_t len) {
+    char* op = op_;
+    const size_t space_left = op_limit_ - op;
+    if (space_left < len) {
+      return false;
+    }
+    memcpy(op, ip, len);
+    op_ = op + len;
+    return true;
+  }
+
+  inline bool TryFastAppend(const char* ip, size_t available, size_t len) {
+    char* op = op_;
+    const size_t space_left = op_limit_ - op;
+    if (len <= 16 && available >= 16 + kMaximumTagLength && space_left >= 16) {
+      // Fast path, used for the majority (about 95%) of invocations.
+      UnalignedCopy64(ip, op);
+      UnalignedCopy64(ip + 8, op + 8);
+      op_ = op + len;
+      return true;
+    } else {
+      return false;
+    }
+  }
+
+  inline bool AppendFromSelf(size_t offset, size_t len) {
+    char* op = op_;
+    const size_t space_left = op_limit_ - op;
+
+    // Check if we try to append from before the start of the buffer.
+    // Normally this would just be a check for "produced < offset",
+    // but "produced <= offset - 1u" is equivalent for every case
+    // except the one where offset==0, where the right side will wrap around
+    // to a very big number. This is convenient, as offset==0 is another
+    // invalid case that we also want to catch, so that we do not go
+    // into an infinite loop.
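+    // For example, with produced == 5: offset == 3 gives 5 <= 2 (false, so
+    // the copy proceeds), offset == 6 gives 5 <= 5 (rejected, it would read
+    // before the buffer), and offset == 0 wraps to 5 <= SIZE_MAX (rejected).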
+    assert(op >= base_);
+    size_t produced = op - base_;
+    if (produced <= offset - 1u) {
+      return false;
+    }
+    if (len <= 16 && offset >= 8 && space_left >= 16) {
+      // Fast path, used for the majority (70-80%) of dynamic invocations.
+      UnalignedCopy64(op - offset, op);
+      UnalignedCopy64(op - offset + 8, op + 8);
+    } else {
+      if (space_left >= len + kMaxIncrementCopyOverflow) {
+        IncrementalCopyFastPath(op - offset, op, len);
+      } else {
+        if (space_left < len) {
+          return false;
+        }
+        IncrementalCopy(op - offset, op, len);
+      }
+    }
+
+    op_ = op + len;
+    return true;
+  }
+};
+
+bool RawUncompress(const char* compressed, size_t n, char* uncompressed) {
+  ByteArraySource reader(compressed, n);
+  return RawUncompress(&reader, uncompressed);
+}
+
+bool RawUncompress(Source* compressed, char* uncompressed) {
+  SnappyArrayWriter output(uncompressed);
+  return InternalUncompress(compressed, &output);
+}
+
+bool Uncompress(const char* compressed, size_t n, string* uncompressed) {
+  size_t ulength;
+  if (!GetUncompressedLength(compressed, n, &ulength)) {
+    return false;
+  }
+  // On 32-bit builds: max_size() < kuint32max.  Check for that instead
+  // of crashing (e.g., consider externally specified compressed data).
+  if (ulength > uncompressed->max_size()) {
+    return false;
+  }
+  STLStringResizeUninitialized(uncompressed, ulength);
+  return RawUncompress(compressed, n, string_as_array(uncompressed));
+}
+
+
+// A Writer that drops everything on the floor and just does validation
+class SnappyDecompressionValidator {
+ private:
+  size_t expected_;
+  size_t produced_;
+
+ public:
+  inline SnappyDecompressionValidator() : produced_(0) { }
+  inline void SetExpectedLength(size_t len) {
+    expected_ = len;
+  }
+  inline bool CheckLength() const {
+    return expected_ == produced_;
+  }
+  inline bool Append(const char* ip, size_t len) {
+    produced_ += len;
+    return produced_ <= expected_;
+  }
+  inline bool TryFastAppend(const char* ip, size_t available, size_t length) {
+    return false;
+  }
+  inline bool AppendFromSelf(size_t offset, size_t len) {
+    // See SnappyArrayWriter::AppendFromSelf for an explanation of
+    // the "offset - 1u" trick.
+    if (produced_ <= offset - 1u) return false;
+    produced_ += len;
+    return produced_ <= expected_;
+  }
+};
+
+bool IsValidCompressedBuffer(const char* compressed, size_t n) {
+  ByteArraySource reader(compressed, n);
+  SnappyDecompressionValidator writer;
+  return InternalUncompress(&reader, &writer);
+}
+
+void RawCompress(const char* input,
+                 size_t input_length,
+                 char* compressed,
+                 size_t* compressed_length) {
+  ByteArraySource reader(input, input_length);
+  UncheckedByteArraySink writer(compressed);
+  Compress(&reader, &writer);
+
+  // Compute how many bytes were added
+  *compressed_length = (writer.CurrentDestination() - compressed);
+}
+
+size_t Compress(const char* input, size_t input_length, string* compressed) {
+  // Pre-grow the buffer to the max length of the compressed output
+  compressed->resize(MaxCompressedLength(input_length));
+
+  size_t compressed_length;
+  RawCompress(input, input_length, string_as_array(compressed),
+              &compressed_length);
+  compressed->resize(compressed_length);
+  return compressed_length;
+}
+
+
+} // end namespace snappy
+
diff --git a/c-blosc/internal-complibs/snappy-1.1.1/snappy.h b/c-blosc/internal-complibs/snappy-1.1.1/snappy.h
new file mode 100644
index 0000000..244cc09
--- /dev/null
+++ b/c-blosc/internal-complibs/snappy-1.1.1/snappy.h
@@ -0,0 +1,192 @@
+// Copyright 2005 and onwards Google Inc.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// A light-weight compression algorithm.  It is designed for speed of
+// compression and decompression, rather than for the utmost in space
+// savings.
+//
+// For getting better compression ratios when you are compressing data
+// with long repeated sequences or compressing data that is similar to
+// other data, while still compressing fast, you might look at first
+// using BMDiff and then compressing the output of BMDiff with
+// Snappy.
+
+#ifndef UTIL_SNAPPY_SNAPPY_H__
+#define UTIL_SNAPPY_SNAPPY_H__
+
+#include <stddef.h>
+#include <string>
+
+#include "snappy-stubs-public.h"
+
+// Windows does not define ssize_t by default.  This is a workaround.
+// Please note that this is only defined in the Blosc sources of Snappy.
+#if defined(_WIN32)  && !defined(__MINGW32__)
+#include <BaseTsd.h>
+typedef SSIZE_T ssize_t;
+#endif
+
+
+namespace snappy {
+  class Source;
+  class Sink;
+
+  // ------------------------------------------------------------------------
+  // Generic compression/decompression routines.
+  // ------------------------------------------------------------------------
+
+  // Compress the bytes read from "*source" and append to "*sink". Return the
+  // number of bytes written.
+  size_t Compress(Source* source, Sink* sink);
+
+  // Find the uncompressed length of the given stream, as given by the header.
+  // Note that the true length could deviate from this; the stream could e.g.
+  // be truncated.
+  //
+  // Also note that this leaves "*source" in a state that is unsuitable for
+  // further operations, such as RawUncompress(). You will need to rewind
+  // or recreate the source yourself before attempting any further calls.
+  bool GetUncompressedLength(Source* source, uint32* result);
+
+  // ------------------------------------------------------------------------
+  // Higher-level string based routines (should be sufficient for most users)
+  // ------------------------------------------------------------------------
+
+  // Sets "*output" to the compressed version of "input[0,input_length-1]".
+  // Original contents of *output are lost.
+  //
+  // REQUIRES: "input[]" is not an alias of "*output".
+  size_t Compress(const char* input, size_t input_length, string* output);
+
+  // Decompresses "compressed[0,compressed_length-1]" to "*uncompressed".
+  // Original contents of "*uncompressed" are lost.
+  //
+  // REQUIRES: "compressed[]" is not an alias of "*uncompressed".
+  //
+  // returns false if the message is corrupted and could not be decompressed
+  bool Uncompress(const char* compressed, size_t compressed_length,
+                  string* uncompressed);
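+
+  // Example round trip with the string-based API (a usage sketch):
+  //
+  //    std::string original(1000, 'x');  // any input data
+  //    std::string compressed, restored;
+  //    snappy::Compress(original.data(), original.size(), &compressed);
+  //    if (snappy::Uncompress(compressed.data(), compressed.size(),
+  //                           &restored)) {
+  //      assert(restored == original);  // lossless
+  //    }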
+
+
+  // ------------------------------------------------------------------------
+  // Lower-level character array based routines.  May be useful for
+  // efficiency reasons in certain circumstances.
+  // ------------------------------------------------------------------------
+
+  // REQUIRES: "compressed" must point to an area of memory that is at
+  // least "MaxCompressedLength(input_length)" bytes in length.
+  //
+  // Takes the data stored in "input[0..input_length-1]" and stores
+  // it in the array pointed to by "compressed".
+  //
+  // "*compressed_length" is set to the length of the compressed output.
+  //
+  // Example:
+  //    char* output = new char[snappy::MaxCompressedLength(input_length)];
+  //    size_t output_length;
+  //    RawCompress(input, input_length, output, &output_length);
+  //    ... Process(output, output_length) ...
+  //    delete [] output;
+  void RawCompress(const char* input,
+                   size_t input_length,
+                   char* compressed,
+                   size_t* compressed_length);
+
+  // Given data in "compressed[0..compressed_length-1]" generated by
+  // calling the Snappy::Compress routine, this routine
+  // stores the uncompressed data to
+  //    uncompressed[0..GetUncompressedLength(compressed)-1]
+  // returns false if the message is corrupted and could not be decompressed
+  bool RawUncompress(const char* compressed, size_t compressed_length,
+                     char* uncompressed);
+
+  // Given data from the byte source 'compressed' generated by calling
+  // the snappy::Compress routine, this routine stores the uncompressed
+  // data to
+  //    uncompressed[0..GetUncompressedLength(compressed,compressed_length)-1]
+  // returns false if the message is corrupted and could not be decompressed
+  bool RawUncompress(Source* compressed, char* uncompressed);
+
+  // Given data in "compressed[0..compressed_length-1]" generated by
+  // calling the snappy::Compress routine, this routine
+  // stores the uncompressed data to the iovec "iov". The number of physical
+  // buffers in "iov" is given by iov_cnt and their cumulative size
+  // must be at least GetUncompressedLength(compressed). The individual buffers
+  // in "iov" must not overlap with each other.
+  //
+  // returns false if the message is corrupted and could not be decompressed
+  bool RawUncompressToIOVec(const char* compressed, size_t compressed_length,
+                            const struct iovec* iov, size_t iov_cnt);
+
+  // Given data from the byte source 'compressed' generated by calling
+  // the snappy::Compress routine, this routine stores the uncompressed
+  // data to the iovec "iov". The number of physical
+  // buffers in "iov" is given by iov_cnt and their cumulative size
+  // must be at least GetUncompressedLength(compressed). The individual buffers
+  // in "iov" must not overlap with each other.
+  //
+  // returns false if the message is corrupted and could not be decompressed
+  bool RawUncompressToIOVec(Source* compressed, const struct iovec* iov,
+                            size_t iov_cnt);
+
+  // Returns the maximal size of the compressed representation of
+  // input data that is "source_bytes" bytes in length.
+  size_t MaxCompressedLength(size_t source_bytes);
+
+  // REQUIRES: "compressed[]" was produced by RawCompress() or Compress()
+  // Returns true and stores the length of the uncompressed data in
+  // *result normally.  Returns false on parsing error.
+  // This operation takes O(1) time.
+  bool GetUncompressedLength(const char* compressed, size_t compressed_length,
+                             size_t* result);
+
+  // Returns true iff the contents of "compressed[]" can be uncompressed
+  // successfully.  Does not return the uncompressed data.  Takes
+  // time proportional to compressed_length, but is usually at least
+  // a factor of four faster than actual decompression.
+  bool IsValidCompressedBuffer(const char* compressed,
+                               size_t compressed_length);
+
+  // The size of a compression block. Note that many parts of the compression
+  // code assume that kBlockSize <= 65536; in particular, the hash table
+  // can only store 16-bit offsets, and EmitCopy() also assumes the offset
+  // is 65535 bytes or less. Note also that if you change this, it will
+  // affect the framing format (see framing_format.txt).
+  //
+  // Note that there might be older data around that is compressed with larger
+  // block sizes, so the decompression code should not rely on the
+  // non-existence of long backreferences.
+  static const int kBlockLog = 16;
+  static const size_t kBlockSize = 1 << kBlockLog;
+
+  static const int kMaxHashTableBits = 14;
+  static const size_t kMaxHashTableSize = 1 << kMaxHashTableBits;
+}  // end namespace snappy
+
+
+#endif  // UTIL_SNAPPY_SNAPPY_H__
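
For reference, the string-based routines above are the intended entry point for
most callers. Here is a minimal round-trip sketch, assuming the header is
reachable as <snappy.h> and the library is linked (the include path and the
input data are illustrative, not part of this patch):

    // Hedged sketch: compress, validate, and decompress via the string API.
    #include <cassert>
    #include <iostream>
    #include <string>
    #include <snappy.h>

    int main() {
      const std::string original(1000, 'x');   // highly compressible input

      std::string compressed;
      snappy::Compress(original.data(), original.size(), &compressed);

      // Cheap validity check; per the comment above, usually at least
      // four times faster than actually decompressing.
      assert(snappy::IsValidCompressedBuffer(compressed.data(),
                                             compressed.size()));

      std::string restored;
      bool ok = snappy::Uncompress(compressed.data(), compressed.size(),
                                   &restored);
      assert(ok && restored == original);
      std::cout << original.size() << " -> " << compressed.size() << " bytes\n";
      return 0;
    }
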
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/adler32.c b/c-blosc/internal-complibs/zlib-1.2.8/adler32.c
new file mode 100644
index 0000000..a868f07
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/adler32.c
@@ -0,0 +1,179 @@
+/* adler32.c -- compute the Adler-32 checksum of a data stream
+ * Copyright (C) 1995-2011 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* @(#) $Id$ */
+
+#include "zutil.h"
+
+#define local static
+
+local uLong adler32_combine_ OF((uLong adler1, uLong adler2, z_off64_t len2));
+
+#define BASE 65521      /* largest prime smaller than 65536 */
+#define NMAX 5552
+/* NMAX is the largest n such that 255n(n+1)/2 + (n+1)(BASE-1) <= 2^32-1 */
+
+#define DO1(buf,i)  {adler += (buf)[i]; sum2 += adler;}
+#define DO2(buf,i)  DO1(buf,i); DO1(buf,i+1);
+#define DO4(buf,i)  DO2(buf,i); DO2(buf,i+2);
+#define DO8(buf,i)  DO4(buf,i); DO4(buf,i+4);
+#define DO16(buf)   DO8(buf,0); DO8(buf,8);
+
+/* use NO_DIVIDE if your processor does not do division in hardware --
+   try it both ways to see which is faster */
+#ifdef NO_DIVIDE
+/* note that this assumes BASE is 65521, where 65536 % 65521 == 15, so a's high
+   half folds in as 15*tmp (thank you to John Reiser for pointing this out) */
+#  define CHOP(a) \
+    do { \
+        unsigned long tmp = a >> 16; \
+        a &= 0xffffUL; \
+        a += (tmp << 4) - tmp; \
+    } while (0)
+#  define MOD28(a) \
+    do { \
+        CHOP(a); \
+        if (a >= BASE) a -= BASE; \
+    } while (0)
+#  define MOD(a) \
+    do { \
+        CHOP(a); \
+        MOD28(a); \
+    } while (0)
+#  define MOD63(a) \
+    do { /* this assumes a is not negative */ \
+        z_off64_t tmp = a >> 32; \
+        a &= 0xffffffffL; \
+        a += (tmp << 8) - (tmp << 5) + tmp; \
+        tmp = a >> 16; \
+        a &= 0xffffL; \
+        a += (tmp << 4) - tmp; \
+        tmp = a >> 16; \
+        a &= 0xffffL; \
+        a += (tmp << 4) - tmp; \
+        if (a >= BASE) a -= BASE; \
+    } while (0)
+#else
+#  define MOD(a) a %= BASE
+#  define MOD28(a) a %= BASE
+#  define MOD63(a) a %= BASE
+#endif
+
+/* ========================================================================= */
+uLong ZEXPORT adler32(adler, buf, len)
+    uLong adler;
+    const Bytef *buf;
+    uInt len;
+{
+    unsigned long sum2;
+    unsigned n;
+
+    /* split Adler-32 into component sums */
+    sum2 = (adler >> 16) & 0xffff;
+    adler &= 0xffff;
+
+    /* in case user likes doing a byte at a time, keep it fast */
+    if (len == 1) {
+        adler += buf[0];
+        if (adler >= BASE)
+            adler -= BASE;
+        sum2 += adler;
+        if (sum2 >= BASE)
+            sum2 -= BASE;
+        return adler | (sum2 << 16);
+    }
+
+    /* initial Adler-32 value (deferred check for len == 1 speed) */
+    if (buf == Z_NULL)
+        return 1L;
+
+    /* in case short lengths are provided, keep it somewhat fast */
+    if (len < 16) {
+        while (len--) {
+            adler += *buf++;
+            sum2 += adler;
+        }
+        if (adler >= BASE)
+            adler -= BASE;
+        MOD28(sum2);            /* only added so many BASE's */
+        return adler | (sum2 << 16);
+    }
+
+    /* do length NMAX blocks -- requires just one modulo operation */
+    while (len >= NMAX) {
+        len -= NMAX;
+        n = NMAX / 16;          /* NMAX is divisible by 16 */
+        do {
+            DO16(buf);          /* 16 sums unrolled */
+            buf += 16;
+        } while (--n);
+        MOD(adler);
+        MOD(sum2);
+    }
+
+    /* do remaining bytes (less than NMAX, still just one modulo) */
+    if (len) {                  /* avoid modulos if none remaining */
+        while (len >= 16) {
+            len -= 16;
+            DO16(buf);
+            buf += 16;
+        }
+        while (len--) {
+            adler += *buf++;
+            sum2 += adler;
+        }
+        MOD(adler);
+        MOD(sum2);
+    }
+
+    /* return recombined sums */
+    return adler | (sum2 << 16);
+}
+
+/* ========================================================================= */
+local uLong adler32_combine_(adler1, adler2, len2)
+    uLong adler1;
+    uLong adler2;
+    z_off64_t len2;
+{
+    unsigned long sum1;
+    unsigned long sum2;
+    unsigned rem;
+
+    /* for negative len, return invalid adler32 as a clue for debugging */
+    if (len2 < 0)
+        return 0xffffffffUL;
+
+    /* the derivation of this formula is left as an exercise for the reader */
+    MOD63(len2);                /* assumes len2 >= 0 */
+    rem = (unsigned)len2;
+    sum1 = adler1 & 0xffff;
+    sum2 = rem * sum1;
+    MOD(sum2);
+    sum1 += (adler2 & 0xffff) + BASE - 1;
+    sum2 += ((adler1 >> 16) & 0xffff) + ((adler2 >> 16) & 0xffff) + BASE - rem;
+    if (sum1 >= BASE) sum1 -= BASE;
+    if (sum1 >= BASE) sum1 -= BASE;
+    if (sum2 >= (BASE << 1)) sum2 -= (BASE << 1);
+    if (sum2 >= BASE) sum2 -= BASE;
+    return sum1 | (sum2 << 16);
+}
+
+/* ========================================================================= */
+uLong ZEXPORT adler32_combine(adler1, adler2, len2)
+    uLong adler1;
+    uLong adler2;
+    z_off_t len2;
+{
+    return adler32_combine_(adler1, adler2, len2);
+}
+
+uLong ZEXPORT adler32_combine64(adler1, adler2, len2)
+    uLong adler1;
+    uLong adler2;
+    z_off64_t len2;
+{
+    return adler32_combine_(adler1, adler2, len2);
+}
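
Both adler32() and adler32_combine() are exported, and combining lets two
independently checksummed pieces be stitched together without rescanning the
data. A small sketch against the API above, assuming zlib.h is on the include
path (the buffer contents are arbitrary):

    /* Hedged sketch: incremental Adler-32 plus adler32_combine(). */
    #include <assert.h>
    #include "zlib.h"

    int main(void) {
        const unsigned char part1[] = "hello ";
        const unsigned char part2[] = "world";

        /* seed with adler32(0L, Z_NULL, 0), then update incrementally */
        uLong a1 = adler32(adler32(0L, Z_NULL, 0), part1, sizeof(part1) - 1);
        uLong a2 = adler32(adler32(0L, Z_NULL, 0), part2, sizeof(part2) - 1);

        /* checksum of the concatenation, computed directly... */
        uLong whole = adler32(a1, part2, sizeof(part2) - 1);

        /* ...equals the two partial checksums combined after the fact */
        assert(whole == adler32_combine(a1, a2, sizeof(part2) - 1));
        return 0;
    }
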
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/compress.c b/c-blosc/internal-complibs/zlib-1.2.8/compress.c
new file mode 100644
index 0000000..6e97626
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/compress.c
@@ -0,0 +1,80 @@
+/* compress.c -- compress a memory buffer
+ * Copyright (C) 1995-2005 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* @(#) $Id$ */
+
+#define ZLIB_INTERNAL
+#include "zlib.h"
+
+/* ===========================================================================
+     Compresses the source buffer into the destination buffer. The level
+   parameter has the same meaning as in deflateInit.  sourceLen is the byte
+   length of the source buffer. Upon entry, destLen is the total size of the
+   destination buffer, which must be at least 0.1% larger than sourceLen plus
+   12 bytes. Upon exit, destLen is the actual size of the compressed buffer.
+
+     compress2 returns Z_OK if success, Z_MEM_ERROR if there was not enough
+   memory, Z_BUF_ERROR if there was not enough room in the output buffer,
+   Z_STREAM_ERROR if the level parameter is invalid.
+*/
+int ZEXPORT compress2 (dest, destLen, source, sourceLen, level)
+    Bytef *dest;
+    uLongf *destLen;
+    const Bytef *source;
+    uLong sourceLen;
+    int level;
+{
+    z_stream stream;
+    int err;
+
+    stream.next_in = (z_const Bytef *)source;
+    stream.avail_in = (uInt)sourceLen;
+#ifdef MAXSEG_64K
+    /* Check for source > 64K on 16-bit machine: */
+    if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR;
+#endif
+    stream.next_out = dest;
+    stream.avail_out = (uInt)*destLen;
+    if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;
+
+    stream.zalloc = (alloc_func)0;
+    stream.zfree = (free_func)0;
+    stream.opaque = (voidpf)0;
+
+    err = deflateInit(&stream, level);
+    if (err != Z_OK) return err;
+
+    err = deflate(&stream, Z_FINISH);
+    if (err != Z_STREAM_END) {
+        deflateEnd(&stream);
+        return err == Z_OK ? Z_BUF_ERROR : err;
+    }
+    *destLen = stream.total_out;
+
+    err = deflateEnd(&stream);
+    return err;
+}
+
+/* ===========================================================================
+ */
+int ZEXPORT compress (dest, destLen, source, sourceLen)
+    Bytef *dest;
+    uLongf *destLen;
+    const Bytef *source;
+    uLong sourceLen;
+{
+    return compress2(dest, destLen, source, sourceLen, Z_DEFAULT_COMPRESSION);
+}
+
+/* ===========================================================================
+     If the default memLevel or windowBits for deflateInit() is changed, then
+   this function needs to be updated.
+ */
+uLong ZEXPORT compressBound (sourceLen)
+    uLong sourceLen;
+{
+    return sourceLen + (sourceLen >> 12) + (sourceLen >> 14) +
+           (sourceLen >> 25) + 13;
+}
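
compressBound() exists precisely so callers can size the destination before
calling compress2(). A hedged sketch of the one-shot pattern, assuming zlib.h
is on the include path (the input string is illustrative):

    /* Hedged sketch: worst-case sizing plus one-shot compression. */
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>
    #include "zlib.h"

    int main(void) {
        const char *src = "a one-shot zlib compression example";
        uLong srcLen = (uLong)strlen(src) + 1;

        uLong dstLen = compressBound(srcLen);   /* worst-case output size */
        Bytef *dst = malloc(dstLen);
        assert(dst != NULL);

        /* on success, dstLen is rewritten with the actual compressed size */
        int rc = compress2(dst, &dstLen, (const Bytef *)src, srcLen,
                           Z_BEST_COMPRESSION);
        assert(rc == Z_OK);

        free(dst);
        return 0;
    }
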
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/crc32.c b/c-blosc/internal-complibs/zlib-1.2.8/crc32.c
new file mode 100644
index 0000000..979a719
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/crc32.c
@@ -0,0 +1,425 @@
+/* crc32.c -- compute the CRC-32 of a data stream
+ * Copyright (C) 1995-2006, 2010, 2011, 2012 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ *
+ * Thanks to Rodney Brown <rbrown64 at csc.com.au> for his contribution of faster
+ * CRC methods: exclusive-oring 32 bits of data at a time, and pre-computing
+ * tables for updating the shift register in one step with three exclusive-ors
+ * instead of four steps with four exclusive-ors.  This results in about a
+ * factor of two increase in speed on a Power PC G4 (PPC7455) using gcc -O3.
+ */
+
+/* @(#) $Id$ */
+
+/*
+  Note on the use of DYNAMIC_CRC_TABLE: there is no mutex or semaphore
+  protection on the static variables used to control the first-use generation
+  of the crc tables.  Therefore, if you #define DYNAMIC_CRC_TABLE, you should
+  first call get_crc_table() to initialize the tables before allowing more than
+  one thread to use crc32().
+
+  DYNAMIC_CRC_TABLE and MAKECRCH can be #defined to write out crc32.h.
+ */
+
+#ifdef MAKECRCH
+#  include <stdio.h>
+#  ifndef DYNAMIC_CRC_TABLE
+#    define DYNAMIC_CRC_TABLE
+#  endif /* !DYNAMIC_CRC_TABLE */
+#endif /* MAKECRCH */
+
+#include "zutil.h"      /* for STDC and FAR definitions */
+
+#define local static
+
+/* Definitions for doing the crc four data bytes at a time. */
+#if !defined(NOBYFOUR) && defined(Z_U4)
+#  define BYFOUR
+#endif
+#ifdef BYFOUR
+   local unsigned long crc32_little OF((unsigned long,
+                        const unsigned char FAR *, unsigned));
+   local unsigned long crc32_big OF((unsigned long,
+                        const unsigned char FAR *, unsigned));
+#  define TBLS 8
+#else
+#  define TBLS 1
+#endif /* BYFOUR */
+
+/* Local functions for crc concatenation */
+local unsigned long gf2_matrix_times OF((unsigned long *mat,
+                                         unsigned long vec));
+local void gf2_matrix_square OF((unsigned long *square, unsigned long *mat));
+local uLong crc32_combine_ OF((uLong crc1, uLong crc2, z_off64_t len2));
+
+
+#ifdef DYNAMIC_CRC_TABLE
+
+local volatile int crc_table_empty = 1;
+local z_crc_t FAR crc_table[TBLS][256];
+local void make_crc_table OF((void));
+#ifdef MAKECRCH
+   local void write_table OF((FILE *, const z_crc_t FAR *));
+#endif /* MAKECRCH */
+/*
+  Generate tables for a byte-wise 32-bit CRC calculation on the polynomial:
+  x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1.
+
+  Polynomials over GF(2) are represented in binary, one bit per coefficient,
+  with the lowest powers in the most significant bit.  Then adding polynomials
+  is just exclusive-or, and multiplying a polynomial by x is a right shift by
+  one.  If we call the above polynomial p, and represent a byte as the
+  polynomial q, also with the lowest power in the most significant bit (so the
+  byte 0xb1 is the polynomial x^7+x^3+x^2+1), then the CRC is (q*x^32) mod p,
+  where a mod b means the remainder after dividing a by b.
+
+  This calculation is done using the shift-register method of multiplying and
+  taking the remainder.  The register is initialized to zero, and for each
+  incoming bit, x^32 is added mod p to the register if the bit is a one (where
+  x^32 mod p is p+x^32 = x^26+...+1), and the register is multiplied mod p by
+  x (which is shifting right by one and adding x^32 mod p if the bit shifted
+  out is a one).  We start with the highest power (least significant bit) of
+  q and repeat for all eight bits of q.
+
+  The first table is simply the CRC of all possible eight bit values.  This is
+  all the information needed to generate CRCs on data a byte at a time for all
+  combinations of CRC register values and incoming bytes.  The remaining tables
+  allow for word-at-a-time CRC calculation for both big-endian and little-
+  endian machines, where a word is four bytes.
+*/
+local void make_crc_table()
+{
+    z_crc_t c;
+    int n, k;
+    z_crc_t poly;                       /* polynomial exclusive-or pattern */
+    /* terms of polynomial defining this crc (except x^32): */
+    static volatile int first = 1;      /* flag to limit concurrent making */
+    static const unsigned char p[] = {0,1,2,4,5,7,8,10,11,12,16,22,23,26};
+
+    /* See if another task is already doing this (not thread-safe, but better
+       than nothing -- significantly reduces duration of vulnerability in
+       case the advice about DYNAMIC_CRC_TABLE is ignored) */
+    if (first) {
+        first = 0;
+
+        /* make exclusive-or pattern from polynomial (0xedb88320UL) */
+        poly = 0;
+        for (n = 0; n < (int)(sizeof(p)/sizeof(unsigned char)); n++)
+            poly |= (z_crc_t)1 << (31 - p[n]);
+
+        /* generate a crc for every 8-bit value */
+        for (n = 0; n < 256; n++) {
+            c = (z_crc_t)n;
+            for (k = 0; k < 8; k++)
+                c = c & 1 ? poly ^ (c >> 1) : c >> 1;
+            crc_table[0][n] = c;
+        }
+
+#ifdef BYFOUR
+        /* generate crc for each value followed by one, two, and three zeros,
+           and then the byte reversal of those as well as the first table */
+        for (n = 0; n < 256; n++) {
+            c = crc_table[0][n];
+            crc_table[4][n] = ZSWAP32(c);
+            for (k = 1; k < 4; k++) {
+                c = crc_table[0][c & 0xff] ^ (c >> 8);
+                crc_table[k][n] = c;
+                crc_table[k + 4][n] = ZSWAP32(c);
+            }
+        }
+#endif /* BYFOUR */
+
+        crc_table_empty = 0;
+    }
+    else {      /* not first */
+        /* wait for the other task to finish (not efficient, but rare) */
+        while (crc_table_empty)
+            ;
+    }
+
+#ifdef MAKECRCH
+    /* write out CRC tables to crc32.h */
+    {
+        FILE *out;
+
+        out = fopen("crc32.h", "w");
+        if (out == NULL) return;
+        fprintf(out, "/* crc32.h -- tables for rapid CRC calculation\n");
+        fprintf(out, " * Generated automatically by crc32.c\n */\n\n");
+        fprintf(out, "local const z_crc_t FAR ");
+        fprintf(out, "crc_table[TBLS][256] =\n{\n  {\n");
+        write_table(out, crc_table[0]);
+#  ifdef BYFOUR
+        fprintf(out, "#ifdef BYFOUR\n");
+        for (k = 1; k < 8; k++) {
+            fprintf(out, "  },\n  {\n");
+            write_table(out, crc_table[k]);
+        }
+        fprintf(out, "#endif\n");
+#  endif /* BYFOUR */
+        fprintf(out, "  }\n};\n");
+        fclose(out);
+    }
+#endif /* MAKECRCH */
+}
+
+#ifdef MAKECRCH
+local void write_table(out, table)
+    FILE *out;
+    const z_crc_t FAR *table;
+{
+    int n;
+
+    for (n = 0; n < 256; n++)
+        fprintf(out, "%s0x%08lxUL%s", n % 5 ? "" : "    ",
+                (unsigned long)(table[n]),
+                n == 255 ? "\n" : (n % 5 == 4 ? ",\n" : ", "));
+}
+#endif /* MAKECRCH */
+
+#else /* !DYNAMIC_CRC_TABLE */
+/* ========================================================================
+ * Tables of CRC-32s of all single-byte values, made by make_crc_table().
+ */
+#include "crc32.h"
+#endif /* DYNAMIC_CRC_TABLE */
+
+/* =========================================================================
+ * This function can be used by asm versions of crc32()
+ */
+const z_crc_t FAR * ZEXPORT get_crc_table()
+{
+#ifdef DYNAMIC_CRC_TABLE
+    if (crc_table_empty)
+        make_crc_table();
+#endif /* DYNAMIC_CRC_TABLE */
+    return (const z_crc_t FAR *)crc_table;
+}
+
+/* ========================================================================= */
+#define DO1 crc = crc_table[0][((int)crc ^ (*buf++)) & 0xff] ^ (crc >> 8)
+#define DO8 DO1; DO1; DO1; DO1; DO1; DO1; DO1; DO1
+
+/* ========================================================================= */
+unsigned long ZEXPORT crc32(crc, buf, len)
+    unsigned long crc;
+    const unsigned char FAR *buf;
+    uInt len;
+{
+    if (buf == Z_NULL) return 0UL;
+
+#ifdef DYNAMIC_CRC_TABLE
+    if (crc_table_empty)
+        make_crc_table();
+#endif /* DYNAMIC_CRC_TABLE */
+
+#ifdef BYFOUR
+    if (sizeof(void *) == sizeof(ptrdiff_t)) {
+        z_crc_t endian;
+
+        endian = 1;
+        if (*((unsigned char *)(&endian)))
+            return crc32_little(crc, buf, len);
+        else
+            return crc32_big(crc, buf, len);
+    }
+#endif /* BYFOUR */
+    crc = crc ^ 0xffffffffUL;
+    while (len >= 8) {
+        DO8;
+        len -= 8;
+    }
+    if (len) do {
+        DO1;
+    } while (--len);
+    return crc ^ 0xffffffffUL;
+}
+
+#ifdef BYFOUR
+
+/* ========================================================================= */
+#define DOLIT4 c ^= *buf4++; \
+        c = crc_table[3][c & 0xff] ^ crc_table[2][(c >> 8) & 0xff] ^ \
+            crc_table[1][(c >> 16) & 0xff] ^ crc_table[0][c >> 24]
+#define DOLIT32 DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4
+
+/* ========================================================================= */
+local unsigned long crc32_little(crc, buf, len)
+    unsigned long crc;
+    const unsigned char FAR *buf;
+    unsigned len;
+{
+    register z_crc_t c;
+    register const z_crc_t FAR *buf4;
+
+    c = (z_crc_t)crc;
+    c = ~c;
+    while (len && ((ptrdiff_t)buf & 3)) {
+        c = crc_table[0][(c ^ *buf++) & 0xff] ^ (c >> 8);
+        len--;
+    }
+
+    buf4 = (const z_crc_t FAR *)(const void FAR *)buf;
+    while (len >= 32) {
+        DOLIT32;
+        len -= 32;
+    }
+    while (len >= 4) {
+        DOLIT4;
+        len -= 4;
+    }
+    buf = (const unsigned char FAR *)buf4;
+
+    if (len) do {
+        c = crc_table[0][(c ^ *buf++) & 0xff] ^ (c >> 8);
+    } while (--len);
+    c = ~c;
+    return (unsigned long)c;
+}
+
+/* ========================================================================= */
+#define DOBIG4 c ^= *++buf4; \
+        c = crc_table[4][c & 0xff] ^ crc_table[5][(c >> 8) & 0xff] ^ \
+            crc_table[6][(c >> 16) & 0xff] ^ crc_table[7][c >> 24]
+#define DOBIG32 DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4
+
+/* ========================================================================= */
+local unsigned long crc32_big(crc, buf, len)
+    unsigned long crc;
+    const unsigned char FAR *buf;
+    unsigned len;
+{
+    register z_crc_t c;
+    register const z_crc_t FAR *buf4;
+
+    c = ZSWAP32((z_crc_t)crc);
+    c = ~c;
+    while (len && ((ptrdiff_t)buf & 3)) {
+        c = crc_table[4][(c >> 24) ^ *buf++] ^ (c << 8);
+        len--;
+    }
+
+    buf4 = (const z_crc_t FAR *)(const void FAR *)buf;
+    buf4--;
+    while (len >= 32) {
+        DOBIG32;
+        len -= 32;
+    }
+    while (len >= 4) {
+        DOBIG4;
+        len -= 4;
+    }
+    buf4++;
+    buf = (const unsigned char FAR *)buf4;
+
+    if (len) do {
+        c = crc_table[4][(c >> 24) ^ *buf++] ^ (c << 8);
+    } while (--len);
+    c = ~c;
+    return (unsigned long)(ZSWAP32(c));
+}
+
+#endif /* BYFOUR */
+
+#define GF2_DIM 32      /* dimension of GF(2) vectors (length of CRC) */
+
+/* ========================================================================= */
+local unsigned long gf2_matrix_times(mat, vec)
+    unsigned long *mat;
+    unsigned long vec;
+{
+    unsigned long sum;
+
+    sum = 0;
+    while (vec) {
+        if (vec & 1)
+            sum ^= *mat;
+        vec >>= 1;
+        mat++;
+    }
+    return sum;
+}
+
+/* ========================================================================= */
+local void gf2_matrix_square(square, mat)
+    unsigned long *square;
+    unsigned long *mat;
+{
+    int n;
+
+    for (n = 0; n < GF2_DIM; n++)
+        square[n] = gf2_matrix_times(mat, mat[n]);
+}
+
+/* ========================================================================= */
+local uLong crc32_combine_(crc1, crc2, len2)
+    uLong crc1;
+    uLong crc2;
+    z_off64_t len2;
+{
+    int n;
+    unsigned long row;
+    unsigned long even[GF2_DIM];    /* even-power-of-two zeros operator */
+    unsigned long odd[GF2_DIM];     /* odd-power-of-two zeros operator */
+
+    /* degenerate case (also disallow negative lengths) */
+    if (len2 <= 0)
+        return crc1;
+
+    /* put operator for one zero bit in odd */
+    odd[0] = 0xedb88320UL;          /* CRC-32 polynomial */
+    row = 1;
+    for (n = 1; n < GF2_DIM; n++) {
+        odd[n] = row;
+        row <<= 1;
+    }
+
+    /* put operator for two zero bits in even */
+    gf2_matrix_square(even, odd);
+
+    /* put operator for four zero bits in odd */
+    gf2_matrix_square(odd, even);
+
+    /* apply len2 zeros to crc1 (first square will put the operator for one
+       zero byte, eight zero bits, in even) */
+    do {
+        /* apply zeros operator for this bit of len2 */
+        gf2_matrix_square(even, odd);
+        if (len2 & 1)
+            crc1 = gf2_matrix_times(even, crc1);
+        len2 >>= 1;
+
+        /* if no more bits set, then done */
+        if (len2 == 0)
+            break;
+
+        /* another iteration of the loop with odd and even swapped */
+        gf2_matrix_square(odd, even);
+        if (len2 & 1)
+            crc1 = gf2_matrix_times(odd, crc1);
+        len2 >>= 1;
+
+        /* if no more bits set, then done */
+    } while (len2 != 0);
+
+    /* return combined crc */
+    crc1 ^= crc2;
+    return crc1;
+}
+
+/* ========================================================================= */
+uLong ZEXPORT crc32_combine(crc1, crc2, len2)
+    uLong crc1;
+    uLong crc2;
+    z_off_t len2;
+{
+    return crc32_combine_(crc1, crc2, len2);
+}
+
+uLong ZEXPORT crc32_combine64(crc1, crc2, len2)
+    uLong crc1;
+    uLong crc2;
+    z_off64_t len2;
+{
+    return crc32_combine_(crc1, crc2, len2);
+}
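
The CRC-32 entry points mirror the Adler-32 ones: seed with
crc32(0L, Z_NULL, 0), update incrementally, and stitch independently computed
pieces with crc32_combine(). A short sketch, assuming zlib.h is on the include
path (the buffer contents are arbitrary):

    /* Hedged sketch: incremental CRC-32 plus crc32_combine(). */
    #include <assert.h>
    #include "zlib.h"

    int main(void) {
        const unsigned char a[] = "deflate";
        const unsigned char b[] = " stream";

        uLong ca = crc32(crc32(0L, Z_NULL, 0), a, sizeof(a) - 1);
        uLong cb = crc32(crc32(0L, Z_NULL, 0), b, sizeof(b) - 1);

        /* updating ca with b directly equals combining the partial CRCs */
        uLong whole = crc32(ca, b, sizeof(b) - 1);
        assert(whole == crc32_combine(ca, cb, sizeof(b) - 1));
        return 0;
    }
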
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/crc32.h b/c-blosc/internal-complibs/zlib-1.2.8/crc32.h
new file mode 100644
index 0000000..9e0c778
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/crc32.h
@@ -0,0 +1,441 @@
+/* crc32.h -- tables for rapid CRC calculation
+ * Generated automatically by crc32.c
+ */
+
+local const z_crc_t FAR crc_table[TBLS][256] =
+{
+  {
+    0x00000000UL, 0x77073096UL, 0xee0e612cUL, 0x990951baUL, 0x076dc419UL,
+    0x706af48fUL, 0xe963a535UL, 0x9e6495a3UL, 0x0edb8832UL, 0x79dcb8a4UL,
+    0xe0d5e91eUL, 0x97d2d988UL, 0x09b64c2bUL, 0x7eb17cbdUL, 0xe7b82d07UL,
+    0x90bf1d91UL, 0x1db71064UL, 0x6ab020f2UL, 0xf3b97148UL, 0x84be41deUL,
+    0x1adad47dUL, 0x6ddde4ebUL, 0xf4d4b551UL, 0x83d385c7UL, 0x136c9856UL,
+    0x646ba8c0UL, 0xfd62f97aUL, 0x8a65c9ecUL, 0x14015c4fUL, 0x63066cd9UL,
+    0xfa0f3d63UL, 0x8d080df5UL, 0x3b6e20c8UL, 0x4c69105eUL, 0xd56041e4UL,
+    0xa2677172UL, 0x3c03e4d1UL, 0x4b04d447UL, 0xd20d85fdUL, 0xa50ab56bUL,
+    0x35b5a8faUL, 0x42b2986cUL, 0xdbbbc9d6UL, 0xacbcf940UL, 0x32d86ce3UL,
+    0x45df5c75UL, 0xdcd60dcfUL, 0xabd13d59UL, 0x26d930acUL, 0x51de003aUL,
+    0xc8d75180UL, 0xbfd06116UL, 0x21b4f4b5UL, 0x56b3c423UL, 0xcfba9599UL,
+    0xb8bda50fUL, 0x2802b89eUL, 0x5f058808UL, 0xc60cd9b2UL, 0xb10be924UL,
+    0x2f6f7c87UL, 0x58684c11UL, 0xc1611dabUL, 0xb6662d3dUL, 0x76dc4190UL,
+    0x01db7106UL, 0x98d220bcUL, 0xefd5102aUL, 0x71b18589UL, 0x06b6b51fUL,
+    0x9fbfe4a5UL, 0xe8b8d433UL, 0x7807c9a2UL, 0x0f00f934UL, 0x9609a88eUL,
+    0xe10e9818UL, 0x7f6a0dbbUL, 0x086d3d2dUL, 0x91646c97UL, 0xe6635c01UL,
+    0x6b6b51f4UL, 0x1c6c6162UL, 0x856530d8UL, 0xf262004eUL, 0x6c0695edUL,
+    0x1b01a57bUL, 0x8208f4c1UL, 0xf50fc457UL, 0x65b0d9c6UL, 0x12b7e950UL,
+    0x8bbeb8eaUL, 0xfcb9887cUL, 0x62dd1ddfUL, 0x15da2d49UL, 0x8cd37cf3UL,
+    0xfbd44c65UL, 0x4db26158UL, 0x3ab551ceUL, 0xa3bc0074UL, 0xd4bb30e2UL,
+    0x4adfa541UL, 0x3dd895d7UL, 0xa4d1c46dUL, 0xd3d6f4fbUL, 0x4369e96aUL,
+    0x346ed9fcUL, 0xad678846UL, 0xda60b8d0UL, 0x44042d73UL, 0x33031de5UL,
+    0xaa0a4c5fUL, 0xdd0d7cc9UL, 0x5005713cUL, 0x270241aaUL, 0xbe0b1010UL,
+    0xc90c2086UL, 0x5768b525UL, 0x206f85b3UL, 0xb966d409UL, 0xce61e49fUL,
+    0x5edef90eUL, 0x29d9c998UL, 0xb0d09822UL, 0xc7d7a8b4UL, 0x59b33d17UL,
+    0x2eb40d81UL, 0xb7bd5c3bUL, 0xc0ba6cadUL, 0xedb88320UL, 0x9abfb3b6UL,
+    0x03b6e20cUL, 0x74b1d29aUL, 0xead54739UL, 0x9dd277afUL, 0x04db2615UL,
+    0x73dc1683UL, 0xe3630b12UL, 0x94643b84UL, 0x0d6d6a3eUL, 0x7a6a5aa8UL,
+    0xe40ecf0bUL, 0x9309ff9dUL, 0x0a00ae27UL, 0x7d079eb1UL, 0xf00f9344UL,
+    0x8708a3d2UL, 0x1e01f268UL, 0x6906c2feUL, 0xf762575dUL, 0x806567cbUL,
+    0x196c3671UL, 0x6e6b06e7UL, 0xfed41b76UL, 0x89d32be0UL, 0x10da7a5aUL,
+    0x67dd4accUL, 0xf9b9df6fUL, 0x8ebeeff9UL, 0x17b7be43UL, 0x60b08ed5UL,
+    0xd6d6a3e8UL, 0xa1d1937eUL, 0x38d8c2c4UL, 0x4fdff252UL, 0xd1bb67f1UL,
+    0xa6bc5767UL, 0x3fb506ddUL, 0x48b2364bUL, 0xd80d2bdaUL, 0xaf0a1b4cUL,
+    0x36034af6UL, 0x41047a60UL, 0xdf60efc3UL, 0xa867df55UL, 0x316e8eefUL,
+    0x4669be79UL, 0xcb61b38cUL, 0xbc66831aUL, 0x256fd2a0UL, 0x5268e236UL,
+    0xcc0c7795UL, 0xbb0b4703UL, 0x220216b9UL, 0x5505262fUL, 0xc5ba3bbeUL,
+    0xb2bd0b28UL, 0x2bb45a92UL, 0x5cb36a04UL, 0xc2d7ffa7UL, 0xb5d0cf31UL,
+    0x2cd99e8bUL, 0x5bdeae1dUL, 0x9b64c2b0UL, 0xec63f226UL, 0x756aa39cUL,
+    0x026d930aUL, 0x9c0906a9UL, 0xeb0e363fUL, 0x72076785UL, 0x05005713UL,
+    0x95bf4a82UL, 0xe2b87a14UL, 0x7bb12baeUL, 0x0cb61b38UL, 0x92d28e9bUL,
+    0xe5d5be0dUL, 0x7cdcefb7UL, 0x0bdbdf21UL, 0x86d3d2d4UL, 0xf1d4e242UL,
+    0x68ddb3f8UL, 0x1fda836eUL, 0x81be16cdUL, 0xf6b9265bUL, 0x6fb077e1UL,
+    0x18b74777UL, 0x88085ae6UL, 0xff0f6a70UL, 0x66063bcaUL, 0x11010b5cUL,
+    0x8f659effUL, 0xf862ae69UL, 0x616bffd3UL, 0x166ccf45UL, 0xa00ae278UL,
+    0xd70dd2eeUL, 0x4e048354UL, 0x3903b3c2UL, 0xa7672661UL, 0xd06016f7UL,
+    0x4969474dUL, 0x3e6e77dbUL, 0xaed16a4aUL, 0xd9d65adcUL, 0x40df0b66UL,
+    0x37d83bf0UL, 0xa9bcae53UL, 0xdebb9ec5UL, 0x47b2cf7fUL, 0x30b5ffe9UL,
+    0xbdbdf21cUL, 0xcabac28aUL, 0x53b39330UL, 0x24b4a3a6UL, 0xbad03605UL,
+    0xcdd70693UL, 0x54de5729UL, 0x23d967bfUL, 0xb3667a2eUL, 0xc4614ab8UL,
+    0x5d681b02UL, 0x2a6f2b94UL, 0xb40bbe37UL, 0xc30c8ea1UL, 0x5a05df1bUL,
+    0x2d02ef8dUL
+#ifdef BYFOUR
+  },
+  {
+    0x00000000UL, 0x191b3141UL, 0x32366282UL, 0x2b2d53c3UL, 0x646cc504UL,
+    0x7d77f445UL, 0x565aa786UL, 0x4f4196c7UL, 0xc8d98a08UL, 0xd1c2bb49UL,
+    0xfaefe88aUL, 0xe3f4d9cbUL, 0xacb54f0cUL, 0xb5ae7e4dUL, 0x9e832d8eUL,
+    0x87981ccfUL, 0x4ac21251UL, 0x53d92310UL, 0x78f470d3UL, 0x61ef4192UL,
+    0x2eaed755UL, 0x37b5e614UL, 0x1c98b5d7UL, 0x05838496UL, 0x821b9859UL,
+    0x9b00a918UL, 0xb02dfadbUL, 0xa936cb9aUL, 0xe6775d5dUL, 0xff6c6c1cUL,
+    0xd4413fdfUL, 0xcd5a0e9eUL, 0x958424a2UL, 0x8c9f15e3UL, 0xa7b24620UL,
+    0xbea97761UL, 0xf1e8e1a6UL, 0xe8f3d0e7UL, 0xc3de8324UL, 0xdac5b265UL,
+    0x5d5daeaaUL, 0x44469febUL, 0x6f6bcc28UL, 0x7670fd69UL, 0x39316baeUL,
+    0x202a5aefUL, 0x0b07092cUL, 0x121c386dUL, 0xdf4636f3UL, 0xc65d07b2UL,
+    0xed705471UL, 0xf46b6530UL, 0xbb2af3f7UL, 0xa231c2b6UL, 0x891c9175UL,
+    0x9007a034UL, 0x179fbcfbUL, 0x0e848dbaUL, 0x25a9de79UL, 0x3cb2ef38UL,
+    0x73f379ffUL, 0x6ae848beUL, 0x41c51b7dUL, 0x58de2a3cUL, 0xf0794f05UL,
+    0xe9627e44UL, 0xc24f2d87UL, 0xdb541cc6UL, 0x94158a01UL, 0x8d0ebb40UL,
+    0xa623e883UL, 0xbf38d9c2UL, 0x38a0c50dUL, 0x21bbf44cUL, 0x0a96a78fUL,
+    0x138d96ceUL, 0x5ccc0009UL, 0x45d73148UL, 0x6efa628bUL, 0x77e153caUL,
+    0xbabb5d54UL, 0xa3a06c15UL, 0x888d3fd6UL, 0x91960e97UL, 0xded79850UL,
+    0xc7cca911UL, 0xece1fad2UL, 0xf5facb93UL, 0x7262d75cUL, 0x6b79e61dUL,
+    0x4054b5deUL, 0x594f849fUL, 0x160e1258UL, 0x0f152319UL, 0x243870daUL,
+    0x3d23419bUL, 0x65fd6ba7UL, 0x7ce65ae6UL, 0x57cb0925UL, 0x4ed03864UL,
+    0x0191aea3UL, 0x188a9fe2UL, 0x33a7cc21UL, 0x2abcfd60UL, 0xad24e1afUL,
+    0xb43fd0eeUL, 0x9f12832dUL, 0x8609b26cUL, 0xc94824abUL, 0xd05315eaUL,
+    0xfb7e4629UL, 0xe2657768UL, 0x2f3f79f6UL, 0x362448b7UL, 0x1d091b74UL,
+    0x04122a35UL, 0x4b53bcf2UL, 0x52488db3UL, 0x7965de70UL, 0x607eef31UL,
+    0xe7e6f3feUL, 0xfefdc2bfUL, 0xd5d0917cUL, 0xcccba03dUL, 0x838a36faUL,
+    0x9a9107bbUL, 0xb1bc5478UL, 0xa8a76539UL, 0x3b83984bUL, 0x2298a90aUL,
+    0x09b5fac9UL, 0x10aecb88UL, 0x5fef5d4fUL, 0x46f46c0eUL, 0x6dd93fcdUL,
+    0x74c20e8cUL, 0xf35a1243UL, 0xea412302UL, 0xc16c70c1UL, 0xd8774180UL,
+    0x9736d747UL, 0x8e2de606UL, 0xa500b5c5UL, 0xbc1b8484UL, 0x71418a1aUL,
+    0x685abb5bUL, 0x4377e898UL, 0x5a6cd9d9UL, 0x152d4f1eUL, 0x0c367e5fUL,
+    0x271b2d9cUL, 0x3e001cddUL, 0xb9980012UL, 0xa0833153UL, 0x8bae6290UL,
+    0x92b553d1UL, 0xddf4c516UL, 0xc4eff457UL, 0xefc2a794UL, 0xf6d996d5UL,
+    0xae07bce9UL, 0xb71c8da8UL, 0x9c31de6bUL, 0x852aef2aUL, 0xca6b79edUL,
+    0xd37048acUL, 0xf85d1b6fUL, 0xe1462a2eUL, 0x66de36e1UL, 0x7fc507a0UL,
+    0x54e85463UL, 0x4df36522UL, 0x02b2f3e5UL, 0x1ba9c2a4UL, 0x30849167UL,
+    0x299fa026UL, 0xe4c5aeb8UL, 0xfdde9ff9UL, 0xd6f3cc3aUL, 0xcfe8fd7bUL,
+    0x80a96bbcUL, 0x99b25afdUL, 0xb29f093eUL, 0xab84387fUL, 0x2c1c24b0UL,
+    0x350715f1UL, 0x1e2a4632UL, 0x07317773UL, 0x4870e1b4UL, 0x516bd0f5UL,
+    0x7a468336UL, 0x635db277UL, 0xcbfad74eUL, 0xd2e1e60fUL, 0xf9ccb5ccUL,
+    0xe0d7848dUL, 0xaf96124aUL, 0xb68d230bUL, 0x9da070c8UL, 0x84bb4189UL,
+    0x03235d46UL, 0x1a386c07UL, 0x31153fc4UL, 0x280e0e85UL, 0x674f9842UL,
+    0x7e54a903UL, 0x5579fac0UL, 0x4c62cb81UL, 0x8138c51fUL, 0x9823f45eUL,
+    0xb30ea79dUL, 0xaa1596dcUL, 0xe554001bUL, 0xfc4f315aUL, 0xd7626299UL,
+    0xce7953d8UL, 0x49e14f17UL, 0x50fa7e56UL, 0x7bd72d95UL, 0x62cc1cd4UL,
+    0x2d8d8a13UL, 0x3496bb52UL, 0x1fbbe891UL, 0x06a0d9d0UL, 0x5e7ef3ecUL,
+    0x4765c2adUL, 0x6c48916eUL, 0x7553a02fUL, 0x3a1236e8UL, 0x230907a9UL,
+    0x0824546aUL, 0x113f652bUL, 0x96a779e4UL, 0x8fbc48a5UL, 0xa4911b66UL,
+    0xbd8a2a27UL, 0xf2cbbce0UL, 0xebd08da1UL, 0xc0fdde62UL, 0xd9e6ef23UL,
+    0x14bce1bdUL, 0x0da7d0fcUL, 0x268a833fUL, 0x3f91b27eUL, 0x70d024b9UL,
+    0x69cb15f8UL, 0x42e6463bUL, 0x5bfd777aUL, 0xdc656bb5UL, 0xc57e5af4UL,
+    0xee530937UL, 0xf7483876UL, 0xb809aeb1UL, 0xa1129ff0UL, 0x8a3fcc33UL,
+    0x9324fd72UL
+  },
+  {
+    0x00000000UL, 0x01c26a37UL, 0x0384d46eUL, 0x0246be59UL, 0x0709a8dcUL,
+    0x06cbc2ebUL, 0x048d7cb2UL, 0x054f1685UL, 0x0e1351b8UL, 0x0fd13b8fUL,
+    0x0d9785d6UL, 0x0c55efe1UL, 0x091af964UL, 0x08d89353UL, 0x0a9e2d0aUL,
+    0x0b5c473dUL, 0x1c26a370UL, 0x1de4c947UL, 0x1fa2771eUL, 0x1e601d29UL,
+    0x1b2f0bacUL, 0x1aed619bUL, 0x18abdfc2UL, 0x1969b5f5UL, 0x1235f2c8UL,
+    0x13f798ffUL, 0x11b126a6UL, 0x10734c91UL, 0x153c5a14UL, 0x14fe3023UL,
+    0x16b88e7aUL, 0x177ae44dUL, 0x384d46e0UL, 0x398f2cd7UL, 0x3bc9928eUL,
+    0x3a0bf8b9UL, 0x3f44ee3cUL, 0x3e86840bUL, 0x3cc03a52UL, 0x3d025065UL,
+    0x365e1758UL, 0x379c7d6fUL, 0x35dac336UL, 0x3418a901UL, 0x3157bf84UL,
+    0x3095d5b3UL, 0x32d36beaUL, 0x331101ddUL, 0x246be590UL, 0x25a98fa7UL,
+    0x27ef31feUL, 0x262d5bc9UL, 0x23624d4cUL, 0x22a0277bUL, 0x20e69922UL,
+    0x2124f315UL, 0x2a78b428UL, 0x2bbade1fUL, 0x29fc6046UL, 0x283e0a71UL,
+    0x2d711cf4UL, 0x2cb376c3UL, 0x2ef5c89aUL, 0x2f37a2adUL, 0x709a8dc0UL,
+    0x7158e7f7UL, 0x731e59aeUL, 0x72dc3399UL, 0x7793251cUL, 0x76514f2bUL,
+    0x7417f172UL, 0x75d59b45UL, 0x7e89dc78UL, 0x7f4bb64fUL, 0x7d0d0816UL,
+    0x7ccf6221UL, 0x798074a4UL, 0x78421e93UL, 0x7a04a0caUL, 0x7bc6cafdUL,
+    0x6cbc2eb0UL, 0x6d7e4487UL, 0x6f38fadeUL, 0x6efa90e9UL, 0x6bb5866cUL,
+    0x6a77ec5bUL, 0x68315202UL, 0x69f33835UL, 0x62af7f08UL, 0x636d153fUL,
+    0x612bab66UL, 0x60e9c151UL, 0x65a6d7d4UL, 0x6464bde3UL, 0x662203baUL,
+    0x67e0698dUL, 0x48d7cb20UL, 0x4915a117UL, 0x4b531f4eUL, 0x4a917579UL,
+    0x4fde63fcUL, 0x4e1c09cbUL, 0x4c5ab792UL, 0x4d98dda5UL, 0x46c49a98UL,
+    0x4706f0afUL, 0x45404ef6UL, 0x448224c1UL, 0x41cd3244UL, 0x400f5873UL,
+    0x4249e62aUL, 0x438b8c1dUL, 0x54f16850UL, 0x55330267UL, 0x5775bc3eUL,
+    0x56b7d609UL, 0x53f8c08cUL, 0x523aaabbUL, 0x507c14e2UL, 0x51be7ed5UL,
+    0x5ae239e8UL, 0x5b2053dfUL, 0x5966ed86UL, 0x58a487b1UL, 0x5deb9134UL,
+    0x5c29fb03UL, 0x5e6f455aUL, 0x5fad2f6dUL, 0xe1351b80UL, 0xe0f771b7UL,
+    0xe2b1cfeeUL, 0xe373a5d9UL, 0xe63cb35cUL, 0xe7fed96bUL, 0xe5b86732UL,
+    0xe47a0d05UL, 0xef264a38UL, 0xeee4200fUL, 0xeca29e56UL, 0xed60f461UL,
+    0xe82fe2e4UL, 0xe9ed88d3UL, 0xebab368aUL, 0xea695cbdUL, 0xfd13b8f0UL,
+    0xfcd1d2c7UL, 0xfe976c9eUL, 0xff5506a9UL, 0xfa1a102cUL, 0xfbd87a1bUL,
+    0xf99ec442UL, 0xf85cae75UL, 0xf300e948UL, 0xf2c2837fUL, 0xf0843d26UL,
+    0xf1465711UL, 0xf4094194UL, 0xf5cb2ba3UL, 0xf78d95faUL, 0xf64fffcdUL,
+    0xd9785d60UL, 0xd8ba3757UL, 0xdafc890eUL, 0xdb3ee339UL, 0xde71f5bcUL,
+    0xdfb39f8bUL, 0xddf521d2UL, 0xdc374be5UL, 0xd76b0cd8UL, 0xd6a966efUL,
+    0xd4efd8b6UL, 0xd52db281UL, 0xd062a404UL, 0xd1a0ce33UL, 0xd3e6706aUL,
+    0xd2241a5dUL, 0xc55efe10UL, 0xc49c9427UL, 0xc6da2a7eUL, 0xc7184049UL,
+    0xc25756ccUL, 0xc3953cfbUL, 0xc1d382a2UL, 0xc011e895UL, 0xcb4dafa8UL,
+    0xca8fc59fUL, 0xc8c97bc6UL, 0xc90b11f1UL, 0xcc440774UL, 0xcd866d43UL,
+    0xcfc0d31aUL, 0xce02b92dUL, 0x91af9640UL, 0x906dfc77UL, 0x922b422eUL,
+    0x93e92819UL, 0x96a63e9cUL, 0x976454abUL, 0x9522eaf2UL, 0x94e080c5UL,
+    0x9fbcc7f8UL, 0x9e7eadcfUL, 0x9c381396UL, 0x9dfa79a1UL, 0x98b56f24UL,
+    0x99770513UL, 0x9b31bb4aUL, 0x9af3d17dUL, 0x8d893530UL, 0x8c4b5f07UL,
+    0x8e0de15eUL, 0x8fcf8b69UL, 0x8a809decUL, 0x8b42f7dbUL, 0x89044982UL,
+    0x88c623b5UL, 0x839a6488UL, 0x82580ebfUL, 0x801eb0e6UL, 0x81dcdad1UL,
+    0x8493cc54UL, 0x8551a663UL, 0x8717183aUL, 0x86d5720dUL, 0xa9e2d0a0UL,
+    0xa820ba97UL, 0xaa6604ceUL, 0xaba46ef9UL, 0xaeeb787cUL, 0xaf29124bUL,
+    0xad6fac12UL, 0xacadc625UL, 0xa7f18118UL, 0xa633eb2fUL, 0xa4755576UL,
+    0xa5b73f41UL, 0xa0f829c4UL, 0xa13a43f3UL, 0xa37cfdaaUL, 0xa2be979dUL,
+    0xb5c473d0UL, 0xb40619e7UL, 0xb640a7beUL, 0xb782cd89UL, 0xb2cddb0cUL,
+    0xb30fb13bUL, 0xb1490f62UL, 0xb08b6555UL, 0xbbd72268UL, 0xba15485fUL,
+    0xb853f606UL, 0xb9919c31UL, 0xbcde8ab4UL, 0xbd1ce083UL, 0xbf5a5edaUL,
+    0xbe9834edUL
+  },
+  {
+    0x00000000UL, 0xb8bc6765UL, 0xaa09c88bUL, 0x12b5afeeUL, 0x8f629757UL,
+    0x37def032UL, 0x256b5fdcUL, 0x9dd738b9UL, 0xc5b428efUL, 0x7d084f8aUL,
+    0x6fbde064UL, 0xd7018701UL, 0x4ad6bfb8UL, 0xf26ad8ddUL, 0xe0df7733UL,
+    0x58631056UL, 0x5019579fUL, 0xe8a530faUL, 0xfa109f14UL, 0x42acf871UL,
+    0xdf7bc0c8UL, 0x67c7a7adUL, 0x75720843UL, 0xcdce6f26UL, 0x95ad7f70UL,
+    0x2d111815UL, 0x3fa4b7fbUL, 0x8718d09eUL, 0x1acfe827UL, 0xa2738f42UL,
+    0xb0c620acUL, 0x087a47c9UL, 0xa032af3eUL, 0x188ec85bUL, 0x0a3b67b5UL,
+    0xb28700d0UL, 0x2f503869UL, 0x97ec5f0cUL, 0x8559f0e2UL, 0x3de59787UL,
+    0x658687d1UL, 0xdd3ae0b4UL, 0xcf8f4f5aUL, 0x7733283fUL, 0xeae41086UL,
+    0x525877e3UL, 0x40edd80dUL, 0xf851bf68UL, 0xf02bf8a1UL, 0x48979fc4UL,
+    0x5a22302aUL, 0xe29e574fUL, 0x7f496ff6UL, 0xc7f50893UL, 0xd540a77dUL,
+    0x6dfcc018UL, 0x359fd04eUL, 0x8d23b72bUL, 0x9f9618c5UL, 0x272a7fa0UL,
+    0xbafd4719UL, 0x0241207cUL, 0x10f48f92UL, 0xa848e8f7UL, 0x9b14583dUL,
+    0x23a83f58UL, 0x311d90b6UL, 0x89a1f7d3UL, 0x1476cf6aUL, 0xaccaa80fUL,
+    0xbe7f07e1UL, 0x06c36084UL, 0x5ea070d2UL, 0xe61c17b7UL, 0xf4a9b859UL,
+    0x4c15df3cUL, 0xd1c2e785UL, 0x697e80e0UL, 0x7bcb2f0eUL, 0xc377486bUL,
+    0xcb0d0fa2UL, 0x73b168c7UL, 0x6104c729UL, 0xd9b8a04cUL, 0x446f98f5UL,
+    0xfcd3ff90UL, 0xee66507eUL, 0x56da371bUL, 0x0eb9274dUL, 0xb6054028UL,
+    0xa4b0efc6UL, 0x1c0c88a3UL, 0x81dbb01aUL, 0x3967d77fUL, 0x2bd27891UL,
+    0x936e1ff4UL, 0x3b26f703UL, 0x839a9066UL, 0x912f3f88UL, 0x299358edUL,
+    0xb4446054UL, 0x0cf80731UL, 0x1e4da8dfUL, 0xa6f1cfbaUL, 0xfe92dfecUL,
+    0x462eb889UL, 0x549b1767UL, 0xec277002UL, 0x71f048bbUL, 0xc94c2fdeUL,
+    0xdbf98030UL, 0x6345e755UL, 0x6b3fa09cUL, 0xd383c7f9UL, 0xc1366817UL,
+    0x798a0f72UL, 0xe45d37cbUL, 0x5ce150aeUL, 0x4e54ff40UL, 0xf6e89825UL,
+    0xae8b8873UL, 0x1637ef16UL, 0x048240f8UL, 0xbc3e279dUL, 0x21e91f24UL,
+    0x99557841UL, 0x8be0d7afUL, 0x335cb0caUL, 0xed59b63bUL, 0x55e5d15eUL,
+    0x47507eb0UL, 0xffec19d5UL, 0x623b216cUL, 0xda874609UL, 0xc832e9e7UL,
+    0x708e8e82UL, 0x28ed9ed4UL, 0x9051f9b1UL, 0x82e4565fUL, 0x3a58313aUL,
+    0xa78f0983UL, 0x1f336ee6UL, 0x0d86c108UL, 0xb53aa66dUL, 0xbd40e1a4UL,
+    0x05fc86c1UL, 0x1749292fUL, 0xaff54e4aUL, 0x322276f3UL, 0x8a9e1196UL,
+    0x982bbe78UL, 0x2097d91dUL, 0x78f4c94bUL, 0xc048ae2eUL, 0xd2fd01c0UL,
+    0x6a4166a5UL, 0xf7965e1cUL, 0x4f2a3979UL, 0x5d9f9697UL, 0xe523f1f2UL,
+    0x4d6b1905UL, 0xf5d77e60UL, 0xe762d18eUL, 0x5fdeb6ebUL, 0xc2098e52UL,
+    0x7ab5e937UL, 0x680046d9UL, 0xd0bc21bcUL, 0x88df31eaUL, 0x3063568fUL,
+    0x22d6f961UL, 0x9a6a9e04UL, 0x07bda6bdUL, 0xbf01c1d8UL, 0xadb46e36UL,
+    0x15080953UL, 0x1d724e9aUL, 0xa5ce29ffUL, 0xb77b8611UL, 0x0fc7e174UL,
+    0x9210d9cdUL, 0x2aacbea8UL, 0x38191146UL, 0x80a57623UL, 0xd8c66675UL,
+    0x607a0110UL, 0x72cfaefeUL, 0xca73c99bUL, 0x57a4f122UL, 0xef189647UL,
+    0xfdad39a9UL, 0x45115eccUL, 0x764dee06UL, 0xcef18963UL, 0xdc44268dUL,
+    0x64f841e8UL, 0xf92f7951UL, 0x41931e34UL, 0x5326b1daUL, 0xeb9ad6bfUL,
+    0xb3f9c6e9UL, 0x0b45a18cUL, 0x19f00e62UL, 0xa14c6907UL, 0x3c9b51beUL,
+    0x842736dbUL, 0x96929935UL, 0x2e2efe50UL, 0x2654b999UL, 0x9ee8defcUL,
+    0x8c5d7112UL, 0x34e11677UL, 0xa9362eceUL, 0x118a49abUL, 0x033fe645UL,
+    0xbb838120UL, 0xe3e09176UL, 0x5b5cf613UL, 0x49e959fdUL, 0xf1553e98UL,
+    0x6c820621UL, 0xd43e6144UL, 0xc68bceaaUL, 0x7e37a9cfUL, 0xd67f4138UL,
+    0x6ec3265dUL, 0x7c7689b3UL, 0xc4caeed6UL, 0x591dd66fUL, 0xe1a1b10aUL,
+    0xf3141ee4UL, 0x4ba87981UL, 0x13cb69d7UL, 0xab770eb2UL, 0xb9c2a15cUL,
+    0x017ec639UL, 0x9ca9fe80UL, 0x241599e5UL, 0x36a0360bUL, 0x8e1c516eUL,
+    0x866616a7UL, 0x3eda71c2UL, 0x2c6fde2cUL, 0x94d3b949UL, 0x090481f0UL,
+    0xb1b8e695UL, 0xa30d497bUL, 0x1bb12e1eUL, 0x43d23e48UL, 0xfb6e592dUL,
+    0xe9dbf6c3UL, 0x516791a6UL, 0xccb0a91fUL, 0x740cce7aUL, 0x66b96194UL,
+    0xde0506f1UL
+  },
+  {
+    0x00000000UL, 0x96300777UL, 0x2c610eeeUL, 0xba510999UL, 0x19c46d07UL,
+    0x8ff46a70UL, 0x35a563e9UL, 0xa395649eUL, 0x3288db0eUL, 0xa4b8dc79UL,
+    0x1ee9d5e0UL, 0x88d9d297UL, 0x2b4cb609UL, 0xbd7cb17eUL, 0x072db8e7UL,
+    0x911dbf90UL, 0x6410b71dUL, 0xf220b06aUL, 0x4871b9f3UL, 0xde41be84UL,
+    0x7dd4da1aUL, 0xebe4dd6dUL, 0x51b5d4f4UL, 0xc785d383UL, 0x56986c13UL,
+    0xc0a86b64UL, 0x7af962fdUL, 0xecc9658aUL, 0x4f5c0114UL, 0xd96c0663UL,
+    0x633d0ffaUL, 0xf50d088dUL, 0xc8206e3bUL, 0x5e10694cUL, 0xe44160d5UL,
+    0x727167a2UL, 0xd1e4033cUL, 0x47d4044bUL, 0xfd850dd2UL, 0x6bb50aa5UL,
+    0xfaa8b535UL, 0x6c98b242UL, 0xd6c9bbdbUL, 0x40f9bcacUL, 0xe36cd832UL,
+    0x755cdf45UL, 0xcf0dd6dcUL, 0x593dd1abUL, 0xac30d926UL, 0x3a00de51UL,
+    0x8051d7c8UL, 0x1661d0bfUL, 0xb5f4b421UL, 0x23c4b356UL, 0x9995bacfUL,
+    0x0fa5bdb8UL, 0x9eb80228UL, 0x0888055fUL, 0xb2d90cc6UL, 0x24e90bb1UL,
+    0x877c6f2fUL, 0x114c6858UL, 0xab1d61c1UL, 0x3d2d66b6UL, 0x9041dc76UL,
+    0x0671db01UL, 0xbc20d298UL, 0x2a10d5efUL, 0x8985b171UL, 0x1fb5b606UL,
+    0xa5e4bf9fUL, 0x33d4b8e8UL, 0xa2c90778UL, 0x34f9000fUL, 0x8ea80996UL,
+    0x18980ee1UL, 0xbb0d6a7fUL, 0x2d3d6d08UL, 0x976c6491UL, 0x015c63e6UL,
+    0xf4516b6bUL, 0x62616c1cUL, 0xd8306585UL, 0x4e0062f2UL, 0xed95066cUL,
+    0x7ba5011bUL, 0xc1f40882UL, 0x57c40ff5UL, 0xc6d9b065UL, 0x50e9b712UL,
+    0xeab8be8bUL, 0x7c88b9fcUL, 0xdf1ddd62UL, 0x492dda15UL, 0xf37cd38cUL,
+    0x654cd4fbUL, 0x5861b24dUL, 0xce51b53aUL, 0x7400bca3UL, 0xe230bbd4UL,
+    0x41a5df4aUL, 0xd795d83dUL, 0x6dc4d1a4UL, 0xfbf4d6d3UL, 0x6ae96943UL,
+    0xfcd96e34UL, 0x468867adUL, 0xd0b860daUL, 0x732d0444UL, 0xe51d0333UL,
+    0x5f4c0aaaUL, 0xc97c0dddUL, 0x3c710550UL, 0xaa410227UL, 0x10100bbeUL,
+    0x86200cc9UL, 0x25b56857UL, 0xb3856f20UL, 0x09d466b9UL, 0x9fe461ceUL,
+    0x0ef9de5eUL, 0x98c9d929UL, 0x2298d0b0UL, 0xb4a8d7c7UL, 0x173db359UL,
+    0x810db42eUL, 0x3b5cbdb7UL, 0xad6cbac0UL, 0x2083b8edUL, 0xb6b3bf9aUL,
+    0x0ce2b603UL, 0x9ad2b174UL, 0x3947d5eaUL, 0xaf77d29dUL, 0x1526db04UL,
+    0x8316dc73UL, 0x120b63e3UL, 0x843b6494UL, 0x3e6a6d0dUL, 0xa85a6a7aUL,
+    0x0bcf0ee4UL, 0x9dff0993UL, 0x27ae000aUL, 0xb19e077dUL, 0x44930ff0UL,
+    0xd2a30887UL, 0x68f2011eUL, 0xfec20669UL, 0x5d5762f7UL, 0xcb676580UL,
+    0x71366c19UL, 0xe7066b6eUL, 0x761bd4feUL, 0xe02bd389UL, 0x5a7ada10UL,
+    0xcc4add67UL, 0x6fdfb9f9UL, 0xf9efbe8eUL, 0x43beb717UL, 0xd58eb060UL,
+    0xe8a3d6d6UL, 0x7e93d1a1UL, 0xc4c2d838UL, 0x52f2df4fUL, 0xf167bbd1UL,
+    0x6757bca6UL, 0xdd06b53fUL, 0x4b36b248UL, 0xda2b0dd8UL, 0x4c1b0aafUL,
+    0xf64a0336UL, 0x607a0441UL, 0xc3ef60dfUL, 0x55df67a8UL, 0xef8e6e31UL,
+    0x79be6946UL, 0x8cb361cbUL, 0x1a8366bcUL, 0xa0d26f25UL, 0x36e26852UL,
+    0x95770cccUL, 0x03470bbbUL, 0xb9160222UL, 0x2f260555UL, 0xbe3bbac5UL,
+    0x280bbdb2UL, 0x925ab42bUL, 0x046ab35cUL, 0xa7ffd7c2UL, 0x31cfd0b5UL,
+    0x8b9ed92cUL, 0x1daede5bUL, 0xb0c2649bUL, 0x26f263ecUL, 0x9ca36a75UL,
+    0x0a936d02UL, 0xa906099cUL, 0x3f360eebUL, 0x85670772UL, 0x13570005UL,
+    0x824abf95UL, 0x147ab8e2UL, 0xae2bb17bUL, 0x381bb60cUL, 0x9b8ed292UL,
+    0x0dbed5e5UL, 0xb7efdc7cUL, 0x21dfdb0bUL, 0xd4d2d386UL, 0x42e2d4f1UL,
+    0xf8b3dd68UL, 0x6e83da1fUL, 0xcd16be81UL, 0x5b26b9f6UL, 0xe177b06fUL,
+    0x7747b718UL, 0xe65a0888UL, 0x706a0fffUL, 0xca3b0666UL, 0x5c0b0111UL,
+    0xff9e658fUL, 0x69ae62f8UL, 0xd3ff6b61UL, 0x45cf6c16UL, 0x78e20aa0UL,
+    0xeed20dd7UL, 0x5483044eUL, 0xc2b30339UL, 0x612667a7UL, 0xf71660d0UL,
+    0x4d476949UL, 0xdb776e3eUL, 0x4a6ad1aeUL, 0xdc5ad6d9UL, 0x660bdf40UL,
+    0xf03bd837UL, 0x53aebca9UL, 0xc59ebbdeUL, 0x7fcfb247UL, 0xe9ffb530UL,
+    0x1cf2bdbdUL, 0x8ac2bacaUL, 0x3093b353UL, 0xa6a3b424UL, 0x0536d0baUL,
+    0x9306d7cdUL, 0x2957de54UL, 0xbf67d923UL, 0x2e7a66b3UL, 0xb84a61c4UL,
+    0x021b685dUL, 0x942b6f2aUL, 0x37be0bb4UL, 0xa18e0cc3UL, 0x1bdf055aUL,
+    0x8def022dUL
+  },
+  {
+    0x00000000UL, 0x41311b19UL, 0x82623632UL, 0xc3532d2bUL, 0x04c56c64UL,
+    0x45f4777dUL, 0x86a75a56UL, 0xc796414fUL, 0x088ad9c8UL, 0x49bbc2d1UL,
+    0x8ae8effaUL, 0xcbd9f4e3UL, 0x0c4fb5acUL, 0x4d7eaeb5UL, 0x8e2d839eUL,
+    0xcf1c9887UL, 0x5112c24aUL, 0x1023d953UL, 0xd370f478UL, 0x9241ef61UL,
+    0x55d7ae2eUL, 0x14e6b537UL, 0xd7b5981cUL, 0x96848305UL, 0x59981b82UL,
+    0x18a9009bUL, 0xdbfa2db0UL, 0x9acb36a9UL, 0x5d5d77e6UL, 0x1c6c6cffUL,
+    0xdf3f41d4UL, 0x9e0e5acdUL, 0xa2248495UL, 0xe3159f8cUL, 0x2046b2a7UL,
+    0x6177a9beUL, 0xa6e1e8f1UL, 0xe7d0f3e8UL, 0x2483dec3UL, 0x65b2c5daUL,
+    0xaaae5d5dUL, 0xeb9f4644UL, 0x28cc6b6fUL, 0x69fd7076UL, 0xae6b3139UL,
+    0xef5a2a20UL, 0x2c09070bUL, 0x6d381c12UL, 0xf33646dfUL, 0xb2075dc6UL,
+    0x715470edUL, 0x30656bf4UL, 0xf7f32abbUL, 0xb6c231a2UL, 0x75911c89UL,
+    0x34a00790UL, 0xfbbc9f17UL, 0xba8d840eUL, 0x79dea925UL, 0x38efb23cUL,
+    0xff79f373UL, 0xbe48e86aUL, 0x7d1bc541UL, 0x3c2ade58UL, 0x054f79f0UL,
+    0x447e62e9UL, 0x872d4fc2UL, 0xc61c54dbUL, 0x018a1594UL, 0x40bb0e8dUL,
+    0x83e823a6UL, 0xc2d938bfUL, 0x0dc5a038UL, 0x4cf4bb21UL, 0x8fa7960aUL,
+    0xce968d13UL, 0x0900cc5cUL, 0x4831d745UL, 0x8b62fa6eUL, 0xca53e177UL,
+    0x545dbbbaUL, 0x156ca0a3UL, 0xd63f8d88UL, 0x970e9691UL, 0x5098d7deUL,
+    0x11a9ccc7UL, 0xd2fae1ecUL, 0x93cbfaf5UL, 0x5cd76272UL, 0x1de6796bUL,
+    0xdeb55440UL, 0x9f844f59UL, 0x58120e16UL, 0x1923150fUL, 0xda703824UL,
+    0x9b41233dUL, 0xa76bfd65UL, 0xe65ae67cUL, 0x2509cb57UL, 0x6438d04eUL,
+    0xa3ae9101UL, 0xe29f8a18UL, 0x21cca733UL, 0x60fdbc2aUL, 0xafe124adUL,
+    0xeed03fb4UL, 0x2d83129fUL, 0x6cb20986UL, 0xab2448c9UL, 0xea1553d0UL,
+    0x29467efbUL, 0x687765e2UL, 0xf6793f2fUL, 0xb7482436UL, 0x741b091dUL,
+    0x352a1204UL, 0xf2bc534bUL, 0xb38d4852UL, 0x70de6579UL, 0x31ef7e60UL,
+    0xfef3e6e7UL, 0xbfc2fdfeUL, 0x7c91d0d5UL, 0x3da0cbccUL, 0xfa368a83UL,
+    0xbb07919aUL, 0x7854bcb1UL, 0x3965a7a8UL, 0x4b98833bUL, 0x0aa99822UL,
+    0xc9fab509UL, 0x88cbae10UL, 0x4f5def5fUL, 0x0e6cf446UL, 0xcd3fd96dUL,
+    0x8c0ec274UL, 0x43125af3UL, 0x022341eaUL, 0xc1706cc1UL, 0x804177d8UL,
+    0x47d73697UL, 0x06e62d8eUL, 0xc5b500a5UL, 0x84841bbcUL, 0x1a8a4171UL,
+    0x5bbb5a68UL, 0x98e87743UL, 0xd9d96c5aUL, 0x1e4f2d15UL, 0x5f7e360cUL,
+    0x9c2d1b27UL, 0xdd1c003eUL, 0x120098b9UL, 0x533183a0UL, 0x9062ae8bUL,
+    0xd153b592UL, 0x16c5f4ddUL, 0x57f4efc4UL, 0x94a7c2efUL, 0xd596d9f6UL,
+    0xe9bc07aeUL, 0xa88d1cb7UL, 0x6bde319cUL, 0x2aef2a85UL, 0xed796bcaUL,
+    0xac4870d3UL, 0x6f1b5df8UL, 0x2e2a46e1UL, 0xe136de66UL, 0xa007c57fUL,
+    0x6354e854UL, 0x2265f34dUL, 0xe5f3b202UL, 0xa4c2a91bUL, 0x67918430UL,
+    0x26a09f29UL, 0xb8aec5e4UL, 0xf99fdefdUL, 0x3accf3d6UL, 0x7bfde8cfUL,
+    0xbc6ba980UL, 0xfd5ab299UL, 0x3e099fb2UL, 0x7f3884abUL, 0xb0241c2cUL,
+    0xf1150735UL, 0x32462a1eUL, 0x73773107UL, 0xb4e17048UL, 0xf5d06b51UL,
+    0x3683467aUL, 0x77b25d63UL, 0x4ed7facbUL, 0x0fe6e1d2UL, 0xccb5ccf9UL,
+    0x8d84d7e0UL, 0x4a1296afUL, 0x0b238db6UL, 0xc870a09dUL, 0x8941bb84UL,
+    0x465d2303UL, 0x076c381aUL, 0xc43f1531UL, 0x850e0e28UL, 0x42984f67UL,
+    0x03a9547eUL, 0xc0fa7955UL, 0x81cb624cUL, 0x1fc53881UL, 0x5ef42398UL,
+    0x9da70eb3UL, 0xdc9615aaUL, 0x1b0054e5UL, 0x5a314ffcUL, 0x996262d7UL,
+    0xd85379ceUL, 0x174fe149UL, 0x567efa50UL, 0x952dd77bUL, 0xd41ccc62UL,
+    0x138a8d2dUL, 0x52bb9634UL, 0x91e8bb1fUL, 0xd0d9a006UL, 0xecf37e5eUL,
+    0xadc26547UL, 0x6e91486cUL, 0x2fa05375UL, 0xe836123aUL, 0xa9070923UL,
+    0x6a542408UL, 0x2b653f11UL, 0xe479a796UL, 0xa548bc8fUL, 0x661b91a4UL,
+    0x272a8abdUL, 0xe0bccbf2UL, 0xa18dd0ebUL, 0x62defdc0UL, 0x23efe6d9UL,
+    0xbde1bc14UL, 0xfcd0a70dUL, 0x3f838a26UL, 0x7eb2913fUL, 0xb924d070UL,
+    0xf815cb69UL, 0x3b46e642UL, 0x7a77fd5bUL, 0xb56b65dcUL, 0xf45a7ec5UL,
+    0x370953eeUL, 0x763848f7UL, 0xb1ae09b8UL, 0xf09f12a1UL, 0x33cc3f8aUL,
+    0x72fd2493UL
+  },
+  {
+    0x00000000UL, 0x376ac201UL, 0x6ed48403UL, 0x59be4602UL, 0xdca80907UL,
+    0xebc2cb06UL, 0xb27c8d04UL, 0x85164f05UL, 0xb851130eUL, 0x8f3bd10fUL,
+    0xd685970dUL, 0xe1ef550cUL, 0x64f91a09UL, 0x5393d808UL, 0x0a2d9e0aUL,
+    0x3d475c0bUL, 0x70a3261cUL, 0x47c9e41dUL, 0x1e77a21fUL, 0x291d601eUL,
+    0xac0b2f1bUL, 0x9b61ed1aUL, 0xc2dfab18UL, 0xf5b56919UL, 0xc8f23512UL,
+    0xff98f713UL, 0xa626b111UL, 0x914c7310UL, 0x145a3c15UL, 0x2330fe14UL,
+    0x7a8eb816UL, 0x4de47a17UL, 0xe0464d38UL, 0xd72c8f39UL, 0x8e92c93bUL,
+    0xb9f80b3aUL, 0x3cee443fUL, 0x0b84863eUL, 0x523ac03cUL, 0x6550023dUL,
+    0x58175e36UL, 0x6f7d9c37UL, 0x36c3da35UL, 0x01a91834UL, 0x84bf5731UL,
+    0xb3d59530UL, 0xea6bd332UL, 0xdd011133UL, 0x90e56b24UL, 0xa78fa925UL,
+    0xfe31ef27UL, 0xc95b2d26UL, 0x4c4d6223UL, 0x7b27a022UL, 0x2299e620UL,
+    0x15f32421UL, 0x28b4782aUL, 0x1fdeba2bUL, 0x4660fc29UL, 0x710a3e28UL,
+    0xf41c712dUL, 0xc376b32cUL, 0x9ac8f52eUL, 0xada2372fUL, 0xc08d9a70UL,
+    0xf7e75871UL, 0xae591e73UL, 0x9933dc72UL, 0x1c259377UL, 0x2b4f5176UL,
+    0x72f11774UL, 0x459bd575UL, 0x78dc897eUL, 0x4fb64b7fUL, 0x16080d7dUL,
+    0x2162cf7cUL, 0xa4748079UL, 0x931e4278UL, 0xcaa0047aUL, 0xfdcac67bUL,
+    0xb02ebc6cUL, 0x87447e6dUL, 0xdefa386fUL, 0xe990fa6eUL, 0x6c86b56bUL,
+    0x5bec776aUL, 0x02523168UL, 0x3538f369UL, 0x087faf62UL, 0x3f156d63UL,
+    0x66ab2b61UL, 0x51c1e960UL, 0xd4d7a665UL, 0xe3bd6464UL, 0xba032266UL,
+    0x8d69e067UL, 0x20cbd748UL, 0x17a11549UL, 0x4e1f534bUL, 0x7975914aUL,
+    0xfc63de4fUL, 0xcb091c4eUL, 0x92b75a4cUL, 0xa5dd984dUL, 0x989ac446UL,
+    0xaff00647UL, 0xf64e4045UL, 0xc1248244UL, 0x4432cd41UL, 0x73580f40UL,
+    0x2ae64942UL, 0x1d8c8b43UL, 0x5068f154UL, 0x67023355UL, 0x3ebc7557UL,
+    0x09d6b756UL, 0x8cc0f853UL, 0xbbaa3a52UL, 0xe2147c50UL, 0xd57ebe51UL,
+    0xe839e25aUL, 0xdf53205bUL, 0x86ed6659UL, 0xb187a458UL, 0x3491eb5dUL,
+    0x03fb295cUL, 0x5a456f5eUL, 0x6d2fad5fUL, 0x801b35e1UL, 0xb771f7e0UL,
+    0xeecfb1e2UL, 0xd9a573e3UL, 0x5cb33ce6UL, 0x6bd9fee7UL, 0x3267b8e5UL,
+    0x050d7ae4UL, 0x384a26efUL, 0x0f20e4eeUL, 0x569ea2ecUL, 0x61f460edUL,
+    0xe4e22fe8UL, 0xd388ede9UL, 0x8a36abebUL, 0xbd5c69eaUL, 0xf0b813fdUL,
+    0xc7d2d1fcUL, 0x9e6c97feUL, 0xa90655ffUL, 0x2c101afaUL, 0x1b7ad8fbUL,
+    0x42c49ef9UL, 0x75ae5cf8UL, 0x48e900f3UL, 0x7f83c2f2UL, 0x263d84f0UL,
+    0x115746f1UL, 0x944109f4UL, 0xa32bcbf5UL, 0xfa958df7UL, 0xcdff4ff6UL,
+    0x605d78d9UL, 0x5737bad8UL, 0x0e89fcdaUL, 0x39e33edbUL, 0xbcf571deUL,
+    0x8b9fb3dfUL, 0xd221f5ddUL, 0xe54b37dcUL, 0xd80c6bd7UL, 0xef66a9d6UL,
+    0xb6d8efd4UL, 0x81b22dd5UL, 0x04a462d0UL, 0x33cea0d1UL, 0x6a70e6d3UL,
+    0x5d1a24d2UL, 0x10fe5ec5UL, 0x27949cc4UL, 0x7e2adac6UL, 0x494018c7UL,
+    0xcc5657c2UL, 0xfb3c95c3UL, 0xa282d3c1UL, 0x95e811c0UL, 0xa8af4dcbUL,
+    0x9fc58fcaUL, 0xc67bc9c8UL, 0xf1110bc9UL, 0x740744ccUL, 0x436d86cdUL,
+    0x1ad3c0cfUL, 0x2db902ceUL, 0x4096af91UL, 0x77fc6d90UL, 0x2e422b92UL,
+    0x1928e993UL, 0x9c3ea696UL, 0xab546497UL, 0xf2ea2295UL, 0xc580e094UL,
+    0xf8c7bc9fUL, 0xcfad7e9eUL, 0x9613389cUL, 0xa179fa9dUL, 0x246fb598UL,
+    0x13057799UL, 0x4abb319bUL, 0x7dd1f39aUL, 0x3035898dUL, 0x075f4b8cUL,
+    0x5ee10d8eUL, 0x698bcf8fUL, 0xec9d808aUL, 0xdbf7428bUL, 0x82490489UL,
+    0xb523c688UL, 0x88649a83UL, 0xbf0e5882UL, 0xe6b01e80UL, 0xd1dadc81UL,
+    0x54cc9384UL, 0x63a65185UL, 0x3a181787UL, 0x0d72d586UL, 0xa0d0e2a9UL,
+    0x97ba20a8UL, 0xce0466aaUL, 0xf96ea4abUL, 0x7c78ebaeUL, 0x4b1229afUL,
+    0x12ac6fadUL, 0x25c6adacUL, 0x1881f1a7UL, 0x2feb33a6UL, 0x765575a4UL,
+    0x413fb7a5UL, 0xc429f8a0UL, 0xf3433aa1UL, 0xaafd7ca3UL, 0x9d97bea2UL,
+    0xd073c4b5UL, 0xe71906b4UL, 0xbea740b6UL, 0x89cd82b7UL, 0x0cdbcdb2UL,
+    0x3bb10fb3UL, 0x620f49b1UL, 0x55658bb0UL, 0x6822d7bbUL, 0x5f4815baUL,
+    0x06f653b8UL, 0x319c91b9UL, 0xb48adebcUL, 0x83e01cbdUL, 0xda5e5abfUL,
+    0xed3498beUL
+  },
+  {
+    0x00000000UL, 0x6567bcb8UL, 0x8bc809aaUL, 0xeeafb512UL, 0x5797628fUL,
+    0x32f0de37UL, 0xdc5f6b25UL, 0xb938d79dUL, 0xef28b4c5UL, 0x8a4f087dUL,
+    0x64e0bd6fUL, 0x018701d7UL, 0xb8bfd64aUL, 0xddd86af2UL, 0x3377dfe0UL,
+    0x56106358UL, 0x9f571950UL, 0xfa30a5e8UL, 0x149f10faUL, 0x71f8ac42UL,
+    0xc8c07bdfUL, 0xada7c767UL, 0x43087275UL, 0x266fcecdUL, 0x707fad95UL,
+    0x1518112dUL, 0xfbb7a43fUL, 0x9ed01887UL, 0x27e8cf1aUL, 0x428f73a2UL,
+    0xac20c6b0UL, 0xc9477a08UL, 0x3eaf32a0UL, 0x5bc88e18UL, 0xb5673b0aUL,
+    0xd00087b2UL, 0x6938502fUL, 0x0c5fec97UL, 0xe2f05985UL, 0x8797e53dUL,
+    0xd1878665UL, 0xb4e03addUL, 0x5a4f8fcfUL, 0x3f283377UL, 0x8610e4eaUL,
+    0xe3775852UL, 0x0dd8ed40UL, 0x68bf51f8UL, 0xa1f82bf0UL, 0xc49f9748UL,
+    0x2a30225aUL, 0x4f579ee2UL, 0xf66f497fUL, 0x9308f5c7UL, 0x7da740d5UL,
+    0x18c0fc6dUL, 0x4ed09f35UL, 0x2bb7238dUL, 0xc518969fUL, 0xa07f2a27UL,
+    0x1947fdbaUL, 0x7c204102UL, 0x928ff410UL, 0xf7e848a8UL, 0x3d58149bUL,
+    0x583fa823UL, 0xb6901d31UL, 0xd3f7a189UL, 0x6acf7614UL, 0x0fa8caacUL,
+    0xe1077fbeUL, 0x8460c306UL, 0xd270a05eUL, 0xb7171ce6UL, 0x59b8a9f4UL,
+    0x3cdf154cUL, 0x85e7c2d1UL, 0xe0807e69UL, 0x0e2fcb7bUL, 0x6b4877c3UL,
+    0xa20f0dcbUL, 0xc768b173UL, 0x29c70461UL, 0x4ca0b8d9UL, 0xf5986f44UL,
+    0x90ffd3fcUL, 0x7e5066eeUL, 0x1b37da56UL, 0x4d27b90eUL, 0x284005b6UL,
+    0xc6efb0a4UL, 0xa3880c1cUL, 0x1ab0db81UL, 0x7fd76739UL, 0x9178d22bUL,
+    0xf41f6e93UL, 0x03f7263bUL, 0x66909a83UL, 0x883f2f91UL, 0xed589329UL,
+    0x546044b4UL, 0x3107f80cUL, 0xdfa84d1eUL, 0xbacff1a6UL, 0xecdf92feUL,
+    0x89b82e46UL, 0x67179b54UL, 0x027027ecUL, 0xbb48f071UL, 0xde2f4cc9UL,
+    0x3080f9dbUL, 0x55e74563UL, 0x9ca03f6bUL, 0xf9c783d3UL, 0x176836c1UL,
+    0x720f8a79UL, 0xcb375de4UL, 0xae50e15cUL, 0x40ff544eUL, 0x2598e8f6UL,
+    0x73888baeUL, 0x16ef3716UL, 0xf8408204UL, 0x9d273ebcUL, 0x241fe921UL,
+    0x41785599UL, 0xafd7e08bUL, 0xcab05c33UL, 0x3bb659edUL, 0x5ed1e555UL,
+    0xb07e5047UL, 0xd519ecffUL, 0x6c213b62UL, 0x094687daUL, 0xe7e932c8UL,
+    0x828e8e70UL, 0xd49eed28UL, 0xb1f95190UL, 0x5f56e482UL, 0x3a31583aUL,
+    0x83098fa7UL, 0xe66e331fUL, 0x08c1860dUL, 0x6da63ab5UL, 0xa4e140bdUL,
+    0xc186fc05UL, 0x2f294917UL, 0x4a4ef5afUL, 0xf3762232UL, 0x96119e8aUL,
+    0x78be2b98UL, 0x1dd99720UL, 0x4bc9f478UL, 0x2eae48c0UL, 0xc001fdd2UL,
+    0xa566416aUL, 0x1c5e96f7UL, 0x79392a4fUL, 0x97969f5dUL, 0xf2f123e5UL,
+    0x05196b4dUL, 0x607ed7f5UL, 0x8ed162e7UL, 0xebb6de5fUL, 0x528e09c2UL,
+    0x37e9b57aUL, 0xd9460068UL, 0xbc21bcd0UL, 0xea31df88UL, 0x8f566330UL,
+    0x61f9d622UL, 0x049e6a9aUL, 0xbda6bd07UL, 0xd8c101bfUL, 0x366eb4adUL,
+    0x53090815UL, 0x9a4e721dUL, 0xff29cea5UL, 0x11867bb7UL, 0x74e1c70fUL,
+    0xcdd91092UL, 0xa8beac2aUL, 0x46111938UL, 0x2376a580UL, 0x7566c6d8UL,
+    0x10017a60UL, 0xfeaecf72UL, 0x9bc973caUL, 0x22f1a457UL, 0x479618efUL,
+    0xa939adfdUL, 0xcc5e1145UL, 0x06ee4d76UL, 0x6389f1ceUL, 0x8d2644dcUL,
+    0xe841f864UL, 0x51792ff9UL, 0x341e9341UL, 0xdab12653UL, 0xbfd69aebUL,
+    0xe9c6f9b3UL, 0x8ca1450bUL, 0x620ef019UL, 0x07694ca1UL, 0xbe519b3cUL,
+    0xdb362784UL, 0x35999296UL, 0x50fe2e2eUL, 0x99b95426UL, 0xfcdee89eUL,
+    0x12715d8cUL, 0x7716e134UL, 0xce2e36a9UL, 0xab498a11UL, 0x45e63f03UL,
+    0x208183bbUL, 0x7691e0e3UL, 0x13f65c5bUL, 0xfd59e949UL, 0x983e55f1UL,
+    0x2106826cUL, 0x44613ed4UL, 0xaace8bc6UL, 0xcfa9377eUL, 0x38417fd6UL,
+    0x5d26c36eUL, 0xb389767cUL, 0xd6eecac4UL, 0x6fd61d59UL, 0x0ab1a1e1UL,
+    0xe41e14f3UL, 0x8179a84bUL, 0xd769cb13UL, 0xb20e77abUL, 0x5ca1c2b9UL,
+    0x39c67e01UL, 0x80fea99cUL, 0xe5991524UL, 0x0b36a036UL, 0x6e511c8eUL,
+    0xa7166686UL, 0xc271da3eUL, 0x2cde6f2cUL, 0x49b9d394UL, 0xf0810409UL,
+    0x95e6b8b1UL, 0x7b490da3UL, 0x1e2eb11bUL, 0x483ed243UL, 0x2d596efbUL,
+    0xc3f6dbe9UL, 0xa6916751UL, 0x1fa9b0ccUL, 0x7ace0c74UL, 0x9461b966UL,
+    0xf10605deUL
+#endif
+  }
+};
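
The first 256-entry row above is all that the byte-at-a-time update in crc32.c
needs (crc = table[(crc ^ byte) & 0xff] ^ (crc >> 8)); the remaining rows serve
the word-at-a-time BYFOUR paths. A self-contained sketch that regenerates row 0
the same way make_crc_table() does, then verifies it against the standard
CRC-32 check value for "123456789":

    /* Hedged, self-contained sketch of the table-driven byte-at-a-time CRC. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    int main(void) {
        uint32_t table[256];
        int n, k;
        for (n = 0; n < 256; n++) {
            uint32_t c = (uint32_t)n;
            for (k = 0; k < 8; k++)     /* bit-at-a-time, as in make_crc_table() */
                c = (c & 1) ? 0xedb88320UL ^ (c >> 1) : c >> 1;
            table[n] = c;
        }

        /* CRC-32 of "123456789" is the well-known check value 0xcbf43926 */
        const unsigned char msg[] = "123456789";
        uint32_t crc = 0xffffffffUL;
        size_t i;
        for (i = 0; i < sizeof(msg) - 1; i++)
            crc = table[(crc ^ msg[i]) & 0xff] ^ (crc >> 8);
        assert((crc ^ 0xffffffffUL) == 0xcbf43926UL);
        return 0;
    }
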
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/deflate.c b/c-blosc/internal-complibs/zlib-1.2.8/deflate.c
new file mode 100644
index 0000000..6969577
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/deflate.c
@@ -0,0 +1,1967 @@
+/* deflate.c -- compress data using the deflation algorithm
+ * Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/*
+ *  ALGORITHM
+ *
+ *      The "deflation" process depends on being able to identify portions
+ *      of the input text which are identical to earlier input (within a
+ *      sliding window trailing behind the input currently being processed).
+ *
+ *      The most straightforward technique turns out to be the fastest for
+ *      most input files: try all possible matches and select the longest.
+ *      The key feature of this algorithm is that insertions into the string
+ *      dictionary are very simple and thus fast, and deletions are avoided
+ *      completely. Insertions are performed at each input character, whereas
+ *      string matches are performed only when the previous match ends. So it
+ *      is preferable to spend more time in matches to allow very fast string
+ *      insertions and avoid deletions. The matching algorithm for small
+ *      strings is inspired by that of Rabin & Karp. A brute force approach
+ *      is used to find longer strings when a small match has been found.
+ *      A similar algorithm is used in comic (by Jan-Mark Wams) and freeze
+ *      (by Leonid Broukhis).
+ *         A previous version of this file used a more sophisticated algorithm
+ *      (by Fiala and Greene) which is guaranteed to run in linear amortized
+ *      time, but has a larger average cost, uses more memory and is patented.
+ *      However the F&G algorithm may be faster for some highly redundant
+ *      files if the parameter max_chain_length (described below) is too large.
+ *
+ *  ACKNOWLEDGEMENTS
+ *
+ *      The idea of lazy evaluation of matches is due to Jan-Mark Wams, and
+ *      I found it in 'freeze' written by Leonid Broukhis.
+ *      Thanks to many people for bug reports and testing.
+ *
+ *  REFERENCES
+ *
+ *      Deutsch, L.P., "DEFLATE Compressed Data Format Specification".
+ *      Available at http://tools.ietf.org/html/rfc1951
+ *
+ *      A description of the Rabin and Karp algorithm is given in the book
+ *         "Algorithms" by R. Sedgewick, Addison-Wesley, p252.
+ *
+ *      Fiala,E.R., and Greene,D.H.
+ *         Data Compression with Finite Windows, Comm. ACM, 32, 4 (1989) 490-505
+ *
+ */
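+
+/* Illustrative usage sketch (not part of the zlib sources): a minimal,
+ * simplified example of how an application drives the deflate() entry points
+ * implemented below.  It compresses one buffer in a single call, so dst must
+ * be large enough up front (see deflateBound()); real code should loop while
+ * deflate() keeps filling the output buffer.
+ */
+#if 0
+#include <string.h>
+#include "zlib.h"
+
+static int compress_buffer(Bytef *dst, uLong *dstLen,
+                           const Bytef *src, uLong srcLen)
+{
+    z_stream strm;
+    int ret;
+
+    memset(&strm, 0, sizeof(strm));      /* zalloc/zfree 0 => default heap */
+    ret = deflateInit(&strm, Z_DEFAULT_COMPRESSION);
+    if (ret != Z_OK) return ret;
+
+    strm.next_in   = (z_const Bytef *)src;
+    strm.avail_in  = (uInt)srcLen;
+    strm.next_out  = dst;
+    strm.avail_out = (uInt)*dstLen;
+
+    ret = deflate(&strm, Z_FINISH);      /* one shot: all input, Z_FINISH */
+    *dstLen = strm.total_out;
+    deflateEnd(&strm);
+    return ret == Z_STREAM_END ? Z_OK : Z_BUF_ERROR;
+}
+#endif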
+
+/* @(#) $Id$ */
+
+#include "deflate.h"
+
+const char deflate_copyright[] =
+   " deflate 1.2.8 Copyright 1995-2013 Jean-loup Gailly and Mark Adler ";
+/*
+  If you use the zlib library in a product, an acknowledgment is welcome
+  in the documentation of your product. If for some reason you cannot
+  include such an acknowledgment, I would appreciate that you keep this
+  copyright string in the executable of your product.
+ */
+
+/* ===========================================================================
+ *  Function prototypes.
+ */
+typedef enum {
+    need_more,      /* block not completed, need more input or more output */
+    block_done,     /* block flush performed */
+    finish_started, /* finish started, need only more output at next deflate */
+    finish_done     /* finish done, accept no more input or output */
+} block_state;
+
+typedef block_state (*compress_func) OF((deflate_state *s, int flush));
+/* Compression function. Returns the block state after the call. */
+
+local void fill_window    OF((deflate_state *s));
+local block_state deflate_stored OF((deflate_state *s, int flush));
+local block_state deflate_fast   OF((deflate_state *s, int flush));
+#ifndef FASTEST
+local block_state deflate_slow   OF((deflate_state *s, int flush));
+#endif
+local block_state deflate_rle    OF((deflate_state *s, int flush));
+local block_state deflate_huff   OF((deflate_state *s, int flush));
+local void lm_init        OF((deflate_state *s));
+local void putShortMSB    OF((deflate_state *s, uInt b));
+local void flush_pending  OF((z_streamp strm));
+local int read_buf        OF((z_streamp strm, Bytef *buf, unsigned size));
+#ifdef ASMV
+      void match_init OF((void)); /* asm code initialization */
+      uInt longest_match  OF((deflate_state *s, IPos cur_match));
+#else
+local uInt longest_match  OF((deflate_state *s, IPos cur_match));
+#endif
+
+#ifdef DEBUG
+local  void check_match OF((deflate_state *s, IPos start, IPos match,
+                            int length));
+#endif
+
+/* ===========================================================================
+ * Local data
+ */
+
+#define NIL 0
+/* Tail of hash chains */
+
+#ifndef TOO_FAR
+#  define TOO_FAR 4096
+#endif
+/* Matches of length 3 are discarded if their distance exceeds TOO_FAR */
+
+/* Values for max_lazy_match, good_match and max_chain_length, depending on
+ * the desired pack level (0..9). The values given below have been tuned to
+ * exclude worst case performance for pathological files. Better values may be
+ * found for specific files.
+ */
+typedef struct config_s {
+   ush good_length; /* reduce lazy search above this match length */
+   ush max_lazy;    /* do not perform lazy search above this match length */
+   ush nice_length; /* quit search above this match length */
+   ush max_chain;
+   compress_func func;
+} config;
+
+#ifdef FASTEST
+local const config configuration_table[2] = {
+/*      good lazy nice chain */
+/* 0 */ {0,    0,  0,    0, deflate_stored},  /* store only */
+/* 1 */ {4,    4,  8,    4, deflate_fast}}; /* max speed, no lazy matches */
+#else
+local const config configuration_table[10] = {
+/*      good lazy nice chain */
+/* 0 */ {0,    0,  0,    0, deflate_stored},  /* store only */
+/* 1 */ {4,    4,  8,    4, deflate_fast}, /* max speed, no lazy matches */
+/* 2 */ {4,    5, 16,    8, deflate_fast},
+/* 3 */ {4,    6, 32,   32, deflate_fast},
+
+/* 4 */ {4,    4, 16,   16, deflate_slow},  /* lazy matches */
+/* 5 */ {8,   16, 32,   32, deflate_slow},
+/* 6 */ {8,   16, 128, 128, deflate_slow},
+/* 7 */ {8,   32, 128, 256, deflate_slow},
+/* 8 */ {32, 128, 258, 1024, deflate_slow},
+/* 9 */ {32, 258, 258, 4096, deflate_slow}}; /* max compression */
+#endif
+
+/* Note: the deflate() code requires max_lazy >= MIN_MATCH and max_chain >= 4
+ * For deflate_fast() (levels <= 3) good is ignored and lazy has a different
+ * meaning.
+ */
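+
+/* Worked example: at the default level 6 the row is {8, 16, 128, 128,
+ * deflate_slow}.  deflate_slow() attempts a lazy search only while the
+ * current match is shorter than max_lazy (16); longest_match() stops as soon
+ * as it finds a match of nice_length (128) or more, follows at most
+ * max_chain (128) hash chain links, and quarters that chain budget once a
+ * match of good_length (8) is already in hand.
+ */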
+
+#define EQUAL 0
+/* result of memcmp for equal strings */
+
+#ifndef NO_DUMMY_DECL
+struct static_tree_desc_s {int dummy;}; /* for buggy compilers */
+#endif
+
+/* rank Z_BLOCK between Z_NO_FLUSH and Z_PARTIAL_FLUSH */
+#define RANK(f) (((f) << 1) - ((f) > 4 ? 9 : 0))
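+/* Worked out for the zlib flush constants: RANK(Z_NO_FLUSH=0) == 0,
+ * RANK(Z_BLOCK=5) == 1, RANK(Z_PARTIAL_FLUSH=1) == 2, RANK(Z_SYNC_FLUSH=2)
+ * == 4, RANK(Z_FULL_FLUSH=3) == 6 and RANK(Z_FINISH=4) == 8, which is the
+ * ordering the duplicate-flush check in deflate() relies on.
+ */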
+
+/* ===========================================================================
+ * Update a hash value with the given input byte
+ * IN  assertion: all calls to UPDATE_HASH are made with consecutive
+ *    input characters, so that a running hash key can be computed from the
+ *    previous key instead of being recalculated from scratch each time.
+ */
+#define UPDATE_HASH(s,h,c) (h = (((h)<<s->hash_shift) ^ (c)) & s->hash_mask)
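+/* Worked example: with the default memLevel of 8, hash_bits is 8+7 == 15,
+ * so hash_shift is (15+3-1)/3 == 5 and hash_mask is 0x7fff.  Each update is
+ * then h = ((h << 5) ^ c) & 0x7fff; after MIN_MATCH (3) updates the oldest
+ * byte has been shifted 15 bits and masked away, so the key depends only on
+ * the last 3 input bytes.
+ */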
+
+
+/* ===========================================================================
+ * Insert string str in the dictionary and set match_head to the previous head
+ * of the hash chain (the most recent string with same hash key). Return
+ * the previous length of the hash chain.
+ * If this file is compiled with -DFASTEST, the compression level is forced
+ * to 1, and no hash chains are maintained.
+ * IN  assertion: all calls to INSERT_STRING are made with consecutive
+ *    input characters and the first MIN_MATCH bytes of str are valid
+ *    (except for the last MIN_MATCH-1 bytes of the input file).
+ */
+#ifdef FASTEST
+#define INSERT_STRING(s, str, match_head) \
+   (UPDATE_HASH(s, s->ins_h, s->window[(str) + (MIN_MATCH-1)]), \
+    match_head = s->head[s->ins_h], \
+    s->head[s->ins_h] = (Pos)(str))
+#else
+#define INSERT_STRING(s, str, match_head) \
+   (UPDATE_HASH(s, s->ins_h, s->window[(str) + (MIN_MATCH-1)]), \
+    match_head = s->prev[(str) & s->w_mask] = s->head[s->ins_h], \
+    s->head[s->ins_h] = (Pos)(str))
+#endif
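+
+/* Worked example: head[] and prev[] together form one singly linked list per
+ * hash key.  If positions 100, 400 and 700 all hash to the same key h, then
+ * after the three insertions head[h] == 700, prev[700 & w_mask] == 400 and
+ * prev[400 & w_mask] == 100, so longest_match() walks candidates from the
+ * most recent (smallest distance) back to the oldest.
+ */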
+
+/* ===========================================================================
+ * Initialize the hash table (avoiding 64K overflow for 16 bit systems).
+ * prev[] will be initialized on the fly.
+ */
+#define CLEAR_HASH(s) \
+    s->head[s->hash_size-1] = NIL; \
+    zmemzero((Bytef *)s->head, (unsigned)(s->hash_size-1)*sizeof(*s->head));
+
+/* ========================================================================= */
+int ZEXPORT deflateInit_(strm, level, version, stream_size)
+    z_streamp strm;
+    int level;
+    const char *version;
+    int stream_size;
+{
+    return deflateInit2_(strm, level, Z_DEFLATED, MAX_WBITS, DEF_MEM_LEVEL,
+                         Z_DEFAULT_STRATEGY, version, stream_size);
+    /* To do: ignore strm->next_in if we use it as window */
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateInit2_(strm, level, method, windowBits, memLevel, strategy,
+                  version, stream_size)
+    z_streamp strm;
+    int  level;
+    int  method;
+    int  windowBits;
+    int  memLevel;
+    int  strategy;
+    const char *version;
+    int stream_size;
+{
+    deflate_state *s;
+    int wrap = 1;
+    static const char my_version[] = ZLIB_VERSION;
+
+    ushf *overlay;
+    /* We overlay pending_buf and d_buf+l_buf. This works since the average
+     * output size for (length,distance) codes is <= 24 bits.
+     */
+
+    if (version == Z_NULL || version[0] != my_version[0] ||
+        stream_size != sizeof(z_stream)) {
+        return Z_VERSION_ERROR;
+    }
+    if (strm == Z_NULL) return Z_STREAM_ERROR;
+
+    strm->msg = Z_NULL;
+    if (strm->zalloc == (alloc_func)0) {
+#ifdef Z_SOLO
+        return Z_STREAM_ERROR;
+#else
+        strm->zalloc = zcalloc;
+        strm->opaque = (voidpf)0;
+#endif
+    }
+    if (strm->zfree == (free_func)0)
+#ifdef Z_SOLO
+        return Z_STREAM_ERROR;
+#else
+        strm->zfree = zcfree;
+#endif
+
+#ifdef FASTEST
+    if (level != 0) level = 1;
+#else
+    if (level == Z_DEFAULT_COMPRESSION) level = 6;
+#endif
+
+    if (windowBits < 0) { /* suppress zlib wrapper */
+        wrap = 0;
+        windowBits = -windowBits;
+    }
+#ifdef GZIP
+    else if (windowBits > 15) {
+        wrap = 2;       /* write gzip wrapper instead */
+        windowBits -= 16;
+    }
+#endif
+    if (memLevel < 1 || memLevel > MAX_MEM_LEVEL || method != Z_DEFLATED ||
+        windowBits < 8 || windowBits > 15 || level < 0 || level > 9 ||
+        strategy < 0 || strategy > Z_FIXED) {
+        return Z_STREAM_ERROR;
+    }
+    if (windowBits == 8) windowBits = 9;  /* until 256-byte window bug fixed */
+    s = (deflate_state *) ZALLOC(strm, 1, sizeof(deflate_state));
+    if (s == Z_NULL) return Z_MEM_ERROR;
+    strm->state = (struct internal_state FAR *)s;
+    s->strm = strm;
+
+    s->wrap = wrap;
+    s->gzhead = Z_NULL;
+    s->w_bits = windowBits;
+    s->w_size = 1 << s->w_bits;
+    s->w_mask = s->w_size - 1;
+
+    s->hash_bits = memLevel + 7;
+    s->hash_size = 1 << s->hash_bits;
+    s->hash_mask = s->hash_size - 1;
+    s->hash_shift =  ((s->hash_bits+MIN_MATCH-1)/MIN_MATCH);
+
+    s->window = (Bytef *) ZALLOC(strm, s->w_size, 2*sizeof(Byte));
+    s->prev   = (Posf *)  ZALLOC(strm, s->w_size, sizeof(Pos));
+    s->head   = (Posf *)  ZALLOC(strm, s->hash_size, sizeof(Pos));
+
+    s->high_water = 0;      /* nothing written to s->window yet */
+
+    s->lit_bufsize = 1 << (memLevel + 6); /* 16K elements by default */
+
+    overlay = (ushf *) ZALLOC(strm, s->lit_bufsize, sizeof(ush)+2);
+    s->pending_buf = (uchf *) overlay;
+    s->pending_buf_size = (ulg)s->lit_bufsize * (sizeof(ush)+2L);
+
+    if (s->window == Z_NULL || s->prev == Z_NULL || s->head == Z_NULL ||
+        s->pending_buf == Z_NULL) {
+        s->status = FINISH_STATE;
+        strm->msg = ERR_MSG(Z_MEM_ERROR);
+        deflateEnd (strm);
+        return Z_MEM_ERROR;
+    }
+    s->d_buf = overlay + s->lit_bufsize/sizeof(ush);
+    s->l_buf = s->pending_buf + (1+sizeof(ush))*s->lit_bufsize;
+
+    s->level = level;
+    s->strategy = strategy;
+    s->method = (Byte)method;
+
+    return deflateReset(strm);
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateSetDictionary (strm, dictionary, dictLength)
+    z_streamp strm;
+    const Bytef *dictionary;
+    uInt  dictLength;
+{
+    deflate_state *s;
+    uInt str, n;
+    int wrap;
+    unsigned avail;
+    z_const unsigned char *next;
+
+    if (strm == Z_NULL || strm->state == Z_NULL || dictionary == Z_NULL)
+        return Z_STREAM_ERROR;
+    s = strm->state;
+    wrap = s->wrap;
+    if (wrap == 2 || (wrap == 1 && s->status != INIT_STATE) || s->lookahead)
+        return Z_STREAM_ERROR;
+
+    /* when using zlib wrappers, compute Adler-32 for provided dictionary */
+    if (wrap == 1)
+        strm->adler = adler32(strm->adler, dictionary, dictLength);
+    s->wrap = 0;                    /* avoid computing Adler-32 in read_buf */
+
+    /* if dictionary would fill window, just replace the history */
+    if (dictLength >= s->w_size) {
+        if (wrap == 0) {            /* already empty otherwise */
+            CLEAR_HASH(s);
+            s->strstart = 0;
+            s->block_start = 0L;
+            s->insert = 0;
+        }
+        dictionary += dictLength - s->w_size;  /* use the tail */
+        dictLength = s->w_size;
+    }
+
+    /* insert dictionary into window and hash */
+    avail = strm->avail_in;
+    next = strm->next_in;
+    strm->avail_in = dictLength;
+    strm->next_in = (z_const Bytef *)dictionary;
+    fill_window(s);
+    while (s->lookahead >= MIN_MATCH) {
+        str = s->strstart;
+        n = s->lookahead - (MIN_MATCH-1);
+        do {
+            UPDATE_HASH(s, s->ins_h, s->window[str + MIN_MATCH-1]);
+#ifndef FASTEST
+            s->prev[str & s->w_mask] = s->head[s->ins_h];
+#endif
+            s->head[s->ins_h] = (Pos)str;
+            str++;
+        } while (--n);
+        s->strstart = str;
+        s->lookahead = MIN_MATCH-1;
+        fill_window(s);
+    }
+    s->strstart += s->lookahead;
+    s->block_start = (long)s->strstart;
+    s->insert = s->lookahead;
+    s->lookahead = 0;
+    s->match_length = s->prev_length = MIN_MATCH-1;
+    s->match_available = 0;
+    strm->next_in = next;
+    strm->avail_in = avail;
+    s->wrap = wrap;
+    return Z_OK;
+}
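+
+/* Illustrative sketch (not part of the zlib sources): priming a stream with
+ * a preset dictionary.  The decompressor must pass the same bytes to
+ * inflateSetDictionary() when inflate() returns Z_NEED_DICT.
+ */
+#if 0
+static int init_with_dictionary(z_streamp strm)
+{
+    /* hypothetical dictionary; any bytes likely to recur early in the
+     * input make good candidates
+     */
+    static const Bytef dict[] = "commonly repeated phrases";
+    int ret = deflateInit(strm, Z_BEST_COMPRESSION);
+
+    if (ret == Z_OK)
+        ret = deflateSetDictionary(strm, dict, sizeof(dict) - 1);
+    return ret;   /* caller feeds input through deflate() as usual */
+}
+#endif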
+
+/* ========================================================================= */
+int ZEXPORT deflateResetKeep (strm)
+    z_streamp strm;
+{
+    deflate_state *s;
+
+    if (strm == Z_NULL || strm->state == Z_NULL ||
+        strm->zalloc == (alloc_func)0 || strm->zfree == (free_func)0) {
+        return Z_STREAM_ERROR;
+    }
+
+    strm->total_in = strm->total_out = 0;
+    strm->msg = Z_NULL; /* use zfree if we ever allocate msg dynamically */
+    strm->data_type = Z_UNKNOWN;
+
+    s = (deflate_state *)strm->state;
+    s->pending = 0;
+    s->pending_out = s->pending_buf;
+
+    if (s->wrap < 0) {
+        s->wrap = -s->wrap; /* was made negative by deflate(..., Z_FINISH); */
+    }
+    s->status = s->wrap ? INIT_STATE : BUSY_STATE;
+    strm->adler =
+#ifdef GZIP
+        s->wrap == 2 ? crc32(0L, Z_NULL, 0) :
+#endif
+        adler32(0L, Z_NULL, 0);
+    s->last_flush = Z_NO_FLUSH;
+
+    _tr_init(s);
+
+    return Z_OK;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateReset (strm)
+    z_streamp strm;
+{
+    int ret;
+
+    ret = deflateResetKeep(strm);
+    if (ret == Z_OK)
+        lm_init(strm->state);
+    return ret;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateSetHeader (strm, head)
+    z_streamp strm;
+    gz_headerp head;
+{
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    if (strm->state->wrap != 2) return Z_STREAM_ERROR;
+    strm->state->gzhead = head;
+    return Z_OK;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflatePending (strm, pending, bits)
+    unsigned *pending;
+    int *bits;
+    z_streamp strm;
+{
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    if (pending != Z_NULL)
+        *pending = strm->state->pending;
+    if (bits != Z_NULL)
+        *bits = strm->state->bi_valid;
+    return Z_OK;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflatePrime (strm, bits, value)
+    z_streamp strm;
+    int bits;
+    int value;
+{
+    deflate_state *s;
+    int put;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    s = strm->state;
+    if ((Bytef *)(s->d_buf) < s->pending_out + ((Buf_size + 7) >> 3))
+        return Z_BUF_ERROR;
+    do {
+        put = Buf_size - s->bi_valid;
+        if (put > bits)
+            put = bits;
+        s->bi_buf |= (ush)((value & ((1 << put) - 1)) << s->bi_valid);
+        s->bi_valid += put;
+        _tr_flush_bits(s);
+        value >>= put;
+        bits -= put;
+    } while (bits);
+    return Z_OK;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateParams(strm, level, strategy)
+    z_streamp strm;
+    int level;
+    int strategy;
+{
+    deflate_state *s;
+    compress_func func;
+    int err = Z_OK;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    s = strm->state;
+
+#ifdef FASTEST
+    if (level != 0) level = 1;
+#else
+    if (level == Z_DEFAULT_COMPRESSION) level = 6;
+#endif
+    if (level < 0 || level > 9 || strategy < 0 || strategy > Z_FIXED) {
+        return Z_STREAM_ERROR;
+    }
+    func = configuration_table[s->level].func;
+
+    if ((strategy != s->strategy || func != configuration_table[level].func) &&
+        strm->total_in != 0) {
+        /* Flush the last buffer: */
+        err = deflate(strm, Z_BLOCK);
+        if (err == Z_BUF_ERROR && s->pending == 0)
+            err = Z_OK;
+    }
+    if (s->level != level) {
+        s->level = level;
+        s->max_lazy_match   = configuration_table[level].max_lazy;
+        s->good_match       = configuration_table[level].good_length;
+        s->nice_match       = configuration_table[level].nice_length;
+        s->max_chain_length = configuration_table[level].max_chain;
+    }
+    s->strategy = strategy;
+    return err;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateTune(strm, good_length, max_lazy, nice_length, max_chain)
+    z_streamp strm;
+    int good_length;
+    int max_lazy;
+    int nice_length;
+    int max_chain;
+{
+    deflate_state *s;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    s = strm->state;
+    s->good_match = good_length;
+    s->max_lazy_match = max_lazy;
+    s->nice_match = nice_length;
+    s->max_chain_length = max_chain;
+    return Z_OK;
+}
+
+/* =========================================================================
+ * For the default windowBits of 15 and memLevel of 8, this function returns
+ * a close-to-exact, and small, upper bound on the compressed size.  These
+ * defaults are coded as constants here for a reason--if the #define's are
+ * changed, then this function needs to be changed as well.  The return
+ * value for 15 and 8 only works for those exact settings.
+ *
+ * For any setting other than those defaults for windowBits and memLevel,
+ * the value returned is a conservative worst case for the maximum expansion
+ * resulting from using fixed blocks instead of stored blocks, which deflate
+ * can emit on compressed data for some combinations of the parameters.
+ *
+ * This function could be more sophisticated to provide closer upper bounds for
+ * every combination of windowBits and memLevel.  But even the conservative
+ * upper bound of about 14% expansion does not seem onerous for output buffer
+ * allocation.
+ */
+uLong ZEXPORT deflateBound(strm, sourceLen)
+    z_streamp strm;
+    uLong sourceLen;
+{
+    deflate_state *s;
+    uLong complen, wraplen;
+    Bytef *str;
+
+    /* conservative upper bound for compressed data */
+    complen = sourceLen +
+              ((sourceLen + 7) >> 3) + ((sourceLen + 63) >> 6) + 5;
+
+    /* if can't get parameters, return conservative bound plus zlib wrapper */
+    if (strm == Z_NULL || strm->state == Z_NULL)
+        return complen + 6;
+
+    /* compute wrapper length */
+    s = strm->state;
+    switch (s->wrap) {
+    case 0:                                 /* raw deflate */
+        wraplen = 0;
+        break;
+    case 1:                                 /* zlib wrapper */
+        wraplen = 6 + (s->strstart ? 4 : 0);
+        break;
+    case 2:                                 /* gzip wrapper */
+        wraplen = 18;
+        if (s->gzhead != Z_NULL) {          /* user-supplied gzip header */
+            if (s->gzhead->extra != Z_NULL)
+                wraplen += 2 + s->gzhead->extra_len;
+            str = s->gzhead->name;
+            if (str != Z_NULL)
+                do {
+                    wraplen++;
+                } while (*str++);
+            str = s->gzhead->comment;
+            if (str != Z_NULL)
+                do {
+                    wraplen++;
+                } while (*str++);
+            if (s->gzhead->hcrc)
+                wraplen += 2;
+        }
+        break;
+    default:                                /* for compiler happiness */
+        wraplen = 6;
+    }
+
+    /* if not default parameters, return conservative bound */
+    if (s->w_bits != 15 || s->hash_bits != 8 + 7)
+        return complen + wraplen;
+
+    /* default settings: return tight bound for that case */
+    return sourceLen + (sourceLen >> 12) + (sourceLen >> 14) +
+           (sourceLen >> 25) + 13 - 6 + wraplen;
+}
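+
+/* Illustrative sketch (not part of the zlib sources): sizing the output
+ * buffer from deflateBound() so a single deflate(..., Z_FINISH) call cannot
+ * run out of room; strm is assumed to be already initialized.
+ */
+#if 0
+    uLong bound = deflateBound(&strm, srcLen);
+    Bytef *out = (Bytef *)malloc(bound);        /* needs <stdlib.h> */
+
+    if (out != Z_NULL) {
+        strm.next_out  = out;
+        strm.avail_out = (uInt)bound;
+        /* supply all input, then call deflate(&strm, Z_FINISH) once */
+    }
+#endif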
+
+/* =========================================================================
+ * Put a short in the pending buffer. The 16-bit value is put in MSB order.
+ * IN assertion: the stream state is correct and there is enough room in
+ * pending_buf.
+ */
+local void putShortMSB (s, b)
+    deflate_state *s;
+    uInt b;
+{
+    put_byte(s, (Byte)(b >> 8));
+    put_byte(s, (Byte)(b & 0xff));
+}
+
+/* =========================================================================
+ * Flush as much pending output as possible. All deflate() output goes
+ * through this function so some applications may wish to modify it
+ * to avoid allocating a large strm->next_out buffer and copying into it.
+ * (See also read_buf()).
+ */
+local void flush_pending(strm)
+    z_streamp strm;
+{
+    unsigned len;
+    deflate_state *s = strm->state;
+
+    _tr_flush_bits(s);
+    len = s->pending;
+    if (len > strm->avail_out) len = strm->avail_out;
+    if (len == 0) return;
+
+    zmemcpy(strm->next_out, s->pending_out, len);
+    strm->next_out  += len;
+    s->pending_out  += len;
+    strm->total_out += len;
+    strm->avail_out  -= len;
+    s->pending -= len;
+    if (s->pending == 0) {
+        s->pending_out = s->pending_buf;
+    }
+}
+
+/* ========================================================================= */
+int ZEXPORT deflate (strm, flush)
+    z_streamp strm;
+    int flush;
+{
+    int old_flush; /* value of flush param for previous deflate call */
+    deflate_state *s;
+
+    if (strm == Z_NULL || strm->state == Z_NULL ||
+        flush > Z_BLOCK || flush < 0) {
+        return Z_STREAM_ERROR;
+    }
+    s = strm->state;
+
+    if (strm->next_out == Z_NULL ||
+        (strm->next_in == Z_NULL && strm->avail_in != 0) ||
+        (s->status == FINISH_STATE && flush != Z_FINISH)) {
+        ERR_RETURN(strm, Z_STREAM_ERROR);
+    }
+    if (strm->avail_out == 0) ERR_RETURN(strm, Z_BUF_ERROR);
+
+    s->strm = strm; /* just in case */
+    old_flush = s->last_flush;
+    s->last_flush = flush;
+
+    /* Write the header */
+    if (s->status == INIT_STATE) {
+#ifdef GZIP
+        if (s->wrap == 2) {
+            strm->adler = crc32(0L, Z_NULL, 0);
+            put_byte(s, 31);
+            put_byte(s, 139);
+            put_byte(s, 8);
+            if (s->gzhead == Z_NULL) {
+                put_byte(s, 0);
+                put_byte(s, 0);
+                put_byte(s, 0);
+                put_byte(s, 0);
+                put_byte(s, 0);
+                put_byte(s, s->level == 9 ? 2 :
+                            (s->strategy >= Z_HUFFMAN_ONLY || s->level < 2 ?
+                             4 : 0));
+                put_byte(s, OS_CODE);
+                s->status = BUSY_STATE;
+            }
+            else {
+                put_byte(s, (s->gzhead->text ? 1 : 0) +
+                            (s->gzhead->hcrc ? 2 : 0) +
+                            (s->gzhead->extra == Z_NULL ? 0 : 4) +
+                            (s->gzhead->name == Z_NULL ? 0 : 8) +
+                            (s->gzhead->comment == Z_NULL ? 0 : 16)
+                        );
+                put_byte(s, (Byte)(s->gzhead->time & 0xff));
+                put_byte(s, (Byte)((s->gzhead->time >> 8) & 0xff));
+                put_byte(s, (Byte)((s->gzhead->time >> 16) & 0xff));
+                put_byte(s, (Byte)((s->gzhead->time >> 24) & 0xff));
+                put_byte(s, s->level == 9 ? 2 :
+                            (s->strategy >= Z_HUFFMAN_ONLY || s->level < 2 ?
+                             4 : 0));
+                put_byte(s, s->gzhead->os & 0xff);
+                if (s->gzhead->extra != Z_NULL) {
+                    put_byte(s, s->gzhead->extra_len & 0xff);
+                    put_byte(s, (s->gzhead->extra_len >> 8) & 0xff);
+                }
+                if (s->gzhead->hcrc)
+                    strm->adler = crc32(strm->adler, s->pending_buf,
+                                        s->pending);
+                s->gzindex = 0;
+                s->status = EXTRA_STATE;
+            }
+        }
+        else
+#endif
+        {
+            uInt header = (Z_DEFLATED + ((s->w_bits-8)<<4)) << 8;
+            uInt level_flags;
+
+            if (s->strategy >= Z_HUFFMAN_ONLY || s->level < 2)
+                level_flags = 0;
+            else if (s->level < 6)
+                level_flags = 1;
+            else if (s->level == 6)
+                level_flags = 2;
+            else
+                level_flags = 3;
+            header |= (level_flags << 6);
+            if (s->strstart != 0) header |= PRESET_DICT;
+            header += 31 - (header % 31);
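+            /* Worked example: with the default w_bits of 15 and level 6,
+             * header starts as 0x7800, level_flags of 2 raises it to
+             * 0x7880, and rounding up to a multiple of 31 adds 28, giving
+             * the familiar two-byte zlib header 0x78 0x9c.
+             */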
+
+            s->status = BUSY_STATE;
+            putShortMSB(s, header);
+
+            /* Save the adler32 of the preset dictionary: */
+            if (s->strstart != 0) {
+                putShortMSB(s, (uInt)(strm->adler >> 16));
+                putShortMSB(s, (uInt)(strm->adler & 0xffff));
+            }
+            strm->adler = adler32(0L, Z_NULL, 0);
+        }
+    }
+#ifdef GZIP
+    if (s->status == EXTRA_STATE) {
+        if (s->gzhead->extra != Z_NULL) {
+            uInt beg = s->pending;  /* start of bytes to update crc */
+
+            while (s->gzindex < (s->gzhead->extra_len & 0xffff)) {
+                if (s->pending == s->pending_buf_size) {
+                    if (s->gzhead->hcrc && s->pending > beg)
+                        strm->adler = crc32(strm->adler, s->pending_buf + beg,
+                                            s->pending - beg);
+                    flush_pending(strm);
+                    beg = s->pending;
+                    if (s->pending == s->pending_buf_size)
+                        break;
+                }
+                put_byte(s, s->gzhead->extra[s->gzindex]);
+                s->gzindex++;
+            }
+            if (s->gzhead->hcrc && s->pending > beg)
+                strm->adler = crc32(strm->adler, s->pending_buf + beg,
+                                    s->pending - beg);
+            if (s->gzindex == s->gzhead->extra_len) {
+                s->gzindex = 0;
+                s->status = NAME_STATE;
+            }
+        }
+        else
+            s->status = NAME_STATE;
+    }
+    if (s->status == NAME_STATE) {
+        if (s->gzhead->name != Z_NULL) {
+            uInt beg = s->pending;  /* start of bytes to update crc */
+            int val;
+
+            do {
+                if (s->pending == s->pending_buf_size) {
+                    if (s->gzhead->hcrc && s->pending > beg)
+                        strm->adler = crc32(strm->adler, s->pending_buf + beg,
+                                            s->pending - beg);
+                    flush_pending(strm);
+                    beg = s->pending;
+                    if (s->pending == s->pending_buf_size) {
+                        val = 1;
+                        break;
+                    }
+                }
+                val = s->gzhead->name[s->gzindex++];
+                put_byte(s, val);
+            } while (val != 0);
+            if (s->gzhead->hcrc && s->pending > beg)
+                strm->adler = crc32(strm->adler, s->pending_buf + beg,
+                                    s->pending - beg);
+            if (val == 0) {
+                s->gzindex = 0;
+                s->status = COMMENT_STATE;
+            }
+        }
+        else
+            s->status = COMMENT_STATE;
+    }
+    if (s->status == COMMENT_STATE) {
+        if (s->gzhead->comment != Z_NULL) {
+            uInt beg = s->pending;  /* start of bytes to update crc */
+            int val;
+
+            do {
+                if (s->pending == s->pending_buf_size) {
+                    if (s->gzhead->hcrc && s->pending > beg)
+                        strm->adler = crc32(strm->adler, s->pending_buf + beg,
+                                            s->pending - beg);
+                    flush_pending(strm);
+                    beg = s->pending;
+                    if (s->pending == s->pending_buf_size) {
+                        val = 1;
+                        break;
+                    }
+                }
+                val = s->gzhead->comment[s->gzindex++];
+                put_byte(s, val);
+            } while (val != 0);
+            if (s->gzhead->hcrc && s->pending > beg)
+                strm->adler = crc32(strm->adler, s->pending_buf + beg,
+                                    s->pending - beg);
+            if (val == 0)
+                s->status = HCRC_STATE;
+        }
+        else
+            s->status = HCRC_STATE;
+    }
+    if (s->status == HCRC_STATE) {
+        if (s->gzhead->hcrc) {
+            if (s->pending + 2 > s->pending_buf_size)
+                flush_pending(strm);
+            if (s->pending + 2 <= s->pending_buf_size) {
+                put_byte(s, (Byte)(strm->adler & 0xff));
+                put_byte(s, (Byte)((strm->adler >> 8) & 0xff));
+                strm->adler = crc32(0L, Z_NULL, 0);
+                s->status = BUSY_STATE;
+            }
+        }
+        else
+            s->status = BUSY_STATE;
+    }
+#endif
+
+    /* Flush as much pending output as possible */
+    if (s->pending != 0) {
+        flush_pending(strm);
+        if (strm->avail_out == 0) {
+            /* Since avail_out is 0, deflate will be called again with
+             * more output space, but possibly with both pending and
+             * avail_in equal to zero. There won't be anything to do,
+             * but this is not an error situation so make sure we
+             * return OK instead of BUF_ERROR at next call of deflate:
+             */
+            s->last_flush = -1;
+            return Z_OK;
+        }
+
+    /* Make sure there is something to do and avoid duplicate consecutive
+     * flushes. For repeated and useless calls with Z_FINISH, we keep
+     * returning Z_STREAM_END instead of Z_BUF_ERROR.
+     */
+    } else if (strm->avail_in == 0 && RANK(flush) <= RANK(old_flush) &&
+               flush != Z_FINISH) {
+        ERR_RETURN(strm, Z_BUF_ERROR);
+    }
+
+    /* User must not provide more input after the first FINISH: */
+    if (s->status == FINISH_STATE && strm->avail_in != 0) {
+        ERR_RETURN(strm, Z_BUF_ERROR);
+    }
+
+    /* Start a new block or continue the current one.
+     */
+    if (strm->avail_in != 0 || s->lookahead != 0 ||
+        (flush != Z_NO_FLUSH && s->status != FINISH_STATE)) {
+        block_state bstate;
+
+        bstate = s->strategy == Z_HUFFMAN_ONLY ? deflate_huff(s, flush) :
+                    (s->strategy == Z_RLE ? deflate_rle(s, flush) :
+                        (*(configuration_table[s->level].func))(s, flush));
+
+        if (bstate == finish_started || bstate == finish_done) {
+            s->status = FINISH_STATE;
+        }
+        if (bstate == need_more || bstate == finish_started) {
+            if (strm->avail_out == 0) {
+                s->last_flush = -1; /* avoid BUF_ERROR next call, see above */
+            }
+            return Z_OK;
+            /* If flush != Z_NO_FLUSH && avail_out == 0, the next call
+             * of deflate should use the same flush parameter to make sure
+             * that the flush is complete. So we don't have to output an
+             * empty block here, this will be done at next call. This also
+             * ensures that for a very small output buffer, we emit at most
+             * one empty block.
+             */
+        }
+        if (bstate == block_done) {
+            if (flush == Z_PARTIAL_FLUSH) {
+                _tr_align(s);
+            } else if (flush != Z_BLOCK) { /* FULL_FLUSH or SYNC_FLUSH */
+                _tr_stored_block(s, (char*)0, 0L, 0);
+                /* For a full flush, this empty block will be recognized
+                 * as a special marker by inflate_sync().
+                 */
+                if (flush == Z_FULL_FLUSH) {
+                    CLEAR_HASH(s);             /* forget history */
+                    if (s->lookahead == 0) {
+                        s->strstart = 0;
+                        s->block_start = 0L;
+                        s->insert = 0;
+                    }
+                }
+            }
+            flush_pending(strm);
+            if (strm->avail_out == 0) {
+              s->last_flush = -1; /* avoid BUF_ERROR at next call, see above */
+              return Z_OK;
+            }
+        }
+    }
+    Assert(strm->avail_out > 0, "bug2");
+
+    if (flush != Z_FINISH) return Z_OK;
+    if (s->wrap <= 0) return Z_STREAM_END;
+
+    /* Write the trailer */
+#ifdef GZIP
+    if (s->wrap == 2) {
+        put_byte(s, (Byte)(strm->adler & 0xff));
+        put_byte(s, (Byte)((strm->adler >> 8) & 0xff));
+        put_byte(s, (Byte)((strm->adler >> 16) & 0xff));
+        put_byte(s, (Byte)((strm->adler >> 24) & 0xff));
+        put_byte(s, (Byte)(strm->total_in & 0xff));
+        put_byte(s, (Byte)((strm->total_in >> 8) & 0xff));
+        put_byte(s, (Byte)((strm->total_in >> 16) & 0xff));
+        put_byte(s, (Byte)((strm->total_in >> 24) & 0xff));
+    }
+    else
+#endif
+    {
+        putShortMSB(s, (uInt)(strm->adler >> 16));
+        putShortMSB(s, (uInt)(strm->adler & 0xffff));
+    }
+    flush_pending(strm);
+    /* If avail_out is zero, the application will call deflate again
+     * to flush the rest.
+     */
+    if (s->wrap > 0) s->wrap = -s->wrap; /* write the trailer only once! */
+    return s->pending != 0 ? Z_OK : Z_STREAM_END;
+}
+
+/* ========================================================================= */
+int ZEXPORT deflateEnd (strm)
+    z_streamp strm;
+{
+    int status;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+
+    status = strm->state->status;
+    if (status != INIT_STATE &&
+        status != EXTRA_STATE &&
+        status != NAME_STATE &&
+        status != COMMENT_STATE &&
+        status != HCRC_STATE &&
+        status != BUSY_STATE &&
+        status != FINISH_STATE) {
+      return Z_STREAM_ERROR;
+    }
+
+    /* Deallocate in reverse order of allocations: */
+    TRY_FREE(strm, strm->state->pending_buf);
+    TRY_FREE(strm, strm->state->head);
+    TRY_FREE(strm, strm->state->prev);
+    TRY_FREE(strm, strm->state->window);
+
+    ZFREE(strm, strm->state);
+    strm->state = Z_NULL;
+
+    return status == BUSY_STATE ? Z_DATA_ERROR : Z_OK;
+}
+
+/* =========================================================================
+ * Copy the source state to the destination state.
+ * To simplify the source, this is not supported for 16-bit MSDOS (which
+ * doesn't have enough memory anyway to duplicate compression states).
+ */
+int ZEXPORT deflateCopy (dest, source)
+    z_streamp dest;
+    z_streamp source;
+{
+#ifdef MAXSEG_64K
+    return Z_STREAM_ERROR;
+#else
+    deflate_state *ds;
+    deflate_state *ss;
+    ushf *overlay;
+
+
+    if (source == Z_NULL || dest == Z_NULL || source->state == Z_NULL) {
+        return Z_STREAM_ERROR;
+    }
+
+    ss = source->state;
+
+    zmemcpy((voidpf)dest, (voidpf)source, sizeof(z_stream));
+
+    ds = (deflate_state *) ZALLOC(dest, 1, sizeof(deflate_state));
+    if (ds == Z_NULL) return Z_MEM_ERROR;
+    dest->state = (struct internal_state FAR *) ds;
+    zmemcpy((voidpf)ds, (voidpf)ss, sizeof(deflate_state));
+    ds->strm = dest;
+
+    ds->window = (Bytef *) ZALLOC(dest, ds->w_size, 2*sizeof(Byte));
+    ds->prev   = (Posf *)  ZALLOC(dest, ds->w_size, sizeof(Pos));
+    ds->head   = (Posf *)  ZALLOC(dest, ds->hash_size, sizeof(Pos));
+    overlay = (ushf *) ZALLOC(dest, ds->lit_bufsize, sizeof(ush)+2);
+    ds->pending_buf = (uchf *) overlay;
+
+    if (ds->window == Z_NULL || ds->prev == Z_NULL || ds->head == Z_NULL ||
+        ds->pending_buf == Z_NULL) {
+        deflateEnd (dest);
+        return Z_MEM_ERROR;
+    }
+    /* the following zmemcpy calls do not work for 16-bit MSDOS */
+    zmemcpy(ds->window, ss->window, ds->w_size * 2 * sizeof(Byte));
+    zmemcpy((voidpf)ds->prev, (voidpf)ss->prev, ds->w_size * sizeof(Pos));
+    zmemcpy((voidpf)ds->head, (voidpf)ss->head, ds->hash_size * sizeof(Pos));
+    zmemcpy(ds->pending_buf, ss->pending_buf, (uInt)ds->pending_buf_size);
+
+    ds->pending_out = ds->pending_buf + (ss->pending_out - ss->pending_buf);
+    ds->d_buf = overlay + ds->lit_bufsize/sizeof(ush);
+    ds->l_buf = ds->pending_buf + (1+sizeof(ush))*ds->lit_bufsize;
+
+    ds->l_desc.dyn_tree = ds->dyn_ltree;
+    ds->d_desc.dyn_tree = ds->dyn_dtree;
+    ds->bl_desc.dyn_tree = ds->bl_tree;
+
+    return Z_OK;
+#endif /* MAXSEG_64K */
+}
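+
+/* One possible use (not part of the zlib sources): checkpoint a stream so a
+ * speculative flush or parameter change can be rolled back.
+ */
+#if 0
+    z_stream snapshot;
+
+    if (deflateCopy(&snapshot, &strm) == Z_OK) {
+        /* ... experiment with strm; on failure resume from snapshot ... */
+        deflateEnd(&snapshot);
+    }
+#endif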
+
+/* ===========================================================================
+ * Read a new buffer from the current input stream, update the adler32
+ * and total number of bytes read.  All deflate() input goes through
+ * this function so some applications may wish to modify it to avoid
+ * allocating a large strm->next_in buffer and copying from it.
+ * (See also flush_pending()).
+ */
+local int read_buf(strm, buf, size)
+    z_streamp strm;
+    Bytef *buf;
+    unsigned size;
+{
+    unsigned len = strm->avail_in;
+
+    if (len > size) len = size;
+    if (len == 0) return 0;
+
+    strm->avail_in  -= len;
+
+    zmemcpy(buf, strm->next_in, len);
+    if (strm->state->wrap == 1) {
+        strm->adler = adler32(strm->adler, buf, len);
+    }
+#ifdef GZIP
+    else if (strm->state->wrap == 2) {
+        strm->adler = crc32(strm->adler, buf, len);
+    }
+#endif
+    strm->next_in  += len;
+    strm->total_in += len;
+
+    return (int)len;
+}
+
+/* ===========================================================================
+ * Initialize the "longest match" routines for a new zlib stream
+ */
+local void lm_init (s)
+    deflate_state *s;
+{
+    s->window_size = (ulg)2L*s->w_size;
+
+    CLEAR_HASH(s);
+
+    /* Set the default configuration parameters:
+     */
+    s->max_lazy_match   = configuration_table[s->level].max_lazy;
+    s->good_match       = configuration_table[s->level].good_length;
+    s->nice_match       = configuration_table[s->level].nice_length;
+    s->max_chain_length = configuration_table[s->level].max_chain;
+
+    s->strstart = 0;
+    s->block_start = 0L;
+    s->lookahead = 0;
+    s->insert = 0;
+    s->match_length = s->prev_length = MIN_MATCH-1;
+    s->match_available = 0;
+    s->ins_h = 0;
+#ifndef FASTEST
+#ifdef ASMV
+    match_init(); /* initialize the asm code */
+#endif
+#endif
+}
+
+#ifndef FASTEST
+/* ===========================================================================
+ * Set match_start to the longest match starting at the given string and
+ * return its length. Matches shorter or equal to prev_length are discarded,
+ * in which case the result is equal to prev_length and match_start is
+ * garbage.
+ * IN assertions: cur_match is the head of the hash chain for the current
+ *   string (strstart) and its distance is <= MAX_DIST, and prev_length >= 1
+ * OUT assertion: the match length is not greater than s->lookahead.
+ */
+#ifndef ASMV
+/* For 80x86 and 680x0, an optimized version will be provided in match.asm or
+ * match.S. The code will be functionally equivalent.
+ */
+local uInt longest_match(s, cur_match)
+    deflate_state *s;
+    IPos cur_match;                             /* current match */
+{
+    unsigned chain_length = s->max_chain_length;/* max hash chain length */
+    register Bytef *scan = s->window + s->strstart; /* current string */
+    register Bytef *match;                       /* matched string */
+    register int len;                           /* length of current match */
+    int best_len = s->prev_length;              /* best match length so far */
+    int nice_match = s->nice_match;             /* stop if match long enough */
+    IPos limit = s->strstart > (IPos)MAX_DIST(s) ?
+        s->strstart - (IPos)MAX_DIST(s) : NIL;
+    /* Stop when cur_match becomes <= limit. To simplify the code,
+     * we prevent matches with the string of window index 0.
+     */
+    Posf *prev = s->prev;
+    uInt wmask = s->w_mask;
+
+#ifdef UNALIGNED_OK
+    /* Compare two bytes at a time. Note: this is not always beneficial.
+     * Try with and without -DUNALIGNED_OK to check.
+     */
+    register Bytef *strend = s->window + s->strstart + MAX_MATCH - 1;
+    register ush scan_start = *(ushf*)scan;
+    register ush scan_end   = *(ushf*)(scan+best_len-1);
+#else
+    register Bytef *strend = s->window + s->strstart + MAX_MATCH;
+    register Byte scan_end1  = scan[best_len-1];
+    register Byte scan_end   = scan[best_len];
+#endif
+
+    /* The code is optimized for HASH_BITS >= 8 and MAX_MATCH-2 multiple of 16.
+     * It is easy to get rid of this optimization if necessary.
+     */
+    Assert(s->hash_bits >= 8 && MAX_MATCH == 258, "Code too clever");
+
+    /* Do not waste too much time if we already have a good match: */
+    if (s->prev_length >= s->good_match) {
+        chain_length >>= 2;
+    }
+    /* Do not look for matches beyond the end of the input. This is necessary
+     * to make deflate deterministic.
+     */
+    if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead;
+
+    Assert((ulg)s->strstart <= s->window_size-MIN_LOOKAHEAD, "need lookahead");
+
+    do {
+        Assert(cur_match < s->strstart, "no future");
+        match = s->window + cur_match;
+
+        /* Skip to next match if the match length cannot increase
+         * or if the match length is less than 2.  Note that the checks below
+         * for insufficient lookahead only occur occasionally for performance
+         * reasons.  Therefore uninitialized memory will be accessed, and
+         * conditional jumps will be made that depend on those values.
+         * However the length of the match is limited to the lookahead, so
+         * the output of deflate is not affected by the uninitialized values.
+         */
+#if (defined(UNALIGNED_OK) && MAX_MATCH == 258)
+        /* This code assumes sizeof(unsigned short) == 2. Do not use
+         * UNALIGNED_OK if your compiler uses a different size.
+         */
+        if (*(ushf*)(match+best_len-1) != scan_end ||
+            *(ushf*)match != scan_start) continue;
+
+        /* It is not necessary to compare scan[2] and match[2] since they are
+         * always equal when the other bytes match, given that the hash keys
+         * are equal and that HASH_BITS >= 8. Compare 2 bytes at a time at
+         * strstart+3, +5, ... up to strstart+257. We check for insufficient
+         * lookahead only every 4th comparison; the 128th check will be made
+         * at strstart+257. If MAX_MATCH-2 is not a multiple of 8, it is
+         * necessary to put more guard bytes at the end of the window, or
+         * to check more often for insufficient lookahead.
+         */
+        Assert(scan[2] == match[2], "scan[2]?");
+        scan++, match++;
+        do {
+        } while (*(ushf*)(scan+=2) == *(ushf*)(match+=2) &&
+                 *(ushf*)(scan+=2) == *(ushf*)(match+=2) &&
+                 *(ushf*)(scan+=2) == *(ushf*)(match+=2) &&
+                 *(ushf*)(scan+=2) == *(ushf*)(match+=2) &&
+                 scan < strend);
+        /* The funny "do {}" generates better code on most compilers */
+
+        /* Here, scan <= window+strstart+257 */
+        Assert(scan <= s->window+(unsigned)(s->window_size-1), "wild scan");
+        if (*scan == *match) scan++;
+
+        len = (MAX_MATCH - 1) - (int)(strend-scan);
+        scan = strend - (MAX_MATCH-1);
+
+#else /* UNALIGNED_OK */
+
+        if (match[best_len]   != scan_end  ||
+            match[best_len-1] != scan_end1 ||
+            *match            != *scan     ||
+            *++match          != scan[1])      continue;
+
+        /* The check at best_len-1 can be removed because it will be made
+         * again later. (This heuristic is not always a win.)
+         * It is not necessary to compare scan[2] and match[2] since they
+         * are always equal when the other bytes match, given that
+         * the hash keys are equal and that HASH_BITS >= 8.
+         */
+        scan += 2, match++;
+        Assert(*scan == *match, "match[2]?");
+
+        /* We check for insufficient lookahead only every 8th comparison;
+         * the 256th check will be made at strstart+258.
+         */
+        do {
+        } while (*++scan == *++match && *++scan == *++match &&
+                 *++scan == *++match && *++scan == *++match &&
+                 *++scan == *++match && *++scan == *++match &&
+                 *++scan == *++match && *++scan == *++match &&
+                 scan < strend);
+
+        Assert(scan <= s->window+(unsigned)(s->window_size-1), "wild scan");
+
+        len = MAX_MATCH - (int)(strend - scan);
+        scan = strend - MAX_MATCH;
+
+#endif /* UNALIGNED_OK */
+
+        if (len > best_len) {
+            s->match_start = cur_match;
+            best_len = len;
+            if (len >= nice_match) break;
+#ifdef UNALIGNED_OK
+            scan_end = *(ushf*)(scan+best_len-1);
+#else
+            scan_end1  = scan[best_len-1];
+            scan_end   = scan[best_len];
+#endif
+        }
+    } while ((cur_match = prev[cur_match & wmask]) > limit
+             && --chain_length != 0);
+
+    if ((uInt)best_len <= s->lookahead) return (uInt)best_len;
+    return s->lookahead;
+}
+#endif /* ASMV */
+
+#else /* FASTEST */
+
+/* ---------------------------------------------------------------------------
+ * Optimized version for FASTEST only
+ */
+local uInt longest_match(s, cur_match)
+    deflate_state *s;
+    IPos cur_match;                             /* current match */
+{
+    register Bytef *scan = s->window + s->strstart; /* current string */
+    register Bytef *match;                       /* matched string */
+    register int len;                           /* length of current match */
+    register Bytef *strend = s->window + s->strstart + MAX_MATCH;
+
+    /* The code is optimized for HASH_BITS >= 8 and MAX_MATCH-2 multiple of 16.
+     * It is easy to get rid of this optimization if necessary.
+     */
+    Assert(s->hash_bits >= 8 && MAX_MATCH == 258, "Code too clever");
+
+    Assert((ulg)s->strstart <= s->window_size-MIN_LOOKAHEAD, "need lookahead");
+
+    Assert(cur_match < s->strstart, "no future");
+
+    match = s->window + cur_match;
+
+    /* Return failure if the match length is less than 2:
+     */
+    if (match[0] != scan[0] || match[1] != scan[1]) return MIN_MATCH-1;
+
+    /* The check at best_len-1 can be removed because it will be made
+     * again later. (This heuristic is not always a win.)
+     * It is not necessary to compare scan[2] and match[2] since they
+     * are always equal when the other bytes match, given that
+     * the hash keys are equal and that HASH_BITS >= 8.
+     */
+    scan += 2, match += 2;
+    Assert(*scan == *match, "match[2]?");
+
+    /* We check for insufficient lookahead only every 8th comparison;
+     * the 256th check will be made at strstart+258.
+     */
+    do {
+    } while (*++scan == *++match && *++scan == *++match &&
+             *++scan == *++match && *++scan == *++match &&
+             *++scan == *++match && *++scan == *++match &&
+             *++scan == *++match && *++scan == *++match &&
+             scan < strend);
+
+    Assert(scan <= s->window+(unsigned)(s->window_size-1), "wild scan");
+
+    len = MAX_MATCH - (int)(strend - scan);
+
+    if (len < MIN_MATCH) return MIN_MATCH - 1;
+
+    s->match_start = cur_match;
+    return (uInt)len <= s->lookahead ? (uInt)len : s->lookahead;
+}
+
+#endif /* FASTEST */
+
+#ifdef DEBUG
+/* ===========================================================================
+ * Check that the match at match_start is indeed a match.
+ */
+local void check_match(s, start, match, length)
+    deflate_state *s;
+    IPos start, match;
+    int length;
+{
+    /* check that the match is indeed a match */
+    if (zmemcmp(s->window + match,
+                s->window + start, length) != EQUAL) {
+        fprintf(stderr, " start %u, match %u, length %d\n",
+                start, match, length);
+        do {
+            fprintf(stderr, "%c%c", s->window[match++], s->window[start++]);
+        } while (--length != 0);
+        z_error("invalid match");
+    }
+    if (z_verbose > 1) {
+        fprintf(stderr,"\\[%d,%d]", start-match, length);
+        do { putc(s->window[start++], stderr); } while (--length != 0);
+    }
+}
+#else
+#  define check_match(s, start, match, length)
+#endif /* DEBUG */
+
+/* ===========================================================================
+ * Fill the window when the lookahead becomes insufficient.
+ * Updates strstart and lookahead.
+ *
+ * IN assertion: lookahead < MIN_LOOKAHEAD
+ * OUT assertions: strstart <= window_size-MIN_LOOKAHEAD
+ *    At least one byte has been read, or avail_in == 0; reads are
+ *    performed for at least two bytes (required for the zip translate_eol
+ *    option -- not supported here).
+ */
+local void fill_window(s)
+    deflate_state *s;
+{
+    register unsigned n, m;
+    register Posf *p;
+    unsigned more;    /* Amount of free space at the end of the window. */
+    uInt wsize = s->w_size;
+
+    Assert(s->lookahead < MIN_LOOKAHEAD, "already enough lookahead");
+
+    do {
+        more = (unsigned)(s->window_size -(ulg)s->lookahead -(ulg)s->strstart);
+
+        /* Deal with !@#$% 64K limit: */
+        if (sizeof(int) <= 2) {
+            if (more == 0 && s->strstart == 0 && s->lookahead == 0) {
+                more = wsize;
+
+            } else if (more == (unsigned)(-1)) {
+                /* Very unlikely, but possible on 16 bit machine if
+                 * strstart == 0 && lookahead == 1 (input done a byte at time)
+                 */
+                more--;
+            }
+        }
+
+        /* If the window is almost full and there is insufficient lookahead,
+         * move the upper half to the lower one to make room in the upper half.
+         */
+        if (s->strstart >= wsize+MAX_DIST(s)) {
+
+            zmemcpy(s->window, s->window+wsize, (unsigned)wsize);
+            s->match_start -= wsize;
+            s->strstart    -= wsize; /* we now have strstart >= MAX_DIST */
+            s->block_start -= (long) wsize;
+
+            /* Slide the hash table (could be avoided with 32 bit values
+               at the expense of memory usage). We slide even when level == 0
+               to keep the hash table consistent if we switch back to level > 0
+               later. (Using level 0 permanently is not an optimal usage of
+               zlib, so we don't care about this pathological case.)
+             */
+            n = s->hash_size;
+            p = &s->head[n];
+            do {
+                m = *--p;
+                *p = (Pos)(m >= wsize ? m-wsize : NIL);
+            } while (--n);
+
+            n = wsize;
+#ifndef FASTEST
+            p = &s->prev[n];
+            do {
+                m = *--p;
+                *p = (Pos)(m >= wsize ? m-wsize : NIL);
+                /* If n is not on any hash chain, prev[n] is garbage but
+                 * its value will never be used.
+                 */
+            } while (--n);
+#endif
+            more += wsize;
+        }
+        if (s->strm->avail_in == 0) break;
+
+        /* If there was no sliding:
+         *    strstart <= WSIZE+MAX_DIST-1 && lookahead <= MIN_LOOKAHEAD - 1 &&
+         *    more == window_size - lookahead - strstart
+         * => more >= window_size - (MIN_LOOKAHEAD-1 + WSIZE + MAX_DIST-1)
+         * => more >= window_size - 2*WSIZE + 2
+         * In the BIG_MEM or MMAP case (not yet supported),
+         *   window_size == input_size + MIN_LOOKAHEAD  &&
+         *   strstart + s->lookahead <= input_size => more >= MIN_LOOKAHEAD.
+         * Otherwise, window_size == 2*WSIZE so more >= 2.
+         * If there was sliding, more >= WSIZE. So in all cases, more >= 2.
+         */
+        Assert(more >= 2, "more < 2");
+
+        n = read_buf(s->strm, s->window + s->strstart + s->lookahead, more);
+        s->lookahead += n;
+
+        /* Initialize the hash value now that we have some input: */
+        if (s->lookahead + s->insert >= MIN_MATCH) {
+            uInt str = s->strstart - s->insert;
+            s->ins_h = s->window[str];
+            UPDATE_HASH(s, s->ins_h, s->window[str + 1]);
+#if MIN_MATCH != 3
+            Call UPDATE_HASH() MIN_MATCH-3 more times
+#endif
+            while (s->insert) {
+                UPDATE_HASH(s, s->ins_h, s->window[str + MIN_MATCH-1]);
+#ifndef FASTEST
+                s->prev[str & s->w_mask] = s->head[s->ins_h];
+#endif
+                s->head[s->ins_h] = (Pos)str;
+                str++;
+                s->insert--;
+                if (s->lookahead + s->insert < MIN_MATCH)
+                    break;
+            }
+        }
+        /* If the whole input has less than MIN_MATCH bytes, ins_h is garbage,
+         * but this is not important since only literal bytes will be emitted.
+         */
+
+    } while (s->lookahead < MIN_LOOKAHEAD && s->strm->avail_in != 0);
+
+    /* If the WIN_INIT bytes after the end of the current data have never been
+     * written, then zero those bytes in order to avoid memory check reports of
+     * the use of uninitialized (or uninitialised as Julian writes) bytes by
+     * the longest match routines.  Update the high water mark for the next
+     * time through here.  WIN_INIT is set to MAX_MATCH since the longest match
+     * routines allow scanning to strstart + MAX_MATCH, ignoring lookahead.
+     */
+    if (s->high_water < s->window_size) {
+        ulg curr = s->strstart + (ulg)(s->lookahead);
+        ulg init;
+
+        if (s->high_water < curr) {
+            /* Previous high water mark below current data -- zero WIN_INIT
+             * bytes or up to end of window, whichever is less.
+             */
+            init = s->window_size - curr;
+            if (init > WIN_INIT)
+                init = WIN_INIT;
+            zmemzero(s->window + curr, (unsigned)init);
+            s->high_water = curr + init;
+        }
+        else if (s->high_water < (ulg)curr + WIN_INIT) {
+            /* High water mark at or above current data, but below current data
+             * plus WIN_INIT -- zero out to current data plus WIN_INIT, or up
+             * to end of window, whichever is less.
+             */
+            init = (ulg)curr + WIN_INIT - s->high_water;
+            if (init > s->window_size - s->high_water)
+                init = s->window_size - s->high_water;
+            zmemzero(s->window + s->high_water, (unsigned)init);
+            s->high_water += init;
+        }
+    }
+
+    Assert((ulg)s->strstart <= s->window_size - MIN_LOOKAHEAD,
+           "not enough room for search");
+}
+
+/* ===========================================================================
+ * Flush the current block, with given end-of-file flag.
+ * IN assertion: strstart is set to the end of the current match.
+ */
+#define FLUSH_BLOCK_ONLY(s, last) { \
+   _tr_flush_block(s, (s->block_start >= 0L ? \
+                   (charf *)&s->window[(unsigned)s->block_start] : \
+                   (charf *)Z_NULL), \
+                (ulg)((long)s->strstart - s->block_start), \
+                (last)); \
+   s->block_start = s->strstart; \
+   flush_pending(s->strm); \
+   Tracev((stderr,"[FLUSH]")); \
+}
+
+/* Same but force premature exit if necessary. */
+#define FLUSH_BLOCK(s, last) { \
+   FLUSH_BLOCK_ONLY(s, last); \
+   if (s->strm->avail_out == 0) return (last) ? finish_started : need_more; \
+}
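+
+/* Note that FLUSH_BLOCK expands to a `return` out of the *enclosing*
+ * deflate_* function when the output buffer fills, so it may only appear
+ * where returning a block_state is valid.
+ */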
+
+/* ===========================================================================
+ * Copy without compression as much as possible from the input stream, return
+ * the current block state.
+ * This function does not insert new strings in the dictionary since
+ * incompressible data is probably not useful. This function is used
+ * only for the level=0 compression option.
+ * NOTE: this function should be optimized to avoid extra copying from
+ * window to pending_buf.
+ */
+local block_state deflate_stored(s, flush)
+    deflate_state *s;
+    int flush;
+{
+    /* Stored blocks are limited to 0xffff bytes, pending_buf is limited
+     * to pending_buf_size, and each stored block has a 5 byte header:
+     */
+    ulg max_block_size = 0xffff;
+    ulg max_start;
+
+    if (max_block_size > s->pending_buf_size - 5) {
+        max_block_size = s->pending_buf_size - 5;
+    }
+
+    /* Copy as much as possible from input to output: */
+    for (;;) {
+        /* Fill the window as much as possible: */
+        if (s->lookahead <= 1) {
+
+            Assert(s->strstart < s->w_size+MAX_DIST(s) ||
+                   s->block_start >= (long)s->w_size, "slide too late");
+
+            fill_window(s);
+            if (s->lookahead == 0 && flush == Z_NO_FLUSH) return need_more;
+
+            if (s->lookahead == 0) break; /* flush the current block */
+        }
+        Assert(s->block_start >= 0L, "block gone");
+
+        s->strstart += s->lookahead;
+        s->lookahead = 0;
+
+        /* Emit a stored block if pending_buf will be full: */
+        max_start = s->block_start + max_block_size;
+        if (s->strstart == 0 || (ulg)s->strstart >= max_start) {
+            /* strstart == 0 is possible after wraparound on a 16-bit machine */
+            s->lookahead = (uInt)(s->strstart - max_start);
+            s->strstart = (uInt)max_start;
+            FLUSH_BLOCK(s, 0);
+        }
+        /* Flush if we may have to slide, otherwise block_start may become
+         * negative and the data will be gone:
+         */
+        if (s->strstart - (uInt)s->block_start >= MAX_DIST(s)) {
+            FLUSH_BLOCK(s, 0);
+        }
+    }
+    s->insert = 0;
+    if (flush == Z_FINISH) {
+        FLUSH_BLOCK(s, 1);
+        return finish_done;
+    }
+    if ((long)s->strstart > s->block_start)
+        FLUSH_BLOCK(s, 0);
+    return block_done;
+}
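+
+/* The level-0 path above is also reachable through the one-call wrapper;
+ * a sketch (illustrative only, not part of this file):
+ */
+#if 0
+#include "zlib.h"
+
+int store_only(Bytef *dst, uLongf *dstlen, const Bytef *src, uLong srclen)
+{
+    /* level 0 selects deflate_stored(): output is stored blocks only,
+     * each costing a 5-byte header on top of the raw bytes */
+    return compress2(dst, dstlen, src, srclen, 0);
+}
+#endif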
+
+/* ===========================================================================
+ * Compress as much as possible from the input stream, return the current
+ * block state.
+ * This function does not perform lazy evaluation of matches and inserts
+ * new strings in the dictionary only for unmatched strings or for short
+ * matches. It is used only for the fast compression options.
+ */
+local block_state deflate_fast(s, flush)
+    deflate_state *s;
+    int flush;
+{
+    IPos hash_head;       /* head of the hash chain */
+    int bflush;           /* set if current block must be flushed */
+
+    for (;;) {
+        /* Make sure that we always have enough lookahead, except
+         * at the end of the input file. We need MAX_MATCH bytes
+         * for the next match, plus MIN_MATCH bytes to insert the
+         * string following the next match.
+         */
+        if (s->lookahead < MIN_LOOKAHEAD) {
+            fill_window(s);
+            if (s->lookahead < MIN_LOOKAHEAD && flush == Z_NO_FLUSH) {
+                return need_more;
+            }
+            if (s->lookahead == 0) break; /* flush the current block */
+        }
+
+        /* Insert the string window[strstart .. strstart+2] in the
+         * dictionary, and set hash_head to the head of the hash chain:
+         */
+        hash_head = NIL;
+        if (s->lookahead >= MIN_MATCH) {
+            INSERT_STRING(s, s->strstart, hash_head);
+        }
+
+        /* Find the longest match, discarding those <= prev_length.
+         * At this point we always have match_length < MIN_MATCH.
+         */
+        if (hash_head != NIL && s->strstart - hash_head <= MAX_DIST(s)) {
+            /* To simplify the code, we prevent matches with the string
+             * of window index 0 (in particular we have to avoid a match
+             * of the string with itself at the start of the input file).
+             */
+            s->match_length = longest_match (s, hash_head);
+            /* longest_match() sets match_start */
+        }
+        if (s->match_length >= MIN_MATCH) {
+            check_match(s, s->strstart, s->match_start, s->match_length);
+
+            _tr_tally_dist(s, s->strstart - s->match_start,
+                           s->match_length - MIN_MATCH, bflush);
+
+            s->lookahead -= s->match_length;
+
+            /* Insert new strings in the hash table only if the match length
+             * is not too large. This saves time but degrades compression.
+             */
+#ifndef FASTEST
+            if (s->match_length <= s->max_insert_length &&
+                s->lookahead >= MIN_MATCH) {
+                s->match_length--; /* string at strstart already in table */
+                do {
+                    s->strstart++;
+                    INSERT_STRING(s, s->strstart, hash_head);
+                    /* strstart never exceeds WSIZE-MAX_MATCH, so there are
+                     * always MIN_MATCH bytes ahead.
+                     */
+                } while (--s->match_length != 0);
+                s->strstart++;
+            } else
+#endif
+            {
+                s->strstart += s->match_length;
+                s->match_length = 0;
+                s->ins_h = s->window[s->strstart];
+                UPDATE_HASH(s, s->ins_h, s->window[s->strstart+1]);
+#if MIN_MATCH != 3
+                Call UPDATE_HASH() MIN_MATCH-3 more times /* deliberate compile-time reminder if MIN_MATCH != 3 */
+#endif
+                /* If lookahead < MIN_MATCH, ins_h is garbage, but it does not
+                 * matter since it will be recomputed at next deflate call.
+                 */
+            }
+        } else {
+            /* No match, output a literal byte */
+            Tracevv((stderr,"%c", s->window[s->strstart]));
+            _tr_tally_lit (s, s->window[s->strstart], bflush);
+            s->lookahead--;
+            s->strstart++;
+        }
+        if (bflush) FLUSH_BLOCK(s, 0);
+    }
+    s->insert = s->strstart < MIN_MATCH-1 ? s->strstart : MIN_MATCH-1;
+    if (flush == Z_FINISH) {
+        FLUSH_BLOCK(s, 1);
+        return finish_done;
+    }
+    if (s->last_lit)
+        FLUSH_BLOCK(s, 0);
+    return block_done;
+}
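+
+/* Self-contained sketch of the rolling-hash idea behind UPDATE_HASH and
+ * INSERT_STRING (names and constants here are illustrative, not zlib's
+ * actual macros): each step shifts the hash and mixes in one new byte, so
+ * after MIN_MATCH (3) steps the oldest byte no longer contributes.
+ */
+#if 0
+enum { H_BITS = 15, H_SHIFT = 5, H_MASK = (1 << H_BITS) - 1 };
+
+static unsigned roll_hash(unsigned h, unsigned char c)
+{
+    return ((h << H_SHIFT) ^ c) & H_MASK;   /* 3*H_SHIFT >= H_BITS */
+}
+#endif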
+
+#ifndef FASTEST
+/* ===========================================================================
+ * Same as above, but achieves better compression. We use a lazy
+ * evaluation for matches: a match is finally adopted only if there is
+ * no better match at the next window position.
+ */
+local block_state deflate_slow(s, flush)
+    deflate_state *s;
+    int flush;
+{
+    IPos hash_head;          /* head of hash chain */
+    int bflush;              /* set if current block must be flushed */
+
+    /* Process the input block. */
+    for (;;) {
+        /* Make sure that we always have enough lookahead, except
+         * at the end of the input file. We need MAX_MATCH bytes
+         * for the next match, plus MIN_MATCH bytes to insert the
+         * string following the next match.
+         */
+        if (s->lookahead < MIN_LOOKAHEAD) {
+            fill_window(s);
+            if (s->lookahead < MIN_LOOKAHEAD && flush == Z_NO_FLUSH) {
+                return need_more;
+            }
+            if (s->lookahead == 0) break; /* flush the current block */
+        }
+
+        /* Insert the string window[strstart .. strstart+2] in the
+         * dictionary, and set hash_head to the head of the hash chain:
+         */
+        hash_head = NIL;
+        if (s->lookahead >= MIN_MATCH) {
+            INSERT_STRING(s, s->strstart, hash_head);
+        }
+
+        /* Find the longest match, discarding those <= prev_length.
+         */
+        s->prev_length = s->match_length, s->prev_match = s->match_start;
+        s->match_length = MIN_MATCH-1;
+
+        if (hash_head != NIL && s->prev_length < s->max_lazy_match &&
+            s->strstart - hash_head <= MAX_DIST(s)) {
+            /* To simplify the code, we prevent matches with the string
+             * of window index 0 (in particular we have to avoid a match
+             * of the string with itself at the start of the input file).
+             */
+            s->match_length = longest_match (s, hash_head);
+            /* longest_match() sets match_start */
+
+            if (s->match_length <= 5 && (s->strategy == Z_FILTERED
+#if TOO_FAR <= 32767
+                || (s->match_length == MIN_MATCH &&
+                    s->strstart - s->match_start > TOO_FAR)
+#endif
+                )) {
+
+                /* If prev_match is also MIN_MATCH, match_start is garbage
+                 * but we will ignore the current match anyway.
+                 */
+                s->match_length = MIN_MATCH-1;
+            }
+        }
+        /* If there was a match at the previous step and the current
+         * match is not better, output the previous match:
+         */
+        if (s->prev_length >= MIN_MATCH && s->match_length <= s->prev_length) {
+            uInt max_insert = s->strstart + s->lookahead - MIN_MATCH;
+            /* Do not insert strings in hash table beyond this. */
+
+            check_match(s, s->strstart-1, s->prev_match, s->prev_length);
+
+            _tr_tally_dist(s, s->strstart -1 - s->prev_match,
+                           s->prev_length - MIN_MATCH, bflush);
+
+            /* Insert in hash table all strings up to the end of the match.
+             * strstart-1 and strstart are already inserted. If there is not
+             * enough lookahead, the last two strings are not inserted in
+             * the hash table.
+             */
+            s->lookahead -= s->prev_length-1;
+            s->prev_length -= 2;
+            do {
+                if (++s->strstart <= max_insert) {
+                    INSERT_STRING(s, s->strstart, hash_head);
+                }
+            } while (--s->prev_length != 0);
+            s->match_available = 0;
+            s->match_length = MIN_MATCH-1;
+            s->strstart++;
+
+            if (bflush) FLUSH_BLOCK(s, 0);
+
+        } else if (s->match_available) {
+            /* If there was no match at the previous position, output a
+             * single literal. If there was a match but the current match
+             * is longer, truncate the previous match to a single literal.
+             */
+            Tracevv((stderr,"%c", s->window[s->strstart-1]));
+            _tr_tally_lit(s, s->window[s->strstart-1], bflush);
+            if (bflush) {
+                FLUSH_BLOCK_ONLY(s, 0);
+            }
+            s->strstart++;
+            s->lookahead--;
+            if (s->strm->avail_out == 0) return need_more;
+        } else {
+            /* There is no previous match to compare with, wait for
+             * the next step to decide.
+             */
+            s->match_available = 1;
+            s->strstart++;
+            s->lookahead--;
+        }
+    }
+    Assert (flush != Z_NO_FLUSH, "no flush?");
+    if (s->match_available) {
+        Tracevv((stderr,"%c", s->window[s->strstart-1]));
+        _tr_tally_lit(s, s->window[s->strstart-1], bflush);
+        s->match_available = 0;
+    }
+    s->insert = s->strstart < MIN_MATCH-1 ? s->strstart : MIN_MATCH-1;
+    if (flush == Z_FINISH) {
+        FLUSH_BLOCK(s, 1);
+        return finish_done;
+    }
+    if (s->last_lit)
+        FLUSH_BLOCK(s, 0);
+    return block_done;
+}
+#endif /* FASTEST */
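+
+/* Worked example of the lazy heuristic (illustrative input): suppose the
+ * window already contains "abc" and "bcde" and the text at strstart is
+ * "abcde".  Greedy matching emits the length-3 match "abc" plus two
+ * literals; lazy matching notices that the match starting one byte later
+ * ("bcde", length 4) is longer, so it emits the literal 'a' followed by
+ * the length-4 match, one token fewer for the same data.
+ */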
+
+/* ===========================================================================
+ * For Z_RLE, simply look for runs of bytes, generate matches only of distance
+ * one.  Do not maintain a hash table.  (It will be regenerated if this run of
+ * deflate switches away from Z_RLE.)
+ */
+local block_state deflate_rle(s, flush)
+    deflate_state *s;
+    int flush;
+{
+    int bflush;             /* set if current block must be flushed */
+    uInt prev;              /* byte at distance one to match */
+    Bytef *scan, *strend;   /* scan goes up to strend for length of run */
+
+    for (;;) {
+        /* Make sure that we always have enough lookahead, except
+         * at the end of the input file. We need MAX_MATCH bytes
+         * for the longest run, plus one for the unrolled loop.
+         */
+        if (s->lookahead <= MAX_MATCH) {
+            fill_window(s);
+            if (s->lookahead <= MAX_MATCH && flush == Z_NO_FLUSH) {
+                return need_more;
+            }
+            if (s->lookahead == 0) break; /* flush the current block */
+        }
+
+        /* See how many times the previous byte repeats */
+        s->match_length = 0;
+        if (s->lookahead >= MIN_MATCH && s->strstart > 0) {
+            scan = s->window + s->strstart - 1;
+            prev = *scan;
+            if (prev == *++scan && prev == *++scan && prev == *++scan) {
+                strend = s->window + s->strstart + MAX_MATCH;
+                do {
+                } while (prev == *++scan && prev == *++scan &&
+                         prev == *++scan && prev == *++scan &&
+                         prev == *++scan && prev == *++scan &&
+                         prev == *++scan && prev == *++scan &&
+                         scan < strend);
+                s->match_length = MAX_MATCH - (int)(strend - scan);
+                if (s->match_length > s->lookahead)
+                    s->match_length = s->lookahead;
+            }
+            Assert(scan <= s->window+(uInt)(s->window_size-1), "wild scan");
+        }
+
+        /* Emit match if have run of MIN_MATCH or longer, else emit literal */
+        if (s->match_length >= MIN_MATCH) {
+            check_match(s, s->strstart, s->strstart - 1, s->match_length);
+
+            _tr_tally_dist(s, 1, s->match_length - MIN_MATCH, bflush);
+
+            s->lookahead -= s->match_length;
+            s->strstart += s->match_length;
+            s->match_length = 0;
+        } else {
+            /* No match, output a literal byte */
+            Tracevv((stderr,"%c", s->window[s->strstart]));
+            _tr_tally_lit (s, s->window[s->strstart], bflush);
+            s->lookahead--;
+            s->strstart++;
+        }
+        if (bflush) FLUSH_BLOCK(s, 0);
+    }
+    s->insert = 0;
+    if (flush == Z_FINISH) {
+        FLUSH_BLOCK(s, 1);
+        return finish_done;
+    }
+    if (s->last_lit)
+        FLUSH_BLOCK(s, 0);
+    return block_done;
+}
+
+/* ===========================================================================
+ * For Z_HUFFMAN_ONLY, do not look for matches.  Do not maintain a hash table.
+ * (It will be regenerated if this run of deflate switches away from Huffman.)
+ */
+local block_state deflate_huff(s, flush)
+    deflate_state *s;
+    int flush;
+{
+    int bflush;             /* set if current block must be flushed */
+
+    for (;;) {
+        /* Make sure that we have a literal to write. */
+        if (s->lookahead == 0) {
+            fill_window(s);
+            if (s->lookahead == 0) {
+                if (flush == Z_NO_FLUSH)
+                    return need_more;
+                break;      /* flush the current block */
+            }
+        }
+
+        /* Output a literal byte */
+        s->match_length = 0;
+        Tracevv((stderr,"%c", s->window[s->strstart]));
+        _tr_tally_lit (s, s->window[s->strstart], bflush);
+        s->lookahead--;
+        s->strstart++;
+        if (bflush) FLUSH_BLOCK(s, 0);
+    }
+    s->insert = 0;
+    if (flush == Z_FINISH) {
+        FLUSH_BLOCK(s, 1);
+        return finish_done;
+    }
+    if (s->last_lit)
+        FLUSH_BLOCK(s, 0);
+    return block_done;
+}
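+
+/* Both strategies above are selected through the public API; a sketch
+ * (illustrative only, using the documented default windowBits/memLevel):
+ */
+#if 0
+#include "zlib.h"
+
+int init_rle(z_streamp strm)
+{
+    /* Z_RLE drives deflate_rle(); Z_HUFFMAN_ONLY would drive deflate_huff() */
+    return deflateInit2(strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
+                        15 /* windowBits */, 8 /* memLevel */, Z_RLE);
+}
+#endif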
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/deflate.h b/c-blosc/internal-complibs/zlib-1.2.8/deflate.h
new file mode 100644
index 0000000..ce0299e
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/deflate.h
@@ -0,0 +1,346 @@
+/* deflate.h -- internal compression state
+ * Copyright (C) 1995-2012 Jean-loup Gailly
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+   part of the implementation of the compression library and is
+   subject to change. Applications should only use zlib.h.
+ */
+
+/* @(#) $Id$ */
+
+#ifndef DEFLATE_H
+#define DEFLATE_H
+
+#include "zutil.h"
+
+/* define NO_GZIP when compiling if you want to disable gzip header and
+   trailer creation by deflate().  NO_GZIP would be used to avoid linking in
+   the crc code when it is not needed.  For shared libraries, gzip encoding
+   should be left enabled. */
+#ifndef NO_GZIP
+#  define GZIP
+#endif
+
+/* ===========================================================================
+ * Internal compression state.
+ */
+
+#define LENGTH_CODES 29
+/* number of length codes, not counting the special END_BLOCK code */
+
+#define LITERALS  256
+/* number of literal bytes 0..255 */
+
+#define L_CODES (LITERALS+1+LENGTH_CODES)
+/* number of Literal or Length codes, including the END_BLOCK code */
+
+#define D_CODES   30
+/* number of distance codes */
+
+#define BL_CODES  19
+/* number of codes used to transfer the bit lengths */
+
+#define HEAP_SIZE (2*L_CODES+1)
+/* maximum heap size */
+
+#define MAX_BITS 15
+/* All codes must not exceed MAX_BITS bits */
+
+#define Buf_size 16
+/* size of bit buffer in bi_buf */
+
+#define INIT_STATE    42
+#define EXTRA_STATE   69
+#define NAME_STATE    73
+#define COMMENT_STATE 91
+#define HCRC_STATE   103
+#define BUSY_STATE   113
+#define FINISH_STATE 666
+/* Stream status */
+
+
+/* Data structure describing a single value and its code string. */
+typedef struct ct_data_s {
+    union {
+        ush  freq;       /* frequency count */
+        ush  code;       /* bit string */
+    } fc;
+    union {
+        ush  dad;        /* father node in Huffman tree */
+        ush  len;        /* length of bit string */
+    } dl;
+} FAR ct_data;
+
+#define Freq fc.freq
+#define Code fc.code
+#define Dad  dl.dad
+#define Len  dl.len
+
+typedef struct static_tree_desc_s  static_tree_desc;
+
+typedef struct tree_desc_s {
+    ct_data *dyn_tree;           /* the dynamic tree */
+    int     max_code;            /* largest code with non zero frequency */
+    static_tree_desc *stat_desc; /* the corresponding static tree */
+} FAR tree_desc;
+
+typedef ush Pos;
+typedef Pos FAR Posf;
+typedef unsigned IPos;
+
+/* A Pos is an index in the character window. We use short instead of int to
+ * save space in the various tables. IPos is used only for parameter passing.
+ */
+
+typedef struct internal_state {
+    z_streamp strm;      /* pointer back to this zlib stream */
+    int   status;        /* as the name implies */
+    Bytef *pending_buf;  /* output still pending */
+    ulg   pending_buf_size; /* size of pending_buf */
+    Bytef *pending_out;  /* next pending byte to output to the stream */
+    uInt   pending;      /* nb of bytes in the pending buffer */
+    int   wrap;          /* bit 0 true for zlib, bit 1 true for gzip */
+    gz_headerp  gzhead;  /* gzip header information to write */
+    uInt   gzindex;      /* where in extra, name, or comment */
+    Byte  method;        /* can only be DEFLATED */
+    int   last_flush;    /* value of flush param for previous deflate call */
+
+                /* used by deflate.c: */
+
+    uInt  w_size;        /* LZ77 window size (32K by default) */
+    uInt  w_bits;        /* log2(w_size)  (8..16) */
+    uInt  w_mask;        /* w_size - 1 */
+
+    Bytef *window;
+    /* Sliding window. Input bytes are read into the second half of the window,
+     * and move to the first half later to keep a dictionary of at least wSize
+     * bytes. With this organization, matches are limited to a distance of
+     * wSize-MAX_MATCH bytes, but this ensures that IO is always
+     * performed with a length multiple of the block size. Also, it limits
+     * the window size to 64K, which is quite useful on MSDOS.
+     * To do: use the user input buffer as sliding window.
+     */
+
+    ulg window_size;
+    /* Actual size of window: 2*wSize, except when the user input buffer
+     * is directly used as sliding window.
+     */
+
+    Posf *prev;
+    /* Link to older string with same hash index. To limit the size of this
+     * array to 64K, this link is maintained only for the last 32K strings.
+     * An index in this array is thus a window index modulo 32K.
+     */
+
+    Posf *head; /* Heads of the hash chains or NIL. */
+
+    uInt  ins_h;          /* hash index of string to be inserted */
+    uInt  hash_size;      /* number of elements in hash table */
+    uInt  hash_bits;      /* log2(hash_size) */
+    uInt  hash_mask;      /* hash_size-1 */
+
+    uInt  hash_shift;
+    /* Number of bits by which ins_h must be shifted at each input
+     * step. It must be such that after MIN_MATCH steps, the oldest
+     * byte no longer takes part in the hash key, that is:
+     *   hash_shift * MIN_MATCH >= hash_bits
+     */
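+    /* Illustrative arithmetic: for the default memLevel of 8, hash_bits is
+     * 15 and MIN_MATCH is 3, giving hash_shift == (15+3-1)/3 == 5, and
+     * indeed 5*3 >= 15.
+     */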
+
+    long block_start;
+    /* Window position at the beginning of the current output block. Gets
+     * negative when the window is moved backwards.
+     */
+
+    uInt match_length;           /* length of best match */
+    IPos prev_match;             /* previous match */
+    int match_available;         /* set if previous match exists */
+    uInt strstart;               /* start of string to insert */
+    uInt match_start;            /* start of matching string */
+    uInt lookahead;              /* number of valid bytes ahead in window */
+
+    uInt prev_length;
+    /* Length of the best match at previous step. Matches not greater than this
+     * are discarded. This is used in the lazy match evaluation.
+     */
+
+    uInt max_chain_length;
+    /* To speed up deflation, hash chains are never searched beyond this
+     * length.  A higher limit improves compression ratio but degrades the
+     * speed.
+     */
+
+    uInt max_lazy_match;
+    /* Attempt to find a better match only when the current match is strictly
+     * smaller than this value. This mechanism is used only for compression
+     * levels >= 4.
+     */
+#   define max_insert_length  max_lazy_match
+    /* Insert new strings in the hash table only if the match length is not
+     * greater than this length. This saves time but degrades compression.
+     * max_insert_length is used only for compression levels <= 3.
+     */
+
+    int level;    /* compression level (1..9) */
+    int strategy; /* favor or force Huffman coding*/
+
+    uInt good_match;
+    /* Use a faster search when the previous match is longer than this */
+
+    int nice_match; /* Stop searching when current match exceeds this */
+
+                /* used by trees.c: */
+    /* Didn't use the ct_data typedef here, to suppress a compiler warning */
+    struct ct_data_s dyn_ltree[HEAP_SIZE];   /* literal and length tree */
+    struct ct_data_s dyn_dtree[2*D_CODES+1]; /* distance tree */
+    struct ct_data_s bl_tree[2*BL_CODES+1];  /* Huffman tree for bit lengths */
+
+    struct tree_desc_s l_desc;               /* desc. for literal tree */
+    struct tree_desc_s d_desc;               /* desc. for distance tree */
+    struct tree_desc_s bl_desc;              /* desc. for bit length tree */
+
+    ush bl_count[MAX_BITS+1];
+    /* number of codes at each bit length for an optimal tree */
+
+    int heap[2*L_CODES+1];      /* heap used to build the Huffman trees */
+    int heap_len;               /* number of elements in the heap */
+    int heap_max;               /* element of largest frequency */
+    /* The sons of heap[n] are heap[2*n] and heap[2*n+1]. heap[0] is not used.
+     * The same heap array is used to build all trees.
+     */
+
+    uch depth[2*L_CODES+1];
+    /* Depth of each subtree used as tie breaker for trees of equal frequency
+     */
+
+    uchf *l_buf;          /* buffer for literals or lengths */
+
+    uInt  lit_bufsize;
+    /* Size of match buffer for literals/lengths.  There are 4 reasons for
+     * limiting lit_bufsize to 64K:
+     *   - frequencies can be kept in 16 bit counters
+     *   - if compression is not successful for the first block, all input
+     *     data is still in the window so we can still emit a stored block even
+     *     when input comes from standard input.  (This can also be done for
+     *     all blocks if lit_bufsize is not greater than 32K.)
+     *   - if compression is not successful for a file smaller than 64K, we can
+     *     even emit a stored file instead of a stored block (saving 5 bytes).
+     *     This is applicable only for zip (not gzip or zlib).
+     *   - creating new Huffman trees less frequently may not provide fast
+     *     adaptation to changes in the input data statistics. (Take for
+     *     example a binary file with poorly compressible code followed by
+     *     a highly compressible string table.) Smaller buffer sizes give
+     *     fast adaptation but have of course the overhead of transmitting
+     *     trees more frequently.
+     *   - I can't count above 4
+     */
+
+    uInt last_lit;      /* running index in l_buf */
+
+    ushf *d_buf;
+    /* Buffer for distances. To simplify the code, d_buf and l_buf have
+     * the same number of elements. To use different lengths, an extra flag
+     * array would be necessary.
+     */
+
+    ulg opt_len;        /* bit length of current block with optimal trees */
+    ulg static_len;     /* bit length of current block with static trees */
+    uInt matches;       /* number of string matches in current block */
+    uInt insert;        /* bytes at end of window left to insert */
+
+#ifdef DEBUG
+    ulg compressed_len; /* total bit length of compressed file mod 2^32 */
+    ulg bits_sent;      /* bit length of compressed data sent mod 2^32 */
+#endif
+
+    ush bi_buf;
+    /* Output buffer. bits are inserted starting at the bottom (least
+     * significant bits).
+     */
+    int bi_valid;
+    /* Number of valid bits in bi_buf.  All bits above the last valid bit
+     * are always zero.
+     */
+
+    ulg high_water;
+    /* High water mark offset in window for initialized bytes -- bytes above
+     * this are set to zero in order to avoid memory check warnings when
+     * longest match routines access bytes past the input.  This is then
+     * updated to the new high water mark.
+     */
+
+} FAR deflate_state;
+
+/* Output a byte on the stream.
+ * IN assertion: there is enough room in pending_buf.
+ */
+#define put_byte(s, c) {s->pending_buf[s->pending++] = (c);}
+
+
+#define MIN_LOOKAHEAD (MAX_MATCH+MIN_MATCH+1)
+/* Minimum amount of lookahead, except at the end of the input file.
+ * See deflate.c for comments about the MIN_MATCH+1.
+ */
+
+#define MAX_DIST(s)  ((s)->w_size-MIN_LOOKAHEAD)
+/* In order to simplify the code, particularly on 16 bit machines, match
+ * distances are limited to MAX_DIST instead of WSIZE.
+ */
+
+#define WIN_INIT MAX_MATCH
+/* Number of bytes after end of data in window to initialize in order to avoid
+   memory checker errors from longest match routines */
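+
+/* Illustrative arithmetic: with MAX_MATCH == 258 and MIN_MATCH == 3,
+   MIN_LOOKAHEAD is 262, so a default 32K window gives
+   MAX_DIST(s) == 32768 - 262 == 32506. */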
+
+        /* in trees.c */
+void ZLIB_INTERNAL _tr_init OF((deflate_state *s));
+int ZLIB_INTERNAL _tr_tally OF((deflate_state *s, unsigned dist, unsigned lc));
+void ZLIB_INTERNAL _tr_flush_block OF((deflate_state *s, charf *buf,
+                        ulg stored_len, int last));
+void ZLIB_INTERNAL _tr_flush_bits OF((deflate_state *s));
+void ZLIB_INTERNAL _tr_align OF((deflate_state *s));
+void ZLIB_INTERNAL _tr_stored_block OF((deflate_state *s, charf *buf,
+                        ulg stored_len, int last));
+
+#define d_code(dist) \
+   ((dist) < 256 ? _dist_code[dist] : _dist_code[256+((dist)>>7)])
+/* Mapping from a distance to a distance code. dist is the distance - 1 and
+ * must not have side effects. _dist_code[256] and _dist_code[257] are never
+ * used.
+ */
+
+#ifndef DEBUG
+/* Inline versions of _tr_tally for speed: */
+
+#if defined(GEN_TREES_H) || !defined(STDC)
+  extern uch ZLIB_INTERNAL _length_code[];
+  extern uch ZLIB_INTERNAL _dist_code[];
+#else
+  extern const uch ZLIB_INTERNAL _length_code[];
+  extern const uch ZLIB_INTERNAL _dist_code[];
+#endif
+
+# define _tr_tally_lit(s, c, flush) \
+  { uch cc = (c); \
+    s->d_buf[s->last_lit] = 0; \
+    s->l_buf[s->last_lit++] = cc; \
+    s->dyn_ltree[cc].Freq++; \
+    flush = (s->last_lit == s->lit_bufsize-1); \
+   }
+# define _tr_tally_dist(s, distance, length, flush) \
+  { uch len = (length); \
+    ush dist = (distance); \
+    s->d_buf[s->last_lit] = dist; \
+    s->l_buf[s->last_lit++] = len; \
+    dist--; \
+    s->dyn_ltree[_length_code[len]+LITERALS+1].Freq++; \
+    s->dyn_dtree[d_code(dist)].Freq++; \
+    flush = (s->last_lit == s->lit_bufsize-1); \
+  }
+#else
+# define _tr_tally_lit(s, c, flush) flush = _tr_tally(s, 0, c)
+# define _tr_tally_dist(s, distance, length, flush) \
+              flush = _tr_tally(s, distance, length)
+#endif
+
+#endif /* DEFLATE_H */
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/gzclose.c b/c-blosc/internal-complibs/zlib-1.2.8/gzclose.c
new file mode 100644
index 0000000..caeb99a
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/gzclose.c
@@ -0,0 +1,25 @@
+/* gzclose.c -- zlib gzclose() function
+ * Copyright (C) 2004, 2010 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#include "gzguts.h"
+
+/* gzclose() is in a separate file so that it is linked in only if it is used.
+   That way the other gzclose functions can be used instead to avoid linking in
+   unneeded compression or decompression routines. */
+int ZEXPORT gzclose(file)
+    gzFile file;
+{
+#ifndef NO_GZCOMPRESS
+    gz_statep state;
+
+    if (file == NULL)
+        return Z_STREAM_ERROR;
+    state = (gz_statep)file;
+
+    return state->mode == GZ_READ ? gzclose_r(file) : gzclose_w(file);
+#else
+    return gzclose_r(file);
+#endif
+}
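+
+/* Typical use of the gz* layer as a whole (illustrative sketch; the file
+   name is arbitrary): */
+#if 0
+#include "zlib.h"
+
+int write_gz(const char *text)
+{
+    gzFile f = gzopen("example.gz", "wb9");     /* write, level 9 */
+
+    if (f == NULL)
+        return -1;
+    if (gzputs(f, text) < 0) {
+        gzclose(f);
+        return -1;
+    }
+    return gzclose(f) == Z_OK ? 0 : -1;
+}
+#endif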
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/gzguts.h b/c-blosc/internal-complibs/zlib-1.2.8/gzguts.h
new file mode 100644
index 0000000..d87659d
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/gzguts.h
@@ -0,0 +1,209 @@
+/* gzguts.h -- zlib internal header definitions for gz* operations
+ * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#ifdef _LARGEFILE64_SOURCE
+#  ifndef _LARGEFILE_SOURCE
+#    define _LARGEFILE_SOURCE 1
+#  endif
+#  ifdef _FILE_OFFSET_BITS
+#    undef _FILE_OFFSET_BITS
+#  endif
+#endif
+
+#ifdef HAVE_HIDDEN
+#  define ZLIB_INTERNAL __attribute__((visibility ("hidden")))
+#else
+#  define ZLIB_INTERNAL
+#endif
+
+#include <stdio.h>
+#include "zlib.h"
+#ifdef STDC
+#  include <string.h>
+#  include <stdlib.h>
+#  include <limits.h>
+#endif
+#include <fcntl.h>
+
+#ifdef _WIN32
+#  include <stddef.h>
+#endif
+
+#if defined(__TURBOC__) || defined(_MSC_VER) || defined(_WIN32)
+#  include <io.h>
+#endif
+
+#ifdef WINAPI_FAMILY
+#  define open _open
+#  define read _read
+#  define write _write
+#  define close _close
+#endif
+
+#ifdef NO_DEFLATE       /* for compatibility with old definition */
+#  define NO_GZCOMPRESS
+#endif
+
+#if defined(STDC99) || (defined(__TURBOC__) && __TURBOC__ >= 0x550)
+#  ifndef HAVE_VSNPRINTF
+#    define HAVE_VSNPRINTF
+#  endif
+#endif
+
+#if defined(__CYGWIN__)
+#  ifndef HAVE_VSNPRINTF
+#    define HAVE_VSNPRINTF
+#  endif
+#endif
+
+#if defined(MSDOS) && defined(__BORLANDC__) && (BORLANDC > 0x410)
+#  ifndef HAVE_VSNPRINTF
+#    define HAVE_VSNPRINTF
+#  endif
+#endif
+
+#ifndef HAVE_VSNPRINTF
+#  ifdef MSDOS
+/* vsnprintf may exist on some MS-DOS compilers (DJGPP?),
+   but for now we just assume it doesn't. */
+#    define NO_vsnprintf
+#  endif
+#  ifdef __TURBOC__
+#    define NO_vsnprintf
+#  endif
+#  ifdef WIN32
+/* In Win32, vsnprintf is available as the "non-ANSI" _vsnprintf. */
+#    if !defined(vsnprintf) && !defined(NO_vsnprintf)
+#      if !defined(_MSC_VER) || ( defined(_MSC_VER) && _MSC_VER < 1500 )
+#         define vsnprintf _vsnprintf
+#      endif
+#    endif
+#  endif
+#  ifdef __SASC
+#    define NO_vsnprintf
+#  endif
+#  ifdef VMS
+#    define NO_vsnprintf
+#  endif
+#  ifdef __OS400__
+#    define NO_vsnprintf
+#  endif
+#  ifdef __MVS__
+#    define NO_vsnprintf
+#  endif
+#endif
+
+/* unlike snprintf (which is required in C99, yet still not supported by
+   Microsoft more than a decade later!), _snprintf does not guarantee null
+   termination of the result -- however this is only used in gzlib.c where
+   the result is assured to fit in the space provided */
+#ifdef _MSC_VER
+#  define snprintf _snprintf
+#endif
+
+#ifndef local
+#  define local static
+#endif
+/* compile with -Dlocal if your debugger can't find static symbols */
+
+/* gz* functions always use library allocation functions */
+#ifndef STDC
+  extern voidp  malloc OF((uInt size));
+  extern void   free   OF((voidpf ptr));
+#endif
+
+/* get errno and strerror definition */
+#if defined UNDER_CE
+#  include <windows.h>
+#  define zstrerror() gz_strwinerror((DWORD)GetLastError())
+#else
+#  ifndef NO_STRERROR
+#    include <errno.h>
+#    define zstrerror() strerror(errno)
+#  else
+#    define zstrerror() "stdio error (consult errno)"
+#  endif
+#endif
+
+/* provide prototypes for these when building zlib without LFS */
+#if !defined(_LARGEFILE64_SOURCE) || _LFS64_LARGEFILE-0 == 0
+    ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
+    ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int));
+    ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile));
+    ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile));
+#endif
+
+/* default memLevel */
+#if MAX_MEM_LEVEL >= 8
+#  define DEF_MEM_LEVEL 8
+#else
+#  define DEF_MEM_LEVEL  MAX_MEM_LEVEL
+#endif
+
+/* default i/o buffer size -- double this for output when reading (this and
+   twice this must be able to fit in an unsigned type) */
+#define GZBUFSIZE 8192
+
+/* gzip modes, also provide a little integrity check on the passed structure */
+#define GZ_NONE 0
+#define GZ_READ 7247
+#define GZ_WRITE 31153
+#define GZ_APPEND 1     /* mode set to GZ_WRITE after the file is opened */
+
+/* values for gz_state how */
+#define LOOK 0      /* look for a gzip header */
+#define COPY 1      /* copy input directly */
+#define GZIP 2      /* decompress a gzip stream */
+
+/* internal gzip file state data structure */
+typedef struct {
+        /* exposed contents for gzgetc() macro */
+    struct gzFile_s x;      /* "x" for exposed */
+                            /* x.have: number of bytes available at x.next */
+                            /* x.next: next output data to deliver or write */
+                            /* x.pos: current position in uncompressed data */
+        /* used for both reading and writing */
+    int mode;               /* see gzip modes above */
+    int fd;                 /* file descriptor */
+    char *path;             /* path or fd for error messages */
+    unsigned size;          /* buffer size, zero if not allocated yet */
+    unsigned want;          /* requested buffer size, default is GZBUFSIZE */
+    unsigned char *in;      /* input buffer */
+    unsigned char *out;     /* output buffer (double-sized when reading) */
+    int direct;             /* 0 if processing gzip, 1 if transparent */
+        /* just for reading */
+    int how;                /* 0: get header, 1: copy, 2: decompress */
+    z_off64_t start;        /* where the gzip data started, for rewinding */
+    int eof;                /* true if end of input file reached */
+    int past;               /* true if read requested past end */
+        /* just for writing */
+    int level;              /* compression level */
+    int strategy;           /* compression strategy */
+        /* seek request */
+    z_off64_t skip;         /* amount to skip (already rewound if backwards) */
+    int seek;               /* true if seek request pending */
+        /* error information */
+    int err;                /* error code */
+    char *msg;              /* error message */
+        /* zlib inflate or deflate stream */
+    z_stream strm;          /* stream structure in-place (not a pointer) */
+} gz_state;
+typedef gz_state FAR *gz_statep;
+
+/* shared functions */
+void ZLIB_INTERNAL gz_error OF((gz_statep, int, const char *));
+#if defined UNDER_CE
+char ZLIB_INTERNAL *gz_strwinerror OF((DWORD error));
+#endif
+
+/* GT_OFF(x), where x is an unsigned value, is true if x > maximum z_off64_t
+   value -- needed when comparing unsigned to z_off64_t, which is signed
+   (possible z_off64_t types off_t, off64_t, and long are all signed) */
+#ifdef INT_MAX
+#  define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > INT_MAX)
+#else
+unsigned ZLIB_INTERNAL gz_intmax OF((void));
+#  define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > gz_intmax())
+#endif
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/gzlib.c b/c-blosc/internal-complibs/zlib-1.2.8/gzlib.c
new file mode 100644
index 0000000..fae202e
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/gzlib.c
@@ -0,0 +1,634 @@
+/* gzlib.c -- zlib functions common to reading and writing gzip files
+ * Copyright (C) 2004, 2010, 2011, 2012, 2013 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#include "gzguts.h"
+
+#if defined(_WIN32) && !defined(__BORLANDC__)
+#  define LSEEK _lseeki64
+#else
+#if defined(_LARGEFILE64_SOURCE) && _LFS64_LARGEFILE-0
+#  define LSEEK lseek64
+#else
+#  define LSEEK lseek
+#endif
+#endif
+
+/* Local functions */
+local void gz_reset OF((gz_statep));
+local gzFile gz_open OF((const void *, int, const char *));
+
+#if defined UNDER_CE
+
+/* Map the Windows error number in ERROR to a locale-dependent error message
+   string and return a pointer to it.  Typically, the values for ERROR come
+   from GetLastError.
+
+   The string pointed to shall not be modified by the application, but may be
+   overwritten by a subsequent call to gz_strwinerror.
+
+   The gz_strwinerror function does not change the current setting of
+   GetLastError. */
+char ZLIB_INTERNAL *gz_strwinerror (error)
+     DWORD error;
+{
+    static char buf[1024];
+
+    wchar_t *msgbuf;
+    DWORD lasterr = GetLastError();
+    DWORD chars = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM
+        | FORMAT_MESSAGE_ALLOCATE_BUFFER,
+        NULL,
+        error,
+        0, /* Default language */
+        (LPVOID)&msgbuf,
+        0,
+        NULL);
+    if (chars != 0) {
+        /* If there is an \r\n appended, zap it.  */
+        if (chars >= 2
+            && msgbuf[chars - 2] == '\r' && msgbuf[chars - 1] == '\n') {
+            chars -= 2;
+            msgbuf[chars] = 0;
+        }
+
+        if (chars > sizeof (buf) - 1) {
+            chars = sizeof (buf) - 1;
+            msgbuf[chars] = 0;
+        }
+
+        wcstombs(buf, msgbuf, chars + 1);
+        LocalFree(msgbuf);
+    }
+    else {
+        sprintf(buf, "unknown win32 error (%ld)", error);
+    }
+
+    SetLastError(lasterr);
+    return buf;
+}
+
+#endif /* UNDER_CE */
+
+/* Reset gzip file state */
+local void gz_reset(state)
+    gz_statep state;
+{
+    state->x.have = 0;              /* no output data available */
+    if (state->mode == GZ_READ) {   /* for reading ... */
+        state->eof = 0;             /* not at end of file */
+        state->past = 0;            /* have not read past end yet */
+        state->how = LOOK;          /* look for gzip header */
+    }
+    state->seek = 0;                /* no seek request pending */
+    gz_error(state, Z_OK, NULL);    /* clear error */
+    state->x.pos = 0;               /* no uncompressed data yet */
+    state->strm.avail_in = 0;       /* no input data yet */
+}
+
+/* Open a gzip file either by name or file descriptor. */
+local gzFile gz_open(path, fd, mode)
+    const void *path;
+    int fd;
+    const char *mode;
+{
+    gz_statep state;
+    size_t len;
+    int oflag;
+#ifdef O_CLOEXEC
+    int cloexec = 0;
+#endif
+#ifdef O_EXCL
+    int exclusive = 0;
+#endif
+
+    /* check input */
+    if (path == NULL)
+        return NULL;
+
+    /* allocate gzFile structure to return */
+    state = (gz_statep)malloc(sizeof(gz_state));
+    if (state == NULL)
+        return NULL;
+    state->size = 0;            /* no buffers allocated yet */
+    state->want = GZBUFSIZE;    /* requested buffer size */
+    state->msg = NULL;          /* no error message yet */
+
+    /* interpret mode */
+    state->mode = GZ_NONE;
+    state->level = Z_DEFAULT_COMPRESSION;
+    state->strategy = Z_DEFAULT_STRATEGY;
+    state->direct = 0;
+    while (*mode) {
+        if (*mode >= '0' && *mode <= '9')
+            state->level = *mode - '0';
+        else
+            switch (*mode) {
+            case 'r':
+                state->mode = GZ_READ;
+                break;
+#ifndef NO_GZCOMPRESS
+            case 'w':
+                state->mode = GZ_WRITE;
+                break;
+            case 'a':
+                state->mode = GZ_APPEND;
+                break;
+#endif
+            case '+':       /* can't read and write at the same time */
+                free(state);
+                return NULL;
+            case 'b':       /* ignore -- will request binary anyway */
+                break;
+#ifdef O_CLOEXEC
+            case 'e':
+                cloexec = 1;
+                break;
+#endif
+#ifdef O_EXCL
+            case 'x':
+                exclusive = 1;
+                break;
+#endif
+            case 'f':
+                state->strategy = Z_FILTERED;
+                break;
+            case 'h':
+                state->strategy = Z_HUFFMAN_ONLY;
+                break;
+            case 'R':
+                state->strategy = Z_RLE;
+                break;
+            case 'F':
+                state->strategy = Z_FIXED;
+                break;
+            case 'T':
+                state->direct = 1;
+                break;
+            default:        /* could be treated as an error, but just ignore */
+                ;
+            }
+        mode++;
+    }
+
+    /* must provide an "r", "w", or "a" */
+    if (state->mode == GZ_NONE) {
+        free(state);
+        return NULL;
+    }
+
+    /* can't force transparent read */
+    if (state->mode == GZ_READ) {
+        if (state->direct) {
+            free(state);
+            return NULL;
+        }
+        state->direct = 1;      /* for empty file */
+    }
+
+    /* save the path name for error messages */
+#ifdef _WIN32
+    if (fd == -2) {
+        len = wcstombs(NULL, path, 0);
+        if (len == (size_t)-1)
+            len = 0;
+    }
+    else
+#endif
+        len = strlen((const char *)path);
+    state->path = (char *)malloc(len + 1);
+    if (state->path == NULL) {
+        free(state);
+        return NULL;
+    }
+#ifdef _WIN32
+    if (fd == -2)
+        if (len)
+            wcstombs(state->path, path, len + 1);
+        else
+            *(state->path) = 0;
+    else
+#endif
+#if !defined(NO_snprintf) && !defined(NO_vsnprintf)
+        snprintf(state->path, len + 1, "%s", (const char *)path);
+#else
+        strcpy(state->path, path);
+#endif
+
+    /* compute the flags for open() */
+    oflag =
+#ifdef O_LARGEFILE
+        O_LARGEFILE |
+#endif
+#ifdef O_BINARY
+        O_BINARY |
+#endif
+#ifdef O_CLOEXEC
+        (cloexec ? O_CLOEXEC : 0) |
+#endif
+        (state->mode == GZ_READ ?
+         O_RDONLY :
+         (O_WRONLY | O_CREAT |
+#ifdef O_EXCL
+          (exclusive ? O_EXCL : 0) |
+#endif
+          (state->mode == GZ_WRITE ?
+           O_TRUNC :
+           O_APPEND)));
+
+    /* open the file with the appropriate flags (or just use fd) */
+    state->fd = fd > -1 ? fd : (
+#ifdef _WIN32
+        fd == -2 ? _wopen(path, oflag, 0666) :
+#endif
+        open((const char *)path, oflag, 0666));
+    if (state->fd == -1) {
+        free(state->path);
+        free(state);
+        return NULL;
+    }
+    if (state->mode == GZ_APPEND)
+        state->mode = GZ_WRITE;         /* simplify later checks */
+
+    /* save the current position for rewinding (only if reading) */
+    if (state->mode == GZ_READ) {
+        state->start = LSEEK(state->fd, 0, SEEK_CUR);
+        if (state->start == -1) state->start = 0;
+    }
+
+    /* initialize stream */
+    gz_reset(state);
+
+    /* return stream */
+    return (gzFile)state;
+}
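+
+/* Example of the mode parsing above (illustrative): gz_open(path, -1, "wb9F")
+   yields mode == GZ_WRITE, level == 9, and strategy == Z_FIXED; the 'b' is
+   accepted and ignored since the file is opened in binary mode anyway. */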
+
+/* -- see zlib.h -- */
+gzFile ZEXPORT gzopen(path, mode)
+    const char *path;
+    const char *mode;
+{
+    return gz_open(path, -1, mode);
+}
+
+/* -- see zlib.h -- */
+gzFile ZEXPORT gzopen64(path, mode)
+    const char *path;
+    const char *mode;
+{
+    return gz_open(path, -1, mode);
+}
+
+/* -- see zlib.h -- */
+gzFile ZEXPORT gzdopen(fd, mode)
+    int fd;
+    const char *mode;
+{
+    char *path;         /* identifier for error messages */
+    gzFile gz;
+
+    if (fd == -1 || (path = (char *)malloc(7 + 3 * sizeof(int))) == NULL)
+        return NULL;
+#if !defined(NO_snprintf) && !defined(NO_vsnprintf)
+    snprintf(path, 7 + 3 * sizeof(int), "<fd:%d>", fd); /* for debugging */
+#else
+    sprintf(path, "<fd:%d>", fd);   /* for debugging */
+#endif
+    gz = gz_open(path, fd, mode);
+    free(path);
+    return gz;
+}
+
+/* -- see zlib.h -- */
+#ifdef _WIN32
+gzFile ZEXPORT gzopen_w(path, mode)
+    const wchar_t *path;
+    const char *mode;
+{
+    return gz_open(path, -2, mode);
+}
+#endif
+
+/* -- see zlib.h -- */
+int ZEXPORT gzbuffer(file, size)
+    gzFile file;
+    unsigned size;
+{
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return -1;
+
+    /* make sure we haven't already allocated memory */
+    if (state->size != 0)
+        return -1;
+
+    /* check and set requested size */
+    if (size < 2)
+        size = 2;               /* need two bytes to check magic header */
+    state->want = size;
+    return 0;
+}
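+
+/* gzbuffer() must be called after gzopen()/gzdopen() but before any I/O on
+   the file; e.g. gzbuffer(file, 131072) trades memory for fewer read() or
+   write() calls on large files (the size shown is illustrative). */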
+
+/* -- see zlib.h -- */
+int ZEXPORT gzrewind(file)
+    gzFile file;
+{
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+
+    /* check that we're reading and that there's no error */
+    if (state->mode != GZ_READ ||
+            (state->err != Z_OK && state->err != Z_BUF_ERROR))
+        return -1;
+
+    /* back up and start over */
+    if (LSEEK(state->fd, state->start, SEEK_SET) == -1)
+        return -1;
+    gz_reset(state);
+    return 0;
+}
+
+/* -- see zlib.h -- */
+z_off64_t ZEXPORT gzseek64(file, offset, whence)
+    gzFile file;
+    z_off64_t offset;
+    int whence;
+{
+    unsigned n;
+    z_off64_t ret;
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return -1;
+
+    /* check that there's no error */
+    if (state->err != Z_OK && state->err != Z_BUF_ERROR)
+        return -1;
+
+    /* can only seek from start or relative to current position */
+    if (whence != SEEK_SET && whence != SEEK_CUR)
+        return -1;
+
+    /* normalize offset to a SEEK_CUR specification */
+    if (whence == SEEK_SET)
+        offset -= state->x.pos;
+    else if (state->seek)
+        offset += state->skip;
+    state->seek = 0;
+
+    /* if within raw area while reading, just go there */
+    if (state->mode == GZ_READ && state->how == COPY &&
+            state->x.pos + offset >= 0) {
+        ret = LSEEK(state->fd, offset - state->x.have, SEEK_CUR);
+        if (ret == -1)
+            return -1;
+        state->x.have = 0;
+        state->eof = 0;
+        state->past = 0;
+        state->seek = 0;
+        gz_error(state, Z_OK, NULL);
+        state->strm.avail_in = 0;
+        state->x.pos += offset;
+        return state->x.pos;
+    }
+
+    /* calculate skip amount, rewinding if needed for back seek when reading */
+    if (offset < 0) {
+        if (state->mode != GZ_READ)         /* writing -- can't go backwards */
+            return -1;
+        offset += state->x.pos;
+        if (offset < 0)                     /* before start of file! */
+            return -1;
+        if (gzrewind(file) == -1)           /* rewind, then skip to offset */
+            return -1;
+    }
+
+    /* if reading, skip what's in output buffer (one less gzgetc() check) */
+    if (state->mode == GZ_READ) {
+        n = GT_OFF(state->x.have) || (z_off64_t)state->x.have > offset ?
+            (unsigned)offset : state->x.have;
+        state->x.have -= n;
+        state->x.next += n;
+        state->x.pos += n;
+        offset -= n;
+    }
+
+    /* request skip (if not zero) */
+    if (offset) {
+        state->seek = 1;
+        state->skip = offset;
+    }
+    return state->x.pos + offset;
+}
+
+/* -- see zlib.h -- */
+z_off_t ZEXPORT gzseek(file, offset, whence)
+    gzFile file;
+    z_off_t offset;
+    int whence;
+{
+    z_off64_t ret;
+
+    ret = gzseek64(file, (z_off64_t)offset, whence);
+    return ret == (z_off_t)ret ? (z_off_t)ret : -1;
+}
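+
+/* Illustrative behavior: gzseek(file, 100, SEEK_SET) on a file opened for
+   reading decompresses and discards data up to uncompressed offset 100;
+   on a file opened for writing, only forward seeks are supported and the
+   gap is filled with compressed zeros. */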
+
+/* -- see zlib.h -- */
+z_off64_t ZEXPORT gztell64(file)
+    gzFile file;
+{
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return -1;
+
+    /* return position */
+    return state->x.pos + (state->seek ? state->skip : 0);
+}
+
+/* -- see zlib.h -- */
+z_off_t ZEXPORT gztell(file)
+    gzFile file;
+{
+    z_off64_t ret;
+
+    ret = gztell64(file);
+    return ret == (z_off_t)ret ? (z_off_t)ret : -1;
+}
+
+/* -- see zlib.h -- */
+z_off64_t ZEXPORT gzoffset64(file)
+    gzFile file;
+{
+    z_off64_t offset;
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return -1;
+
+    /* compute and return effective offset in file */
+    offset = LSEEK(state->fd, 0, SEEK_CUR);
+    if (offset == -1)
+        return -1;
+    if (state->mode == GZ_READ)             /* reading */
+        offset -= state->strm.avail_in;     /* don't count buffered input */
+    return offset;
+}
+
+/* -- see zlib.h -- */
+z_off_t ZEXPORT gzoffset(file)
+    gzFile file;
+{
+    z_off64_t ret;
+
+    ret = gzoffset64(file);
+    return ret == (z_off_t)ret ? (z_off_t)ret : -1;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzeof(file)
+    gzFile file;
+{
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return 0;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return 0;
+
+    /* return end-of-file state */
+    return state->mode == GZ_READ ? state->past : 0;
+}
+
+/* -- see zlib.h -- */
+const char * ZEXPORT gzerror(file, errnum)
+    gzFile file;
+    int *errnum;
+{
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return NULL;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return NULL;
+
+    /* return error information */
+    if (errnum != NULL)
+        *errnum = state->err;
+    return state->err == Z_MEM_ERROR ? "out of memory" :
+                                       (state->msg == NULL ? "" : state->msg);
+}
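+
+/* Reading with explicit error checking (illustrative sketch): a negative
+   gzread() return indicates an error, which gzerror() describes. */
+#if 0
+#include <stdio.h>
+#include "zlib.h"
+
+int dump_gz(const char *path)
+{
+    char buf[4096];
+    int n, err;
+    gzFile f = gzopen(path, "rb");
+
+    if (f == NULL)
+        return -1;
+    while ((n = gzread(f, buf, sizeof(buf))) > 0)
+        fwrite(buf, 1, (size_t)n, stdout);
+    if (n < 0)
+        fprintf(stderr, "%s\n", gzerror(f, &err));
+    gzclose(f);
+    return n < 0 ? -1 : 0;
+}
+#endif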
+
+/* -- see zlib.h -- */
+void ZEXPORT gzclearerr(file)
+    gzFile file;
+{
+    gz_statep state;
+
+    /* get internal structure and check integrity */
+    if (file == NULL)
+        return;
+    state = (gz_statep)file;
+    if (state->mode != GZ_READ && state->mode != GZ_WRITE)
+        return;
+
+    /* clear error and end-of-file */
+    if (state->mode == GZ_READ) {
+        state->eof = 0;
+        state->past = 0;
+    }
+    gz_error(state, Z_OK, NULL);
+}
+
+/* Create an error message in allocated memory and set state->err and
+   state->msg accordingly.  Free any previous error message already there.  Do
+   not try to free or allocate space if the error is Z_MEM_ERROR (out of
+   memory).  Simply save the error message as a static string.  If there is an
+   allocation failure constructing the error message, then convert the error to
+   out of memory. */
+void ZLIB_INTERNAL gz_error(state, err, msg)
+    gz_statep state;
+    int err;
+    const char *msg;
+{
+    /* free previously allocated message and clear */
+    if (state->msg != NULL) {
+        if (state->err != Z_MEM_ERROR)
+            free(state->msg);
+        state->msg = NULL;
+    }
+
+    /* if fatal, set state->x.have to 0 so that the gzgetc() macro fails */
+    if (err != Z_OK && err != Z_BUF_ERROR)
+        state->x.have = 0;
+
+    /* set error code, and if no message, then done */
+    state->err = err;
+    if (msg == NULL)
+        return;
+
+    /* for an out of memory error, return literal string when requested */
+    if (err == Z_MEM_ERROR)
+        return;
+
+    /* construct error message with path */
+    if ((state->msg = (char *)malloc(strlen(state->path) + strlen(msg) + 3)) ==
+            NULL) {
+        state->err = Z_MEM_ERROR;
+        return;
+    }
+#if !defined(NO_snprintf) && !defined(NO_vsnprintf)
+    snprintf(state->msg, strlen(state->path) + strlen(msg) + 3,
+             "%s%s%s", state->path, ": ", msg);
+#else
+    strcpy(state->msg, state->path);
+    strcat(state->msg, ": ");
+    strcat(state->msg, msg);
+#endif
+    return;
+}
+
+#ifndef INT_MAX
+/* Portably return the maximum value for an int, for when limits.h is presumed
+   not to be available.  This is needed to cover cases where two's complement
+   is not used, since the C standard also permits one's complement and
+   sign-magnitude representations; otherwise we could just use
+   ((unsigned)-1) >> 1. */
+unsigned ZLIB_INTERNAL gz_intmax()
+{
+    unsigned p, q;
+
+    p = 1;
+    do {
+        q = p;
+        p <<= 1;
+        p++;
+    } while (p > q);
+    return q >> 1;
+}
+#endif
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/gzread.c b/c-blosc/internal-complibs/zlib-1.2.8/gzread.c
new file mode 100644
index 0000000..bf4538e
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/gzread.c
@@ -0,0 +1,594 @@
+/* gzread.c -- zlib functions for reading gzip files
+ * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#include "gzguts.h"
+
+/* Local functions */
+local int gz_load OF((gz_statep, unsigned char *, unsigned, unsigned *));
+local int gz_avail OF((gz_statep));
+local int gz_look OF((gz_statep));
+local int gz_decomp OF((gz_statep));
+local int gz_fetch OF((gz_statep));
+local int gz_skip OF((gz_statep, z_off64_t));
+
+/* Use read() to load a buffer -- return -1 on error, otherwise 0.  Read from
+   state->fd, and update state->eof, state->err, and state->msg as appropriate.
+   This function needs to loop on read(), since read() is not guaranteed to
+   read the number of bytes requested, depending on the type of descriptor. */
+local int gz_load(state, buf, len, have)
+    gz_statep state;
+    unsigned char *buf;
+    unsigned len;
+    unsigned *have;
+{
+    int ret;
+
+    *have = 0;
+    do {
+        ret = read(state->fd, buf + *have, len - *have);
+        if (ret <= 0)
+            break;
+        *have += ret;
+    } while (*have < len);
+    if (ret < 0) {
+        gz_error(state, Z_ERRNO, zstrerror());
+        return -1;
+    }
+    if (ret == 0)
+        state->eof = 1;
+    return 0;
+}
+
+/* Load up input buffer and set eof flag if last data loaded -- return -1 on
+   error, 0 otherwise.  Note that the eof flag is set when the end of the input
+   file is reached, even though there may be unused data in the buffer.  Once
+   that data has been used, no more attempts will be made to read the file.
+   If strm->avail_in != 0, then the current data is moved to the beginning of
+   the input buffer, and then the remainder of the buffer is loaded with the
+   available data from the input file. */
+local int gz_avail(state)
+    gz_statep state;
+{
+    unsigned got;
+    z_streamp strm = &(state->strm);
+
+    if (state->err != Z_OK && state->err != Z_BUF_ERROR)
+        return -1;
+    if (state->eof == 0) {
+        if (strm->avail_in) {       /* copy what's there to the start */
+            unsigned char *p = state->in;
+            unsigned const char *q = strm->next_in;
+            unsigned n = strm->avail_in;
+            do {
+                *p++ = *q++;
+            } while (--n);
+        }
+        if (gz_load(state, state->in + strm->avail_in,
+                    state->size - strm->avail_in, &got) == -1)
+            return -1;
+        strm->avail_in += got;
+        strm->next_in = state->in;
+    }
+    return 0;
+}
+
+/* Look for gzip header, set up for inflate or copy.  state->x.have must be 0.
+   If this is the first time in, allocate required memory.  state->how will be
+   left unchanged if there is no more input data available, will be set to COPY
+   if there is no gzip header and direct copying will be performed, or it will
+   be set to GZIP for decompression.  If direct copying, then leftover input
+   data from the input buffer will be copied to the output buffer.  In that
+   case, all further file reads will be directly to either the output buffer or
+   a user buffer.  If decompressing, the inflate state will be initialized.
+   gz_look() will return 0 on success or -1 on failure. */
+local int gz_look(state)
+    gz_statep state;
+{
+    z_streamp strm = &(state->strm);
+
+    /* allocate read buffers and inflate memory */
+    if (state->size == 0) {
+        /* allocate buffers */
+        state->in = (unsigned char *)malloc(state->want);
+        state->out = (unsigned char *)malloc(state->want << 1);
+        if (state->in == NULL || state->out == NULL) {
+            if (state->out != NULL)
+                free(state->out);
+            if (state->in != NULL)
+                free(state->in);
+            gz_error(state, Z_MEM_ERROR, "out of memory");
+            return -1;
+        }
+        state->size = state->want;
+
+        /* allocate inflate memory */
+        state->strm.zalloc = Z_NULL;
+        state->strm.zfree = Z_NULL;
+        state->strm.opaque = Z_NULL;
+        state->strm.avail_in = 0;
+        state->strm.next_in = Z_NULL;
+        if (inflateInit2(&(state->strm), 15 + 16) != Z_OK) {    /* gunzip */
+            free(state->out);
+            free(state->in);
+            state->size = 0;
+            gz_error(state, Z_MEM_ERROR, "out of memory");
+            return -1;
+        }
+    }
+
+    /* get at least the magic bytes in the input buffer */
+    if (strm->avail_in < 2) {
+        if (gz_avail(state) == -1)
+            return -1;
+        if (strm->avail_in == 0)
+            return 0;
+    }
+
+    /* look for gzip magic bytes -- if there, do gzip decoding (note: there is
+       a logical dilemma here when considering the case of a partially written
+       gzip file, to wit, if only a single byte with the value 31 (the first
+       gzip magic byte) has been written, then we cannot tell whether this is
+       a single-byte file or just a partially written gzip file -- here we
+       assume that if a gzip file is being written, then the header will be
+       written in a single operation, so that reading a single byte is
+       sufficient indication that it is not a gzip file) */
+    if (strm->avail_in > 1 &&
+            strm->next_in[0] == 31 && strm->next_in[1] == 139) {
+        inflateReset(strm);
+        state->how = GZIP;
+        state->direct = 0;
+        return 0;
+    }
+
+    /* no gzip header -- if we were decoding gzip before, then this is trailing
+       garbage.  Ignore the trailing garbage and finish. */
+    if (state->direct == 0) {
+        strm->avail_in = 0;
+        state->eof = 1;
+        state->x.have = 0;
+        return 0;
+    }
+
+    /* doing raw i/o, copy any leftover input to output -- this assumes that
+       the output buffer is larger than the input buffer, which also assures
+       space for gzungetc() */
+    state->x.next = state->out;
+    if (strm->avail_in) {
+        memcpy(state->x.next, strm->next_in, strm->avail_in);
+        state->x.have = strm->avail_in;
+        strm->avail_in = 0;
+    }
+    state->how = COPY;
+    state->direct = 1;
+    return 0;
+}
+
+/* Decompress from input to the provided next_out and avail_out in the state.
+   On return, state->x.have and state->x.next point to the just decompressed
+   data.  If the gzip stream completes, state->how is reset to LOOK to look for
+   the next gzip stream or raw data, once state->x.have is depleted.  Returns 0
+   on success, -1 on failure. */
+local int gz_decomp(state)
+    gz_statep state;
+{
+    int ret = Z_OK;
+    unsigned had;
+    z_streamp strm = &(state->strm);
+
+    /* fill output buffer up to end of deflate stream */
+    had = strm->avail_out;
+    do {
+        /* get more input for inflate() */
+        if (strm->avail_in == 0 && gz_avail(state) == -1)
+            return -1;
+        if (strm->avail_in == 0) {
+            gz_error(state, Z_BUF_ERROR, "unexpected end of file");
+            break;
+        }
+
+        /* decompress and handle errors */
+        ret = inflate(strm, Z_NO_FLUSH);
+        if (ret == Z_STREAM_ERROR || ret == Z_NEED_DICT) {
+            gz_error(state, Z_STREAM_ERROR,
+                     "internal error: inflate stream corrupt");
+            return -1;
+        }
+        if (ret == Z_MEM_ERROR) {
+            gz_error(state, Z_MEM_ERROR, "out of memory");
+            return -1;
+        }
+        if (ret == Z_DATA_ERROR) {              /* deflate stream invalid */
+            gz_error(state, Z_DATA_ERROR,
+                     strm->msg == NULL ? "compressed data error" : strm->msg);
+            return -1;
+        }
+    } while (strm->avail_out && ret != Z_STREAM_END);
+
+    /* update available output */
+    state->x.have = had - strm->avail_out;
+    state->x.next = strm->next_out - state->x.have;
+
+    /* if the gzip stream completed successfully, look for another */
+    if (ret == Z_STREAM_END)
+        state->how = LOOK;
+
+    /* good decompression */
+    return 0;
+}
+
+/* Fetch data and put it in the output buffer.  Assumes state->x.have is 0.
+   Data is either copied from the input file or decompressed from the input
+   file depending on state->how.  If state->how is LOOK, then a gzip header is
+   looked for to determine whether to copy or decompress.  Returns -1 on error,
+   otherwise 0.  gz_fetch() will leave state->how as COPY or GZIP unless the
+   end of the input file has been reached and all data has been processed.  */
+local int gz_fetch(state)
+    gz_statep state;
+{
+    z_streamp strm = &(state->strm);
+
+    do {
+        switch(state->how) {
+        case LOOK:      /* -> LOOK, COPY (only if never GZIP), or GZIP */
+            if (gz_look(state) == -1)
+                return -1;
+            if (state->how == LOOK)
+                return 0;
+            break;
+        case COPY:      /* -> COPY */
+            if (gz_load(state, state->out, state->size << 1, &(state->x.have))
+                    == -1)
+                return -1;
+            state->x.next = state->out;
+            return 0;
+        case GZIP:      /* -> GZIP or LOOK (if end of gzip stream) */
+            strm->avail_out = state->size << 1;
+            strm->next_out = state->out;
+            if (gz_decomp(state) == -1)
+                return -1;
+        }
+    } while (state->x.have == 0 && (!state->eof || strm->avail_in));
+    return 0;
+}
+
+/* Skip len uncompressed bytes of output.  Return -1 on error, 0 on success. */
+local int gz_skip(state, len)
+    gz_statep state;
+    z_off64_t len;
+{
+    unsigned n;
+
+    /* skip over len bytes or reach end-of-file, whichever comes first */
+    while (len)
+        /* skip over whatever is in output buffer */
+        if (state->x.have) {
+            n = GT_OFF(state->x.have) || (z_off64_t)state->x.have > len ?
+                (unsigned)len : state->x.have;
+            state->x.have -= n;
+            state->x.next += n;
+            state->x.pos += n;
+            len -= n;
+        }
+
+        /* output buffer empty -- return if we're at the end of the input */
+        else if (state->eof && state->strm.avail_in == 0)
+            break;
+
+        /* need more data to skip -- load up output buffer */
+        else {
+            /* get more output, looking for header if required */
+            if (gz_fetch(state) == -1)
+                return -1;
+        }
+    return 0;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzread(file, buf, len)
+    gzFile file;
+    voidp buf;
+    unsigned len;
+{
+    unsigned got, n;
+    gz_statep state;
+    z_streamp strm;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    strm = &(state->strm);
+
+    /* check that we're reading and that there's no (serious) error */
+    if (state->mode != GZ_READ ||
+            (state->err != Z_OK && state->err != Z_BUF_ERROR))
+        return -1;
+
+    /* since an int is returned, make sure len fits in one, otherwise return
+       with an error (this avoids the flaw in the interface) */
+    if ((int)len < 0) {
+        gz_error(state, Z_DATA_ERROR, "requested length does not fit in int");
+        return -1;
+    }
+
+    /* if len is zero, avoid unnecessary operations */
+    if (len == 0)
+        return 0;
+
+    /* process a skip request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_skip(state, state->skip) == -1)
+            return -1;
+    }
+
+    /* get len bytes to buf, or less than len if at the end */
+    got = 0;
+    do {
+        /* first just try copying data from the output buffer */
+        if (state->x.have) {
+            n = state->x.have > len ? len : state->x.have;
+            memcpy(buf, state->x.next, n);
+            state->x.next += n;
+            state->x.have -= n;
+        }
+
+        /* output buffer empty -- return if we're at the end of the input */
+        else if (state->eof && strm->avail_in == 0) {
+            state->past = 1;        /* tried to read past end */
+            break;
+        }
+
+        /* need output data -- for small len or new stream load up our output
+           buffer */
+        else if (state->how == LOOK || len < (state->size << 1)) {
+            /* get more output, looking for header if required */
+            if (gz_fetch(state) == -1)
+                return -1;
+            continue;       /* no progress yet -- go back to copy above */
+            /* the copy above assures that we will leave with space in the
+               output buffer, allowing at least one gzungetc() to succeed */
+        }
+
+        /* large len -- read directly into user buffer */
+        else if (state->how == COPY) {      /* read directly */
+            if (gz_load(state, (unsigned char *)buf, len, &n) == -1)
+                return -1;
+        }
+
+        /* large len -- decompress directly into user buffer */
+        else {  /* state->how == GZIP */
+            strm->avail_out = len;
+            strm->next_out = (unsigned char *)buf;
+            if (gz_decomp(state) == -1)
+                return -1;
+            n = state->x.have;
+            state->x.have = 0;
+        }
+
+        /* update progress */
+        len -= n;
+        buf = (char *)buf + n;
+        got += n;
+        state->x.pos += n;
+    } while (len);
+
+    /* return number of bytes read into user buffer (will fit in int) */
+    return (int)got;
+}
+
+/* -- see zlib.h -- */
+#ifdef Z_PREFIX_SET
+#  undef z_gzgetc
+#else
+#  undef gzgetc
+#endif
+int ZEXPORT gzgetc(file)
+    gzFile file;
+{
+    int ret;
+    unsigned char buf[1];
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+
+    /* check that we're reading and that there's no (serious) error */
+    if (state->mode != GZ_READ ||
+        (state->err != Z_OK && state->err != Z_BUF_ERROR))
+        return -1;
+
+    /* try output buffer (no need to check for skip request) */
+    if (state->x.have) {
+        state->x.have--;
+        state->x.pos++;
+        return *(state->x.next)++;
+    }
+
+    /* nothing there -- try gzread() */
+    ret = gzread(file, buf, 1);
+    return ret < 1 ? -1 : buf[0];
+}
+
+int ZEXPORT gzgetc_(file)
+    gzFile file;
+{
+    return gzgetc(file);
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzungetc(c, file)
+    int c;
+    gzFile file;
+{
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+
+    /* check that we're reading and that there's no (serious) error */
+    if (state->mode != GZ_READ ||
+        (state->err != Z_OK && state->err != Z_BUF_ERROR))
+        return -1;
+
+    /* process a skip request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_skip(state, state->skip) == -1)
+            return -1;
+    }
+
+    /* can't push EOF */
+    if (c < 0)
+        return -1;
+
+    /* if output buffer empty, put byte at end (allows more pushing) */
+    if (state->x.have == 0) {
+        state->x.have = 1;
+        state->x.next = state->out + (state->size << 1) - 1;
+        state->x.next[0] = c;
+        state->x.pos--;
+        state->past = 0;
+        return c;
+    }
+
+    /* if no room, give up (must have already done a gzungetc()) */
+    if (state->x.have == (state->size << 1)) {
+        gz_error(state, Z_DATA_ERROR, "out of room to push characters");
+        return -1;
+    }
+
+    /* slide output data if needed and insert byte before existing data */
+    if (state->x.next == state->out) {
+        unsigned char *src = state->out + state->x.have;
+        unsigned char *dest = state->out + (state->size << 1);
+        while (src > state->out)
+            *--dest = *--src;
+        state->x.next = dest;
+    }
+    state->x.have++;
+    state->x.next--;
+    state->x.next[0] = c;
+    state->x.pos--;
+    state->past = 0;
+    return c;
+}
+
+/* -- see zlib.h -- */
+char * ZEXPORT gzgets(file, buf, len)
+    gzFile file;
+    char *buf;
+    int len;
+{
+    unsigned left, n;
+    char *str;
+    unsigned char *eol;
+    gz_statep state;
+
+    /* check parameters and get internal structure */
+    if (file == NULL || buf == NULL || len < 1)
+        return NULL;
+    state = (gz_statep)file;
+
+    /* check that we're reading and that there's no (serious) error */
+    if (state->mode != GZ_READ ||
+        (state->err != Z_OK && state->err != Z_BUF_ERROR))
+        return NULL;
+
+    /* process a skip request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_skip(state, state->skip) == -1)
+            return NULL;
+    }
+
+    /* copy output bytes up to new line or len - 1, whichever comes first --
+       append a terminating zero to the string (we don't check for a zero in
+       the contents, let the user worry about that) */
+    str = buf;
+    left = (unsigned)len - 1;
+    if (left) do {
+        /* assure that something is in the output buffer */
+        if (state->x.have == 0 && gz_fetch(state) == -1)
+            return NULL;                /* error */
+        if (state->x.have == 0) {       /* end of file */
+            state->past = 1;            /* read past end */
+            break;                      /* return what we have */
+        }
+
+        /* look for end-of-line in current output buffer */
+        n = state->x.have > left ? left : state->x.have;
+        eol = (unsigned char *)memchr(state->x.next, '\n', n);
+        if (eol != NULL)
+            n = (unsigned)(eol - state->x.next) + 1;
+
+        /* copy through end-of-line, or remainder if not found */
+        memcpy(buf, state->x.next, n);
+        state->x.have -= n;
+        state->x.next += n;
+        state->x.pos += n;
+        left -= n;
+        buf += n;
+    } while (left && eol == NULL);
+
+    /* return terminated string, or if nothing, end of file */
+    if (buf == str)
+        return NULL;
+    buf[0] = 0;
+    return str;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzdirect(file)
+    gzFile file;
+{
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return 0;
+    state = (gz_statep)file;
+
+    /* if the state is not known, but we can find out, then do so (this is
+       mainly for right after a gzopen() or gzdopen()) */
+    if (state->mode == GZ_READ && state->how == LOOK && state->x.have == 0)
+        (void)gz_look(state);
+
+    /* return 1 if transparent, 0 if processing a gzip stream */
+    return state->direct;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzclose_r(file)
+    gzFile file;
+{
+    int ret, err;
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return Z_STREAM_ERROR;
+    state = (gz_statep)file;
+
+    /* check that we're reading */
+    if (state->mode != GZ_READ)
+        return Z_STREAM_ERROR;
+
+    /* free memory and close file */
+    if (state->size) {
+        inflateEnd(&(state->strm));
+        free(state->out);
+        free(state->in);
+    }
+    err = state->err == Z_BUF_ERROR ? Z_BUF_ERROR : Z_OK;
+    gz_error(state, Z_OK, NULL);
+    free(state->path);
+    ret = close(state->fd);
+    free(state);
+    return ret ? Z_ERRNO : err;
+}
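
The routines in gzread.c above are the engine behind zlib's public read-side
calls (gzread(), gzgets(), gzgetc(), gzungetc(), gzdirect(), gzclose_r()).  A
short usage sketch of that public API; the file name and error handling here
are illustrative only:

    #include <stdio.h>
    #include "zlib.h"

    /* Print a gzip (or plain) text file line by line via the gz* read API.
       gzopen() handles uncompressed input transparently as well, which is
       exactly the LOOK/COPY/GZIP dispatch implemented by gz_look() above. */
    int dump_gz(const char *path)
    {
        char line[1024];
        gzFile f = gzopen(path, "rb");
        if (f == NULL)
            return -1;
        while (gzgets(f, line, (int)sizeof(line)) != NULL)
            fputs(line, stdout);
        /* gzclose_r() frees the inflate state, the buffers and the fd */
        return gzclose_r(f) == Z_OK ? 0 : -1;
    }
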
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/gzwrite.c b/c-blosc/internal-complibs/zlib-1.2.8/gzwrite.c
new file mode 100644
index 0000000..aa767fb
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/gzwrite.c
@@ -0,0 +1,577 @@
+/* gzwrite.c -- zlib functions for writing gzip files
+ * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#include "gzguts.h"
+
+/* Local functions */
+local int gz_init OF((gz_statep));
+local int gz_comp OF((gz_statep, int));
+local int gz_zero OF((gz_statep, z_off64_t));
+
+/* Initialize state for writing a gzip file.  Mark initialization by setting
+   state->size to non-zero.  Return -1 on failure or 0 on success. */
+local int gz_init(state)
+    gz_statep state;
+{
+    int ret;
+    z_streamp strm = &(state->strm);
+
+    /* allocate input buffer */
+    state->in = (unsigned char *)malloc(state->want);
+    if (state->in == NULL) {
+        gz_error(state, Z_MEM_ERROR, "out of memory");
+        return -1;
+    }
+
+    /* only need output buffer and deflate state if compressing */
+    if (!state->direct) {
+        /* allocate output buffer */
+        state->out = (unsigned char *)malloc(state->want);
+        if (state->out == NULL) {
+            free(state->in);
+            gz_error(state, Z_MEM_ERROR, "out of memory");
+            return -1;
+        }
+
+        /* allocate deflate memory, set up for gzip compression */
+        strm->zalloc = Z_NULL;
+        strm->zfree = Z_NULL;
+        strm->opaque = Z_NULL;
+        ret = deflateInit2(strm, state->level, Z_DEFLATED,
+                           MAX_WBITS + 16, DEF_MEM_LEVEL, state->strategy);
+        if (ret != Z_OK) {
+            free(state->out);
+            free(state->in);
+            gz_error(state, Z_MEM_ERROR, "out of memory");
+            return -1;
+        }
+    }
+
+    /* mark state as initialized */
+    state->size = state->want;
+
+    /* initialize write buffer if compressing */
+    if (!state->direct) {
+        strm->avail_out = state->size;
+        strm->next_out = state->out;
+        state->x.next = strm->next_out;
+    }
+    return 0;
+}
+
+/* Compress whatever is at avail_in and next_in and write to the output file.
+   Return -1 if there is an error writing to the output file, otherwise 0.
+   flush is assumed to be a valid deflate() flush value.  If flush is Z_FINISH,
+   then the deflate() state is reset to start a new gzip stream.  If
+   state->direct is true, then simply write to the output file without
+   compressing, and ignore flush. */
+local int gz_comp(state, flush)
+    gz_statep state;
+    int flush;
+{
+    int ret, got;
+    unsigned have;
+    z_streamp strm = &(state->strm);
+
+    /* allocate memory if this is the first time through */
+    if (state->size == 0 && gz_init(state) == -1)
+        return -1;
+
+    /* write directly if requested */
+    if (state->direct) {
+        got = write(state->fd, strm->next_in, strm->avail_in);
+        if (got < 0 || (unsigned)got != strm->avail_in) {
+            gz_error(state, Z_ERRNO, zstrerror());
+            return -1;
+        }
+        strm->avail_in = 0;
+        return 0;
+    }
+
+    /* run deflate() on provided input until it produces no more output */
+    ret = Z_OK;
+    do {
+        /* write out current buffer contents if full, or if flushing, but if
+           doing Z_FINISH then don't write until we get to Z_STREAM_END */
+        if (strm->avail_out == 0 || (flush != Z_NO_FLUSH &&
+            (flush != Z_FINISH || ret == Z_STREAM_END))) {
+            have = (unsigned)(strm->next_out - state->x.next);
+            if (have && ((got = write(state->fd, state->x.next, have)) < 0 ||
+                         (unsigned)got != have)) {
+                gz_error(state, Z_ERRNO, zstrerror());
+                return -1;
+            }
+            if (strm->avail_out == 0) {
+                strm->avail_out = state->size;
+                strm->next_out = state->out;
+            }
+            state->x.next = strm->next_out;
+        }
+
+        /* compress */
+        have = strm->avail_out;
+        ret = deflate(strm, flush);
+        if (ret == Z_STREAM_ERROR) {
+            gz_error(state, Z_STREAM_ERROR,
+                      "internal error: deflate stream corrupt");
+            return -1;
+        }
+        have -= strm->avail_out;
+    } while (have);
+
+    /* if that completed a deflate stream, allow another to start */
+    if (flush == Z_FINISH)
+        deflateReset(strm);
+
+    /* all done, no errors */
+    return 0;
+}
+
+/* Compress len zeros to output.  Return -1 on error, 0 on success. */
+local int gz_zero(state, len)
+    gz_statep state;
+    z_off64_t len;
+{
+    int first;
+    unsigned n;
+    z_streamp strm = &(state->strm);
+
+    /* consume whatever's left in the input buffer */
+    if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1)
+        return -1;
+
+    /* compress len zeros (len guaranteed > 0) */
+    first = 1;
+    while (len) {
+        n = GT_OFF(state->size) || (z_off64_t)state->size > len ?
+            (unsigned)len : state->size;
+        if (first) {
+            memset(state->in, 0, n);
+            first = 0;
+        }
+        strm->avail_in = n;
+        strm->next_in = state->in;
+        state->x.pos += n;
+        if (gz_comp(state, Z_NO_FLUSH) == -1)
+            return -1;
+        len -= n;
+    }
+    return 0;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzwrite(file, buf, len)
+    gzFile file;
+    voidpc buf;
+    unsigned len;
+{
+    unsigned put = len;
+    gz_statep state;
+    z_streamp strm;
+
+    /* get internal structure */
+    if (file == NULL)
+        return 0;
+    state = (gz_statep)file;
+    strm = &(state->strm);
+
+    /* check that we're writing and that there's no error */
+    if (state->mode != GZ_WRITE || state->err != Z_OK)
+        return 0;
+
+    /* since an int is returned, make sure len fits in one, otherwise return
+       with an error (this avoids the flaw in the interface) */
+    if ((int)len < 0) {
+        gz_error(state, Z_DATA_ERROR, "requested length does not fit in int");
+        return 0;
+    }
+
+    /* if len is zero, avoid unnecessary operations */
+    if (len == 0)
+        return 0;
+
+    /* allocate memory if this is the first time through */
+    if (state->size == 0 && gz_init(state) == -1)
+        return 0;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            return 0;
+    }
+
+    /* for small len, copy to input buffer, otherwise compress directly */
+    if (len < state->size) {
+        /* copy to input buffer, compress when full */
+        do {
+            unsigned have, copy;
+
+            if (strm->avail_in == 0)
+                strm->next_in = state->in;
+            have = (unsigned)((strm->next_in + strm->avail_in) - state->in);
+            copy = state->size - have;
+            if (copy > len)
+                copy = len;
+            memcpy(state->in + have, buf, copy);
+            strm->avail_in += copy;
+            state->x.pos += copy;
+            buf = (const char *)buf + copy;
+            len -= copy;
+            if (len && gz_comp(state, Z_NO_FLUSH) == -1)
+                return 0;
+        } while (len);
+    }
+    else {
+        /* consume whatever's left in the input buffer */
+        if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1)
+            return 0;
+
+        /* directly compress user buffer to file */
+        strm->avail_in = len;
+        strm->next_in = (z_const Bytef *)buf;
+        state->x.pos += len;
+        if (gz_comp(state, Z_NO_FLUSH) == -1)
+            return 0;
+    }
+
+    /* input was all buffered or compressed (put will fit in int) */
+    return (int)put;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzputc(file, c)
+    gzFile file;
+    int c;
+{
+    unsigned have;
+    unsigned char buf[1];
+    gz_statep state;
+    z_streamp strm;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    strm = &(state->strm);
+
+    /* check that we're writing and that there's no error */
+    if (state->mode != GZ_WRITE || state->err != Z_OK)
+        return -1;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            return -1;
+    }
+
+    /* try writing to input buffer for speed (state->size == 0 if buffer not
+       initialized) */
+    if (state->size) {
+        if (strm->avail_in == 0)
+            strm->next_in = state->in;
+        have = (unsigned)((strm->next_in + strm->avail_in) - state->in);
+        if (have < state->size) {
+            state->in[have] = c;
+            strm->avail_in++;
+            state->x.pos++;
+            return c & 0xff;
+        }
+    }
+
+    /* no room in buffer or not initialized, use gz_write() */
+    buf[0] = c;
+    if (gzwrite(file, buf, 1) != 1)
+        return -1;
+    return c & 0xff;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzputs(file, str)
+    gzFile file;
+    const char *str;
+{
+    int ret;
+    unsigned len;
+
+    /* write string */
+    len = (unsigned)strlen(str);
+    ret = gzwrite(file, str, len);
+    return ret == 0 && len != 0 ? -1 : ret;
+}
+
+#if defined(STDC) || defined(Z_HAVE_STDARG_H)
+#include <stdarg.h>
+
+/* -- see zlib.h -- */
+int ZEXPORTVA gzvprintf(gzFile file, const char *format, va_list va)
+{
+    int size, len;
+    gz_statep state;
+    z_streamp strm;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    strm = &(state->strm);
+
+    /* check that we're writing and that there's no error */
+    if (state->mode != GZ_WRITE || state->err != Z_OK)
+        return 0;
+
+    /* make sure we have some buffer space */
+    if (state->size == 0 && gz_init(state) == -1)
+        return 0;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            return 0;
+    }
+
+    /* consume whatever's left in the input buffer */
+    if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1)
+        return 0;
+
+    /* do the printf() into the input buffer, put length in len */
+    size = (int)(state->size);
+    state->in[size - 1] = 0;
+#ifdef NO_vsnprintf
+#  ifdef HAS_vsprintf_void
+    (void)vsprintf((char *)(state->in), format, va);
+    for (len = 0; len < size; len++)
+        if (state->in[len] == 0) break;
+#  else
+    len = vsprintf((char *)(state->in), format, va);
+#  endif
+#else
+#  ifdef HAS_vsnprintf_void
+    (void)vsnprintf((char *)(state->in), size, format, va);
+    len = strlen((char *)(state->in));
+#  else
+    len = vsnprintf((char *)(state->in), size, format, va);
+#  endif
+#endif
+
+    /* check that printf() results fit in buffer */
+    if (len <= 0 || len >= (int)size || state->in[size - 1] != 0)
+        return 0;
+
+    /* update buffer and position, defer compression until needed */
+    strm->avail_in = (unsigned)len;
+    strm->next_in = state->in;
+    state->x.pos += len;
+    return len;
+}
+
+int ZEXPORTVA gzprintf(gzFile file, const char *format, ...)
+{
+    va_list va;
+    int ret;
+
+    va_start(va, format);
+    ret = gzvprintf(file, format, va);
+    va_end(va);
+    return ret;
+}
+
+#else /* !STDC && !Z_HAVE_STDARG_H */
+
+/* -- see zlib.h -- */
+int ZEXPORTVA gzprintf (file, format, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10,
+                       a11, a12, a13, a14, a15, a16, a17, a18, a19, a20)
+    gzFile file;
+    const char *format;
+    int a1, a2, a3, a4, a5, a6, a7, a8, a9, a10,
+        a11, a12, a13, a14, a15, a16, a17, a18, a19, a20;
+{
+    int size, len;
+    gz_statep state;
+    z_streamp strm;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+    strm = &(state->strm);
+
+    /* check that we can really pass a pointer in ints */
+    if (sizeof(int) != sizeof(void *))
+        return 0;
+
+    /* check that we're writing and that there's no error */
+    if (state->mode != GZ_WRITE || state->err != Z_OK)
+        return 0;
+
+    /* make sure we have some buffer space */
+    if (state->size == 0 && gz_init(state) == -1)
+        return 0;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            return 0;
+    }
+
+    /* consume whatever's left in the input buffer */
+    if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1)
+        return 0;
+
+    /* do the printf() into the input buffer, put length in len */
+    size = (int)(state->size);
+    state->in[size - 1] = 0;
+#ifdef NO_snprintf
+#  ifdef HAS_sprintf_void
+    sprintf((char *)(state->in), format, a1, a2, a3, a4, a5, a6, a7, a8,
+            a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20);
+    for (len = 0; len < size; len++)
+        if (state->in[len] == 0) break;
+#  else
+    len = sprintf((char *)(state->in), format, a1, a2, a3, a4, a5, a6, a7, a8,
+                  a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20);
+#  endif
+#else
+#  ifdef HAS_snprintf_void
+    snprintf((char *)(state->in), size, format, a1, a2, a3, a4, a5, a6, a7, a8,
+             a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20);
+    len = strlen((char *)(state->in));
+#  else
+    len = snprintf((char *)(state->in), size, format, a1, a2, a3, a4, a5, a6,
+                   a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18,
+                   a19, a20);
+#  endif
+#endif
+
+    /* check that printf() results fit in buffer */
+    if (len <= 0 || len >= (int)size || state->in[size - 1] != 0)
+        return 0;
+
+    /* update buffer and position, defer compression until needed */
+    strm->avail_in = (unsigned)len;
+    strm->next_in = state->in;
+    state->x.pos += len;
+    return len;
+}
+
+#endif
+
+/* -- see zlib.h -- */
+int ZEXPORT gzflush(file, flush)
+    gzFile file;
+    int flush;
+{
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return -1;
+    state = (gz_statep)file;
+
+    /* check that we're writing and that there's no error */
+    if (state->mode != GZ_WRITE || state->err != Z_OK)
+        return Z_STREAM_ERROR;
+
+    /* check flush parameter */
+    if (flush < 0 || flush > Z_FINISH)
+        return Z_STREAM_ERROR;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            return -1;
+    }
+
+    /* compress remaining data with requested flush */
+    gz_comp(state, flush);
+    return state->err;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzsetparams(file, level, strategy)
+    gzFile file;
+    int level;
+    int strategy;
+{
+    gz_statep state;
+    z_streamp strm;
+
+    /* get internal structure */
+    if (file == NULL)
+        return Z_STREAM_ERROR;
+    state = (gz_statep)file;
+    strm = &(state->strm);
+
+    /* check that we're writing and that there's no error */
+    if (state->mode != GZ_WRITE || state->err != Z_OK)
+        return Z_STREAM_ERROR;
+
+    /* if no change is requested, then do nothing */
+    if (level == state->level && strategy == state->strategy)
+        return Z_OK;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            return -1;
+    }
+
+    /* change compression parameters for subsequent input */
+    if (state->size) {
+        /* flush previous input with previous parameters before changing */
+        if (strm->avail_in && gz_comp(state, Z_PARTIAL_FLUSH) == -1)
+            return state->err;
+        deflateParams(strm, level, strategy);
+    }
+    state->level = level;
+    state->strategy = strategy;
+    return Z_OK;
+}
+
+/* -- see zlib.h -- */
+int ZEXPORT gzclose_w(file)
+    gzFile file;
+{
+    int ret = Z_OK;
+    gz_statep state;
+
+    /* get internal structure */
+    if (file == NULL)
+        return Z_STREAM_ERROR;
+    state = (gz_statep)file;
+
+    /* check that we're writing */
+    if (state->mode != GZ_WRITE)
+        return Z_STREAM_ERROR;
+
+    /* check for seek request */
+    if (state->seek) {
+        state->seek = 0;
+        if (gz_zero(state, state->skip) == -1)
+            ret = state->err;
+    }
+
+    /* flush, free memory, and close file */
+    if (gz_comp(state, Z_FINISH) == -1)
+        ret = state->err;
+    if (state->size) {
+        if (!state->direct) {
+            (void)deflateEnd(&(state->strm));
+            free(state->out);
+        }
+        free(state->in);
+    }
+    gz_error(state, Z_OK, NULL);
+    free(state->path);
+    if (close(state->fd) == -1)
+        ret = Z_ERRNO;
+    free(state);
+    return ret;
+}
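
gzwrite.c is the mirror image of gzread.c: small writes are staged in
state->in and deflated through gz_comp(), large writes are compressed straight
from the caller's buffer, and gzclose_w() finishes the stream with Z_FINISH.
A minimal sketch of the corresponding public API (path and payload invented
for illustration):

    #include "zlib.h"

    /* Create a small gzip file using the gz* write API. */
    int write_gz(const char *path)
    {
        gzFile f = gzopen(path, "wb9");          /* 9 = best compression */
        if (f == NULL)
            return -1;
        if (gzprintf(f, "record %d: %s\n", 1, "hello") <= 0 ||
            gzwrite(f, "raw bytes\n", 10) != 10 ||
            gzflush(f, Z_SYNC_FLUSH) != Z_OK) {
            (void)gzclose_w(f);
            return -1;
        }
        return gzclose_w(f) == Z_OK ? 0 : -1;
    }

Note the trade-off zlib.h documents for gzflush(): each flush emits an empty
deflate block, so calling it often degrades compression; the input buffering
above exists precisely so that normal writes never need it.
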
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/infback.c b/c-blosc/internal-complibs/zlib-1.2.8/infback.c
new file mode 100644
index 0000000..f3833c2
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/infback.c
@@ -0,0 +1,640 @@
+/* infback.c -- inflate using a call-back interface
+ * Copyright (C) 1995-2011 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/*
+   This code is largely copied from inflate.c.  Normally either infback.o or
+   inflate.o would be linked into an application--not both.  The interface
+   with inffast.c is retained so that optimized assembler-coded versions of
+   inflate_fast() can be used with either inflate.c or infback.c.
+ */
+
+#include "zutil.h"
+#include "inftrees.h"
+#include "inflate.h"
+#include "inffast.h"
+
+/* function prototypes */
+local void fixedtables OF((struct inflate_state FAR *state));
+
+/*
+   strm provides memory allocation functions in zalloc and zfree, or
+   Z_NULL to use the library memory allocation functions.
+
+   windowBits is in the range 8..15, and window is a user-supplied
+   window and output buffer that is 2**windowBits bytes.
+ */
+int ZEXPORT inflateBackInit_(strm, windowBits, window, version, stream_size)
+z_streamp strm;
+int windowBits;
+unsigned char FAR *window;
+const char *version;
+int stream_size;
+{
+    struct inflate_state FAR *state;
+
+    if (version == Z_NULL || version[0] != ZLIB_VERSION[0] ||
+        stream_size != (int)(sizeof(z_stream)))
+        return Z_VERSION_ERROR;
+    if (strm == Z_NULL || window == Z_NULL ||
+        windowBits < 8 || windowBits > 15)
+        return Z_STREAM_ERROR;
+    strm->msg = Z_NULL;                 /* in case we return an error */
+    if (strm->zalloc == (alloc_func)0) {
+#ifdef Z_SOLO
+        return Z_STREAM_ERROR;
+#else
+        strm->zalloc = zcalloc;
+        strm->opaque = (voidpf)0;
+#endif
+    }
+    if (strm->zfree == (free_func)0)
+#ifdef Z_SOLO
+        return Z_STREAM_ERROR;
+#else
+    strm->zfree = zcfree;
+#endif
+    state = (struct inflate_state FAR *)ZALLOC(strm, 1,
+                                               sizeof(struct inflate_state));
+    if (state == Z_NULL) return Z_MEM_ERROR;
+    Tracev((stderr, "inflate: allocated\n"));
+    strm->state = (struct internal_state FAR *)state;
+    state->dmax = 32768U;
+    state->wbits = windowBits;
+    state->wsize = 1U << windowBits;
+    state->window = window;
+    state->wnext = 0;
+    state->whave = 0;
+    return Z_OK;
+}
+
+/*
+   Return state with length and distance decoding tables and index sizes set to
+   fixed code decoding.  Normally this returns fixed tables from inffixed.h.
+   If BUILDFIXED is defined, then instead this routine builds the tables the
+   first time it's called, and returns those tables the first time and
+   thereafter.  This reduces the size of the code by about 2K bytes, in
+   exchange for a little execution time.  However, BUILDFIXED should not be
+   used for threaded applications, since the rewriting of the tables and of
+   the virgin flag may not be thread-safe.
+ */
+local void fixedtables(state)
+struct inflate_state FAR *state;
+{
+#ifdef BUILDFIXED
+    static int virgin = 1;
+    static code *lenfix, *distfix;
+    static code fixed[544];
+
+    /* build fixed huffman tables if first call (may not be thread safe) */
+    if (virgin) {
+        unsigned sym, bits;
+        static code *next;
+
+        /* literal/length table */
+        sym = 0;
+        while (sym < 144) state->lens[sym++] = 8;
+        while (sym < 256) state->lens[sym++] = 9;
+        while (sym < 280) state->lens[sym++] = 7;
+        while (sym < 288) state->lens[sym++] = 8;
+        next = fixed;
+        lenfix = next;
+        bits = 9;
+        inflate_table(LENS, state->lens, 288, &(next), &(bits), state->work);
+
+        /* distance table */
+        sym = 0;
+        while (sym < 32) state->lens[sym++] = 5;
+        distfix = next;
+        bits = 5;
+        inflate_table(DISTS, state->lens, 32, &(next), &(bits), state->work);
+
+        /* do this just once */
+        virgin = 0;
+    }
+#else /* !BUILDFIXED */
+#   include "inffixed.h"
+#endif /* BUILDFIXED */
+    state->lencode = lenfix;
+    state->lenbits = 9;
+    state->distcode = distfix;
+    state->distbits = 5;
+}
+
+/* Macros for inflateBack(): */
+
+/* Load returned state from inflate_fast() */
+#define LOAD() \
+    do { \
+        put = strm->next_out; \
+        left = strm->avail_out; \
+        next = strm->next_in; \
+        have = strm->avail_in; \
+        hold = state->hold; \
+        bits = state->bits; \
+    } while (0)
+
+/* Set state from registers for inflate_fast() */
+#define RESTORE() \
+    do { \
+        strm->next_out = put; \
+        strm->avail_out = left; \
+        strm->next_in = next; \
+        strm->avail_in = have; \
+        state->hold = hold; \
+        state->bits = bits; \
+    } while (0)
+
+/* Clear the input bit accumulator */
+#define INITBITS() \
+    do { \
+        hold = 0; \
+        bits = 0; \
+    } while (0)
+
+/* Assure that some input is available.  If input is requested, but denied,
+   then return a Z_BUF_ERROR from inflateBack(). */
+#define PULL() \
+    do { \
+        if (have == 0) { \
+            have = in(in_desc, &next); \
+            if (have == 0) { \
+                next = Z_NULL; \
+                ret = Z_BUF_ERROR; \
+                goto inf_leave; \
+            } \
+        } \
+    } while (0)
+
+/* Get a byte of input into the bit accumulator, or return from inflateBack()
+   with an error if there is no input available. */
+#define PULLBYTE() \
+    do { \
+        PULL(); \
+        have--; \
+        hold += (unsigned long)(*next++) << bits; \
+        bits += 8; \
+    } while (0)
+
+/* Assure that there are at least n bits in the bit accumulator.  If there is
+   not enough available input to do that, then return from inflateBack() with
+   an error. */
+#define NEEDBITS(n) \
+    do { \
+        while (bits < (unsigned)(n)) \
+            PULLBYTE(); \
+    } while (0)
+
+/* Return the low n bits of the bit accumulator (n < 16) */
+#define BITS(n) \
+    ((unsigned)hold & ((1U << (n)) - 1))
+
+/* Remove n bits from the bit accumulator */
+#define DROPBITS(n) \
+    do { \
+        hold >>= (n); \
+        bits -= (unsigned)(n); \
+    } while (0)
+
+/* Remove zero to seven bits as needed to go to a byte boundary */
+#define BYTEBITS() \
+    do { \
+        hold >>= bits & 7; \
+        bits -= bits & 7; \
+    } while (0)
+
+/* Assure that some output space is available, by writing out the window
+   if it's full.  If the write fails, return from inflateBack() with a
+   Z_BUF_ERROR. */
+#define ROOM() \
+    do { \
+        if (left == 0) { \
+            put = state->window; \
+            left = state->wsize; \
+            state->whave = left; \
+            if (out(out_desc, put, left)) { \
+                ret = Z_BUF_ERROR; \
+                goto inf_leave; \
+            } \
+        } \
+    } while (0)
+
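+/* Illustrative aside (not part of upstream zlib): the macros above form a
+   little-endian bit accumulator.  Reading a 3-bit field and then a 2-bit
+   field from an input byte 0x5b would proceed as:
+
+       PULLBYTE();         hold = 0x5b, bits = 8
+       BITS(3)   == 3      0x5b & 0x07, low bits are consumed first
+       DROPBITS(3);        hold = 0x0b, bits = 5
+       BITS(2)   == 3      0x0b & 0x03
+
+   NEEDBITS(n) just loops PULLBYTE() until bits >= n, and BYTEBITS() discards
+   the partial byte to realign with the input. */
+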
+/*
+   strm provides the memory allocation functions and window buffer on input,
+   and provides information on the unused input on return.  For Z_DATA_ERROR
+   returns, strm will also provide an error message.
+
+   in() and out() are the call-back input and output functions.  When
+   inflateBack() needs more input, it calls in().  When inflateBack() has
+   filled the window with output, or when it completes with data in the
+   window, it calls out() to write out the data.  The application must not
+   change the provided input until in() is called again or inflateBack()
+   returns.  The application must not change the window/output buffer until
+   inflateBack() returns.
+
+   in() and out() are called with a descriptor parameter provided in the
+   inflateBack() call.  This parameter can be a structure that provides the
+   information required to do the read or write, as well as accumulated
+   information on the input and output such as totals and check values.
+
+   in() should return zero on failure.  out() should return non-zero on
+   failure.  If either in() or out() fails, then inflateBack() returns
+   Z_BUF_ERROR.  strm->next_in can be checked for Z_NULL to see whether it
+   was in() or out() that caused the error.  Otherwise, inflateBack()
+   returns Z_STREAM_END on success, Z_DATA_ERROR for a deflate format
+   error, or Z_MEM_ERROR if it could not allocate memory for the state.
+   inflateBack() can also return Z_STREAM_ERROR if the input parameters
+   are not correct, i.e. strm is Z_NULL or the state was not initialized.
+ */
+int ZEXPORT inflateBack(strm, in, in_desc, out, out_desc)
+z_streamp strm;
+in_func in;
+void FAR *in_desc;
+out_func out;
+void FAR *out_desc;
+{
+    struct inflate_state FAR *state;
+    z_const unsigned char FAR *next;    /* next input */
+    unsigned char FAR *put;     /* next output */
+    unsigned have, left;        /* available input and output */
+    unsigned long hold;         /* bit buffer */
+    unsigned bits;              /* bits in bit buffer */
+    unsigned copy;              /* number of stored or match bytes to copy */
+    unsigned char FAR *from;    /* where to copy match bytes from */
+    code here;                  /* current decoding table entry */
+    code last;                  /* parent table entry */
+    unsigned len;               /* length to copy for repeats, bits to drop */
+    int ret;                    /* return code */
+    static const unsigned short order[19] = /* permutation of code lengths */
+        {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15};
+
+    /* Check that the strm exists and that the state was initialized */
+    if (strm == Z_NULL || strm->state == Z_NULL)
+        return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+
+    /* Reset the state */
+    strm->msg = Z_NULL;
+    state->mode = TYPE;
+    state->last = 0;
+    state->whave = 0;
+    next = strm->next_in;
+    have = next != Z_NULL ? strm->avail_in : 0;
+    hold = 0;
+    bits = 0;
+    put = state->window;
+    left = state->wsize;
+
+    /* Inflate until end of block marked as last */
+    for (;;)
+        switch (state->mode) {
+        case TYPE:
+            /* determine and dispatch block type */
+            if (state->last) {
+                BYTEBITS();
+                state->mode = DONE;
+                break;
+            }
+            NEEDBITS(3);
+            state->last = BITS(1);
+            DROPBITS(1);
+            switch (BITS(2)) {
+            case 0:                             /* stored block */
+                Tracev((stderr, "inflate:     stored block%s\n",
+                        state->last ? " (last)" : ""));
+                state->mode = STORED;
+                break;
+            case 1:                             /* fixed block */
+                fixedtables(state);
+                Tracev((stderr, "inflate:     fixed codes block%s\n",
+                        state->last ? " (last)" : ""));
+                state->mode = LEN;              /* decode codes */
+                break;
+            case 2:                             /* dynamic block */
+                Tracev((stderr, "inflate:     dynamic codes block%s\n",
+                        state->last ? " (last)" : ""));
+                state->mode = TABLE;
+                break;
+            case 3:
+                strm->msg = (char *)"invalid block type";
+                state->mode = BAD;
+            }
+            DROPBITS(2);
+            break;
+
+        case STORED:
+            /* get and verify stored block length */
+            BYTEBITS();                         /* go to byte boundary */
+            NEEDBITS(32);
+            if ((hold & 0xffff) != ((hold >> 16) ^ 0xffff)) {
+                strm->msg = (char *)"invalid stored block lengths";
+                state->mode = BAD;
+                break;
+            }
+            state->length = (unsigned)hold & 0xffff;
+            Tracev((stderr, "inflate:       stored length %u\n",
+                    state->length));
+            INITBITS();
+
+            /* copy stored block from input to output */
+            while (state->length != 0) {
+                copy = state->length;
+                PULL();
+                ROOM();
+                if (copy > have) copy = have;
+                if (copy > left) copy = left;
+                zmemcpy(put, next, copy);
+                have -= copy;
+                next += copy;
+                left -= copy;
+                put += copy;
+                state->length -= copy;
+            }
+            Tracev((stderr, "inflate:       stored end\n"));
+            state->mode = TYPE;
+            break;
+
+        case TABLE:
+            /* get dynamic table entries descriptor */
+            NEEDBITS(14);
+            state->nlen = BITS(5) + 257;
+            DROPBITS(5);
+            state->ndist = BITS(5) + 1;
+            DROPBITS(5);
+            state->ncode = BITS(4) + 4;
+            DROPBITS(4);
+#ifndef PKZIP_BUG_WORKAROUND
+            if (state->nlen > 286 || state->ndist > 30) {
+                strm->msg = (char *)"too many length or distance symbols";
+                state->mode = BAD;
+                break;
+            }
+#endif
+            Tracev((stderr, "inflate:       table sizes ok\n"));
+
+            /* get code length code lengths (not a typo) */
+            state->have = 0;
+            while (state->have < state->ncode) {
+                NEEDBITS(3);
+                state->lens[order[state->have++]] = (unsigned short)BITS(3);
+                DROPBITS(3);
+            }
+            while (state->have < 19)
+                state->lens[order[state->have++]] = 0;
+            state->next = state->codes;
+            state->lencode = (code const FAR *)(state->next);
+            state->lenbits = 7;
+            ret = inflate_table(CODES, state->lens, 19, &(state->next),
+                                &(state->lenbits), state->work);
+            if (ret) {
+                strm->msg = (char *)"invalid code lengths set";
+                state->mode = BAD;
+                break;
+            }
+            Tracev((stderr, "inflate:       code lengths ok\n"));
+
+            /* get length and distance code code lengths */
+            state->have = 0;
+            while (state->have < state->nlen + state->ndist) {
+                for (;;) {
+                    here = state->lencode[BITS(state->lenbits)];
+                    if ((unsigned)(here.bits) <= bits) break;
+                    PULLBYTE();
+                }
+                if (here.val < 16) {
+                    DROPBITS(here.bits);
+                    state->lens[state->have++] = here.val;
+                }
+                else {
+                    if (here.val == 16) {
+                        NEEDBITS(here.bits + 2);
+                        DROPBITS(here.bits);
+                        if (state->have == 0) {
+                            strm->msg = (char *)"invalid bit length repeat";
+                            state->mode = BAD;
+                            break;
+                        }
+                        len = (unsigned)(state->lens[state->have - 1]);
+                        copy = 3 + BITS(2);
+                        DROPBITS(2);
+                    }
+                    else if (here.val == 17) {
+                        NEEDBITS(here.bits + 3);
+                        DROPBITS(here.bits);
+                        len = 0;
+                        copy = 3 + BITS(3);
+                        DROPBITS(3);
+                    }
+                    else {
+                        NEEDBITS(here.bits + 7);
+                        DROPBITS(here.bits);
+                        len = 0;
+                        copy = 11 + BITS(7);
+                        DROPBITS(7);
+                    }
+                    if (state->have + copy > state->nlen + state->ndist) {
+                        strm->msg = (char *)"invalid bit length repeat";
+                        state->mode = BAD;
+                        break;
+                    }
+                    while (copy--)
+                        state->lens[state->have++] = (unsigned short)len;
+                }
+            }
+
+            /* handle error breaks in while */
+            if (state->mode == BAD) break;
+
+            /* check for end-of-block code (better have one) */
+            if (state->lens[256] == 0) {
+                strm->msg = (char *)"invalid code -- missing end-of-block";
+                state->mode = BAD;
+                break;
+            }
+
+            /* build code tables -- note: do not change the lenbits or distbits
+               values here (9 and 6) without reading the comments in inftrees.h
+               concerning the ENOUGH constants, which depend on those values */
+            state->next = state->codes;
+            state->lencode = (code const FAR *)(state->next);
+            state->lenbits = 9;
+            ret = inflate_table(LENS, state->lens, state->nlen, &(state->next),
+                                &(state->lenbits), state->work);
+            if (ret) {
+                strm->msg = (char *)"invalid literal/lengths set";
+                state->mode = BAD;
+                break;
+            }
+            state->distcode = (code const FAR *)(state->next);
+            state->distbits = 6;
+            ret = inflate_table(DISTS, state->lens + state->nlen, state->ndist,
+                            &(state->next), &(state->distbits), state->work);
+            if (ret) {
+                strm->msg = (char *)"invalid distances set";
+                state->mode = BAD;
+                break;
+            }
+            Tracev((stderr, "inflate:       codes ok\n"));
+            state->mode = LEN;
+
+        case LEN:
+            /* use inflate_fast() if we have enough input and output */
+            if (have >= 6 && left >= 258) {
+                RESTORE();
+                if (state->whave < state->wsize)
+                    state->whave = state->wsize - left;
+                inflate_fast(strm, state->wsize);
+                LOAD();
+                break;
+            }
+
+            /* get a literal, length, or end-of-block code */
+            for (;;) {
+                here = state->lencode[BITS(state->lenbits)];
+                if ((unsigned)(here.bits) <= bits) break;
+                PULLBYTE();
+            }
+            if (here.op && (here.op & 0xf0) == 0) {
+                last = here;
+                for (;;) {
+                    here = state->lencode[last.val +
+                            (BITS(last.bits + last.op) >> last.bits)];
+                    if ((unsigned)(last.bits + here.bits) <= bits) break;
+                    PULLBYTE();
+                }
+                DROPBITS(last.bits);
+            }
+            DROPBITS(here.bits);
+            state->length = (unsigned)here.val;
+
+            /* process literal */
+            if (here.op == 0) {
+                Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ?
+                        "inflate:         literal '%c'\n" :
+                        "inflate:         literal 0x%02x\n", here.val));
+                ROOM();
+                *put++ = (unsigned char)(state->length);
+                left--;
+                state->mode = LEN;
+                break;
+            }
+
+            /* process end of block */
+            if (here.op & 32) {
+                Tracevv((stderr, "inflate:         end of block\n"));
+                state->mode = TYPE;
+                break;
+            }
+
+            /* invalid code */
+            if (here.op & 64) {
+                strm->msg = (char *)"invalid literal/length code";
+                state->mode = BAD;
+                break;
+            }
+
+            /* length code -- get extra bits, if any */
+            state->extra = (unsigned)(here.op) & 15;
+            if (state->extra != 0) {
+                NEEDBITS(state->extra);
+                state->length += BITS(state->extra);
+                DROPBITS(state->extra);
+            }
+            Tracevv((stderr, "inflate:         length %u\n", state->length));
+
+            /* get distance code */
+            for (;;) {
+                here = state->distcode[BITS(state->distbits)];
+                if ((unsigned)(here.bits) <= bits) break;
+                PULLBYTE();
+            }
+            if ((here.op & 0xf0) == 0) {
+                last = here;
+                for (;;) {
+                    here = state->distcode[last.val +
+                            (BITS(last.bits + last.op) >> last.bits)];
+                    if ((unsigned)(last.bits + here.bits) <= bits) break;
+                    PULLBYTE();
+                }
+                DROPBITS(last.bits);
+            }
+            DROPBITS(here.bits);
+            if (here.op & 64) {
+                strm->msg = (char *)"invalid distance code";
+                state->mode = BAD;
+                break;
+            }
+            state->offset = (unsigned)here.val;
+
+            /* get distance extra bits, if any */
+            state->extra = (unsigned)(here.op) & 15;
+            if (state->extra != 0) {
+                NEEDBITS(state->extra);
+                state->offset += BITS(state->extra);
+                DROPBITS(state->extra);
+            }
+            if (state->offset > state->wsize - (state->whave < state->wsize ?
+                                                left : 0)) {
+                strm->msg = (char *)"invalid distance too far back";
+                state->mode = BAD;
+                break;
+            }
+            Tracevv((stderr, "inflate:         distance %u\n", state->offset));
+
+            /* copy match from window to output */
+            do {
+                ROOM();
+                copy = state->wsize - state->offset;
+                if (copy < left) {
+                    from = put + copy;
+                    copy = left - copy;
+                }
+                else {
+                    from = put - state->offset;
+                    copy = left;
+                }
+                if (copy > state->length) copy = state->length;
+                state->length -= copy;
+                left -= copy;
+                do {
+                    *put++ = *from++;
+                } while (--copy);
+            } while (state->length != 0);
+            break;
+
+        case DONE:
+            /* inflate stream terminated properly -- write leftover output */
+            ret = Z_STREAM_END;
+            if (left < state->wsize) {
+                if (out(out_desc, state->window, state->wsize - left))
+                    ret = Z_BUF_ERROR;
+            }
+            goto inf_leave;
+
+        case BAD:
+            ret = Z_DATA_ERROR;
+            goto inf_leave;
+
+        default:                /* can't happen, but makes compilers happy */
+            ret = Z_STREAM_ERROR;
+            goto inf_leave;
+        }
+
+    /* Return unused input */
+  inf_leave:
+    strm->next_in = next;
+    strm->avail_in = have;
+    return ret;
+}
+
+int ZEXPORT inflateBackEnd(strm)
+z_streamp strm;
+{
+    if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0)
+        return Z_STREAM_ERROR;
+    ZFREE(strm, strm->state);
+    strm->state = Z_NULL;
+    Tracev((stderr, "inflate: end\n"));
+    return Z_OK;
+}
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inffast.c b/c-blosc/internal-complibs/zlib-1.2.8/inffast.c
new file mode 100644
index 0000000..bda59ce
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inffast.c
@@ -0,0 +1,340 @@
+/* inffast.c -- fast decoding
+ * Copyright (C) 1995-2008, 2010, 2013 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#include "zutil.h"
+#include "inftrees.h"
+#include "inflate.h"
+#include "inffast.h"
+
+#ifndef ASMINF
+
+/* Allow machine dependent optimization for post-increment or pre-increment.
+   Based on testing to date,
+   Pre-increment preferred for:
+   - PowerPC G3 (Adler)
+   - MIPS R5000 (Randers-Pehrson)
+   Post-increment preferred for:
+   - none
+   No measurable difference:
+   - Pentium III (Anderson)
+   - M68060 (Nikl)
+ */
+#ifdef POSTINC
+#  define OFF 0
+#  define PUP(a) *(a)++
+#else
+#  define OFF 1
+#  define PUP(a) *++(a)
+#endif
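+
+/* Illustrative sketch (editor's addition, not upstream zlib): the OFF bias
+   makes PUP() read the same bytes under either configuration.  Mirroring
+   the pointer setup used in inflate_fast() below:
+
+       unsigned char buf[3] = {10, 20, 30};
+       unsigned char *p = buf - OFF;   // biased start, as for in/out below
+       unsigned char a = PUP(p);       // 10 with either OFF/PUP definition
+       unsigned char b = PUP(p);       // 20
+ */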
+
+/*
+   Decode literal, length, and distance codes and write out the resulting
+   literal and match bytes until either not enough input or output is
+   available, an end-of-block is encountered, or a data error is encountered.
+   When large enough input and output buffers are supplied to inflate(), for
+   example, a 16K input buffer and a 64K output buffer, more than 95% of the
+   inflate execution time is spent in this routine.
+
+   Entry assumptions:
+
+        state->mode == LEN
+        strm->avail_in >= 6
+        strm->avail_out >= 258
+        start >= strm->avail_out
+        state->bits < 8
+
+   On return, state->mode is one of:
+
+        LEN -- ran out of output space or of available input
+        TYPE -- reached end of block code, inflate() to interpret next block
+        BAD -- error in block data
+
+   Notes:
+
+    - The maximum input bits used by a length/distance pair is 15 bits for the
+      length code, 5 bits for the length extra, 15 bits for the distance code,
+      and 13 bits for the distance extra.  This totals 48 bits, or six bytes.
+      Therefore if strm->avail_in >= 6, then there is enough input to avoid
+      checking for available input while decoding.
+
+    - The maximum bytes that a single length/distance pair can output is 258
+      bytes, which is the maximum length that can be coded.  inflate_fast()
+      requires strm->avail_out >= 258 for each loop to avoid checking for
+      output space.
+ */
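+/* Worked check of the bounds above (editor's note, not upstream):
+   input  -- 15 + 5 + 15 + 13 = 48 bits = 6 bytes, matching the
+             avail_in >= 6 entry assumption and the
+             "last = in + (strm->avail_in - 5)" sentinel in the setup below;
+   output -- one length/distance pair writes at most 258 bytes, matching
+             avail_out >= 258 and "end = out + (strm->avail_out - 257)". */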
+void ZLIB_INTERNAL inflate_fast(strm, start)
+z_streamp strm;
+unsigned start;         /* inflate()'s starting value for strm->avail_out */
+{
+    struct inflate_state FAR *state;
+    z_const unsigned char FAR *in;      /* local strm->next_in */
+    z_const unsigned char FAR *last;    /* have enough input while in < last */
+    unsigned char FAR *out;     /* local strm->next_out */
+    unsigned char FAR *beg;     /* inflate()'s initial strm->next_out */
+    unsigned char FAR *end;     /* while out < end, enough space available */
+#ifdef INFLATE_STRICT
+    unsigned dmax;              /* maximum distance from zlib header */
+#endif
+    unsigned wsize;             /* window size or zero if not using window */
+    unsigned whave;             /* valid bytes in the window */
+    unsigned wnext;             /* window write index */
+    unsigned char FAR *window;  /* allocated sliding window, if wsize != 0 */
+    unsigned long hold;         /* local strm->hold */
+    unsigned bits;              /* local strm->bits */
+    code const FAR *lcode;      /* local strm->lencode */
+    code const FAR *dcode;      /* local strm->distcode */
+    unsigned lmask;             /* mask for first level of length codes */
+    unsigned dmask;             /* mask for first level of distance codes */
+    code here;                  /* retrieved table entry */
+    unsigned op;                /* code bits, operation, extra bits, or */
+                                /*  window position, window bytes to copy */
+    unsigned len;               /* match length, unused bytes */
+    unsigned dist;              /* match distance */
+    unsigned char FAR *from;    /* where to copy match from */
+
+    /* copy state to local variables */
+    state = (struct inflate_state FAR *)strm->state;
+    in = strm->next_in - OFF;
+    last = in + (strm->avail_in - 5);
+    out = strm->next_out - OFF;
+    beg = out - (start - strm->avail_out);
+    end = out + (strm->avail_out - 257);
+#ifdef INFLATE_STRICT
+    dmax = state->dmax;
+#endif
+    wsize = state->wsize;
+    whave = state->whave;
+    wnext = state->wnext;
+    window = state->window;
+    hold = state->hold;
+    bits = state->bits;
+    lcode = state->lencode;
+    dcode = state->distcode;
+    lmask = (1U << state->lenbits) - 1;
+    dmask = (1U << state->distbits) - 1;
+
+    /* decode literals and length/distances until end-of-block or not enough
+       input data or output space */
+    do {
+        if (bits < 15) {
+            hold += (unsigned long)(PUP(in)) << bits;
+            bits += 8;
+            hold += (unsigned long)(PUP(in)) << bits;
+            bits += 8;
+        }
+        here = lcode[hold & lmask];
+      dolen:
+        op = (unsigned)(here.bits);
+        hold >>= op;
+        bits -= op;
+        op = (unsigned)(here.op);
+        if (op == 0) {                          /* literal */
+            Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ?
+                    "inflate:         literal '%c'\n" :
+                    "inflate:         literal 0x%02x\n", here.val));
+            PUP(out) = (unsigned char)(here.val);
+        }
+        else if (op & 16) {                     /* length base */
+            len = (unsigned)(here.val);
+            op &= 15;                           /* number of extra bits */
+            if (op) {
+                if (bits < op) {
+                    hold += (unsigned long)(PUP(in)) << bits;
+                    bits += 8;
+                }
+                len += (unsigned)hold & ((1U << op) - 1);
+                hold >>= op;
+                bits -= op;
+            }
+            Tracevv((stderr, "inflate:         length %u\n", len));
+            if (bits < 15) {
+                hold += (unsigned long)(PUP(in)) << bits;
+                bits += 8;
+                hold += (unsigned long)(PUP(in)) << bits;
+                bits += 8;
+            }
+            here = dcode[hold & dmask];
+          dodist:
+            op = (unsigned)(here.bits);
+            hold >>= op;
+            bits -= op;
+            op = (unsigned)(here.op);
+            if (op & 16) {                      /* distance base */
+                dist = (unsigned)(here.val);
+                op &= 15;                       /* number of extra bits */
+                if (bits < op) {
+                    hold += (unsigned long)(PUP(in)) << bits;
+                    bits += 8;
+                    if (bits < op) {
+                        hold += (unsigned long)(PUP(in)) << bits;
+                        bits += 8;
+                    }
+                }
+                dist += (unsigned)hold & ((1U << op) - 1);
+#ifdef INFLATE_STRICT
+                if (dist > dmax) {
+                    strm->msg = (char *)"invalid distance too far back";
+                    state->mode = BAD;
+                    break;
+                }
+#endif
+                hold >>= op;
+                bits -= op;
+                Tracevv((stderr, "inflate:         distance %u\n", dist));
+                op = (unsigned)(out - beg);     /* max distance in output */
+                if (dist > op) {                /* see if copy from window */
+                    op = dist - op;             /* distance back in window */
+                    if (op > whave) {
+                        if (state->sane) {
+                            strm->msg =
+                                (char *)"invalid distance too far back";
+                            state->mode = BAD;
+                            break;
+                        }
+#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR
+                        if (len <= op - whave) {
+                            do {
+                                PUP(out) = 0;
+                            } while (--len);
+                            continue;
+                        }
+                        len -= op - whave;
+                        do {
+                            PUP(out) = 0;
+                        } while (--op > whave);
+                        if (op == 0) {
+                            from = out - dist;
+                            do {
+                                PUP(out) = PUP(from);
+                            } while (--len);
+                            continue;
+                        }
+#endif
+                    }
+                    from = window - OFF;
+                    if (wnext == 0) {           /* very common case */
+                        from += wsize - op;
+                        if (op < len) {         /* some from window */
+                            len -= op;
+                            do {
+                                PUP(out) = PUP(from);
+                            } while (--op);
+                            from = out - dist;  /* rest from output */
+                        }
+                    }
+                    else if (wnext < op) {      /* wrap around window */
+                        from += wsize + wnext - op;
+                        op -= wnext;
+                        if (op < len) {         /* some from end of window */
+                            len -= op;
+                            do {
+                                PUP(out) = PUP(from);
+                            } while (--op);
+                            from = window - OFF;
+                            if (wnext < len) {  /* some from start of window */
+                                op = wnext;
+                                len -= op;
+                                do {
+                                    PUP(out) = PUP(from);
+                                } while (--op);
+                                from = out - dist;      /* rest from output */
+                            }
+                        }
+                    }
+                    else {                      /* contiguous in window */
+                        from += wnext - op;
+                        if (op < len) {         /* some from window */
+                            len -= op;
+                            do {
+                                PUP(out) = PUP(from);
+                            } while (--op);
+                            from = out - dist;  /* rest from output */
+                        }
+                    }
+                    while (len > 2) {
+                        PUP(out) = PUP(from);
+                        PUP(out) = PUP(from);
+                        PUP(out) = PUP(from);
+                        len -= 3;
+                    }
+                    if (len) {
+                        PUP(out) = PUP(from);
+                        if (len > 1)
+                            PUP(out) = PUP(from);
+                    }
+                }
+                else {
+                    from = out - dist;          /* copy direct from output */
+                    do {                        /* minimum length is three */
+                        PUP(out) = PUP(from);
+                        PUP(out) = PUP(from);
+                        PUP(out) = PUP(from);
+                        len -= 3;
+                    } while (len > 2);
+                    if (len) {
+                        PUP(out) = PUP(from);
+                        if (len > 1)
+                            PUP(out) = PUP(from);
+                    }
+                }
+            }
+            else if ((op & 64) == 0) {          /* 2nd level distance code */
+                here = dcode[here.val + (hold & ((1U << op) - 1))];
+                goto dodist;
+            }
+            else {
+                strm->msg = (char *)"invalid distance code";
+                state->mode = BAD;
+                break;
+            }
+        }
+        else if ((op & 64) == 0) {              /* 2nd level length code */
+            here = lcode[here.val + (hold & ((1U << op) - 1))];
+            goto dolen;
+        }
+        else if (op & 32) {                     /* end-of-block */
+            Tracevv((stderr, "inflate:         end of block\n"));
+            state->mode = TYPE;
+            break;
+        }
+        else {
+            strm->msg = (char *)"invalid literal/length code";
+            state->mode = BAD;
+            break;
+        }
+    } while (in < last && out < end);
+
+    /* return unused bytes (on entry, bits < 8, so in won't go too far back) */
+    len = bits >> 3;
+    in -= len;
+    bits -= len << 3;
+    hold &= (1U << bits) - 1;
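+    /* e.g. (editor's note): bits == 19 gives len == 2, so two whole
+       unconsumed bytes are handed back to the input and bits drops to 3 */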
+
+    /* update state and return */
+    strm->next_in = in + OFF;
+    strm->next_out = out + OFF;
+    strm->avail_in = (unsigned)(in < last ? 5 + (last - in) : 5 - (in - last));
+    strm->avail_out = (unsigned)(out < end ?
+                                 257 + (end - out) : 257 - (out - end));
+    state->hold = hold;
+    state->bits = bits;
+    return;
+}
+
+/*
+   inflate_fast() speedups that turned out slower (on a PowerPC G3 750CXe):
+   - Using bit fields for code structure
+   - Different op definition to avoid & for extra bits (do & for table bits)
+   - Three separate decoding do-loops for direct, window, and wnext == 0
+   - Special case for distance > 1 copies to do overlapped load and store copy
+   - Explicit branch predictions (based on measured branch probabilities)
+   - Deferring the match copy and interspersing it with decoding subsequent codes
+   - Swapping literal/length else
+   - Swapping window/direct else
+   - Larger unrolled copy loops (three is about right)
+   - Moving len -= 3 statement into middle of loop
+ */
+
+#endif /* !ASMINF */
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inffast.h b/c-blosc/internal-complibs/zlib-1.2.8/inffast.h
new file mode 100644
index 0000000..e5c1aa4
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inffast.h
@@ -0,0 +1,11 @@
+/* inffast.h -- header to use inffast.c
+ * Copyright (C) 1995-2003, 2010 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+   part of the implementation of the compression library and is
+   subject to change. Applications should only use zlib.h.
+ */
+
+void ZLIB_INTERNAL inflate_fast OF((z_streamp strm, unsigned start));
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inffixed.h b/c-blosc/internal-complibs/zlib-1.2.8/inffixed.h
new file mode 100644
index 0000000..d628327
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inffixed.h
@@ -0,0 +1,94 @@
+    /* inffixed.h -- table for decoding fixed codes
+     * Generated automatically by makefixed().
+     */
+
+    /* WARNING: this file should *not* be used by applications.
+       It is part of the implementation of this library and is
+       subject to change. Applications should only use zlib.h.
+     */
+
+    static const code lenfix[512] = {
+        {96,7,0},{0,8,80},{0,8,16},{20,8,115},{18,7,31},{0,8,112},{0,8,48},
+        {0,9,192},{16,7,10},{0,8,96},{0,8,32},{0,9,160},{0,8,0},{0,8,128},
+        {0,8,64},{0,9,224},{16,7,6},{0,8,88},{0,8,24},{0,9,144},{19,7,59},
+        {0,8,120},{0,8,56},{0,9,208},{17,7,17},{0,8,104},{0,8,40},{0,9,176},
+        {0,8,8},{0,8,136},{0,8,72},{0,9,240},{16,7,4},{0,8,84},{0,8,20},
+        {21,8,227},{19,7,43},{0,8,116},{0,8,52},{0,9,200},{17,7,13},{0,8,100},
+        {0,8,36},{0,9,168},{0,8,4},{0,8,132},{0,8,68},{0,9,232},{16,7,8},
+        {0,8,92},{0,8,28},{0,9,152},{20,7,83},{0,8,124},{0,8,60},{0,9,216},
+        {18,7,23},{0,8,108},{0,8,44},{0,9,184},{0,8,12},{0,8,140},{0,8,76},
+        {0,9,248},{16,7,3},{0,8,82},{0,8,18},{21,8,163},{19,7,35},{0,8,114},
+        {0,8,50},{0,9,196},{17,7,11},{0,8,98},{0,8,34},{0,9,164},{0,8,2},
+        {0,8,130},{0,8,66},{0,9,228},{16,7,7},{0,8,90},{0,8,26},{0,9,148},
+        {20,7,67},{0,8,122},{0,8,58},{0,9,212},{18,7,19},{0,8,106},{0,8,42},
+        {0,9,180},{0,8,10},{0,8,138},{0,8,74},{0,9,244},{16,7,5},{0,8,86},
+        {0,8,22},{64,8,0},{19,7,51},{0,8,118},{0,8,54},{0,9,204},{17,7,15},
+        {0,8,102},{0,8,38},{0,9,172},{0,8,6},{0,8,134},{0,8,70},{0,9,236},
+        {16,7,9},{0,8,94},{0,8,30},{0,9,156},{20,7,99},{0,8,126},{0,8,62},
+        {0,9,220},{18,7,27},{0,8,110},{0,8,46},{0,9,188},{0,8,14},{0,8,142},
+        {0,8,78},{0,9,252},{96,7,0},{0,8,81},{0,8,17},{21,8,131},{18,7,31},
+        {0,8,113},{0,8,49},{0,9,194},{16,7,10},{0,8,97},{0,8,33},{0,9,162},
+        {0,8,1},{0,8,129},{0,8,65},{0,9,226},{16,7,6},{0,8,89},{0,8,25},
+        {0,9,146},{19,7,59},{0,8,121},{0,8,57},{0,9,210},{17,7,17},{0,8,105},
+        {0,8,41},{0,9,178},{0,8,9},{0,8,137},{0,8,73},{0,9,242},{16,7,4},
+        {0,8,85},{0,8,21},{16,8,258},{19,7,43},{0,8,117},{0,8,53},{0,9,202},
+        {17,7,13},{0,8,101},{0,8,37},{0,9,170},{0,8,5},{0,8,133},{0,8,69},
+        {0,9,234},{16,7,8},{0,8,93},{0,8,29},{0,9,154},{20,7,83},{0,8,125},
+        {0,8,61},{0,9,218},{18,7,23},{0,8,109},{0,8,45},{0,9,186},{0,8,13},
+        {0,8,141},{0,8,77},{0,9,250},{16,7,3},{0,8,83},{0,8,19},{21,8,195},
+        {19,7,35},{0,8,115},{0,8,51},{0,9,198},{17,7,11},{0,8,99},{0,8,35},
+        {0,9,166},{0,8,3},{0,8,131},{0,8,67},{0,9,230},{16,7,7},{0,8,91},
+        {0,8,27},{0,9,150},{20,7,67},{0,8,123},{0,8,59},{0,9,214},{18,7,19},
+        {0,8,107},{0,8,43},{0,9,182},{0,8,11},{0,8,139},{0,8,75},{0,9,246},
+        {16,7,5},{0,8,87},{0,8,23},{64,8,0},{19,7,51},{0,8,119},{0,8,55},
+        {0,9,206},{17,7,15},{0,8,103},{0,8,39},{0,9,174},{0,8,7},{0,8,135},
+        {0,8,71},{0,9,238},{16,7,9},{0,8,95},{0,8,31},{0,9,158},{20,7,99},
+        {0,8,127},{0,8,63},{0,9,222},{18,7,27},{0,8,111},{0,8,47},{0,9,190},
+        {0,8,15},{0,8,143},{0,8,79},{0,9,254},{96,7,0},{0,8,80},{0,8,16},
+        {20,8,115},{18,7,31},{0,8,112},{0,8,48},{0,9,193},{16,7,10},{0,8,96},
+        {0,8,32},{0,9,161},{0,8,0},{0,8,128},{0,8,64},{0,9,225},{16,7,6},
+        {0,8,88},{0,8,24},{0,9,145},{19,7,59},{0,8,120},{0,8,56},{0,9,209},
+        {17,7,17},{0,8,104},{0,8,40},{0,9,177},{0,8,8},{0,8,136},{0,8,72},
+        {0,9,241},{16,7,4},{0,8,84},{0,8,20},{21,8,227},{19,7,43},{0,8,116},
+        {0,8,52},{0,9,201},{17,7,13},{0,8,100},{0,8,36},{0,9,169},{0,8,4},
+        {0,8,132},{0,8,68},{0,9,233},{16,7,8},{0,8,92},{0,8,28},{0,9,153},
+        {20,7,83},{0,8,124},{0,8,60},{0,9,217},{18,7,23},{0,8,108},{0,8,44},
+        {0,9,185},{0,8,12},{0,8,140},{0,8,76},{0,9,249},{16,7,3},{0,8,82},
+        {0,8,18},{21,8,163},{19,7,35},{0,8,114},{0,8,50},{0,9,197},{17,7,11},
+        {0,8,98},{0,8,34},{0,9,165},{0,8,2},{0,8,130},{0,8,66},{0,9,229},
+        {16,7,7},{0,8,90},{0,8,26},{0,9,149},{20,7,67},{0,8,122},{0,8,58},
+        {0,9,213},{18,7,19},{0,8,106},{0,8,42},{0,9,181},{0,8,10},{0,8,138},
+        {0,8,74},{0,9,245},{16,7,5},{0,8,86},{0,8,22},{64,8,0},{19,7,51},
+        {0,8,118},{0,8,54},{0,9,205},{17,7,15},{0,8,102},{0,8,38},{0,9,173},
+        {0,8,6},{0,8,134},{0,8,70},{0,9,237},{16,7,9},{0,8,94},{0,8,30},
+        {0,9,157},{20,7,99},{0,8,126},{0,8,62},{0,9,221},{18,7,27},{0,8,110},
+        {0,8,46},{0,9,189},{0,8,14},{0,8,142},{0,8,78},{0,9,253},{96,7,0},
+        {0,8,81},{0,8,17},{21,8,131},{18,7,31},{0,8,113},{0,8,49},{0,9,195},
+        {16,7,10},{0,8,97},{0,8,33},{0,9,163},{0,8,1},{0,8,129},{0,8,65},
+        {0,9,227},{16,7,6},{0,8,89},{0,8,25},{0,9,147},{19,7,59},{0,8,121},
+        {0,8,57},{0,9,211},{17,7,17},{0,8,105},{0,8,41},{0,9,179},{0,8,9},
+        {0,8,137},{0,8,73},{0,9,243},{16,7,4},{0,8,85},{0,8,21},{16,8,258},
+        {19,7,43},{0,8,117},{0,8,53},{0,9,203},{17,7,13},{0,8,101},{0,8,37},
+        {0,9,171},{0,8,5},{0,8,133},{0,8,69},{0,9,235},{16,7,8},{0,8,93},
+        {0,8,29},{0,9,155},{20,7,83},{0,8,125},{0,8,61},{0,9,219},{18,7,23},
+        {0,8,109},{0,8,45},{0,9,187},{0,8,13},{0,8,141},{0,8,77},{0,9,251},
+        {16,7,3},{0,8,83},{0,8,19},{21,8,195},{19,7,35},{0,8,115},{0,8,51},
+        {0,9,199},{17,7,11},{0,8,99},{0,8,35},{0,9,167},{0,8,3},{0,8,131},
+        {0,8,67},{0,9,231},{16,7,7},{0,8,91},{0,8,27},{0,9,151},{20,7,67},
+        {0,8,123},{0,8,59},{0,9,215},{18,7,19},{0,8,107},{0,8,43},{0,9,183},
+        {0,8,11},{0,8,139},{0,8,75},{0,9,247},{16,7,5},{0,8,87},{0,8,23},
+        {64,8,0},{19,7,51},{0,8,119},{0,8,55},{0,9,207},{17,7,15},{0,8,103},
+        {0,8,39},{0,9,175},{0,8,7},{0,8,135},{0,8,71},{0,9,239},{16,7,9},
+        {0,8,95},{0,8,31},{0,9,159},{20,7,99},{0,8,127},{0,8,63},{0,9,223},
+        {18,7,27},{0,8,111},{0,8,47},{0,9,191},{0,8,15},{0,8,143},{0,8,79},
+        {0,9,255}
+    };
+
+    static const code distfix[32] = {
+        {16,5,1},{23,5,257},{19,5,17},{27,5,4097},{17,5,5},{25,5,1025},
+        {21,5,65},{29,5,16385},{16,5,3},{24,5,513},{20,5,33},{28,5,8193},
+        {18,5,9},{26,5,2049},{22,5,129},{64,5,0},{16,5,2},{23,5,385},
+        {19,5,25},{27,5,6145},{17,5,7},{25,5,1537},{21,5,97},{29,5,24577},
+        {16,5,4},{24,5,769},{20,5,49},{28,5,12289},{18,5,13},{26,5,3073},
+        {22,5,193},{64,5,0}
+    };
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inflate.c b/c-blosc/internal-complibs/zlib-1.2.8/inflate.c
new file mode 100644
index 0000000..870f89b
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inflate.c
@@ -0,0 +1,1512 @@
+/* inflate.c -- zlib decompression
+ * Copyright (C) 1995-2012 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/*
+ * Change history:
+ *
+ * 1.2.beta0    24 Nov 2002
+ * - First version -- complete rewrite of inflate to simplify code, avoid
+ *   creation of window when not needed, minimize use of window when it is
+ *   needed, make inffast.c even faster, implement gzip decoding, and to
+ *   improve code readability and style over the previous zlib inflate code
+ *
+ * 1.2.beta1    25 Nov 2002
+ * - Use pointers for available input and output checking in inffast.c
+ * - Remove input and output counters in inffast.c
+ * - Change inffast.c entry and loop from avail_in >= 7 to >= 6
+ * - Remove unnecessary second byte pull from length extra in inffast.c
+ * - Unroll direct copy to three copies per loop in inffast.c
+ *
+ * 1.2.beta2    4 Dec 2002
+ * - Change external routine names to reduce potential conflicts
+ * - Correct filename to inffixed.h for fixed tables in inflate.c
+ * - Make hbuf[] unsigned char to match parameter type in inflate.c
+ * - Change strm->next_out[-state->offset] to *(strm->next_out - state->offset)
+ *   to avoid negation problem on Alphas (64 bit) in inflate.c
+ *
+ * 1.2.beta3    22 Dec 2002
+ * - Add comments on state->bits assertion in inffast.c
+ * - Add comments on op field in inftrees.h
+ * - Fix bug in reuse of allocated window after inflateReset()
+ * - Remove bit fields--back to byte structure for speed
+ * - Remove distance extra == 0 check in inflate_fast()--only helps for lengths
+ * - Change post-increments to pre-increments in inflate_fast(), PPC biased?
+ * - Add compile time option, POSTINC, to use post-increments instead (Intel?)
+ * - Make MATCH copy in inflate() much faster for when inflate_fast() not used
+ * - Use local copies of stream next and avail values, as well as local bit
+ *   buffer and bit count in inflate()--for speed when inflate_fast() not used
+ *
+ * 1.2.beta4    1 Jan 2003
+ * - Split ptr - 257 statements in inflate_table() to avoid compiler warnings
+ * - Move a comment on output buffer sizes from inffast.c to inflate.c
+ * - Add comments in inffast.c to introduce the inflate_fast() routine
+ * - Rearrange window copies in inflate_fast() for speed and simplification
+ * - Unroll last copy for window match in inflate_fast()
+ * - Use local copies of window variables in inflate_fast() for speed
+ * - Pull out common wnext == 0 case for speed in inflate_fast()
+ * - Make op and len in inflate_fast() unsigned for consistency
+ * - Add FAR to lcode and dcode declarations in inflate_fast()
+ * - Simplified bad distance check in inflate_fast()
+ * - Added inflateBackInit(), inflateBack(), and inflateBackEnd() in new
+ *   source file infback.c to provide a call-back interface to inflate for
+ *   programs like gzip and unzip -- uses window as output buffer to avoid
+ *   window copying
+ *
+ * 1.2.beta5    1 Jan 2003
+ * - Improved inflateBack() interface to allow the caller to provide initial
+ *   input in strm.
+ * - Fixed stored blocks bug in inflateBack()
+ *
+ * 1.2.beta6    4 Jan 2003
+ * - Added comments in inffast.c on effectiveness of POSTINC
+ * - Typecasting all around to reduce compiler warnings
+ * - Changed loops from while (1) or do {} while (1) to for (;;), again to
+ *   make compilers happy
+ * - Changed type of window in inflateBackInit() to unsigned char *
+ *
+ * 1.2.beta7    27 Jan 2003
+ * - Changed many types to unsigned or unsigned short to avoid warnings
+ * - Added inflateCopy() function
+ *
+ * 1.2.0        9 Mar 2003
+ * - Changed inflateBack() interface to provide separate opaque descriptors
+ *   for the in() and out() functions
+ * - Changed inflateBack() argument and in_func typedef to swap the length
+ *   and buffer address return values for the input function
+ * - Check next_in and next_out for Z_NULL on entry to inflate()
+ *
+ * The history for versions after 1.2.0 is in the ChangeLog in the zlib distribution.
+ */
+
+#include "zutil.h"
+#include "inftrees.h"
+#include "inflate.h"
+#include "inffast.h"
+
+#ifdef MAKEFIXED
+#  ifndef BUILDFIXED
+#    define BUILDFIXED
+#  endif
+#endif
+
+/* function prototypes */
+local void fixedtables OF((struct inflate_state FAR *state));
+local int updatewindow OF((z_streamp strm, const unsigned char FAR *end,
+                           unsigned copy));
+#ifdef BUILDFIXED
+   void makefixed OF((void));
+#endif
+local unsigned syncsearch OF((unsigned FAR *have, const unsigned char FAR *buf,
+                              unsigned len));
+
+int ZEXPORT inflateResetKeep(strm)
+z_streamp strm;
+{
+    struct inflate_state FAR *state;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    strm->total_in = strm->total_out = state->total = 0;
+    strm->msg = Z_NULL;
+    if (state->wrap)        /* to support ill-conceived Java test suite */
+        strm->adler = state->wrap & 1;
+    state->mode = HEAD;
+    state->last = 0;
+    state->havedict = 0;
+    state->dmax = 32768U;
+    state->head = Z_NULL;
+    state->hold = 0;
+    state->bits = 0;
+    state->lencode = state->distcode = state->next = state->codes;
+    state->sane = 1;
+    state->back = -1;
+    Tracev((stderr, "inflate: reset\n"));
+    return Z_OK;
+}
+
+int ZEXPORT inflateReset(strm)
+z_streamp strm;
+{
+    struct inflate_state FAR *state;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    state->wsize = 0;
+    state->whave = 0;
+    state->wnext = 0;
+    return inflateResetKeep(strm);
+}
+
+int ZEXPORT inflateReset2(strm, windowBits)
+z_streamp strm;
+int windowBits;
+{
+    int wrap;
+    struct inflate_state FAR *state;
+
+    /* get the state */
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+
+    /* extract wrap request from windowBits parameter */
+    if (windowBits < 0) {
+        wrap = 0;
+        windowBits = -windowBits;
+    }
+    else {
+        wrap = (windowBits >> 4) + 1;
+#ifdef GUNZIP
+        if (windowBits < 48)
+            windowBits &= 15;
+#endif
+    }
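+    /* editor's note (matches zlib.h): windowBits 8..15 selects a zlib
+       wrapper, adding 16 requests gzip decoding, adding 32 enables
+       zlib/gzip auto-detection (wrap == 3 above), and a negative value
+       selects raw deflate with no wrapper (wrap == 0) */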
+
+    /* set number of window bits, free window if different */
+    if (windowBits && (windowBits < 8 || windowBits > 15))
+        return Z_STREAM_ERROR;
+    if (state->window != Z_NULL && state->wbits != (unsigned)windowBits) {
+        ZFREE(strm, state->window);
+        state->window = Z_NULL;
+    }
+
+    /* update state and reset the rest of it */
+    state->wrap = wrap;
+    state->wbits = (unsigned)windowBits;
+    return inflateReset(strm);
+}
+
+int ZEXPORT inflateInit2_(strm, windowBits, version, stream_size)
+z_streamp strm;
+int windowBits;
+const char *version;
+int stream_size;
+{
+    int ret;
+    struct inflate_state FAR *state;
+
+    if (version == Z_NULL || version[0] != ZLIB_VERSION[0] ||
+        stream_size != (int)(sizeof(z_stream)))
+        return Z_VERSION_ERROR;
+    if (strm == Z_NULL) return Z_STREAM_ERROR;
+    strm->msg = Z_NULL;                 /* in case we return an error */
+    if (strm->zalloc == (alloc_func)0) {
+#ifdef Z_SOLO
+        return Z_STREAM_ERROR;
+#else
+        strm->zalloc = zcalloc;
+        strm->opaque = (voidpf)0;
+#endif
+    }
+    if (strm->zfree == (free_func)0)
+#ifdef Z_SOLO
+        return Z_STREAM_ERROR;
+#else
+        strm->zfree = zcfree;
+#endif
+    state = (struct inflate_state FAR *)
+            ZALLOC(strm, 1, sizeof(struct inflate_state));
+    if (state == Z_NULL) return Z_MEM_ERROR;
+    Tracev((stderr, "inflate: allocated\n"));
+    strm->state = (struct internal_state FAR *)state;
+    state->window = Z_NULL;
+    ret = inflateReset2(strm, windowBits);
+    if (ret != Z_OK) {
+        ZFREE(strm, state);
+        strm->state = Z_NULL;
+    }
+    return ret;
+}
+
+int ZEXPORT inflateInit_(strm, version, stream_size)
+z_streamp strm;
+const char *version;
+int stream_size;
+{
+    return inflateInit2_(strm, DEF_WBITS, version, stream_size);
+}
+
+int ZEXPORT inflatePrime(strm, bits, value)
+z_streamp strm;
+int bits;
+int value;
+{
+    struct inflate_state FAR *state;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    if (bits < 0) {
+        state->hold = 0;
+        state->bits = 0;
+        return Z_OK;
+    }
+    if (bits > 16 || state->bits + bits > 32) return Z_STREAM_ERROR;
+    value &= (1L << bits) - 1;
+    state->hold += value << state->bits;
+    state->bits += bits;
+    return Z_OK;
+}
+
+/*
+   Return state with length and distance decoding tables and index sizes set to
+   fixed code decoding.  Normally this returns fixed tables from inffixed.h.
+   If BUILDFIXED is defined, then instead this routine builds the tables the
+   first time it's called, and returns those tables the first time and
+   thereafter.  This reduces the size of the code by about 2K bytes, in
+   exchange for a little execution time.  However, BUILDFIXED should not be
+   used for threaded applications, since the rewriting of the tables and of
+   the static virgin flag may not be thread-safe.
+ */
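+/* Example (editor's addition, not upstream): selecting the run-time tables
+   is a compile-time switch, e.g.
+
+       cc -DBUILDFIXED -c inflate.c
+
+   which drops the ~2K of inffixed.h tables in exchange for a one-time table
+   build on the first call, with the thread-safety caveat noted above. */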
+local void fixedtables(state)
+struct inflate_state FAR *state;
+{
+#ifdef BUILDFIXED
+    static int virgin = 1;
+    static code *lenfix, *distfix;
+    static code fixed[544];
+
+    /* build fixed huffman tables if first call (may not be thread safe) */
+    if (virgin) {
+        unsigned sym, bits;
+        static code *next;
+
+        /* literal/length table */
+        sym = 0;
+        while (sym < 144) state->lens[sym++] = 8;
+        while (sym < 256) state->lens[sym++] = 9;
+        while (sym < 280) state->lens[sym++] = 7;
+        while (sym < 288) state->lens[sym++] = 8;
+        next = fixed;
+        lenfix = next;
+        bits = 9;
+        inflate_table(LENS, state->lens, 288, &(next), &(bits), state->work);
+
+        /* distance table */
+        sym = 0;
+        while (sym < 32) state->lens[sym++] = 5;
+        distfix = next;
+        bits = 5;
+        inflate_table(DISTS, state->lens, 32, &(next), &(bits), state->work);
+
+        /* do this just once */
+        virgin = 0;
+    }
+#else /* !BUILDFIXED */
+#   include "inffixed.h"
+#endif /* BUILDFIXED */
+    state->lencode = lenfix;
+    state->lenbits = 9;
+    state->distcode = distfix;
+    state->distbits = 5;
+}
+
+#ifdef MAKEFIXED
+#include <stdio.h>
+
+/*
+   Write out the inffixed.h that is #include'd above.  Defining MAKEFIXED also
+   defines BUILDFIXED, so the tables are built on the fly.  makefixed() writes
+   those tables to stdout, which would be piped to inffixed.h.  A small program
+   can simply call makefixed to do this:
+
+    void makefixed(void);
+
+    int main(void)
+    {
+        makefixed();
+        return 0;
+    }
+
+   Then that can be linked with zlib built with MAKEFIXED defined and run:
+
+    a.out > inffixed.h
+ */
+void makefixed()
+{
+    unsigned low, size;
+    struct inflate_state state;
+
+    fixedtables(&state);
+    puts("    /* inffixed.h -- table for decoding fixed codes");
+    puts("     * Generated automatically by makefixed().");
+    puts("     */");
+    puts("");
+    puts("    /* WARNING: this file should *not* be used by applications.");
+    puts("       It is part of the implementation of this library and is");
+    puts("       subject to change. Applications should only use zlib.h.");
+    puts("     */");
+    puts("");
+    size = 1U << 9;
+    printf("    static const code lenfix[%u] = {", size);
+    low = 0;
+    for (;;) {
+        if ((low % 7) == 0) printf("\n        ");
+        printf("{%u,%u,%d}", (low & 127) == 99 ? 64 : state.lencode[low].op,
+               state.lencode[low].bits, state.lencode[low].val);
+        if (++low == size) break;
+        putchar(',');
+    }
+    puts("\n    };");
+    size = 1U << 5;
+    printf("\n    static const code distfix[%u] = {", size);
+    low = 0;
+    for (;;) {
+        if ((low % 6) == 0) printf("\n        ");
+        printf("{%u,%u,%d}", state.distcode[low].op, state.distcode[low].bits,
+               state.distcode[low].val);
+        if (++low == size) break;
+        putchar(',');
+    }
+    puts("\n    };");
+}
+#endif /* MAKEFIXED */
+
+/*
+   Update the window with the last wsize (normally 32K) bytes written before
+   returning.  If window does not exist yet, create it.  This is only called
+   when a window is already in use, or when output has been written during this
+   inflate call, but the end of the deflate stream has not been reached yet.
+   It is also called to create a window for dictionary data when a dictionary
+   is loaded.
+
+   Providing output buffers larger than 32K to inflate() should give a speed
+   advantage, since only the last 32K of output is copied to the sliding window
+   upon return from inflate(), and since all distances after the first 32K of
+   output will fall in the output data, making match copies simpler and faster.
+   The advantage may be dependent on the size of the processor's data caches.
+ */
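+/* Usage sketch (editor's addition, not upstream): per the note above,
+   handing inflate() an output buffer larger than the 32K window reduces
+   window copying and keeps match copies in the output data.  With an
+   already-initialized z_stream strm:
+
+       unsigned char outbuf[64 * 1024];    // comfortably > 32K window
+       strm.next_out = outbuf;
+       strm.avail_out = sizeof(outbuf);
+       ret = inflate(&strm, Z_NO_FLUSH);
+ */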
+local int updatewindow(strm, end, copy)
+z_streamp strm;
+const Bytef *end;
+unsigned copy;
+{
+    struct inflate_state FAR *state;
+    unsigned dist;
+
+    state = (struct inflate_state FAR *)strm->state;
+
+    /* if it hasn't been done already, allocate space for the window */
+    if (state->window == Z_NULL) {
+        state->window = (unsigned char FAR *)
+                        ZALLOC(strm, 1U << state->wbits,
+                               sizeof(unsigned char));
+        if (state->window == Z_NULL) return 1;
+    }
+
+    /* if window not in use yet, initialize */
+    if (state->wsize == 0) {
+        state->wsize = 1U << state->wbits;
+        state->wnext = 0;
+        state->whave = 0;
+    }
+
+    /* copy state->wsize or less output bytes into the circular window */
+    if (copy >= state->wsize) {
+        zmemcpy(state->window, end - state->wsize, state->wsize);
+        state->wnext = 0;
+        state->whave = state->wsize;
+    }
+    else {
+        dist = state->wsize - state->wnext;
+        if (dist > copy) dist = copy;
+        zmemcpy(state->window + state->wnext, end - copy, dist);
+        copy -= dist;
+        if (copy) {
+            zmemcpy(state->window, end - copy, copy);
+            state->wnext = copy;
+            state->whave = state->wsize;
+        }
+        else {
+            state->wnext += dist;
+            if (state->wnext == state->wsize) state->wnext = 0;
+            if (state->whave < state->wsize) state->whave += dist;
+        }
+    }
+    return 0;
+}
+
+/* Macros for inflate(): */
+
+/* check function to use adler32() for zlib or crc32() for gzip */
+#ifdef GUNZIP
+#  define UPDATE(check, buf, len) \
+    (state->flags ? crc32(check, buf, len) : adler32(check, buf, len))
+#else
+#  define UPDATE(check, buf, len) adler32(check, buf, len)
+#endif
+
+/* check macros for header crc */
+#ifdef GUNZIP
+#  define CRC2(check, word) \
+    do { \
+        hbuf[0] = (unsigned char)(word); \
+        hbuf[1] = (unsigned char)((word) >> 8); \
+        check = crc32(check, hbuf, 2); \
+    } while (0)
+
+#  define CRC4(check, word) \
+    do { \
+        hbuf[0] = (unsigned char)(word); \
+        hbuf[1] = (unsigned char)((word) >> 8); \
+        hbuf[2] = (unsigned char)((word) >> 16); \
+        hbuf[3] = (unsigned char)((word) >> 24); \
+        check = crc32(check, hbuf, 4); \
+    } while (0)
+#endif
+
+/* Load registers with state in inflate() for speed */
+#define LOAD() \
+    do { \
+        put = strm->next_out; \
+        left = strm->avail_out; \
+        next = strm->next_in; \
+        have = strm->avail_in; \
+        hold = state->hold; \
+        bits = state->bits; \
+    } while (0)
+
+/* Restore state from registers in inflate() */
+#define RESTORE() \
+    do { \
+        strm->next_out = put; \
+        strm->avail_out = left; \
+        strm->next_in = next; \
+        strm->avail_in = have; \
+        state->hold = hold; \
+        state->bits = bits; \
+    } while (0)
+
+/* Clear the input bit accumulator */
+#define INITBITS() \
+    do { \
+        hold = 0; \
+        bits = 0; \
+    } while (0)
+
+/* Get a byte of input into the bit accumulator, or return from inflate()
+   if there is no input available. */
+#define PULLBYTE() \
+    do { \
+        if (have == 0) goto inf_leave; \
+        have--; \
+        hold += (unsigned long)(*next++) << bits; \
+        bits += 8; \
+    } while (0)
+
+/* Assure that there are at least n bits in the bit accumulator.  If there is
+   not enough available input to do that, then return from inflate(). */
+#define NEEDBITS(n) \
+    do { \
+        while (bits < (unsigned)(n)) \
+            PULLBYTE(); \
+    } while (0)
+
+/* Return the low n bits of the bit accumulator (n < 16) */
+#define BITS(n) \
+    ((unsigned)hold & ((1U << (n)) - 1))
+
+/* Remove n bits from the bit accumulator */
+#define DROPBITS(n) \
+    do { \
+        hold >>= (n); \
+        bits -= (unsigned)(n); \
+    } while (0)
+
+/* Remove zero to seven bits as needed to go to a byte boundary */
+#define BYTEBITS() \
+    do { \
+        hold >>= bits & 7; \
+        bits -= bits & 7; \
+    } while (0)
+
+/*
+   inflate() uses a state machine to process as much input data and generate as
+   much output data as possible before returning.  The state machine is
+   structured roughly as follows:
+
+    for (;;) switch (state) {
+    ...
+    case STATEn:
+        if (not enough input data or output space to make progress)
+            return;
+        ... make progress ...
+        state = STATEm;
+        break;
+    ...
+    }
+
+   so when inflate() is called again, the same case is attempted again, and
+   if the appropriate resources are provided, the machine proceeds to the
+   next state.  The NEEDBITS() macro is usually the way the state evaluates
+   whether it can proceed or should return.  NEEDBITS() does the return if
+   the requested bits are not available.  The typical use of the BITS macros
+   is:
+
+        NEEDBITS(n);
+        ... do something with BITS(n) ...
+        DROPBITS(n);
+
+   where NEEDBITS(n) either returns from inflate() if there isn't enough
+   input left to load n bits into the accumulator, or it continues.  BITS(n)
+   gives the low n bits in the accumulator.  When done, DROPBITS(n) drops
+   the low n bits off the accumulator.  INITBITS() clears the accumulator
+   and sets the number of available bits to zero.  BYTEBITS() discards just
+   enough bits to put the accumulator on a byte boundary.  After BYTEBITS()
+   and a NEEDBITS(8), then BITS(8) would return the next byte in the stream.
+
+   NEEDBITS(n) uses PULLBYTE() to get an available byte of input, or to return
+   if there is no input available.  The decoding of variable length codes uses
+   PULLBYTE() directly in order to pull just enough bytes to decode the next
+   code, and no more.
+
+   Some states loop until they get enough input, making sure that enough
+   state information is maintained to continue the loop where it left off
+   if NEEDBITS() returns in the loop.  For example, want, need, and keep
+   would all have to actually be part of the saved state in case NEEDBITS()
+   returns:
+
+    case STATEw:
+        while (want < need) {
+            NEEDBITS(n);
+            keep[want++] = BITS(n);
+            DROPBITS(n);
+        }
+        state = STATEx;
+    case STATEx:
+
+   As shown above, if the next state is also the next case, then the break
+   is omitted.
+
+   A state may also return if there is not enough output space available to
+   complete that state.  Those states are copying stored data, writing a
+   literal byte, and copying a matching string.
+
+   When returning, a "goto inf_leave" is used to update the total counters,
+   update the check value, and determine whether any progress has been made
+   during that inflate() call in order to return the proper return code.
+   Progress is defined as a change in either strm->avail_in or strm->avail_out.
+   When there is a window, goto inf_leave will update the window with the last
+   output written.  If a goto inf_leave occurs in the middle of decompression
+   and there is no window currently, goto inf_leave will create one and copy
+   output to the window for the next call of inflate().
+
+   In this implementation, the flush parameter of inflate() only affects the
+   return code (per zlib.h).  inflate() always writes as much as possible to
+   strm->next_out, given the space available and the provided input--the effect
+   documented in zlib.h of Z_SYNC_FLUSH.  Furthermore, inflate() always defers
+   the allocation of and copying into a sliding window until necessary, which
+   provides the effect documented in zlib.h for Z_FINISH when the entire input
+   stream available.  So the only thing the flush parameter actually does is:
+   when flush is set to Z_FINISH, inflate() cannot return Z_OK.  Instead it
+   will return Z_BUF_ERROR if it has not reached the end of the stream.
+ */
+
+int ZEXPORT inflate(strm, flush)
+z_streamp strm;
+int flush;
+{
+    struct inflate_state FAR *state;
+    z_const unsigned char FAR *next;    /* next input */
+    unsigned char FAR *put;     /* next output */
+    unsigned have, left;        /* available input and output */
+    unsigned long hold;         /* bit buffer */
+    unsigned bits;              /* bits in bit buffer */
+    unsigned in, out;           /* save starting available input and output */
+    unsigned copy;              /* number of stored or match bytes to copy */
+    unsigned char FAR *from;    /* where to copy match bytes from */
+    code here;                  /* current decoding table entry */
+    code last;                  /* parent table entry */
+    unsigned len;               /* length to copy for repeats, bits to drop */
+    int ret;                    /* return code */
+#ifdef GUNZIP
+    unsigned char hbuf[4];      /* buffer for gzip header crc calculation */
+#endif
+    static const unsigned short order[19] = /* permutation of code lengths */
+        {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15};
+
+    if (strm == Z_NULL || strm->state == Z_NULL || strm->next_out == Z_NULL ||
+        (strm->next_in == Z_NULL && strm->avail_in != 0))
+        return Z_STREAM_ERROR;
+
+    state = (struct inflate_state FAR *)strm->state;
+    if (state->mode == TYPE) state->mode = TYPEDO;      /* skip check */
+    LOAD();
+    in = have;
+    out = left;
+    ret = Z_OK;
+    for (;;)
+        switch (state->mode) {
+        case HEAD:
+            if (state->wrap == 0) {
+                state->mode = TYPEDO;
+                break;
+            }
+            NEEDBITS(16);
+#ifdef GUNZIP
+            if ((state->wrap & 2) && hold == 0x8b1f) {  /* gzip header */
+                state->check = crc32(0L, Z_NULL, 0);
+                CRC2(state->check, hold);
+                INITBITS();
+                state->mode = FLAGS;
+                break;
+            }
+            state->flags = 0;           /* expect zlib header */
+            if (state->head != Z_NULL)
+                state->head->done = -1;
+            if (!(state->wrap & 1) ||   /* check if zlib header allowed */
+#else
+            if (
+#endif
+                ((BITS(8) << 8) + (hold >> 8)) % 31) {
+                strm->msg = (char *)"incorrect header check";
+                state->mode = BAD;
+                break;
+            }
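+            /* (editor's note, per RFC 1950: the two header bytes CMF and
+               FLG, taken as the big-endian value CMF*256 + FLG, must be a
+               multiple of 31 -- the test above) */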
+            if (BITS(4) != Z_DEFLATED) {
+                strm->msg = (char *)"unknown compression method";
+                state->mode = BAD;
+                break;
+            }
+            DROPBITS(4);
+            len = BITS(4) + 8;
+            if (state->wbits == 0)
+                state->wbits = len;
+            else if (len > state->wbits) {
+                strm->msg = (char *)"invalid window size";
+                state->mode = BAD;
+                break;
+            }
+            state->dmax = 1U << len;
+            Tracev((stderr, "inflate:   zlib header ok\n"));
+            strm->adler = state->check = adler32(0L, Z_NULL, 0);
+            state->mode = hold & 0x200 ? DICTID : TYPE;
+            INITBITS();
+            break;
+#ifdef GUNZIP
+        case FLAGS:
+            NEEDBITS(16);
+            state->flags = (int)(hold);
+            if ((state->flags & 0xff) != Z_DEFLATED) {
+                strm->msg = (char *)"unknown compression method";
+                state->mode = BAD;
+                break;
+            }
+            if (state->flags & 0xe000) {
+                strm->msg = (char *)"unknown header flags set";
+                state->mode = BAD;
+                break;
+            }
+            if (state->head != Z_NULL)
+                state->head->text = (int)((hold >> 8) & 1);
+            if (state->flags & 0x0200) CRC2(state->check, hold);
+            INITBITS();
+            state->mode = TIME;
+        case TIME:
+            NEEDBITS(32);
+            if (state->head != Z_NULL)
+                state->head->time = hold;
+            if (state->flags & 0x0200) CRC4(state->check, hold);
+            INITBITS();
+            state->mode = OS;
+        case OS:
+            NEEDBITS(16);
+            if (state->head != Z_NULL) {
+                state->head->xflags = (int)(hold & 0xff);
+                state->head->os = (int)(hold >> 8);
+            }
+            if (state->flags & 0x0200) CRC2(state->check, hold);
+            INITBITS();
+            state->mode = EXLEN;
+        case EXLEN:
+            if (state->flags & 0x0400) {
+                NEEDBITS(16);
+                state->length = (unsigned)(hold);
+                if (state->head != Z_NULL)
+                    state->head->extra_len = (unsigned)hold;
+                if (state->flags & 0x0200) CRC2(state->check, hold);
+                INITBITS();
+            }
+            else if (state->head != Z_NULL)
+                state->head->extra = Z_NULL;
+            state->mode = EXTRA;
+        case EXTRA:
+            if (state->flags & 0x0400) {
+                copy = state->length;
+                if (copy > have) copy = have;
+                if (copy) {
+                    if (state->head != Z_NULL &&
+                        state->head->extra != Z_NULL) {
+                        len = state->head->extra_len - state->length;
+                        zmemcpy(state->head->extra + len, next,
+                                len + copy > state->head->extra_max ?
+                                state->head->extra_max - len : copy);
+                    }
+                    if (state->flags & 0x0200)
+                        state->check = crc32(state->check, next, copy);
+                    have -= copy;
+                    next += copy;
+                    state->length -= copy;
+                }
+                if (state->length) goto inf_leave;
+            }
+            state->length = 0;
+            state->mode = NAME;
+        case NAME:
+            if (state->flags & 0x0800) {
+                if (have == 0) goto inf_leave;
+                copy = 0;
+                do {
+                    len = (unsigned)(next[copy++]);
+                    if (state->head != Z_NULL &&
+                            state->head->name != Z_NULL &&
+                            state->length < state->head->name_max)
+                        state->head->name[state->length++] = len;
+                } while (len && copy < have);
+                if (state->flags & 0x0200)
+                    state->check = crc32(state->check, next, copy);
+                have -= copy;
+                next += copy;
+                if (len) goto inf_leave;
+            }
+            else if (state->head != Z_NULL)
+                state->head->name = Z_NULL;
+            state->length = 0;
+            state->mode = COMMENT;
+        case COMMENT:
+            if (state->flags & 0x1000) {
+                if (have == 0) goto inf_leave;
+                copy = 0;
+                do {
+                    len = (unsigned)(next[copy++]);
+                    if (state->head != Z_NULL &&
+                            state->head->comment != Z_NULL &&
+                            state->length < state->head->comm_max)
+                        state->head->comment[state->length++] = len;
+                } while (len && copy < have);
+                if (state->flags & 0x0200)
+                    state->check = crc32(state->check, next, copy);
+                have -= copy;
+                next += copy;
+                if (len) goto inf_leave;
+            }
+            else if (state->head != Z_NULL)
+                state->head->comment = Z_NULL;
+            state->mode = HCRC;
+        case HCRC:
+            if (state->flags & 0x0200) {
+                NEEDBITS(16);
+                if (hold != (state->check & 0xffff)) {
+                    strm->msg = (char *)"header crc mismatch";
+                    state->mode = BAD;
+                    break;
+                }
+                INITBITS();
+            }
+            if (state->head != Z_NULL) {
+                state->head->hcrc = (int)((state->flags >> 9) & 1);
+                state->head->done = 1;
+            }
+            strm->adler = state->check = crc32(0L, Z_NULL, 0);
+            state->mode = TYPE;
+            break;
+#endif
+        case DICTID:
+            NEEDBITS(32);
+            strm->adler = state->check = ZSWAP32(hold);
+            INITBITS();
+            state->mode = DICT;
+        case DICT:
+            if (state->havedict == 0) {
+                RESTORE();
+                return Z_NEED_DICT;
+            }
+            strm->adler = state->check = adler32(0L, Z_NULL, 0);
+            state->mode = TYPE;
+        case TYPE:
+            if (flush == Z_BLOCK || flush == Z_TREES) goto inf_leave;
+        case TYPEDO:
+            if (state->last) {
+                BYTEBITS();
+                state->mode = CHECK;
+                break;
+            }
+            NEEDBITS(3);
+            state->last = BITS(1);
+            DROPBITS(1);
+            switch (BITS(2)) {
+            case 0:                             /* stored block */
+                Tracev((stderr, "inflate:     stored block%s\n",
+                        state->last ? " (last)" : ""));
+                state->mode = STORED;
+                break;
+            case 1:                             /* fixed block */
+                fixedtables(state);
+                Tracev((stderr, "inflate:     fixed codes block%s\n",
+                        state->last ? " (last)" : ""));
+                state->mode = LEN_;             /* decode codes */
+                if (flush == Z_TREES) {
+                    DROPBITS(2);
+                    goto inf_leave;
+                }
+                break;
+            case 2:                             /* dynamic block */
+                Tracev((stderr, "inflate:     dynamic codes block%s\n",
+                        state->last ? " (last)" : ""));
+                state->mode = TABLE;
+                break;
+            case 3:
+                strm->msg = (char *)"invalid block type";
+                state->mode = BAD;
+            }
+            DROPBITS(2);
+            break;
+        case STORED:
+            BYTEBITS();                         /* go to byte boundary */
+            NEEDBITS(32);
+            if ((hold & 0xffff) != ((hold >> 16) ^ 0xffff)) {
+                strm->msg = (char *)"invalid stored block lengths";
+                state->mode = BAD;
+                break;
+            }
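+            /* (editor's note, per RFC 1951: a stored block begins with LEN
+               and its one's complement NLEN, which the check above
+               verifies) */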
+            state->length = (unsigned)hold & 0xffff;
+            Tracev((stderr, "inflate:       stored length %u\n",
+                    state->length));
+            INITBITS();
+            state->mode = COPY_;
+            if (flush == Z_TREES) goto inf_leave;
+        case COPY_:
+            state->mode = COPY;
+        case COPY:
+            copy = state->length;
+            if (copy) {
+                if (copy > have) copy = have;
+                if (copy > left) copy = left;
+                if (copy == 0) goto inf_leave;
+                zmemcpy(put, next, copy);
+                have -= copy;
+                next += copy;
+                left -= copy;
+                put += copy;
+                state->length -= copy;
+                break;
+            }
+            Tracev((stderr, "inflate:       stored end\n"));
+            state->mode = TYPE;
+            break;
+        case TABLE:
+            NEEDBITS(14);
+            state->nlen = BITS(5) + 257;
+            DROPBITS(5);
+            state->ndist = BITS(5) + 1;
+            DROPBITS(5);
+            state->ncode = BITS(4) + 4;
+            DROPBITS(4);
+#ifndef PKZIP_BUG_WORKAROUND
+            if (state->nlen > 286 || state->ndist > 30) {
+                strm->msg = (char *)"too many length or distance symbols";
+                state->mode = BAD;
+                break;
+            }
+#endif
+            Tracev((stderr, "inflate:       table sizes ok\n"));
+            state->have = 0;
+            state->mode = LENLENS;
+        case LENLENS:
+            while (state->have < state->ncode) {
+                NEEDBITS(3);
+                state->lens[order[state->have++]] = (unsigned short)BITS(3);
+                DROPBITS(3);
+            }
+            while (state->have < 19)
+                state->lens[order[state->have++]] = 0;
+            state->next = state->codes;
+            state->lencode = (const code FAR *)(state->next);
+            state->lenbits = 7;
+            ret = inflate_table(CODES, state->lens, 19, &(state->next),
+                                &(state->lenbits), state->work);
+            if (ret) {
+                strm->msg = (char *)"invalid code lengths set";
+                state->mode = BAD;
+                break;
+            }
+            Tracev((stderr, "inflate:       code lengths ok\n"));
+            state->have = 0;
+            state->mode = CODELENS;
+        case CODELENS:
+            while (state->have < state->nlen + state->ndist) {
+                for (;;) {
+                    here = state->lencode[BITS(state->lenbits)];
+                    if ((unsigned)(here.bits) <= bits) break;
+                    PULLBYTE();
+                }
+                if (here.val < 16) {
+                    DROPBITS(here.bits);
+                    state->lens[state->have++] = here.val;
+                }
+                else {
+                    if (here.val == 16) {
+                        NEEDBITS(here.bits + 2);
+                        DROPBITS(here.bits);
+                        if (state->have == 0) {
+                            strm->msg = (char *)"invalid bit length repeat";
+                            state->mode = BAD;
+                            break;
+                        }
+                        len = state->lens[state->have - 1];
+                        copy = 3 + BITS(2);
+                        DROPBITS(2);
+                    }
+                    else if (here.val == 17) {
+                        NEEDBITS(here.bits + 3);
+                        DROPBITS(here.bits);
+                        len = 0;
+                        copy = 3 + BITS(3);
+                        DROPBITS(3);
+                    }
+                    else {
+                        NEEDBITS(here.bits + 7);
+                        DROPBITS(here.bits);
+                        len = 0;
+                        copy = 11 + BITS(7);
+                        DROPBITS(7);
+                    }
+                    if (state->have + copy > state->nlen + state->ndist) {
+                        strm->msg = (char *)"invalid bit length repeat";
+                        state->mode = BAD;
+                        break;
+                    }
+                    while (copy--)
+                        state->lens[state->have++] = (unsigned short)len;
+                }
+            }
+
+            /* handle error breaks in while */
+            if (state->mode == BAD) break;
+
+            /* check for end-of-block code (better have one) */
+            if (state->lens[256] == 0) {
+                strm->msg = (char *)"invalid code -- missing end-of-block";
+                state->mode = BAD;
+                break;
+            }
+
+            /* build code tables -- note: do not change the lenbits or distbits
+               values here (9 and 6) without reading the comments in inftrees.h
+               concerning the ENOUGH constants, which depend on those values */
+            state->next = state->codes;
+            state->lencode = (const code FAR *)(state->next);
+            state->lenbits = 9;
+            ret = inflate_table(LENS, state->lens, state->nlen, &(state->next),
+                                &(state->lenbits), state->work);
+            if (ret) {
+                strm->msg = (char *)"invalid literal/lengths set";
+                state->mode = BAD;
+                break;
+            }
+            state->distcode = (const code FAR *)(state->next);
+            state->distbits = 6;
+            ret = inflate_table(DISTS, state->lens + state->nlen, state->ndist,
+                            &(state->next), &(state->distbits), state->work);
+            if (ret) {
+                strm->msg = (char *)"invalid distances set";
+                state->mode = BAD;
+                break;
+            }
+            Tracev((stderr, "inflate:       codes ok\n"));
+            state->mode = LEN_;
+            if (flush == Z_TREES) goto inf_leave;
+        case LEN_:
+            state->mode = LEN;
+        case LEN:
+            if (have >= 6 && left >= 258) {
+                RESTORE();
+                inflate_fast(strm, out);
+                LOAD();
+                if (state->mode == TYPE)
+                    state->back = -1;
+                break;
+            }
+            state->back = 0;
+            for (;;) {
+                here = state->lencode[BITS(state->lenbits)];
+                if ((unsigned)(here.bits) <= bits) break;
+                PULLBYTE();
+            }
+            if (here.op && (here.op & 0xf0) == 0) {
+                last = here;
+                for (;;) {
+                    here = state->lencode[last.val +
+                            (BITS(last.bits + last.op) >> last.bits)];
+                    if ((unsigned)(last.bits + here.bits) <= bits) break;
+                    PULLBYTE();
+                }
+                DROPBITS(last.bits);
+                state->back += last.bits;
+            }
+            DROPBITS(here.bits);
+            state->back += here.bits;
+            state->length = (unsigned)here.val;
+            if ((int)(here.op) == 0) {
+                Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ?
+                        "inflate:         literal '%c'\n" :
+                        "inflate:         literal 0x%02x\n", here.val));
+                state->mode = LIT;
+                break;
+            }
+            if (here.op & 32) {
+                Tracevv((stderr, "inflate:         end of block\n"));
+                state->back = -1;
+                state->mode = TYPE;
+                break;
+            }
+            if (here.op & 64) {
+                strm->msg = (char *)"invalid literal/length code";
+                state->mode = BAD;
+                break;
+            }
+            state->extra = (unsigned)(here.op) & 15;
+            state->mode = LENEXT;
+        case LENEXT:
+            if (state->extra) {
+                NEEDBITS(state->extra);
+                state->length += BITS(state->extra);
+                DROPBITS(state->extra);
+                state->back += state->extra;
+            }
+            Tracevv((stderr, "inflate:         length %u\n", state->length));
+            state->was = state->length;
+            state->mode = DIST;
+        case DIST:
+            for (;;) {
+                here = state->distcode[BITS(state->distbits)];
+                if ((unsigned)(here.bits) <= bits) break;
+                PULLBYTE();
+            }
+            if ((here.op & 0xf0) == 0) {
+                last = here;
+                for (;;) {
+                    here = state->distcode[last.val +
+                            (BITS(last.bits + last.op) >> last.bits)];
+                    if ((unsigned)(last.bits + here.bits) <= bits) break;
+                    PULLBYTE();
+                }
+                DROPBITS(last.bits);
+                state->back += last.bits;
+            }
+            DROPBITS(here.bits);
+            state->back += here.bits;
+            if (here.op & 64) {
+                strm->msg = (char *)"invalid distance code";
+                state->mode = BAD;
+                break;
+            }
+            state->offset = (unsigned)here.val;
+            state->extra = (unsigned)(here.op) & 15;
+            state->mode = DISTEXT;
+        case DISTEXT:
+            if (state->extra) {
+                NEEDBITS(state->extra);
+                state->offset += BITS(state->extra);
+                DROPBITS(state->extra);
+                state->back += state->extra;
+            }
+#ifdef INFLATE_STRICT
+            if (state->offset > state->dmax) {
+                strm->msg = (char *)"invalid distance too far back";
+                state->mode = BAD;
+                break;
+            }
+#endif
+            Tracevv((stderr, "inflate:         distance %u\n", state->offset));
+            state->mode = MATCH;
+        case MATCH:
+            if (left == 0) goto inf_leave;
+            copy = out - left;
+            if (state->offset > copy) {         /* copy from window */
+                copy = state->offset - copy;
+                if (copy > state->whave) {
+                    if (state->sane) {
+                        strm->msg = (char *)"invalid distance too far back";
+                        state->mode = BAD;
+                        break;
+                    }
+#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR
+                    Trace((stderr, "inflate.c too far\n"));
+                    copy -= state->whave;
+                    if (copy > state->length) copy = state->length;
+                    if (copy > left) copy = left;
+                    left -= copy;
+                    state->length -= copy;
+                    do {
+                        *put++ = 0;
+                    } while (--copy);
+                    if (state->length == 0) state->mode = LEN;
+                    break;
+#endif
+                }
+                if (copy > state->wnext) {
+                    copy -= state->wnext;
+                    from = state->window + (state->wsize - copy);
+                }
+                else
+                    from = state->window + (state->wnext - copy);
+                if (copy > state->length) copy = state->length;
+            }
+            else {                              /* copy from output */
+                from = put - state->offset;
+                copy = state->length;
+            }
+            if (copy > left) copy = left;
+            left -= copy;
+            state->length -= copy;
+            do {
+                *put++ = *from++;
+            } while (--copy);
+            if (state->length == 0) state->mode = LEN;
+            break;
+        case LIT:
+            if (left == 0) goto inf_leave;
+            *put++ = (unsigned char)(state->length);
+            left--;
+            state->mode = LEN;
+            break;
+        case CHECK:
+            if (state->wrap) {
+                NEEDBITS(32);
+                out -= left;
+                strm->total_out += out;
+                state->total += out;
+                if (out)
+                    strm->adler = state->check =
+                        UPDATE(state->check, put - out, out);
+                out = left;
+                if ((
+#ifdef GUNZIP
+                     state->flags ? hold :
+#endif
+                     ZSWAP32(hold)) != state->check) {
+                    strm->msg = (char *)"incorrect data check";
+                    state->mode = BAD;
+                    break;
+                }
+                INITBITS();
+                Tracev((stderr, "inflate:   check matches trailer\n"));
+            }
+#ifdef GUNZIP
+            state->mode = LENGTH;
+        case LENGTH:
+            if (state->wrap && state->flags) {
+                NEEDBITS(32);
+                if (hold != (state->total & 0xffffffffUL)) {
+                    strm->msg = (char *)"incorrect length check";
+                    state->mode = BAD;
+                    break;
+                }
+                INITBITS();
+                Tracev((stderr, "inflate:   length matches trailer\n"));
+            }
+#endif
+            state->mode = DONE;
+        case DONE:
+            ret = Z_STREAM_END;
+            goto inf_leave;
+        case BAD:
+            ret = Z_DATA_ERROR;
+            goto inf_leave;
+        case MEM:
+            return Z_MEM_ERROR;
+        case SYNC:
+        default:
+            return Z_STREAM_ERROR;
+        }
+
+    /*
+       Return from inflate(), updating the total counts and the check value.
+       If there was no progress during the inflate() call, return a buffer
+       error.  Call updatewindow() to create and/or update the window state.
+       Note: a memory error from inflate() is non-recoverable.
+     */
+  inf_leave:
+    RESTORE();
+    if (state->wsize || (out != strm->avail_out && state->mode < BAD &&
+            (state->mode < CHECK || flush != Z_FINISH)))
+        if (updatewindow(strm, strm->next_out, out - strm->avail_out)) {
+            state->mode = MEM;
+            return Z_MEM_ERROR;
+        }
+    in -= strm->avail_in;
+    out -= strm->avail_out;
+    strm->total_in += in;
+    strm->total_out += out;
+    state->total += out;
+    if (state->wrap && out)
+        strm->adler = state->check =
+            UPDATE(state->check, strm->next_out - out, out);
+    strm->data_type = state->bits + (state->last ? 64 : 0) +
+                      (state->mode == TYPE ? 128 : 0) +
+                      (state->mode == LEN_ || state->mode == COPY_ ? 256 : 0);
+    if (((in == 0 && out == 0) || flush == Z_FINISH) && ret == Z_OK)
+        ret = Z_BUF_ERROR;
+    return ret;
+}
+
+int ZEXPORT inflateEnd(strm)
+z_streamp strm;
+{
+    struct inflate_state FAR *state;
+    if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0)
+        return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    if (state->window != Z_NULL) ZFREE(strm, state->window);
+    ZFREE(strm, strm->state);
+    strm->state = Z_NULL;
+    Tracev((stderr, "inflate: end\n"));
+    return Z_OK;
+}
+
+int ZEXPORT inflateGetDictionary(strm, dictionary, dictLength)
+z_streamp strm;
+Bytef *dictionary;
+uInt *dictLength;
+{
+    struct inflate_state FAR *state;
+
+    /* check state */
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+
+    /* copy dictionary */
+    if (state->whave && dictionary != Z_NULL) {
+        zmemcpy(dictionary, state->window + state->wnext,
+                state->whave - state->wnext);
+        zmemcpy(dictionary + state->whave - state->wnext,
+                state->window, state->wnext);
+    }
+    if (dictLength != Z_NULL)
+        *dictLength = state->whave;
+    return Z_OK;
+}
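+
+/*
+   Editor's note (illustrative, not part of upstream zlib): the window above
+   is circular, with wnext marking both the write position and the oldest
+   byte once the window has filled.  The two zmemcpy() calls therefore
+   rebuild the dictionary in linear, oldest-to-newest order: first the
+   whave - wnext bytes from wnext to the end of the window, then the wnext
+   bytes that wrapped around to the start.
+ */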
+
+int ZEXPORT inflateSetDictionary(strm, dictionary, dictLength)
+z_streamp strm;
+const Bytef *dictionary;
+uInt dictLength;
+{
+    struct inflate_state FAR *state;
+    unsigned long dictid;
+    int ret;
+
+    /* check state */
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    if (state->wrap != 0 && state->mode != DICT)
+        return Z_STREAM_ERROR;
+
+    /* check for correct dictionary identifier */
+    if (state->mode == DICT) {
+        dictid = adler32(0L, Z_NULL, 0);
+        dictid = adler32(dictid, dictionary, dictLength);
+        if (dictid != state->check)
+            return Z_DATA_ERROR;
+    }
+
+    /* copy dictionary to window using updatewindow(), which will amend the
+       existing dictionary if appropriate */
+    ret = updatewindow(strm, dictionary + dictLength, dictLength);
+    if (ret) {
+        state->mode = MEM;
+        return Z_MEM_ERROR;
+    }
+    state->havedict = 1;
+    Tracev((stderr, "inflate:   dictionary set\n"));
+    return Z_OK;
+}
+
+int ZEXPORT inflateGetHeader(strm, head)
+z_streamp strm;
+gz_headerp head;
+{
+    struct inflate_state FAR *state;
+
+    /* check state */
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    if ((state->wrap & 2) == 0) return Z_STREAM_ERROR;
+
+    /* save header structure */
+    state->head = head;
+    head->done = 0;
+    return Z_OK;
+}
+
+/*
+   Search buf[0..len-1] for the pattern: 0, 0, 0xff, 0xff.  Return when found
+   or when out of input.  When called, *have is the number of pattern bytes
+   found in order so far, in 0..3.  On return *have is updated to the new
+   state.  If on return *have equals four, then the pattern was found and the
+   return value is how many bytes were read including the last byte of the
+   pattern.  If *have is less than four, then the pattern has not been found
+   yet and the return value is len.  In the latter case, syncsearch() can be
+   called again with more data and the *have state.  *have is initialized to
+   zero for the first call.
+ */
+local unsigned syncsearch(have, buf, len)
+unsigned FAR *have;
+const unsigned char FAR *buf;
+unsigned len;
+{
+    unsigned got;
+    unsigned next;
+
+    got = *have;
+    next = 0;
+    while (next < len && got < 4) {
+        if ((int)(buf[next]) == (got < 2 ? 0 : 0xff))
+            got++;
+        else if (buf[next])
+            got = 0;
+        else
+            got = 4 - got;
+        next++;
+    }
+    *have = got;
+    return next;
+}
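+
+/*
+   Editor's note (illustrative, not part of upstream zlib): a worked trace of
+   syncsearch() over the bytes {0x05, 0x00, 0x00, 0xff, 0xff} with *have == 0
+   on entry:
+
+       next  buf[next]  got afterwards
+        0     0x05       0   (nonzero mismatch resets the match)
+        1     0x00       1
+        2     0x00       2
+        3     0xff       3
+        4     0xff       4   (full 0, 0, 0xff, 0xff pattern found)
+
+   The function returns 5, the bytes consumed through the last pattern byte,
+   and *have == 4 tells inflateSync() that the sync point was located.
+ */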
+
+int ZEXPORT inflateSync(strm)
+z_streamp strm;
+{
+    unsigned len;               /* number of bytes to look at or looked at */
+    unsigned long in, out;      /* temporary to save total_in and total_out */
+    unsigned char buf[4];       /* to restore bit buffer to byte string */
+    struct inflate_state FAR *state;
+
+    /* check parameters */
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    if (strm->avail_in == 0 && state->bits < 8) return Z_BUF_ERROR;
+
+    /* if first time, start search in bit buffer */
+    if (state->mode != SYNC) {
+        state->mode = SYNC;
+        state->hold <<= state->bits & 7;
+        state->bits -= state->bits & 7;
+        len = 0;
+        while (state->bits >= 8) {
+            buf[len++] = (unsigned char)(state->hold);
+            state->hold >>= 8;
+            state->bits -= 8;
+        }
+        state->have = 0;
+        syncsearch(&(state->have), buf, len);
+    }
+
+    /* search available input */
+    len = syncsearch(&(state->have), strm->next_in, strm->avail_in);
+    strm->avail_in -= len;
+    strm->next_in += len;
+    strm->total_in += len;
+
+    /* return no joy or set up to restart inflate() on a new block */
+    if (state->have != 4) return Z_DATA_ERROR;
+    in = strm->total_in;  out = strm->total_out;
+    inflateReset(strm);
+    strm->total_in = in;  strm->total_out = out;
+    state->mode = TYPE;
+    return Z_OK;
+}
+
+/*
+   Returns true if inflate is currently at the end of a block generated by
+   Z_SYNC_FLUSH or Z_FULL_FLUSH. This function is used by one PPP
+   implementation to provide an additional safety check. PPP uses
+   Z_SYNC_FLUSH but removes the length bytes of the resulting empty stored
+ block. When decompressing, PPP checks that at the end of an input packet,
+   inflate is waiting for these length bytes.
+ */
+int ZEXPORT inflateSyncPoint(strm)
+z_streamp strm;
+{
+    struct inflate_state FAR *state;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    return state->mode == STORED && state->bits == 0;
+}
+
+int ZEXPORT inflateCopy(dest, source)
+z_streamp dest;
+z_streamp source;
+{
+    struct inflate_state FAR *state;
+    struct inflate_state FAR *copy;
+    unsigned char FAR *window;
+    unsigned wsize;
+
+    /* check input */
+    if (dest == Z_NULL || source == Z_NULL || source->state == Z_NULL ||
+        source->zalloc == (alloc_func)0 || source->zfree == (free_func)0)
+        return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)source->state;
+
+    /* allocate space */
+    copy = (struct inflate_state FAR *)
+           ZALLOC(source, 1, sizeof(struct inflate_state));
+    if (copy == Z_NULL) return Z_MEM_ERROR;
+    window = Z_NULL;
+    if (state->window != Z_NULL) {
+        window = (unsigned char FAR *)
+                 ZALLOC(source, 1U << state->wbits, sizeof(unsigned char));
+        if (window == Z_NULL) {
+            ZFREE(source, copy);
+            return Z_MEM_ERROR;
+        }
+    }
+
+    /* copy state */
+    zmemcpy((voidpf)dest, (voidpf)source, sizeof(z_stream));
+    zmemcpy((voidpf)copy, (voidpf)state, sizeof(struct inflate_state));
+    if (state->lencode >= state->codes &&
+        state->lencode <= state->codes + ENOUGH - 1) {
+        copy->lencode = copy->codes + (state->lencode - state->codes);
+        copy->distcode = copy->codes + (state->distcode - state->codes);
+    }
+    copy->next = copy->codes + (state->next - state->codes);
+    if (window != Z_NULL) {
+        wsize = 1U << state->wbits;
+        zmemcpy(window, state->window, wsize);
+    }
+    copy->window = window;
+    dest->state = (struct internal_state FAR *)copy;
+    return Z_OK;
+}
+
+int ZEXPORT inflateUndermine(strm, subvert)
+z_streamp strm;
+int subvert;
+{
+    struct inflate_state FAR *state;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR;
+    state = (struct inflate_state FAR *)strm->state;
+    state->sane = !subvert;
+#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR
+    return Z_OK;
+#else
+    state->sane = 1;
+    return Z_DATA_ERROR;
+#endif
+}
+
+long ZEXPORT inflateMark(strm)
+z_streamp strm;
+{
+    struct inflate_state FAR *state;
+
+    if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16;
+    state = (struct inflate_state FAR *)strm->state;
+    return ((long)(state->back) << 16) +
+        (state->mode == COPY ? state->length :
+            (state->mode == MATCH ? state->was - state->length : 0));
+}
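+
+/*
+   Editor's note: a minimal sketch (illustrative, not part of upstream zlib)
+   of driving the state machine above through the public zlib.h API.  CHUNK,
+   src and dst are assumed to be supplied by the caller, and error handling
+   is abbreviated to keep the sketch short.
+
+       z_stream strm;
+       unsigned char in[CHUNK], out[CHUNK];
+       int ret;
+       memset(&strm, 0, sizeof(strm));          // Z_NULL zalloc/zfree/opaque
+       if (inflateInit(&strm) != Z_OK) return;
+       do {
+           strm.avail_in = fread(in, 1, CHUNK, src);
+           if (strm.avail_in == 0) break;       // truncated input
+           strm.next_in = in;
+           do {
+               strm.avail_out = CHUNK;
+               strm.next_out = out;
+               ret = inflate(&strm, Z_NO_FLUSH);  // runs HEAD .. DONE above
+               if (ret < 0) break;                // Z_DATA_ERROR and friends
+               fwrite(out, 1, CHUNK - strm.avail_out, dst);
+           } while (strm.avail_out == 0);
+       } while (ret == Z_OK);
+       inflateEnd(&strm);
+ */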
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inflate.h b/c-blosc/internal-complibs/zlib-1.2.8/inflate.h
new file mode 100644
index 0000000..95f4986
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inflate.h
@@ -0,0 +1,122 @@
+/* inflate.h -- internal inflate state definition
+ * Copyright (C) 1995-2009 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+   part of the implementation of the compression library and is
+   subject to change. Applications should only use zlib.h.
+ */
+
+/* define NO_GZIP when compiling if you want to disable gzip header and
+   trailer decoding by inflate().  NO_GZIP would be used to avoid linking in
+   the crc code when it is not needed.  For shared libraries, gzip decoding
+   should be left enabled. */
+#ifndef NO_GZIP
+#  define GUNZIP
+#endif
+
+/* Possible inflate modes between inflate() calls */
+typedef enum {
+    HEAD,       /* i: waiting for magic header */
+    FLAGS,      /* i: waiting for method and flags (gzip) */
+    TIME,       /* i: waiting for modification time (gzip) */
+    OS,         /* i: waiting for extra flags and operating system (gzip) */
+    EXLEN,      /* i: waiting for extra length (gzip) */
+    EXTRA,      /* i: waiting for extra bytes (gzip) */
+    NAME,       /* i: waiting for end of file name (gzip) */
+    COMMENT,    /* i: waiting for end of comment (gzip) */
+    HCRC,       /* i: waiting for header crc (gzip) */
+    DICTID,     /* i: waiting for dictionary check value */
+    DICT,       /* waiting for inflateSetDictionary() call */
+        TYPE,       /* i: waiting for type bits, including last-flag bit */
+        TYPEDO,     /* i: same, but skip check to exit inflate on new block */
+        STORED,     /* i: waiting for stored size (length and complement) */
+        COPY_,      /* i/o: same as COPY below, but only first time in */
+        COPY,       /* i/o: waiting for input or output to copy stored block */
+        TABLE,      /* i: waiting for dynamic block table lengths */
+        LENLENS,    /* i: waiting for code length code lengths */
+        CODELENS,   /* i: waiting for length/lit and distance code lengths */
+            LEN_,       /* i: same as LEN below, but only first time in */
+            LEN,        /* i: waiting for length/lit/eob code */
+            LENEXT,     /* i: waiting for length extra bits */
+            DIST,       /* i: waiting for distance code */
+            DISTEXT,    /* i: waiting for distance extra bits */
+            MATCH,      /* o: waiting for output space to copy string */
+            LIT,        /* o: waiting for output space to write literal */
+    CHECK,      /* i: waiting for 32-bit check value */
+    LENGTH,     /* i: waiting for 32-bit length (gzip) */
+    DONE,       /* finished check, done -- remain here until reset */
+    BAD,        /* got a data error -- remain here until reset */
+    MEM,        /* got an inflate() memory error -- remain here until reset */
+    SYNC        /* looking for synchronization bytes to restart inflate() */
+} inflate_mode;
+
+/*
+    State transitions between above modes -
+
+    (most modes can go to BAD or MEM on error -- not shown for clarity)
+
+    Process header:
+        HEAD -> (gzip) or (zlib) or (raw)
+        (gzip) -> FLAGS -> TIME -> OS -> EXLEN -> EXTRA -> NAME -> COMMENT ->
+                  HCRC -> TYPE
+        (zlib) -> DICTID or TYPE
+        DICTID -> DICT -> TYPE
+        (raw) -> TYPEDO
+    Read deflate blocks:
+            TYPE -> TYPEDO -> STORED or TABLE or LEN_ or CHECK
+            STORED -> COPY_ -> COPY -> TYPE
+            TABLE -> LENLENS -> CODELENS -> LEN_
+            LEN_ -> LEN
+    Read deflate codes in fixed or dynamic block:
+                LEN -> LENEXT or LIT or TYPE
+                LENEXT -> DIST -> DISTEXT -> MATCH -> LEN
+                LIT -> LEN
+    Process trailer:
+        CHECK -> LENGTH -> DONE
+ */
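+
+/*
+   Editor's note (illustrative, not part of upstream zlib): for a zlib-wrapped
+   stream holding a single fixed-Huffman block, a successful decode walks
+
+       HEAD -> TYPE -> TYPEDO -> LEN_ -> LEN
+            -> (LIT or LENEXT -> DIST -> DISTEXT -> MATCH) ... -> LEN
+            -> TYPE (end-of-block, last set) -> CHECK -> LENGTH -> DONE
+
+   where LENGTH reduces to a no-op because state->flags is zero for zlib
+   wrapping, and CHECK verifies the Adler-32 trailer.
+ */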
+
+/* state maintained between inflate() calls.  Approximately 10K bytes. */
+struct inflate_state {
+    inflate_mode mode;          /* current inflate mode */
+    int last;                   /* true if processing last block */
+    int wrap;                   /* bit 0 true for zlib, bit 1 true for gzip */
+    int havedict;               /* true if dictionary provided */
+    int flags;                  /* gzip header method and flags (0 if zlib) */
+    unsigned dmax;              /* zlib header max distance (INFLATE_STRICT) */
+    unsigned long check;        /* protected copy of check value */
+    unsigned long total;        /* protected copy of output count */
+    gz_headerp head;            /* where to save gzip header information */
+        /* sliding window */
+    unsigned wbits;             /* log base 2 of requested window size */
+    unsigned wsize;             /* window size or zero if not using window */
+    unsigned whave;             /* valid bytes in the window */
+    unsigned wnext;             /* window write index */
+    unsigned char FAR *window;  /* allocated sliding window, if needed */
+        /* bit accumulator */
+    unsigned long hold;         /* input bit accumulator */
+    unsigned bits;              /* number of bits in "in" */
+        /* for string and stored block copying */
+    unsigned length;            /* literal or length of data to copy */
+    unsigned offset;            /* distance back to copy string from */
+        /* for table and code decoding */
+    unsigned extra;             /* extra bits needed */
+        /* fixed and dynamic code tables */
+    code const FAR *lencode;    /* starting table for length/literal codes */
+    code const FAR *distcode;   /* starting table for distance codes */
+    unsigned lenbits;           /* index bits for lencode */
+    unsigned distbits;          /* index bits for distcode */
+        /* dynamic table building */
+    unsigned ncode;             /* number of code length code lengths */
+    unsigned nlen;              /* number of length code lengths */
+    unsigned ndist;             /* number of distance code lengths */
+    unsigned have;              /* number of code lengths in lens[] */
+    code FAR *next;             /* next available space in codes[] */
+    unsigned short lens[320];   /* temporary storage for code lengths */
+    unsigned short work[288];   /* work area for code table building */
+    code codes[ENOUGH];         /* space for code tables */
+    int sane;                   /* if false, allow invalid distance too far */
+    int back;                   /* bits back of last unprocessed length/lit */
+    unsigned was;               /* initial length of match */
+};
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inftrees.c b/c-blosc/internal-complibs/zlib-1.2.8/inftrees.c
new file mode 100644
index 0000000..44d89cf
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inftrees.c
@@ -0,0 +1,306 @@
+/* inftrees.c -- generate Huffman trees for efficient decoding
+ * Copyright (C) 1995-2013 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#include "zutil.h"
+#include "inftrees.h"
+
+#define MAXBITS 15
+
+const char inflate_copyright[] =
+   " inflate 1.2.8 Copyright 1995-2013 Mark Adler ";
+/*
+  If you use the zlib library in a product, an acknowledgment is welcome
+  in the documentation of your product. If for some reason you cannot
+  include such an acknowledgment, I would appreciate that you keep this
+  copyright string in the executable of your product.
+ */
+
+/*
+   Build a set of tables to decode the provided canonical Huffman code.
+   The code lengths are lens[0..codes-1].  The result starts at *table,
+   whose indices are 0..2^bits-1.  work is a writable array of at least
+   lens shorts, which is used as a work area.  type is the type of code
+   to be generated, CODES, LENS, or DISTS.  On return, zero is success,
+   -1 is an invalid code, and +1 means that ENOUGH isn't enough.  table
+   on return points to the next available entry's address.  bits is the
+   requested root table index bits, and on return it is the actual root
+   table index bits.  It will differ if the request is greater than the
+   longest code or if it is less than the shortest code.
+ */
+int ZLIB_INTERNAL inflate_table(type, lens, codes, table, bits, work)
+codetype type;
+unsigned short FAR *lens;
+unsigned codes;
+code FAR * FAR *table;
+unsigned FAR *bits;
+unsigned short FAR *work;
+{
+    unsigned len;               /* a code's length in bits */
+    unsigned sym;               /* index of code symbols */
+    unsigned min, max;          /* minimum and maximum code lengths */
+    unsigned root;              /* number of index bits for root table */
+    unsigned curr;              /* number of index bits for current table */
+    unsigned drop;              /* code bits to drop for sub-table */
+    int left;                   /* number of prefix codes available */
+    unsigned used;              /* code entries in table used */
+    unsigned huff;              /* Huffman code */
+    unsigned incr;              /* for incrementing code, index */
+    unsigned fill;              /* index for replicating entries */
+    unsigned low;               /* low bits for current root entry */
+    unsigned mask;              /* mask for low root bits */
+    code here;                  /* table entry for duplication */
+    code FAR *next;             /* next available space in table */
+    const unsigned short FAR *base;     /* base value table to use */
+    const unsigned short FAR *extra;    /* extra bits table to use */
+    int end;                    /* use base and extra for symbol > end */
+    unsigned short count[MAXBITS+1];    /* number of codes of each length */
+    unsigned short offs[MAXBITS+1];     /* offsets in table for each length */
+    static const unsigned short lbase[31] = { /* Length codes 257..285 base */
+        3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31,
+        35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258, 0, 0};
+    static const unsigned short lext[31] = { /* Length codes 257..285 extra */
+        16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 18, 18, 18, 18,
+        19, 19, 19, 19, 20, 20, 20, 20, 21, 21, 21, 21, 16, 72, 78};
+    static const unsigned short dbase[32] = { /* Distance codes 0..29 base */
+        1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193,
+        257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145,
+        8193, 12289, 16385, 24577, 0, 0};
+    static const unsigned short dext[32] = { /* Distance codes 0..29 extra */
+        16, 16, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22,
+        23, 23, 24, 24, 25, 25, 26, 26, 27, 27,
+        28, 28, 29, 29, 64, 64};
+
+    /*
+       Process a set of code lengths to create a canonical Huffman code.  The
+       code lengths are lens[0..codes-1].  Each length corresponds to the
+       symbols 0..codes-1.  The Huffman code is generated by first sorting the
+       symbols by length from short to long, and retaining the symbol order
+       for codes with equal lengths.  Then the code starts with all zero bits
+       for the first code of the shortest length, and the codes are integer
+       increments for the same length, and zeros are appended as the length
+       increases.  For the deflate format, these bits are stored backwards
+       from their more natural integer increment ordering, and so when the
+       decoding tables are built in the large loop below, the integer codes
+       are incremented backwards.
+
+       This routine assumes, but does not check, that all of the entries in
+       lens[] are in the range 0..MAXBITS.  The caller must assure this.
+       1..MAXBITS is interpreted as that code length.  Zero means that the
+       symbol does not occur in this code.
+
+       The codes are sorted by computing a count of codes for each length,
+       creating from that a table of starting indices for each length in the
+       sorted table, and then entering the symbols in order in the sorted
+       table.  The sorted table is work[], with that space being provided by
+       the caller.
+
+       The length counts are used for other purposes as well, i.e. finding
+       the minimum and maximum length codes, determining if there are any
+       codes at all, checking for a valid set of lengths, and looking ahead
+       at length counts to determine sub-table sizes when building the
+       decoding tables.
+     */
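+
+    /*
+       Editor's illustration (not part of upstream zlib): for the RFC 1951
+       example alphabet A..H with lens[] = {3, 3, 3, 3, 3, 2, 4, 4}, the
+       procedure described above gives count[2] = 1, count[3] = 5 and
+       count[4] = 2, and the canonical codes
+
+           F = 00, A = 010, B = 011, C = 100, D = 101, E = 110,
+           G = 1110, H = 1111
+
+       That is: the shortest code is all zeros, codes of equal length are
+       consecutive integers, and a zero bit is appended at each length
+       increase.
+     */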
+
+    /* accumulate lengths for codes (assumes lens[] all in 0..MAXBITS) */
+    for (len = 0; len <= MAXBITS; len++)
+        count[len] = 0;
+    for (sym = 0; sym < codes; sym++)
+        count[lens[sym]]++;
+
+    /* bound code lengths, force root to be within code lengths */
+    root = *bits;
+    for (max = MAXBITS; max >= 1; max--)
+        if (count[max] != 0) break;
+    if (root > max) root = max;
+    if (max == 0) {                     /* no symbols to code at all */
+        here.op = (unsigned char)64;    /* invalid code marker */
+        here.bits = (unsigned char)1;
+        here.val = (unsigned short)0;
+        *(*table)++ = here;             /* make a table to force an error */
+        *(*table)++ = here;
+        *bits = 1;
+        return 0;     /* no symbols, but wait for decoding to report error */
+    }
+    for (min = 1; min < max; min++)
+        if (count[min] != 0) break;
+    if (root < min) root = min;
+
+    /* check for an over-subscribed or incomplete set of lengths */
+    left = 1;
+    for (len = 1; len <= MAXBITS; len++) {
+        left <<= 1;
+        left -= count[len];
+        if (left < 0) return -1;        /* over-subscribed */
+    }
+    if (left > 0 && (type == CODES || max != 1))
+        return -1;                      /* incomplete set */
+
+    /* generate offsets into symbol table for each length for sorting */
+    offs[1] = 0;
+    for (len = 1; len < MAXBITS; len++)
+        offs[len + 1] = offs[len] + count[len];
+
+    /* sort symbols by length, by symbol order within each length */
+    for (sym = 0; sym < codes; sym++)
+        if (lens[sym] != 0) work[offs[lens[sym]]++] = (unsigned short)sym;
+
+    /*
+       Create and fill in decoding tables.  In this loop, the table being
+       filled is at next and has curr index bits.  The code being used is huff
+       with length len.  That code is converted to an index by dropping drop
+       bits off of the bottom.  For codes where len is less than drop + curr,
+       those top drop + curr - len bits are incremented through all values to
+       fill the table with replicated entries.
+
+       root is the number of index bits for the root table.  When len exceeds
+       root, sub-tables are created pointed to by the root entry with an index
+       of the low root bits of huff.  This is saved in low to check for when a
+       new sub-table should be started.  drop is zero when the root table is
+       being filled, and drop is root when sub-tables are being filled.
+
+       When a new sub-table is needed, it is necessary to look ahead in the
+       code lengths to determine what size sub-table is needed.  The length
+       counts are used for this, and so count[] is decremented as codes are
+       entered in the tables.
+
+       used keeps track of how many table entries have been allocated from the
+       provided *table space.  It is checked for LENS and DIST tables against
+       the constants ENOUGH_LENS and ENOUGH_DISTS to guard against changes in
+       the initial root table size constants.  See the comments in inftrees.h
+       for more information.
+
+       sym increments through all symbols, and the loop terminates when
+       all codes of length max, i.e. all codes, have been processed.  This
+       routine permits incomplete codes, so another loop after this one fills
+       in the rest of the decoding tables with invalid code markers.
+     */
+
+    /* set up for code type */
+    switch (type) {
+    case CODES:
+        base = extra = work;    /* dummy value--not used */
+        end = 19;
+        break;
+    case LENS:
+        base = lbase;
+        base -= 257;
+        extra = lext;
+        extra -= 257;
+        end = 256;
+        break;
+    default:            /* DISTS */
+        base = dbase;
+        extra = dext;
+        end = -1;
+    }
+
+    /* initialize state for loop */
+    huff = 0;                   /* starting code */
+    sym = 0;                    /* starting code symbol */
+    len = min;                  /* starting code length */
+    next = *table;              /* current table to fill in */
+    curr = root;                /* current table index bits */
+    drop = 0;                   /* current bits to drop from code for index */
+    low = (unsigned)(-1);       /* trigger new sub-table when len > root */
+    used = 1U << root;          /* use root table entries */
+    mask = used - 1;            /* mask for comparing low */
+
+    /* check available table space */
+    if ((type == LENS && used > ENOUGH_LENS) ||
+        (type == DISTS && used > ENOUGH_DISTS))
+        return 1;
+
+    /* process all codes and make table entries */
+    for (;;) {
+        /* create table entry */
+        here.bits = (unsigned char)(len - drop);
+        if ((int)(work[sym]) < end) {
+            here.op = (unsigned char)0;
+            here.val = work[sym];
+        }
+        else if ((int)(work[sym]) > end) {
+            here.op = (unsigned char)(extra[work[sym]]);
+            here.val = base[work[sym]];
+        }
+        else {
+            here.op = (unsigned char)(32 + 64);         /* end of block */
+            here.val = 0;
+        }
+
+        /* replicate for those indices with low len bits equal to huff */
+        incr = 1U << (len - drop);
+        fill = 1U << curr;
+        min = fill;                 /* save offset to next table */
+        do {
+            fill -= incr;
+            next[(huff >> drop) + fill] = here;
+        } while (fill != 0);
+
+        /* backwards increment the len-bit code huff */
+        incr = 1U << (len - 1);
+        while (huff & incr)
+            incr >>= 1;
+        if (incr != 0) {
+            huff &= incr - 1;
+            huff += incr;
+        }
+        else
+            huff = 0;
+
+        /* go to next symbol, update count, len */
+        sym++;
+        if (--(count[len]) == 0) {
+            if (len == max) break;
+            len = lens[work[sym]];
+        }
+
+        /* create new sub-table if needed */
+        if (len > root && (huff & mask) != low) {
+            /* if first time, transition to sub-tables */
+            if (drop == 0)
+                drop = root;
+
+            /* increment past last table */
+            next += min;            /* here min is 1 << curr */
+
+            /* determine length of next table */
+            curr = len - drop;
+            left = (int)(1 << curr);
+            while (curr + drop < max) {
+                left -= count[curr + drop];
+                if (left <= 0) break;
+                curr++;
+                left <<= 1;
+            }
+
+            /* check for enough space */
+            used += 1U << curr;
+            if ((type == LENS && used > ENOUGH_LENS) ||
+                (type == DISTS && used > ENOUGH_DISTS))
+                return 1;
+
+            /* point entry in root table to sub-table */
+            low = huff & mask;
+            (*table)[low].op = (unsigned char)curr;
+            (*table)[low].bits = (unsigned char)root;
+            (*table)[low].val = (unsigned short)(next - *table);
+        }
+    }
+
+    /* fill in remaining table entry if code is incomplete (guaranteed to have
+       at most one remaining entry, since if the code is incomplete, the
+       maximum code length that was allowed to get this far is one bit) */
+    if (huff != 0) {
+        here.op = (unsigned char)64;            /* invalid code marker */
+        here.bits = (unsigned char)(len - drop);
+        here.val = (unsigned short)0;
+        next[huff] = here;
+    }
+
+    /* set return parameters */
+    *table += used;
+    *bits = root;
+    return 0;
+}
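+
+/*
+   Editor's note (illustrative, not part of upstream zlib): with root == 9, a
+   12-bit length/literal code is resolved in two lookups.  The low 9 bits of
+   the code index the root table and hit a link entry whose op holds the
+   sub-table index width (at least 12 - 9 = 3 bits here), whose bits field is
+   9 (dropped before the second lookup), and whose val is the sub-table's
+   offset from the start of the table; the following bits of the code then
+   index that sub-table to reach the final literal/length entry.
+ */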
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/inftrees.h b/c-blosc/internal-complibs/zlib-1.2.8/inftrees.h
new file mode 100644
index 0000000..baa53a0
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/inftrees.h
@@ -0,0 +1,62 @@
+/* inftrees.h -- header to use inftrees.c
+ * Copyright (C) 1995-2005, 2010 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+   part of the implementation of the compression library and is
+   subject to change. Applications should only use zlib.h.
+ */
+
+/* Structure for decoding tables.  Each entry provides either the
+   information needed to do the operation requested by the code that
+   indexed that table entry, or it provides a pointer to another
+   table that indexes more bits of the code.  op indicates whether
+   the entry is a pointer to another table, a literal, a length or
+   distance, an end-of-block, or an invalid code.  For a table
+   pointer, the low four bits of op is the number of index bits of
+   that table.  For a length or distance, the low four bits of op
+   is the number of extra bits to get after the code.  bits is
+   the number of bits in this code or part of the code to drop off
+   of the bit buffer.  val is the actual byte to output in the case
+   of a literal, the base length or distance, or the offset from
+   the current table to the next table.  Each entry is four bytes. */
+typedef struct {
+    unsigned char op;           /* operation, extra bits, table bits */
+    unsigned char bits;         /* bits in this part of the code */
+    unsigned short val;         /* offset in table or code value */
+} code;
+
+/* op values as set by inflate_table():
+    00000000 - literal
+    0000tttt - table link, tttt != 0 is the number of table index bits
+    0001eeee - length or distance, eeee is the number of extra bits
+    01100000 - end of block
+    01000000 - invalid code
+ */
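+
+/* Editor's illustration (not part of upstream zlib): a length-code entry
+   such as {op = 0x11, bits = 7, val = 11} decodes as follows: op = 0001 0001
+   marks a length with eeee = 1 extra bit, bits = 7 says seven code bits are
+   dropped from the bit buffer, and val = 11 is the base length, so the match
+   length is 11 plus the value of the one extra bit that follows. */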
+
+/* Maximum size of the dynamic table.  The maximum number of code structures is
+   1444, which is the sum of 852 for literal/length codes and 592 for distance
+   codes.  These values were found by exhaustive searches using the program
+   examples/enough.c found in the zlib distribution.  The arguments to that
+   program are the number of symbols, the initial root table size, and the
+   maximum bit length of a code.  "enough 286 9 15" for literal/length codes
+   returns 852, and "enough 30 6 15" for distance codes returns 592.
+   The initial root table size (9 or 6) is found in the fifth argument of the
+   inflate_table() calls in inflate.c and infback.c.  If the root table size is
+   changed, then these maximum sizes would need to be recalculated and
+   updated. */
+#define ENOUGH_LENS 852
+#define ENOUGH_DISTS 592
+#define ENOUGH (ENOUGH_LENS+ENOUGH_DISTS)
+
+/* Type of code to build for inflate_table() */
+typedef enum {
+    CODES,
+    LENS,
+    DISTS
+} codetype;
+
+int ZLIB_INTERNAL inflate_table OF((codetype type, unsigned short FAR *lens,
+                             unsigned codes, code FAR * FAR *table,
+                             unsigned FAR *bits, unsigned short FAR *work));
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/trees.c b/c-blosc/internal-complibs/zlib-1.2.8/trees.c
new file mode 100644
index 0000000..1fd7759
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/trees.c
@@ -0,0 +1,1226 @@
+/* trees.c -- output deflated data using Huffman coding
+ * Copyright (C) 1995-2012 Jean-loup Gailly
+ * detect_data_type() function provided freely by Cosmin Truta, 2006
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/*
+ *  ALGORITHM
+ *
+ *      The "deflation" process uses several Huffman trees. The more
+ *      common source values are represented by shorter bit sequences.
+ *
+ *      Each code tree is stored in a compressed form which is itself
+ * a Huffman encoding of the lengths of all the code strings (in
+ * ascending order by source values).  The actual code strings are
+ * reconstructed from the lengths in the inflate process, as described
+ * in the deflate specification.
+ *
+ *  REFERENCES
+ *
+ *      Deutsch, L.P., "'Deflate' Compressed Data Format Specification".
+ *      Available in ftp.uu.net:/pub/archiving/zip/doc/deflate-1.1.doc
+ *
+ *      Storer, James A.
+ *          Data Compression:  Methods and Theory, pp. 49-50.
+ *          Computer Science Press, 1988.  ISBN 0-7167-8156-5.
+ *
+ *      Sedgewick, R.
+ *          Algorithms, p290.
+ *          Addison-Wesley, 1983. ISBN 0-201-06672-6.
+ */
+
+/* @(#) $Id$ */
+
+/* #define GEN_TREES_H */
+
+#include "deflate.h"
+
+#ifdef DEBUG
+#  include <ctype.h>
+#endif
+
+/* ===========================================================================
+ * Constants
+ */
+
+#define MAX_BL_BITS 7
+/* Bit length codes must not exceed MAX_BL_BITS bits */
+
+#define END_BLOCK 256
+/* end of block literal code */
+
+#define REP_3_6      16
+/* repeat previous bit length 3-6 times (2 bits of repeat count) */
+
+#define REPZ_3_10    17
+/* repeat a zero length 3-10 times  (3 bits of repeat count) */
+
+#define REPZ_11_138  18
+/* repeat a zero length 11-138 times  (7 bits of repeat count) */
+
+local const int extra_lbits[LENGTH_CODES] /* extra bits for each length code */
+   = {0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0};
+
+local const int extra_dbits[D_CODES] /* extra bits for each distance code */
+   = {0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13};
+
+local const int extra_blbits[BL_CODES]/* extra bits for each bit length code */
+   = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,3,7};
+
+local const uch bl_order[BL_CODES]
+   = {16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15};
+/* The lengths of the bit length codes are sent in order of decreasing
+ * probability, to avoid transmitting the lengths for unused bit length codes.
+ */
+
+/* ===========================================================================
+ * Local data. These are initialized only once.
+ */
+
+#define DIST_CODE_LEN  512 /* see definition of array dist_code below */
+
+#if defined(GEN_TREES_H) || !defined(STDC)
+/* non ANSI compilers may not accept trees.h */
+
+local ct_data static_ltree[L_CODES+2];
+/* The static literal tree. Since the bit lengths are imposed, there is no
+ * need for the L_CODES extra codes used during heap construction. However,
+ * the codes 286 and 287 are needed to build a canonical tree (see _tr_init
+ * below).
+ */
+
+local ct_data static_dtree[D_CODES];
+/* The static distance tree. (Actually a trivial tree since all codes use
+ * 5 bits.)
+ */
+
+uch _dist_code[DIST_CODE_LEN];
+/* Distance codes. The first 256 values correspond to the distances
+ * 3 .. 258, the last 256 values correspond to the top 8 bits of
+ * the 15 bit distances.
+ */
+
+uch _length_code[MAX_MATCH-MIN_MATCH+1];
+/* length code for each normalized match length (0 == MIN_MATCH) */
+
+local int base_length[LENGTH_CODES];
+/* First normalized length for each code (0 = MIN_MATCH) */
+
+local int base_dist[D_CODES];
+/* First normalized distance for each code (0 = distance of 1) */
+
+#else
+#  include "trees.h"
+#endif /* GEN_TREES_H */
+
+struct static_tree_desc_s {
+    const ct_data *static_tree;  /* static tree or NULL */
+    const intf *extra_bits;      /* extra bits for each code or NULL */
+    int     extra_base;          /* base index for extra_bits */
+    int     elems;               /* max number of elements in the tree */
+    int     max_length;          /* max bit length for the codes */
+};
+
+local static_tree_desc  static_l_desc =
+{static_ltree, extra_lbits, LITERALS+1, L_CODES, MAX_BITS};
+
+local static_tree_desc  static_d_desc =
+{static_dtree, extra_dbits, 0,          D_CODES, MAX_BITS};
+
+local static_tree_desc  static_bl_desc =
+{(const ct_data *)0, extra_blbits, 0,   BL_CODES, MAX_BL_BITS};
+
+/* ===========================================================================
+ * Local (static) routines in this file.
+ */
+
+local void tr_static_init OF((void));
+local void init_block     OF((deflate_state *s));
+local void pqdownheap     OF((deflate_state *s, ct_data *tree, int k));
+local void gen_bitlen     OF((deflate_state *s, tree_desc *desc));
+local void gen_codes      OF((ct_data *tree, int max_code, ushf *bl_count));
+local void build_tree     OF((deflate_state *s, tree_desc *desc));
+local void scan_tree      OF((deflate_state *s, ct_data *tree, int max_code));
+local void send_tree      OF((deflate_state *s, ct_data *tree, int max_code));
+local int  build_bl_tree  OF((deflate_state *s));
+local void send_all_trees OF((deflate_state *s, int lcodes, int dcodes,
+                              int blcodes));
+local void compress_block OF((deflate_state *s, const ct_data *ltree,
+                              const ct_data *dtree));
+local int  detect_data_type OF((deflate_state *s));
+local unsigned bi_reverse OF((unsigned value, int length));
+local void bi_windup      OF((deflate_state *s));
+local void bi_flush       OF((deflate_state *s));
+local void copy_block     OF((deflate_state *s, charf *buf, unsigned len,
+                              int header));
+
+#ifdef GEN_TREES_H
+local void gen_trees_header OF((void));
+#endif
+
+#ifndef DEBUG
+#  define send_code(s, c, tree) send_bits(s, tree[c].Code, tree[c].Len)
+   /* Send a code of the given tree. c and tree must not have side effects */
+
+#else /* DEBUG */
+#  define send_code(s, c, tree) \
+     { if (z_verbose>2) fprintf(stderr,"\ncd %3d ",(c)); \
+       send_bits(s, tree[c].Code, tree[c].Len); }
+#endif
+
+/* ===========================================================================
+ * Output a short LSB first on the stream.
+ * IN assertion: there is enough room in pendingBuf.
+ */
+#define put_short(s, w) { \
+    put_byte(s, (uch)((w) & 0xff)); \
+    put_byte(s, (uch)((ush)(w) >> 8)); \
+}
+
+/* ===========================================================================
+ * Send a value on a given number of bits.
+ * IN assertion: length <= 16 and value fits in length bits.
+ */
+#ifdef DEBUG
+local void send_bits      OF((deflate_state *s, int value, int length));
+
+local void send_bits(s, value, length)
+    deflate_state *s;
+    int value;  /* value to send */
+    int length; /* number of bits */
+{
+    Tracevv((stderr," l %2d v %4x ", length, value));
+    Assert(length > 0 && length <= 15, "invalid length");
+    s->bits_sent += (ulg)length;
+
+    /* If not enough room in bi_buf, use (valid) bits from bi_buf and
+     * (16 - bi_valid) bits from value, leaving (width - (16-bi_valid))
+     * unused bits in value.
+     */
+    if (s->bi_valid > (int)Buf_size - length) {
+        s->bi_buf |= (ush)value << s->bi_valid;
+        put_short(s, s->bi_buf);
+        s->bi_buf = (ush)value >> (Buf_size - s->bi_valid);
+        s->bi_valid += length - Buf_size;
+    } else {
+        s->bi_buf |= (ush)value << s->bi_valid;
+        s->bi_valid += length;
+    }
+}
+#else /* !DEBUG */
+
+#define send_bits(s, value, length) \
+{ int len = length;\
+  if (s->bi_valid > (int)Buf_size - len) {\
+    int val = value;\
+    s->bi_buf |= (ush)val << s->bi_valid;\
+    put_short(s, s->bi_buf);\
+    s->bi_buf = (ush)val >> (Buf_size - s->bi_valid);\
+    s->bi_valid += len - Buf_size;\
+  } else {\
+    s->bi_buf |= (ush)(value) << s->bi_valid;\
+    s->bi_valid += len;\
+  }\
+}
+#endif /* DEBUG */
+
+
+/* the arguments must not have side effects */
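+
+/*
+   Editor's illustration (not part of upstream zlib): with Buf_size == 16,
+   bi_valid == 14 and a call send_bits(s, 0xb, 4), only two of the four bits
+   fit: bi_buf is topped up with value << 14, the full 16-bit buffer is
+   flushed with put_short(), bi_buf restarts as value >> 2 (the remaining two
+   high bits), and bi_valid ends at 14 + 4 - 16 == 2.
+ */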
+
+/* ===========================================================================
+ * Initialize the various 'constant' tables.
+ */
+local void tr_static_init()
+{
+#if defined(GEN_TREES_H) || !defined(STDC)
+    static int static_init_done = 0;
+    int n;        /* iterates over tree elements */
+    int bits;     /* bit counter */
+    int length;   /* length value */
+    int code;     /* code value */
+    int dist;     /* distance index */
+    ush bl_count[MAX_BITS+1];
+    /* number of codes at each bit length for an optimal tree */
+
+    if (static_init_done) return;
+
+    /* For some embedded targets, global variables are not initialized: */
+#ifdef NO_INIT_GLOBAL_POINTERS
+    static_l_desc.static_tree = static_ltree;
+    static_l_desc.extra_bits = extra_lbits;
+    static_d_desc.static_tree = static_dtree;
+    static_d_desc.extra_bits = extra_dbits;
+    static_bl_desc.extra_bits = extra_blbits;
+#endif
+
+    /* Initialize the mapping length (0..255) -> length code (0..28) */
+    length = 0;
+    for (code = 0; code < LENGTH_CODES-1; code++) {
+        base_length[code] = length;
+        for (n = 0; n < (1<<extra_lbits[code]); n++) {
+            _length_code[length++] = (uch)code;
+        }
+    }
+    Assert (length == 256, "tr_static_init: length != 256");
+    /* Note that the length 255 (match length 258) can be represented
+     * in two different ways: code 284 + 5 bits or code 285, so we
+     * overwrite length_code[255] to use the best encoding:
+     */
+    _length_code[length-1] = (uch)code;
+
+    /* Initialize the mapping dist (0..32K) -> dist code (0..29) */
+    dist = 0;
+    for (code = 0 ; code < 16; code++) {
+        base_dist[code] = dist;
+        for (n = 0; n < (1<<extra_dbits[code]); n++) {
+            _dist_code[dist++] = (uch)code;
+        }
+    }
+    Assert (dist == 256, "tr_static_init: dist != 256");
+    dist >>= 7; /* from now on, all distances are divided by 128 */
+    for ( ; code < D_CODES; code++) {
+        base_dist[code] = dist << 7;
+        for (n = 0; n < (1<<(extra_dbits[code]-7)); n++) {
+            _dist_code[256 + dist++] = (uch)code;
+        }
+    }
+    Assert (dist == 256, "tr_static_init: 256+dist != 512");
+
+    /* Construct the codes of the static literal tree */
+    for (bits = 0; bits <= MAX_BITS; bits++) bl_count[bits] = 0;
+    n = 0;
+    while (n <= 143) static_ltree[n++].Len = 8, bl_count[8]++;
+    while (n <= 255) static_ltree[n++].Len = 9, bl_count[9]++;
+    while (n <= 279) static_ltree[n++].Len = 7, bl_count[7]++;
+    while (n <= 287) static_ltree[n++].Len = 8, bl_count[8]++;
+    /* Codes 286 and 287 do not exist, but we must include them in the
+     * tree construction to get a canonical Huffman tree (longest code
+     * all ones)
+     */
+    gen_codes((ct_data *)static_ltree, L_CODES+1, bl_count);
+
+    /* The static distance tree is trivial: */
+    for (n = 0; n < D_CODES; n++) {
+        static_dtree[n].Len = 5;
+        static_dtree[n].Code = bi_reverse((unsigned)n, 5);
+    }
+    static_init_done = 1;
+
+#  ifdef GEN_TREES_H
+    gen_trees_header();
+#  endif
+#endif /* defined(GEN_TREES_H) || !defined(STDC) */
+}
+
+/* ===========================================================================
+ * Generate the file trees.h describing the static trees.
+ */
+#ifdef GEN_TREES_H
+#  ifndef DEBUG
+#    include <stdio.h>
+#  endif
+
+#  define SEPARATOR(i, last, width) \
+      ((i) == (last)? "\n};\n\n" :    \
+       ((i) % (width) == (width)-1 ? ",\n" : ", "))
+
+void gen_trees_header()
+{
+    FILE *header = fopen("trees.h", "w");
+    int i;
+
+    Assert (header != NULL, "Can't open trees.h");
+    fprintf(header,
+            "/* header created automatically with -DGEN_TREES_H */\n\n");
+
+    fprintf(header, "local const ct_data static_ltree[L_CODES+2] = {\n");
+    for (i = 0; i < L_CODES+2; i++) {
+        fprintf(header, "{{%3u},{%3u}}%s", static_ltree[i].Code,
+                static_ltree[i].Len, SEPARATOR(i, L_CODES+1, 5));
+    }
+
+    fprintf(header, "local const ct_data static_dtree[D_CODES] = {\n");
+    for (i = 0; i < D_CODES; i++) {
+        fprintf(header, "{{%2u},{%2u}}%s", static_dtree[i].Code,
+                static_dtree[i].Len, SEPARATOR(i, D_CODES-1, 5));
+    }
+
+    fprintf(header, "const uch ZLIB_INTERNAL _dist_code[DIST_CODE_LEN] = {\n");
+    for (i = 0; i < DIST_CODE_LEN; i++) {
+        fprintf(header, "%2u%s", _dist_code[i],
+                SEPARATOR(i, DIST_CODE_LEN-1, 20));
+    }
+
+    fprintf(header,
+        "const uch ZLIB_INTERNAL _length_code[MAX_MATCH-MIN_MATCH+1]= {\n");
+    for (i = 0; i < MAX_MATCH-MIN_MATCH+1; i++) {
+        fprintf(header, "%2u%s", _length_code[i],
+                SEPARATOR(i, MAX_MATCH-MIN_MATCH, 20));
+    }
+
+    fprintf(header, "local const int base_length[LENGTH_CODES] = {\n");
+    for (i = 0; i < LENGTH_CODES; i++) {
+        fprintf(header, "%1u%s", base_length[i],
+                SEPARATOR(i, LENGTH_CODES-1, 20));
+    }
+
+    fprintf(header, "local const int base_dist[D_CODES] = {\n");
+    for (i = 0; i < D_CODES; i++) {
+        fprintf(header, "%5u%s", base_dist[i],
+                SEPARATOR(i, D_CODES-1, 10));
+    }
+
+    fclose(header);
+}
+#endif /* GEN_TREES_H */
+
+/* ===========================================================================
+ * Initialize the tree data structures for a new zlib stream.
+ */
+void ZLIB_INTERNAL _tr_init(s)
+    deflate_state *s;
+{
+    tr_static_init();
+
+    s->l_desc.dyn_tree = s->dyn_ltree;
+    s->l_desc.stat_desc = &static_l_desc;
+
+    s->d_desc.dyn_tree = s->dyn_dtree;
+    s->d_desc.stat_desc = &static_d_desc;
+
+    s->bl_desc.dyn_tree = s->bl_tree;
+    s->bl_desc.stat_desc = &static_bl_desc;
+
+    s->bi_buf = 0;
+    s->bi_valid = 0;
+#ifdef DEBUG
+    s->compressed_len = 0L;
+    s->bits_sent = 0L;
+#endif
+
+    /* Initialize the first block of the first file: */
+    init_block(s);
+}
+
+/* ===========================================================================
+ * Initialize a new block.
+ */
+local void init_block(s)
+    deflate_state *s;
+{
+    int n; /* iterates over tree elements */
+
+    /* Initialize the trees. */
+    for (n = 0; n < L_CODES;  n++) s->dyn_ltree[n].Freq = 0;
+    for (n = 0; n < D_CODES;  n++) s->dyn_dtree[n].Freq = 0;
+    for (n = 0; n < BL_CODES; n++) s->bl_tree[n].Freq = 0;
+
+    s->dyn_ltree[END_BLOCK].Freq = 1;
+    s->opt_len = s->static_len = 0L;
+    s->last_lit = s->matches = 0;
+}
+
+#define SMALLEST 1
+/* Index within the heap array of least frequent node in the Huffman tree */
+
+
+/* ===========================================================================
+ * Remove the smallest element from the heap and recreate the heap with
+ * one less element. Updates heap and heap_len.
+ */
+#define pqremove(s, tree, top) \
+{\
+    top = s->heap[SMALLEST]; \
+    s->heap[SMALLEST] = s->heap[s->heap_len--]; \
+    pqdownheap(s, tree, SMALLEST); \
+}
+
+/* ===========================================================================
+ * Compare two subtrees, using the tree depth as a tie breaker when
+ * the subtrees have equal frequency. This minimizes the worst case length.
+ */
+#define smaller(tree, n, m, depth) \
+   (tree[n].Freq < tree[m].Freq || \
+   (tree[n].Freq == tree[m].Freq && depth[n] <= depth[m]))
+
+/* ===========================================================================
+ * Restore the heap property by moving down the tree starting at node k,
+ * exchanging a node with the smallest of its two sons if necessary, stopping
+ * when the heap property is re-established (each father smaller than its
+ * two sons).
+ */
+local void pqdownheap(s, tree, k)
+    deflate_state *s;
+    ct_data *tree;  /* the tree to restore */
+    int k;               /* node to move down */
+{
+    int v = s->heap[k];
+    int j = k << 1;  /* left son of k */
+    while (j <= s->heap_len) {
+        /* Set j to the smallest of the two sons: */
+        if (j < s->heap_len &&
+            smaller(tree, s->heap[j+1], s->heap[j], s->depth)) {
+            j++;
+        }
+        /* Exit if v is smaller than both sons */
+        if (smaller(tree, v, s->heap[j], s->depth)) break;
+
+        /* Exchange v with the smallest son */
+        s->heap[k] = s->heap[j];  k = j;
+
+        /* And continue down the tree, setting j to the left son of k */
+        j <<= 1;
+    }
+    s->heap[k] = v;
+}
+
+/* ===========================================================================
+ * Compute the optimal bit lengths for a tree and update the total bit length
+ * for the current block.
+ * IN assertion: the fields freq and dad are set, heap[heap_max] and
+ *    above are the tree nodes sorted by increasing frequency.
+ * OUT assertions: the field len is set to the optimal bit length, the
+ *     array bl_count contains the frequencies for each bit length.
+ *     The length opt_len is updated; static_len is also updated if stree is
+ *     not null.
+ */
+local void gen_bitlen(s, desc)
+    deflate_state *s;
+    tree_desc *desc;    /* the tree descriptor */
+{
+    ct_data *tree        = desc->dyn_tree;
+    int max_code         = desc->max_code;
+    const ct_data *stree = desc->stat_desc->static_tree;
+    const intf *extra    = desc->stat_desc->extra_bits;
+    int base             = desc->stat_desc->extra_base;
+    int max_length       = desc->stat_desc->max_length;
+    int h;              /* heap index */
+    int n, m;           /* iterate over the tree elements */
+    int bits;           /* bit length */
+    int xbits;          /* extra bits */
+    ush f;              /* frequency */
+    int overflow = 0;   /* number of elements with bit length too large */
+
+    for (bits = 0; bits <= MAX_BITS; bits++) s->bl_count[bits] = 0;
+
+    /* In a first pass, compute the optimal bit lengths (which may
+     * overflow in the case of the bit length tree).
+     */
+    tree[s->heap[s->heap_max]].Len = 0; /* root of the heap */
+
+    for (h = s->heap_max+1; h < HEAP_SIZE; h++) {
+        n = s->heap[h];
+        bits = tree[tree[n].Dad].Len + 1;
+        if (bits > max_length) bits = max_length, overflow++;
+        tree[n].Len = (ush)bits;
+        /* We overwrite tree[n].Dad which is no longer needed */
+
+        if (n > max_code) continue; /* not a leaf node */
+
+        s->bl_count[bits]++;
+        xbits = 0;
+        if (n >= base) xbits = extra[n-base];
+        f = tree[n].Freq;
+        s->opt_len += (ulg)f * (bits + xbits);
+        if (stree) s->static_len += (ulg)f * (stree[n].Len + xbits);
+    }
+    if (overflow == 0) return;
+
+    Trace((stderr,"\nbit length overflow\n"));
+    /* This happens for example on obj2 and pic of the Calgary corpus */
+
+    /* Find the first bit length which could increase: */
+    do {
+        bits = max_length-1;
+        while (s->bl_count[bits] == 0) bits--;
+        s->bl_count[bits]--;      /* move one leaf down the tree */
+        s->bl_count[bits+1] += 2; /* move one overflow item as its brother */
+        s->bl_count[max_length]--;
+        /* The brother of the overflow item also moves one step up,
+         * but this does not affect bl_count[max_length]
+         */
+        overflow -= 2;
+    } while (overflow > 0);
+
+    /* Now recompute all bit lengths, scanning in increasing frequency.
+     * h is still equal to HEAP_SIZE. (It is simpler to reconstruct all
+     * lengths instead of fixing only the wrong ones. This idea is taken
+     * from 'ar' written by Haruhiko Okumura.)
+     */
+    for (bits = max_length; bits != 0; bits--) {
+        n = s->bl_count[bits];
+        while (n != 0) {
+            m = s->heap[--h];
+            if (m > max_code) continue;
+            if ((unsigned) tree[m].Len != (unsigned) bits) {
+                Trace((stderr,"code %d bits %d->%d\n", m, tree[m].Len, bits));
+                s->opt_len += ((long)bits - (long)tree[m].Len)
+                              *(long)tree[m].Freq;
+                tree[m].Len = (ush)bits;
+            }
+            n--;
+        }
+    }
+}
+
+/* ===========================================================================
+ * Generate the codes for a given tree and bit counts (which need not be
+ * optimal).
+ * IN assertion: the array bl_count contains the bit length statistics for
+ * the given tree and the field len is set for all tree elements.
+ * OUT assertion: the field code is set for all tree elements of non
+ *     zero code length.
+ */
+local void gen_codes (tree, max_code, bl_count)
+    ct_data *tree;             /* the tree to decorate */
+    int max_code;              /* largest code with non zero frequency */
+    ushf *bl_count;            /* number of codes at each bit length */
+{
+    ush next_code[MAX_BITS+1]; /* next code value for each bit length */
+    ush code = 0;              /* running code value */
+    int bits;                  /* bit index */
+    int n;                     /* code index */
+
+    /* The distribution counts are first used to generate the code values
+     * without bit reversal.
+     */
+    for (bits = 1; bits <= MAX_BITS; bits++) {
+        next_code[bits] = code = (code + bl_count[bits-1]) << 1;
+    }
+    /* Check that the bit counts in bl_count are consistent. The last code
+     * must be all ones.
+     */
+    Assert (code + bl_count[MAX_BITS]-1 == (1<<MAX_BITS)-1,
+            "inconsistent bit counts");
+    Tracev((stderr,"\ngen_codes: max_code %d ", max_code));
+
+    for (n = 0;  n <= max_code; n++) {
+        int len = tree[n].Len;
+        if (len == 0) continue;
+        /* Now reverse the bits */
+        tree[n].Code = bi_reverse(next_code[len]++, len);
+
+        Tracecv(tree != static_ltree, (stderr,"\nn %3d %c l %2d c %4x (%x) ",
+             n, (isgraph(n) ? n : ' '), len, tree[n].Code, next_code[len]-1));
+    }
+}
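+
+/* Illustrative sketch (not part of zlib): a standalone version of the
+ * canonical-code computation above, for a hypothetical tree whose symbols
+ * have bit lengths {2,2,3,3,3,4,4}.  It prints the code values before the
+ * bit reversal that gen_codes applies via bi_reverse.
+ */
+#if 0
+#include <stdio.h>
+int main(void)
+{
+    int len[7] = {2, 2, 3, 3, 3, 4, 4};   /* the Len fields */
+    int bl_count[5] = {0, 0, 2, 3, 2};    /* codes per bit length */
+    unsigned next_code[5], code = 0;
+    int bits, n;
+
+    /* Same recurrence as gen_codes: first code of each bit length. */
+    for (bits = 1; bits <= 4; bits++)
+        next_code[bits] = code = (code + bl_count[bits - 1]) << 1;
+
+    for (n = 0; n < 7; n++)  /* 00 01 100 101 110 1110 1111 */
+        printf("symbol %d: len %d code %u\n", n, len[n], next_code[len[n]]++);
+    return 0;
+}
+#endif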
+
+/* ===========================================================================
+ * Construct one Huffman tree and assign the code bit strings and lengths.
+ * Update the total bit length for the current block.
+ * IN assertion: the field freq is set for all tree elements.
+ * OUT assertions: the fields len and code are set to the optimal bit length
+ *     and corresponding code. The length opt_len is updated; static_len is
+ *     also updated if stree is not null. The field max_code is set.
+ */
+local void build_tree(s, desc)
+    deflate_state *s;
+    tree_desc *desc; /* the tree descriptor */
+{
+    ct_data *tree         = desc->dyn_tree;
+    const ct_data *stree  = desc->stat_desc->static_tree;
+    int elems             = desc->stat_desc->elems;
+    int n, m;          /* iterate over heap elements */
+    int max_code = -1; /* largest code with non zero frequency */
+    int node;          /* new node being created */
+
+    /* Construct the initial heap, with least frequent element in
+     * heap[SMALLEST]. The sons of heap[n] are heap[2*n] and heap[2*n+1].
+     * heap[0] is not used.
+     */
+    s->heap_len = 0, s->heap_max = HEAP_SIZE;
+
+    for (n = 0; n < elems; n++) {
+        if (tree[n].Freq != 0) {
+            s->heap[++(s->heap_len)] = max_code = n;
+            s->depth[n] = 0;
+        } else {
+            tree[n].Len = 0;
+        }
+    }
+
+    /* The pkzip format requires that at least one distance code exists,
+     * and that at least one bit should be sent even if there is only one
+     * possible code. So to avoid special checks later on we force at least
+     * two codes of non zero frequency.
+     */
+    while (s->heap_len < 2) {
+        node = s->heap[++(s->heap_len)] = (max_code < 2 ? ++max_code : 0);
+        tree[node].Freq = 1;
+        s->depth[node] = 0;
+        s->opt_len--; if (stree) s->static_len -= stree[node].Len;
+        /* node is 0 or 1 so it does not have extra bits */
+    }
+    desc->max_code = max_code;
+
+    /* The elements heap[heap_len/2+1 .. heap_len] are leaves of the tree,
+     * establish sub-heaps of increasing lengths:
+     */
+    for (n = s->heap_len/2; n >= 1; n--) pqdownheap(s, tree, n);
+
+    /* Construct the Huffman tree by repeatedly combining the least two
+     * frequent nodes.
+     */
+    node = elems;              /* next internal node of the tree */
+    do {
+        pqremove(s, tree, n);  /* n = node of least frequency */
+        m = s->heap[SMALLEST]; /* m = node of next least frequency */
+
+        s->heap[--(s->heap_max)] = n; /* keep the nodes sorted by frequency */
+        s->heap[--(s->heap_max)] = m;
+
+        /* Create a new node father of n and m */
+        tree[node].Freq = tree[n].Freq + tree[m].Freq;
+        s->depth[node] = (uch)((s->depth[n] >= s->depth[m] ?
+                                s->depth[n] : s->depth[m]) + 1);
+        tree[n].Dad = tree[m].Dad = (ush)node;
+#ifdef DUMP_BL_TREE
+        if (tree == s->bl_tree) {
+            fprintf(stderr,"\nnode %d(%d), sons %d(%d) %d(%d)",
+                    node, tree[node].Freq, n, tree[n].Freq, m, tree[m].Freq);
+        }
+#endif
+        /* and insert the new node in the heap */
+        s->heap[SMALLEST] = node++;
+        pqdownheap(s, tree, SMALLEST);
+
+    } while (s->heap_len >= 2);
+
+    s->heap[--(s->heap_max)] = s->heap[SMALLEST];
+
+    /* At this point, the fields freq and dad are set. We can now
+     * generate the bit lengths.
+     */
+    gen_bitlen(s, (tree_desc *)desc);
+
+    /* The field len is now set, we can generate the bit codes */
+    gen_codes ((ct_data *)tree, max_code, s->bl_count);
+}
+
+/* ===========================================================================
+ * Scan a literal or distance tree to determine the frequencies of the codes
+ * in the bit length tree.
+ */
+local void scan_tree (s, tree, max_code)
+    deflate_state *s;
+    ct_data *tree;   /* the tree to be scanned */
+    int max_code;    /* and its largest code of non zero frequency */
+{
+    int n;                     /* iterates over all tree elements */
+    int prevlen = -1;          /* last emitted length */
+    int curlen;                /* length of current code */
+    int nextlen = tree[0].Len; /* length of next code */
+    int count = 0;             /* repeat count of the current code */
+    int max_count = 7;         /* max repeat count */
+    int min_count = 4;         /* min repeat count */
+
+    if (nextlen == 0) max_count = 138, min_count = 3;
+    tree[max_code+1].Len = (ush)0xffff; /* guard */
+
+    for (n = 0; n <= max_code; n++) {
+        curlen = nextlen; nextlen = tree[n+1].Len;
+        if (++count < max_count && curlen == nextlen) {
+            continue;
+        } else if (count < min_count) {
+            s->bl_tree[curlen].Freq += count;
+        } else if (curlen != 0) {
+            if (curlen != prevlen) s->bl_tree[curlen].Freq++;
+            s->bl_tree[REP_3_6].Freq++;
+        } else if (count <= 10) {
+            s->bl_tree[REPZ_3_10].Freq++;
+        } else {
+            s->bl_tree[REPZ_11_138].Freq++;
+        }
+        count = 0; prevlen = curlen;
+        if (nextlen == 0) {
+            max_count = 138, min_count = 3;
+        } else if (curlen == nextlen) {
+            max_count = 6, min_count = 3;
+        } else {
+            max_count = 7, min_count = 4;
+        }
+    }
+}
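+
+/* Illustrative sketch (not part of zlib): the run-length scheme shared by
+ * scan_tree and send_tree, traced for the hypothetical code lengths
+ * {3,3,3,3,3,0,0,0,0,7}.  It mirrors send_tree's emission order.
+ */
+#if 0
+#include <stdio.h>
+int main(void)
+{
+    int len[11] = {3, 3, 3, 3, 3, 0, 0, 0, 0, 7, 0xffff};  /* + guard */
+    int max_code = 9, n, count = 0, prevlen = -1;
+    int curlen, nextlen = len[0];
+    int max_count = 7, min_count = 4;
+
+    if (nextlen == 0) max_count = 138, min_count = 3;
+    for (n = 0; n <= max_code; n++) {
+        curlen = nextlen; nextlen = len[n + 1];
+        if (++count < max_count && curlen == nextlen) {
+            continue;
+        } else if (count < min_count) {
+            printf("emit length %d, %d time(s)\n", curlen, count);
+        } else if (curlen != 0) {
+            if (curlen != prevlen) { printf("emit length %d\n", curlen); count--; }
+            printf("REP_3_6: repeat previous %d times\n", count);
+        } else if (count <= 10) {
+            printf("REPZ_3_10: %d zeros\n", count);
+        } else {
+            printf("REPZ_11_138: %d zeros\n", count);
+        }
+        count = 0; prevlen = curlen;
+        if (nextlen == 0) max_count = 138, min_count = 3;
+        else if (curlen == nextlen) max_count = 6, min_count = 3;
+        else max_count = 7, min_count = 4;
+    }
+    return 0;
+}
+#endif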
+
+/* ===========================================================================
+ * Send a literal or distance tree in compressed form, using the codes in
+ * bl_tree.
+ */
+local void send_tree (s, tree, max_code)
+    deflate_state *s;
+    ct_data *tree; /* the tree to be scanned */
+    int max_code;       /* and its largest code of non zero frequency */
+{
+    int n;                     /* iterates over all tree elements */
+    int prevlen = -1;          /* last emitted length */
+    int curlen;                /* length of current code */
+    int nextlen = tree[0].Len; /* length of next code */
+    int count = 0;             /* repeat count of the current code */
+    int max_count = 7;         /* max repeat count */
+    int min_count = 4;         /* min repeat count */
+
+    /* tree[max_code+1].Len = -1; */  /* guard already set */
+    if (nextlen == 0) max_count = 138, min_count = 3;
+
+    for (n = 0; n <= max_code; n++) {
+        curlen = nextlen; nextlen = tree[n+1].Len;
+        if (++count < max_count && curlen == nextlen) {
+            continue;
+        } else if (count < min_count) {
+            do { send_code(s, curlen, s->bl_tree); } while (--count != 0);
+
+        } else if (curlen != 0) {
+            if (curlen != prevlen) {
+                send_code(s, curlen, s->bl_tree); count--;
+            }
+            Assert(count >= 3 && count <= 6, " 3_6?");
+            send_code(s, REP_3_6, s->bl_tree); send_bits(s, count-3, 2);
+
+        } else if (count <= 10) {
+            send_code(s, REPZ_3_10, s->bl_tree); send_bits(s, count-3, 3);
+
+        } else {
+            send_code(s, REPZ_11_138, s->bl_tree); send_bits(s, count-11, 7);
+        }
+        count = 0; prevlen = curlen;
+        if (nextlen == 0) {
+            max_count = 138, min_count = 3;
+        } else if (curlen == nextlen) {
+            max_count = 6, min_count = 3;
+        } else {
+            max_count = 7, min_count = 4;
+        }
+    }
+}
+
+/* ===========================================================================
+ * Construct the Huffman tree for the bit lengths and return the index in
+ * bl_order of the last bit length code to send.
+ */
+local int build_bl_tree(s)
+    deflate_state *s;
+{
+    int max_blindex;  /* index of last bit length code of non zero freq */
+
+    /* Determine the bit length frequencies for literal and distance trees */
+    scan_tree(s, (ct_data *)s->dyn_ltree, s->l_desc.max_code);
+    scan_tree(s, (ct_data *)s->dyn_dtree, s->d_desc.max_code);
+
+    /* Build the bit length tree: */
+    build_tree(s, (tree_desc *)(&(s->bl_desc)));
+    /* opt_len now includes the length of the tree representations, except
+     * the lengths of the bit lengths codes and the 5+5+4 bits for the counts.
+     */
+
+    /* Determine the number of bit length codes to send. The pkzip format
+     * requires that at least 4 bit length codes be sent. (appnote.txt says
+     * 3 but the actual value used is 4.)
+     */
+    for (max_blindex = BL_CODES-1; max_blindex >= 3; max_blindex--) {
+        if (s->bl_tree[bl_order[max_blindex]].Len != 0) break;
+    }
+    /* Update opt_len to include the bit length tree and counts */
+    s->opt_len += 3*(max_blindex+1) + 5+5+4;
+    Tracev((stderr, "\ndyn trees: dyn %ld, stat %ld",
+            s->opt_len, s->static_len));
+
+    return max_blindex;
+}
+
+/* ===========================================================================
+ * Send the header for a block using dynamic Huffman trees: the counts, the
+ * lengths of the bit length codes, the literal tree and the distance tree.
+ * IN assertion: lcodes >= 257, dcodes >= 1, blcodes >= 4.
+ */
+local void send_all_trees(s, lcodes, dcodes, blcodes)
+    deflate_state *s;
+    int lcodes, dcodes, blcodes; /* number of codes for each tree */
+{
+    int rank;                    /* index in bl_order */
+
+    Assert (lcodes >= 257 && dcodes >= 1 && blcodes >= 4, "not enough codes");
+    Assert (lcodes <= L_CODES && dcodes <= D_CODES && blcodes <= BL_CODES,
+            "too many codes");
+    Tracev((stderr, "\nbl counts: "));
+    send_bits(s, lcodes-257, 5); /* not +255 as stated in appnote.txt */
+    send_bits(s, dcodes-1,   5);
+    send_bits(s, blcodes-4,  4); /* not -3 as stated in appnote.txt */
+    for (rank = 0; rank < blcodes; rank++) {
+        Tracev((stderr, "\nbl code %2d ", bl_order[rank]));
+        send_bits(s, s->bl_tree[bl_order[rank]].Len, 3);
+    }
+    Tracev((stderr, "\nbl tree: sent %ld", s->bits_sent));
+
+    send_tree(s, (ct_data *)s->dyn_ltree, lcodes-1); /* literal tree */
+    Tracev((stderr, "\nlit tree: sent %ld", s->bits_sent));
+
+    send_tree(s, (ct_data *)s->dyn_dtree, dcodes-1); /* distance tree */
+    Tracev((stderr, "\ndist tree: sent %ld", s->bits_sent));
+}
+
+/* ===========================================================================
+ * Send a stored block
+ */
+void ZLIB_INTERNAL _tr_stored_block(s, buf, stored_len, last)
+    deflate_state *s;
+    charf *buf;       /* input block */
+    ulg stored_len;   /* length of input block */
+    int last;         /* one if this is the last block for a file */
+{
+    send_bits(s, (STORED_BLOCK<<1)+last, 3);    /* send block type */
+#ifdef DEBUG
+    s->compressed_len = (s->compressed_len + 3 + 7) & (ulg)~7L;
+    s->compressed_len += (stored_len + 4) << 3;
+#endif
+    copy_block(s, buf, (unsigned)stored_len, 1); /* with header */
+}
+
+/* ===========================================================================
+ * Flush the bits in the bit buffer to pending output (leaves at most 7 bits)
+ */
+void ZLIB_INTERNAL _tr_flush_bits(s)
+    deflate_state *s;
+{
+    bi_flush(s);
+}
+
+/* ===========================================================================
+ * Send one empty static block to give enough lookahead for inflate.
+ * This takes 10 bits, of which 7 may remain in the bit buffer.
+ */
+void ZLIB_INTERNAL _tr_align(s)
+    deflate_state *s;
+{
+    send_bits(s, STATIC_TREES<<1, 3);
+    send_code(s, END_BLOCK, static_ltree);
+#ifdef DEBUG
+    s->compressed_len += 10L; /* 3 for block type, 7 for EOB */
+#endif
+    bi_flush(s);
+}
+
+/* ===========================================================================
+ * Determine the best encoding for the current block: dynamic trees, static
+ * trees or store, and output the encoded block to the zip file.
+ */
+void ZLIB_INTERNAL _tr_flush_block(s, buf, stored_len, last)
+    deflate_state *s;
+    charf *buf;       /* input block, or NULL if too old */
+    ulg stored_len;   /* length of input block */
+    int last;         /* one if this is the last block for a file */
+{
+    ulg opt_lenb, static_lenb; /* opt_len and static_len in bytes */
+    int max_blindex = 0;  /* index of last bit length code of non zero freq */
+
+    /* Build the Huffman trees unless a stored block is forced */
+    if (s->level > 0) {
+
+        /* Check if the file is binary or text */
+        if (s->strm->data_type == Z_UNKNOWN)
+            s->strm->data_type = detect_data_type(s);
+
+        /* Construct the literal and distance trees */
+        build_tree(s, (tree_desc *)(&(s->l_desc)));
+        Tracev((stderr, "\nlit data: dyn %ld, stat %ld", s->opt_len,
+                s->static_len));
+
+        build_tree(s, (tree_desc *)(&(s->d_desc)));
+        Tracev((stderr, "\ndist data: dyn %ld, stat %ld", s->opt_len,
+                s->static_len));
+        /* At this point, opt_len and static_len are the total bit lengths of
+         * the compressed block data, excluding the tree representations.
+         */
+
+        /* Build the bit length tree for the above two trees, and get the index
+         * in bl_order of the last bit length code to send.
+         */
+        max_blindex = build_bl_tree(s);
+
+        /* Determine the best encoding. Compute the block lengths in bytes. */
+        opt_lenb = (s->opt_len+3+7)>>3;
+        static_lenb = (s->static_len+3+7)>>3;
+
+        Tracev((stderr, "\nopt %lu(%lu) stat %lu(%lu) stored %lu lit %u ",
+                opt_lenb, s->opt_len, static_lenb, s->static_len, stored_len,
+                s->last_lit));
+
+        if (static_lenb <= opt_lenb) opt_lenb = static_lenb;
+
+    } else {
+        Assert(buf != (char*)0, "lost buf");
+        opt_lenb = static_lenb = stored_len + 5; /* force a stored block */
+    }
+
+#ifdef FORCE_STORED
+    if (buf != (char*)0) { /* force stored block */
+#else
+    if (stored_len+4 <= opt_lenb && buf != (char*)0) {
+                       /* 4: two words for the lengths */
+#endif
+        /* The test buf != NULL is only necessary if LIT_BUFSIZE > WSIZE.
+         * Otherwise we can't have processed more than WSIZE input bytes since
+         * the last block flush, because compression would have been
+         * successful. If LIT_BUFSIZE <= WSIZE, it is never too late to
+         * transform a block into a stored block.
+         */
+        _tr_stored_block(s, buf, stored_len, last);
+
+#ifdef FORCE_STATIC
+    } else if (static_lenb >= 0) { /* force static trees */
+#else
+    } else if (s->strategy == Z_FIXED || static_lenb == opt_lenb) {
+#endif
+        send_bits(s, (STATIC_TREES<<1)+last, 3);
+        compress_block(s, (const ct_data *)static_ltree,
+                       (const ct_data *)static_dtree);
+#ifdef DEBUG
+        s->compressed_len += 3 + s->static_len;
+#endif
+    } else {
+        send_bits(s, (DYN_TREES<<1)+last, 3);
+        send_all_trees(s, s->l_desc.max_code+1, s->d_desc.max_code+1,
+                       max_blindex+1);
+        compress_block(s, (const ct_data *)s->dyn_ltree,
+                       (const ct_data *)s->dyn_dtree);
+#ifdef DEBUG
+        s->compressed_len += 3 + s->opt_len;
+#endif
+    }
+    Assert (s->compressed_len == s->bits_sent, "bad compressed size");
+    /* The above check is made mod 2^32, for files larger than 512 MB
+     * and uLong implemented on 32 bits.
+     */
+    init_block(s);
+
+    if (last) {
+        bi_windup(s);
+#ifdef DEBUG
+        s->compressed_len += 7;  /* align on byte boundary */
+#endif
+    }
+    Tracev((stderr,"\ncomprlen %lu(%lu) ", s->compressed_len>>3,
+           s->compressed_len-7*last));
+}
+
+/* ===========================================================================
+ * Save the match info and tally the frequency counts. Return true if
+ * the current block must be flushed.
+ */
+int ZLIB_INTERNAL _tr_tally (s, dist, lc)
+    deflate_state *s;
+    unsigned dist;  /* distance of matched string */
+    unsigned lc;    /* match length-MIN_MATCH or unmatched char (if dist==0) */
+{
+    s->d_buf[s->last_lit] = (ush)dist;
+    s->l_buf[s->last_lit++] = (uch)lc;
+    if (dist == 0) {
+        /* lc is the unmatched char */
+        s->dyn_ltree[lc].Freq++;
+    } else {
+        s->matches++;
+        /* Here, lc is the match length - MIN_MATCH */
+        dist--;             /* dist = match distance - 1 */
+        Assert((ush)dist < (ush)MAX_DIST(s) &&
+               (ush)lc <= (ush)(MAX_MATCH-MIN_MATCH) &&
+               (ush)d_code(dist) < (ush)D_CODES,  "_tr_tally: bad match");
+
+        s->dyn_ltree[_length_code[lc]+LITERALS+1].Freq++;
+        s->dyn_dtree[d_code(dist)].Freq++;
+    }
+
+#ifdef TRUNCATE_BLOCK
+    /* Try to guess if it is profitable to stop the current block here */
+    if ((s->last_lit & 0x1fff) == 0 && s->level > 2) {
+        /* Compute an upper bound for the compressed length */
+        ulg out_length = (ulg)s->last_lit*8L;
+        ulg in_length = (ulg)((long)s->strstart - s->block_start);
+        int dcode;
+        for (dcode = 0; dcode < D_CODES; dcode++) {
+            out_length += (ulg)s->dyn_dtree[dcode].Freq *
+                (5L+extra_dbits[dcode]);
+        }
+        out_length >>= 3;
+        Tracev((stderr,"\nlast_lit %u, in %ld, out ~%ld(%ld%%) ",
+               s->last_lit, in_length, out_length,
+               100L - out_length*100L/in_length));
+        if (s->matches < s->last_lit/2 && out_length < in_length/2) return 1;
+    }
+#endif
+    return (s->last_lit == s->lit_bufsize-1);
+    /* We avoid equality with lit_bufsize because of wraparound at 64K
+     * on 16 bit machines and because stored blocks are restricted to
+     * 64K-1 bytes.
+     */
+}
+
+/* ===========================================================================
+ * Send the block data compressed using the given Huffman trees
+ */
+local void compress_block(s, ltree, dtree)
+    deflate_state *s;
+    const ct_data *ltree; /* literal tree */
+    const ct_data *dtree; /* distance tree */
+{
+    unsigned dist;      /* distance of matched string */
+    int lc;             /* match length or unmatched char (if dist == 0) */
+    unsigned lx = 0;    /* running index in l_buf */
+    unsigned code;      /* the code to send */
+    int extra;          /* number of extra bits to send */
+
+    if (s->last_lit != 0) do {
+        dist = s->d_buf[lx];
+        lc = s->l_buf[lx++];
+        if (dist == 0) {
+            send_code(s, lc, ltree); /* send a literal byte */
+            Tracecv(isgraph(lc), (stderr," '%c' ", lc));
+        } else {
+            /* Here, lc is the match length - MIN_MATCH */
+            code = _length_code[lc];
+            send_code(s, code+LITERALS+1, ltree); /* send the length code */
+            extra = extra_lbits[code];
+            if (extra != 0) {
+                lc -= base_length[code];
+                send_bits(s, lc, extra);       /* send the extra length bits */
+            }
+            dist--; /* dist is now the match distance - 1 */
+            code = d_code(dist);
+            Assert (code < D_CODES, "bad d_code");
+
+            send_code(s, code, dtree);       /* send the distance code */
+            extra = extra_dbits[code];
+            if (extra != 0) {
+                dist -= base_dist[code];
+                send_bits(s, dist, extra);   /* send the extra distance bits */
+            }
+        } /* literal or match pair ? */
+
+        /* Check that the overlay between pending_buf and d_buf+l_buf is ok: */
+        Assert((uInt)(s->pending) < s->lit_bufsize + 2*lx,
+               "pendingBuf overflow");
+
+    } while (lx < s->last_lit);
+
+    send_code(s, END_BLOCK, ltree);
+}
+
+/* ===========================================================================
+ * Check if the data type is TEXT or BINARY, using the following algorithm:
+ * - TEXT if the two conditions below are satisfied:
+ *    a) There are no non-portable control characters belonging to the
+ *       "black list" (0..6, 14..25, 28..31).
+ *    b) There is at least one printable character belonging to the
+ *       "white list" (9 {TAB}, 10 {LF}, 13 {CR}, 32..255).
+ * - BINARY otherwise.
+ * - The following partially-portable control characters form a
+ *   "gray list" that is ignored in this detection algorithm:
+ *   (7 {BEL}, 8 {BS}, 11 {VT}, 12 {FF}, 26 {SUB}, 27 {ESC}).
+ * IN assertion: the fields Freq of dyn_ltree are set.
+ */
+local int detect_data_type(s)
+    deflate_state *s;
+{
+    /* black_mask is the bit mask of black-listed bytes
+     * set bits 0..6, 14..25, and 28..31
+     * 0xf3ffc07f = binary 11110011111111111100000001111111
+     */
+    unsigned long black_mask = 0xf3ffc07fUL;
+    int n;
+
+    /* Check for non-textual ("black-listed") bytes. */
+    for (n = 0; n <= 31; n++, black_mask >>= 1)
+        if ((black_mask & 1) && (s->dyn_ltree[n].Freq != 0))
+            return Z_BINARY;
+
+    /* Check for textual ("white-listed") bytes. */
+    if (s->dyn_ltree[9].Freq != 0 || s->dyn_ltree[10].Freq != 0
+            || s->dyn_ltree[13].Freq != 0)
+        return Z_TEXT;
+    for (n = 32; n < LITERALS; n++)
+        if (s->dyn_ltree[n].Freq != 0)
+            return Z_TEXT;
+
+    /* There are no "black-listed" or "white-listed" bytes:
+     * this stream either is empty or has tolerated ("gray-listed") bytes only.
+     */
+    return Z_BINARY;
+}
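+
+/* Illustrative sketch (not part of zlib): a quick standalone check that
+ * the black_mask constant above really covers bytes 0..6, 14..25, 28..31.
+ */
+#if 0
+#include <stdio.h>
+int main(void)
+{
+    unsigned long mask = 0;
+    int n;
+    for (n = 0;  n <= 6;  n++) mask |= 1UL << n;
+    for (n = 14; n <= 25; n++) mask |= 1UL << n;
+    for (n = 28; n <= 31; n++) mask |= 1UL << n;
+    printf("0x%lx\n", mask);   /* prints 0xf3ffc07f */
+    return 0;
+}
+#endif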
+
+/* ===========================================================================
+ * Reverse the first len bits of a code, using straightforward code (a faster
+ * method would use a table)
+ * IN assertion: 1 <= len <= 15
+ */
+local unsigned bi_reverse(code, len)
+    unsigned code; /* the value to invert */
+    int len;       /* its bit length */
+{
+    register unsigned res = 0;
+    do {
+        res |= code & 1;
+        code >>= 1, res <<= 1;
+    } while (--len > 0);
+    return res >> 1;
+}
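+
+/* Illustrative sketch (not part of zlib): a standalone trace of the loop
+ * above.  Reversing the 5 low bits of 00110b (6) yields 01100b (12).
+ */
+#if 0
+#include <stdio.h>
+static unsigned rev(unsigned code, int len)
+{
+    unsigned res = 0;
+    do { res |= code & 1; code >>= 1; res <<= 1; } while (--len > 0);
+    return res >> 1;
+}
+int main(void)
+{
+    printf("%u\n", rev(6, 5));   /* prints 12 */
+    return 0;
+}
+#endif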
+
+/* ===========================================================================
+ * Flush the bit buffer, keeping at most 7 bits in it.
+ */
+local void bi_flush(s)
+    deflate_state *s;
+{
+    if (s->bi_valid == 16) {
+        put_short(s, s->bi_buf);
+        s->bi_buf = 0;
+        s->bi_valid = 0;
+    } else if (s->bi_valid >= 8) {
+        put_byte(s, (Byte)s->bi_buf);
+        s->bi_buf >>= 8;
+        s->bi_valid -= 8;
+    }
+}
+
+/* ===========================================================================
+ * Flush the bit buffer and align the output on a byte boundary
+ */
+local void bi_windup(s)
+    deflate_state *s;
+{
+    if (s->bi_valid > 8) {
+        put_short(s, s->bi_buf);
+    } else if (s->bi_valid > 0) {
+        put_byte(s, (Byte)s->bi_buf);
+    }
+    s->bi_buf = 0;
+    s->bi_valid = 0;
+#ifdef DEBUG
+    s->bits_sent = (s->bits_sent+7) & ~7;
+#endif
+}
+
+/* ===========================================================================
+ * Copy a stored block, storing first the length and its
+ * one's complement if requested.
+ */
+local void copy_block(s, buf, len, header)
+    deflate_state *s;
+    charf    *buf;    /* the input data */
+    unsigned len;     /* its length */
+    int      header;  /* true if block header must be written */
+{
+    bi_windup(s);        /* align on byte boundary */
+
+    if (header) {
+        put_short(s, (ush)len);
+        put_short(s, (ush)~len);
+#ifdef DEBUG
+        s->bits_sent += 2*16;
+#endif
+    }
+#ifdef DEBUG
+    s->bits_sent += (ulg)len<<3;
+#endif
+    while (len--) {
+        put_byte(s, *buf++);
+    }
+}
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/trees.h b/c-blosc/internal-complibs/zlib-1.2.8/trees.h
new file mode 100644
index 0000000..d35639d
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/trees.h
@@ -0,0 +1,128 @@
+/* header created automatically with -DGEN_TREES_H */
+
+local const ct_data static_ltree[L_CODES+2] = {
+{{ 12},{  8}}, {{140},{  8}}, {{ 76},{  8}}, {{204},{  8}}, {{ 44},{  8}},
+{{172},{  8}}, {{108},{  8}}, {{236},{  8}}, {{ 28},{  8}}, {{156},{  8}},
+{{ 92},{  8}}, {{220},{  8}}, {{ 60},{  8}}, {{188},{  8}}, {{124},{  8}},
+{{252},{  8}}, {{  2},{  8}}, {{130},{  8}}, {{ 66},{  8}}, {{194},{  8}},
+{{ 34},{  8}}, {{162},{  8}}, {{ 98},{  8}}, {{226},{  8}}, {{ 18},{  8}},
+{{146},{  8}}, {{ 82},{  8}}, {{210},{  8}}, {{ 50},{  8}}, {{178},{  8}},
+{{114},{  8}}, {{242},{  8}}, {{ 10},{  8}}, {{138},{  8}}, {{ 74},{  8}},
+{{202},{  8}}, {{ 42},{  8}}, {{170},{  8}}, {{106},{  8}}, {{234},{  8}},
+{{ 26},{  8}}, {{154},{  8}}, {{ 90},{  8}}, {{218},{  8}}, {{ 58},{  8}},
+{{186},{  8}}, {{122},{  8}}, {{250},{  8}}, {{  6},{  8}}, {{134},{  8}},
+{{ 70},{  8}}, {{198},{  8}}, {{ 38},{  8}}, {{166},{  8}}, {{102},{  8}},
+{{230},{  8}}, {{ 22},{  8}}, {{150},{  8}}, {{ 86},{  8}}, {{214},{  8}},
+{{ 54},{  8}}, {{182},{  8}}, {{118},{  8}}, {{246},{  8}}, {{ 14},{  8}},
+{{142},{  8}}, {{ 78},{  8}}, {{206},{  8}}, {{ 46},{  8}}, {{174},{  8}},
+{{110},{  8}}, {{238},{  8}}, {{ 30},{  8}}, {{158},{  8}}, {{ 94},{  8}},
+{{222},{  8}}, {{ 62},{  8}}, {{190},{  8}}, {{126},{  8}}, {{254},{  8}},
+{{  1},{  8}}, {{129},{  8}}, {{ 65},{  8}}, {{193},{  8}}, {{ 33},{  8}},
+{{161},{  8}}, {{ 97},{  8}}, {{225},{  8}}, {{ 17},{  8}}, {{145},{  8}},
+{{ 81},{  8}}, {{209},{  8}}, {{ 49},{  8}}, {{177},{  8}}, {{113},{  8}},
+{{241},{  8}}, {{  9},{  8}}, {{137},{  8}}, {{ 73},{  8}}, {{201},{  8}},
+{{ 41},{  8}}, {{169},{  8}}, {{105},{  8}}, {{233},{  8}}, {{ 25},{  8}},
+{{153},{  8}}, {{ 89},{  8}}, {{217},{  8}}, {{ 57},{  8}}, {{185},{  8}},
+{{121},{  8}}, {{249},{  8}}, {{  5},{  8}}, {{133},{  8}}, {{ 69},{  8}},
+{{197},{  8}}, {{ 37},{  8}}, {{165},{  8}}, {{101},{  8}}, {{229},{  8}},
+{{ 21},{  8}}, {{149},{  8}}, {{ 85},{  8}}, {{213},{  8}}, {{ 53},{  8}},
+{{181},{  8}}, {{117},{  8}}, {{245},{  8}}, {{ 13},{  8}}, {{141},{  8}},
+{{ 77},{  8}}, {{205},{  8}}, {{ 45},{  8}}, {{173},{  8}}, {{109},{  8}},
+{{237},{  8}}, {{ 29},{  8}}, {{157},{  8}}, {{ 93},{  8}}, {{221},{  8}},
+{{ 61},{  8}}, {{189},{  8}}, {{125},{  8}}, {{253},{  8}}, {{ 19},{  9}},
+{{275},{  9}}, {{147},{  9}}, {{403},{  9}}, {{ 83},{  9}}, {{339},{  9}},
+{{211},{  9}}, {{467},{  9}}, {{ 51},{  9}}, {{307},{  9}}, {{179},{  9}},
+{{435},{  9}}, {{115},{  9}}, {{371},{  9}}, {{243},{  9}}, {{499},{  9}},
+{{ 11},{  9}}, {{267},{  9}}, {{139},{  9}}, {{395},{  9}}, {{ 75},{  9}},
+{{331},{  9}}, {{203},{  9}}, {{459},{  9}}, {{ 43},{  9}}, {{299},{  9}},
+{{171},{  9}}, {{427},{  9}}, {{107},{  9}}, {{363},{  9}}, {{235},{  9}},
+{{491},{  9}}, {{ 27},{  9}}, {{283},{  9}}, {{155},{  9}}, {{411},{  9}},
+{{ 91},{  9}}, {{347},{  9}}, {{219},{  9}}, {{475},{  9}}, {{ 59},{  9}},
+{{315},{  9}}, {{187},{  9}}, {{443},{  9}}, {{123},{  9}}, {{379},{  9}},
+{{251},{  9}}, {{507},{  9}}, {{  7},{  9}}, {{263},{  9}}, {{135},{  9}},
+{{391},{  9}}, {{ 71},{  9}}, {{327},{  9}}, {{199},{  9}}, {{455},{  9}},
+{{ 39},{  9}}, {{295},{  9}}, {{167},{  9}}, {{423},{  9}}, {{103},{  9}},
+{{359},{  9}}, {{231},{  9}}, {{487},{  9}}, {{ 23},{  9}}, {{279},{  9}},
+{{151},{  9}}, {{407},{  9}}, {{ 87},{  9}}, {{343},{  9}}, {{215},{  9}},
+{{471},{  9}}, {{ 55},{  9}}, {{311},{  9}}, {{183},{  9}}, {{439},{  9}},
+{{119},{  9}}, {{375},{  9}}, {{247},{  9}}, {{503},{  9}}, {{ 15},{  9}},
+{{271},{  9}}, {{143},{  9}}, {{399},{  9}}, {{ 79},{  9}}, {{335},{  9}},
+{{207},{  9}}, {{463},{  9}}, {{ 47},{  9}}, {{303},{  9}}, {{175},{  9}},
+{{431},{  9}}, {{111},{  9}}, {{367},{  9}}, {{239},{  9}}, {{495},{  9}},
+{{ 31},{  9}}, {{287},{  9}}, {{159},{  9}}, {{415},{  9}}, {{ 95},{  9}},
+{{351},{  9}}, {{223},{  9}}, {{479},{  9}}, {{ 63},{  9}}, {{319},{  9}},
+{{191},{  9}}, {{447},{  9}}, {{127},{  9}}, {{383},{  9}}, {{255},{  9}},
+{{511},{  9}}, {{  0},{  7}}, {{ 64},{  7}}, {{ 32},{  7}}, {{ 96},{  7}},
+{{ 16},{  7}}, {{ 80},{  7}}, {{ 48},{  7}}, {{112},{  7}}, {{  8},{  7}},
+{{ 72},{  7}}, {{ 40},{  7}}, {{104},{  7}}, {{ 24},{  7}}, {{ 88},{  7}},
+{{ 56},{  7}}, {{120},{  7}}, {{  4},{  7}}, {{ 68},{  7}}, {{ 36},{  7}},
+{{100},{  7}}, {{ 20},{  7}}, {{ 84},{  7}}, {{ 52},{  7}}, {{116},{  7}},
+{{  3},{  8}}, {{131},{  8}}, {{ 67},{  8}}, {{195},{  8}}, {{ 35},{  8}},
+{{163},{  8}}, {{ 99},{  8}}, {{227},{  8}}
+};
+
+local const ct_data static_dtree[D_CODES] = {
+{{ 0},{ 5}}, {{16},{ 5}}, {{ 8},{ 5}}, {{24},{ 5}}, {{ 4},{ 5}},
+{{20},{ 5}}, {{12},{ 5}}, {{28},{ 5}}, {{ 2},{ 5}}, {{18},{ 5}},
+{{10},{ 5}}, {{26},{ 5}}, {{ 6},{ 5}}, {{22},{ 5}}, {{14},{ 5}},
+{{30},{ 5}}, {{ 1},{ 5}}, {{17},{ 5}}, {{ 9},{ 5}}, {{25},{ 5}},
+{{ 5},{ 5}}, {{21},{ 5}}, {{13},{ 5}}, {{29},{ 5}}, {{ 3},{ 5}},
+{{19},{ 5}}, {{11},{ 5}}, {{27},{ 5}}, {{ 7},{ 5}}, {{23},{ 5}}
+};
+
+const uch ZLIB_INTERNAL _dist_code[DIST_CODE_LEN] = {
+ 0,  1,  2,  3,  4,  4,  5,  5,  6,  6,  6,  6,  7,  7,  7,  7,  8,  8,  8,  8,
+ 8,  8,  8,  8,  9,  9,  9,  9,  9,  9,  9,  9, 10, 10, 10, 10, 10, 10, 10, 10,
+10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
+11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
+12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13,
+13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
+13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
+14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
+14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
+14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15,
+15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,
+15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,
+15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,  0,  0, 16, 17,
+18, 18, 19, 19, 20, 20, 20, 20, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22, 22, 22,
+23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
+24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
+26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26,
+26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27,
+27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27,
+27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28,
+28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28,
+28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28,
+28, 28, 28, 28, 28, 28, 28, 28, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29,
+29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29,
+29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29,
+29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29
+};
+
+const uch ZLIB_INTERNAL _length_code[MAX_MATCH-MIN_MATCH+1]= {
+ 0,  1,  2,  3,  4,  5,  6,  7,  8,  8,  9,  9, 10, 10, 11, 11, 12, 12, 12, 12,
+13, 13, 13, 13, 14, 14, 14, 14, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16,
+17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19,
+19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
+21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22,
+22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23,
+23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
+24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
+25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
+25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26,
+26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26,
+26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27,
+27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 28
+};
+
+local const int base_length[LENGTH_CODES] = {
+0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 20, 24, 28, 32, 40, 48, 56,
+64, 80, 96, 112, 128, 160, 192, 224, 0
+};
+
+local const int base_dist[D_CODES] = {
+    0,     1,     2,     3,     4,     6,     8,    12,    16,    24,
+   32,    48,    64,    96,   128,   192,   256,   384,   512,   768,
+ 1024,  1536,  2048,  3072,  4096,  6144,  8192, 12288, 16384, 24576
+};
+
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/uncompr.c b/c-blosc/internal-complibs/zlib-1.2.8/uncompr.c
new file mode 100644
index 0000000..242e949
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/uncompr.c
@@ -0,0 +1,59 @@
+/* uncompr.c -- decompress a memory buffer
+ * Copyright (C) 1995-2003, 2010 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* @(#) $Id$ */
+
+#define ZLIB_INTERNAL
+#include "zlib.h"
+
+/* ===========================================================================
+     Decompresses the source buffer into the destination buffer.  sourceLen is
+   the byte length of the source buffer. Upon entry, destLen is the total
+   size of the destination buffer, which must be large enough to hold the
+   entire uncompressed data. (The size of the uncompressed data must have
+   been saved previously by the compressor and transmitted to the decompressor
+   by some mechanism outside the scope of this compression library.)
+   Upon exit, destLen is the actual size of the uncompressed data.
+
+     uncompress returns Z_OK if success, Z_MEM_ERROR if there was not
+   enough memory, Z_BUF_ERROR if there was not enough room in the output
+   buffer, or Z_DATA_ERROR if the input data was corrupted.
+*/
+int ZEXPORT uncompress (dest, destLen, source, sourceLen)
+    Bytef *dest;
+    uLongf *destLen;
+    const Bytef *source;
+    uLong sourceLen;
+{
+    z_stream stream;
+    int err;
+
+    stream.next_in = (z_const Bytef *)source;
+    stream.avail_in = (uInt)sourceLen;
+    /* Check for source > 64K on 16-bit machine: */
+    if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR;
+
+    stream.next_out = dest;
+    stream.avail_out = (uInt)*destLen;
+    if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;
+
+    stream.zalloc = (alloc_func)0;
+    stream.zfree = (free_func)0;
+
+    err = inflateInit(&stream);
+    if (err != Z_OK) return err;
+
+    err = inflate(&stream, Z_FINISH);
+    if (err != Z_STREAM_END) {
+        inflateEnd(&stream);
+        if (err == Z_NEED_DICT || (err == Z_BUF_ERROR && stream.avail_in == 0))
+            return Z_DATA_ERROR;
+        return err;
+    }
+    *destLen = stream.total_out;
+
+    err = inflateEnd(&stream);
+    return err;
+}
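+
+/* Illustrative sketch (not part of zlib): a minimal round trip showing how
+ * uncompress() above pairs with compress() from compress.c; link with -lz.
+ * Buffer sizes here are hypothetical.
+ */
+#if 0
+#include <stdio.h>
+#include <string.h>
+#include "zlib.h"
+
+int main(void)
+{
+    const char *text = "hello, hello, hello, hello";
+    uLong srcLen = (uLong)strlen(text) + 1;    /* include the NUL */
+    Bytef zbuf[256], out[256];
+    uLongf zLen = sizeof(zbuf);                /* capacity in, size out */
+    uLongf outLen = sizeof(out);
+
+    if (compress(zbuf, &zLen, (const Bytef *)text, srcLen) != Z_OK)
+        return 1;
+    if (uncompress(out, &outLen, zbuf, zLen) != Z_OK)
+        return 1;
+    printf("%lu -> %lu -> %lu bytes: %s\n", (unsigned long)srcLen,
+           (unsigned long)zLen, (unsigned long)outLen, (char *)out);
+    return 0;
+}
+#endif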
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/zconf.h b/c-blosc/internal-complibs/zlib-1.2.8/zconf.h
new file mode 100644
index 0000000..9987a77
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/zconf.h
@@ -0,0 +1,511 @@
+/* zconf.h -- configuration of the zlib compression library
+ * Copyright (C) 1995-2013 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* @(#) $Id$ */
+
+#ifndef ZCONF_H
+#define ZCONF_H
+
+/*
+ * If you *really* need a unique prefix for all types and library functions,
+ * compile with -DZ_PREFIX. The "standard" zlib should be compiled without it.
+ * Even better than compiling with -DZ_PREFIX would be to use configure to set
+ * this permanently in zconf.h using "./configure --zprefix".
+ */
+#ifdef Z_PREFIX     /* may be set to #if 1 by ./configure */
+#  define Z_PREFIX_SET
+
+/* all linked symbols */
+#  define _dist_code            z__dist_code
+#  define _length_code          z__length_code
+#  define _tr_align             z__tr_align
+#  define _tr_flush_bits        z__tr_flush_bits
+#  define _tr_flush_block       z__tr_flush_block
+#  define _tr_init              z__tr_init
+#  define _tr_stored_block      z__tr_stored_block
+#  define _tr_tally             z__tr_tally
+#  define adler32               z_adler32
+#  define adler32_combine       z_adler32_combine
+#  define adler32_combine64     z_adler32_combine64
+#  ifndef Z_SOLO
+#    define compress              z_compress
+#    define compress2             z_compress2
+#    define compressBound         z_compressBound
+#  endif
+#  define crc32                 z_crc32
+#  define crc32_combine         z_crc32_combine
+#  define crc32_combine64       z_crc32_combine64
+#  define deflate               z_deflate
+#  define deflateBound          z_deflateBound
+#  define deflateCopy           z_deflateCopy
+#  define deflateEnd            z_deflateEnd
+#  define deflateInit2_         z_deflateInit2_
+#  define deflateInit_          z_deflateInit_
+#  define deflateParams         z_deflateParams
+#  define deflatePending        z_deflatePending
+#  define deflatePrime          z_deflatePrime
+#  define deflateReset          z_deflateReset
+#  define deflateResetKeep      z_deflateResetKeep
+#  define deflateSetDictionary  z_deflateSetDictionary
+#  define deflateSetHeader      z_deflateSetHeader
+#  define deflateTune           z_deflateTune
+#  define deflate_copyright     z_deflate_copyright
+#  define get_crc_table         z_get_crc_table
+#  ifndef Z_SOLO
+#    define gz_error              z_gz_error
+#    define gz_intmax             z_gz_intmax
+#    define gz_strwinerror        z_gz_strwinerror
+#    define gzbuffer              z_gzbuffer
+#    define gzclearerr            z_gzclearerr
+#    define gzclose               z_gzclose
+#    define gzclose_r             z_gzclose_r
+#    define gzclose_w             z_gzclose_w
+#    define gzdirect              z_gzdirect
+#    define gzdopen               z_gzdopen
+#    define gzeof                 z_gzeof
+#    define gzerror               z_gzerror
+#    define gzflush               z_gzflush
+#    define gzgetc                z_gzgetc
+#    define gzgetc_               z_gzgetc_
+#    define gzgets                z_gzgets
+#    define gzoffset              z_gzoffset
+#    define gzoffset64            z_gzoffset64
+#    define gzopen                z_gzopen
+#    define gzopen64              z_gzopen64
+#    ifdef _WIN32
+#      define gzopen_w              z_gzopen_w
+#    endif
+#    define gzprintf              z_gzprintf
+#    define gzvprintf             z_gzvprintf
+#    define gzputc                z_gzputc
+#    define gzputs                z_gzputs
+#    define gzread                z_gzread
+#    define gzrewind              z_gzrewind
+#    define gzseek                z_gzseek
+#    define gzseek64              z_gzseek64
+#    define gzsetparams           z_gzsetparams
+#    define gztell                z_gztell
+#    define gztell64              z_gztell64
+#    define gzungetc              z_gzungetc
+#    define gzwrite               z_gzwrite
+#  endif
+#  define inflate               z_inflate
+#  define inflateBack           z_inflateBack
+#  define inflateBackEnd        z_inflateBackEnd
+#  define inflateBackInit_      z_inflateBackInit_
+#  define inflateCopy           z_inflateCopy
+#  define inflateEnd            z_inflateEnd
+#  define inflateGetHeader      z_inflateGetHeader
+#  define inflateInit2_         z_inflateInit2_
+#  define inflateInit_          z_inflateInit_
+#  define inflateMark           z_inflateMark
+#  define inflatePrime          z_inflatePrime
+#  define inflateReset          z_inflateReset
+#  define inflateReset2         z_inflateReset2
+#  define inflateSetDictionary  z_inflateSetDictionary
+#  define inflateGetDictionary  z_inflateGetDictionary
+#  define inflateSync           z_inflateSync
+#  define inflateSyncPoint      z_inflateSyncPoint
+#  define inflateUndermine      z_inflateUndermine
+#  define inflateResetKeep      z_inflateResetKeep
+#  define inflate_copyright     z_inflate_copyright
+#  define inflate_fast          z_inflate_fast
+#  define inflate_table         z_inflate_table
+#  ifndef Z_SOLO
+#    define uncompress            z_uncompress
+#  endif
+#  define zError                z_zError
+#  ifndef Z_SOLO
+#    define zcalloc               z_zcalloc
+#    define zcfree                z_zcfree
+#  endif
+#  define zlibCompileFlags      z_zlibCompileFlags
+#  define zlibVersion           z_zlibVersion
+
+/* all zlib typedefs in zlib.h and zconf.h */
+#  define Byte                  z_Byte
+#  define Bytef                 z_Bytef
+#  define alloc_func            z_alloc_func
+#  define charf                 z_charf
+#  define free_func             z_free_func
+#  ifndef Z_SOLO
+#    define gzFile                z_gzFile
+#  endif
+#  define gz_header             z_gz_header
+#  define gz_headerp            z_gz_headerp
+#  define in_func               z_in_func
+#  define intf                  z_intf
+#  define out_func              z_out_func
+#  define uInt                  z_uInt
+#  define uIntf                 z_uIntf
+#  define uLong                 z_uLong
+#  define uLongf                z_uLongf
+#  define voidp                 z_voidp
+#  define voidpc                z_voidpc
+#  define voidpf                z_voidpf
+
+/* all zlib structs in zlib.h and zconf.h */
+#  define gz_header_s           z_gz_header_s
+#  define internal_state        z_internal_state
+
+#endif
+
+#if defined(__MSDOS__) && !defined(MSDOS)
+#  define MSDOS
+#endif
+#if (defined(OS_2) || defined(__OS2__)) && !defined(OS2)
+#  define OS2
+#endif
+#if defined(_WINDOWS) && !defined(WINDOWS)
+#  define WINDOWS
+#endif
+#if defined(_WIN32) || defined(_WIN32_WCE) || defined(__WIN32__)
+#  ifndef WIN32
+#    define WIN32
+#  endif
+#endif
+#if (defined(MSDOS) || defined(OS2) || defined(WINDOWS)) && !defined(WIN32)
+#  if !defined(__GNUC__) && !defined(__FLAT__) && !defined(__386__)
+#    ifndef SYS16BIT
+#      define SYS16BIT
+#    endif
+#  endif
+#endif
+
+/*
+ * Compile with -DMAXSEG_64K if the alloc function cannot allocate more
+ * than 64k bytes at a time (needed on systems with 16-bit int).
+ */
+#ifdef SYS16BIT
+#  define MAXSEG_64K
+#endif
+#ifdef MSDOS
+#  define UNALIGNED_OK
+#endif
+
+#ifdef __STDC_VERSION__
+#  ifndef STDC
+#    define STDC
+#  endif
+#  if __STDC_VERSION__ >= 199901L
+#    ifndef STDC99
+#      define STDC99
+#    endif
+#  endif
+#endif
+#if !defined(STDC) && (defined(__STDC__) || defined(__cplusplus))
+#  define STDC
+#endif
+#if !defined(STDC) && (defined(__GNUC__) || defined(__BORLANDC__))
+#  define STDC
+#endif
+#if !defined(STDC) && (defined(MSDOS) || defined(WINDOWS) || defined(WIN32))
+#  define STDC
+#endif
+#if !defined(STDC) && (defined(OS2) || defined(__HOS_AIX__))
+#  define STDC
+#endif
+
+#if defined(__OS400__) && !defined(STDC)    /* iSeries (formerly AS/400). */
+#  define STDC
+#endif
+
+#ifndef STDC
+#  ifndef const /* cannot use !defined(STDC) && !defined(const) on Mac */
+#    define const       /* note: need a more gentle solution here */
+#  endif
+#endif
+
+#if defined(ZLIB_CONST) && !defined(z_const)
+#  define z_const const
+#else
+#  define z_const
+#endif
+
+/* Some Mac compilers merge all .h files incorrectly: */
+#if defined(__MWERKS__)||defined(applec)||defined(THINK_C)||defined(__SC__)
+#  define NO_DUMMY_DECL
+#endif
+
+/* Maximum value for memLevel in deflateInit2 */
+#ifndef MAX_MEM_LEVEL
+#  ifdef MAXSEG_64K
+#    define MAX_MEM_LEVEL 8
+#  else
+#    define MAX_MEM_LEVEL 9
+#  endif
+#endif
+
+/* Maximum value for windowBits in deflateInit2 and inflateInit2.
+ * WARNING: reducing MAX_WBITS makes minigzip unable to extract .gz files
+ * created by gzip. (Files created by minigzip can still be extracted by
+ * gzip.)
+ */
+#ifndef MAX_WBITS
+#  define MAX_WBITS   15 /* 32K LZ77 window */
+#endif
+
+/* The memory requirements for deflate are (in bytes):
+            (1 << (windowBits+2)) +  (1 << (memLevel+9))
+ that is: 128K for windowBits=15  +  128K for memLevel = 8  (default values)
+ plus a few kilobytes for small objects. For example, if you want to reduce
+ the default memory requirements from 256K to 128K, compile with
+     make CFLAGS="-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7"
+ Of course this will generally degrade compression (there's no free lunch).
+
+   The memory requirements for inflate are (in bytes) 1 << windowBits
+ that is, 32K for windowBits=15 (default value) plus a few kilobytes
+ for small objects.
+*/
+
+                        /* Type declarations */
+
+#ifndef OF /* function prototypes */
+#  ifdef STDC
+#    define OF(args)  args
+#  else
+#    define OF(args)  ()
+#  endif
+#endif
+
+#ifndef Z_ARG /* function prototypes for stdarg */
+#  if defined(STDC) || defined(Z_HAVE_STDARG_H)
+#    define Z_ARG(args)  args
+#  else
+#    define Z_ARG(args)  ()
+#  endif
+#endif
+
+/* The following definitions for FAR are needed only for MSDOS mixed
+ * model programming (small or medium model with some far allocations).
+ * This was tested only with MSC; for other MSDOS compilers you may have
+ * to define NO_MEMCPY in zutil.h.  If you don't need the mixed model,
+ * just define FAR to be empty.
+ */
+#ifdef SYS16BIT
+#  if defined(M_I86SM) || defined(M_I86MM)
+     /* MSC small or medium model */
+#    define SMALL_MEDIUM
+#    ifdef _MSC_VER
+#      define FAR _far
+#    else
+#      define FAR far
+#    endif
+#  endif
+#  if (defined(__SMALL__) || defined(__MEDIUM__))
+     /* Turbo C small or medium model */
+#    define SMALL_MEDIUM
+#    ifdef __BORLANDC__
+#      define FAR _far
+#    else
+#      define FAR far
+#    endif
+#  endif
+#endif
+
+#if defined(WINDOWS) || defined(WIN32)
+   /* If building or using zlib as a DLL, define ZLIB_DLL.
+    * This is not mandatory, but it offers a little performance increase.
+    */
+#  ifdef ZLIB_DLL
+#    if defined(WIN32) && (!defined(__BORLANDC__) || (__BORLANDC__ >= 0x500))
+#      ifdef ZLIB_INTERNAL
+#        define ZEXTERN extern __declspec(dllexport)
+#      else
+#        define ZEXTERN extern __declspec(dllimport)
+#      endif
+#    endif
+#  endif  /* ZLIB_DLL */
+   /* If building or using zlib with the WINAPI/WINAPIV calling convention,
+    * define ZLIB_WINAPI.
+    * Caution: the standard ZLIB1.DLL is NOT compiled using ZLIB_WINAPI.
+    */
+#  ifdef ZLIB_WINAPI
+#    ifdef FAR
+#      undef FAR
+#    endif
+#    include <windows.h>
+     /* No need for _export, use ZLIB.DEF instead. */
+     /* For complete Windows compatibility, use WINAPI, not __stdcall. */
+#    define ZEXPORT WINAPI
+#    ifdef WIN32
+#      define ZEXPORTVA WINAPIV
+#    else
+#      define ZEXPORTVA FAR CDECL
+#    endif
+#  endif
+#endif
+
+#if defined (__BEOS__)
+#  ifdef ZLIB_DLL
+#    ifdef ZLIB_INTERNAL
+#      define ZEXPORT   __declspec(dllexport)
+#      define ZEXPORTVA __declspec(dllexport)
+#    else
+#      define ZEXPORT   __declspec(dllimport)
+#      define ZEXPORTVA __declspec(dllimport)
+#    endif
+#  endif
+#endif
+
+#ifndef ZEXTERN
+#  define ZEXTERN extern
+#endif
+#ifndef ZEXPORT
+#  define ZEXPORT
+#endif
+#ifndef ZEXPORTVA
+#  define ZEXPORTVA
+#endif
+
+#ifndef FAR
+#  define FAR
+#endif
+
+#if !defined(__MACTYPES__)
+typedef unsigned char  Byte;  /* 8 bits */
+#endif
+typedef unsigned int   uInt;  /* 16 bits or more */
+typedef unsigned long  uLong; /* 32 bits or more */
+
+#ifdef SMALL_MEDIUM
+   /* Borland C/C++ and some old MSC versions ignore FAR inside typedef */
+#  define Bytef Byte FAR
+#else
+   typedef Byte  FAR Bytef;
+#endif
+typedef char  FAR charf;
+typedef int   FAR intf;
+typedef uInt  FAR uIntf;
+typedef uLong FAR uLongf;
+
+#ifdef STDC
+   typedef void const *voidpc;
+   typedef void FAR   *voidpf;
+   typedef void       *voidp;
+#else
+   typedef Byte const *voidpc;
+   typedef Byte FAR   *voidpf;
+   typedef Byte       *voidp;
+#endif
+
+#if !defined(Z_U4) && !defined(Z_SOLO) && defined(STDC)
+#  include <limits.h>
+#  if (UINT_MAX == 0xffffffffUL)
+#    define Z_U4 unsigned
+#  elif (ULONG_MAX == 0xffffffffUL)
+#    define Z_U4 unsigned long
+#  elif (USHRT_MAX == 0xffffffffUL)
+#    define Z_U4 unsigned short
+#  endif
+#endif
+
+#ifdef Z_U4
+   typedef Z_U4 z_crc_t;
+#else
+   typedef unsigned long z_crc_t;
+#endif
+
+#ifdef HAVE_UNISTD_H    /* may be set to #if 1 by ./configure */
+#  define Z_HAVE_UNISTD_H
+#endif
+
+#ifdef HAVE_STDARG_H    /* may be set to #if 1 by ./configure */
+#  define Z_HAVE_STDARG_H
+#endif
+
+#ifdef STDC
+#  ifndef Z_SOLO
+#    include <sys/types.h>      /* for off_t */
+#  endif
+#endif
+
+#if defined(STDC) || defined(Z_HAVE_STDARG_H)
+#  ifndef Z_SOLO
+#    include <stdarg.h>         /* for va_list */
+#  endif
+#endif
+
+#ifdef _WIN32
+#  ifndef Z_SOLO
+#    include <stddef.h>         /* for wchar_t */
+#  endif
+#endif
+
+/* a little trick to accommodate both "#define _LARGEFILE64_SOURCE" and
+ * "#define _LARGEFILE64_SOURCE 1" as requesting 64-bit operations, (even
+ * though the former does not conform to the LFS document), but considering
+ * both "#undef _LARGEFILE64_SOURCE" and "#define _LARGEFILE64_SOURCE 0" as
+ * equivalently requesting no 64-bit operations
+ */
+#if defined(_LARGEFILE64_SOURCE) && -_LARGEFILE64_SOURCE - -1 == 1
+#  undef _LARGEFILE64_SOURCE
+#endif
+
+#if defined(__WATCOMC__) && !defined(Z_HAVE_UNISTD_H)
+#  define Z_HAVE_UNISTD_H
+#endif
+#ifndef Z_SOLO
+#  if defined(Z_HAVE_UNISTD_H) || defined(_LARGEFILE64_SOURCE)
+#    include <unistd.h>         /* for SEEK_*, off_t, and _LFS64_LARGEFILE */
+#    ifdef VMS
+#      include <unixio.h>       /* for off_t */
+#    endif
+#    ifndef z_off_t
+#      define z_off_t off_t
+#    endif
+#  endif
+#endif
+
+#if defined(_LFS64_LARGEFILE) && _LFS64_LARGEFILE-0
+#  define Z_LFS64
+#endif
+
+#if defined(_LARGEFILE64_SOURCE) && defined(Z_LFS64)
+#  define Z_LARGE64
+#endif
+
+#if defined(_FILE_OFFSET_BITS) && _FILE_OFFSET_BITS-0 == 64 && defined(Z_LFS64)
+#  define Z_WANT64
+#endif
+
+#if !defined(SEEK_SET) && !defined(Z_SOLO)
+#  define SEEK_SET        0       /* Seek from beginning of file.  */
+#  define SEEK_CUR        1       /* Seek from current position.  */
+#  define SEEK_END        2       /* Set file pointer to EOF plus "offset" */
+#endif
+
+#ifndef z_off_t
+#  define z_off_t long
+#endif
+
+#if !defined(_WIN32) && defined(Z_LARGE64)
+#  define z_off64_t off64_t
+#else
+#  if defined(_WIN32) && !defined(__GNUC__) && !defined(Z_SOLO)
+#    define z_off64_t __int64
+#  else
+#    define z_off64_t z_off_t
+#  endif
+#endif
+
+/* MVS linker does not support external names larger than 8 bytes */
+#if defined(__MVS__)
+  #pragma map(deflateInit_,"DEIN")
+  #pragma map(deflateInit2_,"DEIN2")
+  #pragma map(deflateEnd,"DEEND")
+  #pragma map(deflateBound,"DEBND")
+  #pragma map(inflateInit_,"ININ")
+  #pragma map(inflateInit2_,"ININ2")
+  #pragma map(inflateEnd,"INEND")
+  #pragma map(inflateSync,"INSY")
+  #pragma map(inflateSetDictionary,"INSEDI")
+  #pragma map(compressBound,"CMBND")
+  #pragma map(inflate_table,"INTABL")
+  #pragma map(inflate_fast,"INFA")
+  #pragma map(inflate_copyright,"INCOPY")
+#endif
+
+#endif /* ZCONF_H */
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/zlib.h b/c-blosc/internal-complibs/zlib-1.2.8/zlib.h
new file mode 100644
index 0000000..3e0c767
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/zlib.h
@@ -0,0 +1,1768 @@
+/* zlib.h -- interface of the 'zlib' general purpose compression library
+  version 1.2.8, April 28th, 2013
+
+  Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler
+
+  This software is provided 'as-is', without any express or implied
+  warranty.  In no event will the authors be held liable for any damages
+  arising from the use of this software.
+
+  Permission is granted to anyone to use this software for any purpose,
+  including commercial applications, and to alter it and redistribute it
+  freely, subject to the following restrictions:
+
+  1. The origin of this software must not be misrepresented; you must not
+     claim that you wrote the original software. If you use this software
+     in a product, an acknowledgment in the product documentation would be
+     appreciated but is not required.
+  2. Altered source versions must be plainly marked as such, and must not be
+     misrepresented as being the original software.
+  3. This notice may not be removed or altered from any source distribution.
+
+  Jean-loup Gailly        Mark Adler
+  jloup at gzip.org          madler at alumni.caltech.edu
+
+
+  The data format used by the zlib library is described by RFCs (Request for
+  Comments) 1950 to 1952 in the files http://tools.ietf.org/html/rfc1950
+  (zlib format), rfc1951 (deflate format) and rfc1952 (gzip format).
+*/
+
+#ifndef ZLIB_H
+#define ZLIB_H
+
+#include "zconf.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define ZLIB_VERSION "1.2.8"
+#define ZLIB_VERNUM 0x1280
+#define ZLIB_VER_MAJOR 1
+#define ZLIB_VER_MINOR 2
+#define ZLIB_VER_REVISION 8
+#define ZLIB_VER_SUBREVISION 0
+
+/*
+    The 'zlib' compression library provides in-memory compression and
+  decompression functions, including integrity checks of the uncompressed data.
+  This version of the library supports only one compression method (deflation)
+  but other algorithms will be added later and will have the same stream
+  interface.
+
+    Compression can be done in a single step if the buffers are large enough,
+  or can be done by repeated calls of the compression function.  In the latter
+  case, the application must provide more input and/or consume the output
+  (providing more output space) before each call.
+
+    The compressed data format used by default by the in-memory functions is
+  the zlib format, which is a zlib wrapper documented in RFC 1950, wrapped
+  around a deflate stream, which is itself documented in RFC 1951.
+
+    The library also supports reading and writing files in gzip (.gz) format
+  with an interface similar to that of stdio using the functions that start
+  with "gz".  The gzip format is different from the zlib format.  gzip is a
+  gzip wrapper, documented in RFC 1952, wrapped around a deflate stream.
+
+    This library can optionally read and write gzip streams in memory as well.
+
+    The zlib format was designed to be compact and fast for use in memory
+  and on communications channels.  The gzip format was designed for single-
+  file compression on file systems, has a larger header than zlib to maintain
+  directory information, and uses a different, slower check method than zlib.
+
+    The library does not install any signal handler.  The decoder checks
+  the consistency of the compressed data, so the library should never crash
+  even in case of corrupted input.
+*/
+
+typedef voidpf (*alloc_func) OF((voidpf opaque, uInt items, uInt size));
+typedef void   (*free_func)  OF((voidpf opaque, voidpf address));
+
+struct internal_state;
+
+typedef struct z_stream_s {
+    z_const Bytef *next_in;     /* next input byte */
+    uInt     avail_in;  /* number of bytes available at next_in */
+    uLong    total_in;  /* total number of input bytes read so far */
+
+    Bytef    *next_out; /* next output byte should be put there */
+    uInt     avail_out; /* remaining free space at next_out */
+    uLong    total_out; /* total number of bytes output so far */
+
+    z_const char *msg;  /* last error message, NULL if no error */
+    struct internal_state FAR *state; /* not visible by applications */
+
+    alloc_func zalloc;  /* used to allocate the internal state */
+    free_func  zfree;   /* used to free the internal state */
+    voidpf     opaque;  /* private data object passed to zalloc and zfree */
+
+    int     data_type;  /* best guess about the data type: binary or text */
+    uLong   adler;      /* adler32 value of the uncompressed data */
+    uLong   reserved;   /* reserved for future use */
+} z_stream;
+
+typedef z_stream FAR *z_streamp;
+
+/*
+     gzip header information passed to and from zlib routines.  See RFC 1952
+  for more details on the meanings of these fields.
+*/
+typedef struct gz_header_s {
+    int     text;       /* true if compressed data believed to be text */
+    uLong   time;       /* modification time */
+    int     xflags;     /* extra flags (not used when writing a gzip file) */
+    int     os;         /* operating system */
+    Bytef   *extra;     /* pointer to extra field or Z_NULL if none */
+    uInt    extra_len;  /* extra field length (valid if extra != Z_NULL) */
+    uInt    extra_max;  /* space at extra (only when reading header) */
+    Bytef   *name;      /* pointer to zero-terminated file name or Z_NULL */
+    uInt    name_max;   /* space at name (only when reading header) */
+    Bytef   *comment;   /* pointer to zero-terminated comment or Z_NULL */
+    uInt    comm_max;   /* space at comment (only when reading header) */
+    int     hcrc;       /* true if there was or will be a header crc */
+    int     done;       /* true when done reading gzip header (not used
+                           when writing a gzip file) */
+} gz_header;
+
+typedef gz_header FAR *gz_headerp;
+
+/*
+     The application must update next_in and avail_in when avail_in has dropped
+   to zero.  It must update next_out and avail_out when avail_out has dropped
+   to zero.  The application must initialize zalloc, zfree and opaque before
+   calling the init function.  All other fields are set by the compression
+   library and must not be updated by the application.
+
+     The opaque value provided by the application will be passed as the first
+   parameter for calls of zalloc and zfree.  This can be useful for custom
+   memory management.  The compression library attaches no meaning to the
+   opaque value.
+
+     zalloc must return Z_NULL if there is not enough memory for the object.
+   If zlib is used in a multi-threaded application, zalloc and zfree must be
+   thread safe.
+
+     On 16-bit systems, the functions zalloc and zfree must be able to allocate
+   exactly 65536 bytes, but will not be required to allocate more than this if
+   the symbol MAXSEG_64K is defined (see zconf.h).  WARNING: On MSDOS, pointers
+   returned by zalloc for objects of exactly 65536 bytes *must* have their
+   offset normalized to zero.  The default allocation function provided by this
+   library ensures this (see zutil.c).  To reduce memory requirements and avoid
+   any allocation of 64K objects, at the expense of compression ratio, compile
+   the library with -DMAX_WBITS=14 (see zconf.h).
+
+     The fields total_in and total_out can be used for statistics or progress
+   reports.  After compression, total_in holds the total size of the
+   uncompressed data and may be saved for use in the decompressor (particularly
+   if the decompressor wants to decompress everything in a single step).
+*/
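+
+/*
+     A minimal initialization sketch (an editorial example, not part of the
+   upstream zlib text): the fields that the application must set before
+   calling an init function, as described above.  Z_NULL selects the default
+   allocation functions.
+
+       z_stream strm;
+       strm.zalloc = Z_NULL;
+       strm.zfree  = Z_NULL;
+       strm.opaque = Z_NULL;
+       strm.next_in  = Z_NULL;    no input provided yet
+       strm.avail_in = 0;
+*/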
+
+                        /* constants */
+
+#define Z_NO_FLUSH      0
+#define Z_PARTIAL_FLUSH 1
+#define Z_SYNC_FLUSH    2
+#define Z_FULL_FLUSH    3
+#define Z_FINISH        4
+#define Z_BLOCK         5
+#define Z_TREES         6
+/* Allowed flush values; see deflate() and inflate() below for details */
+
+#define Z_OK            0
+#define Z_STREAM_END    1
+#define Z_NEED_DICT     2
+#define Z_ERRNO        (-1)
+#define Z_STREAM_ERROR (-2)
+#define Z_DATA_ERROR   (-3)
+#define Z_MEM_ERROR    (-4)
+#define Z_BUF_ERROR    (-5)
+#define Z_VERSION_ERROR (-6)
+/* Return codes for the compression/decompression functions. Negative values
+ * are errors, positive values are used for special but normal events.
+ */
+
+#define Z_NO_COMPRESSION         0
+#define Z_BEST_SPEED             1
+#define Z_BEST_COMPRESSION       9
+#define Z_DEFAULT_COMPRESSION  (-1)
+/* compression levels */
+
+#define Z_FILTERED            1
+#define Z_HUFFMAN_ONLY        2
+#define Z_RLE                 3
+#define Z_FIXED               4
+#define Z_DEFAULT_STRATEGY    0
+/* compression strategy; see deflateInit2() below for details */
+
+#define Z_BINARY   0
+#define Z_TEXT     1
+#define Z_ASCII    Z_TEXT   /* for compatibility with 1.2.2 and earlier */
+#define Z_UNKNOWN  2
+/* Possible values of the data_type field (though see inflate()) */
+
+#define Z_DEFLATED   8
+/* The deflate compression method (the only one supported in this version) */
+
+#define Z_NULL  0  /* for initializing zalloc, zfree, opaque */
+
+#define zlib_version zlibVersion()
+/* for compatibility with versions < 1.0.2 */
+
+
+                        /* basic functions */
+
+ZEXTERN const char * ZEXPORT zlibVersion OF((void));
+/* The application can compare zlibVersion and ZLIB_VERSION for consistency.
+   If the first character differs, the library code actually used is not
+   compatible with the zlib.h header file used by the application.  This check
+   is automatically made by deflateInit and inflateInit.
+ */
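+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text): the
+   consistency check described above, performed by hand when deflateInit or
+   inflateInit is not used.
+
+       if (zlibVersion()[0] != ZLIB_VERSION[0])
+           abort();               library and zlib.h are incompatible
+*/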
+
+/*
+ZEXTERN int ZEXPORT deflateInit OF((z_streamp strm, int level));
+
+     Initializes the internal stream state for compression.  The fields
+   zalloc, zfree and opaque must be initialized beforehand by the caller.  If
+   zalloc and zfree are set to Z_NULL, deflateInit updates them to use default
+   allocation functions.
+
+     The compression level must be Z_DEFAULT_COMPRESSION, or between 0 and 9:
+   1 gives best speed, 9 gives best compression, 0 gives no compression at all
+   (the input data is simply copied a block at a time).  Z_DEFAULT_COMPRESSION
+   requests a default compromise between speed and compression (currently
+   equivalent to level 6).
+
+     deflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough
+   memory, Z_STREAM_ERROR if level is not a valid compression level, or
+   Z_VERSION_ERROR if the zlib library version (zlib_version) is incompatible
+   with the version assumed by the caller (ZLIB_VERSION).  msg is set to null
+   if there is no error message.  deflateInit does not perform any compression:
+   this will be done by deflate().
+*/
+
+
+ZEXTERN int ZEXPORT deflate OF((z_streamp strm, int flush));
+/*
+    deflate compresses as much data as possible, and stops when the input
+  buffer becomes empty or the output buffer becomes full.  It may introduce
+  some output latency (reading input without producing any output) except when
+  forced to flush.
+
+    The detailed semantics are as follows.  deflate performs one or both of the
+  following actions:
+
+  - Compress more input starting at next_in and update next_in and avail_in
+    accordingly.  If not all input can be processed (because there is not
+    enough room in the output buffer), next_in and avail_in are updated and
+    processing will resume at this point for the next call of deflate().
+
+  - Provide more output starting at next_out and update next_out and avail_out
+    accordingly.  This action is forced if the parameter flush is non zero.
+    Forcing flush frequently degrades the compression ratio, so this parameter
+    should be set only when necessary (in interactive applications).  Some
+    output may be provided even if flush is not set.
+
+    Before the call of deflate(), the application should ensure that at least
+  one of the actions is possible, by providing more input and/or consuming more
+  output, and updating avail_in or avail_out accordingly; avail_out should
+  never be zero before the call.  The application can consume the compressed
+  output when it wants, for example when the output buffer is full (avail_out
+  == 0), or after each call of deflate().  If deflate returns Z_OK and with
+  zero avail_out, it must be called again after making room in the output
+  buffer because there might be more output pending.
+
+    Normally the parameter flush is set to Z_NO_FLUSH, which allows deflate to
+  decide how much data to accumulate before producing output, in order to
+  maximize compression.
+
+    If the parameter flush is set to Z_SYNC_FLUSH, all pending output is
+  flushed to the output buffer and the output is aligned on a byte boundary, so
+  that the decompressor can get all input data available so far.  (In
+  particular avail_in is zero after the call if enough output space has been
+  provided before the call.) Flushing may degrade compression for some
+  compression algorithms and so it should be used only when necessary.  This
+  completes the current deflate block and follows it with an empty stored block
+  that is three bits plus filler bits to the next byte, followed by four bytes
+  (00 00 ff ff).
+
+    If flush is set to Z_PARTIAL_FLUSH, all pending output is flushed to the
+  output buffer, but the output is not aligned to a byte boundary.  All of the
+  input data so far will be available to the decompressor, as for Z_SYNC_FLUSH.
+  This completes the current deflate block and follows it with an empty fixed
+  codes block that is 10 bits long.  This assures that enough bytes are output
+  in order for the decompressor to finish the block before the empty fixed
+  codes block.
+
+    If flush is set to Z_BLOCK, a deflate block is completed and emitted, as
+  for Z_SYNC_FLUSH, but the output is not aligned on a byte boundary, and up to
+  seven bits of the current block are held to be written as the next byte after
+  the next deflate block is completed.  In this case, the decompressor may not
+  be provided enough bits at this point in order to complete decompression of
+  the data provided so far to the compressor.  It may need to wait for the next
+  block to be emitted.  This is for advanced applications that need to control
+  the emission of deflate blocks.
+
+    If flush is set to Z_FULL_FLUSH, all output is flushed as with
+  Z_SYNC_FLUSH, and the compression state is reset so that decompression can
+  restart from this point if previous compressed data has been damaged or if
+  random access is desired.  Using Z_FULL_FLUSH too often can seriously degrade
+  compression.
+
+    If deflate returns with avail_out == 0, this function must be called again
+  with the same value of the flush parameter and more output space (updated
+  avail_out), until the flush is complete (deflate returns with non-zero
+  avail_out).  In the case of a Z_FULL_FLUSH or Z_SYNC_FLUSH, make sure that
+  avail_out is greater than six to avoid repeated flush markers due to
+  avail_out == 0 on return.
+
+    If the parameter flush is set to Z_FINISH, pending input is processed,
+  pending output is flushed and deflate returns with Z_STREAM_END if there was
+  enough output space; if deflate returns with Z_OK, this function must be
+  called again with Z_FINISH and more output space (updated avail_out) but no
+  more input data, until it returns with Z_STREAM_END or an error.  After
+  deflate has returned Z_STREAM_END, the only possible operations on the stream
+  are deflateReset or deflateEnd.
+
+    Z_FINISH can be used immediately after deflateInit if all the compression
+  is to be done in a single step.  In this case, avail_out must be at least the
+  value returned by deflateBound (see below).  Then deflate is guaranteed to
+  return Z_STREAM_END.  If not enough output space is provided, deflate will
+  not return Z_STREAM_END, and it must be called again as described above.
+
+    deflate() sets strm->adler to the adler32 checksum of all input read
+  so far (that is, total_in bytes).
+
+    deflate() may update strm->data_type if it can make a good guess about
+  the input data type (Z_BINARY or Z_TEXT).  When in doubt, the data is
+  considered binary.  This field is for informational purposes only and does
+  not affect the compression algorithm in any manner.
+
+    deflate() returns Z_OK if some progress has been made (more input
+  processed or more output produced), Z_STREAM_END if all input has been
+  consumed and all output has been produced (only when flush is set to
+  Z_FINISH), Z_STREAM_ERROR if the stream state was inconsistent (for example
+  if next_in or next_out was Z_NULL), Z_BUF_ERROR if no progress is possible
+  (for example avail_in or avail_out was zero).  Note that Z_BUF_ERROR is not
+  fatal, and deflate() can be called again with more input and more output
+  space to continue compressing.
+*/
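+
+/*
+     Usage sketch (an editorial example, not part of the upstream zlib text):
+   the Z_FINISH convention described above, compressing an in-memory buffer.
+   CHUNK, source, source_len and the consume() output handler are assumptions
+   of the example; deflateInit() is assumed to have returned Z_OK, and error
+   handling is abbreviated.
+
+       unsigned char out[CHUNK];
+       int ret;
+       strm.next_in  = source;
+       strm.avail_in = source_len;
+       do {
+           strm.next_out  = out;
+           strm.avail_out = CHUNK;
+           ret = deflate(&strm, Z_FINISH);
+           consume(out, CHUNK - strm.avail_out);
+       } while (ret == Z_OK);     Z_STREAM_END when all output produced
+       deflateEnd(&strm);
+*/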
+
+
+ZEXTERN int ZEXPORT deflateEnd OF((z_streamp strm));
+/*
+     All dynamically allocated data structures for this stream are freed.
+   This function discards any unprocessed input and does not flush any pending
+   output.
+
+     deflateEnd returns Z_OK if success, Z_STREAM_ERROR if the
+   stream state was inconsistent, Z_DATA_ERROR if the stream was freed
+   prematurely (some input or output was discarded).  In the error case, msg
+   may be set but then points to a static string (which must not be
+   deallocated).
+*/
+
+
+/*
+ZEXTERN int ZEXPORT inflateInit OF((z_streamp strm));
+
+     Initializes the internal stream state for decompression.  The fields
+   next_in, avail_in, zalloc, zfree and opaque must be initialized beforehand
+   the caller.  If next_in is not Z_NULL and avail_in is large enough (the
+   exact value depends on the compression method), inflateInit determines the
+   compression method from the zlib header and allocates all data structures
+   accordingly; otherwise the allocation will be deferred to the first call of
+   inflate.  If zalloc and zfree are set to Z_NULL, inflateInit updates them to
+   use default allocation functions.
+
+     inflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough
+   memory, Z_VERSION_ERROR if the zlib library version is incompatible with the
+   version assumed by the caller, or Z_STREAM_ERROR if the parameters are
+   invalid, such as a null pointer to the structure.  msg is set to null if
+   there is no error message.  inflateInit does not perform any decompression
+   apart from possibly reading the zlib header if present: actual decompression
+   will be done by inflate().  (So next_in and avail_in may be modified, but
+   next_out and avail_out are unused and unchanged.) The current implementation
+   of inflateInit() does not process any header information -- that is deferred
+   until inflate() is called.
+*/
+
+
+ZEXTERN int ZEXPORT inflate OF((z_streamp strm, int flush));
+/*
+    inflate decompresses as much data as possible, and stops when the input
+  buffer becomes empty or the output buffer becomes full.  It may introduce
+  some output latency (reading input without producing any output) except when
+  forced to flush.
+
+  The detailed semantics are as follows.  inflate performs one or both of the
+  following actions:
+
+  - Decompress more input starting at next_in and update next_in and avail_in
+    accordingly.  If not all input can be processed (because there is not
+    enough room in the output buffer), next_in is updated and processing will
+    resume at this point for the next call of inflate().
+
+  - Provide more output starting at next_out and update next_out and avail_out
+    accordingly.  inflate() provides as much output as possible, until there is
+    no more input data or no more space in the output buffer (see below about
+    the flush parameter).
+
+    Before the call of inflate(), the application should ensure that at least
+  one of the actions is possible, by providing more input and/or consuming more
+  output, and updating the next_* and avail_* values accordingly.  The
+  application can consume the uncompressed output when it wants, for example
+  when the output buffer is full (avail_out == 0), or after each call of
+  inflate().  If inflate returns Z_OK and with zero avail_out, it must be
+  called again after making room in the output buffer because there might be
+  more output pending.
+
+    The flush parameter of inflate() can be Z_NO_FLUSH, Z_SYNC_FLUSH, Z_FINISH,
+  Z_BLOCK, or Z_TREES.  Z_SYNC_FLUSH requests that inflate() flush as much
+  output as possible to the output buffer.  Z_BLOCK requests that inflate()
+  stop if and when it gets to the next deflate block boundary.  When decoding
+  the zlib or gzip format, this will cause inflate() to return immediately
+  after the header and before the first block.  When doing a raw inflate,
+  inflate() will go ahead and process the first block, and will return when it
+  gets to the end of that block, or when it runs out of data.
+
+    The Z_BLOCK option assists in appending to or combining deflate streams.
+  Also to assist in this, on return inflate() will set strm->data_type to the
+  number of unused bits in the last byte taken from strm->next_in, plus 64 if
+  inflate() is currently decoding the last block in the deflate stream, plus
+  128 if inflate() returned immediately after decoding an end-of-block code or
+  decoding the complete header up to just before the first byte of the deflate
+  stream.  The end-of-block will not be indicated until all of the uncompressed
+  data from that block has been written to strm->next_out.  The number of
+  unused bits may in general be greater than seven, except when bit 7 of
+  data_type is set, in which case the number of unused bits will be less than
+  eight.  data_type is set as noted here every time inflate() returns for all
+  flush options, and so can be used to determine the amount of currently
+  consumed input in bits.
+
+    The Z_TREES option behaves as Z_BLOCK does, but it also returns when the
+  end of each deflate block header is reached, before any actual data in that
+  block is decoded.  This allows the caller to determine the length of the
+  deflate block header for later use in random access within a deflate block.
+  256 is added to the value of strm->data_type when inflate() returns
+  immediately after reaching the end of the deflate block header.
+
+    inflate() should normally be called until it returns Z_STREAM_END or an
+  error.  However if all decompression is to be performed in a single step (a
+  single call of inflate), the parameter flush should be set to Z_FINISH.  In
+  this case all pending input is processed and all pending output is flushed;
+  avail_out must be large enough to hold all of the uncompressed data for the
+  operation to complete.  (The size of the uncompressed data may have been
+  saved by the compressor for this purpose.) The use of Z_FINISH is not
+  required to perform an inflation in one step.  However it may be used to
+  inform inflate that a faster approach can be used for the single inflate()
+  call.  Z_FINISH also informs inflate to not maintain a sliding window if the
+  stream completes, which reduces inflate's memory footprint.  If the stream
+  does not complete, either because not all of the stream is provided or not
+  enough output space is provided, then a sliding window will be allocated and
+  inflate() can be called again to continue the operation as if Z_NO_FLUSH had
+  been used.
+
+     In this implementation, inflate() always flushes as much output as
+  possible to the output buffer, and always uses the faster approach on the
+  first call.  So the effects of the flush parameter in this implementation are
+  on the return value of inflate() as noted below, when inflate() returns early
+  when Z_BLOCK or Z_TREES is used, and when inflate() avoids the allocation of
+  memory for a sliding window when Z_FINISH is used.
+
+     If a preset dictionary is needed after this call (see inflateSetDictionary
+  below), inflate sets strm->adler to the Adler-32 checksum of the dictionary
+  chosen by the compressor and returns Z_NEED_DICT; otherwise it sets
+  strm->adler to the Adler-32 checksum of all output produced so far (that is,
+  total_out bytes) and returns Z_OK, Z_STREAM_END or an error code as described
+  below.  At the end of the stream, inflate() checks that its computed adler32
+  checksum is equal to that saved by the compressor and returns Z_STREAM_END
+  only if the checksum is correct.
+
+    inflate() can decompress and check either zlib-wrapped or gzip-wrapped
+  deflate data.  The header type is detected automatically, if requested when
+  initializing with inflateInit2().  Any information contained in the gzip
+  header is not retained, so applications that need that information should
+  instead use raw inflate, see inflateInit2() below, or inflateBack() and
+  perform their own processing of the gzip header and trailer.  When processing
+  gzip-wrapped deflate data, strm->adler is set to the CRC-32 of the output
+  produced so far.  The CRC-32 is checked against the gzip trailer.
+
+    inflate() returns Z_OK if some progress has been made (more input processed
+  or more output produced), Z_STREAM_END if the end of the compressed data has
+  been reached and all uncompressed output has been produced, Z_NEED_DICT if a
+  preset dictionary is needed at this point, Z_DATA_ERROR if the input data was
+  corrupted (input stream not conforming to the zlib format or incorrect check
+  value), Z_STREAM_ERROR if the stream structure was inconsistent (for example
+  next_in or next_out was Z_NULL), Z_MEM_ERROR if there was not enough memory,
+  Z_BUF_ERROR if no progress is possible or if there was not enough room in the
+  output buffer when Z_FINISH is used.  Note that Z_BUF_ERROR is not fatal, and
+  inflate() can be called again with more input and more output space to
+  continue decompressing.  If Z_DATA_ERROR is returned, the application may
+  then call inflateSync() to look for a good compression block if a partial
+  recovery of the data is desired.
+*/
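+
+/*
+     Usage sketch (an editorial example, not part of the upstream zlib text):
+   the corresponding decompression loop for the semantics above.  CHUNK,
+   source, source_len and consume() are assumptions of the example;
+   inflateInit() is assumed to have returned Z_OK, and dictionary and error
+   handling are abbreviated.
+
+       unsigned char out[CHUNK];
+       int ret;
+       strm.next_in  = source;
+       strm.avail_in = source_len;
+       do {
+           strm.next_out  = out;
+           strm.avail_out = CHUNK;
+           ret = inflate(&strm, Z_NO_FLUSH);
+           if (ret != Z_OK && ret != Z_STREAM_END)
+               break;             Z_NEED_DICT, Z_DATA_ERROR, Z_BUF_ERROR, ...
+           consume(out, CHUNK - strm.avail_out);
+       } while (ret != Z_STREAM_END);
+       inflateEnd(&strm);
+*/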
+
+
+ZEXTERN int ZEXPORT inflateEnd OF((z_streamp strm));
+/*
+     All dynamically allocated data structures for this stream are freed.
+   This function discards any unprocessed input and does not flush any pending
+   output.
+
+     inflateEnd returns Z_OK if success, Z_STREAM_ERROR if the stream state
+   was inconsistent.  In the error case, msg may be set but then points to a
+   static string (which must not be deallocated).
+*/
+
+
+                        /* Advanced functions */
+
+/*
+    The following functions are needed only in some special applications.
+*/
+
+/*
+ZEXTERN int ZEXPORT deflateInit2 OF((z_streamp strm,
+                                     int  level,
+                                     int  method,
+                                     int  windowBits,
+                                     int  memLevel,
+                                     int  strategy));
+
+     This is another version of deflateInit with more compression options.  The
+   fields next_in, zalloc, zfree and opaque must be initialized beforehand by
+   the caller.
+
+     The method parameter is the compression method.  It must be Z_DEFLATED in
+   this version of the library.
+
+     The windowBits parameter is the base two logarithm of the window size
+   (the size of the history buffer).  It should be in the range 8..15 for this
+   version of the library.  Larger values of this parameter result in better
+   compression at the expense of memory usage.  The default value is 15 if
+   deflateInit is used instead.
+
+     windowBits can also be -8..-15 for raw deflate.  In this case, -windowBits
+   determines the window size.  deflate() will then generate raw deflate data
+   with no zlib header or trailer, and will not compute an adler32 check value.
+
+     windowBits can also be greater than 15 for optional gzip encoding.  Add
+   16 to windowBits to write a simple gzip header and trailer around the
+   compressed data instead of a zlib wrapper.  The gzip header will have no
+   file name, no extra data, no comment, no modification time (set to zero), no
+   header crc, and the operating system will be set to 255 (unknown).  If a
+   gzip stream is being written, strm->adler is a crc32 instead of an adler32.
+
+     The memLevel parameter specifies how much memory should be allocated
+   for the internal compression state.  memLevel=1 uses minimum memory but is
+   slow and reduces compression ratio; memLevel=9 uses maximum memory for
+   optimal speed.  The default value is 8.  See zconf.h for total memory usage
+   as a function of windowBits and memLevel.
+
+     The strategy parameter is used to tune the compression algorithm.  Use the
+   value Z_DEFAULT_STRATEGY for normal data, Z_FILTERED for data produced by a
+   filter (or predictor), Z_HUFFMAN_ONLY to force Huffman encoding only (no
+   string match), or Z_RLE to limit match distances to one (run-length
+   encoding).  Filtered data consists mostly of small values with a somewhat
+   random distribution.  In this case, the compression algorithm is tuned to
+   compress them better.  The effect of Z_FILTERED is to force more Huffman
+   coding and less string matching; it is somewhat intermediate between
+   Z_DEFAULT_STRATEGY and Z_HUFFMAN_ONLY.  Z_RLE is designed to be almost as
+   fast as Z_HUFFMAN_ONLY, but give better compression for PNG image data.  The
+   strategy parameter only affects the compression ratio but not the
+   correctness of the compressed output even if it is not set appropriately.
+   Z_FIXED prevents the use of dynamic Huffman codes, allowing for a simpler
+   decoder for special applications.
+
+     deflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was not enough
+   memory, Z_STREAM_ERROR if any parameter is invalid (such as an invalid
+   method), or Z_VERSION_ERROR if the zlib library version (zlib_version) is
+   incompatible with the version assumed by the caller (ZLIB_VERSION).  msg is
+   set to null if there is no error message.  deflateInit2 does not perform any
+   compression: this will be done by deflate().
+*/
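+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   requesting a gzip wrapper as described above, by adding 16 to windowBits.
+
+       ret = deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
+                          15 + 16,     32K LZ77 window, gzip wrapper
+                          8, Z_DEFAULT_STRATEGY);
+*/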
+
+ZEXTERN int ZEXPORT deflateSetDictionary OF((z_streamp strm,
+                                             const Bytef *dictionary,
+                                             uInt  dictLength));
+/*
+     Initializes the compression dictionary from the given byte sequence
+   without producing any compressed output.  When using the zlib format, this
+   function must be called immediately after deflateInit, deflateInit2 or
+   deflateReset, and before any call of deflate.  When doing raw deflate, this
+   function must be called either before any call of deflate, or immediately
+   after the completion of a deflate block, i.e. after all input has been
+   consumed and all output has been delivered when using any of the flush
+   options Z_BLOCK, Z_PARTIAL_FLUSH, Z_SYNC_FLUSH, or Z_FULL_FLUSH.  The
+   compressor and decompressor must use exactly the same dictionary (see
+   inflateSetDictionary).
+
+     The dictionary should consist of strings (byte sequences) that are likely
+   to be encountered later in the data to be compressed, with the most commonly
+   used strings preferably put towards the end of the dictionary.  Using a
+   dictionary is most useful when the data to be compressed is short and can be
+   predicted with good accuracy; the data can then be compressed better than
+   with the default empty dictionary.
+
+     Depending on the size of the compression data structures selected by
+   deflateInit or deflateInit2, a part of the dictionary may in effect be
+   discarded, for example if the dictionary is larger than the window size
+   provided in deflateInit or deflateInit2.  Thus the strings most likely to be
+   useful should be put at the end of the dictionary, not at the front.  In
+   addition, the current implementation of deflate will use at most the window
+   size minus 262 bytes of the provided dictionary.
+
+     Upon return of this function, strm->adler is set to the adler32 value
+   of the dictionary; the decompressor may later use this value to determine
+   which dictionary has been used by the compressor.  (The adler32 value
+   applies to the whole dictionary even if only a subset of the dictionary is
+   actually used by the compressor.) If a raw deflate was requested, then the
+   adler32 value is not computed and strm->adler is not set.
+
+     deflateSetDictionary returns Z_OK if success, or Z_STREAM_ERROR if a
+   parameter is invalid (e.g.  dictionary being Z_NULL) or the stream state is
+   inconsistent (for example if deflate has already been called for this stream
+   or if not at a block boundary for raw deflate).  deflateSetDictionary does
+   not perform any compression: this will be done by deflate().
+*/
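+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   installing a preset dictionary immediately after initialization, as
+   required above for the zlib format.  dict, dict_len and dict_id are
+   assumptions of the example.
+
+       ret = deflateInit(&strm, Z_BEST_COMPRESSION);
+       if (ret == Z_OK)
+           ret = deflateSetDictionary(&strm, dict, dict_len);
+       dict_id = strm.adler;      adler32 of the dictionary (zlib format)
+*/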
+
+ZEXTERN int ZEXPORT deflateCopy OF((z_streamp dest,
+                                    z_streamp source));
+/*
+     Sets the destination stream as a complete copy of the source stream.
+
+     This function can be useful when several compression strategies will be
+   tried, for example when there are several ways of pre-processing the input
+   data with a filter.  The streams that will be discarded should then be freed
+   by calling deflateEnd.  Note that deflateCopy duplicates the internal
+   compression state which can be quite large, so this strategy is slow and can
+   consume lots of memory.
+
+     deflateCopy returns Z_OK if success, Z_MEM_ERROR if there was not
+   enough memory, Z_STREAM_ERROR if the source stream state was inconsistent
+   (such as zalloc being Z_NULL).  msg is left unchanged in both source and
+   destination.
+*/
+
+ZEXTERN int ZEXPORT deflateReset OF((z_streamp strm));
+/*
+     This function is equivalent to deflateEnd followed by deflateInit,
+   but does not free and reallocate all the internal compression state.  The
+   stream will keep the same compression level and any other attributes that
+   may have been set by deflateInit2.
+
+     deflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent (such as zalloc or state being Z_NULL).
+*/
+
+ZEXTERN int ZEXPORT deflateParams OF((z_streamp strm,
+                                      int level,
+                                      int strategy));
+/*
+     Dynamically update the compression level and compression strategy.  The
+   interpretation of level and strategy is as in deflateInit2.  This can be
+   used to switch between compression and straight copy of the input data, or
+   to switch to a different kind of input data requiring a different strategy.
+   If the compression level is changed, the input available so far is
+   compressed with the old level (and may be flushed); the new level will take
+   effect only at the next call of deflate().
+
+     Before the call of deflateParams, the stream state must be set as for
+   a call of deflate(), since the currently available input may have to be
+   compressed and flushed.  In particular, strm->avail_out must be non-zero.
+
+     deflateParams returns Z_OK if success, Z_STREAM_ERROR if the source
+   stream state was inconsistent or if a parameter was invalid, Z_BUF_ERROR if
+   strm->avail_out was zero.
+*/
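+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   switching mid-stream to run-length encoding, following the rules above.
+   out and CHUNK are assumptions of the example; avail_out must be non-zero
+   because pending input may be compressed and flushed.
+
+       strm.next_out  = out;
+       strm.avail_out = CHUNK;
+       ret = deflateParams(&strm, Z_BEST_SPEED, Z_RLE);
+*/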
+
+ZEXTERN int ZEXPORT deflateTune OF((z_streamp strm,
+                                    int good_length,
+                                    int max_lazy,
+                                    int nice_length,
+                                    int max_chain));
+/*
+     Fine tune deflate's internal compression parameters.  This should only be
+   used by someone who understands the algorithm used by zlib's deflate for
+   searching for the best matching string, and even then only by the most
+   fanatic optimizer trying to squeeze out the last compressed bit for their
+   specific input data.  Read the deflate.c source code for the meaning of the
+   max_lazy, good_length, nice_length, and max_chain parameters.
+
+     deflateTune() can be called after deflateInit() or deflateInit2(), and
+   returns Z_OK on success, or Z_STREAM_ERROR for an invalid deflate stream.
+ */
+
+ZEXTERN uLong ZEXPORT deflateBound OF((z_streamp strm,
+                                       uLong sourceLen));
+/*
+     deflateBound() returns an upper bound on the compressed size after
+   deflation of sourceLen bytes.  It must be called after deflateInit() or
+   deflateInit2(), and after deflateSetHeader(), if used.  This would be used
+   to allocate an output buffer for deflation in a single pass, and so would be
+   called before deflate().  If that first deflate() call is provided the
+   sourceLen input bytes, an output buffer allocated to the size returned by
+   deflateBound(), and the flush value Z_FINISH, then deflate() is guaranteed
+   to return Z_STREAM_END.  Note that it is possible for the compressed size to
+   be larger than the value returned by deflateBound() if flush options other
+   than Z_FINISH or Z_NO_FLUSH are used.
+*/
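+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   single-pass compression with an output buffer sized by deflateBound(),
+   which then guarantees Z_STREAM_END as described above.  source and
+   source_len are assumptions of the example; a successful deflateInit() and
+   the malloc() result check are assumed.
+
+       uLong bound = deflateBound(&strm, source_len);
+       unsigned char *out = (unsigned char *)malloc(bound);
+       strm.next_in   = source;
+       strm.avail_in  = source_len;
+       strm.next_out  = out;
+       strm.avail_out = (uInt)bound;
+       ret = deflate(&strm, Z_FINISH);    returns Z_STREAM_END
+*/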
+
+ZEXTERN int ZEXPORT deflatePending OF((z_streamp strm,
+                                       unsigned *pending,
+                                       int *bits));
+/*
+     deflatePending() returns the number of bytes and bits of output that have
+   been generated, but not yet provided in the available output.  The bytes not
+   provided would be due to the available output space having been consumed.
+   The number of bits of output not provided are between 0 and 7, where they
+   await more bits to join them in order to fill out a full byte.  If pending
+   or bits are Z_NULL, then those values are not set.
+
+     deflatePending returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent.
+ */
+
+ZEXTERN int ZEXPORT deflatePrime OF((z_streamp strm,
+                                     int bits,
+                                     int value));
+/*
+     deflatePrime() inserts bits in the deflate output stream.  The intent
+   is that this function is used to start off the deflate output with the bits
+   leftover from a previous deflate stream when appending to it.  As such, this
+   function can only be used for raw deflate, and must be used before the first
+   deflate() call after a deflateInit2() or deflateReset().  bits must be less
+   than or equal to 16, and that many of the least significant bits of value
+   will be inserted in the output.
+
+     deflatePrime returns Z_OK if success, Z_BUF_ERROR if there was not enough
+   room in the internal buffer to insert the bits, or Z_STREAM_ERROR if the
+   source stream state was inconsistent.
+*/
+
+ZEXTERN int ZEXPORT deflateSetHeader OF((z_streamp strm,
+                                         gz_headerp head));
+/*
+     deflateSetHeader() provides gzip header information for when a gzip
+   stream is requested by deflateInit2().  deflateSetHeader() may be called
+   after deflateInit2() or deflateReset() and before the first call of
+   deflate().  The text, time, os, extra field, name, and comment information
+   in the provided gz_header structure are written to the gzip header (xflag is
+   ignored -- the extra flags are set according to the compression level).  The
+   caller must assure that, if not Z_NULL, name and comment are terminated with
+   a zero byte, and that, if extra is not Z_NULL, extra_len bytes are
+   available there.  If hcrc is true, a gzip header crc is included.  Note that
+   the current versions of the command-line version of gzip (up through version
+   1.3.x) do not support header crc's, and will report that it is a "multi-part
+   gzip file" and give up.
+
+     If deflateSetHeader is not used, the default gzip header has text false,
+   the time set to zero, and os set to 255, with no extra, name, or comment
+   fields.  The gzip header is returned to the default state by deflateReset().
+
+     deflateSetHeader returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent.
+*/
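+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   supplying a file name and modification time for the gzip header under the
+   rules above.  mtime is an assumption of the example; the pointer fields
+   left as Z_NULL are omitted from the header.
+
+       gz_header head;
+       head.text    = 0;
+       head.time    = mtime;
+       head.xflags  = 0;          ignored when writing
+       head.os      = 3;          Unix, per RFC 1952
+       head.extra   = Z_NULL;
+       head.name    = (Bytef *)"data.txt";    zero-terminated
+       head.comment = Z_NULL;
+       head.hcrc    = 0;
+       ret = deflateSetHeader(&strm, &head);
+*/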
+
+/*
+ZEXTERN int ZEXPORT inflateInit2 OF((z_streamp strm,
+                                     int  windowBits));
+
+     This is another version of inflateInit with an extra parameter.  The
+   fields next_in, avail_in, zalloc, zfree and opaque must be initialized
+   beforehand by the caller.
+
+     The windowBits parameter is the base two logarithm of the maximum window
+   size (the size of the history buffer).  It should be in the range 8..15 for
+   this version of the library.  The default value is 15 if inflateInit is used
+   instead.  windowBits must be greater than or equal to the windowBits value
+   provided to deflateInit2() while compressing, or it must be equal to 15 if
+   deflateInit2() was not used.  If a compressed stream with a larger window
+   size is given as input, inflate() will return with the error code
+   Z_DATA_ERROR instead of trying to allocate a larger window.
+
+     windowBits can also be zero to request that inflate use the window size in
+   the zlib header of the compressed stream.
+
+     windowBits can also be -8..-15 for raw inflate.  In this case, -windowBits
+   determines the window size.  inflate() will then process raw deflate data,
+   not looking for a zlib or gzip header, not generating a check value, and not
+   looking for any check values for comparison at the end of the stream.  This
+   is for use with other formats that use the deflate compressed data format
+   such as zip.  Those formats provide their own check values.  If a custom
+   format is developed using the raw deflate format for compressed data, it is
+   recommended that a check value such as an adler32 or a crc32 be applied to
+   the uncompressed data as is done in the zlib, gzip, and zip formats.  For
+   most applications, the zlib format should be used as is.  Note that the
+   comments above on the use of windowBits in deflateInit2() apply to the
+   magnitude of windowBits here as well.
+
+     windowBits can also be greater than 15 for optional gzip decoding.  Add
+   32 to windowBits to enable zlib and gzip decoding with automatic header
+   detection, or add 16 to decode only the gzip format (the zlib format will
+   return a Z_DATA_ERROR).  If a gzip stream is being decoded, strm->adler is a
+   crc32 instead of an adler32.
+
+     inflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was not enough
+   memory, Z_VERSION_ERROR if the zlib library version is incompatible with the
+   version assumed by the caller, or Z_STREAM_ERROR if the parameters are
+   invalid, such as a null pointer to the structure.  msg is set to null if
+   there is no error message.  inflateInit2 does not perform any decompression
+   apart from possibly reading the zlib header if present: actual decompression
+   will be done by inflate().  (So next_in and avail_in may be modified, but
+   next_out and avail_out are unused and unchanged.) The current implementation
+   of inflateInit2() does not process any header information -- that is
+   deferred until inflate() is called.
+*/
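+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   enabling automatic zlib/gzip header detection as described above, by
+   adding 32 to windowBits.
+
+       ret = inflateInit2(&strm, 15 + 32);
+*/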
+
+ZEXTERN int ZEXPORT inflateSetDictionary OF((z_streamp strm,
+                                             const Bytef *dictionary,
+                                             uInt  dictLength));
+/*
+     Initializes the decompression dictionary from the given uncompressed byte
+   sequence.  This function must be called immediately after a call of inflate,
+   if that call returned Z_NEED_DICT.  The dictionary chosen by the compressor
+   can be determined from the adler32 value returned by that call of inflate.
+   The compressor and decompressor must use exactly the same dictionary (see
+   deflateSetDictionary).  For raw inflate, this function can be called at any
+   time to set the dictionary.  If the provided dictionary is smaller than the
+   window and there is already data in the window, then the provided dictionary
+   will amend what's there.  The application must ensure that the dictionary
+   that was used for compression is provided.
+
+     inflateSetDictionary returns Z_OK if success, Z_STREAM_ERROR if a
+   parameter is invalid (e.g.  dictionary being Z_NULL) or the stream state is
+   inconsistent, Z_DATA_ERROR if the given dictionary doesn't match the
+   expected one (incorrect adler32 value).  inflateSetDictionary does not
+   perform any decompression: this will be done by subsequent calls of
+   inflate().
+*/
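+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   answering Z_NEED_DICT as prescribed above.  find_dictionary() is a
+   hypothetical application lookup keyed by the adler32 value reported by
+   inflate(); dict_len is an assumption of the example.
+
+       ret = inflate(&strm, Z_NO_FLUSH);
+       if (ret == Z_NEED_DICT) {
+           const Bytef *dict = find_dictionary(strm.adler, &dict_len);
+           ret = inflateSetDictionary(&strm, dict, dict_len);
+       }
+*/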
+
+ZEXTERN int ZEXPORT inflateGetDictionary OF((z_streamp strm,
+                                             Bytef *dictionary,
+                                             uInt  *dictLength));
+/*
+     Returns the sliding dictionary being maintained by inflate.  dictLength is
+   set to the number of bytes in the dictionary, and that many bytes are copied
+   to dictionary.  dictionary must have enough space, where 32768 bytes is
+   always enough.  If inflateGetDictionary() is called with dictionary equal to
+   Z_NULL, then only the dictionary length is returned, and nothing is copied.
+   Similarly, if dictLength is Z_NULL, then it is not set.
+
+     inflateGetDictionary returns Z_OK on success, or Z_STREAM_ERROR if the
+   stream state is inconsistent.
+*/
+
+ZEXTERN int ZEXPORT inflateSync OF((z_streamp strm));
+/*
+     Skips invalid compressed data until a possible full flush point (see above
+   for the description of deflate with Z_FULL_FLUSH) can be found, or until all
+   available input is skipped.  No output is provided.
+
+     inflateSync searches for a 00 00 FF FF pattern in the compressed data.
+   All full flush points have this pattern, but not all occurrences of this
+   pattern are full flush points.
+
+     inflateSync returns Z_OK if a possible full flush point has been found,
+   Z_BUF_ERROR if no more input was provided, Z_DATA_ERROR if no flush point
+   has been found, or Z_STREAM_ERROR if the stream structure was inconsistent.
+   In the success case, the application may save the current value of
+   total_in which indicates where valid compressed data was found.  In the
+   error case, the application may repeatedly call inflateSync, providing more
+   input each time, until success or end of the input data.
+*/
+
+ZEXTERN int ZEXPORT inflateCopy OF((z_streamp dest,
+                                    z_streamp source));
+/*
+     Sets the destination stream as a complete copy of the source stream.
+
+     This function can be useful when randomly accessing a large stream.  The
+   first pass through the stream can periodically record the inflate state,
+   allowing restarting inflate at those points when randomly accessing the
+   stream.
+
+     inflateCopy returns Z_OK if success, Z_MEM_ERROR if there was not
+   enough memory, Z_STREAM_ERROR if the source stream state was inconsistent
+   (such as zalloc being Z_NULL).  msg is left unchanged in both source and
+   destination.
+*/
+
+ZEXTERN int ZEXPORT inflateReset OF((z_streamp strm));
+/*
+     This function is equivalent to inflateEnd followed by inflateInit,
+   but does not free and reallocate all the internal decompression state.  The
+   stream will keep attributes that may have been set by inflateInit2.
+
+     inflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent (such as zalloc or state being Z_NULL).
+*/
+
+ZEXTERN int ZEXPORT inflateReset2 OF((z_streamp strm,
+                                      int windowBits));
+/*
+     This function is the same as inflateReset, but it also permits changing
+   the wrap and window size requests.  The windowBits parameter is interpreted
+   the same as it is for inflateInit2.
+
+     inflateReset2 returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent (such as zalloc or state being Z_NULL), or if
+   the windowBits parameter is invalid.
+*/
+
+ZEXTERN int ZEXPORT inflatePrime OF((z_streamp strm,
+                                     int bits,
+                                     int value));
+/*
+     This function inserts bits in the inflate input stream.  The intent is
+   that this function is used to start inflating at a bit position in the
+   middle of a byte.  The provided bits will be used before any bytes are used
+   from next_in.  This function should only be used with raw inflate, and
+   should be used before the first inflate() call after inflateInit2() or
+   inflateReset().  bits must be less than or equal to 16, and that many of the
+   least significant bits of value will be inserted in the input.
+
+     If bits is negative, then the input stream bit buffer is emptied.  Then
+   inflatePrime() can be called again to put bits in the buffer.  This is used
+   to clear out bits leftover after feeding inflate a block description prior
+   to feeding inflate codes.
+
+     inflatePrime returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent.
+*/
+
+ZEXTERN long ZEXPORT inflateMark OF((z_streamp strm));
+/*
+     This function returns two values, one in the lower 16 bits of the return
+   value, and the other in the remaining upper bits, obtained by shifting the
+   return value down 16 bits.  If the upper value is -1 and the lower value is
+   zero, then inflate() is currently decoding information outside of a block.
+   If the upper value is -1 and the lower value is non-zero, then inflate is in
+   the middle of a stored block, with the lower value equaling the number of
+   bytes from the input remaining to copy.  If the upper value is not -1, then
+   it is the number of bits back from the current bit position in the input of
+   the code (literal or length/distance pair) currently being processed.  In
+   that case the lower value is the number of bytes already emitted for that
+   code.
+
+     A code is being processed if inflate is waiting for more input to complete
+   decoding of the code, or if it has completed decoding but is waiting for
+   more output space to write the literal or match data.
+
+     inflateMark() is used to mark locations in the input data for random
+   access, which may be at bit positions, and to note those cases where the
+   output of a code may span boundaries of random access blocks.  The current
+   location in the input stream can be determined from avail_in and data_type
+   as noted in the description for the Z_BLOCK flush parameter for inflate.
+
+     inflateMark returns the value noted above or -1 << 16 if the provided
+   source stream state was inconsistent.
+*/
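+
+/*
+     Sketch (an editorial example, not part of the upstream zlib text):
+   unpacking the two values returned by inflateMark() as described above.
+
+       long mark  = inflateMark(&strm);
+       long upper = mark >> 16;           -1, or bits back in the input
+       long lower = mark & 0xffff;        bytes remaining or emitted
+*/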
+
+ZEXTERN int ZEXPORT inflateGetHeader OF((z_streamp strm,
+                                         gz_headerp head));
+/*
+     inflateGetHeader() requests that gzip header information be stored in the
+   provided gz_header structure.  inflateGetHeader() may be called after
+   inflateInit2() or inflateReset(), and before the first call of inflate().
+   As inflate() processes the gzip stream, head->done is zero until the header
+   is completed, at which time head->done is set to one.  If a zlib stream is
+   being decoded, then head->done is set to -1 to indicate that there will be
+   no gzip header information forthcoming.  Note that Z_BLOCK or Z_TREES can be
+   used to force inflate() to return immediately after header processing is
+   complete and before any actual data is decompressed.
+
+     The text, time, xflags, and os fields are filled in with the gzip header
+   contents.  hcrc is set to true if there is a header CRC.  (The header CRC
+   was valid if done is set to one.) If extra is not Z_NULL, then extra_max
+   contains the maximum number of bytes to write to extra.  Once done is true,
+   extra_len contains the actual extra field length, and extra contains the
+   extra field, or that field truncated if extra_max is less than extra_len.
+   If name is not Z_NULL, then up to name_max characters are written there,
+   terminated with a zero unless the length is greater than name_max.  If
+   comment is not Z_NULL, then up to comm_max characters are written there,
+   terminated with a zero unless the length is greater than comm_max.  When any
+   of extra, name, or comment are not Z_NULL and the respective field is not
+   present in the header, then that field is set to Z_NULL to signal its
+   absence.  This allows the use of deflateSetHeader() with the returned
+   structure to duplicate the header.  However if those fields are set to
+   allocated memory, then the application will need to save those pointers
+   elsewhere so that they can be eventually freed.
+
+     If inflateGetHeader is not used, then the header information is simply
+   discarded.  The header is always checked for validity, including the header
+   CRC if present.  inflateReset() will reset the process to discard the header
+   information.  The application would need to call inflateGetHeader() again to
+   retrieve the header from the next gzip stream.
+
+     inflateGetHeader returns Z_OK if success, or Z_STREAM_ERROR if the source
+   stream state was inconsistent.
+*/
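+
+/*
+     Editor's sketch (not part of upstream zlib): requesting the gzip header
+   while doing gzip-only decoding (windowBits = 15 + 16).  Error handling is
+   abbreviated and the variable names are illustrative:
+
+     z_stream strm;
+     gz_header head;
+     memset(&strm, 0, sizeof(strm));
+     memset(&head, 0, sizeof(head));
+     if (inflateInit2(&strm, 15 + 16) == Z_OK &&
+         inflateGetHeader(&strm, &head) == Z_OK) {
+         ... run inflate() over the input as usual ...
+         if (head.done == 1)
+             printf("modification time: %lu\n", head.time);
+         inflateEnd(&strm);
+     }
+*/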
+
+/*
+ZEXTERN int ZEXPORT inflateBackInit OF((z_streamp strm, int windowBits,
+                                        unsigned char FAR *window));
+
+     Initialize the internal stream state for decompression using inflateBack()
+   calls.  The fields zalloc, zfree and opaque in strm must be initialized
+   before the call.  If zalloc and zfree are Z_NULL, then the default library-
+   derived memory allocation routines are used.  windowBits is the base two
+   logarithm of the window size, in the range 8..15.  window is a caller
+   supplied buffer of that size.  Except for special applications where it is
+   assured that deflate was used with small window sizes, windowBits must be 15
+   and a 32K byte window must be supplied to be able to decompress general
+   deflate streams.
+
+     See inflateBack() for the usage of these routines.
+
+     inflateBackInit will return Z_OK on success, Z_STREAM_ERROR if any of
+   the parameters are invalid, Z_MEM_ERROR if the internal state could not be
+   allocated, or Z_VERSION_ERROR if the version of the library does not match
+   the version of the header file.
+*/
+
+typedef unsigned (*in_func) OF((void FAR *,
+                                z_const unsigned char FAR * FAR *));
+typedef int (*out_func) OF((void FAR *, unsigned char FAR *, unsigned));
+
+ZEXTERN int ZEXPORT inflateBack OF((z_streamp strm,
+                                    in_func in, void FAR *in_desc,
+                                    out_func out, void FAR *out_desc));
+/*
+     inflateBack() does a raw inflate with a single call using a call-back
+   interface for input and output.  This is potentially more efficient than
+   inflate() for file i/o applications, in that it avoids copying between the
+   output and the sliding window by simply making the window itself the output
+   buffer.  inflate() can be faster on modern CPUs when used with large
+   buffers.  inflateBack() trusts the application to not change the output
+   buffer passed by the output function, at least until inflateBack() returns.
+
+     inflateBackInit() must be called first to allocate the internal state
+   and to initialize the state with the user-provided window buffer.
+   inflateBack() may then be used multiple times to inflate a complete, raw
+   deflate stream with each call.  inflateBackEnd() is then called to free the
+   allocated state.
+
+     A raw deflate stream is one with no zlib or gzip header or trailer.
+   This routine would normally be used in a utility that reads zip or gzip
+   files and writes out uncompressed files.  The utility would decode the
+   header and process the trailer on its own, hence this routine expects only
+   the raw deflate stream to decompress.  This is different from the normal
+   behavior of inflate(), which expects either a zlib or gzip header and
+   trailer around the deflate stream.
+
+     inflateBack() uses two subroutines supplied by the caller that are then
+   called by inflateBack() for input and output.  inflateBack() calls those
+   routines until it reads a complete deflate stream and writes out all of the
+   uncompressed data, or until it encounters an error.  The function's
+   parameters and return types are defined above in the in_func and out_func
+   typedefs.  inflateBack() will call in(in_desc, &buf) which should return the
+   number of bytes of provided input, and a pointer to that input in buf.  If
+   there is no input available, in() must return zero--buf is ignored in that
+   case--and inflateBack() will return a buffer error.  inflateBack() will call
+   out(out_desc, buf, len) to write the uncompressed data buf[0..len-1].  out()
+   should return zero on success, or non-zero on failure.  If out() returns
+   non-zero, inflateBack() will return with an error.  Neither in() nor out()
+   are permitted to change the contents of the window provided to
+   inflateBackInit(), which is also the buffer that out() uses to write from.
+   The length written by out() will be at most the window size.  Any non-zero
+   amount of input may be provided by in().
+
+     For convenience, inflateBack() can be provided input on the first call by
+   setting strm->next_in and strm->avail_in.  If that input is exhausted, then
+   in() will be called.  Therefore strm->next_in must be initialized before
+   calling inflateBack().  If strm->next_in is Z_NULL, then in() will be called
+   immediately for input.  If strm->next_in is not Z_NULL, then strm->avail_in
+   must also be initialized, and then if strm->avail_in is not zero, input will
+   initially be taken from strm->next_in[0 ..  strm->avail_in - 1].
+
+     The in_desc and out_desc parameters of inflateBack() are passed as the
+   first parameter of in() and out() respectively when they are called.  These
+   descriptors can be optionally used to pass any information that the caller-
+   supplied in() and out() functions need to do their job.
+
+     On return, inflateBack() will set strm->next_in and strm->avail_in to
+   pass back any unused input that was provided by the last in() call.  The
+   return values of inflateBack() can be Z_STREAM_END on success, Z_BUF_ERROR
+   if in() or out() returned an error, Z_DATA_ERROR if there was a format error
+   in the deflate stream (in which case strm->msg is set to indicate the nature
+   of the error), or Z_STREAM_ERROR if the stream was not properly initialized.
+   In the case of Z_BUF_ERROR, an input or output error can be distinguished
+   using strm->next_in which will be Z_NULL only if in() returned an error.  If
+   strm->next_in is not Z_NULL, then the Z_BUF_ERROR was due to out() returning
+   non-zero.  (in() will always be called before out(), so strm->next_in is
+   assured to be defined if out() returns non-zero.) Note that inflateBack()
+   cannot return Z_OK.
+*/
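+
+/*
+     Editor's sketch (not part of upstream zlib): minimal in()/out() callbacks
+   for inflateBack() that read from and write to stdio streams passed through
+   the descriptors (assumes <stdio.h>; the names read_in, write_out, and CHUNK
+   are made up for illustration):
+
+     #define CHUNK 16384
+     static unsigned char hold[CHUNK];
+
+     static unsigned read_in(void *in_desc, z_const unsigned char **buf)
+     {
+         *buf = hold;
+         return (unsigned)fread(hold, 1, CHUNK, (FILE *)in_desc);
+     }
+
+     static int write_out(void *out_desc, unsigned char *buf, unsigned len)
+     {
+         return fwrite(buf, 1, len, (FILE *)out_desc) != len;
+     }
+
+   After inflateBackInit(&strm, 15, window), a whole raw deflate stream can
+   then be decoded with:
+
+     ret = inflateBack(&strm, read_in, stdin, write_out, stdout);
+*/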
+
+ZEXTERN int ZEXPORT inflateBackEnd OF((z_streamp strm));
+/*
+     All memory allocated by inflateBackInit() is freed.
+
+     inflateBackEnd() returns Z_OK on success, or Z_STREAM_ERROR if the stream
+   state was inconsistent.
+*/
+
+ZEXTERN uLong ZEXPORT zlibCompileFlags OF((void));
+/* Return flags indicating compile-time options.
+
+    Type sizes, two bits each, 00 = 16 bits, 01 = 32, 10 = 64, 11 = other:
+     1.0: size of uInt
+     3.2: size of uLong
+     5.4: size of voidpf (pointer)
+     7.6: size of z_off_t
+
+    Compiler, assembler, and debug options:
+     8: DEBUG
+     9: ASMV or ASMINF -- use ASM code
+     10: ZLIB_WINAPI -- exported functions use the WINAPI calling convention
+     11: 0 (reserved)
+
+    One-time table building (smaller code, but not thread-safe if true):
+     12: BUILDFIXED -- build static block decoding tables when needed
+     13: DYNAMIC_CRC_TABLE -- build CRC calculation tables when needed
+     14,15: 0 (reserved)
+
+    Library content (indicates missing functionality):
+     16: NO_GZCOMPRESS -- gz* functions cannot compress (to avoid linking
+                          deflate code when not needed)
+     17: NO_GZIP -- deflate can't write gzip streams, and inflate can't detect
+                    and decode gzip streams (to avoid linking crc code)
+     18-19: 0 (reserved)
+
+    Operation variations (changes in library functionality):
+     20: PKZIP_BUG_WORKAROUND -- slightly more permissive inflate
+     21: FASTEST -- deflate algorithm with only one, lowest compression level
+     22,23: 0 (reserved)
+
+    The sprintf variant used by gzprintf (zero is best):
+     24: 0 = vs*, 1 = s* -- 1 means limited to 20 arguments after the format
+     25: 0 = *nprintf, 1 = *printf -- 1 means gzprintf() not secure!
+     26: 0 = returns value, 1 = void -- 1 means inferred string length returned
+
+    Remainder:
+     27-31: 0 (reserved)
+ */
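+
+/*
+     Editor's sketch (not part of upstream zlib): testing one of the flag bits
+   listed above, here bit 25 (gzprintf() built on the insecure sprintf() or
+   vsprintf() variants):
+
+     if (zlibCompileFlags() & (1L << 25))
+         fprintf(stderr, "warning: gzprintf() output is not length-checked\n");
+*/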
+
+#ifndef Z_SOLO
+
+                        /* utility functions */
+
+/*
+     The following utility functions are implemented on top of the basic
+   stream-oriented functions.  To simplify the interface, some default options
+   are assumed (compression level and memory usage, standard memory allocation
+   functions).  The source code of these utility functions can be modified if
+   you need special options.
+*/
+
+ZEXTERN int ZEXPORT compress OF((Bytef *dest,   uLongf *destLen,
+                                 const Bytef *source, uLong sourceLen));
+/*
+     Compresses the source buffer into the destination buffer.  sourceLen is
+   the byte length of the source buffer.  Upon entry, destLen is the total size
+   of the destination buffer, which must be at least the value returned by
+   compressBound(sourceLen).  Upon exit, destLen is the actual size of the
+   compressed buffer.
+
+     compress returns Z_OK if success, Z_MEM_ERROR if there was not
+   enough memory, or Z_BUF_ERROR if there was not enough room in the output
+   buffer.
+*/
+
+ZEXTERN int ZEXPORT compress2 OF((Bytef *dest,   uLongf *destLen,
+                                  const Bytef *source, uLong sourceLen,
+                                  int level));
+/*
+     Compresses the source buffer into the destination buffer.  The level
+   parameter has the same meaning as in deflateInit.  sourceLen is the byte
+   length of the source buffer.  Upon entry, destLen is the total size of the
+   destination buffer, which must be at least the value returned by
+   compressBound(sourceLen).  Upon exit, destLen is the actual size of the
+   compressed buffer.
+
+     compress2 returns Z_OK if success, Z_MEM_ERROR if there was not enough
+   memory, Z_BUF_ERROR if there was not enough room in the output buffer,
+   or Z_STREAM_ERROR if the level parameter is invalid.
+*/
+
+ZEXTERN uLong ZEXPORT compressBound OF((uLong sourceLen));
+/*
+     compressBound() returns an upper bound on the compressed size after
+   compress() or compress2() on sourceLen bytes.  It would be used before a
+   compress() or compress2() call to allocate the destination buffer.
+*/
+
+ZEXTERN int ZEXPORT uncompress OF((Bytef *dest,   uLongf *destLen,
+                                   const Bytef *source, uLong sourceLen));
+/*
+     Decompresses the source buffer into the destination buffer.  sourceLen is
+   the byte length of the source buffer.  Upon entry, destLen is the total size
+   of the destination buffer, which must be large enough to hold the entire
+   uncompressed data.  (The size of the uncompressed data must have been saved
+   previously by the compressor and transmitted to the decompressor by some
+   mechanism outside the scope of this compression library.) Upon exit, destLen
+   is the actual size of the uncompressed buffer.
+
+     uncompress returns Z_OK if success, Z_MEM_ERROR if there was not
+   enough memory, Z_BUF_ERROR if there was not enough room in the output
+   buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete.  In
+   the case where there is not enough room, uncompress() will fill the output
+   buffer with the uncompressed data up to that point.
+*/
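+
+/*
+     Editor's sketch (not part of upstream zlib): a compress()/uncompress()
+   round trip with the destination sized by compressBound().  Allocation
+   failures and mismatches are handled with assert() for brevity (assumes
+   <assert.h>, <stdlib.h>, and <string.h>):
+
+     const Bytef *src = (const Bytef *)"hello, hello, hello, hello";
+     uLong srcLen = (uLong)strlen((const char *)src) + 1;
+     uLong compLen = compressBound(srcLen), backLen = srcLen;
+     Bytef *comp = (Bytef *)malloc(compLen);
+     Bytef *back = (Bytef *)malloc(backLen);
+
+     assert(comp != NULL && back != NULL);
+     assert(compress(comp, &compLen, src, srcLen) == Z_OK);
+     assert(uncompress(back, &backLen, comp, compLen) == Z_OK);
+     assert(backLen == srcLen && memcmp(src, back, srcLen) == 0);
+     free(comp); free(back);
+*/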
+
+                        /* gzip file access functions */
+
+/*
+     This library supports reading and writing files in gzip (.gz) format with
+   an interface similar to that of stdio, using the functions that start with
+   "gz".  The gzip format is different from the zlib format.  gzip is a gzip
+   wrapper, documented in RFC 1952, wrapped around a deflate stream.
+*/
+
+typedef struct gzFile_s *gzFile;    /* semi-opaque gzip file descriptor */
+
+/*
+ZEXTERN gzFile ZEXPORT gzopen OF((const char *path, const char *mode));
+
+     Opens a gzip (.gz) file for reading or writing.  The mode parameter is as
+   in fopen ("rb" or "wb") but can also include a compression level ("wb9") or
+   a strategy: 'f' for filtered data as in "wb6f", 'h' for Huffman-only
+   compression as in "wb1h", 'R' for run-length encoding as in "wb1R", or 'F'
+   for fixed code compression as in "wb9F".  (See the description of
+   deflateInit2 for more information about the strategy parameter.)  'T' will
+   request transparent writing or appending with no compression and not using
+   the gzip format.
+
+     "a" can be used instead of "w" to request that the gzip stream that will
+   be written be appended to the file.  "+" will result in an error, since
+   reading and writing to the same gzip file is not supported.  The addition of
+   "x" when writing will create the file exclusively, which fails if the file
+   already exists.  On systems that support it, the addition of "e" when
+   reading or writing will set the flag to close the file on an execve() call.
+
+     These functions, as well as gzip, will read and decode a sequence of gzip
+   streams in a file.  The append function of gzopen() can be used to create
+   such a file.  (Also see gzflush() for another way to do this.)  When
+   appending, gzopen does not test whether the file begins with a gzip stream,
+   nor does it look for the end of the gzip streams to begin appending.  gzopen
+   will simply append a gzip stream to the existing file.
+
+     gzopen can be used to read a file which is not in gzip format; in this
+   case gzread will directly read from the file without decompression.  When
+   reading, this will be detected automatically by looking for the magic two-
+   byte gzip header.
+
+     gzopen returns NULL if the file could not be opened, if there was
+   insufficient memory to allocate the gzFile state, or if an invalid mode was
+   specified (an 'r', 'w', or 'a' was not provided, or '+' was provided).
+   errno can be checked to determine if the reason gzopen failed was that the
+   file could not be opened.
+*/
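+
+/*
+     Editor's sketch (not part of upstream zlib): writing a small gzip file at
+   compression level 9 with the Huffman-only strategy described above (the
+   file name is illustrative):
+
+     gzFile gz = gzopen("out.gz", "wb9h");
+     if (gz != NULL) {
+         gzputs(gz, "hello gzip\n");
+         gzclose(gz);
+     }
+*/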
+
+ZEXTERN gzFile ZEXPORT gzdopen OF((int fd, const char *mode));
+/*
+     gzdopen associates a gzFile with the file descriptor fd.  File descriptors
+   are obtained from calls like open, dup, creat, pipe or fileno (if the file
+   has been previously opened with fopen).  The mode parameter is as in gzopen.
+
+     The next call of gzclose on the returned gzFile will also close the file
+   descriptor fd, just like fclose(fdopen(fd, mode)) closes the file descriptor
+   fd.  If you want to keep fd open, use fd = dup(fd_keep); gz = gzdopen(fd,
+   mode);.  The duplicated descriptor should be saved to avoid a leak, since
+   gzdopen does not close fd if it fails.  If you are using fileno() to get the
+   file descriptor from a FILE *, then you will have to use dup() to avoid
+   double-close()ing the file descriptor.  Both gzclose() and fclose() will
+   close the associated file descriptor, so they need to have different file
+   descriptors.
+
+     gzdopen returns NULL if there was insufficient memory to allocate the
+   gzFile state, if an invalid mode was specified (an 'r', 'w', or 'a' was not
+   provided, or '+' was provided), or if fd is -1.  The file descriptor is not
+   used until the next gz* read, write, seek, or close operation, so gzdopen
+   will not detect if fd is invalid (unless fd is -1).
+*/
+
+ZEXTERN int ZEXPORT gzbuffer OF((gzFile file, unsigned size));
+/*
+     Set the internal buffer size used by this library's functions.  The
+   default buffer size is 8192 bytes.  This function must be called after
+   gzopen() or gzdopen(), and before any other calls that read or write the
+   file.  The buffer memory allocation is always deferred to the first read or
+   write.  Two buffers are allocated, either both of the specified size when
+   writing, or one of the specified size and the other twice that size when
+   reading.  A larger buffer size of, for example, 64K or 128K bytes will
+   noticeably increase the speed of decompression (reading).
+
+     The new buffer size also affects the maximum length for gzprintf().
+
+     gzbuffer() returns 0 on success, or -1 on failure, such as being called
+   too late.
+*/
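+
+/*
+     Editor's sketch (not part of upstream zlib): enlarging the buffers right
+   after opening and before the first read, per the note above on larger
+   buffers speeding up decompression:
+
+     gzFile gz = gzopen("big.gz", "rb");
+     if (gz != NULL)
+         gzbuffer(gz, 131072);
+*/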
+
+ZEXTERN int ZEXPORT gzsetparams OF((gzFile file, int level, int strategy));
+/*
+     Dynamically update the compression level or strategy.  See the description
+   of deflateInit2 for the meaning of these parameters.
+
+     gzsetparams returns Z_OK if success, or Z_STREAM_ERROR if the file was not
+   opened for writing.
+*/
+
+ZEXTERN int ZEXPORT gzread OF((gzFile file, voidp buf, unsigned len));
+/*
+     Reads the given number of uncompressed bytes from the compressed file.  If
+   the input file is not in gzip format, gzread copies the given number of
+   bytes into the buffer directly from the file.
+
+     After reaching the end of a gzip stream in the input, gzread will continue
+   to read, looking for another gzip stream.  Any number of gzip streams may be
+   concatenated in the input file, and will all be decompressed by gzread().
+   If something other than a gzip stream is encountered after a gzip stream,
+   that remaining trailing garbage is ignored (and no error is returned).
+
+     gzread can be used to read a gzip file that is being concurrently written.
+   Upon reaching the end of the input, gzread will return with the available
+   data.  If the error code returned by gzerror is Z_OK or Z_BUF_ERROR, then
+   gzclearerr can be used to clear the end of file indicator in order to permit
+   gzread to be tried again.  Z_OK indicates that a gzip stream was completed
+   on the last gzread.  Z_BUF_ERROR indicates that the input file ended in the
+   middle of a gzip stream.  Note that gzread does not return -1 in the event
+   of an incomplete gzip stream.  This error is deferred until gzclose(), which
+   will return Z_BUF_ERROR if the last gzread ended in the middle of a gzip
+   stream.  Alternatively, gzerror can be used before gzclose to detect this
+   case.
+
+     gzread returns the number of uncompressed bytes actually read, less than
+   len for end of file, or -1 for error.
+*/
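+
+/*
+     Editor's sketch (not part of upstream zlib): draining a gzip file with
+   gzread(), distinguishing end of file from a read error (assumes <stdio.h>):
+
+     char buf[8192];
+     int n, errnum;
+     gzFile gz = gzopen("in.gz", "rb");
+
+     if (gz != NULL) {
+         while ((n = gzread(gz, buf, sizeof(buf))) > 0)
+             fwrite(buf, 1, n, stdout);
+         if (n < 0)
+             fprintf(stderr, "gzread: %s\n", gzerror(gz, &errnum));
+         gzclose(gz);
+     }
+*/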
+
+ZEXTERN int ZEXPORT gzwrite OF((gzFile file,
+                                voidpc buf, unsigned len));
+/*
+     Writes the given number of uncompressed bytes into the compressed file.
+   gzwrite returns the number of uncompressed bytes written or 0 in case of
+   error.
+*/
+
+ZEXTERN int ZEXPORTVA gzprintf Z_ARG((gzFile file, const char *format, ...));
+/*
+     Converts, formats, and writes the arguments to the compressed file under
+   control of the format string, as in fprintf.  gzprintf returns the number of
+   uncompressed bytes actually written, or 0 in case of error.  The number of
+   uncompressed bytes written is limited to 8191, or one less than the buffer
+   size given to gzbuffer().  The caller should assure that this limit is not
+   exceeded.  If it is exceeded, then gzprintf() will return an error (0) with
+   nothing written.  In this case, there may also be a buffer overflow with
+   unpredictable consequences, which is possible only if zlib was compiled with
+   the insecure functions sprintf() or vsprintf() because the secure snprintf()
+   or vsnprintf() functions were not available.  This can be determined using
+   zlibCompileFlags().
+*/
+
+ZEXTERN int ZEXPORT gzputs OF((gzFile file, const char *s));
+/*
+     Writes the given null-terminated string to the compressed file, excluding
+   the terminating null character.
+
+     gzputs returns the number of characters written, or -1 in case of error.
+*/
+
+ZEXTERN char * ZEXPORT gzgets OF((gzFile file, char *buf, int len));
+/*
+     Reads bytes from the compressed file until len-1 characters are read, or a
+   newline character is read and transferred to buf, or an end-of-file
+   condition is encountered.  If any characters are read or if len == 1, the
+   string is terminated with a null character.  If no characters are read due
+   to an end-of-file or len < 1, then the buffer is left untouched.
+
+     gzgets returns buf which is a null-terminated string, or it returns NULL
+   for end-of-file or in case of error.  If there was an error, the contents at
+   buf are indeterminate.
+*/
+
+ZEXTERN int ZEXPORT gzputc OF((gzFile file, int c));
+/*
+     Writes c, converted to an unsigned char, into the compressed file.  gzputc
+   returns the value that was written, or -1 in case of error.
+*/
+
+ZEXTERN int ZEXPORT gzgetc OF((gzFile file));
+/*
+     Reads one byte from the compressed file.  gzgetc returns this byte or -1
+   in case of end of file or error.  This is implemented as a macro for speed.
+   As such, it does not do all of the checking the other functions do.  I.e.
+   it does not check to see if file is NULL, nor whether the structure file
+   points to has been clobbered or not.
+*/
+
+ZEXTERN int ZEXPORT gzungetc OF((int c, gzFile file));
+/*
+     Push one character back onto the stream to be read as the first character
+   on the next read.  At least one character of push-back is allowed.
+   gzungetc() returns the character pushed, or -1 on failure.  gzungetc() will
+   fail if c is -1, and may fail if a character has been pushed but not read
+   yet.  If gzungetc is used immediately after gzopen or gzdopen, at least the
+   output buffer size of pushed characters is allowed.  (See gzbuffer above.)
+   The pushed character will be discarded if the stream is repositioned with
+   gzseek() or gzrewind().
+*/
+
+ZEXTERN int ZEXPORT gzflush OF((gzFile file, int flush));
+/*
+     Flushes all pending output into the compressed file.  The parameter flush
+   is as in the deflate() function.  The return value is the zlib error number
+   (see function gzerror below).  gzflush is only permitted when writing.
+
+     If the flush parameter is Z_FINISH, the remaining data is written and the
+   gzip stream is completed in the output.  If gzwrite() is called again, a new
+   gzip stream will be started in the output.  gzread() is able to read such
+   concatenated gzip streams.
+
+     gzflush should be called only when strictly necessary because it will
+   degrade compression if called too often.
+*/
+
+/*
+ZEXTERN z_off_t ZEXPORT gzseek OF((gzFile file,
+                                   z_off_t offset, int whence));
+
+     Sets the starting position for the next gzread or gzwrite on the given
+   compressed file.  The offset represents a number of bytes in the
+   uncompressed data stream.  The whence parameter is defined as in lseek(2);
+   the value SEEK_END is not supported.
+
+     If the file is opened for reading, this function is emulated but can be
+   extremely slow.  If the file is opened for writing, only forward seeks are
+   supported; gzseek then compresses a sequence of zeroes up to the new
+   starting position.
+
+     gzseek returns the resulting offset location as measured in bytes from
+   the beginning of the uncompressed stream, or -1 in case of error, in
+   particular if the file is opened for writing and the new starting position
+   would be before the current position.
+*/
+
+ZEXTERN int ZEXPORT    gzrewind OF((gzFile file));
+/*
+     Rewinds the given file. This function is supported only for reading.
+
+     gzrewind(file) is equivalent to (int)gzseek(file, 0L, SEEK_SET)
+*/
+
+/*
+ZEXTERN z_off_t ZEXPORT    gztell OF((gzFile file));
+
+     Returns the starting position for the next gzread or gzwrite on the given
+   compressed file.  This position represents a number of bytes in the
+   uncompressed data stream, and is zero when starting, even if appending or
+   reading a gzip stream from the middle of a file using gzdopen().
+
+     gztell(file) is equivalent to gzseek(file, 0L, SEEK_CUR)
+*/
+
+/*
+ZEXTERN z_off_t ZEXPORT gzoffset OF((gzFile file));
+
+     Returns the current offset in the file being read or written.  This offset
+   includes the count of bytes that precede the gzip stream, for example when
+   appending or when using gzdopen() for reading.  When reading, the offset
+   does not include as yet unused buffered input.  This information can be used
+   for a progress indicator.  On error, gzoffset() returns -1.
+*/
+
+ZEXTERN int ZEXPORT gzeof OF((gzFile file));
+/*
+     Returns true (1) if the end-of-file indicator has been set while reading,
+   false (0) otherwise.  Note that the end-of-file indicator is set only if the
+   read tried to go past the end of the input, but came up short.  Therefore,
+   just like feof(), gzeof() may return false even if there is no more data to
+   read, in the event that the last read request was for the exact number of
+   bytes remaining in the input file.  This will happen if the input file size
+   is an exact multiple of the buffer size.
+
+     If gzeof() returns true, then the read functions will return no more data,
+   unless the end-of-file indicator is reset by gzclearerr() and the input file
+   has grown since the previous end of file was detected.
+*/
+
+ZEXTERN int ZEXPORT gzdirect OF((gzFile file));
+/*
+     Returns true (1) if file is being copied directly while reading, or false
+   (0) if file is a gzip stream being decompressed.
+
+     If the input file is empty, gzdirect() will return true, since the input
+   does not contain a gzip stream.
+
+     If gzdirect() is used immediately after gzopen() or gzdopen() it will
+   cause buffers to be allocated to allow reading the file to determine if it
+   is a gzip file.  Therefore if gzbuffer() is used, it should be called before
+   gzdirect().
+
+     When writing, gzdirect() returns true (1) if transparent writing was
+   requested ("wT" for the gzopen() mode), or false (0) otherwise.  (Note:
+   gzdirect() is not needed when writing.  Transparent writing must be
+   explicitly requested, so the application already knows the answer.  When
+   linking statically, using gzdirect() will include all of the zlib code for
+   gzip file reading and decompression, which may not be desired.)
+*/
+
+ZEXTERN int ZEXPORT    gzclose OF((gzFile file));
+/*
+     Flushes all pending output if necessary, closes the compressed file and
+   deallocates the (de)compression state.  Note that once file is closed, you
+   cannot call gzerror with file, since its structures have been deallocated.
+   gzclose must not be called more than once on the same file, just as free
+   must not be called more than once on the same allocation.
+
+     gzclose will return Z_STREAM_ERROR if file is not valid, Z_ERRNO on a
+   file operation error, Z_MEM_ERROR if out of memory, Z_BUF_ERROR if the
+   last read ended in the middle of a gzip stream, or Z_OK on success.
+*/
+
+ZEXTERN int ZEXPORT gzclose_r OF((gzFile file));
+ZEXTERN int ZEXPORT gzclose_w OF((gzFile file));
+/*
+     Same as gzclose(), but gzclose_r() is only for use when reading, and
+   gzclose_w() is only for use when writing or appending.  The advantage to
+   using these instead of gzclose() is that they avoid linking in zlib
+   compression or decompression code that is not used when only reading or only
+   writing respectively.  If gzclose() is used, then both compression and
+   decompression code will be included in the application when linking to a
+   static zlib library.
+*/
+
+ZEXTERN const char * ZEXPORT gzerror OF((gzFile file, int *errnum));
+/*
+     Returns the error message for the last error which occurred on the given
+   compressed file.  errnum is set to the zlib error number.  If an error
+   occurred in the file system and not in the compression library, errnum is
+   set to Z_ERRNO and the application may consult errno to get the exact
+   error code.
+
+     The application must not modify the returned string.  Future calls to
+   this function may invalidate the previously returned string.  If file is
+   closed, then the string previously returned by gzerror will no longer be
+   available.
+
+     gzerror() should be used to distinguish errors from end-of-file for those
+   functions above that do not distinguish those cases in their return values.
+*/
+
+ZEXTERN void ZEXPORT gzclearerr OF((gzFile file));
+/*
+     Clears the error and end-of-file flags for file.  This is analogous to the
+   clearerr() function in stdio.  This is useful for continuing to read a gzip
+   file that is being written concurrently.
+*/
+
+#endif /* !Z_SOLO */
+
+                        /* checksum functions */
+
+/*
+     These functions are not related to compression but are exported
+   anyway because they might be useful in applications using the compression
+   library.
+*/
+
+ZEXTERN uLong ZEXPORT adler32 OF((uLong adler, const Bytef *buf, uInt len));
+/*
+     Update a running Adler-32 checksum with the bytes buf[0..len-1] and
+   return the updated checksum.  If buf is Z_NULL, this function returns the
+   required initial value for the checksum.
+
+     An Adler-32 checksum is almost as reliable as a CRC32 but can be computed
+   much faster.
+
+   Usage example:
+
+     uLong adler = adler32(0L, Z_NULL, 0);
+
+     while (read_buffer(buffer, length) != EOF) {
+       adler = adler32(adler, buffer, length);
+     }
+     if (adler != original_adler) error();
+*/
+
+/*
+ZEXTERN uLong ZEXPORT adler32_combine OF((uLong adler1, uLong adler2,
+                                          z_off_t len2));
+
+     Combine two Adler-32 checksums into one.  For two sequences of bytes, seq1
+   and seq2 with lengths len1 and len2, Adler-32 checksums were calculated for
+   each, adler1 and adler2.  adler32_combine() returns the Adler-32 checksum of
+   seq1 and seq2 concatenated, requiring only adler1, adler2, and len2.  Note
+   that the z_off_t type (like off_t) is a signed integer.  If len2 is
+   negative, the result has no meaning or utility.
+*/
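+
+/*
+     Editor's sketch (not part of upstream zlib): checksumming two halves of a
+   buffer independently and combining the results, which must equal the
+   checksum of the whole (buf, len1, and len2 are illustrative):
+
+     uLong a1 = adler32(adler32(0L, Z_NULL, 0), buf, len1);
+     uLong a2 = adler32(adler32(0L, Z_NULL, 0), buf + len1, len2);
+     uLong whole = adler32(adler32(0L, Z_NULL, 0), buf, len1 + len2);
+
+     assert(adler32_combine(a1, a2, (z_off_t)len2) == whole);
+*/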
+
+ZEXTERN uLong ZEXPORT crc32   OF((uLong crc, const Bytef *buf, uInt len));
+/*
+     Update a running CRC-32 with the bytes buf[0..len-1] and return the
+   updated CRC-32.  If buf is Z_NULL, this function returns the required
+   initial value for the crc.  Pre- and post-conditioning (one's complement) is
+   performed within this function so it shouldn't be done by the application.
+
+   Usage example:
+
+     uLong crc = crc32(0L, Z_NULL, 0);
+
+     while (read_buffer(buffer, length) != EOF) {
+       crc = crc32(crc, buffer, length);
+     }
+     if (crc != original_crc) error();
+*/
+
+/*
+ZEXTERN uLong ZEXPORT crc32_combine OF((uLong crc1, uLong crc2, z_off_t len2));
+
+     Combine two CRC-32 check values into one.  For two sequences of bytes,
+   seq1 and seq2 with lengths len1 and len2, CRC-32 check values were
+   calculated for each, crc1 and crc2.  crc32_combine() returns the CRC-32
+   check value of seq1 and seq2 concatenated, requiring only crc1, crc2, and
+   len2.
+*/
+
+
+                        /* various hacks, don't look :) */
+
+/* deflateInit and inflateInit are macros to allow checking the zlib version
+ * and the compiler's view of z_stream:
+ */
+ZEXTERN int ZEXPORT deflateInit_ OF((z_streamp strm, int level,
+                                     const char *version, int stream_size));
+ZEXTERN int ZEXPORT inflateInit_ OF((z_streamp strm,
+                                     const char *version, int stream_size));
+ZEXTERN int ZEXPORT deflateInit2_ OF((z_streamp strm, int  level, int  method,
+                                      int windowBits, int memLevel,
+                                      int strategy, const char *version,
+                                      int stream_size));
+ZEXTERN int ZEXPORT inflateInit2_ OF((z_streamp strm, int  windowBits,
+                                      const char *version, int stream_size));
+ZEXTERN int ZEXPORT inflateBackInit_ OF((z_streamp strm, int windowBits,
+                                         unsigned char FAR *window,
+                                         const char *version,
+                                         int stream_size));
+#define deflateInit(strm, level) \
+        deflateInit_((strm), (level), ZLIB_VERSION, (int)sizeof(z_stream))
+#define inflateInit(strm) \
+        inflateInit_((strm), ZLIB_VERSION, (int)sizeof(z_stream))
+#define deflateInit2(strm, level, method, windowBits, memLevel, strategy) \
+        deflateInit2_((strm),(level),(method),(windowBits),(memLevel),\
+                      (strategy), ZLIB_VERSION, (int)sizeof(z_stream))
+#define inflateInit2(strm, windowBits) \
+        inflateInit2_((strm), (windowBits), ZLIB_VERSION, \
+                      (int)sizeof(z_stream))
+#define inflateBackInit(strm, windowBits, window) \
+        inflateBackInit_((strm), (windowBits), (window), \
+                      ZLIB_VERSION, (int)sizeof(z_stream))
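+
+/*
+     Editor's sketch (not part of upstream zlib): the macros above are what
+   application code calls directly; a one-shot deflate of a buffer whose
+   compressed form is known to fit in the output looks like this (error
+   handling abbreviated; src, srcLen, dst, dstLen, and written are
+   illustrative):
+
+     z_stream strm;
+     memset(&strm, 0, sizeof(strm));
+     if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) == Z_OK) {
+         strm.next_in = (Bytef *)src;    strm.avail_in = (uInt)srcLen;
+         strm.next_out = dst;            strm.avail_out = (uInt)dstLen;
+         if (deflate(&strm, Z_FINISH) == Z_STREAM_END)
+             written = strm.total_out;
+         deflateEnd(&strm);
+     }
+*/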
+
+#ifndef Z_SOLO
+
+/* gzgetc() macro and its supporting function and exposed data structure.  Note
+ * that the real internal state is much larger than the exposed structure.
+ * This abbreviated structure exposes just enough for the gzgetc() macro.  The
+ * user should not mess with these exposed elements, since their names or
+ * behavior could change in the future, perhaps even capriciously.  They can
+ * only be used by the gzgetc() macro.  You have been warned.
+ */
+struct gzFile_s {
+    unsigned have;
+    unsigned char *next;
+    z_off64_t pos;
+};
+ZEXTERN int ZEXPORT gzgetc_ OF((gzFile file));  /* backward compatibility */
+#ifdef Z_PREFIX_SET
+#  undef z_gzgetc
+#  define z_gzgetc(g) \
+          ((g)->have ? ((g)->have--, (g)->pos++, *((g)->next)++) : gzgetc(g))
+#else
+#  define gzgetc(g) \
+          ((g)->have ? ((g)->have--, (g)->pos++, *((g)->next)++) : gzgetc(g))
+#endif
+
+/* provide 64-bit offset functions if _LARGEFILE64_SOURCE defined, and/or
+ * change the regular functions to 64 bits if _FILE_OFFSET_BITS is 64 (if
+ * both are true, the application gets the *64 functions, and the regular
+ * functions are changed to 64 bits) -- in case these are set on systems
+ * without large file support, _LFS64_LARGEFILE must also be true
+ */
+#ifdef Z_LARGE64
+   ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
+   ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int));
+   ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile));
+   ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile));
+   ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off64_t));
+   ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off64_t));
+#endif
+
+#if !defined(ZLIB_INTERNAL) && defined(Z_WANT64)
+#  ifdef Z_PREFIX_SET
+#    define z_gzopen z_gzopen64
+#    define z_gzseek z_gzseek64
+#    define z_gztell z_gztell64
+#    define z_gzoffset z_gzoffset64
+#    define z_adler32_combine z_adler32_combine64
+#    define z_crc32_combine z_crc32_combine64
+#  else
+#    define gzopen gzopen64
+#    define gzseek gzseek64
+#    define gztell gztell64
+#    define gzoffset gzoffset64
+#    define adler32_combine adler32_combine64
+#    define crc32_combine crc32_combine64
+#  endif
+#  ifndef Z_LARGE64
+     ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
+     ZEXTERN z_off_t ZEXPORT gzseek64 OF((gzFile, z_off_t, int));
+     ZEXTERN z_off_t ZEXPORT gztell64 OF((gzFile));
+     ZEXTERN z_off_t ZEXPORT gzoffset64 OF((gzFile));
+     ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off_t));
+     ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off_t));
+#  endif
+#else
+   ZEXTERN gzFile ZEXPORT gzopen OF((const char *, const char *));
+   ZEXTERN z_off_t ZEXPORT gzseek OF((gzFile, z_off_t, int));
+   ZEXTERN z_off_t ZEXPORT gztell OF((gzFile));
+   ZEXTERN z_off_t ZEXPORT gzoffset OF((gzFile));
+   ZEXTERN uLong ZEXPORT adler32_combine OF((uLong, uLong, z_off_t));
+   ZEXTERN uLong ZEXPORT crc32_combine OF((uLong, uLong, z_off_t));
+#endif
+
+#else /* Z_SOLO */
+
+   ZEXTERN uLong ZEXPORT adler32_combine OF((uLong, uLong, z_off_t));
+   ZEXTERN uLong ZEXPORT crc32_combine OF((uLong, uLong, z_off_t));
+
+#endif /* !Z_SOLO */
+
+/* hack for buggy compilers */
+#if !defined(ZUTIL_H) && !defined(NO_DUMMY_DECL)
+    struct internal_state {int dummy;};
+#endif
+
+/* undocumented functions */
+ZEXTERN const char   * ZEXPORT zError           OF((int));
+ZEXTERN int            ZEXPORT inflateSyncPoint OF((z_streamp));
+ZEXTERN const z_crc_t FAR * ZEXPORT get_crc_table    OF((void));
+ZEXTERN int            ZEXPORT inflateUndermine OF((z_streamp, int));
+ZEXTERN int            ZEXPORT inflateResetKeep OF((z_streamp));
+ZEXTERN int            ZEXPORT deflateResetKeep OF((z_streamp));
+#if defined(_WIN32) && !defined(Z_SOLO)
+ZEXTERN gzFile         ZEXPORT gzopen_w OF((const wchar_t *path,
+                                            const char *mode));
+#endif
+#if defined(STDC) || defined(Z_HAVE_STDARG_H)
+#  ifndef Z_SOLO
+ZEXTERN int            ZEXPORTVA gzvprintf Z_ARG((gzFile file,
+                                                  const char *format,
+                                                  va_list va));
+#  endif
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZLIB_H */
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/zutil.c b/c-blosc/internal-complibs/zlib-1.2.8/zutil.c
new file mode 100644
index 0000000..23d2ebe
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/zutil.c
@@ -0,0 +1,324 @@
+/* zutil.c -- target dependent utility functions for the compression library
+ * Copyright (C) 1995-2005, 2010, 2011, 2012 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* @(#) $Id$ */
+
+#include "zutil.h"
+#ifndef Z_SOLO
+#  include "gzguts.h"
+#endif
+
+#ifndef NO_DUMMY_DECL
+struct internal_state      {int dummy;}; /* for buggy compilers */
+#endif
+
+z_const char * const z_errmsg[10] = {
+"need dictionary",     /* Z_NEED_DICT       2  */
+"stream end",          /* Z_STREAM_END      1  */
+"",                    /* Z_OK              0  */
+"file error",          /* Z_ERRNO         (-1) */
+"stream error",        /* Z_STREAM_ERROR  (-2) */
+"data error",          /* Z_DATA_ERROR    (-3) */
+"insufficient memory", /* Z_MEM_ERROR     (-4) */
+"buffer error",        /* Z_BUF_ERROR     (-5) */
+"incompatible version",/* Z_VERSION_ERROR (-6) */
+""};
+
+
+const char * ZEXPORT zlibVersion()
+{
+    return ZLIB_VERSION;
+}
+
+uLong ZEXPORT zlibCompileFlags()
+{
+    uLong flags;
+
+    flags = 0;
+    switch ((int)(sizeof(uInt))) {
+    case 2:     break;
+    case 4:     flags += 1;     break;
+    case 8:     flags += 2;     break;
+    default:    flags += 3;
+    }
+    switch ((int)(sizeof(uLong))) {
+    case 2:     break;
+    case 4:     flags += 1 << 2;        break;
+    case 8:     flags += 2 << 2;        break;
+    default:    flags += 3 << 2;
+    }
+    switch ((int)(sizeof(voidpf))) {
+    case 2:     break;
+    case 4:     flags += 1 << 4;        break;
+    case 8:     flags += 2 << 4;        break;
+    default:    flags += 3 << 4;
+    }
+    switch ((int)(sizeof(z_off_t))) {
+    case 2:     break;
+    case 4:     flags += 1 << 6;        break;
+    case 8:     flags += 2 << 6;        break;
+    default:    flags += 3 << 6;
+    }
+#ifdef DEBUG
+    flags += 1 << 8;
+#endif
+#if defined(ASMV) || defined(ASMINF)
+    flags += 1 << 9;
+#endif
+#ifdef ZLIB_WINAPI
+    flags += 1 << 10;
+#endif
+#ifdef BUILDFIXED
+    flags += 1 << 12;
+#endif
+#ifdef DYNAMIC_CRC_TABLE
+    flags += 1 << 13;
+#endif
+#ifdef NO_GZCOMPRESS
+    flags += 1L << 16;
+#endif
+#ifdef NO_GZIP
+    flags += 1L << 17;
+#endif
+#ifdef PKZIP_BUG_WORKAROUND
+    flags += 1L << 20;
+#endif
+#ifdef FASTEST
+    flags += 1L << 21;
+#endif
+#if defined(STDC) || defined(Z_HAVE_STDARG_H)
+#  ifdef NO_vsnprintf
+    flags += 1L << 25;
+#    ifdef HAS_vsprintf_void
+    flags += 1L << 26;
+#    endif
+#  else
+#    ifdef HAS_vsnprintf_void
+    flags += 1L << 26;
+#    endif
+#  endif
+#else
+    flags += 1L << 24;
+#  ifdef NO_snprintf
+    flags += 1L << 25;
+#    ifdef HAS_sprintf_void
+    flags += 1L << 26;
+#    endif
+#  else
+#    ifdef HAS_snprintf_void
+    flags += 1L << 26;
+#    endif
+#  endif
+#endif
+    return flags;
+}
+
+#ifdef DEBUG
+
+#  ifndef verbose
+#    define verbose 0
+#  endif
+int ZLIB_INTERNAL z_verbose = verbose;
+
+void ZLIB_INTERNAL z_error (m)
+    char *m;
+{
+    fprintf(stderr, "%s\n", m);
+    exit(1);
+}
+#endif
+
+/* exported to allow conversion of error code to string for compress() and
+ * uncompress()
+ */
+const char * ZEXPORT zError(err)
+    int err;
+{
+    return ERR_MSG(err);
+}
+
+#if defined(_WIN32_WCE)
+    /* The Microsoft C Run-Time Library for Windows CE doesn't have
+     * errno.  We define it as a global variable to simplify porting.
+     * Its value is always 0 and should not be used.
+     */
+    int errno = 0;
+#endif
+
+#ifndef HAVE_MEMCPY
+
+void ZLIB_INTERNAL zmemcpy(dest, source, len)
+    Bytef* dest;
+    const Bytef* source;
+    uInt  len;
+{
+    if (len == 0) return;
+    do {
+        *dest++ = *source++; /* ??? to be unrolled */
+    } while (--len != 0);
+}
+
+int ZLIB_INTERNAL zmemcmp(s1, s2, len)
+    const Bytef* s1;
+    const Bytef* s2;
+    uInt  len;
+{
+    uInt j;
+
+    for (j = 0; j < len; j++) {
+        if (s1[j] != s2[j]) return 2*(s1[j] > s2[j])-1;
+    }
+    return 0;
+}
+
+void ZLIB_INTERNAL zmemzero(dest, len)
+    Bytef* dest;
+    uInt  len;
+{
+    if (len == 0) return;
+    do {
+        *dest++ = 0;  /* ??? to be unrolled */
+    } while (--len != 0);
+}
+#endif
+
+#ifndef Z_SOLO
+
+#ifdef SYS16BIT
+
+#ifdef __TURBOC__
+/* Turbo C in 16-bit mode */
+
+#  define MY_ZCALLOC
+
+/* Turbo C malloc() does not allow dynamic allocation of 64K bytes
+ * and farmalloc(64K) returns a pointer with an offset of 8, so we
+ * must fix the pointer. Warning: the pointer must be put back to its
+ * original form in order to free it, use zcfree().
+ */
+
+#define MAX_PTR 10
+/* 10*64K = 640K */
+
+local int next_ptr = 0;
+
+typedef struct ptr_table_s {
+    voidpf org_ptr;
+    voidpf new_ptr;
+} ptr_table;
+
+local ptr_table table[MAX_PTR];
+/* This table is used to remember the original form of pointers
+ * to large buffers (64K). Such pointers are normalized with a zero offset.
+ * Since MSDOS is not a preemptive multitasking OS, this table is not
+ * protected from concurrent access. This hack doesn't work anyway on
+ * a protected system like OS/2. Use Microsoft C instead.
+ */
+
+voidpf ZLIB_INTERNAL zcalloc (voidpf opaque, unsigned items, unsigned size)
+{
+    voidpf buf = opaque; /* just to make some compilers happy */
+    ulg bsize = (ulg)items*size;
+
+    /* If we allocate less than 65520 bytes, we assume that farmalloc
+     * will return a usable pointer which doesn't have to be normalized.
+     */
+    if (bsize < 65520L) {
+        buf = farmalloc(bsize);
+        if (*(ush*)&buf != 0) return buf;
+    } else {
+        buf = farmalloc(bsize + 16L);
+    }
+    if (buf == NULL || next_ptr >= MAX_PTR) return NULL;
+    table[next_ptr].org_ptr = buf;
+
+    /* Normalize the pointer to seg:0 */
+    *((ush*)&buf+1) += ((ush)((uch*)buf-0) + 15) >> 4;
+    *(ush*)&buf = 0;
+    table[next_ptr++].new_ptr = buf;
+    return buf;
+}
+
+void ZLIB_INTERNAL zcfree (voidpf opaque, voidpf ptr)
+{
+    int n;
+    if (*(ush*)&ptr != 0) { /* object < 64K */
+        farfree(ptr);
+        return;
+    }
+    /* Find the original pointer */
+    for (n = 0; n < next_ptr; n++) {
+        if (ptr != table[n].new_ptr) continue;
+
+        farfree(table[n].org_ptr);
+        while (++n < next_ptr) {
+            table[n-1] = table[n];
+        }
+        next_ptr--;
+        return;
+    }
+    ptr = opaque; /* just to make some compilers happy */
+    Assert(0, "zcfree: ptr not found");
+}
+
+#endif /* __TURBOC__ */
+
+
+#ifdef M_I86
+/* Microsoft C in 16-bit mode */
+
+#  define MY_ZCALLOC
+
+#if (!defined(_MSC_VER) || (_MSC_VER <= 600))
+#  define _halloc  halloc
+#  define _hfree   hfree
+#endif
+
+voidpf ZLIB_INTERNAL zcalloc (voidpf opaque, uInt items, uInt size)
+{
+    if (opaque) opaque = 0; /* to make compiler happy */
+    return _halloc((long)items, size);
+}
+
+void ZLIB_INTERNAL zcfree (voidpf opaque, voidpf ptr)
+{
+    if (opaque) opaque = 0; /* to make compiler happy */
+    _hfree(ptr);
+}
+
+#endif /* M_I86 */
+
+#endif /* SYS16BIT */
+
+
+#ifndef MY_ZCALLOC /* Any system without a special alloc function */
+
+#ifndef STDC
+extern voidp  malloc OF((uInt size));
+extern voidp  calloc OF((uInt items, uInt size));
+extern void   free   OF((voidpf ptr));
+#endif
+
+voidpf ZLIB_INTERNAL zcalloc (opaque, items, size)
+    voidpf opaque;
+    unsigned items;
+    unsigned size;
+{
+    if (opaque) items += size - size; /* make compiler happy */
+    return sizeof(uInt) > 2 ? (voidpf)malloc(items * size) :
+                              (voidpf)calloc(items, size);
+}
+
+void ZLIB_INTERNAL zcfree (opaque, ptr)
+    voidpf opaque;
+    voidpf ptr;
+{
+    free(ptr);
+    if (opaque) return; /* make compiler happy */
+}
+
+#endif /* MY_ZCALLOC */
+
+#endif /* !Z_SOLO */
diff --git a/c-blosc/internal-complibs/zlib-1.2.8/zutil.h b/c-blosc/internal-complibs/zlib-1.2.8/zutil.h
new file mode 100644
index 0000000..24ab06b
--- /dev/null
+++ b/c-blosc/internal-complibs/zlib-1.2.8/zutil.h
@@ -0,0 +1,253 @@
+/* zutil.h -- internal interface and configuration of the compression library
+ * Copyright (C) 1995-2013 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+   part of the implementation of the compression library and is
+   subject to change. Applications should only use zlib.h.
+ */
+
+/* @(#) $Id$ */
+
+#ifndef ZUTIL_H
+#define ZUTIL_H
+
+#ifdef HAVE_HIDDEN
+#  define ZLIB_INTERNAL __attribute__((visibility ("hidden")))
+#else
+#  define ZLIB_INTERNAL
+#endif
+
+#include "zlib.h"
+
+#if defined(STDC) && !defined(Z_SOLO)
+#  if !(defined(_WIN32_WCE) && defined(_MSC_VER))
+#    include <stddef.h>
+#  endif
+#  include <string.h>
+#  include <stdlib.h>
+#endif
+
+#ifdef Z_SOLO
+   typedef long ptrdiff_t;  /* guess -- will be caught if guess is wrong */
+#endif
+
+#ifndef local
+#  define local static
+#endif
+/* compile with -Dlocal if your debugger can't find static symbols */
+
+typedef unsigned char  uch;
+typedef uch FAR uchf;
+typedef unsigned short ush;
+typedef ush FAR ushf;
+typedef unsigned long  ulg;
+
+extern z_const char * const z_errmsg[10]; /* indexed by 2-zlib_error */
+/* (size given to avoid silly warnings with Visual C++) */
+
+#define ERR_MSG(err) z_errmsg[Z_NEED_DICT-(err)]
+
+#define ERR_RETURN(strm,err) \
+  return (strm->msg = ERR_MSG(err), (err))
+/* To be used only when the state is known to be valid */
+
+        /* common constants */
+
+#ifndef DEF_WBITS
+#  define DEF_WBITS MAX_WBITS
+#endif
+/* default windowBits for decompression. MAX_WBITS is for compression only */
+
+#if MAX_MEM_LEVEL >= 8
+#  define DEF_MEM_LEVEL 8
+#else
+#  define DEF_MEM_LEVEL  MAX_MEM_LEVEL
+#endif
+/* default memLevel */
+
+#define STORED_BLOCK 0
+#define STATIC_TREES 1
+#define DYN_TREES    2
+/* The three kinds of block type */
+
+#define MIN_MATCH  3
+#define MAX_MATCH  258
+/* The minimum and maximum match lengths */
+
+#define PRESET_DICT 0x20 /* preset dictionary flag in zlib header */
+
+        /* target dependencies */
+
+#if defined(MSDOS) || (defined(WINDOWS) && !defined(WIN32))
+#  define OS_CODE  0x00
+#  ifndef Z_SOLO
+#    if defined(__TURBOC__) || defined(__BORLANDC__)
+#      if (__STDC__ == 1) && (defined(__LARGE__) || defined(__COMPACT__))
+         /* Allow compilation with ANSI keywords only enabled */
+         void _Cdecl farfree( void *block );
+         void *_Cdecl farmalloc( unsigned long nbytes );
+#      else
+#        include <alloc.h>
+#      endif
+#    else /* MSC or DJGPP */
+#      include <malloc.h>
+#    endif
+#  endif
+#endif
+
+#ifdef AMIGA
+#  define OS_CODE  0x01
+#endif
+
+#if defined(VAXC) || defined(VMS)
+#  define OS_CODE  0x02
+#  define F_OPEN(name, mode) \
+     fopen((name), (mode), "mbc=60", "ctx=stm", "rfm=fix", "mrs=512")
+#endif
+
+#if defined(ATARI) || defined(atarist)
+#  define OS_CODE  0x05
+#endif
+
+#ifdef OS2
+#  define OS_CODE  0x06
+#  if defined(M_I86) && !defined(Z_SOLO)
+#    include <malloc.h>
+#  endif
+#endif
+
+#if defined(MACOS) || defined(TARGET_OS_MAC)
+#  define OS_CODE  0x07
+#  ifndef Z_SOLO
+#    if defined(__MWERKS__) && __dest_os != __be_os && __dest_os != __win32_os
+#      include <unix.h> /* for fdopen */
+#    else
+#      ifndef fdopen
+#        define fdopen(fd,mode) NULL /* No fdopen() */
+#      endif
+#    endif
+#  endif
+#endif
+
+#ifdef TOPS20
+#  define OS_CODE  0x0a
+#endif
+
+#ifdef WIN32
+#  ifndef __CYGWIN__  /* Cygwin is Unix, not Win32 */
+#    define OS_CODE  0x0b
+#  endif
+#endif
+
+#ifdef __50SERIES /* Prime/PRIMOS */
+#  define OS_CODE  0x0f
+#endif
+
+#if defined(_BEOS_) || defined(RISCOS)
+#  define fdopen(fd,mode) NULL /* No fdopen() */
+#endif
+
+#if (defined(_MSC_VER) && (_MSC_VER > 600)) && !defined __INTERIX
+#  if defined(_WIN32_WCE)
+#    define fdopen(fd,mode) NULL /* No fdopen() */
+#    ifndef _PTRDIFF_T_DEFINED
+       typedef int ptrdiff_t;
+#      define _PTRDIFF_T_DEFINED
+#    endif
+#  else
+#    define fdopen(fd,type)  _fdopen(fd,type)
+#  endif
+#endif
+
+#if defined(__BORLANDC__) && !defined(MSDOS)
+  #pragma warn -8004
+  #pragma warn -8008
+  #pragma warn -8066
+#endif
+
+/* provide prototypes for these when building zlib without LFS */
+#if !defined(_WIN32) && \
+    (!defined(_LARGEFILE64_SOURCE) || _LFS64_LARGEFILE-0 == 0)
+    ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off_t));
+    ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off_t));
+#endif
+
+        /* common defaults */
+
+#ifndef OS_CODE
+#  define OS_CODE  0x03  /* assume Unix */
+#endif
+
+#ifndef F_OPEN
+#  define F_OPEN(name, mode) fopen((name), (mode))
+#endif
+
+         /* functions */
+
+#if defined(pyr) || defined(Z_SOLO)
+#  define NO_MEMCPY
+#endif
+#if defined(SMALL_MEDIUM) && !defined(_MSC_VER) && !defined(__SC__)
+ /* Use our own functions for small and medium model with MSC <= 5.0.
+  * You may have to use the same strategy for Borland C (untested).
+  * The __SC__ check is for Symantec.
+  */
+#  define NO_MEMCPY
+#endif
+#if defined(STDC) && !defined(HAVE_MEMCPY) && !defined(NO_MEMCPY)
+#  define HAVE_MEMCPY
+#endif
+#ifdef HAVE_MEMCPY
+#  ifdef SMALL_MEDIUM /* MSDOS small or medium model */
+#    define zmemcpy _fmemcpy
+#    define zmemcmp _fmemcmp
+#    define zmemzero(dest, len) _fmemset(dest, 0, len)
+#  else
+#    define zmemcpy memcpy
+#    define zmemcmp memcmp
+#    define zmemzero(dest, len) memset(dest, 0, len)
+#  endif
+#else
+   void ZLIB_INTERNAL zmemcpy OF((Bytef* dest, const Bytef* source, uInt len));
+   int ZLIB_INTERNAL zmemcmp OF((const Bytef* s1, const Bytef* s2, uInt len));
+   void ZLIB_INTERNAL zmemzero OF((Bytef* dest, uInt len));
+#endif
+
+/* Diagnostic functions */
+#ifdef DEBUG
+#  include <stdio.h>
+   extern int ZLIB_INTERNAL z_verbose;
+   extern void ZLIB_INTERNAL z_error OF((char *m));
+#  define Assert(cond,msg) {if(!(cond)) z_error(msg);}
+#  define Trace(x) {if (z_verbose>=0) fprintf x ;}
+#  define Tracev(x) {if (z_verbose>0) fprintf x ;}
+#  define Tracevv(x) {if (z_verbose>1) fprintf x ;}
+#  define Tracec(c,x) {if (z_verbose>0 && (c)) fprintf x ;}
+#  define Tracecv(c,x) {if (z_verbose>1 && (c)) fprintf x ;}
+#else
+#  define Assert(cond,msg)
+#  define Trace(x)
+#  define Tracev(x)
+#  define Tracevv(x)
+#  define Tracec(c,x)
+#  define Tracecv(c,x)
+#endif
+
+#ifndef Z_SOLO
+   voidpf ZLIB_INTERNAL zcalloc OF((voidpf opaque, unsigned items,
+                                    unsigned size));
+   void ZLIB_INTERNAL zcfree  OF((voidpf opaque, voidpf ptr));
+#endif
+
+#define ZALLOC(strm, items, size) \
+           (*((strm)->zalloc))((strm)->opaque, (items), (size))
+#define ZFREE(strm, addr)  (*((strm)->zfree))((strm)->opaque, (voidpf)(addr))
+#define TRY_FREE(s, p) {if (p) ZFREE(s, p);}
+
+/* Reverse the bytes in a 32-bit value */
+#define ZSWAP32(q) ((((q) >> 24) & 0xff) + (((q) >> 8) & 0xff00) + \
+                    (((q) & 0xff00) << 8) + (((q) & 0xff) << 24))
+
+#endif /* ZUTIL_H */
diff --git a/c-blosc/tests/.gitignore b/c-blosc/tests/.gitignore
new file mode 100644
index 0000000..b883f1f
--- /dev/null
+++ b/c-blosc/tests/.gitignore
@@ -0,0 +1 @@
+*.exe
diff --git a/c-blosc/tests/CMakeLists.txt b/c-blosc/tests/CMakeLists.txt
new file mode 100644
index 0000000..a76d670
--- /dev/null
+++ b/c-blosc/tests/CMakeLists.txt
@@ -0,0 +1,14 @@
+# sources
+#aux_source_directory(. SOURCES)
+file(GLOB SOURCES test_*.c)
+
+# flags
+link_directories(${PROJECT_BINARY_DIR}/blosc)
+
+# targets and tests
+foreach(source ${SOURCES})
+    get_filename_component(target ${source} NAME_WE)
+    add_executable(${target} ${source})
+    target_link_libraries(${target} blosc_shared)
+    add_test(test_${target} ${target})
+endforeach(source)
diff --git a/c-blosc/tests/Makefile b/c-blosc/tests/Makefile
new file mode 100644
index 0000000..fddba4b
--- /dev/null
+++ b/c-blosc/tests/Makefile
@@ -0,0 +1,46 @@
+CC=gcc
+CFLAGS=-O3 -msse2 -Wall -pthread
+LDFLAGS=-pthread
+BLOSC_LIB= $(wildcard ../blosc/*.c)
+
+# The list of test sources and the executables generated from them
+SOURCES := $(wildcard *.c)
+EXECUTABLES := $(patsubst %.c, %.exe, $(SOURCES))
+
+# Support for internal LZ4 and LZ4HC
+LZ4_DIR = ../internal-complibs/lz4-r110
+CFLAGS += -DHAVE_LZ4 -I$(LZ4_DIR)
+BLOSC_LIB += $(wildcard $(LZ4_DIR)/*.c)
+
+# Support for external LZ4 and LZ4HC
+#LDFLAGS += -DHAVE_LZ4 -llz4
+
+# Support for internal Snappy
+#SNAPPY_DIR = ../internal-complibs/snappy-1.1.1
+#CFLAGS += -DHAVE_SNAPPY -I$(SNAPPY_DIR)
+#BLOSC_LIB += $(wildcard $(SNAPPY_DIR)/*.cc)
+
+# Support for external Snappy
+LDFLAGS += -DHAVE_SNAPPY -lsnappy
+
+# Support for external Zlib
+LDFLAGS += -DHAVE_ZLIB -lz
+
+# Support for internal Zlib
+#ZLIB_DIR = ../internal-complibs/zlib-1.2.8
+#CFLAGS += -DHAVE_ZLIB -I$(ZLIB_DIR)
+#BLOSC_LIB += $(wildcard $(ZLIB_DIR)/*.c)
+
+
+.PHONY: all
+all: $(EXECUTABLES)
+
+test: $(EXECUTABLES)
+	sh test_all.sh
+
+%.exe: %.c $(BLOSC_LIB)
+	$(CC) $(CFLAGS) $(LDFLAGS) "$<" $(BLOSC_LIB) -o "$@"
+
+clean:
+	rm -rf $(EXECUTABLES)
diff --git a/c-blosc/tests/print_versions.c b/c-blosc/tests/print_versions.c
new file mode 100644
index 0000000..3639f37
--- /dev/null
+++ b/c-blosc/tests/print_versions.c
@@ -0,0 +1,32 @@
+/*********************************************************************
+  Print versions for Blosc and all its internal compressors.
+*********************************************************************/
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include "../blosc/blosc.h"
+
+
+int main(int argc, char *argv[]) {
+
+  char *name = NULL, *version = NULL;
+  int ret;
+
+  printf("Blosc version: %s (%s)\n", BLOSC_VERSION_STRING, BLOSC_VERSION_DATE);
+
+  printf("List of supported compressors in this build: %s\n",
+         blosc_list_compressors());
+
+  printf("Supported compression libraries:\n");
+  ret = blosc_get_complib_info("blosclz", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+  ret = blosc_get_complib_info("lz4", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+  ret = blosc_get_complib_info("snappy", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+  ret = blosc_get_complib_info("zlib", &name, &version);
+  if (ret >= 0) printf("  %s: %s\n", name, version);
+
+  return(0);
+}
diff --git a/c-blosc/tests/test_all.sh b/c-blosc/tests/test_all.sh
new file mode 100644
index 0000000..9deae21
--- /dev/null
+++ b/c-blosc/tests/test_all.sh
@@ -0,0 +1,14 @@
+#*********************************************************************
+#  Blosc - Blocked Shuffling and Compression Library
+#
+#  Unit tests for basic features in Blosc.
+#
+#  Creation date: 2010-06-07
+#  Author: Francesc Alted <faltet at gmail.com>
+#
+#  See LICENSES/BLOSC.txt for details about copyright and rights to use.
+#**********************************************************************
+
+for exe in $(ls *.exe); do
+    ./$exe
+done
diff --git a/c-blosc/tests/test_api.c b/c-blosc/tests/test_api.c
new file mode 100644
index 0000000..76d7226
--- /dev/null
+++ b/c-blosc/tests/test_api.c
@@ -0,0 +1,103 @@
+/*********************************************************************
+  Blosc - Blocked Shuffling and Compression Library
+
+  Unit tests for Blosc API.
+
+  Creation date: 2010-06-07
+  Author: Francesc Alted <faltet at gmail.com>
+
+  See LICENSES/BLOSC.txt for details about copyright and rights to use.
+**********************************************************************/
+
+#include "test_common.h"
+
+int tests_run = 0;
+
+/* Global vars */
+void *src, *srccpy, *dest, *dest2;
+size_t nbytes, cbytes;
+int clevel = 3;
+int doshuffle = 1;
+size_t typesize = 4;
+size_t size = 1*MB;
+
+
+
+static char *test_cbuffer_sizes() {
+  size_t nbytes_, cbytes_, blocksize;
+
+  blosc_cbuffer_sizes(dest, &nbytes_, &cbytes_, &blocksize);
+  mu_assert("ERROR: nbytes incorrect(1)", nbytes == size);
+  mu_assert("ERROR: nbytes incorrect(2)", nbytes_ == nbytes);
+  mu_assert("ERROR: cbytes incorrect", cbytes == cbytes_);
+  mu_assert("ERROR: blocksize incorrect", blocksize >= 128);
+  return 0;
+}
+
+static char *test_cbuffer_metainfo() {
+  size_t typesize_;
+  int flags;
+
+  blosc_cbuffer_metainfo(dest, &typesize_, &flags);
+  mu_assert("ERROR: typesize incorrect", typesize_ == typesize);
+  mu_assert("ERROR: shuffle incorrect", (flags & BLOSC_DOSHUFFLE) == doshuffle);
+  return 0;
+}
+
+
+static char *test_cbuffer_versions() {
+  int version_;
+  int versionlz_;
+
+  blosc_cbuffer_versions(dest, &version_, &versionlz_);
+  mu_assert("ERROR: version incorrect", version_ == BLOSC_VERSION_FORMAT);
+  mu_assert("ERROR: versionlz incorrect", versionlz_ == BLOSC_BLOSCLZ_VERSION_FORMAT);
+  return 0;
+}
+
+
+static char *all_tests() {
+  mu_run_test(test_cbuffer_sizes);
+  mu_run_test(test_cbuffer_metainfo);
+  mu_run_test(test_cbuffer_versions);
+  return 0;
+}
+
+int main(int argc, char **argv) {
+  char *result;
+
+  printf("STARTING TESTS for %s", argv[0]);
+
+  blosc_init();
+  blosc_set_nthreads(1);
+
+  /* Initialize buffers */
+  src = malloc(size);
+  srccpy = malloc(size);
+  dest = malloc(size);
+  dest2 = malloc(size);
+  memset(src, 0, size);
+  memcpy(srccpy, src, size);
+
+  /* Get a compressed buffer */
+  cbytes = blosc_compress(clevel, doshuffle, typesize, size, src, dest, size);
+
+  /* Get a decompressed buffer */
+  nbytes = blosc_decompress(dest, dest2, size);
+
+  /* Run all the suite */
+  result = all_tests();
+  if (result != 0) {
+    printf(" (%s)\n", result);
+  }
+  else {
+    printf(" ALL TESTS PASSED");
+  }
+  printf("\tTests run: %d\n", tests_run);
+
+  free(src); free(srccpy); free(dest); free(dest2);
+  blosc_destroy();
+
+  return result != 0;
+}
+
diff --git a/c-blosc/tests/test_basics.c b/c-blosc/tests/test_basics.c
new file mode 100644
index 0000000..f15f2f3
--- /dev/null
+++ b/c-blosc/tests/test_basics.c
@@ -0,0 +1,141 @@
+/*********************************************************************
+  Blosc - Blocked Shuffling and Compression Library
+
+  Unit tests for basic features in Blosc.
+
+  Creation date: 2010-06-07
+  Author: Francesc Alted <faltet at gmail.com>
+
+  See LICENSES/BLOSC.txt for details about copyright and rights to use.
+**********************************************************************/
+
+#include "test_common.h"
+
+int tests_run = 0;
+
+/* Global vars */
+void *src, *srccpy, *dest, *dest2;
+size_t nbytes, cbytes;
+int clevel = 1;
+int doshuffle = 0;
+size_t typesize = 4;
+size_t size = 1000;             /* must be divisible by 4 */
+
+
+/* Check maxout with maxout < size */
+static char *test_maxout_less() {
+
+  /* Get a compressed buffer */
+  cbytes = blosc_compress(clevel, doshuffle, typesize, size, src,
+                          dest, size+15);
+  mu_assert("ERROR: cbytes is not 0", cbytes == 0);
+
+  return 0;
+}
+
+/* Check maxout with maxout == size */
+static char *test_maxout_equal() {
+
+  /* Get a compressed buffer */
+  cbytes = blosc_compress(clevel, doshuffle, typesize, size, src,
+                          dest, size+16);
+  mu_assert("ERROR: cbytes is not correct", cbytes == size+16);
+
+  /* Decompress the buffer */
+  nbytes = blosc_decompress(dest, dest2, size);
+  mu_assert("ERROR: nbytes incorrect(1)", nbytes == size);
+
+  return 0;
+}
+
+
+/* Check maxout with maxout > size */
+static char *test_maxout_great() {
+  /* Get a compressed buffer */
+  cbytes = blosc_compress(clevel, doshuffle, typesize, size, src,
+                          dest, size+17);
+  mu_assert("ERROR: cbytes is not 0", cbytes == size+16);
+
+  /* Decompress the buffer */
+  nbytes = blosc_decompress(dest, dest2, size);
+  mu_assert("ERROR: nbytes incorrect(1)", nbytes == size);
+
+  return 0;
+}
+
+static char * test_shuffle()
+{
+  int sizes[] = {7, 64 * 3, 7*256, 500, 8000, 100000, 702713};
+  int types[] = {1, 2, 3, 4, 5, 6, 7, 8, 16};
+  int i, j, k;
+  int ok;
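+
+  /* Round-trip random buffers through compress/decompress for every
+     combination of buffer size and type size, and verify that the output
+     matches the input byte for byte. */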
+  for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
+    for (j = 0; j < sizeof(types) / sizeof(types[0]); j++) {
+      int n = sizes[i];
+      int t = types[j];
+      char * d = malloc(t * n);
+      char * d2 = malloc(t * n);
+      char * o = malloc(t * n + BLOSC_MAX_OVERHEAD);
+      for (k = 0; k < n; k++) {
+        d[k] = rand();
+      }
+      blosc_compress(5, 1, t, t * n, d, o, t * n + BLOSC_MAX_OVERHEAD);
+      blosc_decompress(o, d2, t * n);
+      ok = 1;
+      for (k = 0; ok&& k < n; k++) {
+        ok = (d[k] == d2[k]);
+      }
+      free(d);
+      free(d2);
+      free(o);
+      mu_assert("ERROR: multi size test failed", ok);
+    }
+  }
+
+  return 0;
+}
+
+static char *all_tests() {
+  mu_run_test(test_maxout_less);
+  mu_run_test(test_maxout_equal);
+  mu_run_test(test_maxout_great);
+  mu_run_test(test_shuffle);
+  return 0;
+}
+
+int main(int argc, char **argv) {
+  size_t i;
+  int32_t *_src;
+  char *result;
+
+  printf("STARTING TESTS for %s", argv[0]);
+
+  blosc_init();
+  blosc_set_nthreads(1);
+
+  /* Initialize buffers */
+  src = malloc(size);
+  srccpy = malloc(size);
+  dest = malloc(size+16);
+  dest2 = malloc(size);
+  _src = (int32_t *)src;
+  for (i=0; i < (size/4); i++) {
+    _src[i] = i;
+  }
+  memcpy(srccpy, src, size);
+
+  /* Run all the suite */
+  result = all_tests();
+  if (result != 0) {
+    printf(" (%s)\n", result);
+  }
+  else {
+    printf(" ALL TESTS PASSED");
+  }
+  printf("\tTests run: %d\n", tests_run);
+
+  free(src); free(srccpy); free(dest); free(dest2);
+  blosc_destroy();
+
+  return result != 0;
+}
diff --git a/c-blosc/tests/test_common.h b/c-blosc/tests/test_common.h
new file mode 100644
index 0000000..08fe1f2
--- /dev/null
+++ b/c-blosc/tests/test_common.h
@@ -0,0 +1,40 @@
+/*********************************************************************
+  Blosc - Blocked Shuffling and Compression Library
+
+  Unit tests for basic features in Blosc.
+
+  Creation date: 2010-06-07
+  Author: Francesc Alted <faltet at gmail.com>
+
+  See LICENSES/BLOSC.txt for details about copyright and rights to use.
+**********************************************************************/
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#if defined(_WIN32) && !defined(__MINGW32__)
+  #include <time.h>
+  #include "win32/stdint-windows.h"
+#else
+  #include <unistd.h>
+  #include <sys/time.h>
+#endif
+#include <math.h>
+#include "../blosc/blosc.h"
+
+
+/* This is MinUnit in action (http://www.jera.com/techinfo/jtns/jtn002.html) */
+#define mu_assert(message, test) do { if (!(test)) return message; } while (0)
+#define mu_run_test(test) do \
+    { char *message = test(); tests_run++;                          \
+      if (message) { printf("%c", 'F'); return message;}            \
+      else printf("%c", '.'); } while (0)
+
+extern int tests_run;
+
+#define KB  1024
+#define MB  (1024*KB)
+#define GB  (1024*MB)
diff --git a/doc/scripts/filenode.py b/doc/scripts/filenode.py
index ed91292..2fb11c5 100644
--- a/doc/scripts/filenode.py
+++ b/doc/scripts/filenode.py
@@ -1,44 +1,45 @@
 # Copy this file into the clipboard and paste into 'script -c python'.
 
+from __future__ import print_function
 from tables.nodes import filenode
 
 
 import tables
-h5file = tables.openFile('fnode.h5', 'w')
+h5file = tables.open_file('fnode.h5', 'w')
 
 
-fnode = FileNode.newNode(h5file, where='/', name='fnode_test')
+fnode = filenode.new_node(h5file, where='/', name='fnode_test')
 
 
-print h5file.getAttrNode('/fnode_test', 'NODE_TYPE')
+print(h5file.get_node_attr('/fnode_test', 'NODE_TYPE'))
 
 
-print >> fnode, "This is a test text line."
-print >> fnode, "And this is another one."
-print >> fnode
+print("This is a test text line.", file=fnode)
+print("And this is another one.", file=fnode)
+print(file=fnode)
 fnode.write("Of course, file methods can also be used.")
 
 fnode.seek(0)  # Go back to the beginning of file.
 
 for line in fnode:
-  print repr(line)
+    print(repr(line))
 
 
 fnode.close()
-print fnode.closed
+print(fnode.closed)
 
 
 node = h5file.root.fnode_test
-fnode = FileNode.openNode(node, 'a+')
-print repr(fnode.readline())
-print fnode.tell()
-print >> fnode, "This is a new line."
-print repr(fnode.readline())
+fnode = filenode.open_node(node, 'a+')
+print(repr(fnode.readline()))
+print(fnode.tell())
+print("This is a new line.", file=fnode)
+print(repr(fnode.readline()))
 
 
 fnode.seek(0)
 for line in fnode:
-  print repr(line)
+    print(repr(line))
 
 
 fnode.attrs.content_type = 'text/plain; charset=us-ascii'
diff --git a/doc/scripts/pickletrouble.py b/doc/scripts/pickletrouble.py
index 1fb4c5b..5c42d15 100644
--- a/doc/scripts/pickletrouble.py
+++ b/doc/scripts/pickletrouble.py
@@ -1,26 +1,28 @@
+from __future__ import print_function
 import tables
 
+
 class MyClass(object):
-  foo = 'bar'
+    foo = 'bar'
 
 # An object of my custom class.
 myObject = MyClass()
 
-h5f = tables.openFile('test.h5', 'w')
+h5f = tables.open_file('test.h5', 'w')
 h5f.root._v_attrs.obj = myObject  # store the object
-print h5f.root._v_attrs.obj.foo  # retrieve it
+print(h5f.root._v_attrs.obj.foo)  # retrieve it
 h5f.close()
 
 # Delete class of stored object and reopen the file.
 del MyClass, myObject
 
-h5f = tables.openFile('test.h5', 'r')
-print h5f.root._v_attrs.obj.foo
+h5f = tables.open_file('test.h5', 'r')
+print(h5f.root._v_attrs.obj.foo)
 # Let us inspect the object to see what is happening.
-print repr(h5f.root._v_attrs.obj)
+print(repr(h5f.root._v_attrs.obj))
 # Maybe unpickling the string will yield more information:
-import cPickle
-cPickle.loads(h5f.root._v_attrs.obj)
+import pickle
+pickle.loads(h5f.root._v_attrs.obj)
 # So the problem was not in the stored object,
 # but in the *environment* where it was restored.
 h5f.close()
diff --git a/doc/scripts/tutorial1.py b/doc/scripts/tutorial1.py
index a41a0a7..6f34474 100644
--- a/doc/scripts/tutorial1.py
+++ b/doc/scripts/tutorial1.py
@@ -6,27 +6,32 @@ with any HDF5 generic utility.
 """
 
 
-import os, traceback
+from __future__ import print_function
+import os
+import sys
+import traceback
 
 SECTION = "I HAVE NO TITLE"
 
+
 def tutsep():
-    print '----8<----', SECTION, '----8<----'
+    print('----8<----', SECTION, '----8<----')
+
 
 def tutprint(obj):
     tutsep()
-    print obj
+    print(obj)
+
 
 def tutrepr(obj):
     tutsep()
-    print repr(obj)
+    print(repr(obj))
+
 
 def tutexc():
     tutsep()
     traceback.print_exc(file=sys.stdout)
 
 
-
 SECTION = "Importing tables objects"
 from numpy import *
 from tables import *
@@ -47,17 +52,17 @@ class Particle(IsDescription):
 
 SECTION = "Creating a PyTables file from scratch"
 # Open a file in "w"rite mode
-h5file = openFile('tutorial1.h5', mode = "w", title = "Test file")
+h5file = open_file('tutorial1.h5', mode="w", title="Test file")
 
 
 SECTION = "Creating a new group"
 # Create a new group under "/" (root)
-group = h5file.createGroup("/", 'detector', 'Detector information')
+group = h5file.create_group("/", 'detector', 'Detector information')
 
 
 SECTION = "Creating a new table"
 # Create one table on it
-table = h5file.createTable(group, 'readout', Particle, "Readout example")
+table = h5file.create_table(group, 'readout', Particle, "Readout example")
 
 tutprint(h5file)
 tutrepr(h5file)
@@ -66,8 +71,8 @@ tutrepr(h5file)
 particle = table.row
 
 # Fill the table with 10 particles
-for i in xrange(10):
-    particle['name']  = 'Particle: %6d' % (i)
+for i in range(10):
+    particle['name'] = 'Particle: %6d' % (i)
     particle['TDCcount'] = i % 256
     particle['ADCcount'] = (i * 256) % (1 << 16)
     particle['grid_i'] = i
@@ -86,28 +91,32 @@ SECTION = "Reading (and selecting) data in a table"
 # Read actual data from table. We are interested in collecting pressure values
 # on entries where TDCcount field is greater than 3 and pressure less than 50
 table = h5file.root.detector.readout
-pressure = [ x['pressure'] for x in table
-             if x['TDCcount']>3 and 20<=x['pressure']<50 ]
+pressure = [
+    x['pressure'] for x in table
+    if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50
+]
 
 tutrepr(pressure)
 
 # Read also the names with the same cuts
-names = [ x['name'] for x in table
-          if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50 ]
+names = [
+    x['name'] for x in table
+    if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50
+]
 
 tutrepr(names)
 
 
 SECTION = "Creating new array objects"
-gcolumns = h5file.createGroup(h5file.root, "columns", "Pressure and Name")
+gcolumns = h5file.create_group(h5file.root, "columns", "Pressure and Name")
 
 tutrepr(
-h5file.createArray(gcolumns, 'pressure', array(pressure),
-                   "Pressure column selection")
+    h5file.create_array(gcolumns, 'pressure', array(pressure),
+                        "Pressure column selection")
 )
 
 tutrepr(
-h5file.createArray('/columns', 'name', names, "Name column selection")
+    h5file.create_array('/columns', 'name', names, "Name column selection")
 )
 
 tutprint(h5file)
@@ -123,7 +132,6 @@ tutsep()
 os.system('ptdump tutorial1.h5')
 
 
-
 """This example shows how to browse the object tree and enlarge tables.
 
 Before running this program you need to first execute tutorial1-1.py
@@ -134,7 +142,7 @@ that create the tutorial1.h5 file needed here.
 
 SECTION = "Traversing the object tree"
 # Reopen the file in append mode
-h5file = openFile("tutorial1.h5", "a")
+h5file = open_file("tutorial1.h5", "a")
 
 # Print the object tree created from this filename
 # List all the nodes (Group and Leaf objects) on tree
@@ -143,29 +151,29 @@ tutprint(h5file)
 # List all the nodes (using File iterator) on tree
 tutsep()
 for node in h5file:
-    print node
+    print(node)
 
 # Now, only list all the groups on tree
 tutsep()
-for group in h5file.walkGroups("/"):
-    print group
+for group in h5file.walk_groups("/"):
+    print(group)
 
 # List only the arrays hanging from /
 tutsep()
-for group in h5file.walkGroups("/"):
-    for array in h5file.listNodes(group, classname = 'Array'):
-        print array
+for group in h5file.walk_groups("/"):
+    for array in h5file.list_nodes(group, classname='Array'):
+        print(array)
 
 # This gives the same result
 tutsep()
-for array in h5file.walkNodes("/", "Array"):
-    print array
+for array in h5file.walk_nodes("/", "Array"):
+    print(array)
 
 # And finally, list only leafs on /detector group (there should be one!)
 # Other way using iterators and natural naming
 tutsep()
 for leaf in h5file.root.detector('Leaf'):
-    print leaf
+    print(leaf)
 
 
 SECTION = "Setting and getting user attributes"
@@ -220,33 +228,33 @@ os.system('h5ls -vr tutorial1.h5/detector/readout')
 SECTION = "Getting object metadata"
 # Get metadata from table
 tutsep()
-print "Object:", table
+print("Object:", table)
 tutsep()
-print "Table name:", table.name
+print("Table name:", table.name)
 tutsep()
-print "Table title:", table.title
+print("Table title:", table.title)
 tutsep()
-print "Number of rows in table:", table.nrows
+print("Number of rows in table:", table.nrows)
 tutsep()
-print "Table variable names with their type and shape:"
+print("Table variable names with their type and shape:")
 tutsep()
 for name in table.colnames:
-    print name, ':= %s, %s' % (table.coltypes[name], table.colshapes[name])
+    print(name, ':= %s, %s' % (table.coltypes[name], table.colshapes[name]))
 
 tutprint(table.__doc__)
 
 # Get the object in "/columns pressure"
-pressureObject = h5file.getNode("/columns", "pressure")
+pressureObject = h5file.get_node("/columns", "pressure")
 
 # Get some metadata on this object
 tutsep()
-print "Info on the object:", repr(pressureObject)
+print("Info on the object:", repr(pressureObject))
 tutsep()
-print " shape: ==>", pressureObject.shape
+print(" shape: ==>", pressureObject.shape)
 tutsep()
-print " title: ==>", pressureObject.title
+print(" title: ==>", pressureObject.title)
 tutsep()
-print " type: ==>", pressureObject.type
+print(" type: ==>", pressureObject.type)
 
 
 SECTION = "Reading data from Array objects"
@@ -254,18 +262,18 @@ SECTION = "Reading data from Array objects"
 pressureArray = pressureObject.read()
 tutrepr(pressureArray)
 tutsep()
-print "pressureArray is an object of type:", type(pressureArray)
+print("pressureArray is an object of type:", type(pressureArray))
 
 # Read the 'name' Array actual data
 nameArray = h5file.root.columns.name.read()
 tutrepr(nameArray)
-print "nameArray is an object of type:", type(nameArray)
+print("nameArray is an object of type:", type(nameArray))
 
 # Print the data for both arrays
 tutprint("Data on arrays nameArray and pressureArray:")
 tutsep()
 for i in range(pressureObject.shape[0]):
-    print nameArray[i], "-->", pressureArray[i]
+    print(nameArray[i], "-->", pressureArray[i])
 tutrepr(pressureObject.name)
 
 
@@ -276,8 +284,8 @@ table = h5file.root.detector.readout
 particle = table.row
 
 # Append 5 new particles to table
-for i in xrange(10, 15):
-    particle['name']  = 'Particle: %6d' % (i)
+for i in range(10, 15):
+    particle['name'] = 'Particle: %6d' % (i)
     particle['TDCcount'] = i % 256
     particle['ADCcount'] = (i * 256) % (1 << 16)
     particle['grid_i'] = i
@@ -293,12 +301,12 @@ table.flush()
 # Print the data using the table iterator:
 tutsep()
 for r in table:
-    print "%-16s | %11.1f | %11.4g | %6d | %6d | %8d |" % \
+    print("%-16s | %11.1f | %11.4g | %6d | %6d | %8d |" % \
           (r['name'], r['pressure'], r['energy'], r['grid_i'], r['grid_j'],
-           r['TDCcount'])
+           r['TDCcount']))
 
 # Delete some rows on the Table (yes, rows can be removed!)
-tutrepr(table.removeRows(5, 10))
+tutrepr(table.remove_rows(5, 10))
 
 # Close the file
 h5file.close()
diff --git a/doc/source/FAQ.rst b/doc/source/FAQ.rst
index 4a1efe2..4c806eb 100644
--- a/doc/source/FAQ.rst
+++ b/doc/source/FAQ.rst
@@ -3,6 +3,8 @@
 :date: 2011-06-13 08:40:20
 :author: FrancescAlted
 
+.. py:currentmodule:: tables
+
 ===
 FAQ
 ===
@@ -184,26 +186,29 @@ What kind of containers does PyTables implement?
 PyTables does support a series of data containers that address specific needs
 of the user. Below is a brief description of them:
 
-:Table:
+::class:`Table`:
     Lets you deal with heterogeneous datasets. Allows compression. Enlargeable.
     Supports nested types. Good performance for read/writing data.
-:Array:
+::class:`Array`:
     Provides quick and dirty array handling. Not compression allowed.
     Not enlargeable. Can be used only with relatively small datasets (i.e.
     those that fit in memory). It provides the fastest I/O speed.
-:CArray:
+::class:`CArray`:
     Provides compressed array support. Not enlargeable. Good speed when
     reading/writing.
-:EArray:
+::class:`EArray`:
     Most general array support. Compressible and enlargeable. It is pretty
     fast at extending, and very good at reading.
-:VLArray:
+::class:`VLArray`:
     Supports collections of homogeneous data with a variable number of entries.
     Compressible and enlargeable. I/O is not very fast.
-:Group:
+::class:`Group`:
     The structural component.
+    A hierarchically-addressable container for HDF5 nodes (all of these
+    containers, including Group, are nodes), similar to a directory in a
+    UNIX filesystem.
 
-Please refer to the documentation for more specific information.
+Please refer to the :doc:`usersguide/libref` for more specific information.
 
 
 Cool! I'd like to see some examples of use.
@@ -406,7 +411,7 @@ themselves and SQLObject_, and there have been quite longish discussions about
 adding the possibility of overloading logical operators to Python (see `PEP
 335`_ and `this thread`__ for more details).
 
-__ http://mail.python.org/pipermail/python-dev/2004-September/048763.html
+__ https://mail.python.org/pipermail/python-dev/2004-September/048763.html
 
 
 I can not select rows using in-kernel queries with a condition that involves an UInt64Col. Why?
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 0706c9f..3a1d04b 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -53,7 +53,7 @@ master_doc = 'index'
 
 # General information about the project.
 project = u'PyTables'
-copyright = u'2011-2013, PyTables maintainers'
+copyright = u'2011-2014, PyTables maintainers'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
@@ -280,7 +280,7 @@ autosummary_generate = []
 epub_title = u'PyTables'
 epub_author = u'PyTables maintainers'
 epub_publisher = u'PyTables maintainers'
-epub_copyright = u'2011-2013, PyTables maintainers'
+epub_copyright = u'2011-2014, PyTables maintainers'
 
 # -- External link options ----------------------------------------------------
 extlinks = {
diff --git a/doc/source/cookbook/custom_data_types.rst b/doc/source/cookbook/custom_data_types.rst
index 3ab96cc..59a804a 100644
--- a/doc/source/cookbook/custom_data_types.rst
+++ b/doc/source/cookbook/custom_data_types.rst
@@ -18,6 +18,7 @@ http://sourceforge.net/mailarchive/message.php?msg_id=200805250042.50653.pgmdevl
 
 ::
 
+    from __future__ import print_function
     import numpy as np
     import numpy.ma as ma
 
@@ -82,12 +83,12 @@ http://sourceforge.net/mailarchive/message.php?msg_id=200805250042.50653.pgmdevl
         mtab = h5file.createMaskedTable('/','random',x)
 
         h5file.flush()
-        print type(mtab)
-        print mtab.read()
+        print(type(mtab))
+        print(mtab.read())
         h5file.close()
         h5file = tables.openFile('tester.hdf5','r')
         mtab = h5file.root.random
 
-        print type(mtab)
-        print mtab.read()
+        print(type(mtab))
+        print(mtab.read())
 
diff --git a/doc/source/cookbook/hints_for_sql_users.rst b/doc/source/cookbook/hints_for_sql_users.rst
index 0819786..32394eb 100644
--- a/doc/source/cookbook/hints_for_sql_users.rst
+++ b/doc/source/cookbook/hints_for_sql_users.rst
@@ -38,10 +38,10 @@ A usual syntax is::
 
 In PyTables, each database goes to a different HDF5_ file (much like
 SQLite_ or MS Access).
-To create a new HDF5_ file, you use the :func:`tables.openFile` function with
+To create a new HDF5_ file, you use the :func:`tables.open_file` function with
 the `'w'` mode (which deletes the database if it already exists), like this::
 
-    h5f = tables.openFile('database_name.h5', 'w')
+    h5f = tables.open_file('database_name.h5', 'w')
 
 In this way you get the `h5f` PyTables *file handle* (an instance of the
 :class:`tables.File` class), which is a concept similar to a *database
@@ -56,7 +56,7 @@ In case you forget to do it, PyTables closes all open database handles for
 you when you exit your program or interactive session, but it is always safer
 to close your files explicitly.
 If you want to use the database after closing it, you just call
-:func:`openFile` again, but using the `'r+'` or `'r'` modes, depending on
+:func:`open_file` again, but using the `'r+'` or `'r'` modes, depending on
 whether you do or don't need to modify the database, respectively.
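+
+For instance, a minimal sketch that reopens the database created above in
+read-only mode::
+
+    h5f = tables.open_file('database_name.h5', 'r')
+    # ... query the data ...
+    h5f.close()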
 
 You may use several PyTables databases simultaneously in a program, so you
@@ -174,20 +174,20 @@ like this::
 Once you have a table description `description_name` and a writeable file
 handle `h5f`, creating a table with that description is as easy as::
 
-    tbl = h5f.createTable('/', 'table_name', description_name)
+    tbl = h5f.create_table('/', 'table_name', description_name)
 
 PyTables is very object-oriented, and database manipulation is usually done
 through methods of :class:`tables.File`.
 The first argument indicates the *path* where the table will be created,
 i.e. the root path (HDF5 uses Unix-like paths).
-The :meth:`tables.File.createTable` method has many options e.g. for setting
+The :meth:`tables.File.create_table` method has many options e.g. for setting
 a table title or compression properties. What you get back is an instance of
 :class:`tables.Table`, a handle for accessing the data in that table.
 
 As with files, table handles can also be closed with `tbl.close()`.
-If you want to acces an already created table, you can use::
+If you want to access an already created table, you can use::
 
-    tbl = h5f.getNode('/', 'table_name')
+    tbl = h5f.get_node('/', 'table_name')
 
 (PyTables uses the concept of *node* for datasets -tables and others- and
 groups in the object tree) or, using *natural naming*::
@@ -211,18 +211,18 @@ and
 
     DROP INDEX index_name
 
-Indexing is supported in the commercial version of PyTables (PyTablesPro).
+Indexing is supported in PyTables >= 2.3 (and in PyTablesPro).
 However, indexes don't have names and they are bound to single columns.
 Following the object-oriented philosophy of PyTables, index creation is a
-method (:meth:`tables.Column.createIndex`) of a :class:`tables.Column` object
+method (:meth:`tables.Column.create_index`) of a :class:`tables.Column` object
 of a table, which you can access through its `cols` accessor.
 
 ::
-    tbl.cols.colum_name.createIndex()
+
+    tbl.cols.column_name.create_index()
 
 For dropping an index on a column::
 
-    tbl.cols.colum_name.removeIndex()
+    tbl.cols.column_name.remove_index()
 
 
 Altering a table
@@ -234,7 +234,7 @@ The first case of table alteration is renaming::
 
 This is accomplished in !PyTables with::
 
-    h5f.renameNode('/', name='old_name', newname='new_name')
+    h5f.rename_node('/', name='old_name', newname='new_name')
 
 or through the table handle::
 
@@ -253,9 +253,9 @@ In SQL you can remove a table using::
     DROP TABLE table_name
 
 In PyTables, tables are removed as other nodes, using the
-:meth:`tables.File.removeNode` method::
+:meth:`tables.File.remove_node` method::
 
-    h5f.removeNode('/', 'table_name')
+    h5f.remove_node('/', 'table_name')
 
 or through the table handle::
 
@@ -374,7 +374,7 @@ quite decoupled operations, so we will have a look at querying later and
 assume that you already know the set of rows you want to update.
 
 If the set happens to be a slice of the table, you may use the
-:`meth:`tables.Table.modifyRows` method or its equivalent
+:meth:`tables.Table.modify_rows` method or its equivalent
 :meth:`tables.Table.__setitem__` notation::
 
     rows = [
@@ -386,15 +386,15 @@ If the set happens to be a slice of the table, you may use the
     tbl[6:13:3] = rows  # this is the same
 
 If you just want to update some columns in the slice, use the
-:meth:`tables.Table.modifyColumns` or :meth:`tables.Table.modifyColumn`
+:meth:`tables.Table.modify_columns` or :meth:`tables.Table.modify_column`
 methods::
 
     cols = [
         [150.0, 100.0, 25.0]
     ]
     # These are all equivalent.
-    tbl.modifyColumns(start=6, stop=13, step=3, columns=cols, names=['temperature'])
-    tbl.modifyColumn(start=6, stop=13, step=3, column=cols[0], colname='temperature')
+    tbl.modify_columns(start=6, stop=13, step=3, columns=cols, names=['temperature'])
+    tbl.modify_column(start=6, stop=13, step=3, column=cols[0], colname='temperature')
     tbl.cols.temperature[6:13:3] = cols[0]
 
 The last line shows an example of using the `cols` accessor to get to the
@@ -430,7 +430,7 @@ Rows are deleted from a table with the following SQL syntax::
     DELETE FROM table_name
     [WHERE condition]
 
-:meth:`tables.Table.removeRows` is the method used for deleting rows in
+:meth:`tables.Table.remove_rows` is the method used for deleting rows in
 PyTables.
 However, it is very simple (only contiguous blocks of rows can be deleted) and
 quite inefficient, and one should consider whether *dumping filtered data from
@@ -438,12 +438,12 @@ one table into another* isn't a much more convenient approach.
 This is a far more optimized operation under PyTables which will be covered
 later.
 
-Anyway, using `removeRows()` is quite straightforward::
+Anyway, using `remove_row()` or `remove_rows()` is quite straightforward::
 
-    tbl.removeRows(12)  # delete one single row (12)
-    tbl.removeRows(12, 20)  # delete all rows from 12 to 19 (included)
-    tbl.removeRows(0, tbl.nrows)  # delete all rows unconditionally
-    tbl.removeRows(-4, tbl.nrows)  # delete the last 4 rows
+    tbl.remove_row(12)  # delete one single row (12)
+    tbl.remove_rows(12, 20)  # delete all rows from 12 to 19 (included)
+    tbl.remove_rows(0, tbl.nrows)  # delete all rows unconditionally
+    tbl.remove_rows(-4, tbl.nrows)  # delete the last 4 rows
 
 
 Reading data
@@ -529,10 +529,10 @@ For reading a *slice* of rows, use `[slice]` or the
     rows = tbl.read(start=6, stop=13, step=3)
     rows = tbl[6:13:3]  # equivalent
 
-For reading a *sequence* of rows, use the :meth:`tables.Table.readCoordinates`
+For reading a *sequence* of rows, use the :meth:`tables.Table.read_coordinates`
 method::
 
-    rows = tbl.readCoordinates([6, 7, 9, 11])
+    rows = tbl.read_coordinates([6, 7, 9, 11])
 
 Please note that you can add a `field='column_name'` argument to `read*()`
 methods in order to get only the given column instead of them all.
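+
+For instance, a minimal sketch reusing the `temperature` column from the
+examples above::
+
+    temps = tbl.read(start=6, stop=13, step=3, field='temperature')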
@@ -593,31 +593,32 @@ Here is an example of using `where()` with the previous example condition::
         do something with row['name'], row['x']...
 
 
-Reading seleted rows at once
-----------------------------
+Reading selected rows at once
+-----------------------------
 
 Like the aforementioned :meth:`tables.Table.read`,
-:meth:`tables.Table.readWhere` gets all the rows fulfilling the given condition
-and packs them in a single container (a la DBAPI `fetchmany()`).
+:meth:`tables.Table.read_where` gets all the rows fulfilling the given
+condition and packs them in a single container (a la DBAPI `fetchmany()`).
 The same warning applies: be careful on how many rows you expect to retrieve,
 or you may run out of memory!
 
-Here is an example of using `readWhere()` with the previous example condition::
+Here is an example of using `read_where()` with the previous example
+condition::
 
-    rows = tbl.readWhere('(sqrt(x**2 + y**2) <= 1) & (temperature < 100)')
+    rows = tbl.read_where('(sqrt(x**2 + y**2) <= 1) & (temperature < 100)')
 
 Please note that both :meth:`tables.Table.where` and
-:meth:`tables.Table.readWhere` can also take slicing arguments.
+:meth:`tables.Table.read_where` can also take slicing arguments.
 
 
 Getting the coordinates of selected rows
 ----------------------------------------
 
 There is yet another method for querying tables:
-:meth:`tables.Table.getWhereList`.
-It returns just a sequence of the numbers of the rows which fulfill the given
+:meth:`tables.Table.get_where_list`.
+It returns just a sequence of the numbers of the rows which fulfil the given
 condition.
-You may pass that sequence to :meth:tables.Table.readCoordinates`, e.g. to
+You may pass that sequence to :meth:`tables.Table.read_coordinates`, e.g. to
 retrieve data from a different table where rows with the same number as the
 queried one refer to the same first-class object or entity.
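+
+A minimal sketch of that pattern (the second table `tbl2` is hypothetical)::
+
+    coords = tbl.get_where_list('(sqrt(x**2 + y**2) <= 1) & (temperature < 100)')
+    rows = tbl2.read_coordinates(coords)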
 
@@ -678,7 +679,7 @@ Summary of row selection methods
 | **Iterative access** | ``__iter__()``, | ``iterrows(range)`` | ``itersequence()``    | ``where(condition)``    |
 |                      | ``iterrows()``  |                     |                       |                         |
 +----------------------+-----------------+---------------------+-----------------------+-------------------------+
-| **Block access**     | ``[:]``,        | ``[range]``,        | ``readCoordinates()`` |``readWhere(condition)`` |
+| **Block access**     | ``[:]``,        | ``[range]``,        | ``read_coordinates()``|``read_where(condition)``|
 |                      | ``read()``      | ``read(range)``     |                       |                         |
 +----------------------+-----------------+---------------------+-----------------------+-------------------------+
 
diff --git a/doc/source/cookbook/inmemory_hdf5_files.rst b/doc/source/cookbook/inmemory_hdf5_files.rst
index b30fb54..5103c40 100644
--- a/doc/source/cookbook/inmemory_hdf5_files.rst
+++ b/doc/source/cookbook/inmemory_hdf5_files.rst
@@ -15,10 +15,10 @@ Assuming the :file:`sample.h5` exists in the current folder, it is possible to
 open it in memory simply using the CORE driver at opening time.
 
 The HDF5 driver that one intends to use to open/create a file can be specified
-using the *driver* keyword argument of the :func:`tables.openFile` function::
+using the *driver* keyword argument of the :func:`tables.open_file` function::
 
     >>> import tables
-    >>> h5file = tables.openFile("sample.h", driver="H5FD_CORE")
+    >>> h5file = tables.open_file("sample.h", driver="H5FD_CORE")
 
 The content of the :file:`sample.h5` file is opened for reading. It is loaded into
 memory and all reading operations are performed without disk I/O overhead.
 Creating a new file in memory is as simple as creating a regular file; one
 just needs to specify the CORE driver::
 
     >>> import tables
-    >>> h5file = tables.openFile("new_sample.h5", "w", driver="H5FD_CORE")
+    >>> h5file = tables.open_file("new_sample.h5", "w", driver="H5FD_CORE")
     >>> import numpy
-    >>> a = h5file.createArray(h5file.root, "array", numpy.zeros((300, 300)))
+    >>> a = h5file.create_array(h5file.root, "array", numpy.zeros((300, 300)))
     >>> h5file.close()
 
 
@@ -64,12 +64,12 @@ data in the HDF5 file and depending on how fast is the disk I/O.
 Saving data to disk is the default behavior for the CORE driver in PyTables.
 
 This feature can be controlled using the *driver_core_backing_store*
-parameter of the :func:`tables.openFile` function.  Setting it to `False`
+parameter of the :func:`tables.open_file` function.  Setting it to `False`
 disables the backing store feature and all changes in the working `h5file`
 are lost after closing::
 
-    >>> h5file = tables.openFile("new_sample.h5", "w", driver="H5FD_CORE",
-    ...                          driver_core_bacling_store=0)
+    >>> h5file = tables.open_file("new_sample.h5", "w", driver="H5FD_CORE",
+    ...                           driver_core_backing_store=0)
 
 Please note that the *driver_core_backing_store* disables saving of data, not
 loading.
 In the following example the :file:`sample.h5` file is opened in
@@ -78,13 +78,13 @@ append mode.  All data in the existing :file:`sample.h5` file are loaded into
 memory and contents can actually be modified by the user::
 
     >>> import tables
-    >>> h5file = tables.openFile("sample.h5", "a", driver="H5FD_CORE",
-                                 driver_core_backing_store=0)
+    >>> h5file = tables.open_file("sample.h5", "a", driver="H5FD_CORE",
+                                  driver_core_backing_store=0)
     >>> import numpy
-    >>> h5file.createArray(h5file.root, "new_array", numpy.arange(20),
-                           title="New array")
+    >>> h5file.create_array(h5file.root, "new_array", numpy.arange(20),
+                            title="New array")
     >>> array2 = h5file.root.array2
-    >>> print array2
+    >>> print(array2)
     /array2 (Array(20,)) 'New array'
     >>> h5file.close()
 
@@ -105,7 +105,7 @@ one), STDIO or CORE.
 An example of how to get an image::
 
     >>> import tables
-    >>> h5file = tables.openFile("sample.h5")
+    >>> h5file = tables.open_file("sample.h5")
     >>> image = h5file.get_file_image()
     >>> h5file.close()
 
 The `image` string can be passed around and can also be used to initialize a
 new HDF5 file descriptor::
 
     >>> import tables
-    >>> h5file = tables.openFile("in-memory-sample.h5", driver="H5DF_CORE",
-                                 driver_core_backing_store=0)
-    >>> print h5file.root.array
+    >>> h5file = tables.open_file("in-memory-sample.h5", driver="H5DF_CORE",
+                                  driver_core_backing_store=0)
+    >>> print(h5file.root.array)
     /array (Array(300, 300)) 'Array'
     >>> h5file.setNodeAttr(h5file.root, "description", "In memory file example")
 
diff --git a/doc/source/cookbook/tailoring_atexit_hooks.rst b/doc/source/cookbook/tailoring_atexit_hooks.rst
index 60971c6..a177c6d 100644
--- a/doc/source/cookbook/tailoring_atexit_hooks.rst
+++ b/doc/source/cookbook/tailoring_atexit_hooks.rst
@@ -12,25 +12,34 @@ outputs::
 
     Closing remaining open files: /tmp/prova.h5... done
 
-The responsible of this behaviour is the :meth:`tables.file.close_open_files`
+The function responsible for this behaviour is :func:`tables.file.close_open_files`,
 which is registered via the :func:`atexit.register` function.
 Although you can't de-register already registered cleanup functions, you can
 register new ones to tailor the existing behaviour.
 For example, if you register this function::
 
     def my_close_open_files(verbose):
-        open_files = tb.file._open_files
+        open_files = tables.file._open_files
+
         are_open_files = len(open_files) > 0
+
         if verbose and are_open_files:
-            print >> sys.stderr, "Closing remaining open files:",
-        for fileh in open_files.keys():
+            sys.stderr.write("Closing remaining open files:")
+
+        # make a copy of the open_files.handlers container for the iteration
+        handlers = list(open_files.handlers)
+
+        for fileh in handlers:
             if verbose:
-                print >> sys.stderr, "%s..." % (open_files[fileh].filename,),
-            open_files[fileh].close()
+                sys.stderr.write("%s..." % fileh.filename)
+
+            fileh.close()
+
             if verbose:
-                print >> sys.stderr, "done",
+                sys.stderr.write("done")
+
         if verbose and are_open_files:
-            print >> sys.stderr
+            sys.stderr.write("\n")
 
     import sys, atexit
     atexit.register(my_close_open_files, False)
diff --git a/doc/source/project_pointers.rst b/doc/source/project_pointers.rst
index 4de5571..ca5043d 100644
--- a/doc/source/project_pointers.rst
+++ b/doc/source/project_pointers.rst
@@ -4,10 +4,10 @@ Project pointers
 
 * `Project Home Page <http://www.pytables.org>`_
 * `GitHub Project Page <https://github.com/PyTables>`_
-* `Online HTML Documentation <http://pytables.github.com>`_
+* `Online HTML Documentation <http://pytables.github.io>`_
 * `Download area <http://sourceforge.net/projects/pytables/files/pytables>`_
 * `Git Repository browser <https://github.com/PyTables/PyTables>`_
-* `Users Mailing List <https://lists.sourceforge.net/lists/listinfo/pytables-users>`_
+* `Users Mailing List <https://groups.google.com/group/pytables-users>`_
 * `Announce Mailing List <https://lists.sourceforge.net/lists/listinfo/pytables-announce>`_
 * `Developers Mailing List <https://groups.google.com/group/pytables-dev>`_
 * Continuous Integration:
diff --git a/doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst b/doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst
index f152d2e..2eaf2a6 100644
--- a/doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst
+++ b/doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst
@@ -1 +1,303 @@
-.. include:: ../../../RELEASE_NOTES.txt
+=======================================
+ Release notes for PyTables 3.0 series
+=======================================
+
+:Author: PyTables Developers
+:Contact: pytables at googlemail.com
+
+.. py:currentmodule:: tables
+
+
+Changes from 2.4 to 3.0
+=======================
+
+New features
+------------
+
+- Since this release PyTables provides full support to Python_ 3
+  (closes :issue:`188`).
+
+- The entire code base is now more compliant with coding style guidelines
+  described in PEP8_ (closes :issue:`103` and :issue:`224`).
+  See `API changes`_ for more details.
+
+- Basic support for HDF5 drivers.  Now it is possible to open/create an
+  HDF5 file using one of the SEC2, DIRECT, LOG, WINDOWS, STDIO or CORE
+  drivers.  Users can also set the main driver parameters (closes
+  :issue:`166`).
+  Thanks to Michal Slonina.
+
+- Basic support for in-memory image files.  An HDF5 file can be set from or
+  copied into a memory buffer (thanks to Michal Slonina).  This feature is
+  only available if PyTables is built against HDF5 1.8.9 or newer.
+  Closes :issue:`165` and :issue:`173`.
+
+- New :meth:`File.get_filesize` method for retrieving the HDF5 file size.
+
+- Implemented methods to get/set the user block size in an HDF5 file
+  (closes :issue:`123`).
+
+- Improved support for PyInstaller_.  Now it is easier to pack frozen
+  applications that use the PyTables package (closes :issue:`177`).
+  Thanks to Stuart Mentzer and Christoph Gohlke.
+
+- All read methods now have an optional *out* argument for passing a
+  pre-allocated array to store the data (closes :issue:`192`; see the
+  sketch after this list).
+
+- Added support for the floating point data types with extended precision
+  (Float96, Float128, Complex192 and Complex256).  This feature is only
+  available if numpy_ provides it as well.
+  Closes :issue:`51` and :issue:`214`.  Many thanks to Andrea Bedini.
+
+- Consistent ``create_xxx()`` signatures.  Now it is possible to create all
+  data sets :class:`Array`, :class:`CArray`, :class:`EArray`,
+  :class:`VLArray`, and :class:`Table` from existing Python objects (closes
+  :issue:`61` and :issue:`249`).  See also the `API changes`_ section.
+
+- Complete rewrite of the :mod:`nodes.filenode` module. Now it is fully
+  compliant with the interfaces defined in the standard :mod:`io` module.
+  Only non-buffered binary I/O is supported currently.
+  See also the `API changes`_ section.  Closes :issue:`244`.
+
+- New :program:`pt2to3` tool is provided to help users port their
+  applications to the new API (see `API changes`_ section).
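+
+A small sketch of the new *obj* and *out* arguments (the ``h5f`` file handle
+and the dataset name are hypothetical)::
+
+    import numpy as np
+    arr = h5f.create_array('/', 'a', obj=np.arange(10))  # shape/atom inferred
+    out = np.empty(10, dtype=arr.dtype)
+    arr.read(out=out)  # fill a pre-allocated buffer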
+
+
+Improvements
+------------
+
+- Improved runtime checks on dynamic loading of libraries: meaningful error
+  messages are generated in case of failure.
+  Also, PyTables no longer alters the system PATH.
+  Closes :issue:`178` and :issue:`179` (thanks to Christoph Gohlke).
+
+- Improved list of search paths for libraries as suggested by Nicholaus
+  Halecky (see :issue:`219`).
+
+- Removed deprecated Cython_ include (.pxi) files. Contents of
+  :file:`convtypetables.pxi` have been moved into :file:`utilsextension.pyx`.
+  Closes :issue:`217`.
+
+- The internal Blosc_ library has been upgraded to version 1.2.3.
+
+- Pre-load the bzip2_ library on Windows (closes :issue:`205`).
+
+- The :meth:`File.get_node` method now accepts unicode paths
+  (closes :issue:`203`)
+
+- Improved compatibility with Cython_ 0.19 (see :issue:`220` and
+  :issue:`221`)
+
+- Improved compatibility with numexpr_ 2.1 (see also :issue:`199` and
+  :issue:`241`)
+
+- Improved compatibility with development versions of numpy_
+  (see :issue:`193`)
+
+- Packaging: since this release the standard tar-ball package no longer
+  includes the PDF version of the "PyTables User Guide", so it is a little
+  smaller now.  The complete, pre-built documentation in both HTML and PDF
+  formats is available in the `download area`_ on SourceForge.net.
+  Closes: :issue:`172`.
+
+- Now PyTables also uses `Travis-CI`_ as a continuous integration service.
+  All branches and all pull requests are automatically tested with different
+  Python_ versions.  Closes :issue:`212`.
+
+
+Other changes
+-------------
+
+- PyTables now requires Python 2.6 or newer.
+
+- Minimum supported version of Numexpr_ is now 2.0.
+
+
+API changes
+-----------
+
+The entire PyTables API has been made more PEP8_ compliant (see :issue:`224`).
+
+This means that many methods, attributes, module global variables and also
+keyword parameters have been renamed to be compliant with PEP8_ style
+guidelines (e.g. the ``tables.hdf5Version`` constant has been renamed to
+``tables.hdf5_version``).
+
+We made the best effort to maintain compatibility with the old API for existing
+applications.  In most cases, the old 2.x API is still available and usable
+even if it is now deprecated (see the Deprecations_ section).
+
+The only important backwards incompatible API changes are for names of
+function/method arguments.  All uses of keyword arguments should be
+checked and fixed to use the new naming convention.
+
+The new :program:`pt2to3` tool can be used to port PyTables based applications
+to the new API.
+
+Many deprecated features and support for obsolete modules have been dropped:
+
+- The deprecated :data:`is_pro` module constant has been removed
+
+- The nra module and support for the obsolete numarray module have been
+  removed.  The *numarray* flavor is no longer supported (closes :issue:`107`).
+
+- Support for the obsolete Numeric module has been removed.
+  The *numeric* flavor is no longer available (closes :issue:`108`).
+
+- The tables.netcdf3 module has been removed (closes :issue:`68`).
+
+- The deprecated :exc:`exceptions.Incompat16Warning` exception has been
+  removed
+
+- The :meth:`File.create_external_link` method no longer has a keyword
+  parameter named *warn16incompat*.  It was deprecated in PyTables 2.4.
+
+Moreover:
+
+- The :meth:`File.create_array`, :meth:`File.create_carray`,
+  :meth:`File.create_earray`, :meth:`File.create_vlarray`, and
+  :meth:`File.create_table` methods of the :class:`File` objects gained a
+  new (optional) keyword argument named ``obj``.  It can be used to initialize
+  the newly created dataset with an existing Python object, though normally
+  these are numpy_ arrays.
+
+  The *atom*/*descriptor* and *shape* parameters are now optional if the
+  *obj* argument is provided.
+
+- The :mod:`nodes.filenode` has been completely rewritten to be fully
+  compliant with the interfaces defined in the :mod:`io` module.
+
+  The FileNode classes currently implemented are intended for binary I/O.
+
+  Main changes:
+
+  * the FileNode base class is no longer available,
+  * the new versions of the :class:`nodes.filenode.ROFileNode` and
+    :class:`nodes.filenode.RAFileNode` objects no longer expose the *offset*
+    attribute (the *seek* and *tell* methods can be used instead),
+  * the *lineSeparator* property is no longer available and the ``\n``
+    character is always used as the line separator.
+
+- The `__version__` module constant has been removed from almost all the
+  modules (it was not used after the switch to Git).  Of course the package
+  level constant (:data:`tables.__version__`) still remains.
+  Closes :issue:`112`.
+
+- The :func:`lrange` function has been dropped in favor of xrange (:issue:`181`).
+
+- The :data:`parameters.MAX_THREADS` configuration parameter has been dropped
+  in favor of :data:`parameters.MAX_BLOSC_THREADS` and
+  :data:`parameters.MAX_NUMEXPR_THREADS` (closes :issue:`147`).
+
+- The :func:`conditions.compile_condition` function no longer has a *copycols*
+  argument; it has not been necessary since Numexpr_ 1.3.1.
+  Closes :issue:`117`.
+
+- The *expectedsizeinMB* parameter of the :meth:`File.create_vlarray` and of
+  the :meth:`VLArray.__init__` methods has been replaced by *expectedrows*.
+  See also (:issue:`35`).
+
+- The :meth:`Table.whereAppend` method has been renamed to
+  :meth:`Table.append_where` (closes :issue:`248`).
+
+Please refer to the :doc:`../MIGRATING_TO_3.x` document for more details about
+API changes and for some useful hints about the migration process from the 2.X
+API to the new one.
+
+
+Other possibly incompatible changes
+-----------------------------------
+
+- All methods of the :class:`Table` class that take *start*, *stop* and
+  *step* parameters (including :meth:`Table.read`, :meth:`Table.where`,
+  :meth:`Table.iterrows`, etc) have been redesigned to have a consistent
+  behaviour.  The *start*, *stop* and *step* parameters and their default
+  values now always work exactly as in the standard :class:`slice` objects.
+  Closes :issue:`44` and :issue:`255`.
+
+- Unicode attributes are no longer stored in the HDF5 file as pickled strings.
+  They are now saved in the HDF5 file as UTF-8 encoded strings.
+
+  Although this does not introduce any API breakage, files produced are
+  different (for unicode attributes) from the ones produced by earlier
+  versions of PyTables.
+
+- System attributes are now stored in the HDF5 file using the character set
+  that reflects the native string behaviour: ASCII for Python 2 and UTF8 for
+  Python 3.  In any case, system attributes are represented as Python strings.
+
+- The :meth:`iterrows` method of :class:`*Array` and :class:`Table` as well
+  as :meth:`Table.itersorted` now behave like functions in the standard
+  :mod:`itertools` module.
+  If the *start* parameter is provided and *stop* is None then the
+  array/table is iterated from *start* to the last line.
+  In PyTables < 3.0 only one element was returned.
+
+
+Deprecations
+------------
+
+- As described in `API changes`_, all functions, methods and attribute names
+  that were not compliant with the PEP8_ guidelines have been changed.
+  Old names are still available but they are deprecated.
+
+- The use of upper-case keyword arguments in the :func:`open_file` function
+  and the :class:`File` class initializer is now deprecated.  All parameters
+  defined in the :file:`tables/parameters.py` module can still be passed as
+  keyword arguments to the :func:`open_file` function by just using a lower-case
+  version of the parameter name.
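+
+A minimal sketch (the file name is hypothetical; ``node_cache_slots`` is the
+lower-case form of the ``NODE_CACHE_SLOTS`` parameter)::
+
+    h5f = tables.open_file('data.h5', 'r', node_cache_slots=128)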
+
+
+Bugs fixed
+----------
+
+- Better checks for access to closed files (closes :issue:`62`).
+
+- Fix for :meth:`File.renameNode` where in certain cases
+  :meth:`File._g_updateLocation` was wrongly called (closes :issue:`208`).
+  Thanks to Michka Popoff.
+
+- Fixed ptdump failure on data with nested columns (closes :issue:`213`).
+  Thanks to Alexander Ford.
+
+- Fixed an error in :func:`open_file` when *filename* is a :class:`numpy.str_`
+  (closes :issue:`204`)
+
+- Fixed :issue:`119`, :issue:`230` and :issue:`232`, where an index on
+  :class:`Time64Col` (only; :class:`Time32Col` was ok) hid the data on
+  selection from a Table.  Thanks to Jeff Reback.
+
+- Fixed ``tables.tests.test_nestedtypes.ColsTestCase.test_00a_repr`` test
+  method.  Now the ``repr`` of cols on big-endian platforms is correctly
+  handled (closes :issue:`237`).
+
+- Fixed a bug with completely sorted indexes where *nrowsinbuf* must be equal
+  to or greater than the *chunksize* (thanks to Thadeus Burgess).
+  Closes :issue:`206` and :issue:`238`.
+
+- Fixed an issue with :meth:`Table.itersorted` and reverse iteration
+  (closes :issue:`252` and :issue:`253`).
+
+
+.. _Python: http://www.python.org
+.. _PEP8: http://www.python.org/dev/peps/pep-0008
+.. _PyInstaller: http://www.pyinstaller.org
+.. _Blosc: https://github.com/FrancescAlted/blosc
+.. _bzip2: http://www.bzip.org
+.. _Cython: http://www.cython.org
+.. _Numexpr: http://code.google.com/p/numexpr
+.. _numpy: http://www.numpy.org
+.. _`download area`: http://sourceforge.net/projects/pytables/files/pytables
+.. _`Travis-CI`: https://travis-ci.org
+
+
+  **Enjoy data!**
+
+  -- The PyTables Developers
+
+
+.. Local Variables:
+.. mode: rst
+.. coding: utf-8
+.. fill-column: 72
+.. End:
diff --git a/doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst b/doc/source/release-notes/RELEASE_NOTES_v3.1.x.rst
similarity index 100%
copy from doc/source/release-notes/RELEASE_NOTES_v3.0.x.rst
copy to doc/source/release-notes/RELEASE_NOTES_v3.1.x.rst
diff --git a/doc/source/release_notes.rst b/doc/source/release_notes.rst
index 052788d..709d0d7 100644
--- a/doc/source/release_notes.rst
+++ b/doc/source/release_notes.rst
@@ -17,6 +17,7 @@ PyTables
 .. toctree::
     :maxdepth: 1
 
+    release-notes/RELEASE_NOTES_v3.1.x
     release-notes/RELEASE_NOTES_v3.0.x
     release-notes/RELEASE_NOTES_v2.4.x
     release-notes/RELEASE_NOTES_v2.3.x
@@ -57,6 +58,9 @@ Release timeline
 ----------------
 
 =============== =========== ==========
+PyTables        3.1.0       2014-02-05
+PyTables        3.1.0rc2    2014-01-22
+PyTables        3.1.0rc1    2014-01-17
 PyTables        3.0         2013-06-01
 PyTables        3.0rc3      2013-05-29
 PyTables        3.0rc2      2013-05-17
diff --git a/doc/source/usersguide/bibliography.rst b/doc/source/usersguide/bibliography.rst
index 6faa9f7..9c91b79 100644
--- a/doc/source/usersguide/bibliography.rst
+++ b/doc/source/usersguide/bibliography.rst
@@ -27,7 +27,7 @@ Bibliography
     objects(II). Article describing XML Objectify, a Python module that
     allows working with XML documents as Python objects.
     Some of the ideas presented here are used in PyTables.
-    `<http://www.ibm.com/developerworks/xml/library/xml-matters2/index.html>`_.
+    `<http://gnosis.cx/publish/programming/xml_matters_2.html>`_.
 
 .. _CYTHON:
 
diff --git a/doc/source/usersguide/filenode.rst b/doc/source/usersguide/filenode.rst
index a15a6ab..fc3049f 100644
--- a/doc/source/usersguide/filenode.rst
+++ b/doc/source/usersguide/filenode.rst
@@ -116,7 +116,7 @@ so that the PyTables library would be able to optimize the data access.
 new_node() creates a PyTables node where it is told to. To prove it, we will
 try to get the NODE_TYPE attribute from the newly created node::
 
-    >>> print h5file.get_node_attr('/fnode_test', 'NODE_TYPE')
+    >>> print(h5file.get_node_attr('/fnode_test', 'NODE_TYPE'))
     file
 
 
@@ -125,15 +125,15 @@ Using a file node
 As stated above, you can use the new node file as any other opened file. Let
 us try to write some text in and read it::
 
-    >>> print >> fnode, "This is a test text line."
-    >>> print >> fnode, "And this is another one."
-    >>> print >> fnode
+    >>> print("This is a test text line.", file=fnode)
+    >>> print("And this is another one.", file=fnode)
+    >>> print(file=fnode)
     >>> fnode.write("Of course, file methods can also be used.")
     >>>
     >>> fnode.seek(0)  # Go back to the beginning of file.
     >>>
     >>> for line in fnode:
-    ...     print repr(line)
+    ...     print(repr(line))
     'This is a test text line.\\n'
     'And this is another one.\\n'
     '\\n'
@@ -151,7 +151,7 @@ ValueError. To close a file node, simply delete it or call its close()
 method::
 
     >>> fnode.close()
-    >>> print fnode.closed
+    >>> print(fnode.closed)
     True
 
 
@@ -173,12 +173,12 @@ example::
 
     >>> node = h5file.root.fnode_test
     >>> fnode = filenode.open_node(node, 'a+')
-    >>> print repr(fnode.readline())
+    >>> print(repr(fnode.readline()))
     'This is a test text line.\\n'
-    >>> print fnode.tell()
+    >>> print(fnode.tell())
     26
-    >>> print >> fnode, "This is a new line."
-    >>> print repr(fnode.readline())
+    >>> print("This is a new line.", file=fnode)
+    >>> print(repr(fnode.readline()))
     ''
 
 Of course, the data append process places the pointer at the end of the file,
@@ -187,7 +187,7 @@ to see the whole contents of our file::
 
     >>> fnode.seek(0)
     >>> for line in fnode:
-    ...   print repr(line)
+    ...   print(repr(line))
     'This is a test text line.\\n'
     'And this is another one.\\n'
     '\\n'
diff --git a/doc/source/usersguide/index.rst b/doc/source/usersguide/index.rst
index bc082c2..19f9ea5 100644
--- a/doc/source/usersguide/index.rst
+++ b/doc/source/usersguide/index.rst
@@ -13,7 +13,7 @@ Hierarchical datasets in Python
 
             |copy| 2008, 2009, 2010 - Francesc Alted
 
-            |copy| 2011-2013 - PyTables maintainers
+            |copy| 2011-2014 - PyTables maintainers
 
 --------
 Contents
diff --git a/doc/source/usersguide/installation.rst b/doc/source/usersguide/installation.rst
index c0e286c..294e20e 100644
--- a/doc/source/usersguide/installation.rst
+++ b/doc/source/usersguide/installation.rst
@@ -11,11 +11,16 @@ The Python Distutils are used to build and install PyTables, so it is fairly
 simple to get the application up and running. If you want to install the
 package from sources, just keep reading the next section.
 
-However, if you are running Windows and want to install precompiled binaries,
-you can jump straight to :ref:`binaryInstallationDescr`. In addition, binary
-packages are available for many different Linux distributions, MacOSX and
-other Unices.  Just check the package repository for your preferred operating
-system.
+However, if you want to go straight to binaries that 'just work' for the main
+platforms (Linux, Mac OSX and Windows), you might want to use the excellent
+Anaconda_ or Canopy_ distributions.  PyTables usually distributes its own
+Windows binaries too; see :ref:`binaryInstallationDescr` for instructions.
+Finally, `Christoph Gohlke`_ maintains an excellent collection of binary
+packages for Windows on his site.
+
+.. _Anaconda: https://store.continuum.io/cshop/anaconda/
+.. _Canopy: https://www.enthought.com/products/canopy/
+.. _`Christoph Gohlke`: http://www.lfd.uci.edu/~gohlke/pythonlibs/
 
 Installation from source
 ------------------------
@@ -42,14 +47,14 @@ Prerequisites
 First, make sure that you have
 
 * Python_ >= 2.6 including Python 3.x
-* HDF5_ >= 1.8.4,
+* HDF5_ >= 1.8.4 (>=1.8.7 is strongly recommended),
 * NumPy_ >= 1.4.1,
 * Numexpr_ >= 2.0 and
 * Cython_ >= 0.13
-* argparse_ (only Python 2.6, it it used by the :program:`pt2to3` utility)
+* argparse_ (only Python 2.6, it is used by the :program:`pt2to3` utility)
 
-installed (for testing purposes, we are using HDF5_ 1.8.9, NumPy_ 1.7.1
-and Numexpr_ 2.1 currently). If you don't, fetch and install them before
+installed (for testing purposes, we are using HDF5_ 1.8.12, NumPy_ 1.8.0
+and Numexpr_ 2.2.2 currently). If you don't, fetch and install them before
 proceeding.
 
 .. _Python: http://www.python.org
@@ -61,6 +66,12 @@ proceeding.
 
 .. note::
 
+    HDF5 versions < 1.8.7 are supported with some limitations.
+    It is not possible to open the same file multiple times (simultaneously),
+    even in read-only mode.
+
+.. note::
+
     Currently PyTables does not use setuptools_ by default so do not expect
     that the setup.py script automatically install all packages PyTables
     depends on.
@@ -69,8 +80,8 @@ proceeding.
 .. _ctypes: https://pypi.python.org/pypi/ctypes
 
 Compile and install these packages (but see :ref:`prerequisitesBinInst` for
-instructions on how to install precompiled binaries if you are not willing to
-compile the prerequisites on Windows systems).
+instructions on how to install pre-compiled binaries if you are not willing
+to compile the prerequisites on Windows systems).
 
 For compression (and possibly improved performance), you will need to install
 the Zlib (see :ref:`[ZLIB] <ZLIB>`), which is also required by HDF5 as well.
@@ -78,8 +89,11 @@ You may also optionally install the excellent LZO compression library (see
 :ref:`[LZO] <LZO>` and :ref:`compressionIssues`). The high-performance bzip2
 compression library can also be used with PyTables (see
 :ref:`[BZIP2] <BZIP2>`).
-The Blosc (see :ref:`[BLOSC] <BLOSC>`) compression library is embedded in
-PyTables, so you don't need to install it separately.
+
+The Blosc (see :ref:`[BLOSC] <BLOSC>`) compression library is embedded
+in PyTables, so the embedded copy will be used whenever the library is
+not found on the system.  Thus, if the installer warns about not
+finding Blosc, do not worry too much ;)
 
 **Unix**
 
@@ -90,20 +104,21 @@ PyTables, so you don't need to install it separately.
     may wish to use) or if you have several versions of a library installed
     and want to use a particular one, then you can set the path to the
     resource in the environment, by setting the values of the
-    :envvar:`HDF5_DIR`, :envvar:`LZO_DIR`, or :envvar:`BZIP2_DIR` environment
-    variables to the path to the particular resource. You may also specify the
-    locations of the resource root directories on the setup.py command line.
-    For example::
+    :envvar:`HDF5_DIR`, :envvar:`LZO_DIR`, :envvar:`BZIP2_DIR` or
+    :envvar:`BLOSC_DIR` environment variables to the path to the particular
+    resource. You may also specify the locations of the resource root
+    directories on the setup.py command line.  For example::
 
-        --hdf5=/stuff/hdf5-1.8.9
+        --hdf5=/stuff/hdf5-1.8.12
         --lzo=/stuff/lzo-2.02
         --bzip2=/stuff/bzip2-1.0.5
+        --blosc=/stuff/blosc-1.3.2
 
     If your HDF5 library was built as a shared library not in the runtime load
     path, then you can specify the additional linker flags needed to find the
     shared library on the command line as well. For example::
 
-        --lflags="-Xlinker -rpath -Xlinker /stuff/hdf5-1.8.9/lib"
+        --lflags="-Xlinker -rpath -Xlinker /stuff/hdf5-1.8.12/lib"
 
     You may also want to try setting the :envvar:`LD_LIBRARY_PATH`
     environment variable to point to the directory where the shared libraries
@@ -113,7 +128,7 @@ PyTables, so you don't need to install it separately.
     It is also possible to link with specific libraries by setting the
     :envvar:`LIBS` environment variable::
 
-        LIBS="hdf5-1.8.9 nsl"
+        LIBS="hdf5-1.8.12 nsl"
 
     Finally, you can give additional flags to your compiler by passing them to
     the :option:`--cflags` flag::
@@ -146,14 +161,15 @@ PyTables, so you don't need to install it separately.
     Once you have installed the prerequisites, setup.py needs to know where
     the necessary library *stub* (.lib) and *header* (.h) files are installed.
     You can set the path to the include and dll directories for the HDF5
-    (mandatory) and LZO or BZIP2 (optional) libraries in the environment, by
-    setting the values of the :envvar:`HDF5_DIR`, :envvar:`LZO_DIR`, or
-    :envvar:`BZIP2_DIR` environment variables to the path to the particular
-    resource.  For example::
+    (mandatory) and LZO, BZIP2, BLOSC (optional) libraries in the environment,
+    by setting the values of the :envvar:`HDF5_DIR`, :envvar:`LZO_DIR`,
+    :envvar:`BZIP2_DIR` or :envvar:`BLOSC_DIR` environment variables to the
+    path to the particular resource.  For example::
 
         set HDF5_DIR=c:\\stuff\\hdf5-1.8.5-32bit-VS2008-IVF101\\release
         set LZO_DIR=c:\\Program Files (x86)\\GnuWin32
         set BZIP2_DIR=c:\\Program Files (x86)\\GnuWin32
+        set BLOSC_DIR=c:\\Program Files (x86)\\Blosc
 
     You may also specify the locations of the resource root directories on the
     setup.py command line.
@@ -162,6 +178,7 @@ PyTables, so you don't need to install it separately.
         --hdf5=c:\\stuff\\hdf5-1.8.5-32bit-VS2008-IVF101\\release
         --lzo=c:\\Program Files (x86)\\GnuWin32
         --bzip2=c:\\Program Files (x86)\\GnuWin32
+        --blosc=c:\\Program Files (x86)\\Blosc
 
 **Development version (Unix)**
 
@@ -201,12 +218,12 @@ you can proceed with the PyTables package itself.
    **Unix**
       In the sh shell and its variants::
 
-        $ cd build/lib.linux-x86_64-2.7
+        $ cd build/lib.linux-x86_64-3.3
         $ env PYTHONPATH=. python tables/tests/test_all.py
 
       or, if you prefer::
 
-        $ cd build/lib.linux-x86_64-2.7
+        $ cd build/lib.linux-x86_64-3.3
         $ env PYTHONPATH=. python -c "import tables; tables.test()"
 
       .. note::
@@ -287,10 +304,11 @@ you can proceed with the PyTables package itself.
 
    **Windows**
 
-      Put the DLL libraries (hdf5dll.dll and, optionally, lzo1.dll and
-      bzip2.dll) in a directory listed in your :envvar:`PATH` environment
-      variable. The setup.py installation program will print out a warning to
-      that effect if the libraries can not be found.
+      Put the DLL libraries (hdf5dll.dll and, optionally, lzo1.dll,
+      bzip2.dll or blosc.dll) in a directory listed in your
+      :envvar:`PATH` environment variable. The setup.py installation
+      program will print out a warning to that effect if the libraries
+      can not be found.
 
 #. To install the entire PyTables Python package, change back to the root
    distribution directory and run the following command (make sure you have
@@ -319,6 +337,56 @@ you can proceed with the PyTables package itself.
 That's it! Now you can skip to the next chapter to learn how to use PyTables.
 
 
+Installation with :program:`pip`
+--------------------------------
+
+Many users find it convenient to use the :program:`pip` program (or similar
+tools) to install Python packages.
+
+As explained in previous sections, the user should in any case ensure that
+all dependencies listed in the `Prerequisites`_ section are correctly installed.
+
+The simplest way to install PyTables using :program:`pip` is the following::
+
+  $ pip install tables
+
+The following example shows how to install the latest stable version of
+PyTables in the user folder when an older version of the package is already
+installed at the system level::
+
+  $ pip install --user --upgrade tables
+
+The `--user` option tells the :program:`pip` tool to install the package in
+the user folder (``$HOME/.local`` on GNU/Linux and Unix systems), while the
+`--upgrade` option forces the installation of the latest version even if an
+older version of the package is already installed.
+
+The :program:`pip` tool can also be used to install packages from a source
+tar-ball::
+
+  $ pip install tables-3.0.0.tar.gz
+
+To install the development version of PyTables from the *develop* branch of
+the main :program:`git` :ref:`[GIT] <GIT>` repository, the command is the
+following::
+
+  $ pip install git+https://github.com/PyTables/PyTables.git@develop#egg=tables
+
+A similar command can be used to install a specific tagged version::
+
+  $ pip install git+https://github.com/PyTables/PyTables.git@v.2.4.0#egg=tables
+
+Finally, PyTables developers provide a :file:`requirements.txt` file that
+can be used by :program:`pip` to install the PyTables dependencies::
+
+  $ wget https://raw.github.com/PyTables/PyTables/develop/requirements.txt
+  $ pip install -r requirements.txt
+
+Of course, the :file:`requirements.txt` file can only be used to install
+Python packages.  Other dependencies, like the HDF5 library or compression
+libraries, have to be installed by the user.
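If some of those libraries live in non-standard locations, the environment
variables described in the section above are honored by a
:program:`pip`-driven source build as well (the paths below are merely
illustrative)::

  $ export HDF5_DIR=/stuff/hdf5-1.8.12
  $ export BLOSC_DIR=/stuff/blosc-1.3.2
  $ pip install tables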
+
+
 .. _binaryInstallationDescr:
 
 Binary installation (Windows)
diff --git a/doc/source/usersguide/libref/homogenous_storage.rst b/doc/source/usersguide/libref/homogenous_storage.rst
index df1b48a..18709f1 100644
--- a/doc/source/usersguide/libref/homogenous_storage.rst
+++ b/doc/source/usersguide/libref/homogenous_storage.rst
@@ -109,6 +109,8 @@ VLArray methods
 
 .. automethod:: VLArray.read
 
+.. automethod:: VLArray.get_row_size
+
 
 VLArray special methods
 ~~~~~~~~~~~~~~~~~~~~~~~
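The new :meth:`VLArray.get_row_size` returns the total size, in bytes, taken
by a given row.  A minimal sketch, with an illustrative file name::

    import numpy as np
    import tables

    with tables.open_file('vlarray-size.h5', 'w') as h5file:
        vlarray = h5file.create_vlarray(h5file.root, 'vla',
                                        tables.Int32Atom(), 'ragged rows')
        vlarray.append(np.array([1, 2, 3], dtype=np.int32))
        print(vlarray.get_row_size(0))  # 3 elements * 4 bytes -> 12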
diff --git a/doc/source/usersguide/libref/structured_storage.rst b/doc/source/usersguide/libref/structured_storage.rst
index ec844db..ecb9ab7 100644
--- a/doc/source/usersguide/libref/structured_storage.rst
+++ b/doc/source/usersguide/libref/structured_storage.rst
@@ -83,6 +83,8 @@ Table methods - writing
 
 .. automethod:: Table.remove_rows
 
+.. automethod:: Table.remove_row
+
 .. automethod:: Table.__setitem__
 
 
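The new :meth:`Table.remove_row` deletes a single row given its index,
complementing the slice-oriented :meth:`Table.remove_rows`.  A minimal
sketch, assuming the tutorial file created earlier::

    import tables

    with tables.open_file('tutorial1.h5', 'a') as h5file:
        table = h5file.root.detector.readout
        table.remove_row(2)  # delete the third row only
        # roughly equivalent to: table.remove_rows(2, 3)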
diff --git a/doc/source/usersguide/optimization.rst b/doc/source/usersguide/optimization.rst
index 4912ec9..86b27bf 100644
--- a/doc/source/usersguide/optimization.rst
+++ b/doc/source/usersguide/optimization.rst
@@ -902,7 +902,7 @@ by using the :meth:`Table.append` method and building your own buffers [4]_.
 As an example, imagine that you have a small script that reads and selects
 data over a series of datasets, like this::
 
-    def readFile(filename):
+    def read_file(filename):
         "Select data from all the tables in filename"
         fileh = open_file(filename, mode = "r")
         result = []
@@ -912,15 +912,15 @@ data over a series of datasets, like this::
         return result
 
     if __name__=="__main__":
-        print readFile("myfile.h5")
+        print(read_file("myfile.h5"))
 
 In order to accelerate this piece of code, you can rewrite your main program
 to look like::
 
     if __name__=="__main__":
         import psyco
-        psyco.bind(readFile)
-        print readFile("myfile.h5")
+        psyco.bind(read_file)
+        print(read_file("myfile.h5"))
 
 That's all!  From now on, each time that you execute your Python script,
 Psyco will deploy its sophisticated algorithms so as to accelerate your
@@ -981,12 +981,12 @@ One thing that deserves some discussion is the election of the parameter that
 sets the maximum amount of nodes to be kept in memory at any time.
 As PyTables is meant to be deployed in machines that can have potentially low
 memory, the default for it is quite conservative (you can look at its actual
-value in the NODE_CACHE_SLOTS parameter in module
+value in the :data:`parameters.NODE_CACHE_SLOTS` parameter in module
 :file:`tables/parameters.py`). However, if you usually need to deal with
 files that have many more nodes than the maximum default, and you have a lot
 of free memory in your system, then you may want to experiment in order to
-see which is the appropriate value of NODE_CACHE_SLOTS that fits better your
-needs.
+see which is the appropriate value of :data:`parameters.NODE_CACHE_SLOTS` that
+fits better your needs.
 
 As an example, look at the next code::
 
@@ -1001,7 +1001,7 @@ As an example, look at the next code::
         fileh.close()
 
 We will be running the code above against a couple of files having a
-/newgroup containing 100 tables and 1000 tables respectively.  In addition,
+``/newgroup`` containing 100 tables and 1000 tables respectively.  In addition,
 this benchmark is run twice for two different values of the LRU cache size,
 specifically 256 and 1024. You can see the results in
 :ref:`table <optimization_table_1>`.
@@ -1061,19 +1061,19 @@ memory used.
 
 Also worth noting is that if you have a lot of memory available and
 performance is absolutely critical, you may want to try out a negative value
-for NODE_CACHE_SLOTS.  This will cause that all the touched nodes will be
-kept in an internal dictionary and this is the faster way to load/retrieve
-nodes.
+for :data:`parameters.NODE_CACHE_SLOTS`.  This will cause all the touched
+nodes to be kept in an internal dictionary, and this is the fastest way to
+load/retrieve nodes.
 However, and in order to avoid a large memory consumption, the user will be
-warned when the number of loaded nodes will reach the -NODE_CACHE_SLOTS
+warned when the number of loaded nodes reaches the ``-NODE_CACHE_SLOTS``
 value.
 
-Finally, a value of zero in NODE_CACHE_SLOTS means that any cache mechanism
-is disabled.
+Finally, a value of zero in :data:`parameters.NODE_CACHE_SLOTS` means that
+any cache mechanism is disabled.
 
 At any rate, if you feel that this issue is important for you, there is no
 replacement for setting your own experiments up in order to proceed to
-fine-tune the NODE_CACHE_SLOTS parameter.
+fine-tune the :data:`parameters.NODE_CACHE_SLOTS` parameter.
 
 .. note::
 
@@ -1082,6 +1082,15 @@ fine-tune the NODE_CACHE_SLOTS parameter.
     working with it.
 
 
+.. note::
+
+    The numerical results reported in :ref:`table <optimization_table_1>` were
+    obtained with PyTables < 3.1. The node cache mechanism has been completely
+    redesigned in PyTables 3.1, so while all the comments above remain valid,
+    the numerical values could differ slightly from the ones reported in
+    :ref:`table <optimization_table_1>`.
+
+
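As a quick illustration, the parameter can also be overridden on a per-file
basis by passing it as a keyword argument to :func:`open_file` (the file name
below is illustrative)::

    import tables

    # A bigger node cache for a file with many nodes.
    fileh = tables.open_file('manynodes.h5', 'r', NODE_CACHE_SLOTS=1024)
    fileh.close()

    # A negative value keeps every touched node in an internal
    # dictionary; zero disables node caching altogether.
    fileh = tables.open_file('manynodes.h5', 'r', NODE_CACHE_SLOTS=-256)
    fileh.close()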
 Compacting your PyTables files
 ------------------------------
 Let's suppose that you have a file where you have made a lot of row deletions
diff --git a/doc/source/usersguide/parameter_files.rst b/doc/source/usersguide/parameter_files.rst
index b31746f..7258baa 100644
--- a/doc/source/usersguide/parameter_files.rst
+++ b/doc/source/usersguide/parameter_files.rst
@@ -150,3 +150,6 @@ HDF5 driver management
 
 .. autodata:: DRIVER_CORE_IMAGE
 
+.. autodata:: DRIVER_SPLIT_META_EXT
+
+.. autodata:: DRIVER_SPLIT_RAW_EXT
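These two parameters only take effect when the H5FD_SPLIT driver is selected.
A tentative sketch of their intended use (the base file name is illustrative,
and the extensions shown are just the documented defaults)::

    import tables

    # Metadata goes to 'split-m.h5' and raw data to 'split-r.h5'.
    fileh = tables.open_file('split', 'w', driver='H5FD_SPLIT',
                             DRIVER_SPLIT_META_EXT='-m.h5',
                             DRIVER_SPLIT_RAW_EXT='-r.h5')
    fileh.create_array('/', 'array', [1, 2, 3])
    fileh.close()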
diff --git a/doc/source/usersguide/tutorials.rst b/doc/source/usersguide/tutorials.rst
index 18c3394..c5dbfdd 100644
--- a/doc/source/usersguide/tutorials.rst
+++ b/doc/source/usersguide/tutorials.rst
@@ -18,10 +18,8 @@ reference in :ref:`library_reference`. If you are reading this in PDF or HTML
 formats, follow the corresponding hyperlink near each newly introduced
 entity.
 
-Please note that throughout this document the terms
-*column* and *field* will be used
-interchangeably, as will the terms *row* and
-*record*.
+Please note that throughout this document the terms *column* and *field*
+will be used interchangeably, as will the terms *row* and *record*.
 
 .. currentmodule:: tables
 
@@ -158,7 +156,7 @@ Table instance is created and assigned to the variable *table*.
 If you are curious about how the object tree looks right now, simply print
 the File instance variable *h5file*, and examine the output::
 
-    >>> print h5file
+    >>> print(h5file)
     tutorial1.h5 (File) 'Test file'
     Last modif.: 'Wed Mar  7 11:06:12 2007'
     Object Tree:
@@ -260,9 +258,8 @@ data in table is exhausted. These rows are filtered using the expression::
 
     x['TDCcount'] > 3 and 20 <= x['pressure'] < 50
 
-So, we are selecting the values of the pressure
-column from filtered records to create the final list and assign it to
-pressure variable.
+So, we are selecting the values of the pressure column from filtered records
+to create the final list and assign it to the pressure variable.
 
 We could have used a normal for loop to accomplish the same purpose, but I
 find comprehension syntax to be more compact and elegant.
 they also look more compact, and are among the greatest features for
 PyTables, so be sure that you use them a lot. See :ref:`condition_syntax` and
 :ref:`searchOptim` for more information on in-kernel and indexed selections.
 
+.. note::
+
+    Special care should be taken when the query condition includes
+    string literals.  Indeed, Python 2 string literals are strings of
+    bytes while Python 3 strings are unicode objects.
+
+    With reference to the above definition of :class:`Particle`, it has to be
+    noted that the type of the "name" column does not change depending on the
+    Python version used (of course).
+    It always corresponds to strings of bytes.
+
+    Any condition involving the "name" column should be written using the
+    appropriate type for string literals in order to avoid
+    :exc:`TypeError`\ s.
+
+    Suppose one wants to get rows corresponding to specific particle names.
+
+    The code below will work fine in Python 2 but will fail with a
+    :exc:`TypeError` in Python 3::
+
+        >>> condition = '(name == "Particle:      5") | (name == "Particle:      7")'
+        >>> for record in table.where(condition):  # TypeError in Python3
+        ...     pass  # do something with "record"
+
+    The reason is that in Python 3 "condition" implies a comparison
+    between a string of bytes (the "name" column contents) and a
+    unicode literal.
+
+    The correct way to write the condition is::
+
+        >>> condition = '(name == b"Particle:      5") | (name == b"Particle:      7")'
+
 That's enough about selections for now. The next section will show you how to
 save these selected results to a file.
 
@@ -342,7 +371,7 @@ show the kind of object we have created by displaying its representation. The
 Array objects have been attached to the object tree and saved to disk, as you
 can see if you print the complete object tree::
 
-    >>> print h5file
+    >>> print(h5file)
     tutorial1.h5 (File) 'Test file'
     Last modif.: 'Wed Mar  7 19:40:44 2007'
     Object Tree:
@@ -446,7 +475,7 @@ object tree as well as search the tree.
 To start with, you can get a preliminary overview of the object tree by
 simply printing the existing File instance::
 
-    >>> print h5file
+    >>> print(h5file)
     tutorial1.h5 (File) 'Test file'
     Last modif.: 'Wed Mar  7 19:50:57 2007'
     Object Tree:
@@ -461,7 +490,7 @@ It looks like all of our objects are there. Now let's make use of the File
 iterator to see how to list all the nodes in the object tree::
 
     >>> for node in h5file:
-    ...     print node
+    ...     print(node)
     / (RootGroup) 'Test file'
     /columns (Group) 'Pressure and Name'
     /detector (Group) 'Detector information'
@@ -473,7 +502,7 @@ We can use the :meth:`File.walk_groups` method of the File class to list only
 the *groups* on tree::
 
     >>> for group in h5file.walk_groups():
-    ...     print group
+    ...     print(group)
     / (RootGroup) 'Test file'
     /columns (Group) 'Pressure and Name'
     /detector (Group) 'Detector information'
@@ -484,7 +513,7 @@ combination. Let's see an example listing of all the arrays in the tree::
 
     >>> for group in h5file.walk_groups("/"):
     ...     for array in h5file.list_nodes(group, classname='Array'):
-    ...         print array
+    ...         print(array)
     /columns/name (Array(3,)) 'Name column selection'
     /columns/pressure (Array(3,)) 'Pressure column selection'
 
@@ -499,7 +528,7 @@ We can combine both calls by using the :meth:`File.walk_nodes` special method
 of the File object. For example::
 
     >>> for array in h5file.walk_nodes("/", "Array"):
-    ...     print array
+    ...     print(array)
     /columns/name (Array(3,)) 'Name column selection'
     /columns/pressure (Array(3,)) 'Pressure column selection'
 
@@ -511,7 +540,7 @@ Finally, we will list all the Leaf, i.e. Table and Array instances (see
 readout) will be selected in this group (as should be the case)::
 
     >>> for leaf in h5file.root.detector._f_walknodes('Leaf'):
-    ...     print leaf
+    ...     print(leaf)
     /detector/readout (Table(10,)) 'Readout example'
 
 We have used a call to the :meth:`Group._f_walknodes` method, using the
@@ -598,15 +627,15 @@ We've got all the attributes (including the *system* attributes). You can get
 a list of *all* attributes or only the *user* or *system* attributes with the
 _f_list() method::
 
-    >>> print table.attrs._f_list("all")
+    >>> print(table.attrs._f_list("all"))
     ['CLASS', 'FIELD_0_FILL', 'FIELD_0_NAME', 'FIELD_1_FILL', 'FIELD_1_NAME',
     'FIELD_2_FILL', 'FIELD_2_NAME', 'FIELD_3_FILL', 'FIELD_3_NAME', 'FIELD_4_FILL',
     'FIELD_4_NAME', 'FIELD_5_FILL', 'FIELD_5_NAME', 'FIELD_6_FILL', 'FIELD_6_NAME',
     'FIELD_7_FILL', 'FIELD_7_NAME', 'FLAVOR', 'NROWS', 'TITLE', 'VERSION',
     'temp_scale', 'temperature']
-    >>> print table.attrs._f_list("user")
+    >>> print(table.attrs._f_list("user"))
     ['temp_scale', 'temperature']
-    >>> print table.attrs._f_list("sys")
+    >>> print(table.attrs._f_list("sys"))
     ['CLASS', 'FIELD_0_FILL', 'FIELD_0_NAME', 'FIELD_1_FILL', 'FIELD_1_NAME',
     'FIELD_2_FILL', 'FIELD_2_NAME', 'FIELD_3_FILL', 'FIELD_3_NAME', 'FIELD_4_FILL',
     'FIELD_4_NAME', 'FIELD_5_FILL', 'FIELD_5_NAME', 'FIELD_6_FILL', 'FIELD_6_NAME',
@@ -615,7 +644,7 @@ _f_list() method::
 You can also rename attributes::
 
     >>> table.attrs._f_rename("temp_scale","tempScale")
-    >>> print table.attrs._f_list()
+    >>> print(table.attrs._f_list())
     ['tempScale', 'temperature']
 
 And, from PyTables 2.0 on, you are allowed also to set, delete or rename
@@ -722,18 +751,18 @@ Each object in PyTables has *metadata* information about the data in the
 file. Normally this *meta-information* is accessible through the node
 instance variables. Let's take a look at some examples::
 
-    >>> print "Object:", table
+    >>> print("Object:", table)
     Object: /detector/readout (Table(10,)) 'Readout example'
-    >>> print "Table name:", table.name
+    >>> print("Table name:", table.name)
     Table name: readout
-    >>> print "Table title:", table.title
+    >>> print("Table title:", table.title)
     Table title: Readout example
-    >>> print "Number of rows in table:", table.nrows
+    >>> print("Number of rows in table:", table.nrows)
     Number of rows in table: 10
-    >>> print "Table variable names with their type and shape:"
+    >>> print("Table variable names with their type and shape:")
     Table variable names with their type and shape:
     >>> for name in table.colnames:
-    ...     print name, ':= %s, %s' % (table.coldtypes[name], table.coldtypes[name].shape)
+    ...     print(name, ':= %s, %s' % (table.coldtypes[name], table.coldtypes[name].shape))
     ADCcount := uint16, ()
     TDCcount := uint8, ()
     energy := float64, ()
@@ -810,18 +839,18 @@ Try getting help with other object docs by yourself::
 To examine metadata in the */columns/pressure* Array object::
 
     >>> pressureObject = h5file.get_node("/columns", "pressure")
-    >>> print "Info on the object:", repr(pressureObject)
+    >>> print("Info on the object:", repr(pressureObject))
     Info on the object: /columns/pressure (Array(3,)) 'Pressure column selection'
       atom := Float64Atom(shape=(), dflt=0.0)
       maindim := 0
       flavor := 'numpy'
       byteorder := 'little'
       chunkshape := None
-    >>> print "  shape: ==>", pressureObject.shape
+    >>> print("  shape: ==>", pressureObject.shape)
       shape: ==> (3,)
-    >>> print "  title: ==>", pressureObject.title
+    >>> print("  title: ==>", pressureObject.title)
       title: ==> Pressure column selection
-    >>> print "  atom: ==>", pressureObject.atom
+    >>> print("  atom: ==>", pressureObject.atom)
       atom: ==> Float64Atom(shape=(), dflt=0.0)
 
 Observe that we have used the :meth:`File.get_node` method of the File class
@@ -852,16 +881,16 @@ object to retrieve its data::
     >>> pressureArray = pressureObject.read()
     >>> pressureArray
     array([ 25.,  36.,  49.])
-    >>> print "pressureArray is an object of type:", type(pressureArray)
+    >>> print("pressureArray is an object of type:", type(pressureArray))
     pressureArray is an object of type: <type 'numpy.ndarray'>
     >>> nameArray = h5file.root.columns.name.read()
-    >>> print "nameArray is an object of type:", type(nameArray)
+    >>> print("nameArray is an object of type:", type(nameArray))
     nameArray is an object of type: <type 'list'>
     >>>
-    >>> print "Data on arrays nameArray and pressureArray:"
+    >>> print("Data on arrays nameArray and pressureArray:")
     Data on arrays nameArray and pressureArray:
     >>> for i in range(pressureObject.shape[0]):
-    ...     print nameArray[i], "-->", pressureArray[i]
+    ...     print(nameArray[i], "-->", pressureArray[i])
     Particle:      5 --> 25.0
     Particle:      6 --> 36.0
     Particle:      7 --> 49.0
@@ -925,9 +954,9 @@ Let's have a look at some rows in the modified table and verify that our new
 data has been appended::
 
     >>> for r in table.iterrows():
-    ...     print "%-16s | %11.1f | %11.4g | %6d | %6d | %8d \|" % \\
+    ...     print("%-16s | %11.1f | %11.4g | %6d | %6d | %8d \|" % \\
     ...         (r['name'], r['pressure'], r['energy'], r['grid_i'], r['grid_j'],
-    ...         r['TDCcount'])
+    ...         r['TDCcount']))
     Particle:      0 |         0.0 |           0 |      0 |     10 |        0 |
     Particle:      1 |         1.0 |           1 |      1 |      9 |        1 |
     Particle:      2 |         4.0 |         256 |      2 |      8 |        2 |
@@ -954,19 +983,19 @@ world data to adapt your goals ;).
 Let's see how we can modify the values that were saved in our existing tables.
 We will start modifying single cells in the first row of the Particle table::
 
-    >>> print "Before modif-->", table[0]
+    >>> print("Before modif-->", table[0])
     Before modif--> (0, 0, 0.0, 0, 10, 0L, 'Particle:      0', 0.0)
     >>> table.cols.TDCcount[0] = 1
-    >>> print "After modifying first row of ADCcount-->", table[0]
+    >>> print("After modifying first row of ADCcount-->", table[0])
     After modifying first row of ADCcount--> (0, 1, 0.0, 0, 10, 0L, 'Particle:      0', 0.0)
     >>> table.cols.energy[0] = 2
-    >>> print "After modifying first row of energy-->", table[0]
+    >>> print("After modifying first row of energy-->", table[0])
     After modifying first row of energy--> (0, 1, 2.0, 0, 10, 0L, 'Particle:      0', 0.0)
 
 We can modify complete ranges of columns as well::
 
     >>> table.cols.TDCcount[2:5] = [2,3,4]
-    >>> print "After modifying slice [2:5] of TDCcount-->", table[0:5]
+    >>> print("After modifying slice [2:5] of TDCcount-->", table[0:5])
     After modifying slice [2:5] of TDCcount-->
     [(0, 1, 2.0, 0, 10, 0L, 'Particle:      0', 0.0)
      (256, 1, 1.0, 1, 9, 17179869184L, 'Particle:      1', 1.0)
@@ -974,7 +1003,7 @@ We can modify complete ranges of columns as well::
      (768, 3, 6561.0, 3, 7, 51539607552L, 'Particle:      3', 9.0)
      (1024, 4, 65536.0, 4, 6, 68719476736L, 'Particle:      4', 16.0)]
     >>> table.cols.energy[1:9:3] = [2,3,4]
-    >>> print "After modifying slice [1:9:3] of energy-->", table[0:9]
+    >>> print("After modifying slice [1:9:3] of energy-->", table[0:9])
     After modifying slice [1:9:3] of energy-->
     [(0, 1, 2.0, 0, 10, 0L, 'Particle:      0', 0.0)
      (256, 1, 2.0, 1, 9, 17179869184L, 'Particle:      1', 1.0)
@@ -1001,7 +1030,7 @@ demonstration of these capability, see the next example::
     ...                 rows=[(1, 2, 3.0, 4, 5, 6L, 'Particle:   None', 8.0),
     ...                       (2, 4, 6.0, 8, 10, 12L, 'Particle: None*2', 16.0)])
     2
-    >>> print "After modifying the complete third row-->", table[0:5]
+    >>> print("After modifying the complete third row-->", table[0:5])
     After modifying the complete third row-->
     [(0, 1, 2.0, 0, 10, 0L, 'Particle:      0', 0.0)
      (1, 2, 3.0, 4, 5, 6L, 'Particle:   None', 8.0)
@@ -1023,7 +1052,7 @@ is meant to be used in table iterators. Look at the next example::
     >>> for row in table.where('TDCcount <= 2'):
     ...     row['energy'] = row['TDCcount']*2
     ...     row.update()
-    >>> print "After modifying energy column (where TDCcount <=2)-->", table[0:4]
+    >>> print("After modifying energy column (where TDCcount <=2)-->", table[0:4])
     After modifying energy column (where TDCcount <=2)-->
     [(0, 1, 2.0, 0, 10, 0L, 'Particle:      0', 0.0)
      (1, 2, 4.0, 4, 5, 6L, 'Particle:   None', 8.0)
@@ -1045,16 +1074,16 @@ The basic way to do this is through the use of :meth:`Array.__setitem__`
 special method. Let's look at how to modify data on the pressureObject array::
 
     >>> pressureObject = h5file.root.columns.pressure
-    >>> print "Before modif-->", pressureObject[:]
+    >>> print("Before modif-->", pressureObject[:])
     Before modif--> [ 25.  36.  49.]
     >>> pressureObject[0] = 2
-    >>> print "First modif-->", pressureObject[:]
+    >>> print("First modif-->", pressureObject[:])
     First modif--> [  2.  36.  49.]
     >>> pressureObject[1:3] = [2.1, 3.5]
-    >>> print "Second modif-->", pressureObject[:]
+    >>> print("Second modif-->", pressureObject[:])
     Second modif--> [ 2.   2.1  3.5]
     >>> pressureObject[::2] = [1,2]
-    >>> print "Third modif-->", pressureObject[:]
+    >>> print("Third modif-->", pressureObject[:])
     Third modif--> [ 1.   2.1  2. ]
 
 So, in general, you can use any combination of (multidimensional) extended
@@ -1064,26 +1093,26 @@ With the sole exception that you cannot use negative values for step to refer
 to indexes that you want to modify. See :meth:`Array.__getitem__` for more
 examples on how to use extended slicing in PyTables objects.
 
-Similarly, with and array of strings::
+Similarly, with an array of strings::
 
     >>> nameObject = h5file.root.columns.name
-    >>> print "Before modif-->", nameObject[:]
+    >>> print("Before modif-->", nameObject[:])
     Before modif--> ['Particle:      5', 'Particle:      6', 'Particle:      7']
     >>> nameObject[0] = 'Particle:   None'
-    >>> print "First modif-->", nameObject[:]
+    >>> print("First modif-->", nameObject[:])
     First modif--> ['Particle:   None', 'Particle:      6', 'Particle:      7']
     >>> nameObject[1:3] = ['Particle:      0', 'Particle:      1']
-    >>> print "Second modif-->", nameObject[:]
+    >>> print("Second modif-->", nameObject[:])
     Second modif--> ['Particle:   None', 'Particle:      0', 'Particle:      1']
     >>> nameObject[::2] = ['Particle:     -3', 'Particle:     -5']
-    >>> print "Third modif-->", nameObject[:]
+    >>> print("Third modif-->", nameObject[:])
     Third modif--> ['Particle:     -3', 'Particle:      0', 'Particle:     -5']
 
 
 And finally... how to delete rows from a table
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 We'll finish this tutorial by deleting some rows from the table we have.
-Suppose that we want to delete the the 5th to 9th rows (inclusive)::
+Suppose that we want to delete the 5th to 9th rows (inclusive)::
 
     >>> table.remove_rows(5,10)
     5
@@ -1246,9 +1275,9 @@ can be passed to this method::
     # Read the records from table "/Events/TEvent3" and select some
     table = root.Events.TEvent3
     e = [ p['TDCcount'] for p in table if p['ADCcount'] < 20 and 4 <= p['TDCcount'] < 15 ]
-    print "Last record ==>", p
-    print "Selected values ==>", e
-    print "Total selected records ==> ", len(e)
+    print("Last record ==>", p)
+    print("Selected values ==>", e)
+    print("Total selected records ==> ", len(e))
 
     # Finally, close the file (this also will flush all the remaining buffers!)
     fileh.close()
@@ -1393,7 +1422,7 @@ where we will put our links and will start creating one hard link too::
 
     >>> gl = f1.create_group('/', 'gl')
     >>> ht = f1.create_hard_link(gl, 'ht', '/g1/g2/t1')  # ht points to t1
-    >>> print "``%s`` is a hard link to: ``%s``" % (ht, t1)
+    >>> print("``%s`` is a hard link to: ``%s``" % (ht, t1))
     ``/gl/ht (Table(0,)) `` is a hard link to: ``/g1/g2/t1 (Table(0,)) ``
 
 You can see how we've created a hard link in /gl/ht which is pointing to the
@@ -1404,16 +1433,16 @@ different paths to access that table, the original /g1/g2/t1 and the new one
 the new path::
 
     >>> t1.remove()
-    >>> print "table continues to be accessible in: ``%s``" % f1.get_node('/gl/ht')
+    >>> print("table continues to be accessible in: ``%s``" % f1.get_node('/gl/ht'))
     table continues to be accessible in: ``/gl/ht (Table(0,)) ``
 
 So far so good. Now, let's create a couple of soft links::
 
     >>> la1 = f1.create_soft_link(gl, 'la1', '/g1/a1')  # la1 points to a1
-    >>> print "``%s`` is a soft link to: ``%s``" % (la1, la1.target)
+    >>> print("``%s`` is a soft link to: ``%s``" % (la1, la1.target))
     ``/gl/la1 (SoftLink) -> /g1/a1`` is a soft link to: ``/g1/a1``
     >>> lt = f1.create_soft_link(gl, 'lt', '/g1/g2/t1')  # lt points to t1
-    >>> print "``%s`` is a soft link to: ``%s``" % (lt, lt.target)
+    >>> print("``%s`` is a soft link to: ``%s``" % (lt, lt.target))
     ``/gl/lt (SoftLink) -> /g1/g2/t1 (dangling)`` is a soft link to: ``/g1/g2/t1``
 
 Okay, we see how the first link /gl/la1 points to the array /g1/a1.  Notice
@@ -1430,7 +1459,7 @@ node or not.
 So, let's re-create the removed path to t1 table::
 
     >>> t1 = f1.create_hard_link('/g1/g2', 't1', '/gl/ht')
-    >>> print "``%s`` is not dangling anymore" % (lt,)
+    >>> print("``%s`` is not dangling anymore" % (lt,))
     ``/gl/lt (SoftLink) -> /g1/g2/t1`` is not dangling anymore
 
 and the soft link is pointing to an existing node now.
@@ -1440,10 +1469,10 @@ the pointed node.  It happens that soft links are callable, and that's the
 way to get the referred nodes back::
 
     >>> plt = lt()
-    >>> print "dereferred lt node: ``%s``" % plt
+    >>> print("dereferred lt node: ``%s``" % plt)
     dereferred lt node: ``/g1/g2/t1 (Table(0,)) ``
     >>> pla1 = la1()
-    >>> print "dereferred la1 node: ``%s``" % pla1
+    >>> print("dereferred la1 node: ``%s``" % pla1)
     dereferred la1 node: ``/g1/a1 (CArray(10000,)) ``
 
 Now, plt is a Python reference to the t1 table while pla1 refers to the a1
@@ -1471,19 +1500,19 @@ its place::
 
     >>> la1.remove()
     >>> la1 = f1.create_external_link(gl, 'la1', 'links2.h5:/a1')
-    >>> print "``%s`` is an external link to: ``%s``" % (la1, la1.target)
+    >>> print("``%s`` is an external link to: ``%s``" % (la1, la1.target))
     ``/gl/la1 (ExternalLink) -> links2.h5:/a1`` is an external link to: ``links2.h5:/a1``
 
 Let's try dereferring it::
 
     >>> new_a1 = la1()  # dereferrencing la1 returns a1 in links2.h5
-    >>> print "dereferred la1 node:  ``%s``" % new_a1
+    >>> print("dereferred la1 node:  ``%s``" % new_a1)
     dereferred la1 node:  ``/a1 (CArray(10000,)) ``
 
 Well, it seems like we can access the external node.  But just to make sure
 that the node is in the other file::
 
-    >>> print "new_a1 file:", new_a1._v_file.filename
+    >>> print("new_a1 file:", new_a1._v_file.filename)
     new_a1 file: links2.h5
 
 Okay, the node is definitely in the external file.  So, you won't have to
@@ -1586,7 +1615,7 @@ introduce the undo() method (see :meth:`File.undo`)::
 Fine, what do you think it happened? Well, let's have a look at the object
 tree::
 
-    >>> print fileh
+    >>> print(fileh)
     tutorial3-1.h5 (File) 'Undo/Redo demo 1'
     Last modif.: 'Tue Mar 13 11:43:55 2007'
     Object Tree:
@@ -1602,7 +1631,7 @@ PyTables that renders it invisible and waiting for a chance to be reborn.
 Now, unwind once more, and look at the object tree::
 
     >>> fileh.undo()
-    >>> print fileh
+    >>> print(fileh)
     tutorial3-1.h5 (File) 'Undo/Redo demo 1'
     Last modif.: 'Tue Mar 13 11:43:55 2007'
     Object Tree:
@@ -1613,7 +1642,7 @@ Don't worry, it will revisit us very shortly. So, you might be somewhat lost
 right now; at which mark are we? Let's ask the :meth:`File.get_current_mark`
 method in the file handler::
 
-    >>> print fileh.get_current_mark()
+    >>> print(fileh.get_current_mark())
     0
 
 So we are at mark #0, remember? Mark #0 is an implicit mark that is created
@@ -1622,7 +1651,7 @@ you are missing your too-young-to-die arrays. What can we do about that?
 :meth:`File.redo` to the rescue::
 
     >>> fileh.redo()
-    >>> print fileh
+    >>> print(fileh)
     tutorial3-1.h5 (File) 'Undo/Redo demo 1'
     Last modif.: 'Tue Mar 13 11:43:55 2007'
     Object Tree:
 It was just moved to the hidden group and back again, but that's all!
 That's kind of fun, so we are going to do the same with /anotherarray::
 
     >>> fileh.redo()
-    >>> print fileh
+    >>> print(fileh)
     tutorial3-1.h5 (File) 'Undo/Redo demo 1'
     Last modif.: 'Tue Mar 13 11:43:55 2007'
     Object Tree:
@@ -1819,7 +1848,7 @@ Here we used a simple list giving the names of enumerated values, but we left
 the choice of concrete values up to the Enum class. Let us see the enumerated
 pairs to check those values::
 
-    >>> print "Colors:", [v for v in colors]
+    >>> print("Colors:", [v for v in colors])
     Colors: [('blue', 2), ('black', 4), ('white', 3), ('green', 1), ('red', 0)]
 
 Names have been given automatic integer concrete values. We can iterate over
@@ -1828,9 +1857,9 @@ accessing single values. We can get the concrete value associated with a name
 by accessing it as an attribute or as an item (the latter can be useful for
 names not resembling Python identifiers)::
 
-    >>> print "Value of 'red' and 'white':", (colors.red, colors.white)
+    >>> print("Value of 'red' and 'white':", (colors.red, colors.white))
     Value of 'red' and 'white': (0, 3)
-    >>> print "Value of 'yellow':", colors.yellow
+    >>> print("Value of 'yellow':", colors.yellow)
     Value of 'yellow':
     Traceback (most recent call last):
       File "<stdin>", line 1, in ?
@@ -1838,9 +1867,9 @@ names not resembling Python identifiers)::
         raise AttributeError(\*ke.args)
     AttributeError: no enumerated value with that name: 'yellow'
     >>>
-    >>> print "Value of 'red' and 'white':", (colors['red'], colors['white'])
+    >>> print("Value of 'red' and 'white':", (colors['red'], colors['white']))
     Value of 'red' and 'white': (0, 3)
-    >>> print "Value of 'yellow':", colors['yellow']
+    >>> print("Value of 'yellow':", colors['yellow'])
     Value of 'yellow':
     Traceback (most recent call last):
       File "<stdin>", line 1, in ?
@@ -1852,9 +1881,9 @@ See how accessing a value that is not in the enumeration raises the
 appropriate exception. We can also do the opposite action and get the name
 that matches a concrete value by using the __call__() method of Enum::
 
-    >>> print "Name of value %s:" % colors.red, colors(colors.red)
+    >>> print("Name of value %s:" % colors.red, colors(colors.red))
     Name of value 0: red
-    >>> print "Name of value 1234:", colors(1234)
+    >>> print("Name of value 1234:", colors(1234))
     Name of value 1234:
     Traceback (most recent call last):
       File "<stdin>", line 1, in ?
@@ -1920,7 +1949,7 @@ table we can see the results of the insertions::
     >>> for r in tbl:
     ...     ballTime = r['ballTime']
     ...     ballColor = colors(r['ballColor'])  # notice this
-    ...     print "Ball extracted on %d is of color %s." % (ballTime, ballColor)
+    ...     print("Ball extracted on %d is of color %s." % (ballTime, ballColor))
     Ball extracted on 1173785568 is of color green.
     Ball extracted on 1173785569 is of color black.
     Ball extracted on 1173785570 is of color white.
@@ -1975,7 +2004,7 @@ enumerated values.
 Finally, we will print the contents of the array::
 
     >>> for (d1, d2) in earr:
-    ...     print "From %s to %s (%d days)." % (wdays(d1), wdays(d2), d2-d1+1)
+    ...     print("From %s to %s (%d days)." % (wdays(d1), wdays(d2), d2-d1+1))
     From Mon to Fri (5 days).
     From Wed to Fri (3 days).
     Traceback (most recent call last):
@@ -2235,7 +2264,7 @@ Finally, there is a special iterator of the Description class, called _f_walk
 that is able to return the different columns of the table::
 
     >>> for coldescr in table.description._f_walk():
-    ...     print "column-->",coldescr
+    ...     print("column-->",coldescr)
     column--> Description([('info2', [('info3', [('x', '()f8'), ('y', '()u1')]),
                            ('name', '()S10'), ('value', '()f8')]),
                            ('info1', [('name', '()S10'), ('value', '()f8')]),
@@ -2288,7 +2317,8 @@ Finally, you may want to have a look at your resulting data file.
     [1] (((1.0, 4), 'name2-4', 0.0), ('name1-4', 0.0), 1L)
     [2] (((1.0, 8), 'name2-8', 0.0), ('name1-8', 0.0), 2L)
 
-Most of the code in this section is also available in examples/nested-tut.py.
+Most of the code in this section is also available in
+:file:`examples/nested-tut.py`.
 
 All in all, PyTables provides a quite comprehensive toolset to cope with
 nested structures and address your classification needs.
diff --git a/doc/source/usersguide/usersguide.rst b/doc/source/usersguide/usersguide.rst
index cdb937b..99414aa 100644
--- a/doc/source/usersguide/usersguide.rst
+++ b/doc/source/usersguide/usersguide.rst
@@ -19,7 +19,7 @@ PyTables User's Guide
 
             |copy| 2008, 2009, 2010 - Francesc Alted
 
-            |copy| 2011-2013 - PyTables maintainers
+            |copy| 2011-2014 - PyTables maintainers
 :Date:      |today|
 :Version:   |version|
 :Home Page: http://www.pytables.org
diff --git a/doc/source/usersguide/utilities.rst b/doc/source/usersguide/utilities.rst
index 0db1262..dd2c3ef 100644
--- a/doc/source/usersguide/utilities.rst
+++ b/doc/source/usersguide/utilities.rst
@@ -211,8 +211,10 @@ to see the message usage:
         --complevel=(0-9) -- Set a compression level (0 for no compression, which
             is the default).
         --complib=lib -- Set the compression library to be used during the copy.
-            lib can be set to "zlib", "lzo", "bzip2" or "blosc".  Defaults to
-            "zlib".
+            lib can be set to "zlib", "lzo", "bzip2" or "blosc".
+            Additional compressors for Blosc like "blosc:blosclz",
+            "blosc:lz4", "blosc:lz4hc", "blosc:snappy" and
+            "blosc:zlib", are supported too.  Defaults to "zlib".
         --shuffle=(0|1) -- Activate or not the shuffling filter (default is active
             if complevel>0).
         --fletcher32=(0|1) -- Whether to activate or not the fletcher32 filter
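For example, to recompress an existing file with one of the new Blosc
compressors (file names are illustrative)::

    $ ptrepack --complib=blosc:lz4 --complevel=9 data.h5:/ data-lz4.h5:/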
diff --git a/examples/add-column.py b/examples/add-column.py
index ff32281..d396143 100644
--- a/examples/add-column.py
+++ b/examples/add-column.py
@@ -1,34 +1,37 @@
 "Example showing how to add a column on a existing column"
 
-from tables import *
+from __future__ import print_function
+import tables
 
-class Particle(IsDescription):
-    name        = StringCol(16, pos=1)   # 16-character String
-    lati        = Int32Col(pos=2)        # integer
-    longi       = Int32Col(pos=3)        # integer
-    pressure    = Float32Col(pos=4)    # float  (single-precision)
-    temperature = Float64Col(pos=5)      # double (double-precision)
+
+class Particle(tables.IsDescription):
+    name = tables.StringCol(16, pos=1)      # 16-character String
+    lati = tables.Int32Col(pos=2)           # integer
+    longi = tables.Int32Col(pos=3)          # integer
+    pressure = tables.Float32Col(pos=4)     # float  (single-precision)
+    temperature = tables.Float64Col(pos=5)  # double (double-precision)
 
 # Open a file in "w"rite mode
-fileh = open_file("add-column.h5", mode = "w")
+fileh = tables.open_file("add-column.h5", mode="w")
 # Create a new group
 group = fileh.create_group(fileh.root, "newgroup")
 
 # Create a new table in newgroup group
-table = fileh.create_table(group, 'table', Particle, "A table", Filters(1))
+table = fileh.create_table(group, 'table', Particle, "A table",
+                           tables.Filters(1))
 
 # Append several rows
-table.append([("Particle:     10", 10, 0, 10*10, 10**2),
-              ("Particle:     11", 11, -1, 11*11, 11**2),
-              ("Particle:     12", 12, -2, 12*12, 12**2)])
+table.append([("Particle:     10", 10, 0, 10 * 10, 10 ** 2),
+              ("Particle:     11", 11, -1, 11 * 11, 11 ** 2),
+              ("Particle:     12", 12, -2, 12 * 12, 12 ** 2)])
 
-print "Contents of the original table:", fileh.root.newgroup.table[:]
+print("Contents of the original table:", fileh.root.newgroup.table[:])
 
 # close the file
 fileh.close()
 
 # Open it again in append mode
-fileh = open_file("add-column.h5", "a")
+fileh = tables.open_file("add-column.h5", "a")
 group = fileh.root.newgroup
 table = group.table
 
@@ -37,16 +40,17 @@ descr = table.description._v_colobjects
 descr2 = descr.copy()
 
 # Add a column to description
-descr2["hot"] = BoolCol(dflt=False)
+descr2["hot"] = tables.BoolCol(dflt=False)
 
 # Create a new table with the new description
-table2 = fileh.create_table(group, 'table2', descr2, "A table", Filters(1))
+table2 = fileh.create_table(group, 'table2', descr2, "A table",
+                            tables.Filters(1))
 
 # Copy the user attributes
 table.attrs._f_copy(table2)
 
 # Fill the rows of new table with default values
-for i in xrange(table.nrows):
+for i in range(table.nrows):
     table2.row.append()
 # Flush the rows to disk
 table2.flush()
@@ -56,7 +60,7 @@ for col in descr:
     getattr(table2.cols, col)[:] = getattr(table.cols, col)[:]
 
 # Fill the new column
-table2.cols.hot[:] = [ row["temperature"] > 11**2 for row in table ]
+table2.cols.hot[:] = [row["temperature"] > 11 ** 2 for row in table]
 
 # Remove the original table
 table.remove()
@@ -65,7 +69,7 @@ table.remove()
 table2.move('/newgroup', 'table')
 
 # Print the new table
-print "Contents of the table with column added:", fileh.root.newgroup.table[:]
+print("Contents of the table with column added:", fileh.root.newgroup.table[:])
 
 # Finally, close the file
 fileh.close()
diff --git a/examples/array1.py b/examples/array1.py
index c3be13d..92223e5 100644
--- a/examples/array1.py
+++ b/examples/array1.py
@@ -1,23 +1,24 @@
-from numpy import *
-from tables import *
+from __future__ import print_function
+import numpy as np
+import tables
 
 # Open a new empty HDF5 file
-fileh = open_file("array1.h5", mode = "w")
+fileh = tables.open_file("array1.h5", mode="w")
 # Get the root group
 root = fileh.root
 
 # Create an Array
-a = array([-1, 2, 4], int16)
+a = np.array([-1, 2, 4], np.int16)
 # Save it on the HDF5 file
 hdfarray = fileh.create_array(root, 'array_1', a, "Signed short array")
 
 # Create a scalar Array
-a = array(4, int16)
+a = np.array(4, np.int16)
 # Save it on the HDF5 file
 hdfarray = fileh.create_array(root, 'array_s', a, "Scalar signed short array")
 
 # Create a 3-d array of floats
-a = arange(120, dtype=float64).reshape(20, 3, 2)
+a = np.arange(120, dtype=np.float64).reshape(20, 3, 2)
 # Save it on the HDF5 file
 hdfarray = fileh.create_array(root, 'array_f', a, "3-D float array")
 
@@ -25,26 +26,26 @@ hdfarray = fileh.create_array(root, 'array_f', a, "3-D float array")
 fileh.close()
 
 # Open the file for reading
-fileh = open_file("array1.h5", mode = "r")
+fileh = tables.open_file("array1.h5", mode="r")
 # Get the root group
 root = fileh.root
 
 a = root.array_1.read()
-print "Signed byte array -->", repr(a), a.shape
+print("Signed byte array -->", repr(a), a.shape)
 
-print "Testing iterator (works even over scalar arrays):",
+print("Testing iterator (works even over scalar arrays):", end=' ')
 arr = root.array_s
 for x in arr:
-    print "nrow-->", arr.nrow
-    print "Element-->", repr(x)
+    print("nrow-->", arr.nrow)
+    print("Element-->", repr(x))
 
 # print "Testing getitem:"
 # for i in range(root.array_1.nrows):
 #     print "array_1["+str(i)+"]", "-->", root.array_1[i]
 
-print "array_f[:,2:3,2::2]", repr(root.array_f[:, 2:3, 2::2])
-print "array_f[1,2:]", repr(root.array_f[1, 2:])
-print "array_f[1]", repr(root.array_f[1])
+print("array_f[:,2:3,2::2]", repr(root.array_f[:, 2:3, 2::2]))
+print("array_f[1,2:]", repr(root.array_f[1, 2:]))
+print("array_f[1]", repr(root.array_f[1]))
 
 # Close the file
 fileh.close()
diff --git a/examples/array2.py b/examples/array2.py
index 9ec8453..351f6f8 100644
--- a/examples/array2.py
+++ b/examples/array2.py
@@ -1,42 +1,43 @@
-from numpy import *
-from tables import *
+from __future__ import print_function
+import numpy as np
+import tables
 
 # Open a new empty HDF5 file
-fileh = open_file("array2.h5", mode = "w")
+fileh = tables.open_file("array2.h5", mode="w")
 # Shortcut to the root group
 root = fileh.root
 
 # Create an array
-a = array([1, 2.7182818284590451, 3.141592], float)
-print "About to write array:", a
-print "  with shape: ==>", a.shape
-print "  and dtype ==>", a.dtype
+a = np.array([1, 2.7182818284590451, 3.141592], float)
+print("About to write array:", a)
+print("  with shape: ==>", a.shape)
+print("  and dtype ==>", a.dtype)
 
 # Save it on the HDF5 file
 hdfarray = fileh.create_array(root, 'carray', a, "Float array")
 
 # Get metadata on the previously saved array
-print
-print "Info on the object:", repr(root.carray)
+print()
+print("Info on the object:", repr(root.carray))
 
 # Close the file
 fileh.close()
 
 # Open the previous HDF5 file in read-only mode
-fileh = open_file("array2.h5", mode = "r")
+fileh = tables.open_file("array2.h5", mode="r")
 # Get the root group
 root = fileh.root
 
 # Get metadata on the previously saved array
-print
-print "Info on the object:", repr(root.carray)
+print()
+print("Info on the object:", repr(root.carray))
 
 # Get the actual array
 b = root.carray.read()
-print
-print "Array read from file:", b
-print "  with shape: ==>", b.shape
-print "  and dtype ==>", b.dtype
+print()
+print("Array read from file:", b)
+print("  with shape: ==>", b.shape)
+print("  and dtype ==>", b.dtype)
 
 # Close the file
 fileh.close()
diff --git a/examples/array3.py b/examples/array3.py
index 2b6490a..0014aa4 100644
--- a/examples/array3.py
+++ b/examples/array3.py
@@ -1,46 +1,47 @@
-from numpy import *
-from tables import *
+from __future__ import print_function
+import numpy as np
+import tables
 
 # Open a new empty HDF5 file
-fileh = open_file("array3.h5", mode = "w")
+fileh = tables.open_file("array3.h5", mode="w")
 # Get the root group
 root = fileh.root
 
 # Create a large array
-#a = reshape(array(range(2**16), "s"), (2,) * 16)
-a = ones((2,) * 8, int8)
-print "About to write array a"
-print "  with shape: ==>", a.shape
-print "  and dtype: ==>", a.dtype
+# a = reshape(array(range(2**16), "s"), (2,) * 16)
+a = np.ones((2,) * 8, np.int8)
+print("About to write array a")
+print("  with shape: ==>", a.shape)
+print("  and dtype: ==>", a.dtype)
 
 # Save it on the HDF5 file
 hdfarray = fileh.create_array(root, 'carray', a, "Large array")
 
 # Get metadata on the previously saved array
-print
-print "Info on the object:", repr(root.carray)
+print()
+print("Info on the object:", repr(root.carray))
 
 # Close the file
 fileh.close()
 
 # Open the previous HDF5 file in read-only mode
-fileh = open_file("array3.h5", mode = "r")
+fileh = tables.open_file("array3.h5", mode="r")
 # Get the root group
 root = fileh.root
 
 # Get metadata on the previously saved array
-print
-print "Getting info on retrieved /carray object:", repr(root.carray)
+print()
+print("Getting info on retrieved /carray object:", repr(root.carray))
 
 # Get the actual array
-#b = fileh.readArray("/carray")
+# b = fileh.readArray("/carray")
 # You can obtain the same result with:
 b = root.carray.read()
-print
-print "Array b read from file"
-print "  with shape: ==>", b.shape
-print "  with dtype: ==>", b.dtype
-#print "  contents:", b
+print()
+print("Array b read from file")
+print("  with shape: ==>", b.shape)
+print("  with dtype: ==>", b.dtype)
+# print "  contents:", b
 
 # Close the file
 fileh.close()
diff --git a/examples/array4.py b/examples/array4.py
index 8754624..bdbb5d8 100644
--- a/examples/array4.py
+++ b/examples/array4.py
@@ -1,22 +1,23 @@
-from numpy import *
-from tables import *
+from __future__ import print_function
+import numpy as np
+import tables
 
 basedim = 4
 file = "array4.h5"
 # Open a new empty HDF5 file
-fileh = open_file(file, mode = "w")
+fileh = tables.open_file(file, mode="w")
 # Get the root group
 group = fileh.root
 # Set the type codes to test
-dtypes = [int8, uint8, int16, int, float32, float]
+dtypes = [np.int8, np.uint8, np.int16, np.int, np.float32, np.float]
 i = 1
 for dtype in dtypes:
     # Create an array of dtype, with incrementally bigger ranges
-    a = ones((basedim,) * i, dtype)
+    a = np.ones((basedim,) * i, dtype)
     # Save it on the HDF5 file
     dsetname = 'array_' + a.dtype.char
     hdfarray = fileh.create_array(group, dsetname, a, "Large array")
-    print "Created dataset:", hdfarray
+    print("Created dataset:", hdfarray)
     # Create a new group
     group = fileh.create_group(group, 'group' + str(i))
     # increment the range for next iteration
@@ -27,27 +28,27 @@ fileh.close()
 
 
 # Open the previous HDF5 file in read-only mode
-fileh = open_file(file, mode = "r")
+fileh = tables.open_file(file, mode="r")
 # Get the root group
 group = fileh.root
 # Get the metadata on the previously saved arrays
 for i in range(len(dtypes)):
     # Create an array for later comparison
-    a = ones((basedim,) * (i+1), dtypes[i])
+    a = np.ones((basedim,) * (i + 1), dtypes[i])
     # Get the dset object hangin from group
     dset = getattr(group, 'array_' + a.dtype.char)
-    print "Info from dataset:", repr(dset)
+    print("Info from dataset:", repr(dset))
     # Read the actual data in array
     b = dset.read()
-    print "Array b read from file. Shape ==>", b.shape,
-    print ". Dtype ==> %s" % b.dtype
+    print("Array b read from file. Shape ==>", b.shape, end=' ')
+    print(". Dtype ==> %s" % b.dtype)
     # Test if the original and read arrays are equal
-    if allclose(a, b):
-        print "Good: Read array is equal to the original"
+    if np.allclose(a, b):
+        print("Good: Read array is equal to the original")
     else:
-        print "Error: Read array and the original differs!"
+        print("Error: Read array and the original differs!")
     # Iterate over the next group
-    group = getattr(group, 'group' + str(i+1))
+    group = getattr(group, 'group' + str(i + 1))
 
 # Close the file
 fileh.close()
diff --git a/examples/attributes1.py b/examples/attributes1.py
index ddc5aad..82774e5 100644
--- a/examples/attributes1.py
+++ b/examples/attributes1.py
@@ -1,13 +1,14 @@
-from numpy import *
-from tables import *
+import numpy as np
+import tables
 
 # Open a new empty HDF5 file
-fileh = open_file("attributes1.h5", mode = "w", title="Testing attributes")
+fileh = tables.open_file("attributes1.h5", mode="w",
+                         title="Testing attributes")
 # Get the root group
 root = fileh.root
 
 # Create an array
-a = array([1, 2, 4], int32)
+a = np.array([1, 2, 4], np.int32)
 # Save it on the HDF5 file
 hdfarray = fileh.create_array(root, 'array', a, "Integer array")
 
@@ -26,7 +27,7 @@ hdfarray.attrs.int = 12
 hdfarray.attrs.float = 12.32
 
 # A generic object
-hdfarray.attrs.object = {"a":32.1, "b":1, "c":[1, 2]}
+hdfarray.attrs.object = {"a": 32.1, "b": 1, "c": [1, 2]}
 
 # Close the file
 fileh.close()
diff --git a/examples/carray1.py b/examples/carray1.py
index 0e6a602..b3b8f2a 100644
--- a/examples/carray1.py
+++ b/examples/carray1.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import numpy
 import tables
 
@@ -14,6 +15,6 @@ h5f.close()
 
 # Re-open and read another hyperslab
 h5f = tables.open_file(fileName)
-print h5f
-print h5f.root.carray[8:12, 18:22]
+print(h5f)
+print(h5f.root.carray[8:12, 18:22])
 h5f.close()
diff --git a/examples/check_examples.sh b/examples/check_examples.sh
index 7f345af..10ba7dd 100755
--- a/examples/check_examples.sh
+++ b/examples/check_examples.sh
@@ -1,28 +1,46 @@
 #!/bin/sh
+
+set -e
+
 PYTHON=python
 # Small script to check the example repository quickly
+$PYTHON add-column.py
 $PYTHON array1.py
 $PYTHON array2.py
 $PYTHON array3.py
 $PYTHON array4.py
 $PYTHON attributes1.py
+$PYTHON carray1.py
 $PYTHON earray1.py
 $PYTHON earray2.py
+#$PYTHON enum.py       # This should always fail
+$PYTHON filenodes1.py
+$PYTHON index.py
+$PYTHON inmemory.py
+$PYTHON links.py
+$PYTHON multiprocess_access_benchmarks.py
+$PYTHON multiprocess_access_queues.py
+$PYTHON nested1.py
+#$PYTHON nested-iter.py    # Run this after "tutorial1-1.py"
+$PYTHON nested-tut.py
 $PYTHON objecttree.py
+$PYTHON particles.py
+$PYTHON read_array_out_arg.py
+$PYTHON split.py
 $PYTHON table1.py
 $PYTHON table2.py
+$PYTHON table3.py
 $PYTHON table-tree.py
 $PYTHON tutorial1-1.py
 $PYTHON tutorial1-2.py
 #$PYTHON tutorial2.py   # This should always fail at the beginning
+$PYTHON tutorial3-1.py
+$PYTHON tutorial3-2.py
+$PYTHON undo-redo.py
 $PYTHON vlarray1.py
 $PYTHON vlarray2.py
 $PYTHON vlarray3.py
-$PYTHON nested1.py
-$PYTHON nested-tut.py
+$PYTHON vlarray4.py
+
+
 $PYTHON nested-iter.py
-$PYTHON links.py
-$PYTHON undo-redo.py
-$PYTHON multiprocess_access_queues.py
-$PYTHON multiprocess_access_benchmarks.py
-$PYTHON read_array_out_arg.py
diff --git a/examples/earray1.py b/examples/earray1.py
index 6dda6b8..fcbb739 100644
--- a/examples/earray1.py
+++ b/examples/earray1.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import tables
 import numpy
 
@@ -5,11 +6,11 @@ fileh = tables.open_file('earray1.h5', mode='w')
 a = tables.StringAtom(itemsize=8)
 # Use ``a`` as the object type for the enlargeable array.
 array_c = fileh.create_earray(fileh.root, 'array_c', a, (0,), "Chars")
-array_c.append(numpy.array(['a'*2, 'b'*4], dtype='S8'))
-array_c.append(numpy.array(['a'*6, 'b'*8, 'c'*10], dtype='S8'))
+array_c.append(numpy.array(['a' * 2, 'b' * 4], dtype='S8'))
+array_c.append(numpy.array(['a' * 6, 'b' * 8, 'c' * 10], dtype='S8'))
 
 # Read the string ``EArray`` we have created on disk.
 for s in array_c:
-    print 'array_c[%s] => %r' % (array_c.nrow, s)
+    print('array_c[%s] => %r' % (array_c.nrow, s))
 # Close the file.
 fileh.close()
diff --git a/examples/earray2.py b/examples/earray2.py
index fda17f7..11e8525 100644
--- a/examples/earray2.py
+++ b/examples/earray2.py
@@ -1,77 +1,81 @@
 #!/usr/bin/env python
 
-""" Small example that shows how to work with extendeable arrays of
-different types, strings included. """
+"""Small example that shows how to work with extendeable arrays of different
+types, strings included."""
 
-from numpy import *
-from tables import *
+from __future__ import print_function
+import numpy as np
+import tables
 
 # Open a new empty HDF5 file
 filename = "earray2.h5"
-fileh = open_file(filename, mode = "w")
+fileh = tables.open_file(filename, mode="w")
 # Get the root group
 root = fileh.root
 
 # Create an string atom
-a = StringAtom(itemsize=1)
+a = tables.StringAtom(itemsize=1)
 # Use it as a type for the enlargeable array
 hdfarray = fileh.create_earray(root, 'array_c', a, (0,), "Character array")
-hdfarray.append(array(['a', 'b', 'c']))
+hdfarray.append(np.array(['a', 'b', 'c']))
 # The next is legal:
-hdfarray.append(array(['c', 'b', 'c', 'd']))
+hdfarray.append(np.array(['c', 'b', 'c', 'd']))
 # but these are not:
-#hdfarray.append(array([['c', 'b'], ['c', 'd']]))
-#hdfarray.append(array([[1,2,3],[3,2,1]], dtype=uint8).reshape(2,1,3))
+# hdfarray.append(array([['c', 'b'], ['c', 'd']]))
+# hdfarray.append(array([[1,2,3],[3,2,1]], dtype=uint8).reshape(2,1,3))
 
 # Create an atom
-a = UInt16Atom()
+a = tables.UInt16Atom()
 hdfarray = fileh.create_earray(root, 'array_e', a, (2, 0, 3),
-                              "Unsigned short array")
+                               "Unsigned short array")
 
 # Create an enlargeable array
-a = UInt8Atom()
+a = tables.UInt8Atom()
 hdfarray = fileh.create_earray(root, 'array_b', a, (2, 0, 3),
-                              "Unsigned byte array", Filters(complevel = 1))
+                               "Unsigned byte array",
+                               tables.Filters(complevel=1))
 
 # Append an array to this table
-hdfarray.append(array([[1, 2, 3], [3, 2, 1]], dtype=uint8).reshape(2, 1, 3))
-hdfarray.append(array([[1, 2, 3], [3, 2, 1], [2, 4, 6], [6, 4, 2]],
-                      dtype=uint8).reshape(2, 2, 3)*2)
+hdfarray.append(
+    np.array([[1, 2, 3], [3, 2, 1]], dtype=np.uint8).reshape(2, 1, 3))
+hdfarray.append(
+    np.array([[1, 2, 3], [3, 2, 1], [2, 4, 6], [6, 4, 2]],
+             dtype=np.uint8).reshape(2, 2, 3) * 2)
 # The next should give a type error:
-#hdfarray.append(array([[1,0,1],[0,0,1]], dtype=Bool).reshape(2,1,3))
+# hdfarray.append(array([[1,0,1],[0,0,1]], dtype=Bool).reshape(2,1,3))
 
 # Close the file
 fileh.close()
 
 # Open the file for reading
-fileh = open_file(filename, mode = "r")
+fileh = tables.open_file(filename, mode="r")
 # Get the root group
 root = fileh.root
 
 a = root.array_c.read()
-print "Character array -->", repr(a), a.shape
+print("Character array -->", repr(a), a.shape)
 a = root.array_e.read()
-print "Empty array (yes, this is suported) -->", repr(a), a.shape
+print("Empty array (yes, this is suported) -->", repr(a), a.shape)
 a = root.array_b.read(step=2)
-print "Int8 array, even rows (step = 2) -->", repr(a), a.shape
+print("Int8 array, even rows (step = 2) -->", repr(a), a.shape)
 
-print "Testing iterator:",
-#for x in root.array_b.iterrows(step=2):
+print("Testing iterator:", end=' ')
+# for x in root.array_b.iterrows(step=2):
 for x in root.array_b:
-    print "nrow-->", root.array_b.nrow
-    print "Element-->", x
+    print("nrow-->", root.array_b.nrow)
+    print("Element-->", x)
 
-print "Testing getitem:"
+print("Testing getitem:")
 for i in range(root.array_b.shape[0]):
-    print "array_b["+str(i)+"]", "-->", root.array_b[i]
+    print("array_b[" + str(i) + "]", "-->", root.array_b[i])
 # nrows counts the growing dimension, which is different from the
 # first index
 for i in range(root.array_b.nrows):
-    print "array_b[:,"+str(i)+",:]", "-->", root.array_b[:, i,:]
-print "array_c[1:2]", repr(root.array_c[1:2])
-print "array_c[1:3]", repr(root.array_c[1:3])
-print "array_b[:]", root.array_b[:]
+    print("array_b[:," + str(i) + ",:]", "-->", root.array_b[:, i, :])
+print("array_c[1:2]", repr(root.array_c[1:2]))
+print("array_c[1:3]", repr(root.array_c[1:3]))
+print("array_b[:]", root.array_b[:])
 
-print repr(root.array_c)
+print(repr(root.array_c))
 # Close the file
 fileh.close()
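
The legal/illegal appends above follow a single rule: an appended block must
match the EArray shape in every dimension except the one enlargeable
dimension (the one declared with length 0 at creation time). A minimal
sketch of that rule:

    import numpy as np
    import tables

    with tables.open_file('earray-shape-sketch.h5', mode='w') as fileh:
        atom = tables.UInt16Atom()
        arr = fileh.create_earray(fileh.root, 'a', atom, (2, 0, 3), "demo")
        arr.append(np.zeros((2, 5, 3), dtype=np.uint16))    # ok: grows dim 1
        # arr.append(np.zeros((2, 5, 4), dtype=np.uint16))  # ValueError: dim 2
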
diff --git a/examples/enum.py b/examples/enum.py
index 401b0c1..c2b0b1d 100644
--- a/examples/enum.py
+++ b/examples/enum.py
@@ -3,6 +3,8 @@
 # since it contains some statements that raise exceptions.
 # To run it, paste it as the input of ``python``.
 
+from __future__ import print_function
+
 
 def COMMENT(string):
     pass
@@ -16,19 +18,19 @@ colorList = ['red', 'green', 'blue', 'white', 'black']
 colors = tables.Enum(colorList)
 
 COMMENT("Take a look at the name-value pairs.")
-print "Colors:", [v for v in colors]
+print("Colors:", [v for v in colors])
 
 COMMENT("Access values as attributes.")
-print "Value of 'red' and 'white':", (colors.red, colors.white)
-print "Value of 'yellow':", colors.yellow
+print("Value of 'red' and 'white':", (colors.red, colors.white))
+print("Value of 'yellow':", colors.yellow)
 
 COMMENT("Access values as items.")
-print "Value of 'red' and 'white':", (colors['red'], colors['white'])
-print "Value of 'yellow':", colors['yellow']
+print("Value of 'red' and 'white':", (colors['red'], colors['white']))
+print("Value of 'yellow':", colors['yellow'])
 
 COMMENT("Access names.")
-print "Name of value %s:" % colors.red, colors(colors.red)
-print "Name of value 1234:", colors(1234)
+print("Name of value %s:" % colors.red, colors(colors.red))
+print("Name of value 1234:", colors(1234))
 
 
 COMMENT("**** Enumerated columns. ****")
@@ -37,6 +39,8 @@ COMMENT("Create a new PyTables file.")
 h5f = tables.open_file('enum.h5', 'w')
 
 COMMENT("This describes a ball extraction.")
+
+
 class BallExt(tables.IsDescription):
     ballTime = tables.Time32Col()
     ballColor = tables.EnumCol(colors, 'black', base='uint8')
@@ -65,7 +69,7 @@ COMMENT("Now print them!")
 for r in tbl:
     ballTime = r['ballTime']
     ballColor = colors(r['ballColor'])  # notice this
-    print "Ball extracted on %d is of color %s." % (ballTime, ballColor)
+    print("Ball extracted on %d is of color %s." % (ballTime, ballColor))
 
 
 COMMENT("**** Enumerated arrays. ****")
@@ -87,7 +91,7 @@ earr.append([(wdays.Mon, 1234)])
 
 COMMENT("Print the values.")
 for (d1, d2) in earr:
-    print "From %s to %s (%d days)." % (wdays(d1), wdays(d2), d2-d1+1)
+    print("From %s to %s (%d days)." % (wdays(d1), wdays(d2), d2 - d1 + 1))
 
 COMMENT("Close the PyTables file and remove it.")
 import os
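
The lookups of 'yellow' and of the value 1234 are the statements meant to
fail. A sketch of catching those errors explicitly, assuming tables.Enum's
documented behaviour (AttributeError for unknown names, ValueError for
unknown values):

    import tables

    colors = tables.Enum(['red', 'green', 'blue', 'white', 'black'])
    try:
        colors.yellow
    except AttributeError as e:
        print("no such name:", e)
    try:
        colors(1234)
    except ValueError as e:
        print("no such value:", e)
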
diff --git a/examples/filenodes1.py b/examples/filenodes1.py
index 2360fcb..5a12f85 100644
--- a/examples/filenodes1.py
+++ b/examples/filenodes1.py
@@ -1,6 +1,6 @@
+from __future__ import print_function
 from tables.nodes import filenode
 
-
 import tables
 h5file = tables.open_file('fnode.h5', 'w')
 
@@ -8,35 +8,35 @@ h5file = tables.open_file('fnode.h5', 'w')
 fnode = filenode.new_node(h5file, where='/', name='fnode_test')
 
 
-print h5file.get_node_attr('/fnode_test', 'NODE_TYPE')
+print(h5file.get_node_attr('/fnode_test', 'NODE_TYPE'))
 
 
-print >> fnode, "This is a test text line."
-print >> fnode, "And this is another one."
-print >> fnode
+print("This is a test text line.", file=fnode)
+print("And this is another one.", file=fnode)
+print(file=fnode)
 fnode.write("Of course, file methods can also be used.")
 
 fnode.seek(0)  # Go back to the beginning of file.
 
 for line in fnode:
-    print repr(line)
+    print(repr(line))
 
 
 fnode.close()
-print fnode.closed
+print(fnode.closed)
 
 
 node = h5file.root.fnode_test
 fnode = filenode.open_node(node, 'a+')
-print repr(fnode.readline())
-print fnode.tell()
-print >> fnode, "This is a new line."
-print repr(fnode.readline())
+print(repr(fnode.readline()))
+print(fnode.tell())
+print("This is a new line.", file=fnode)
+print(repr(fnode.readline()))
 
 
 fnode.seek(0)
 for line in fnode:
-    print repr(line)
+    print(repr(line))
 
 
 fnode.attrs.content_type = 'text/plain; charset=us-ascii'
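
A minimal sketch of reading the node back in a later session, read-only
this time:

    import tables
    from tables.nodes import filenode

    with tables.open_file('fnode.h5', 'r') as h5file:
        fnode = filenode.open_node(h5file.root.fnode_test, 'r')
        print(fnode.attrs.content_type)  # set at the end of the example
        for line in fnode:
            print(repr(line))
        fnode.close()
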
diff --git a/examples/index.py b/examples/index.py
index a1664cc..25ca22b 100644
--- a/examples/index.py
+++ b/examples/index.py
@@ -1,8 +1,10 @@
+from __future__ import print_function
 import random
 import tables
-print 'tables.__version__', tables.__version__
+print('tables.__version__', tables.__version__)
+
+nrows = 10000 - 1
 
-nrows=10000-1
 
 class Distance(tables.IsDescription):
     frame = tables.Int32Col(pos=0)
@@ -10,25 +12,25 @@ class Distance(tables.IsDescription):
 
 h5file = tables.open_file('index.h5', mode='w')
 table = h5file.create_table(h5file.root, 'distance_table', Distance,
-                          'distance table', expectedrows=nrows)
-r = table.row
+                            'distance table', expectedrows=nrows)
+row = table.row
 for i in range(nrows):
-    #r['frame'] = nrows-i
-    r['frame'] = random.randint(0, nrows)
-    r['distance'] = float(i**2)
-    r.append()
+    # r['frame'] = nrows-i
+    row['frame'] = random.randint(0, nrows)
+    row['distance'] = float(i ** 2)
+    row.append()
 table.flush()
 
 table.cols.frame.create_index(optlevel=9, _testmode=True, _verbose=True)
-#table.cols.frame.optimizeIndex(level=5, verbose=1)
+# table.cols.frame.optimizeIndex(level=5, verbose=1)
 
 results = [r.nrow for r in table.where('frame < 2')]
-print "frame<2 -->", table.read_coordinates(results)
-#print "frame<2 -->", table.get_where_list('frame < 2')
+print("frame<2 -->", table.read_coordinates(results))
+# print("frame<2 -->", table.get_where_list('frame < 2'))
 
 results = [r.nrow for r in table.where('(1 < frame) & (frame <= 5)')]
-print "rows-->", results
-print "1<frame<=5 -->", table.read_coordinates(results)
-#print "1<frame<=5 -->", table.get_where_list('(1 < frame) & (frame <= 5)')
+print("rows-->", results)
+print("1<frame<=5 -->", table.read_coordinates(results))
+# print("1<frame<=5 -->", table.get_where_list('(1 < frame) & (frame <= 5)'))
 
 h5file.close()
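
Outside of test runs, the index would be created without the private
_testmode/_verbose switches, and the commented-out get_where_list() calls
are the direct way to obtain matching coordinates. A sketch reusing the
file written above:

    import tables

    with tables.open_file('index.h5', mode='r') as h5file:
        table = h5file.root.distance_table
        coords = table.get_where_list('frame < 2')  # uses the index
        print("frame<2 -->", table.read_coordinates(coords))
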
diff --git a/examples/inmemory.py b/examples/inmemory.py
new file mode 100755
index 0000000..0fef364
--- /dev/null
+++ b/examples/inmemory.py
@@ -0,0 +1,52 @@
+#!/usr/bin/env python
+# encoding: utf-8
+
+"""inmemory.py.
+
+Example usage of creating in-memory HDF5 file with a specified chunksize
+using PyTables 3.0.0+
+
+See also Cookbook page
+http://pytables.github.io/cookbook/inmemory_hdf5_files.html and available
+drivers
+http://pytables.github.io/usersguide/parameter_files.html#hdf5-driver-management
+
+"""
+
+import numpy as np
+import tables
+
+CHUNKY = 30
+CHUNKX = 4320
+
+if __name__ == '__main__':
+
+    # create dataset and add global attrs
+    file_path = 'demofile_chunk%sx%d.h5' % (CHUNKY, CHUNKX)
+
+    with tables.open_file(file_path, 'w',
+                          title='PyTables HDF5 In-memory example',
+                          driver='H5FD_CORE') as h5f:
+
+        # dummy some data
+        lats = np.empty([2160])
+        lons = np.empty([4320])
+
+        # create some simple arrays
+        lat_node = h5f.create_array('/', 'lat', lats, title='latitude')
+        lon_node = h5f.create_array('/', 'lon', lons, title='longitude')
+
+        # create a 365 x 4320 x 8640 CArray of 32bit float
+        shape = (5, 2160, 4320)
+        atom = tables.Float32Atom(dflt=np.nan)
+
+        # chunk into daily slices and then further chunk days
+        sst_node = h5f.create_carray(
+            h5f.root, 'sst', atom, shape, chunkshape=(1, CHUNKY, CHUNKX))
+
+        # dummy up an ndarray
+        sst = np.empty([2160, 4320], dtype=np.float32)
+        sst.fill(30.0)
+
+        # write ndarray to a 2D plane in the HDF5
+        sst_node[0] = sst
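
H5FD_CORE as used above still flushes the in-memory image to the demofile_*
path on close. A sketch of a purely in-memory file, assuming the documented
driver_core_backing_store parameter, so nothing ever touches the
filesystem:

    import tables

    with tables.open_file('never-written.h5', 'w', driver='H5FD_CORE',
                          driver_core_backing_store=0) as h5f:
        h5f.create_array('/', 'x', [1, 2, 3])
        print(h5f.root.x[:])  # lives only in RAM; discarded on close
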
diff --git a/examples/links.py b/examples/links.py
index 4583478..1ecd3bb 100644
--- a/examples/links.py
+++ b/examples/links.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import tables as tb
 
 # Create a new file with some structural groups
@@ -12,27 +13,27 @@ t1 = f1.create_table(g2, 't1', {'f1': tb.IntCol(), 'f2': tb.FloatCol()})
 # Create new group and a first hard link
 gl = f1.create_group('/', 'gl')
 ht = f1.create_hard_link(gl, 'ht', '/g1/g2/t1')  # ht points to t1
-print "``%s`` is a hard link to: ``%s``" % (ht, t1)
+print("``%s`` is a hard link to: ``%s``" % (ht, t1))
 
 # Remove the original link to the t1 table
 t1.remove()
-print "table continues to be accessible in: ``%s``" % f1.get_node('/gl/ht')
+print("table continues to be accessible in: ``%s``" % f1.get_node('/gl/ht'))
 
 # Let's continue with soft links
 la1 = f1.create_soft_link(gl, 'la1', '/g1/a1')  # la1 points to a1
-print "``%s`` is a soft link to: ``%s``" % (la1, la1.target)
+print("``%s`` is a soft link to: ``%s``" % (la1, la1.target))
 lt = f1.create_soft_link(gl, 'lt', '/g1/g2/t1')  # lt points to t1 (dangling)
-print "``%s`` is a soft link to: ``%s``" % (lt, lt.target)
+print("``%s`` is a soft link to: ``%s``" % (lt, lt.target))
 
 # Recreate the '/g1/g2/t1' path
 t1 = f1.create_hard_link('/g1/g2', 't1', '/gl/ht')
-print "``%s`` is not dangling anymore" % (lt,)
+print("``%s`` is not dangling anymore" % (lt,))
 
 # Dereferencing
 plt = lt()
-print "dereferred lt node: ``%s``" % plt
+print("dereferred lt node: ``%s``" % plt)
 pla1 = la1()
-print "dereferred la1 node: ``%s``" % pla1
+print("dereferred la1 node: ``%s``" % pla1)
 
 # Copy the array a1 into another file
 f2 = tb.open_file('links2.h5', 'w')
@@ -42,9 +43,9 @@ f2.close()  # close the other file
 # Remove the original soft link and create an external link
 la1.remove()
 la1 = f1.create_external_link(gl, 'la1', 'links2.h5:/a1')
-print "``%s`` is an external link to: ``%s``" % (la1, la1.target)
+print("``%s`` is an external link to: ``%s``" % (la1, la1.target))
+new_a1 = la1()  # dereferencing la1 returns a1 in links2.h5
-print "dereferred la1 node:  ``%s``" % new_a1
-print "new_a1 file:", new_a1._v_file.filename
+print("dereferred la1 node:  ``%s``" % new_a1)
+print("new_a1 file:", new_a1._v_file.filename)
 
 f1.close()
diff --git a/examples/multiprocess_access_benchmarks.py b/examples/multiprocess_access_benchmarks.py
index dc6b107..754f201 100644
--- a/examples/multiprocess_access_benchmarks.py
+++ b/examples/multiprocess_access_benchmarks.py
@@ -79,8 +79,8 @@ def read_and_send_pipe(send_type, array_size):
 
 
 # process to receive an array using a shared memory mapped file
-# for real use, this would require creating some protocol to specify the array's
-# data type and shape
+# for real use, this would require creating some protocol to specify the
+# array's data type and shape
 class MemmapReceive(multiprocessing.Process):
 
     def __init__(self, path_recv, result_send):
@@ -131,8 +131,8 @@ def read_and_send_memmap(send_type, array_size):
 
 
 # process to receive an array using a socket
-# for real use, this would require creating some protocol to specify the array's
-# data type and shape
+# for real use, this would require creating some protocol to specify the
+# array's data type and shape
 class SocketReceive(multiprocessing.Process):
 
     def __init__(self, socket_family, address, result_send, array_nbytes):
@@ -207,7 +207,8 @@ def read_and_send_socket(send_type, array_size, array_bytes, address_func,
     recv_process.join()
 
 
-def print_results(send_type, start_timestamp, recv_timestamp, finish_timestamp):
+def print_results(send_type, start_timestamp, recv_timestamp,
+                  finish_timestamp):
     msg = 'type: {0}\t receive: {1:5.5f}, add:{2:5.5f}, total: {3:5.5f}'
     print(msg.format(send_type,
                      recv_timestamp - start_timestamp,
@@ -231,4 +232,4 @@ if __name__ == '__main__':
                              unix_socket_address, socket.AF_UNIX)
         read_and_send_socket('IPv4 socket', array_size, array_bytes,
                              ipv4_socket_address, socket.AF_INET)
-        print()
\ No newline at end of file
+        print()
diff --git a/examples/multiprocess_access_queues.py b/examples/multiprocess_access_queues.py
index 1c55eae..4ea576a 100644
--- a/examples/multiprocess_access_queues.py
+++ b/examples/multiprocess_access_queues.py
@@ -1,8 +1,8 @@
-"""Example showing how to access a PyTables file from multiple processes
-using queues.
-"""
+"""Example showing how to access a PyTables file from multiple processes using
+queues."""
 
-import Queue
+from __future__ import print_function
+import Queue as queue
 import multiprocessing
 import os
 import random
@@ -17,7 +17,7 @@ def make_file(file_path, n):
 
     with tables.open_file(file_path, 'w') as fobj:
         array = fobj.create_carray('/', 'array', tables.Int64Atom(), (n, n))
-        for i in xrange(n):
+        for i in range(n):
             array[i, :] = i
 
 
@@ -52,24 +52,25 @@ class FileAccess(multiprocessing.Process):
 
             # Check for any data requests in the read_queue.
             try:
-                row_num, proc_num = self.read_queue.get(True, self.block_period)
+                row_num, proc_num = self.read_queue.get(
+                    True, self.block_period)
                 # look up the appropriate result_queue for this data processor
                 # instance
                 result_queue = self.result_queues[proc_num]
-                print 'processor {0} reading from row {1}'.format(proc_num,
-                                                                  row_num)
+                print('processor {0} reading from row {1}'.format(proc_num,
+                                                                  row_num))
                 result_queue.put(self.read_data(row_num))
                 another_loop = True
-            except Queue.Empty:
+            except queue.Empty:
                 pass
 
             # Check for any write requests in the write_queue.
             try:
                 row_num, data = self.write_queue.get(True, self.block_period)
-                print 'writing row', row_num
+                print('writing row', row_num)
                 self.write_data(row_num, data)
                 another_loop = True
-            except Queue.Empty:
+            except queue.Empty:
                 pass
 
         # close the HDF5 file before shutting down
@@ -127,7 +128,7 @@ def make_queues(num_processors):
     read_queue = multiprocessing.Queue()
     write_queue = multiprocessing.Queue()
     shutdown_recv, shutdown_send = multiprocessing.Pipe(False)
-    result_queues = [multiprocessing.Queue() for i in xrange(num_processors)]
+    result_queues = [multiprocessing.Queue() for i in range(num_processors)]
     file_access = FileAccess(file_path, read_queue, result_queues, write_queue,
                              shutdown_recv)
     file_access.start()
@@ -146,7 +147,7 @@ if __name__ == '__main__':
 
     processors = []
     output_files = []
-    for i in xrange(num_processors):
+    for i in range(num_processors):
         result_queue = result_queues[i]
         output_file = str(i)
         processor = DataProcessor(read_queue, result_queue, write_queue, i, n,
@@ -166,11 +167,11 @@ if __name__ == '__main__':
     shutdown_send.send(0)
 
     # print out contents of log files and delete them
-    print
+    print()
     for output_file in output_files:
-        print
-        print 'contents of log file {0}'.format(output_file)
-        print open(output_file, 'r').read()
+        print()
+        print('contents of log file {0}'.format(output_file))
+        print(open(output_file, 'r').read())
         os.remove(output_file)
 
     os.remove('test.h5')
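
The `import Queue as queue` line keeps the Python 2 module name while
aliasing it to the Python 3 one; a version-agnostic sketch of the same
idea:

    try:
        import queue            # Python 3 name
    except ImportError:
        import Queue as queue   # Python 2 fallback

    # queue.Empty can then be caught under either interpreter
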
diff --git a/examples/nested-iter.py b/examples/nested-iter.py
index ed4c19c..44c3bbb 100644
--- a/examples/nested-iter.py
+++ b/examples/nested-iter.py
@@ -5,14 +5,15 @@ This program needs the output file, 'tutorial1.h5', generated by
 
 """
 
+from __future__ import print_function
 import tables
-f=tables.open_file("tutorial1.h5")
+f = tables.open_file("tutorial1.h5")
 rout = f.root.detector.readout
 
-print "*** Result of a three-folded nested iterator ***"
+print("*** Result of a three-folded nested iterator ***")
 for p in rout.where('pressure < 16'):
     for q in rout.where('pressure < 9'):
         for n in rout.where('energy < 10'):
-            print "pressure, energy-->", p['pressure'], n['energy']
-print "*** End of selected data ***"
+            print("pressure, energy-->", p['pressure'], n['energy'])
+print("*** End of selected data ***")
 f.close()
diff --git a/examples/nested-tut.py b/examples/nested-tut.py
index ecdbddb..9d5ec9c 100644
--- a/examples/nested-tut.py
+++ b/examples/nested-tut.py
@@ -5,129 +5,129 @@ with ptdump or any HDF5 generic utility.
 
 :Author: F. Alted
 :Date: 2005/06/10
+
 """
 
+from __future__ import print_function
 import numpy
 
-from tables import *
+import tables
+
+#'-**-**-**-**- The sample nested class description  -**-**-**-**-**-'
 
-        #'-**-**-**-**- The sample nested class description  -**-**-**-**-**-'
 
-class Info(IsDescription):
+class Info(tables.IsDescription):
     """A sub-structure of Test"""
+
     _v_pos = 2   # The position in the whole structure
-    name = StringCol(10)
-    value = Float64Col(pos=0)
+    name = tables.StringCol(10)
+    value = tables.Float64Col(pos=0)
+
+colors = tables.Enum(['red', 'green', 'blue'])
 
-colors = Enum(['red', 'green', 'blue'])
 
-class NestedDescr(IsDescription):
-    """A description that has several nested columns"""
-    color = EnumCol(colors, 'red', base='uint32')
+class NestedDescr(tables.IsDescription):
+    """A description that has several nested columns."""
+
+    color = tables.EnumCol(colors, 'red', base='uint32')
     info1 = Info()
-    class info2(IsDescription):
+
+    class info2(tables.IsDescription):
         _v_pos = 1
-        name = StringCol(10)
-        value = Float64Col(pos=0)
-        class info3(IsDescription):
-            x = Float64Col(dflt=1)
-            y = UInt8Col(dflt=1)
+        name = tables.StringCol(10)
+        value = tables.Float64Col(pos=0)
+
+        class info3(tables.IsDescription):
+            x = tables.Float64Col(dflt=1)
+            y = tables.UInt8Col(dflt=1)
 
-print
-print   '-**-**-**-**-**-**- file creation  -**-**-**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- file creation  -**-**-**-**-**-**-**-')
 
 filename = "nested-tut.h5"
 
-print "Creating file:", filename
-fileh = open_file(filename, "w")
+print("Creating file:", filename)
+fileh = tables.open_file(filename, "w")
 
-print
-print   '-**-**-**-**-**- nested table creation  -**-**-**-**-**-'
+print()
+print('-**-**-**-**-**- nested table creation  -**-**-**-**-**-')
 
 table = fileh.create_table(fileh.root, 'table', NestedDescr)
 
 # Fill the table with some rows
 row = table.row
 for i in range(10):
-    row['color'] = colors[['red', 'green', 'blue'][i%3]]
+    row['color'] = colors[['red', 'green', 'blue'][i % 3]]
     row['info1/name'] = "name1-%s" % i
     row['info2/name'] = "name2-%s" % i
-    row['info2/info3/y'] =  i
+    row['info2/info3/y'] = i
     # All the rest will be filled with defaults
     row.append()
 
 table.flush()  # flush the row buffer to disk
-print repr(table.nrows)
+print(repr(table.nrows))
 
 nra = table[::4]
-print repr(nra)
+print(repr(nra))
 # Append some additional rows
 table.append(nra)
-print repr(table.nrows)
+print(repr(table.nrows))
 
 # Create a new table
 table2 = fileh.create_table(fileh.root, 'table2', nra)
-print repr(table2[:])
+print(repr(table2[:]))
 
 # Read also the info2/name values with color == colors.red
-names = [ x['info2/name'] for x in table if x['color'] == colors.red ]
+names = [x['info2/name'] for x in table if x['color'] == colors.red]
 
-print
-print "**** info2/name elements satisfying color == 'red':", repr(names)
+print()
+print("**** info2/name elements satisfying color == 'red':", repr(names))
 
-print
-print   '-**-**-**-**-**-**- table data reading & selection  -**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- table data reading & selection  -**-**-**-**-**-')
 
 # Read the data
-print
-print "**** table data contents:\n", table[:]
+print()
+print("**** table data contents:\n", table[:])
 
-print
-print "**** table.info2 data contents:\n", repr(table.cols.info2[1:5])
+print()
+print("**** table.info2 data contents:\n", repr(table.cols.info2[1:5]))
 
-print
-print "**** table.info2.info3 data contents:\n", repr(table.cols.info2.info3[1:5])
+print()
+print("**** table.info2.info3 data contents:\n",
+      repr(table.cols.info2.info3[1:5]))
 
-print "**** _f_col() ****"
-print repr(table.cols._f_col('info2'))
-print repr(table.cols._f_col('info2/info3/y'))
+print("**** _f_col() ****")
+print(repr(table.cols._f_col('info2')))
+print(repr(table.cols._f_col('info2/info3/y')))
 
-print
-print   '-**-**-**-**-**-**- table metadata  -**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- table metadata  -**-**-**-**-**-')
 
 # Read description metadata
-print
-print "**** table description (short):\n", repr(table.description)
-print
-print "**** more from manual, period ***"
-print repr(table.description.info1)
-print repr(table.description.info2.info3)
-print repr(table.description._v_nested_names)
-print repr(table.description.info1._v_nested_names)
-print
-print "**** now some for nested records, take that ****"
-print repr(table.description._v_nested_descr)
-print repr(numpy.rec.array(None, shape=0,
-                           dtype=table.description._v_nested_descr))
-print repr(numpy.rec.array(None, shape=0,
-                           dtype=table.description.info2._v_nested_descr))
-# NumPy recarrays doesn't have the machinery to understand the idiom below,
-# please use the above form instead.
-###print repr(numpy.rec.array(None, shape=1,
-###           names=table.description._v_nested_names,
-###           formats=table.description._v_nested_formats))
-from tables import nra
-print repr(nra.array(None, descr=table.description._v_nested_descr))
-print repr(nra.array(None, names=table.description._v_nested_names,
-                     formats=table.description._v_nested_formats))
-print
-print "**** and some iteration over descriptions, too ****"
+print()
+print("**** table description (short):\n", repr(table.description))
+print()
+print("**** more from manual, period ***")
+print(repr(table.description.info1))
+print(repr(table.description.info2.info3))
+print(repr(table.description._v_nested_names))
+print(repr(table.description.info1._v_nested_names))
+print()
+print("**** now some for nested records, take that ****")
+print(repr(table.description._v_nested_descr))
+print(repr(numpy.rec.array(None, shape=0,
+                           dtype=table.description._v_nested_descr)))
+print(repr(numpy.rec.array(None, shape=0,
+                           dtype=table.description.info2._v_nested_descr)))
+print()
+print("**** and some iteration over descriptions, too ****")
 for coldescr in table.description._f_walk():
-    print "column-->", coldescr
-print
-print "**** info2 sub-structure description:\n", table.description.info2
-print
-print "**** table representation (long form):\n", repr(table)
+    print("column-->", coldescr)
+print()
+print("**** info2 sub-structure description:\n", table.description.info2)
+print()
+print("**** table representation (long form):\n", repr(table))
 
 # Remember to always close the file
 fileh.close()
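
A minimal sketch of the '/'-separated path syntax used above for nested
columns, on the file this example writes:

    import tables

    with tables.open_file("nested-tut.h5", "r") as fileh:
        table = fileh.root.table
        print(table.cols._f_col('info2/info3/y')[:4])  # leaf column slice
        for row in table:
            if row['info2/info3/y'] > 8:   # paths work in Row lookups too
                print(row['info1/name'])
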
diff --git a/examples/nested1.py b/examples/nested1.py
index 5e1cb2a..5706229 100644
--- a/examples/nested1.py
+++ b/examples/nested1.py
@@ -1,25 +1,28 @@
 # Example to show how nested types can be dealt with in PyTables
 # F. Alted 2005/05/27
 
+from __future__ import print_function
 import random
-from tables import *
+import tables
 
 fileout = "nested1.h5"
 
 # An example of enumerated structure
-colors = Enum(['red', 'green', 'blue'])
+colors = tables.Enum(['red', 'green', 'blue'])
+
 
 def read(file):
-    fileh = open_file(file, "r")
+    fileh = tables.open_file(file, "r")
 
-    print "table (short)-->", fileh.root.table
-    print "table (long)-->", repr(fileh.root.table)
-    print "table (contents)-->", repr(fileh.root.table[:])
+    print("table (short)-->", fileh.root.table)
+    print("table (long)-->", repr(fileh.root.table))
+    print("table (contents)-->", repr(fileh.root.table[:]))
 
     fileh.close()
 
+
 def write(file, desc, indexed):
-    fileh = open_file(file, "w")
+    fileh = tables.open_file(file, "w")
     table = fileh.create_table(fileh.root, 'table', desc)
     for colname in indexed:
         table.colinstances[colname].create_index()
@@ -27,11 +30,11 @@ def write(file, desc, indexed):
     row = table.row
     for i in range(10):
         row['x'] = i
-        row['y'] = 10.2-i
+        row['y'] = 10.2 - i
         row['z'] = i
         row['color'] = colors[random.choice(['red', 'green', 'blue'])]
         row['info/name'] = "name%s" % i
-        row['info/info2/info3/z4'] =  i
+        row['info/info2/info3/z4'] = i
         # All the rest will be filled with defaults
         row.append()
 
@@ -39,36 +42,42 @@ def write(file, desc, indexed):
 
 # The sample nested class description
 
-class Info(IsDescription):
+
+class Info(tables.IsDescription):
     _v_pos = 2
-    Name = UInt32Col()
-    Value = Float64Col()
-
-class Test(IsDescription):
-    """A description that has several columns"""
-    x = Int32Col(shape=2, dflt=0, pos=0)
-    y = Float64Col(dflt=1.2, shape=(2, 3))
-    z = UInt8Col(dflt=1)
-    color = EnumCol(colors, 'red', base='uint32', shape=(2,))
+    Name = tables.UInt32Col()
+    Value = tables.Float64Col()
+
+
+class Test(tables.IsDescription):
+    """A description that has several columns."""
+
+    x = tables.Int32Col(shape=2, dflt=0, pos=0)
+    y = tables.Float64Col(dflt=1.2, shape=(2, 3))
+    z = tables.UInt8Col(dflt=1)
+    color = tables.EnumCol(colors, 'red', base='uint32', shape=(2,))
     Info = Info()
-    class info(IsDescription):
+
+    class info(tables.IsDescription):
         _v_pos = 1
-        name = StringCol(10)
-        value = Float64Col(pos=0)
-        y2 = Float64Col(dflt=1, shape=(2, 3), pos=1)
-        z2 = UInt8Col(dflt=1)
-        class info2(IsDescription):
-            y3 = Float64Col(dflt=1, shape=(2, 3))
-            z3 = UInt8Col(dflt=1)
-            name = StringCol(10)
-            value = EnumCol(colors, 'blue', base='uint32', shape=(1,))
-            class info3(IsDescription):
-                name = StringCol(10)
-                value = Time64Col()
-                y4 = Float64Col(dflt=1, shape=(2, 3))
-                z4 = UInt8Col(dflt=1)
+        name = tables.StringCol(10)
+        value = tables.Float64Col(pos=0)
+        y2 = tables.Float64Col(dflt=1, shape=(2, 3), pos=1)
+        z2 = tables.UInt8Col(dflt=1)
+
+        class info2(tables.IsDescription):
+            y3 = tables.Float64Col(dflt=1, shape=(2, 3))
+            z3 = tables.UInt8Col(dflt=1)
+            name = tables.StringCol(10)
+            value = tables.EnumCol(colors, 'blue', base='uint32', shape=(1,))
+
+            class info3(tables.IsDescription):
+                name = tables.StringCol(10)
+                value = tables.Time64Col()
+                y4 = tables.Float64Col(dflt=1, shape=(2, 3))
+                z4 = tables.UInt8Col(dflt=1)
 
 # Write the file and read it
 write(fileout, Test, ['info/info2/z3'])
 read(fileout)
-print "You can have a look at '%s' output file now." % fileout
+print("You can have a look at '%s' output file now." % fileout)
diff --git a/examples/objecttree.py b/examples/objecttree.py
index 6a514ab..1ed58fa 100644
--- a/examples/objecttree.py
+++ b/examples/objecttree.py
@@ -1,12 +1,15 @@
-from tables import *
+from __future__ import print_function
+import tables
 
-class Particle(IsDescription):
-    identity = StringCol(itemsize=22, dflt=" ", pos=0)  # character String
-    idnumber = Int16Col(dflt=1, pos = 1)  # short integer
-    speed    = Float32Col(dflt=1, pos = 1)  # single-precision
+
+class Particle(tables.IsDescription):
+    identity = tables.StringCol(itemsize=22, dflt=" ", pos=0)
+                                                # character String
+    idnumber = tables.Int16Col(dflt=1, pos=1)   # short integer
+    speed = tables.Float32Col(dflt=1, pos=1)    # single-precision
 
 # Open a file in "w"rite mode
-fileh = open_file("objecttree.h5", mode = "w")
+fileh = tables.open_file("objecttree.h5", mode="w")
 # Get the HDF5 root group
 root = fileh.root
 
@@ -15,7 +18,8 @@ group1 = fileh.create_group(root, "group1")
 group2 = fileh.create_group(root, "group2")
 
 # Now, create an array in root group
-array1 = fileh.create_array(root, "array1", ["string", "array"], "String array")
+array1 = fileh.create_array(
+    root, "array1", ["string", "array"], "String array")
 # Create 2 new tables in group1
 table1 = fileh.create_table(group1, "table1", Particle)
 table2 = fileh.create_table("/group2", "table2", Particle)
@@ -27,11 +31,11 @@ for table in (table1, table2):
     # Get the record object associated with the table:
     row = table.row
     # Fill the table with 10 records
-    for i in xrange(10):
+    for i in range(10):
         # First, assign the values to the Particle record
-        row['identity']  = 'This is particle: %2d' % (i)
+        row['identity'] = 'This is particle: %2d' % (i)
         row['idnumber'] = i
-        row['speed']  = i * 2.
+        row['speed'] = i * 2.
         # This injects the Record values
         row.append()
 
diff --git a/examples/particles.py b/examples/particles.py
index 027f327..03eccab 100644
--- a/examples/particles.py
+++ b/examples/particles.py
@@ -1,112 +1,122 @@
-"""
-Beware! you need PyTables >= 2.3 to run this script!
-"""
+"""Beware! you need PyTables >= 2.3 to run this script!"""
 
+from __future__ import print_function
 from time import time  # use clock for Win
-import numpy
-from tables import *
+import numpy as np
+import tables
 
-#NEVENTS = 10000
+# NEVENTS = 10000
 NEVENTS = 20000
 MAX_PARTICLES_PER_EVENT = 100
 
 # Particle description
-class Particle(IsDescription):
-    #event_id    = Int32Col(pos=1, indexed=True) # event id (indexed)
-    event_id    = Int32Col(pos=1)               # event id (not indexed)
-    particle_id = Int32Col(pos=2)               # particle id in the event
-    parent_id   = Int32Col(pos=3)               # the id of the parent particle
-                                                # (negative values means no parent)
-    momentum    = Float64Col(shape=3, pos=4)    # momentum of the particle
-    mass        = Float64Col(pos=5)             # mass of the particle
-
-# # Create a new table for events
+
+
+class Particle(tables.IsDescription):
+    # event_id = tables.Int32Col(pos=1, indexed=True) # event id (indexed)
+    event_id = tables.Int32Col(pos=1)               # event id (not indexed)
+    particle_id = tables.Int32Col(pos=2)            # particle id in the event
+    parent_id = tables.Int32Col(pos=3)              # the id of the parent
+                                                    # particle (negative
+                                                    # values mean no parent)
+    momentum = tables.Float64Col(shape=3, pos=4)    # momentum of the particle
+    mass = tables.Float64Col(pos=5)                 # mass of the particle
+
+# Create a new table for events
 t1 = time()
-print "Creating a table with %s entries aprox.. Wait please..." % \
-      (int(NEVENTS*(MAX_PARTICLES_PER_EVENT/2.)))
-fileh = open_file("particles-pro.h5", mode = "w")
+print("Creating a table with %s entries aprox.. Wait please..." %
+      (int(NEVENTS * (MAX_PARTICLES_PER_EVENT / 2.))))
+fileh = tables.open_file("particles-pro.h5", mode="w")
 group = fileh.create_group(fileh.root, "events")
-table = fileh.create_table(group, 'table', Particle, "A table", Filters(0))
+table = fileh.create_table(group, 'table', Particle, "A table",
+                           tables.Filters(0))
 # Choose this line if you want data compression
-#table = fileh.create_table(group, 'table', Particle, "A table", Filters(1))
+# table = fileh.create_table(group, 'table', Particle, "A table", Filters(1))
 
 # Fill the table with events
-numpy.random.seed(1)  # In order to have reproducible results
+np.random.seed(1)  # In order to have reproducible results
 particle = table.row
-for i in xrange(NEVENTS):
-    for j in xrange(numpy.random.randint(0, MAX_PARTICLES_PER_EVENT)):
-        particle['event_id']  = i
+for i in range(NEVENTS):
+    for j in range(np.random.randint(0, MAX_PARTICLES_PER_EVENT)):
+        particle['event_id'] = i
         particle['particle_id'] = j
         particle['parent_id'] = j - 10     # 10 root particles (max)
-        particle['momentum'] = numpy.random.normal(5.0, 2.0, size=3)
-        particle['mass'] = numpy.random.normal(500.0, 10.0)
+        particle['momentum'] = np.random.normal(5.0, 2.0, size=3)
+        particle['mass'] = np.random.normal(500.0, 10.0)
         # This injects the row values.
         particle.append()
 table.flush()
-print "Added %s entries --- Time: %s sec" % (table.nrows, round((time()-t1), 3))
+print("Added %s entries --- Time: %s sec" %
+      (table.nrows, round((time() - t1), 3)))
 
 t1 = time()
-print "Creating index..."
-table.cols.event_id.create_index(optlevel=0, verbose=True)
-print "Index created --- Time: %s sec" % (round((time()-t1), 3))
+print("Creating index...")
+table.cols.event_id.create_index(optlevel=0, _verbose=True)
+print("Index created --- Time: %s sec" % (round((time() - t1), 3)))
 # Add the number of events as an attribute
 table.attrs.nevents = NEVENTS
 
 fileh.close()
 
 # Open the file in read-only mode and start selections
-print "Selecting events..."
-fileh = open_file("particles-pro.h5", mode = "r")
+print("Selecting events...")
+fileh = tables.open_file("particles-pro.h5", mode="r")
 table = fileh.root.events.table
 
-print "Particles in event 34:",
-nrows = 0; t1 = time()
+print("Particles in event 34:", end=' ')
+nrows = 0
+t1 = time()
 for row in table.where("event_id == 34"):
     nrows += 1
-print nrows
-print "Done --- Time:", round((time()-t1), 3), "sec"
+print(nrows)
+print("Done --- Time:", round((time() - t1), 3), "sec")
 
-print "Root particles in event 34:",
-nrows = 0; t1 = time()
+print("Root particles in event 34:", end=' ')
+nrows = 0
+t1 = time()
 for row in table.where("event_id == 34"):
     if row['parent_id'] < 0:
         nrows += 1
-print nrows
-print "Done --- Time:", round((time()-t1), 3), "sec"
+print(nrows)
+print("Done --- Time:", round((time() - t1), 3), "sec")
 
-print "Sum of masses of root particles in event 34:",
-smass = 0.0; t1 = time()
+print("Sum of masses of root particles in event 34:", end=' ')
+smass = 0.0
+t1 = time()
 for row in table.where("event_id == 34"):
     if row['parent_id'] < 0:
         smass += row['mass']
-print smass
-print "Done --- Time:", round((time()-t1), 3), "sec"
+print(smass)
+print("Done --- Time:", round((time() - t1), 3), "sec")
 
-print "Sum of masses of daughter particles for particle 3 in event 34:",
-smass = 0.0; t1 = time()
+print(
+    "Sum of masses of daughter particles for particle 3 in event 34:", end=' ')
+smass = 0.0
+t1 = time()
 for row in table.where("event_id == 34"):
     if row['parent_id'] == 3:
         smass += row['mass']
-print smass
-print "Done --- Time:", round((time()-t1), 3), "sec"
+print(smass)
+print("Done --- Time:", round((time() - t1), 3), "sec")
 
-print "Sum of module of momentum for particle 3 in event 34:",
-smomentum = 0.0; t1 = time()
-#for row in table.where("(event_id == 34) & ((parent_id) == 3)"):
+print("Sum of module of momentum for particle 3 in event 34:", end=' ')
+smomentum = 0.0
+t1 = time()
+# for row in table.where("(event_id == 34) & ((parent_id) == 3)"):
 for row in table.where("event_id == 34"):
     if row['parent_id'] == 3:
-        smomentum += numpy.sqrt(numpy.add.reduce(row['momentum']**2))
-print smomentum
-print "Done --- Time:", round((time()-t1), 3), "sec"
+        smomentum += np.sqrt(np.add.reduce(row['momentum'] ** 2))
+print(smomentum)
+print("Done --- Time:", round((time() - t1), 3), "sec")
 
 # This is the same as above, but using generator expressions
 # Python >= 2.4 needed here!
-print "Sum of module of momentum for particle 3 in event 34 (2):",
+print("Sum of module of momentum for particle 3 in event 34 (2):", end=' ')
 t1 = time()
-print sum(numpy.sqrt(numpy.add.reduce(row['momentum']**2))
+print(sum(np.sqrt(np.add.reduce(row['momentum'] ** 2))
           for row in table.where("event_id == 34")
-          if row['parent_id'] == 3)
-print "Done --- Time:", round((time()-t1), 3), "sec"
+          if row['parent_id'] == 3))
+print("Done --- Time:", round((time() - t1), 3), "sec")
 
 
 fileh.close()
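
The commented-out condition above hints at the faster variant: pushing both
clauses into a single in-kernel query instead of filtering parent_id in
Python. A sketch on the file written above:

    import tables

    with tables.open_file("particles-pro.h5", mode="r") as fileh:
        table = fileh.root.events.table
        smass = sum(row['mass'] for row in
                    table.where("(event_id == 34) & (parent_id == 3)"))
        print("Sum of masses (in-kernel):", smass)
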
diff --git a/examples/read_array_out_arg.py b/examples/read_array_out_arg.py
index e50d8bb..dcbe596 100644
--- a/examples/read_array_out_arg.py
+++ b/examples/read_array_out_arg.py
@@ -27,7 +27,7 @@ def standard_read(array_size):
     with tables.open_file('test.h5', 'r') as fobj:
         array = fobj.get_node('/', 'test')
         start = time.time()
-        for i in xrange(N):
+        for i in range(N):
             output = array.read(0, array_size, 1)
         end = time.time()
         assert(np.all(output == 1))
@@ -40,7 +40,7 @@ def pre_allocated_read(array_size):
         array = fobj.get_node('/', 'test')
         start = time.time()
         output = np.empty(array_size, 'i8')
-        for i in xrange(N):
+        for i in range(N):
             array.read(0, array_size, 1, out=output)
         end = time.time()
         assert(np.all(output == 1))
@@ -58,4 +58,4 @@ if __name__ == '__main__':
         create_file(array_size)
         standard_read(array_size)
         pre_allocated_read(array_size)
-        print()
\ No newline at end of file
+        print()
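
The out= buffer has to be allocated with a matching dtype and enough room
before the call; a minimal sketch (the size is illustrative, assuming
test.h5 was left with an 'i8' array of at least 1000 elements):

    import numpy as np
    import tables

    with tables.open_file('test.h5', 'r') as fobj:
        array = fobj.get_node('/', 'test')
        out = np.empty(1000, dtype='i8')  # preallocated once
        array.read(0, 1000, 1, out=out)   # fills the buffer in place
        print(out[:5])
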
diff --git a/examples/split.py b/examples/split.py
new file mode 100644
index 0000000..bbc7f97
--- /dev/null
+++ b/examples/split.py
@@ -0,0 +1,38 @@
+"""Use the H5FD_SPLIT driver to store metadata and raw data in separate files.
+
+In this example, we store the metadata file in the current directory and
+the raw data file in a subdirectory.
+
+"""
+
+import os
+import errno
+import numpy
+import tables
+
+FNAME = "split"
+DRIVER = "H5FD_SPLIT"
+RAW_DIR = "raw"
+DRIVER_PROPS = {
+    "driver_split_raw_ext": os.path.join(RAW_DIR, "%s-r.h5")
+}
+DATA_SHAPE = (2, 10)
+
+
+class FooBar(tables.IsDescription):
+    tag = tables.StringCol(16)
+    data = tables.Float32Col(shape=DATA_SHAPE)
+
+try:
+    os.mkdir(RAW_DIR)
+except OSError as e:
+    if e.errno != errno.EEXIST:
+        raise  # only an already-existing directory is expected here
+with tables.open_file(FNAME, mode="w", driver=DRIVER, **DRIVER_PROPS) as f:
+    group = f.create_group("/", "foo", "foo desc")
+    table = f.create_table(group, "bar", FooBar, "bar desc")
+    for i in range(5):
+        table.row["tag"] = "t%d" % i
+        table.row["data"] = numpy.random.random_sample(DATA_SHAPE)
+        table.row.append()
+    table.flush()
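
Reading the split pair back requires the same driver, and the same
raw-extension property since a non-default one was used above; a sketch
under that assumption:

    import os
    import tables

    props = {"driver_split_raw_ext": os.path.join("raw", "%s-r.h5")}
    with tables.open_file("split", mode="r", driver="H5FD_SPLIT",
                          **props) as f:
        print(f.root.foo.bar[:2])  # first two rows of the bar table
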
diff --git a/examples/table-tree.py b/examples/table-tree.py
index c5f2c8f..5d7feba 100644
--- a/examples/table-tree.py
+++ b/examples/table-tree.py
@@ -1,40 +1,44 @@
-import numpy
-from tables import *
-
-class Particle(IsDescription):
-    ADCcount    = Int16Col()              # signed short integer
-    TDCcount    = UInt8Col()              # unsigned byte
-    grid_i      = Int32Col()              # integer
-    grid_j      = Int32Col()              # integer
-    idnumber    = Int64Col()              # signed long long
-    name        = StringCol(16, dflt="")  # 16-character String
-    pressure    = Float32Col(shape=2)     # float  (single-precision)
-    temperature = Float64Col()            # double (double-precision)
+from __future__ import print_function
+import numpy as np
+import tables
+
+
+class Particle(tables.IsDescription):
+    ADCcount = tables.Int16Col()                # signed short integer
+    TDCcount = tables.UInt8Col()                # unsigned byte
+    grid_i = tables.Int32Col()                  # integer
+    grid_j = tables.Int32Col()                  # integer
+    idnumber = tables.Int64Col()                # signed long long
+    name = tables.StringCol(16, dflt="")        # 16-character String
+    pressure = tables.Float32Col(shape=2)       # float  (single-precision)
+    temperature = tables.Float64Col()           # double (double-precision)
 
 Particle2 = {
     # You can also use any of the atom factories, i.e. the one which
     # accepts a PyTables type.
-    "ADCcount": Col.from_type("int16"),    # signed short integer
-    "TDCcount": Col.from_type("uint8"),    # unsigned byte
-    "grid_i": Col.from_type("int32"),    # integer
-    "grid_j": Col.from_type("int32"),    # integer
-    "idnumber": Col.from_type("int64"),    # signed long long
-    "name": Col.from_kind("string", 16),  # 16-character String
-    "pressure": Col.from_type("float32", (2,)), # float  (single-precision)
-    "temperature": Col.from_type("float64"),  # double (double-precision)
+    "ADCcount": tables.Col.from_type("int16"),          # signed short integer
+    "TDCcount": tables.Col.from_type("uint8"),          # unsigned byte
+    "grid_i": tables.Col.from_type("int32"),            # integer
+    "grid_j": tables.Col.from_type("int32"),            # integer
+    "idnumber": tables.Col.from_type("int64"),          # signed long long
+    "name": tables.Col.from_kind("string", 16),         # 16-character String
+    "pressure": tables.Col.from_type("float32", (2,)),  # float
+                                                        # (single-precision)
+    "temperature": tables.Col.from_type("float64"),     # double
+                                                        # (double-precision)
 }
 
 # The name of our HDF5 filename
 filename = "table-tree.h5"
 
 # Open a file in "w"rite mode
-h5file = open_file(filename, mode = "w")
+h5file = tables.open_file(filename, mode="w")
 
 # Create a new group under "/" (root)
 group = h5file.create_group("/", 'detector')
 
 # Create one table on it
-#table = h5file.create_table(group, 'table', Particle, "Title example")
+# table = h5file.create_table(group, 'table', Particle, "Title example")
 # You can choose creating a Table from a description dictionary if you wish
 table = h5file.create_table(group, 'table', Particle2, "Title example")
 
@@ -42,15 +46,15 @@ table = h5file.create_table(group, 'table', Particle2, "Title example")
 particle = table.row
 
 # Fill the table with 10 particles
-for i in xrange(10):
+for i in range(10):
     # First, assign the values to the Particle record
-    particle['name']  = 'Particle: %6d' % (i)
+    particle['name'] = 'Particle: %6d' % (i)
     particle['TDCcount'] = i % 256
     particle['ADCcount'] = (i * 256) % (1 << 16)
     particle['grid_i'] = i
     particle['grid_j'] = 10 - i
-    particle['pressure'] = [float(i*i), float(i*2)]
-    particle['temperature'] = float(i**2)
+    particle['pressure'] = [float(i * i), float(i * 2)]
+    particle['temperature'] = float(i ** 2)
     particle['idnumber'] = i * (2 ** 34)  # This exceeds integer range
     # This injects the Record values.
     particle.append()
@@ -59,117 +63,117 @@ for i in xrange(10):
 table.flush()
 
 # Get actual data from table. We are interested in column pressure.
-pressure = [ p['pressure'] for p in table.iterrows() ]
-print "Last record ==>", p
-print "Column pressure ==>", numpy.array(pressure)
-print "Total records in table ==> ", len(pressure)
-print
+pressure = [p['pressure'] for p in table.iterrows()]
+print("Last record ==>", p)
+print("Column pressure ==>", np.array(pressure))
+print("Total records in table ==> ", len(pressure))
+print()
 
 # Create a new group to hold new arrays
 gcolumns = h5file.create_group("/", "columns")
-print "columns ==>", gcolumns, pressure
+print("columns ==>", gcolumns, pressure)
 # Create an array with this info under '/columns' having a 'list' flavor
 h5file.create_array(gcolumns, 'pressure', pressure,
-                   "Pressure column")
-print "gcolumns.pressure type ==> ", gcolumns.pressure.atom.dtype
+                    "Pressure column")
+print("gcolumns.pressure type ==> ", gcolumns.pressure.atom.dtype)
 
 # Do the same with TDCcount, but with a numpy object
-TDC = [ p['TDCcount'] for p in table.iterrows() ]
-print "TDC ==>", TDC
-print "TDC shape ==>", numpy.array(TDC).shape
-h5file.create_array('/columns', 'TDC', numpy.array(TDC), "TDCcount column")
+TDC = [p['TDCcount'] for p in table.iterrows()]
+print("TDC ==>", TDC)
+print("TDC shape ==>", np.array(TDC).shape)
+h5file.create_array('/columns', 'TDC', np.array(TDC), "TDCcount column")
 
 # Do the same with name column
-names = [ p['name'] for p in table.iterrows() ]
-print "names ==>", names
+names = [p['name'] for p in table.iterrows()]
+print("names ==>", names)
 h5file.create_array('/columns', 'name', names, "Name column")
 # This works even with homogeneous tuples or lists (!)
-print "gcolumns.name shape ==>", gcolumns.name.shape
-print "gcolumns.name type ==> ", gcolumns.name.atom.dtype
+print("gcolumns.name shape ==>", gcolumns.name.shape)
+print("gcolumns.name type ==> ", gcolumns.name.atom.dtype)
 
-print "Table dump:"
+print("Table dump:")
 for p in table.iterrows():
-    print p
+    print(p)
 
 # Save a recarray object under detector
-r = numpy.rec.array("a"*300, formats='f4,3i4,a5,i2', shape=3)
+r = np.rec.array("a" * 300, formats='f4,3i4,a5,i2', shape=3)
 recarrt = h5file.create_table("/detector", 'recarray', r, "RecArray example")
 r2 = r[0:3:2]
 # Change the byteorder property
 recarrt = h5file.create_table("/detector", 'recarray2', r2,
-                             "Non-contiguous recarray")
-print recarrt
-print
+                              "Non-contiguous recarray")
+print(recarrt)
+print()
 
-print h5file.root.detector.table.description
+print(h5file.root.detector.table.description)
 # Close the file
 h5file.close()
 
-#sys.exit()
+# sys.exit()
 
 # Reopen it in append mode
-h5file = open_file(filename, "a")
+h5file = tables.open_file(filename, "a")
 
 # OK, let's start browsing the tree from this filename
-print "Reading info from filename:", h5file.filename
-print
+print("Reading info from filename:", h5file.filename)
+print()
 
 # Firstly, list all the groups on tree
-print "Groups in file:"
+print("Groups in file:")
 for group in h5file.walk_groups("/"):
-    print group
-print
+    print(group)
+print()
 
 # List all the nodes (Group and Leaf objects) on tree
-print "List of all nodes in file:"
-print h5file
+print("List of all nodes in file:")
+print(h5file)
 
 # And finally, only the Arrays (Array objects)
-print "Arrays in file:"
+print("Arrays in file:")
 for array in h5file.walk_nodes("/", classname="Array"):
-    print array
-print
+    print(array)
+print()
 
 # Get group /detector and print some info on it
 detector = h5file.get_node("/detector")
-print "detector object ==>", detector
+print("detector object ==>", detector)
 
 # List only leaves on detector
-print "Leaves in group", detector, ":"
+print("Leaves in group", detector, ":")
 for leaf in h5file.list_nodes("/detector", 'Leaf'):
-    print leaf
-print
+    print(leaf)
+print()
 
 # List only tables on detector
-print "Tables in group", detector, ":"
+print("Tables in group", detector, ":")
 for leaf in h5file.list_nodes("/detector", 'Table'):
-    print leaf
-print
+    print(leaf)
+print()
 
 # List only arrays on detector (there should be none!)
-print "Arrays in group", detector, ":"
+print("Arrays in group", detector, ":")
 for leaf in h5file.list_nodes("/detector", 'Array'):
-    print leaf
-print
+    print(leaf)
+print()
 
 # Get "/detector" Group object
 group = h5file.root.detector
-print "/detector ==>", group
+print("/detector ==>", group)
 
 # Get the "/detector/table
 table = h5file.get_node("/detector/table")
-print "/detector/table ==>", table
+print("/detector/table ==>", table)
 
 # Get metadata from table
-print "Object:", table
-print "Table name:", table.name
-print "Table title:", table.title
-print "Rows saved on table: %d" % (table.nrows)
+print("Object:", table)
+print("Table name:", table.name)
+print("Table title:", table.title)
+print("Rows saved on table: %d" % (table.nrows))
 
-print "Variable names on table with their type:"
+print("Variable names on table with their type:")
 for name in table.colnames:
-    print "  ", name, ':=', table.coldtypes[name]
-print
+    print("  ", name, ':=', table.coldtypes[name])
+print()
 
 # Read arrays in /columns/names and /columns/pressure
 
@@ -177,38 +181,38 @@ print
 pressureObject = h5file.get_node("/columns", "pressure")
 
 # Get some metadata on this object
-print "Info on the object:", pressureObject
-print "  shape ==>", pressureObject.shape
-print "  title ==>", pressureObject.title
-print "  type ==> ", pressureObject.atom.dtype
-print "  byteorder ==> ", pressureObject.byteorder
+print("Info on the object:", pressureObject)
+print("  shape ==>", pressureObject.shape)
+print("  title ==>", pressureObject.title)
+print("  type ==> ", pressureObject.atom.dtype)
+print("  byteorder ==> ", pressureObject.byteorder)
 
 # Read the pressure actual data
 pressureArray = pressureObject.read()
-print "  data type ==>", type(pressureArray)
-print "  data ==>", pressureArray
-print
+print("  data type ==>", type(pressureArray))
+print("  data ==>", pressureArray)
+print()
 
 # Get the object in "/columns/names"
 nameObject = h5file.root.columns.name
 
 # Get some metadata on this object
-print "Info on the object:", nameObject
-print "  shape ==>", nameObject.shape
-print "  title ==>", nameObject.title
-print "  type ==> " % nameObject.atom.dtype
+print("Info on the object:", nameObject)
+print("  shape ==>", nameObject.shape)
+print("  title ==>", nameObject.title)
+print("  type ==> " % nameObject.atom.dtype)
 
 
 # Read the 'name' actual data
 nameArray = nameObject.read()
-print "  data type ==>", type(nameArray)
-print "  data ==>", nameArray
+print("  data type ==>", type(nameArray))
+print("  data ==>", nameArray)
 
 # Print the data for both arrays
-print "Data on arrays name and pressure:"
+print("Data on arrays name and pressure:")
 for i in range(pressureObject.shape[0]):
-    print "".join(nameArray[i]), "-->", pressureArray[i]
-print
+    print("".join(nameArray[i]), "-->", pressureArray[i])
+print()
 
 
 # Finally, append some new records to table
@@ -216,15 +220,15 @@ table = h5file.root.detector.table
 
 # Append 5 new particles to table (yes, tables can be enlarged!)
 particle = table.row
-for i in xrange(10, 15):
+for i in range(10, 15):
     # First, assign the values to the Particle record
-    particle['name']  = 'Particle: %6d' % (i)
+    particle['name'] = 'Particle: %6d' % (i)
     particle['TDCcount'] = i % 256
     particle['ADCcount'] = (i * 256) % (1 << 16)
     particle['grid_i'] = i
     particle['grid_j'] = 10 - i
-    particle['pressure'] = [float(i*i), float(i*2)]
-    particle['temperature'] = float(i**2)
+    particle['pressure'] = [float(i * i), float(i * 2)]
+    particle['temperature'] = float(i ** 2)
     particle['idnumber'] = i * (2 ** 34)  # This exceeds integer range
     # This injects the Row values.
     particle.append()
@@ -232,66 +236,66 @@ for i in xrange(10, 15):
 # Flush this table
 table.flush()
 
-print "Columns name and pressure on expanded table:"
+print("Columns name and pressure on expanded table:")
 # Print some table columns, for comparison with array data
 for p in table:
-    print p['name'], '-->', p['pressure']
-print
+    print(p['name'], '-->', p['pressure'])
+print()
 
 # Try out several flavors
 oldflavor = table.flavor
-print table.read(field="ADCcount")
+print(table.read(field="ADCcount"))
 table.flavor = "numpy"
-print table.read(field="ADCcount")
+print(table.read(field="ADCcount"))
 table.flavor = oldflavor
-print table.read(0, 0, 1, "name")
+print(table.read(0, 0, 1, "name"))
 table.flavor = "python"
-print table.read(0, 0, 1, "name")
+print(table.read(0, 0, 1, "name"))
 table.flavor = oldflavor
-print table.read(0, 0, 2, "pressure")
+print(table.read(0, 0, 2, "pressure"))
 table.flavor = "python"
-print table.read(0, 0, 2, "pressure")
+print(table.read(0, 0, 2, "pressure"))
 table.flavor = oldflavor
 
 # Several range selections
-print "Extended slice in selection: [0:7:6]"
-print table.read(0, 7, 6)
-print "Single record in selection: [1]"
-print table.read(1)
-print "Last record in selection: [-1]"
-print table.read(-1)
-print "Two records before the last in selection: [-3:-1]"
-print table.read(-3, -1)
+print("Extended slice in selection: [0:7:6]")
+print(table.read(0, 7, 6))
+print("Single record in selection: [1]")
+print(table.read(1))
+print("Last record in selection: [-1]")
+print(table.read(-1))
+print("Two records before the last in selection: [-3:-1]")
+print(table.read(-3, -1))
 
 # Print a recarray in table form
 table = h5file.root.detector.recarray2
-print "recarray2:", table
-print "  nrows:", table.nrows
-print "  byteorder:", table.byteorder
-print "  coldtypes:", table.coldtypes
-print "  colnames:", table.colnames
+print("recarray2:", table)
+print("  nrows:", table.nrows)
+print("  byteorder:", table.byteorder)
+print("  coldtypes:", table.coldtypes)
+print("  colnames:", table.colnames)
 
-print table.read()
+print(table.read())
 for p in table.iterrows():
-    print p['f1'], '-->', p['f2']
-print
+    print(p['f1'], '-->', p['f2'])
+print()
 
-result = [ rec['f1'] for rec in table if rec.nrow < 2 ]
-print result
+result = [rec['f1'] for rec in table if rec.nrow < 2]
+print(result)
 
 # Test the File.rename_node() method
-#h5file.rename_node(h5file.root.detector.recarray2, "recarray3")
+# h5file.rename_node(h5file.root.detector.recarray2, "recarray3")
 h5file.rename_node(table, "recarray3")
 # Delete a Leaf from the HDF5 tree
 h5file.remove_node(h5file.root.detector.recarray3)
 # Delete the detector group and its leaves recursively
-#h5file.remove_node(h5file.root.detector, recursive=1)
+# h5file.remove_node(h5file.root.detector, recursive=1)
 # Create a Group and then remove it
 h5file.create_group(h5file.root, "newgroup")
 h5file.remove_node(h5file.root, "newgroup")
 h5file.rename_node(h5file.root.columns, "newcolumns")
 
-print h5file
+print(h5file)
 
 # Close this file
 h5file.close()
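
The flavor switching exercised above determines the container type that
``read()`` returns. A minimal sketch of the round trip, assuming a file laid
out like the example (the file name here is hypothetical):

    import tables

    h5file = tables.open_file("example.h5", mode="r")  # hypothetical name
    table = h5file.root.detector.table
    table.flavor = "numpy"   # read() returns NumPy objects (the default)
    print(table.read(field="ADCcount"))
    table.flavor = "python"  # read() now returns plain Python lists
    print(table.read(field="ADCcount"))
    h5file.close()
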
diff --git a/examples/table1.py b/examples/table1.py
index 9e7f9c6..37f7249 100644
--- a/examples/table1.py
+++ b/examples/table1.py
@@ -1,29 +1,32 @@
-from tables import *
+from __future__ import print_function
+import tables
 
-class Particle(IsDescription):
-    name        = StringCol(16, pos=1)   # 16-character String
-    lati        = Int32Col(pos=2)        # integer
-    longi       = Int32Col(pos=3)        # integer
-    pressure    = Float32Col(pos=4)      # float  (single-precision)
-    temperature = Float64Col(pos=5)      # double (double-precision)
+
+class Particle(tables.IsDescription):
+    name = tables.StringCol(16, pos=1)      # 16-character String
+    lati = tables.Int32Col(pos=2)           # integer
+    longi = tables.Int32Col(pos=3)          # integer
+    pressure = tables.Float32Col(pos=4)     # float  (single-precision)
+    temperature = tables.Float64Col(pos=5)  # double (double-precision)
 
 # Open a file in "w"rite mode
-fileh = open_file("table1.h5", mode = "w")
+fileh = tables.open_file("table1.h5", mode="w")
 # Create a new group
 group = fileh.create_group(fileh.root, "newgroup")
 
 # Create a new table in newgroup group
-table = fileh.create_table(group, 'table', Particle, "A table", Filters(1))
+table = fileh.create_table(group, 'table', Particle, "A table",
+                           tables.Filters(1))
 particle = table.row
 
 # Fill the table with 10 particles
-for i in xrange(10):
+for i in range(10):
     # First, assign the values to the Particle record
-    particle['name']  = 'Particle: %6d' % (i)
+    particle['name'] = 'Particle: %6d' % (i)
     particle['lati'] = i
     particle['longi'] = 10 - i
-    particle['pressure'] = float(i*i)
-    particle['temperature'] = float(i**2)
+    particle['pressure'] = float(i * i)
+    particle['temperature'] = float(i ** 2)
     # This injects the row values.
     particle.append()
 
@@ -32,39 +35,39 @@ for i in xrange(10):
 table.flush()
 
 # Add a couple of user attrs
-table.attrs.user_attr1=1.023
-table.attrs.user_attr2="This is the second user attr"
+table.attrs.user_attr1 = 1.023
+table.attrs.user_attr2 = "This is the second user attr"
 
 # Append several rows in only one call
-table.append([("Particle:     10", 10, 0, 10*10, 10**2),
-              ("Particle:     11", 11, -1, 11*11, 11**2),
-              ("Particle:     12", 12, -2, 12*12, 12**2)])
+table.append([("Particle:     10", 10, 0, 10 * 10, 10 ** 2),
+              ("Particle:     11", 11, -1, 11 * 11, 11 ** 2),
+              ("Particle:     12", 12, -2, 12 * 12, 12 ** 2)])
 
 group = fileh.root.newgroup
-print "Nodes under group", group, ":"
+print("Nodes under group", group, ":")
 for node in fileh.list_nodes(group):
-    print node
-print
+    print(node)
+print()
 
-print "Leaves everywhere in file", fileh.filename, ":"
+print("Leaves everywhere in file", fileh.filename, ":")
 for leaf in fileh.walk_nodes(classname="Leaf"):
-    print leaf
-print
+    print(leaf)
+print()
 
 table = fileh.root.newgroup.table
-print "Object:", table
-print "Table name: %s. Table title: %s" % (table.name, table.title)
-print "Rows saved on table: %d" % (table.nrows)
+print("Object:", table)
+print("Table name: %s. Table title: %s" % (table.name, table.title))
+print("Rows saved on table: %d" % (table.nrows))
 
-print "Variable names on table with their type:"
+print("Variable names on table with their type:")
 for name in table.colnames:
-    print "  ", name, ':=', table.coldtypes[name]
+    print("  ", name, ':=', table.coldtypes[name])
 
-print "Table contents:"
+print("Table contents:")
 for row in table:
-    print row[:]
-print "Associated recarray:"
-print table.read()
+    print(row[:])
+print("Associated recarray:")
+print(table.read())
 
 # Finally, close the file
 fileh.close()
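
table1.py above fills the table twice: record by record through the
``table.row`` accessor and in bulk with ``table.append()`` on a list of
tuples. A reduced sketch of the two forms, assuming the table1.h5 file created
by the script (Row fields left unset fall back to the column defaults):

    import tables

    fileh = tables.open_file("table1.h5", mode="a")
    table = fileh.root.newgroup.table
    # Record by record, through the Row accessor
    particle = table.row
    particle['name'] = 'Particle:     13'
    particle['lati'] = 13
    particle.append()
    # In bulk, as tuples ordered by the pos= arguments of the description
    table.append([("Particle:     14", 14, -4, 14 * 14, 14 ** 2)])
    table.flush()
    fileh.close()
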
diff --git a/examples/table2.py b/examples/table2.py
index 7788b9e..d7939a6 100644
--- a/examples/table2.py
+++ b/examples/table2.py
@@ -1,35 +1,43 @@
 # This shows how to use the cols accessors for table columns
-from tables import *
-class Particle(IsDescription):
-    name        = StringCol(16, pos=1)   # 16-character String
-    lati        = Int32Col(pos=2)        # integer
-    longi       = Int32Col(pos=3)        # integer
-    vector      = Int32Col(shape=(2,), pos=4)    # Integer
-    matrix2D    = Float64Col(shape=(2, 2), pos=5)      # double (double-precision)
+from __future__ import print_function
+import tables
+
+
+class Particle(tables.IsDescription):
+    name = tables.StringCol(16, pos=1)          # 16-character String
+    lati = tables.Int32Col(pos=2)               # integer
+    longi = tables.Int32Col(pos=3)              # integer
+    vector = tables.Int32Col(shape=(2,), pos=4)  # Integer
+    # double (double-precision)
+    matrix2D = tables.Float64Col(shape=(2, 2), pos=5)
 
 # Open a file in "w"rite mode
-fileh = open_file("table2.h5", mode = "w")
+fileh = tables.open_file("table2.h5", mode="w")
 table = fileh.create_table(fileh.root, 'table', Particle, "A table")
 # Append several rows in only one call
-table.append([("Particle:     10", 10, 0, (10*9, 1), [[10**2, 11*3]]*2),
-              ("Particle:     11", 11, -1, (11*10, 2), [[11**2, 10*3]]*2),
-              ("Particle:     12", 12, -2, (12*11, 3), [[12**2, 9*3]]*2),
-              ("Particle:     13", 13, -3, (13*11, 4), [[13**2, 8*3]]*2),
-              ("Particle:     14", 14, -4, (14*11, 5), [[14**2, 7*3]]*2)])
+table.append(
+    [("Particle:     10", 10, 0, (10 * 9, 1), [[10 ** 2, 11 * 3]] * 2),
+     ("Particle:     11", 11, -1,
+      (11 * 10, 2), [[11 ** 2, 10 * 3]] * 2),
+     ("Particle:     12", 12, -2,
+      (12 * 11, 3), [[12 ** 2, 9 * 3]] * 2),
+     ("Particle:     13", 13, -3,
+      (13 * 11, 4), [[13 ** 2, 8 * 3]] * 2),
+     ("Particle:     14", 14, -4, (14 * 11, 5), [[14 ** 2, 7 * 3]] * 2)])
 
-print "str(Cols)-->", table.cols
-print "repr(Cols)-->", repr(table.cols)
-print "Column handlers:"
+print("str(Cols)-->", table.cols)
+print("repr(Cols)-->", repr(table.cols))
+print("Column handlers:")
 for name in table.colnames:
-    print table.cols._f_col(name)
+    print(table.cols._f_col(name))
 
-print "Select table.cols.name[1]-->", table.cols.name[1]
-print "Select table.cols.name[1:2]-->", table.cols.name[1:2]
-print "Select table.cols.name[:]-->", table.cols.name[:]
-print "Select table.cols._f_col('name')[:]-->", table.cols._f_col('name')[:]
-print "Select table.cols.lati[1]-->", table.cols.lati[1]
-print "Select table.cols.lati[1:2]-->", table.cols.lati[1:2]
-print "Select table.cols.vector[:]-->", table.cols.vector[:]
-print "Select table.cols['matrix2D'][:]-->", table.cols.matrix2D[:]
+print("Select table.cols.name[1]-->", table.cols.name[1])
+print("Select table.cols.name[1:2]-->", table.cols.name[1:2])
+print("Select table.cols.name[:]-->", table.cols.name[:])
+print("Select table.cols._f_col('name')[:]-->", table.cols._f_col('name')[:])
+print("Select table.cols.lati[1]-->", table.cols.lati[1])
+print("Select table.cols.lati[1:2]-->", table.cols.lati[1:2])
+print("Select table.cols.vector[:]-->", table.cols.vector[:])
+print("Select table.cols['matrix2D'][:]-->", table.cols.matrix2D[:])
 
 fileh.close()
diff --git a/examples/table3.py b/examples/table3.py
index cdf1358..3f2ad4b 100644
--- a/examples/table3.py
+++ b/examples/table3.py
@@ -1,35 +1,40 @@
 # This is an example of how to use complex columns
-from tables import *
-class Particle(IsDescription):
-    name        = StringCol(16, pos=1)   # 16-character String
-    lati        = ComplexCol(itemsize=16, pos=2)
-    longi       = ComplexCol(itemsize=8, pos=3)
-    vector      = ComplexCol(itemsize=8, shape=(2,), pos=4)
-    matrix2D    = ComplexCol(itemsize=16, shape=(2, 2), pos=5)
+from __future__ import print_function
+import tables
+
+
+class Particle(tables.IsDescription):
+    name = tables.StringCol(16, pos=1)   # 16-character String
+    lati = tables.ComplexCol(itemsize=16, pos=2)
+    longi = tables.ComplexCol(itemsize=8, pos=3)
+    vector = tables.ComplexCol(itemsize=8, shape=(2,), pos=4)
+    matrix2D = tables.ComplexCol(itemsize=16, shape=(2, 2), pos=5)
 
 # Open a file in "w"rite mode
-fileh = open_file("table3.h5", mode = "w")
+fileh = tables.open_file("table3.h5", mode="w")
 table = fileh.create_table(fileh.root, 'table', Particle, "A table")
 # Append several rows in only one call
-table.append([("Particle:     10", 10j, 0, (10*9+1j, 1), [[10**2j, 11*3]]*2),
-              ("Particle:     11", 11j, -1, (11*10+2j, 2), [[11**2j, 10*3]]*2),
-              ("Particle:     12", 12j, -2, (12*11+3j, 3), [[12**2j, 9*3]]*2),
-              ("Particle:     13", 13j, -3, (13*11+4j, 4), [[13**2j, 8*3]]*2),
-              ("Particle:     14", 14j, -4, (14*11+5j, 5), [[14**2j, 7*3]]*2)])
+table.append([
+    ("Particle:     10", 10j, 0, (10 * 9 + 1j, 1), [[10 ** 2j, 11 * 3]] * 2),
+    ("Particle:     11", 11j, -1, (11 * 10 + 2j, 2), [[11 ** 2j, 10 * 3]] * 2),
+    ("Particle:     12", 12j, -2, (12 * 11 + 3j, 3), [[12 ** 2j, 9 * 3]] * 2),
+    ("Particle:     13", 13j, -3, (13 * 11 + 4j, 4), [[13 ** 2j, 8 * 3]] * 2),
+    ("Particle:     14", 14j, -4, (14 * 11 + 5j, 5), [[14 ** 2j, 7 * 3]] * 2)
+])
 
-print "str(Cols)-->", table.cols
-print "repr(Cols)-->", repr(table.cols)
-print "Column handlers:"
+print("str(Cols)-->", table.cols)
+print("repr(Cols)-->", repr(table.cols))
+print("Column handlers:")
 for name in table.colnames:
-    print table.cols[name]
+    print(table.cols._f_col(name))
 
-print "Select table.cols.name[1]-->", table.cols.name[1]
-print "Select table.cols.name[1:2]-->", table.cols.name[1:2]
-print "Select table.cols.name[:]-->", table.cols.name[:]
-print "Select table.cols['name'][:]-->", table.cols['name'][:]
-print "Select table.cols.lati[1]-->", table.cols.lati[1]
-print "Select table.cols.lati[1:2]-->", table.cols.lati[1:2]
-print "Select table.cols.vector[:]-->", table.cols.vector[:]
-print "Select table.cols['matrix2D'][:]-->", table.cols.matrix2D[:]
+print("Select table.cols.name[1]-->", table.cols.name[1])
+print("Select table.cols.name[1:2]-->", table.cols.name[1:2])
+print("Select table.cols.name[:]-->", table.cols.name[:])
+print("Select table.cols._f_col('name')[:]-->", table.cols._f_col('name')[:])
+print("Select table.cols.lati[1]-->", table.cols.lati[1])
+print("Select table.cols.lati[1:2]-->", table.cols.lati[1:2])
+print("Select table.cols.vector[:]-->", table.cols.vector[:])
+print("Select table.cols['matrix2D'][:]-->", table.cols.matrix2D[:])
 
 fileh.close()
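
Note the change above from ``table.cols[name]`` to ``table.cols._f_col(name)``:
natural naming (``table.cols.lati``) only works when the column name is a
literal identifier, while ``_f_col()`` accepts the name as a string. A sketch,
assuming the table3.h5 file created by the script:

    import tables

    fileh = tables.open_file("table3.h5", mode="r")
    table = fileh.root.table
    print(table.cols.lati[1])               # attribute access, literal name
    for name in table.colnames:
        print(table.cols._f_col(name)[:2])  # string access, computed name
    fileh.close()
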
diff --git a/examples/tutorial1-1.py b/examples/tutorial1-1.py
index 2c8b626..934e8be 100644
--- a/examples/tutorial1-1.py
+++ b/examples/tutorial1-1.py
@@ -5,61 +5,62 @@ with any HDF5 generic utility.
 
 """
 
-from numpy import *
-from tables import *
+from __future__ import print_function
+import numpy as np
+import tables
 
 
         #'-**-**-**-**-**-**- user record definition  -**-**-**-**-**-**-**-'
 
 # Define a user record to characterize some kind of particles
-class Particle(IsDescription):
-    name      = StringCol(16)   # 16-character String
-    idnumber  = Int64Col()      # Signed 64-bit integer
-    ADCcount  = UInt16Col()     # Unsigned short integer
-    TDCcount  = UInt8Col()      # unsigned byte
-    grid_i    = Int32Col()      # integer
-    grid_j    = Int32Col()      # integer
-    pressure  = Float32Col()    # float  (single-precision)
-    energy    = Float64Col()    # double (double-precision)
-
-print
-print   '-**-**-**-**-**-**- file creation  -**-**-**-**-**-**-**-'
+class Particle(tables.IsDescription):
+    name = tables.StringCol(16)     # 16-character String
+    idnumber = tables.Int64Col()    # Signed 64-bit integer
+    ADCcount = tables.UInt16Col()   # Unsigned short integer
+    TDCcount = tables.UInt8Col()    # unsigned byte
+    grid_i = tables.Int32Col()      # integer
+    grid_j = tables.Int32Col()      # integer
+    pressure = tables.Float32Col()  # float  (single-precision)
+    energy = tables.Float64Col()    # double (double-precision)
+
+print()
+print('-**-**-**-**-**-**- file creation  -**-**-**-**-**-**-**-')
 
 # The name of our HDF5 filename
 filename = "tutorial1.h5"
 
-print "Creating file:", filename
+print("Creating file:", filename)
 
 # Open a file in "w"rite mode
-h5file = open_file(filename, mode = "w", title = "Test file")
+h5file = tables.open_file(filename, mode="w", title="Test file")
 
-print
-print   '-**-**-**-**-**- group and table creation  -**-**-**-**-**-**-**-'
+print()
+print('-**-**-**-**-**- group and table creation  -**-**-**-**-**-**-**-')
 
 # Create a new group under "/" (root)
 group = h5file.create_group("/", 'detector', 'Detector information')
-print "Group '/detector' created"
+print("Group '/detector' created")
 
 # Create one table on it
 table = h5file.create_table(group, 'readout', Particle, "Readout example")
-print "Table '/detector/readout' created"
+print("Table '/detector/readout' created")
 
 # Print the file
-print h5file
-print
-print repr(h5file)
+print(h5file)
+print()
+print(repr(h5file))
 
 # Get a shortcut to the record object in table
 particle = table.row
 
 # Fill the table with 10 particles
-for i in xrange(10):
-    particle['name']  = 'Particle: %6d' % (i)
+for i in range(10):
+    particle['name'] = 'Particle: %6d' % (i)
     particle['TDCcount'] = i % 256
     particle['ADCcount'] = (i * 256) % (1 << 16)
     particle['grid_i'] = i
     particle['grid_j'] = 10 - i
-    particle['pressure'] = float(i*i)
+    particle['pressure'] = float(i * i)
     particle['energy'] = float(particle['pressure'] ** 4)
     particle['idnumber'] = i * (2 ** 34)
     particle.append()
@@ -67,42 +68,44 @@ for i in xrange(10):
 # Flush the buffers for table
 table.flush()
 
-print
-print   '-**-**-**-**-**-**- table data reading & selection  -**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- table data reading & selection  -**-**-**-**-**-')
 
 # Read actual data from table. We are interested in collecting pressure values
 # on entries where TDCcount is greater than 3 and pressure is in [20, 50)
-pressure = [ x['pressure'] for x in table.iterrows()
-             if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50 ]
-print "Last record read:"
-print repr(x)
-print "Field pressure elements satisfying the cuts:"
-print repr(pressure)
+# Use a plain loop (not a comprehension) so that x is still defined
+# afterwards; comprehension variables do not leak in Python 3
+pressure = []
+for x in table.iterrows():
+    if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50:
+        pressure.append(x['pressure'])
+print("Last record read:")
+print(repr(x))
+print("Field pressure elements satisfying the cuts:")
+print(repr(pressure))
 
 # Read also the names with the same cuts
-names = [ x['name'] for x in table.where(
-    """(TDCcount > 3) & (20 <= pressure) & (pressure < 50)""" ) ]
-print "Field names elements satisfying the cuts:"
-print repr(names)
+names = [
+    x['name'] for x in table.where(
+        """(TDCcount > 3) & (20 <= pressure) & (pressure < 50)""")
+]
+print("Field names elements satisfying the cuts:")
+print(repr(names))
 
-print
-print   '-**-**-**-**-**-**- array object creation  -**-**-**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- array object creation  -**-**-**-**-**-**-**-')
 
-print "Creating a new group called '/columns' to hold new arrays"
+print("Creating a new group called '/columns' to hold new arrays")
 gcolumns = h5file.create_group(h5file.root, "columns", "Pressure and Name")
 
-print "Creating an array called 'pressure' under '/columns' group"
-h5file.create_array(gcolumns, 'pressure', array(pressure),
-                   "Pressure column selection")
-print repr(h5file.root.columns.pressure)
+print("Creating an array called 'pressure' under '/columns' group")
+h5file.create_array(gcolumns, 'pressure', np.array(pressure),
+                    "Pressure column selection")
+print(repr(h5file.root.columns.pressure))
 
-print "Creating another array called 'name' under '/columns' group"
+print("Creating another array called 'name' under '/columns' group")
 h5file.create_array(gcolumns, 'name', names, "Name column selection")
-print repr(h5file.root.columns.name)
+print(repr(h5file.root.columns.name))
 
-print "HDF5 file:"
-print h5file
+print("HDF5 file:")
+print(h5file)
 
 # Close the file
 h5file.close()
-print "File '"+filename+"' created"
+print("File '" + filename + "' created")
diff --git a/examples/tutorial1-2.py b/examples/tutorial1-2.py
index e506009..f16f321 100644
--- a/examples/tutorial1-2.py
+++ b/examples/tutorial1-2.py
@@ -5,62 +5,61 @@ that create the tutorial1.h5 file needed here.
 
 """
 
+from __future__ import print_function
+import tables
 
-from tables import *
-
-print
-print   '-**-**-**-**- open the previous tutorial file -**-**-**-**-**-'
+print()
+print('-**-**-**-**- open the previous tutorial file -**-**-**-**-**-')
 
 # Reopen the file in append mode
-h5file = open_file("tutorial1.h5", "a")
+h5file = tables.open_file("tutorial1.h5", "a")
 
 # Print the object tree created from this filename
-print "Object tree from filename:", h5file.filename
-print h5file
+print("Object tree from filename:", h5file.filename)
+print(h5file)
 
-print
-print   '-**-**-**-**-**-**- traverse tree methods -**-**-**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- traverse tree methods -**-**-**-**-**-**-**-')
 
 # List all the nodes (Group and Leaf objects) on tree
-print h5file
+print(h5file)
 
 # List all the nodes (using File iterator) on tree
-print "Nodes in file:"
+print("Nodes in file:")
 for node in h5file:
-    print node
-print
+    print(node)
+print()
 
 # Now, only list all the groups on tree
-print "Groups in file:"
+print("Groups in file:")
 for group in h5file.walk_groups():
-    print group
-print
+    print(group)
+print()
 
 # List only the arrays hanging from /
-print "Arrays in file (I):"
+print("Arrays in file (I):")
 for group in h5file.walk_groups("/"):
     for array in h5file.list_nodes(group, classname='Array'):
-        print array
+        print(array)
 
 # This gives the same result
-print "Arrays in file (II):"
+print("Arrays in file (II):")
 for array in h5file.walk_nodes("/", "Array"):
-    print array
-print
+    print(array)
+print()
 # And finally, list only leaves on /detector group (there should be one!)
-print "Leafs in group '/detector' (I):"
+print("Leaves in group '/detector' (I):")
 for leaf in h5file.list_nodes("/detector", 'Leaf'):
-    print leaf
+    print(leaf)
 
 # Another way, using iterators and natural naming
-print "Leafs in group '/detector' (II):"
+print("Leaves in group '/detector' (II):")
 for leaf in h5file.root.detector._f_walknodes('Leaf'):
-    print leaf
-
+    print(leaf)
 
 
-print
-print   '-**-**-**-**-**-**- setting/getting object attributes -**-**--**-**-'
+print()
+print('-**-**-**-**-**-**- setting/getting object attributes -**-**--**-**-')
 
 # Get a pointer to '/detector/readout' node
 table = h5file.root.detector.readout
@@ -76,112 +75,113 @@ detector = h5file.root.detector
 detector._v_attrs.stuff = [5, (2.3, 4.5), "Integer and tuple"]
 
 # Now, get the attributes
-print "gath_date attribute of /detector/readout:", table.attrs.gath_date
-print "temperature attribute of /detector/readout:", table.attrs.temperature
-print "temp_scale attribute of /detector/readout:", table.attrs.temp_scale
-print "stuff attribute in /detector:", detector._v_attrs.stuff
-print
+print("gath_date attribute of /detector/readout:", table.attrs.gath_date)
+print("temperature attribute of /detector/readout:", table.attrs.temperature)
+print("temp_scale attribute of /detector/readout:", table.attrs.temp_scale)
+print("stuff attribute in /detector:", detector._v_attrs.stuff)
+print()
 
 # Delete permanently the attribute gath_date of /detector/readout
-print "Deleting /detector/readout gath_date attribute"
+print("Deleting /detector/readout gath_date attribute")
 del table.attrs.gath_date
 
 # Print a representation of all attributes in /detector/readout
-print "AttributeSet instance in /detector/table:", repr(table.attrs)
+print("AttributeSet instance in /detector/readout:", repr(table.attrs))
 
 # Get the (user) attributes of /detector/readout
-print "List of user attributes in /detector/table:", table.attrs._f_list()
+print("List of user attributes in /detector/readout:", table.attrs._f_list())
 
 # Get the (sys) attributes of /detector/readout
-print "List of user attributes in /detector/table:", table.attrs._f_list("sys")
-print
+print("List of sys attributes in /detector/readout:",
+      table.attrs._f_list("sys"))
+print()
 # Rename an attribute
-print "renaming 'temp_scale' attribute to 'tempScale'"
+print("renaming 'temp_scale' attribute to 'tempScale'")
 table.attrs._f_rename("temp_scale", "tempScale")
-print table.attrs._f_list()
+print(table.attrs._f_list())
 
 # Try to rename a system attribute:
 try:
     table.attrs._f_rename("VERSION", "version")
 except:
-    print "You can not rename a VERSION attribute: it is read only!."
+    print("You can not rename a VERSION attribute: it is read only!.")
 
-print
-print   '-**-**-**-**-**-**- getting object metadata -**-**-**-**-**-**-'
+print()
+print('-**-**-**-**-**-**- getting object metadata -**-**-**-**-**-**-')
 
 # Get a pointer to '/detector/readout' data
 table = h5file.root.detector.readout
 
 # Get metadata from table
-print "Object:", table
-print "Table name:", table.name
-print "Table title:", table.title
-print "Number of rows in table:", table.nrows
-print "Table variable names with their type and shape:"
+print("Object:", table)
+print("Table name:", table.name)
+print("Table title:", table.title)
+print("Number of rows in table:", table.nrows)
+print("Table variable names with their type and shape:")
 for name in table.colnames:
-    print name, ':= %s, %s' % (table.coldtypes[name],
-                               table.coldtypes[name].shape)
-print
+    print(name, ':= %s, %s' % (table.coldtypes[name],
+                               table.coldtypes[name].shape))
+print()
 
 # Get the object in "/columns pressure"
 pressureObject = h5file.get_node("/columns", "pressure")
 
 # Get some metadata on this object
-print "Info on the object:", repr(pressureObject)
-print "  shape: ==>", pressureObject.shape
-print "  title: ==>", pressureObject.title
-print "  atom: ==>", pressureObject.atom
-print
-print   '-**-**-**-**-**- reading actual data from arrays -**-**-**-**-**-**-'
+print("Info on the object:", repr(pressureObject))
+print("  shape: ==>", pressureObject.shape)
+print("  title: ==>", pressureObject.title)
+print("  atom: ==>", pressureObject.atom)
+print()
+print('-**-**-**-**-**- reading actual data from arrays -**-**-**-**-**-**-')
 
 # Read the 'pressure' actual data
 pressureArray = pressureObject.read()
-print repr(pressureArray)
+print(repr(pressureArray))
 # Check the kind of object we have created (it should be a numpy array)
-print "pressureArray is an object of type:", type(pressureArray)
+print("pressureArray is an object of type:", type(pressureArray))
 
 # Read the 'name' Array actual data
 nameArray = h5file.root.columns.name.read()
 # Check the kind of object we have created (it should be a numpy array)
-print "nameArray is an object of type:", type(nameArray)
+print("nameArray is an object of type:", type(nameArray))
 
-print
+print()
 
 # Print the data for both arrays
-print "Data on arrays nameArray and pressureArray:"
+print("Data on arrays nameArray and pressureArray:")
 for i in range(pressureObject.shape[0]):
-    print nameArray[i], "-->", pressureArray[i]
+    print(nameArray[i], "-->", pressureArray[i])
 
-print
-print   '-**-**-**-**-**- reading actual data from tables -**-**-**-**-**-**-'
+print()
+print('-**-**-**-**-**- reading actual data from tables -**-**-**-**-**-**-')
 
 # Create a shortcut to table object
 table = h5file.root.detector.readout
 
 # Read the 'energy' column of '/detector/readout'
-print "Column 'energy' of '/detector/readout':\n", table.cols.energy
-print
+print("Column 'energy' of '/detector/readout':\n", table.cols.energy)
+print()
 # Read the 3rd row of '/detector/readout'
-print "Third row of '/detector/readout':\n", table[2]
-print
+print("Third row of '/detector/readout':\n", table[2])
+print()
 # Read the rows from 3 to 9 of '/detector/readout'
-print "Rows from 3 to 9 of '/detector/readout':\n", table[2:9]
+print("Rows from 3 to 9 of '/detector/readout':\n", table[2:9])
 
-print
-print   '-**-**-**-**- append records to existing table -**-**-**-**-**-'
+print()
+print('-**-**-**-**- append records to existing table -**-**-**-**-**-')
 
 # Get the object row from table
 table = h5file.root.detector.readout
 particle = table.row
 
 # Append 5 new particles to table
-for i in xrange(10, 15):
-    particle['name']  = 'Particle: %6d' % (i)
+for i in range(10, 15):
+    particle['name'] = 'Particle: %6d' % (i)
     particle['TDCcount'] = i % 256
     particle['ADCcount'] = (i * 256) % (1 << 16)
     particle['grid_i'] = i
     particle['grid_j'] = 10 - i
-    particle['pressure'] = float(i*i)
+    particle['pressure'] = float(i * i)
     particle['energy'] = float(particle['pressure'] ** 4)
     particle['idnumber'] = i * (2 ** 34)  # This exceeds long integer range
     particle.append()
@@ -191,89 +191,89 @@ table.flush()
 
 # Print the data using the table iterator:
 for r in table:
-    print "%-16s | %11.1f | %11.4g | %6d | %6d | %8d |" % \
+    print("%-16s | %11.1f | %11.4g | %6d | %6d | %8d |" %
           (r['name'], r['pressure'], r['energy'], r['grid_i'], r['grid_j'],
-           r['TDCcount'])
+           r['TDCcount']))
 
-print
-print "Total number of entries in resulting table:", table.nrows
+print()
+print("Total number of entries in resulting table:", table.nrows)
 
-print
-print   '-**-**-**-**- modify records of a table -**-**-**-**-**-'
+print()
+print('-**-**-**-**- modify records of a table -**-**-**-**-**-')
 
 # Single cells
-print "First row of readout table."
-print "Before modif-->", table[0]
+print("First row of readout table.")
+print("Before modif-->", table[0])
 table.cols.TDCcount[0] = 1
-print "After modifying first row of TDCcount-->", table[0]
+print("After modifying first row of TDCcount-->", table[0])
 table.cols.energy[0] = 2
-print "After modifying first row of energy-->", table[0]
+print("After modifying first row of energy-->", table[0])
 
 # Column slices
 table.cols.TDCcount[2:5] = [2, 3, 4]
-print "After modifying slice [2:5] of ADCcount-->", table[0:5]
+print("After modifying slice [2:5] of ADCcount-->", table[0:5])
 table.cols.energy[1:9:3] = [2, 3, 4]
-print "After modifying slice [1:9:3] of energy-->", table[0:9]
+print("After modifying slice [1:9:3] of energy-->", table[0:9])
 
 # Modifying complete Rows
 table.modify_rows(start=1, step=3,
-                 rows=[(1, 2, 3.0, 4, 5, 6L, 'Particle:   None', 8.0),
-                       (2, 4, 6.0, 8, 10, 12L, 'Particle: None*2', 16.0)])
-print "After modifying the complete third row-->", table[0:5]
+                  rows=[(1, 2, 3.0, 4, 5, 6, 'Particle:   None', 8.0),
+                        (2, 4, 6.0, 8, 10, 12, 'Particle: None*2', 16.0)])
+print("After modifying the complete third row-->", table[0:5])
 
 # Modifying columns inside table iterators
 for row in table.where('TDCcount <= 2'):
-    row['energy'] = row['TDCcount']*2
+    row['energy'] = row['TDCcount'] * 2
     row.update()
-print "After modifying energy column (where TDCcount <=2)-->", table[0:4]
+print("After modifying energy column (where TDCcount <=2)-->", table[0:4])
 
-print
-print   '-**-**-**-**- modify elements of an array -**-**-**-**-**-'
+print()
+print('-**-**-**-**- modify elements of an array -**-**-**-**-**-')
 
-print "pressure array"
+print("pressure array")
 pressureObject = h5file.root.columns.pressure
-print "Before modif-->", pressureObject[:]
+print("Before modif-->", pressureObject[:])
 pressureObject[0] = 2
-print "First modif-->", pressureObject[:]
+print("First modif-->", pressureObject[:])
 pressureObject[1:3] = [2.1, 3.5]
-print "Second modif-->", pressureObject[:]
+print("Second modif-->", pressureObject[:])
 pressureObject[::2] = [1, 2]
-print "Third modif-->", pressureObject[:]
+print("Third modif-->", pressureObject[:])
 
-print "name array"
+print("name array")
 nameObject = h5file.root.columns.name
-print "Before modif-->", nameObject[:]
+print("Before modif-->", nameObject[:])
 nameObject[0] = 'Particle:   None'
-print "First modif-->", nameObject[:]
+print("First modif-->", nameObject[:])
 nameObject[1:3] = ['Particle:      0', 'Particle:      1']
-print "Second modif-->", nameObject[:]
+print("Second modif-->", nameObject[:])
 nameObject[::2] = ['Particle:     -3', 'Particle:     -5']
-print "Third modif-->", nameObject[:]
+print("Third modif-->", nameObject[:])
 
-print
-print   '-**-**-**-**- remove records from a table -**-**-**-**-**-'
+print()
+print('-**-**-**-**- remove records from a table -**-**-**-**-**-')
 
 # Delete some rows on the Table (yes, rows can be removed!)
 table.remove_rows(5, 10)
 
 # Print some table columns, for comparison with array data
-print "Some columns in final table:"
-print
+print("Some columns in final table:")
+print()
 # Print the headers
-print "%-16s | %11s | %11s | %6s | %6s | %8s |" % \
-       ('name', 'pressure', 'energy', 'grid_i', 'grid_j',
-        'TDCcount')
+print("%-16s | %11s | %11s | %6s | %6s | %8s |" %
+      ('name', 'pressure', 'energy', 'grid_i', 'grid_j',
+       'TDCcount'))
 
-print "%-16s + %11s + %11s + %6s + %6s + %8s +" % \
-      ('-' * 16, '-' * 11, '-' * 11, '-' * 6, '-' * 6, '-' * 8)
+print("%-16s + %11s + %11s + %6s + %6s + %8s +" %
+      ('-' * 16, '-' * 11, '-' * 11, '-' * 6, '-' * 6, '-' * 8))
 # Print the data using the table iterator:
 for r in table.iterrows():
-    print "%-16s | %11.1f | %11.4g | %6d | %6d | %8d |" % \
+    print("%-16s | %11.1f | %11.4g | %6d | %6d | %8d |" %
           (r['name'], r['pressure'], r['energy'], r['grid_i'], r['grid_j'],
-           r['TDCcount'])
+           r['TDCcount']))
 
-print
-print "Total number of entries in final table:", table.nrows
+print()
+print("Total number of entries in final table:", table.nrows)
 
 # Close the file
 h5file.close()
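
The update-in-place idiom used above (assigning to a Row inside a ``where()``
iteration and calling ``row.update()``) is easy to miss; isolated, it looks
like this, assuming the same tutorial1.h5 file:

    import tables

    h5file = tables.open_file("tutorial1.h5", mode="a")
    table = h5file.root.detector.readout
    for row in table.where('TDCcount <= 2'):
        row['energy'] = row['TDCcount'] * 2
        row.update()   # write the modified Row back into the table
    table.flush()      # make sure the change reaches the file
    h5file.close()
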
diff --git a/examples/tutorial2.py b/examples/tutorial2.py
index 33ce169..f792808 100644
--- a/examples/tutorial2.py
+++ b/examples/tutorial2.py
@@ -1,29 +1,34 @@
-"""This program shows the different protections that PyTables offer to
-the user in order to insure a correct data injection in tables.
+"""This program shows the different protections that PyTables offer to the user
+in order to insure a correct data injection in tables.
 
 Example to be used in the second tutorial in the User's Guide.
 
 """
 
-from tables import *
-from numpy import *
+from __future__ import print_function
+import tables
+import numpy as np
 
 # Describe a particle record
-class Particle(IsDescription):
-    name        = StringCol(itemsize=16)  # 16-character string
-    lati        = Int32Col()              # integer
-    longi       = Int32Col()              # integer
-    pressure    = Float32Col(shape=(2, 3)) # array of floats (single-precision)
-    temperature = Float64Col(shape=(2, 3)) # array of doubles (double-precision)
+
+
+class Particle(tables.IsDescription):
+    name = tables.StringCol(itemsize=16)    # 16-character string
+    lati = tables.Int32Col()                # integer
+    longi = tables.Int32Col()               # integer
+    # array of floats (single-precision)
+    pressure = tables.Float32Col(shape=(2, 3))
+    # array of doubles (double-precision)
+    temperature = tables.Float64Col(shape=(2, 3))
 
 # Native NumPy dtype instances are also accepted
-Event = dtype([
+Event = np.dtype([
     ("name", "S16"),
-    ("TDCcount", uint8),
-    ("ADCcount", uint16),
-    ("xcoord", float32),
-    ("ycoord", float32)
-    ])
+    ("TDCcount", np.uint8),
+    ("ADCcount", np.uint16),
+    ("xcoord", np.float32),
+    ("ycoord", np.float32)
+])
 
 # And dictionaries too (this defines the same structure as above)
 # Event = {
@@ -35,7 +40,7 @@ Event = dtype([
 #     }
 
 # Open a file in "w"rite mode
-fileh = open_file("tutorial2.h5", mode = "w")
+fileh = tables.open_file("tutorial2.h5", mode="w")
 # Get the HDF5 root group
 root = fileh.root
 # Create the groups:
@@ -47,20 +52,21 @@ gparticles = root.Particles
 for tablename in ("TParticle1", "TParticle2", "TParticle3"):
     # Create a table
     table = fileh.create_table("/Particles", tablename, Particle,
-                              "Particles: "+tablename)
+                               "Particles: " + tablename)
     # Get the record object associated with the table:
     particle = table.row
     # Fill the table with 257 particles
-    for i in xrange(257):
+    for i in range(257):
         # First, assign the values to the Particle record
         particle['name'] = 'Particle: %6d' % (i)
         particle['lati'] = i
         particle['longi'] = 10 - i
-        ########### Detectable errors start here. Play with them!
-        particle['pressure'] = array(i*arange(2*3)).reshape((2, 4))  # Incorrect
-        #particle['pressure'] = array(i*arange(2*3)).reshape((2,3))  # Correct
-        ########### End of errors
-        particle['temperature'] = (i**2)     # Broadcasting
+        # Detectable errors start here. Play with them!
+        particle['pressure'] = np.array(
+            i * np.arange(2 * 3)).reshape((2, 4))  # Incorrect
+        # particle['pressure'] = np.array(
+        #     i * np.arange(2 * 3)).reshape((2, 3))  # Correct
+        # End of errors
+        particle['temperature'] = (i ** 2)     # Broadcasting
         # This injects the Record values
         particle.append()
     # Flush the table buffers
@@ -70,21 +76,21 @@ for tablename in ("TParticle1", "TParticle2", "TParticle3"):
 for tablename in ("TEvent1", "TEvent2", "TEvent3"):
     # Create a table in Events group
     table = fileh.create_table(root.Events, tablename, Event,
-                              "Events: "+tablename)
+                               "Events: " + tablename)
     # Get the record object associated with the table:
     event = table.row
     # Fill the table with 257 events
-    for i in xrange(257):
+    for i in range(257):
         # First, assign the values to the Event record
-        event['name']  = 'Event: %6d' % (i)
-        event['TDCcount'] = i % (1<<8)   # Correct range
-        ########### Detectable errors start here. Play with them!
-        event['xcoor'] = float(i**2)     # Wrong spelling
-        #event['xcoord'] = float(i**2)   # Correct spelling
+        event['name'] = 'Event: %6d' % (i)
+        event['TDCcount'] = i % (1 << 8)   # Correct range
+        # Detectable errors start here. Play with them!
+        event['xcoor'] = float(i ** 2)     # Wrong spelling
+        # event['xcoord'] = float(i**2)   # Correct spelling
         event['ADCcount'] = "sss"          # Wrong type
-        #event['ADCcount'] = i * 2        # Correct type
-        ########### End of errors
-        event['ycoord'] = float(i)**4
+        # event['ADCcount'] = i * 2        # Correct type
+        # End of errors
+        event['ycoord'] = float(i) ** 4
         # This injects the Record values
         event.append()
     # Flush the buffers
@@ -92,10 +98,10 @@ for tablename in ("TEvent1", "TEvent2", "TEvent3"):
 
 # Read the records from table "/Events/TEvent3" and select some
 table = root.Events.TEvent3
-e = [ p['TDCcount'] for p in table
-      if p['ADCcount'] < 20 and 4 <= p['TDCcount'] < 15 ]
-print "Last record ==>", p
-print "Selected values ==>", e
-print "Total selected records ==> ", len(e)
+# Use a plain loop (not a comprehension) so that p is still defined
+# afterwards; comprehension variables do not leak in Python 3
+e = []
+for p in table:
+    if p['ADCcount'] < 20 and 4 <= p['TDCcount'] < 15:
+        e.append(p['TDCcount'])
+print("Last record ==>", p)
+print("Selected values ==>", e)
+print("Total selected records ==> ", len(e))
 # Finally, close the file (this also will flush all the remaining buffers!)
 fileh.close()
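
tutorial2.py above demonstrates that a table description can be given as an
``IsDescription`` subclass, as a NumPy dtype, or as a plain dictionary. A
sketch of the three equivalent spellings (file name hypothetical):

    import numpy as np
    import tables


    class EventDesc(tables.IsDescription):
        name = tables.StringCol(16)
        xcoord = tables.Float32Col()

    event_dtype = np.dtype([("name", "S16"), ("xcoord", np.float32)])
    event_dict = {"name": tables.StringCol(16),
                  "xcoord": tables.Float32Col()}

    fileh = tables.open_file("events.h5", mode="w")  # hypothetical file
    for i, desc in enumerate((EventDesc, event_dtype, event_dict)):
        fileh.create_table("/", "t%d" % i, desc, "same structure, three ways")
    fileh.close()
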
diff --git a/examples/tutorial3-1.py b/examples/tutorial3-1.py
index bd67de8..a739df3 100644
--- a/examples/tutorial3-1.py
+++ b/examples/tutorial3-1.py
@@ -1,4 +1,4 @@
-"""Small example of do/undo capability with PyTables"""
+"""Small example of do/undo capability with PyTables."""
 
 import tables
 
diff --git a/examples/tutorial3-2.py b/examples/tutorial3-2.py
index 0c1051a..7c91d2e 100644
--- a/examples/tutorial3-2.py
+++ b/examples/tutorial3-2.py
@@ -1,4 +1,4 @@
-"""A more complex example of do/undo capability with PyTables
+"""A more complex example of do/undo capability with PyTables.
 
 Here, names have been assigned to the marks, and jumps are done between
 marks.
diff --git a/examples/undo-redo.py b/examples/undo-redo.py
index 67ae935..9221482 100644
--- a/examples/undo-redo.py
+++ b/examples/undo-redo.py
@@ -2,9 +2,10 @@
 
 import tables
 
+
 def setUp(filename):
     # Create an HDF5 file
-    fileh = tables.open_file(filename, mode = "w", title="Undo/Redo demo")
+    fileh = tables.open_file(filename, mode="w", title="Undo/Redo demo")
     # Create some nodes in there
     fileh.create_group("/", "agroup", "Group 1")
     fileh.create_group("/agroup", "agroup2", "Group 2")
@@ -13,14 +14,16 @@ def setUp(filename):
     fileh.enable_undo()
     return fileh
 
+
 def tearDown(fileh):
     # Disable undo/redo.
     fileh.disable_undo()
     # Close the file
     fileh.close()
 
+
 def demo_6times3marks():
-    """Checking with six ops and three marks"""
+    """Checking with six ops and three marks."""
 
     # Initialize the database with some nodes
     fileh = setUp("undo-redo-6times3marks.h5")
@@ -88,8 +91,9 @@ def demo_6times3marks():
     # Tear down the file
     tearDown(fileh)
 
+
 def demo_manyops():
-    """Checking many operations together """
+    """Checking many operations together."""
 
     # Initialize the database with some nodes
     fileh = setUp("undo-redo-manyops.h5")
diff --git a/examples/vlarray1.py b/examples/vlarray1.py
index 11fec3c..e71c6ab 100644
--- a/examples/vlarray1.py
+++ b/examples/vlarray1.py
@@ -1,37 +1,38 @@
+from __future__ import print_function
 import tables
-from numpy import *
+import numpy as np
 
 # Create a VLArray:
 fileh = tables.open_file('vlarray1.h5', mode='w')
 vlarray = fileh.create_vlarray(fileh.root, 'vlarray1',
-                              tables.Int32Atom(shape=()),
-                              "ragged array of ints",
-                              filters=tables.Filters(1))
+                               tables.Int32Atom(shape=()),
+                               "ragged array of ints",
+                               filters=tables.Filters(1))
 # Append some (variable length) rows:
-vlarray.append(array([5, 6]))
-vlarray.append(array([5, 6, 7]))
+vlarray.append(np.array([5, 6]))
+vlarray.append(np.array([5, 6, 7]))
 vlarray.append([5, 6, 9, 8])
 
 # Now, read it through an iterator:
-print '-->', vlarray.title
+print('-->', vlarray.title)
 for x in vlarray:
-    print '%s[%d]--> %s' % (vlarray.name, vlarray.nrow, x)
+    print('%s[%d]--> %s' % (vlarray.name, vlarray.nrow, x))
 
 # Now, do the same with native Python strings.
 vlarray2 = fileh.create_vlarray(fileh.root, 'vlarray2',
-                              tables.StringAtom(itemsize=2),
-                              "ragged array of strings",
-                              filters=tables.Filters(1))
+                                tables.StringAtom(itemsize=2),
+                                "ragged array of strings",
+                                filters=tables.Filters(1))
 vlarray2.flavor = 'python'
 # Append some (variable length) rows:
-print '-->', vlarray2.title
+print('-->', vlarray2.title)
 vlarray2.append(['5', '66'])
 vlarray2.append(['5', '6', '77'])
 vlarray2.append(['5', '6', '9', '88'])
 
 # Now, read it through an iterator:
 for x in vlarray2:
-    print '%s[%d]--> %s' % (vlarray2.name, vlarray2.nrow, x)
+    print('%s[%d]--> %s' % (vlarray2.name, vlarray2.nrow, x))
 
 # Close the file.
 fileh.close()
diff --git a/examples/vlarray2.py b/examples/vlarray2.py
index 66e0d8c..17a0bf2 100644
--- a/examples/vlarray2.py
+++ b/examples/vlarray2.py
@@ -1,98 +1,100 @@
 #!/usr/bin/env python
 
-""" Small example that shows how to work with variable length arrays of
-different types, UNICODE strings and general Python objects included. """
+"""Small example that shows how to work with variable length arrays of
+different types, UNICODE strings and general Python objects included."""
 
-from numpy import *
-from tables import *
-import cPickle
+from __future__ import print_function
+import numpy as np
+import tables
+import pickle
 
 # Open a new empty HDF5 file
-fileh = open_file("vlarray2.h5", mode = "w")
+fileh = tables.open_file("vlarray2.h5", mode="w")
 # Get the root group
 root = fileh.root
 
 # A test with VL length arrays:
-vlarray = fileh.create_vlarray(root, 'vlarray1', Int32Atom(),
-                              "ragged array of ints")
-vlarray.append(array([5, 6]))
-vlarray.append(array([5, 6, 7]))
+vlarray = fileh.create_vlarray(root, 'vlarray1', tables.Int32Atom(),
+                               "ragged array of ints")
+vlarray.append(np.array([5, 6]))
+vlarray.append(np.array([5, 6, 7]))
 vlarray.append([5, 6, 9, 8])
 
 # Test with lists of bidimensional vectors
-vlarray = fileh.create_vlarray(root, 'vlarray2', Int64Atom(shape=(2,)),
-                              "Ragged array of vectors")
-a = array([[1, 2], [1, 2]], dtype=int64)
+vlarray = fileh.create_vlarray(root, 'vlarray2', tables.Int64Atom(shape=(2,)),
+                               "Ragged array of vectors")
+a = np.array([[1, 2], [1, 2]], dtype=np.int64)
 vlarray.append(a)
-vlarray.append(array([[1, 2], [3, 4]], dtype=int64))
-vlarray.append(zeros(dtype=int64, shape=(0, 2)))
-vlarray.append(array([[5, 6]], dtype=int64))
+vlarray.append(np.array([[1, 2], [3, 4]], dtype=np.int64))
+vlarray.append(np.zeros(dtype=np.int64, shape=(0, 2)))
+vlarray.append(np.array([[5, 6]], dtype=np.int64))
 # This raises an error (wrong shape)
-#vlarray.append(array([[5], [6]], dtype=int64))
+# vlarray.append(array([[5], [6]], dtype=int64))
 # This raises an error (wrong type)
-#vlarray.append(array([[5, 6]], dtype=uint64))
+# vlarray.append(array([[5, 6]], dtype=uint64))
 
 # Test with strings
-vlarray = fileh.create_vlarray(root, 'vlarray3', StringAtom(itemsize=3),
+vlarray = fileh.create_vlarray(root, 'vlarray3', tables.StringAtom(itemsize=3),
                                "Ragged array of strings")
 vlarray.append(["123", "456", "3"])
 vlarray.append(["456", "3"])
 # This raises an error because the strings are longer than declared
-#vlarray.append(["1234", "456", "3"])
+# vlarray.append(["1234", "456", "3"])
 
 # Python flavor
-vlarray = fileh.create_vlarray(root, 'vlarray3b', StringAtom(itemsize=3),
-                              "Ragged array of strings")
+vlarray = fileh.create_vlarray(root, 'vlarray3b',
+                               tables.StringAtom(itemsize=3),
+                               "Ragged array of strings")
 vlarray.flavor = "python"
 vlarray.append(["123", "456", "3"])
 vlarray.append(["456", "3"])
 
 # Binary strings
-vlarray = fileh.create_vlarray(root, 'vlarray4', UInt8Atom(),
-                              "pickled bytes")
-data = cPickle.dumps((["123", "456"], "3"))
-vlarray.append(ndarray(buffer=data, dtype=uint8, shape=len(data)))
+vlarray = fileh.create_vlarray(root, 'vlarray4', tables.UInt8Atom(),
+                               "pickled bytes")
+data = pickle.dumps((["123", "456"], "3"))
+vlarray.append(np.ndarray(buffer=data, dtype=np.uint8, shape=len(data)))
 
 # The next is another way of doing the same thing as before
-vlarray = fileh.create_vlarray(root, 'vlarray5', ObjectAtom(),
-                              "pickled object")
+vlarray = fileh.create_vlarray(root, 'vlarray5', tables.ObjectAtom(),
+                               "pickled object")
 vlarray.append([["123", "456"], "3"])
 
 # Boolean arrays are supported as well
-vlarray = fileh.create_vlarray(root, 'vlarray6', BoolAtom(),
+vlarray = fileh.create_vlarray(root, 'vlarray6', tables.BoolAtom(),
                                "Boolean atoms")
 # The next lines are equivalent...
 vlarray.append([1, 0])
 vlarray.append([1, 0, 3, 0])  # This will be converted to a boolean
 # This gives a TypeError
-#vlarray.append([1,0,1])
+# vlarray.append([1,0,1])
 
 # Variable length strings
-vlarray = fileh.create_vlarray(root, 'vlarray7', VLStringAtom(),
-                              "Variable Length String")
+vlarray = fileh.create_vlarray(root, 'vlarray7', tables.VLStringAtom(),
+                               "Variable Length String")
 vlarray.append("asd")
 vlarray.append("aaana")
 
 # Unicode variable length strings
-vlarray = fileh.create_vlarray(root, 'vlarray8', VLUnicodeAtom(),
+vlarray = fileh.create_vlarray(root, 'vlarray8', tables.VLUnicodeAtom(),
                                "Variable Length Unicode String")
-vlarray.append(u"aaana")
-vlarray.append(u"")   # The empty string
-vlarray.append(u"asd")
-vlarray.append(u"para\u0140lel")
+vlarray.append("aaana")
+vlarray.append("")   # The empty string
+vlarray.append("asd")
+vlarray.append("para\u0140lel")
 
 # Close the file
 fileh.close()
 
 # Open the file for reading
-fileh = open_file("vlarray2.h5", mode = "r")
+fileh = tables.open_file("vlarray2.h5", mode="r")
 # Get the root group
 root = fileh.root
 
 for object in fileh.list_nodes(root, "Leaf"):
     arr = object.read()
-    print object.name, "-->", arr
-    print "number of objects in this row:", len(arr)
+    print(object.name, "-->", arr)
+    print("number of objects in this row:", len(arr))
 
 # Close the file
 fileh.close()
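
vlarray2.py above stores a pickled tuple twice: by hand as raw ``UInt8Atom``
bytes, and via ``ObjectAtom``, which pickles and unpickles transparently on
append and read. A reduced sketch of the round trip (file name hypothetical):

    import pickle
    import numpy as np
    import tables

    fileh = tables.open_file("vlpickle.h5", mode="w")  # hypothetical file
    # By hand: serialize, then store the bytes as uint8 atoms
    raw = fileh.create_vlarray("/", "raw", tables.UInt8Atom(),
                               "pickled bytes")
    data = pickle.dumps((["123", "456"], "3"))
    raw.append(np.ndarray(buffer=data, dtype=np.uint8, shape=len(data)))
    # Transparently: ObjectAtom does the pickling for you
    obj = fileh.create_vlarray("/", "obj", tables.ObjectAtom(),
                               "pickled object")
    obj.append((["123", "456"], "3"))
    print(obj[0])  # comes back as the original Python object
    fileh.close()
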
diff --git a/examples/vlarray3.py b/examples/vlarray3.py
index c17ab81..6a5601d 100644
--- a/examples/vlarray3.py
+++ b/examples/vlarray3.py
@@ -1,8 +1,9 @@
 #!/usr/bin/env python
 
-"""Example that shows how to easily save a variable number of atoms
-with a VLArray."""
+"""Example that shows how to easily save a variable number of atoms with a
+VLArray."""
 
+from __future__ import print_function
 import numpy
 import tables
 
@@ -10,18 +11,18 @@ N = 100
 shape = (3, 3)
 
 numpy.random.seed(10)  # For reproducible results
-f = tables.open_file("vlarray3.h5", mode = "w")
+f = tables.open_file("vlarray3.h5", mode="w")
 vlarray = f.create_vlarray(f.root, 'vlarray1',
-                          tables.Float64Atom(shape=shape),
-                          "ragged array of arrays")
+                           tables.Float64Atom(shape=shape),
+                           "ragged array of arrays")
 
 k = 0
-for i in xrange(N):
+for i in range(N):
     l = []
-    for j in xrange(numpy.random.randint(N)):
+    for j in range(numpy.random.randint(N)):
         l.append(numpy.random.randn(*shape))
         k += 1
     vlarray.append(l)
 
-print "Total number of atoms:", k
+print("Total number of atoms:", k)
 f.close()
diff --git a/examples/vlarray4.py b/examples/vlarray4.py
index 3d484ca..e98f7a0 100644
--- a/examples/vlarray4.py
+++ b/examples/vlarray4.py
@@ -1,8 +1,9 @@
 #!/usr/bin/env python
 
-"""Example that shows how to easily save a variable number of atoms
-with a VLArray."""
+"""Example that shows how to easily save a variable number of atoms with a
+VLArray."""
 
+from __future__ import print_function
 import numpy
 import tables
 
@@ -10,18 +11,18 @@ N = 100
 shape = (3, 3)
 
 numpy.random.seed(10)  # For reproducible results
-f = tables.open_file("vlarray4.h5", mode = "w")
+f = tables.open_file("vlarray4.h5", mode="w")
 vlarray = f.create_vlarray(f.root, 'vlarray1',
-                          tables.Float64Atom(shape=shape),
-                          "ragged array of arrays")
+                           tables.Float64Atom(shape=shape),
+                           "ragged array of arrays")
 
 k = 0
-for i in xrange(N):
+for i in range(N):
     l = []
-    for j in xrange(numpy.random.randint(N)):
+    for j in range(numpy.random.randint(N)):
         l.append(numpy.random.randn(*shape))
         k += 1
     vlarray.append(l)
 
-print "Total number of atoms:", k
+print("Total number of atoms:", k)
 f.close()
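
vlarray3.py and vlarray4.py above append lists of fixed-shape (3, 3) atoms, so
each VLArray row is a ragged sequence of matrices. The same idea in a
compressed, deterministic form (file name hypothetical):

    import numpy as np
    import tables

    f = tables.open_file("ragged.h5", mode="w")  # hypothetical file
    vla = f.create_vlarray(f.root, "mats", tables.Float64Atom(shape=(3, 3)),
                           "ragged array of 3x3 matrices")
    for n in (2, 1, 5):                # rows with 2, 1 and 5 matrices
        vla.append([np.zeros((3, 3))] * n)
    print([len(row) for row in vla])   # -> [2, 1, 5]
    f.close()
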
diff --git a/setup.py b/setup.py
index d966c03..dcc3b6b 100755
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@ import ctypes
 import textwrap
 import subprocess
 from os.path import exists, expanduser
-
+import glob
 
 # Using ``setuptools`` enables lots of goodies, such as building eggs.
 if 'FORCE_SETUPTOOLS' in os.environ:
@@ -29,23 +29,32 @@ cmdclass = {}
 setuptools_kwargs = {}
 
 if sys.version_info >= (3,):
-    exclude_fixers = [
-        'lib2to3.fixes.fix_idioms',
-        'lib2to3.fixes.fix_zip',
+    fixer_names = [
+        'lib2to3.fixes.fix_basestring',
+        'lib2to3.fixes.fix_dict',
+        'lib2to3.fixes.fix_imports',
+        'lib2to3.fixes.fix_long',
+        'lib2to3.fixes.fix_metaclass',
+        'lib2to3.fixes.fix_next',
+        'lib2to3.fixes.fix_numliterals',
+        'lib2to3.fixes.fix_print',
+        'lib2to3.fixes.fix_unicode',
+        'lib2to3.fixes.fix_xrange',
     ]
 
     if has_setuptools:
+        from lib2to3.refactor import get_fixers_from_package
+
+        all_fixers = set(get_fixers_from_package('lib2to3.fixes'))
+        exclude_fixers = sorted(all_fixers.difference(fixer_names))
+
         setuptools_kwargs['use_2to3'] = True
         setuptools_kwargs['use_2to3_fixers'] = []
         setuptools_kwargs['use_2to3_exclude_fixers'] = exclude_fixers
     else:
         from distutils.command.build_py import build_py_2to3 as build_py
-        from distutils.command.build_scripts import build_scripts_2to3 as build_scripts
-
-        from lib2to3.refactor import get_fixers_from_package
-
-        all_fixers = set(get_fixers_from_package('lib2to3.fixes'))
-        fixer_names = sorted(all_fixers.difference(exclude_fixers))
+        from distutils.command.build_scripts \
+            import build_scripts_2to3 as build_scripts
 
         build_py.fixer_names = fixer_names
         build_scripts.fixer_names = fixer_names
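
The fixer selection above inverts the old logic: instead of excluding two
fixers from 2to3's full set, setup.py now keeps an explicit allowlist and
excludes everything else. The derivation in isolation (fixer list abridged):

    from lib2to3.refactor import get_fixers_from_package

    fixer_names = ['lib2to3.fixes.fix_print', 'lib2to3.fixes.fix_xrange']
    all_fixers = set(get_fixers_from_package('lib2to3.fixes'))
    exclude_fixers = sorted(all_fixers.difference(fixer_names))
    # the setuptools build step then runs 2to3 with only the allowed fixers
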
@@ -55,10 +64,12 @@ if sys.version_info >= (3,):
 
 
 # The minimum required versions
-# (keep these in sync with tables.req_versions and user's guide and README)
-min_numpy_version = '1.4.1'
-min_numexpr_version = '2.0.0'
-min_cython_version = '0.13'
+min_numpy_version = None
+min_numexpr_version = None
+min_cython_version = None
+min_hdf5_version = None
+min_python_version = (2, 6)
+exec(open(os.path.join('tables', 'req_versions.py')).read())
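
The minimum versions are now read from ``tables/req_versions.py`` via ``exec``
so that setup.py and the installed package stay in sync. That module is not
shown in this diff; presumably it is just a set of assignments along these
lines (the first three values are the ones previously hard-coded in setup.py;
the HDF5 minimum is an illustrative guess):

    # tables/req_versions.py -- sketch, actual contents not in this diff
    min_numpy_version = '1.4.1'
    min_numexpr_version = '2.0.0'
    min_cython_version = '0.13'
    min_hdf5_version = '1.8.4'   # assumption: not visible in this hunk
    min_python_version = (2, 6)
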
 
 
 # Some functions for showing errors and warnings.
@@ -80,7 +91,7 @@ def print_warning(head, body=''):
 
 
 # Check for Python
-if sys.version_info < (2, 6):
+if sys.version_info < min_python_version:
     exit_with_error("You need Python 2.6 or greater to install PyTables!")
 print("* Using Python %s" % sys.version.splitlines()[0])
 
@@ -134,7 +145,7 @@ debug = '--debug' in sys.argv
 
 # Global variables
 lib_dirs = []
-inc_dirs = ['blosc']
+inc_dirs = ['c-blosc/hdf5']
 optional_libs = []
 data_files = []    # list of data files to add to packages (mainly for DLL's)
 
@@ -156,18 +167,21 @@ def add_from_flags(envname, flag_key, dirs):
             dirs.append(flag[len(flag_key):])
 
 if os.name == 'posix':
+    prefixes = ('/usr/local', '/sw', '/opt', '/opt/local', '/usr', '/')
+
     default_header_dirs = []
     add_from_path("CPATH", default_header_dirs)
     add_from_path("C_INCLUDE_PATH", default_header_dirs)
     add_from_flags("CPPFLAGS", "-I", default_header_dirs)
-    default_header_dirs.extend(['/usr/include', '/usr/local/include'])
+    default_header_dirs.extend(
+        os.path.join(_tree, 'include') for _tree in prefixes
+    )
 
     default_library_dirs = []
     add_from_flags("LDFLAGS", "-L", default_library_dirs)
     default_library_dirs.extend(
-        os.path.join(_tree, _arch)
-        for _tree in ('/usr/local', '/sw', '/opt', '/opt/local', '/usr', '/')
-            for _arch in ('lib64', 'lib'))
+        os.path.join(_tree, _arch) for _tree in prefixes
+        for _arch in ('lib64', 'lib'))
     default_runtime_dirs = default_library_dirs
 
 elif os.name == 'nt':
@@ -253,8 +267,10 @@ class Package(object):
             # component directories to the given path.
             # Remove leading and trailing '"' chars that can mislead
             # the finding routines on Windows machines
-            locations = [os.path.join(location.strip('"'), compdir)
-                                        for compdir in self._component_dirs]
+            locations = [
+                os.path.join(location.strip('"'), compdir)
+                for compdir in self._component_dirs
+            ]
 
         directories = [None, None, None]  # headers, libraries, runtime
         for idx, (name, find_path, default_dirs) in enumerate(dirdata):
@@ -321,10 +337,10 @@ def get_hdf5_version(headername):
         if 'H5_VERS_RELEASE' in line:
             release_version = int(re.split("\s*", line)[2])
         if (major_version != -1 and minor_version != -1 and
-                                                    release_version != -1):
+                release_version != -1):
             break
     if (major_version == -1 or minor_version == -1 or
-                                                    release_version == -1):
+            release_version == -1):
         exit_with_error("Unable to detect HDF5 library version!")
     return (major_version, minor_version, release_version)
 
@@ -337,6 +353,7 @@ if os.name == 'posix':
         'LZO2': ['lzo2'],
         'LZO': ['lzo'],
         'BZ2': ['bz2'],
+        'BLOSC': ['blosc'],
     }
 elif os.name == 'nt':
     _Package = WindowsPackage
@@ -345,6 +362,7 @@ elif os.name == 'nt':
         'LZO2': ['lzo2', 'lzo2'],
         'LZO': ['liblzo', 'lzo1'],
         'BZ2': ['bzip2', 'bzip2'],
+        'BLOSC': ['blosc', 'blosc'],
     }
 
     # Copy the next DLL's to binaries by default.
@@ -363,6 +381,8 @@ lzo1_package = _Package("LZO 1", 'LZO', 'lzo1x', *_platdep['LZO'])
 lzo1_package.target_function = 'lzo_version_date'
 bzip2_package = _Package("bzip2", 'BZ2', 'bzlib', *_platdep['BZ2'])
 bzip2_package.target_function = 'BZ2_bzlibVersion'
+blosc_package = _Package("blosc", 'BLOSC', 'blosc', *_platdep['BLOSC'])
+blosc_package.target_function = 'blosc_list_compressors'  # Blosc >= 1.3
 
 
 #-----------------------------------------------------------------
@@ -379,6 +399,7 @@ if os.name == 'nt':
 HDF5_DIR = os.environ.get('HDF5_DIR', '')
 LZO_DIR = os.environ.get('LZO_DIR', '')
 BZIP2_DIR = os.environ.get('BZIP2_DIR', '')
+BLOSC_DIR = os.environ.get('BLOSC_DIR', '')
 LFLAGS = os.environ.get('LFLAGS', '').split()
 # in GCC-style compilers, -w in extra flags will get rid of copious
 # 'uninitialized variable' Cython warnings. However, this shouldn't be
@@ -388,7 +409,7 @@ CFLAGS = os.environ.get('CFLAGS', '').split()
 LIBS = os.environ.get('LIBS', '').split()
 
 # ...then the command line.
-# Handle --hdf5=[PATH] --lzo=[PATH] --bzip2=[PATH]
+# Handle --hdf5=[PATH] --lzo=[PATH] --bzip2=[PATH] --blosc=[PATH]
 # --lflags=[FLAGS] --cflags=[FLAGS] and --debug
 args = sys.argv[:]
 for arg in args:
@@ -401,6 +422,9 @@ for arg in args:
     elif arg.find('--bzip2=') == 0:
         BZIP2_DIR = expanduser(arg.split('=')[1])
         sys.argv.remove(arg)
+    elif arg.find('--blosc=') == 0:
+        BLOSC_DIR = expanduser(arg.split('=')[1])
+        sys.argv.remove(arg)
     elif arg.find('--lflags=') == 0:
         LFLAGS = arg.split('=')[1].split()
         sys.argv.remove(arg)
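
The new --blosc= option mirrors the existing --hdf5/--lzo/--bzip2
switches, with BLOSC_DIR as the environment fallback.  A sketch of both
spellings (the prefix path is illustrative):

    BLOSC_DIR=/usr/local python setup.py build_ext --inplace
    python setup.py build_ext --inplace --blosc=/usr/local
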
@@ -471,7 +495,8 @@ compiler = new_compiler()
 for (package, location) in [(hdf5_package, HDF5_DIR),
                             (lzo2_package, LZO_DIR),
                             (lzo1_package, LZO_DIR),
-                            (bzip2_package, BZIP2_DIR)]:
+                            (bzip2_package, BZIP2_DIR),
+                            (blosc_package, BLOSC_DIR)]:
 
     if package.tag == 'LZO' and lzo2_enabled:
         print("* Skipping detection of %s since %s has already been found."
@@ -495,8 +520,13 @@ for (package, location) in [(hdf5_package, HDF5_DIR),
                 "by setting the ``%(tag)s_DIR`` environment variable "
                 "or by using the ``--%(ltag)s`` command-line option."
                 % dict(name=pname, tag=ptag, ltag=ptag.lower()))
-        print("* Could not find %s headers and library; "
-              "disabling support for it." % package.name)
+        if package.tag == 'BLOSC':  # this is optional, but comes with sources
+            print("* Could not find %s headers and library; "
+                  "using internal sources." % package.name)
+        else:
+            print("* Could not find %s headers and library; "
+                  "disabling support for it." % package.name)
+
         continue  # look for the next library
 
     if libdir in ("", True):
@@ -509,8 +539,12 @@ for (package, location) in [(hdf5_package, HDF5_DIR),
     if package.tag in ['HDF5']:
         hdf5_header = os.path.join(hdrdir, "H5public.h")
         hdf5_version = get_hdf5_version(hdf5_header)
-        if hdf5_version < (1, 8, 4):
-            exit_with_error("Unsupported HDF5 version!")
+        if hdf5_version < min_hdf5_version:
+            exit_with_error(
+                "Unsupported HDF5 version! HDF5 v%s+ required. "
+                "Found version v%s" % (
+                    '.'.join(map(str, min_hdf5_version)),
+                    '.'.join(map(str, hdf5_version))))
 
     if hdrdir not in default_header_dirs:
         inc_dirs.append(hdrdir)  # save header directory if needed
@@ -584,9 +618,12 @@ def get_cython_extfiles(extnames):
         if not exists(extcfile) or newer(extpfile, extcfile):
             # For some reason, setup in setuptools does not compile
             # Cython files (!)  Do that manually...
+            # 2013/08/24: the issue should be fixed in distribute 0.6.15
+            # see also https://bitbucket.org/tarek/distribute/issue/195
             print("cythoning %s to %s" % (extpfile, extcfile))
             retcode = subprocess.call(
-                            [sys.executable, "-m", "cython", extpfile])
+                [sys.executable, "-m", "cython", extpfile]
+            )
             if retcode > 0:
                 print("cython aborted compilation with retcode:", retcode)
                 sys.exit()
@@ -678,7 +715,32 @@ if os.name == "nt":
         ('Lib/site-packages/%s' % name, dll_files),
     ])
 
-ADDLIBS = [hdf5_package.library_name, ]
+ADDLIBS = [hdf5_package.library_name]
+
+# List of Blosc file dependencies
+blosc_files = ["c-blosc/hdf5/blosc_filter.c"]
+if 'BLOSC' not in optional_libs:
+    # Compiling everything from sources
+    # Blosc + BloscLZ sources
+    blosc_files += glob.glob('c-blosc/blosc/*.c')
+    # LZ4 sources
+    blosc_files += glob.glob('c-blosc/internal-complibs/lz4*/*.c')
+    # Snappy sources
+    blosc_files += glob.glob('c-blosc/internal-complibs/snappy*/*.cc')
+    # Zlib sources
+    blosc_files += glob.glob('c-blosc/internal-complibs/zlib*/*.c')
+    # Finally, add all the include dirs...
+    inc_dirs += [os.path.join('c-blosc', 'blosc')]
+    inc_dirs += glob.glob('c-blosc/internal-complibs/*')
+    # ...and the macros for all the compressors supported
+    def_macros += [('HAVE_LZ4', 1), ('HAVE_SNAPPY', 1), ('HAVE_ZLIB', 1)]
+    # Add the -msse2 flag to optimize the shuffle filter in the included Blosc
+    if os.name == 'posix':
+        CFLAGS.append("-msse2")
+else:
+    ADDLIBS += ['blosc']
+
+
 utilsExtension_libs = LIBS + ADDLIBS
 hdf5Extension_libs = LIBS + ADDLIBS
 tableExtension_libs = LIBS + ADDLIBS
@@ -695,9 +757,6 @@ for (package, complibs) in [(lzo_package, _comp_lzo_libs),
     if package.tag in optional_libs:
         complibs.extend([hdf5_package.library_name, package.library_name])
 
-# List of Blosc file dependencies
-blosc_files = ["blosc/blosc.c", "blosc/blosclz.c", "blosc/shuffle.c",
-               "blosc/blosc_filter.c"]
 
 extensions = [
     Extension("tables.utilsextension",
@@ -810,10 +869,11 @@ Operating System :: Microsoft :: Windows
 Operating System :: Unix
 """
 
-setup(name=name,
-      version=VERSION,
-      description='Hierarchical datasets for Python',
-      long_description="""\
+setup(
+    name=name,
+    version=VERSION,
+    description='Hierarchical datasets for Python',
+    long_description="""\
 PyTables is a package for managing hierarchical datasets and
 designed to efficiently cope with extremely large amounts of
 data. PyTables is built on top of the HDF5 library and the
 makes it a fast, yet extremely easy-to-use tool for
 interactively saving and retrieving large amounts of data.
 
 """,
-      classifiers=[c for c in classifiers.split("\n") if c],
-      author='Francesc Alted, Ivan Vilata, et al.',
-      author_email='pytables at pytables.org',
-      maintainer='PyTables maintainers',
-      maintainer_email='pytables at pytables.org',
-      url='http://www.pytables.org/',
-      license='http://www.opensource.org/licenses/bsd-license.php',
-      download_url="http://sourceforge.net/projects/pytables/files/pytables/",
-      platforms=['any'],
-      ext_modules=extensions,
-      cmdclass=cmdclass,
-      data_files=data_files,
-      **setuptools_kwargs
+    classifiers=[c for c in classifiers.split("\n") if c],
+    author='Francesc Alted, Ivan Vilata, et al.',
+    author_email='pytables at pytables.org',
+    maintainer='PyTables maintainers',
+    maintainer_email='pytables at pytables.org',
+    url='http://www.pytables.org/',
+    license='http://www.opensource.org/licenses/bsd-license.php',
+    download_url="http://sourceforge.net/projects/pytables/files/pytables/",
+    platforms=['any'],
+    ext_modules=extensions,
+    cmdclass=cmdclass,
+    data_files=data_files,
+    **setuptools_kwargs
 )
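
After building, a quick way to confirm that the detected (or internal)
libraries were actually linked in, assuming tables.print_versions() is
available as in previous releases:

    python -c "import tables; tables.print_versions()"
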
diff --git a/src/H5ARRAY.c b/src/H5ARRAY.c
index e04ff3f..9308962 100644
--- a/src/H5ARRAY.c
+++ b/src/H5ARRAY.c
@@ -4,7 +4,7 @@
 #include "utils.h"
 #include "H5Zlzo.h"                    /* Import FILTER_LZO */
 #include "H5Zbzip2.h"                  /* Import FILTER_BZIP2 */
-#include "../blosc/blosc_filter.h"     /* Import FILTER_BLOSC */
+#include "blosc_filter.h"              /* Import FILTER_BLOSC */
 
 #include <string.h>
 #include <stdlib.h>
@@ -50,7 +50,9 @@ herr_t H5ARRAYmake( hid_t loc_id,
  hid_t   dataset_id, space_id;
  hsize_t *maxdims = NULL;
  hid_t   plist_id = 0;
- unsigned int cd_values[6];
+ unsigned int cd_values[7];
+ int     blosc_compcode;
+ char    *blosc_compname = NULL;
  int     chunked = 0;
  int     i;
 
@@ -102,8 +104,8 @@ herr_t H5ARRAYmake( hid_t loc_id,
      if ( H5Pset_fletcher32( plist_id) < 0 )
        return -1;
    }
-   /* Then shuffle (not if blosc is activated) */
-   if ((shuffle) && (strcmp(complib, "blosc") != 0)) {
+   /* Then shuffle (blosc shuffles inplace) */
+   if ((shuffle) && (strncmp(complib, "blosc", 5) != 0)) {
      if ( H5Pset_shuffle( plist_id) < 0 )
        return -1;
    }
@@ -128,6 +130,16 @@ herr_t H5ARRAYmake( hid_t loc_id,
        if ( H5Pset_filter( plist_id, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 6, cd_values) < 0 )
          return -1;
      }
+     /* The Blosc compressor can use other compressors */
+     else if (strncmp(complib, "blosc:", 6) == 0) {
+       cd_values[4] = compress;
+       cd_values[5] = shuffle;
+       blosc_compname = complib + 6;
+       blosc_compcode = blosc_compname_to_compcode(blosc_compname);
+       cd_values[6] = blosc_compcode;
+       if ( H5Pset_filter( plist_id, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 7, cd_values) < 0 )
+         return -1;
+     }
      /* The LZO compressor does accept parameters */
      else if (strcmp(complib, "lzo") == 0) {
        if ( H5Pset_filter( plist_id, FILTER_LZO, H5Z_FLAG_OPTIONAL, 3, cd_values) < 0 )
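
The extra cd_values[6] slot carries the compressor code resolved by
blosc_compname_to_compcode(), which is what makes the new
'blosc:<name>' spellings work end to end.  A minimal sketch at the
Python level (assumes the named compressor was compiled in; see
blosc_compressor_list()):

    import tables

    f_default = tables.Filters(complevel=5, complib='blosc')    # BloscLZ
    f_lz4 = tables.Filters(complevel=5, complib='blosc:lz4')    # Blosc + LZ4
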
diff --git a/src/H5TB-opt.c b/src/H5TB-opt.c
index da97d37..4a05d97 100644
--- a/src/H5TB-opt.c
+++ b/src/H5TB-opt.c
@@ -35,7 +35,7 @@
 #include "tables.h"
 #include "H5Zlzo.h"                    /* Import FILTER_LZO */
 #include "H5Zbzip2.h"                  /* Import FILTER_BZIP2 */
-#include "../blosc/blosc_filter.h"     /* Import FILTER_BLOSC */
+#include "blosc_filter.h"              /* Import FILTER_BLOSC */
 
 /* Define this in order to shrink datasets after deleting */
 #if 1
@@ -94,7 +94,9 @@ herr_t H5TBOmake_table( const char *table_title,
  hsize_t dims[1];
  hsize_t dims_chunk[1];
  hsize_t maxdims[1] = { H5S_UNLIMITED };
- unsigned int cd_values[6];
+ unsigned int cd_values[7];
+ int     blosc_compcode;
+ char    *blosc_compname = NULL;
 
  dims[0]       = nrecords;
  dims_chunk[0] = chunk_size;
@@ -129,7 +131,7 @@ herr_t H5TBOmake_table( const char *table_title,
      return -1;
  }
  /* Then shuffle (blosc shuffles inplace) */
- if (shuffle && (strcmp(complib, "blosc") != 0)) {
+ if (shuffle && (strncmp(complib, "blosc", 5) != 0)) {
    if ( H5Pset_shuffle( plist_id) < 0 )
      return -1;
  }
@@ -151,6 +153,16 @@ herr_t H5TBOmake_table( const char *table_title,
      if ( H5Pset_filter( plist_id, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 6, cd_values) < 0 )
        return -1;
    }
+   /* The Blosc compressor can use other compressors */
+   else if (strncmp(complib, "blosc:", 6) == 0) {
+     cd_values[4] = compress;
+     cd_values[5] = shuffle;
+     blosc_compname = complib + 6;
+     blosc_compcode = blosc_compname_to_compcode(blosc_compname);
+     cd_values[6] = blosc_compcode;
+     if ( H5Pset_filter( plist_id, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 7, cd_values) < 0 )
+       return -1;
+   }
    /* The LZO compressor does accept parameters */
    else if (strcmp(complib, "lzo") == 0) {
      if ( H5Pset_filter( plist_id, FILTER_LZO, H5Z_FLAG_OPTIONAL, 3, cd_values) < 0 )
diff --git a/src/H5VLARRAY.c b/src/H5VLARRAY.c
index 0f2bf44..582c611 100644
--- a/src/H5VLARRAY.c
+++ b/src/H5VLARRAY.c
@@ -3,7 +3,7 @@
 #include "utils.h"                  /* get_order */
 #include "H5Zlzo.h"                 /* Import FILTER_LZO */
 #include "H5Zbzip2.h"               /* Import FILTER_BZIP2 */
-#include "../blosc/blosc_filter.h"  /* Import FILTER_BLOSC */
+#include "blosc_filter.h"           /* Import FILTER_BLOSC */
 #include <string.h>
 #include <stdlib.h>
 
@@ -50,7 +50,9 @@ herr_t H5VLARRAYmake( hid_t loc_id,
  hsize_t maxdims[1] = { H5S_UNLIMITED };
  hsize_t dims_chunk[1];
  hid_t   plist_id;
- unsigned int cd_values[6];
+ unsigned int cd_values[7];
+ int     blosc_compcode;
+ char    *blosc_compname = NULL;
 
  if (data)
    /* if data, one row will be filled initially */
@@ -94,7 +96,7 @@ herr_t H5VLARRAYmake( hid_t loc_id,
      return -1;
  }
  /* Then shuffle (blosc shuffles inplace) */
- if (shuffle && (strcmp(complib, "blosc") != 0)) {
+ if (shuffle && (strncmp(complib, "blosc", 5) != 0)) {
    if ( H5Pset_shuffle( plist_id) < 0 )
      return -1;
  }
@@ -115,6 +117,16 @@ herr_t H5VLARRAYmake( hid_t loc_id,
      if ( H5Pset_filter( plist_id, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 6, cd_values) < 0 )
        return -1;
    }
+   /* The Blosc compressor can use other compressors */
+   else if (strncmp(complib, "blosc:", 6) == 0) {
+     cd_values[4] = compress;
+     cd_values[5] = shuffle;
+     blosc_compname = complib + 6;
+     blosc_compcode = blosc_compname_to_compcode(blosc_compname);
+     cd_values[6] = blosc_compcode;
+     if ( H5Pset_filter( plist_id, FILTER_BLOSC, H5Z_FLAG_OPTIONAL, 7, cd_values) < 0 )
+       return -1;
+   }
    /* The LZO compressor does accept parameters */
    else if (strcmp(complib, "lzo") == 0) {
      if ( H5Pset_filter( plist_id, FILTER_LZO, H5Z_FLAG_OPTIONAL, 3, cd_values) < 0 )
diff --git a/src/idx-opt.c b/src/idx-opt.c
index f9e3edf..73225bd 100644
--- a/src/idx-opt.c
+++ b/src/idx-opt.c
@@ -8,6 +8,8 @@
  *-------------------------------------------------------------------------
  */
 
+#define NAN_AWARE_LT(a, b) (a < b || (b != b && a == a))
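+/* The comparison above treats NaN as larger than any regular value:
+ * (b != b) holds only when b is NaN, so every non-NaN sorts below NaN
+ * and NaNs end up grouped at the end instead of scattering the sort. */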
+
 /*-------------------------------------------------------------------------
  * Function: bisect_{left,right}_optim_*
  *
@@ -500,16 +502,16 @@ int keysort_f96(npy_float96 *start1, char *start2, npy_intp num, int ts)
     while ((pr - pl) > SMALL_QUICKSORT) {
       /* quicksort partition */
       pm = pl + ((pr - pl) >> 1); ipm = ipl + (((ipr - ipl)/ts) >> 1)*ts;
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
-      if (*pr < *pm) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pr, *pm)) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
       vp = *pm;
       pi = pl; ipi = ipl;
       pj = pr - 1; ipj = ipr - ts;
       SWAP(*pm, *pj); iSWAP(ipm, ipj);
       for(;;) {
-        do { ++pi; ipi += ts; } while (*pi < vp);
-        do { --pj; ipj -= ts; } while (vp < *pj);
+        do { ++pi; ipi += ts; } while (NAN_AWARE_LT(*pi, vp));
+        do { --pj; ipj -= ts; } while (NAN_AWARE_LT(vp, *pj));
         if (pi >= pj)  break;
         SWAP(*pi, *pj); iSWAP(ipi, ipj);
       }
@@ -529,7 +531,7 @@ int keysort_f96(npy_float96 *start1, char *start2, npy_intp num, int ts)
     for(pi = pl + 1, ipi = ipl + ts; pi <= pr; ++pi, ipi += ts) {
       vp = *pi; opt_memcpy(ivp, ipi, ts);
       for(pj = pi, pt = pi - 1, ipj = ipi, ipt = ipi - ts; \
-          pj > pl && vp < *pt;) {
+          pj > pl && NAN_AWARE_LT(vp, *pt);) {
         *pj-- = *pt--; opt_memcpy(ipj, ipt, ts); ipj -= ts; ipt -= ts;
       }
       *pj = vp; opt_memcpy(ipj, ivp, ts);
@@ -568,16 +570,16 @@ int keysort_f128(npy_float128 *start1, char *start2, npy_intp num, int ts)
     while ((pr - pl) > SMALL_QUICKSORT) {
       /* quicksort partition */
       pm = pl + ((pr - pl) >> 1); ipm = ipl + (((ipr - ipl)/ts) >> 1)*ts;
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
-      if (*pr < *pm) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pr, *pm)) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
       vp = *pm;
       pi = pl; ipi = ipl;
       pj = pr - 1; ipj = ipr - ts;
       SWAP(*pm, *pj); iSWAP(ipm, ipj);
       for(;;) {
-        do { ++pi; ipi += ts; } while (*pi < vp);
-        do { --pj; ipj -= ts; } while (vp < *pj);
+        do { ++pi; ipi += ts; } while (NAN_AWARE_LT(*pi, vp));
+        do { --pj; ipj -= ts; } while (NAN_AWARE_LT(vp, *pj));
         if (pi >= pj)  break;
         SWAP(*pi, *pj); iSWAP(ipi, ipj);
       }
@@ -597,7 +599,7 @@ int keysort_f128(npy_float128 *start1, char *start2, npy_intp num, int ts)
     for(pi = pl + 1, ipi = ipl + ts; pi <= pr; ++pi, ipi += ts) {
       vp = *pi; opt_memcpy(ivp, ipi, ts);
       for(pj = pi, pt = pi - 1, ipj = ipi, ipt = ipi - ts; \
-          pj > pl && vp < *pt;) {
+          pj > pl && NAN_AWARE_LT(vp, *pt);) {
         *pj-- = *pt--; opt_memcpy(ipj, ipt, ts); ipj -= ts; ipt -= ts;
       }
       *pj = vp; opt_memcpy(ipj, ivp, ts);
@@ -636,16 +638,16 @@ int keysort_f64(npy_float64 *start1, char *start2, npy_intp num, int ts)
     while ((pr - pl) > SMALL_QUICKSORT) {
       /* quicksort partition */
       pm = pl + ((pr - pl) >> 1); ipm = ipl + (((ipr - ipl)/ts) >> 1)*ts;
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
-      if (*pr < *pm) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pr, *pm)) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
       vp = *pm;
       pi = pl; ipi = ipl;
       pj = pr - 1; ipj = ipr - ts;
       SWAP(*pm, *pj); iSWAP(ipm, ipj);
       for(;;) {
-        do { ++pi; ipi += ts; } while (*pi < vp);
-        do { --pj; ipj -= ts; } while (vp < *pj);
+        do { ++pi; ipi += ts; } while (NAN_AWARE_LT(*pi, vp));
+        do { --pj; ipj -= ts; } while (NAN_AWARE_LT(vp, *pj));
         if (pi >= pj)  break;
         SWAP(*pi, *pj); iSWAP(ipi, ipj);
       }
@@ -665,7 +667,7 @@ int keysort_f64(npy_float64 *start1, char *start2, npy_intp num, int ts)
     for(pi = pl + 1, ipi = ipl + ts; pi <= pr; ++pi, ipi += ts) {
       vp = *pi; opt_memcpy(ivp, ipi, ts);
       for(pj = pi, pt = pi - 1, ipj = ipi, ipt = ipi - ts; \
-          pj > pl && vp < *pt;) {
+          pj > pl && NAN_AWARE_LT(vp, *pt);) {
         *pj-- = *pt--; opt_memcpy(ipj, ipt, ts); ipj -= ts; ipt -= ts;
       }
       *pj = vp; opt_memcpy(ipj, ivp, ts);
@@ -704,16 +706,16 @@ int keysort_f32(npy_float32 *start1, char *start2, npy_intp num, int ts)
     while ((pr - pl) > SMALL_QUICKSORT) {
       /* quicksort partition */
       pm = pl + ((pr - pl) >> 1); ipm = ipl + (((ipr - ipl)/ts) >> 1)*ts;
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
-      if (*pr < *pm) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
-      if (*pm < *pl) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
+      if (NAN_AWARE_LT(*pr, *pm)) { SWAP(*pr, *pm); iSWAP(ipr, ipm); }
+      if (NAN_AWARE_LT(*pm, *pl)) { SWAP(*pm, *pl); iSWAP(ipm, ipl); }
       vp = *pm;
       pi = pl; ipi = ipl;
       pj = pr - 1; ipj = ipr - ts;
       SWAP(*pm, *pj); iSWAP(ipm, ipj);
       for(;;) {
-        do { ++pi; ipi += ts; } while (*pi < vp);
-        do { --pj; ipj -= ts; } while (vp < *pj);
+        do { ++pi; ipi += ts; } while (NAN_AWARE_LT(*pi, vp));
+        do { --pj; ipj -= ts; } while (NAN_AWARE_LT(vp, *pj));
         if (pi >= pj)  break;
         SWAP(*pi, *pj); iSWAP(ipi, ipj);
       }
@@ -733,7 +735,7 @@ int keysort_f32(npy_float32 *start1, char *start2, npy_intp num, int ts)
     for(pi = pl + 1, ipi = ipl + ts; pi <= pr; ++pi, ipi += ts) {
       vp = *pi; opt_memcpy(ivp, ipi, ts);
       for(pj = pi, pt = pi - 1, ipj = ipi, ipt = ipi - ts; \
-          pj > pl && vp < *pt;) {
+          pj > pl && NAN_AWARE_LT(vp, *pt);) {
         *pj-- = *pt--; opt_memcpy(ipj, ipt, ts); ipj -= ts; ipt -= ts;
       }
       *pj = vp; opt_memcpy(ipj, ivp, ts);
diff --git a/src/utils.c b/src/utils.c
index 477e1d7..2d1eb82 100644
--- a/src/utils.c
+++ b/src/utils.c
@@ -184,60 +184,59 @@ PyObject *createNamesList(char *buffer[], int nelements)
 PyObject *get_filter_names( hid_t loc_id,
                             const char *dset_name)
 {
- hid_t    dset;
- hid_t    dcpl;           /* dataset creation property list */
-/*  hsize_t  chsize[64];     /\* chunk size in elements *\/ */
- int      i, j;
- int      nf;             /* number of filters */
- unsigned filt_flags;     /* filter flags */
- size_t   cd_nelmts;      /* filter client number of values */
- unsigned cd_values[20];  /* filter client data values */
- char     f_name[256];    /* filter name */
- PyObject *filters;
- PyObject *filter_values;
-
- /* Open the dataset. */
- if ( (dset = H5Dopen( loc_id, dset_name, H5P_DEFAULT )) < 0 ) {
-   goto out;
- }
+  hid_t    dset;
+  hid_t    dcpl;           /* dataset creation property list */
+  /*  hsize_t  chsize[64];     /\* chunk size in elements *\/ */
+  int      i, j;
+  int      nf;             /* number of filters */
+  unsigned filt_flags;     /* filter flags */
+  size_t   cd_nelmts;      /* filter client number of values */
+  unsigned cd_values[20];  /* filter client data values */
+  char     f_name[256];    /* filter name */
+  PyObject *filters;
+  PyObject *filter_values;
 
- /* Get the properties container */
- dcpl = H5Dget_create_plist(dset);
- /* Collect information about filters on chunked storage */
- if (H5D_CHUNKED==H5Pget_layout(dcpl)) {
-   filters = PyDict_New();
-    nf = H5Pget_nfilters(dcpl);
-   if ((nf = H5Pget_nfilters(dcpl))>0) {
-     for (i=0; i<nf; i++) {
-       cd_nelmts = 20;
-       H5Pget_filter(dcpl, i, &filt_flags, &cd_nelmts,
-                     cd_values, sizeof(f_name), f_name, NULL);
-       filter_values = PyTuple_New(cd_nelmts);
-       for (j=0;j<(long)cd_nelmts;j++) {
-         PyTuple_SetItem(filter_values, j, PyLong_FromLong(cd_values[j]));
-       }
-       PyMapping_SetItemString (filters, f_name, filter_values);
-     }
-   }
- }
- else {
-   /* http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52309 */
-   Py_INCREF(Py_None);
-   filters = Py_None;   /* Not chunked, so return None */
- }
+  /* Open the dataset. */
+  if ( (dset = H5Dopen( loc_id, dset_name, H5P_DEFAULT )) < 0 ) {
+    goto out;
+  }
+
+  /* Get the properties container */
+  dcpl = H5Dget_create_plist(dset);
+  /* Collect information about filters on chunked storage */
+  if (H5D_CHUNKED==H5Pget_layout(dcpl)) {
+    filters = PyDict_New();
+    if ((nf = H5Pget_nfilters(dcpl))>0) {
+      for (i=0; i<nf; i++) {
+        cd_nelmts = 20;
+        H5Pget_filter(dcpl, i, &filt_flags, &cd_nelmts,
+                      cd_values, sizeof(f_name), f_name, NULL);
+        filter_values = PyTuple_New(cd_nelmts);
+        for (j=0;j<(long)cd_nelmts;j++) {
+          PyTuple_SetItem(filter_values, j, PyLong_FromLong(cd_values[j]));
+        }
+        PyMapping_SetItemString (filters, f_name, filter_values);
+      }
+    }
+  }
+  else {
+    /* http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52309 */
+    Py_INCREF(Py_None);
+    filters = Py_None;   /* Not chunked, so return None */
+  }
 
- H5Pclose(dcpl);
- H5Dclose(dset);
+  H5Pclose(dcpl);
+  H5Dclose(dset);
 
-return filters;
+  return filters;
 
 out:
- H5Dclose(dset);
- Py_INCREF(Py_None);
- return Py_None;        /* Not chunked, so return None */
-
+  H5Dclose(dset);
+  Py_INCREF(Py_None);
+  return Py_None;        /* Not chunked, so return None */
 }
 
+
 /****************************************************************
 **
 **  get_objinfo(): Get information about the type of a child.
diff --git a/subtree-merge-blosc.sh b/subtree-merge-blosc.sh
new file mode 100755
index 0000000..7139469
--- /dev/null
+++ b/subtree-merge-blosc.sh
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+# Script to automatically subtree-merge a specific version of Blosc into
+# PyTables.
+
+# TODO
+# ----
+#
+# * Should probably check working tree and index are clean.
+
+# configure remote
+remote="git://github.com/FrancescAlted/blosc.git"
+
+# check argument
+if [ -z "$1" ] ; then
+    echo "usage: subtree-merge-blosc.sh <blosc-tag>"
+    exit 1
+fi
+
+# extract the blosc tag the user has requested
+blosc_tag="$1"
+blosc_tag_long="refs/tags/$1"
+
+# check that it exists on the remote side
+remote_ans=$( git ls-remote $remote $blosc_tag_long )
+if [ -z "$remote_ans" ] ; then
+    echo "no remote tag '$1' found"
+    exit 1
+else
+    echo "found remote tag: '$remote_ans'"
+fi
+
+# fetch the contents of this tag
+git fetch $remote $blosc_tag_long || exit 1
+# subtree merge it
+git merge --squash -s subtree FETCH_HEAD || exit 1
+if git diff --staged --quiet ; then
+    echo "nothing new to be committed"
+    exit 1
+else
+    # set a custom commit message
+    git commit -m "subtree merge blosc $blosc_tag" || exit 1
+fi
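
A hedged usage example (the tag is illustrative; any tag published on
the Blosc repository works):

    ./subtree-merge-blosc.sh v1.3.2
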
diff --git a/tables/__init__.py b/tables/__init__.py
index 10efd27..bab517c 100644
--- a/tables/__init__.py
+++ b/tables/__init__.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""PyTables, hierarchical datasets in Python
+"""PyTables, hierarchical datasets in Python.
 
 :URL: http://www.pytables.org/
 
@@ -30,8 +30,8 @@ if os.name == 'nt':
     def _load_library(dllname, loadfunction, dllpaths=('', )):
         """Load a DLL via ctypes load function. Return None on failure.
 
-        By default, try to load the DLL from the current package directory
-        first, then from the Windows DLL search path.
+        By default, try to load the DLL from the current package
+        directory first, then from the Windows DLL search path.
 
         """
         try:
@@ -59,7 +59,7 @@ if os.name == 'nt':
 
     # In order to improve diagnosis of a common Windows dependency
     # issue, we explicitly test that we can load the HDF5 dll before
-    # loading tables.utilsExtensions.
+    # loading tables.utilsextension.
     if not _load_library('hdf5dll.dll', ctypes.cdll.LoadLibrary):
         raise ImportError(
             'Could not load "hdf5dll.dll", please ensure'
@@ -79,7 +79,9 @@ if os.name == 'nt':
 
 
 # Necessary imports to get versions stored on the cython extension
-from tables.utilsextension import (get_pytables_version, get_hdf5_version,
+from tables.utilsextension import (
+    get_pytables_version, get_hdf5_version, blosc_compressor_list,
+    blosc_compcode_to_compname_ as blosc_compcode_to_compname,
     getPyTablesVersion, getHDF5Version)  # Pending Deprecation!
 
 
@@ -187,10 +189,30 @@ if 'Float16Atom' in locals():
     # float16 is new in numpy 1.6.0
     __all__.extend(('Float16Atom', 'Float16Col'))
 
-if 'Float96Atom' in locals():
-    __all__.extend(('Float96Atom', 'Float96Col'))
-    __all__.extend(('Complex192Atom', 'Complex192Col'))    # XXX check
 
-if 'Float128Atom' in locals():
-    __all__.extend(('Float128Atom', 'Float128Col'))
-    __all__.extend(('Complex256Atom', 'Complex256Col'))    # XXX check
+from tables.utilsextension import _broken_hdf5_long_double
+if not _broken_hdf5_long_double():
+    if 'Float96Atom' in locals():
+        __all__.extend(('Float96Atom', 'Float96Col'))
+        __all__.extend(('Complex192Atom', 'Complex192Col'))    # XXX check
+
+    if 'Float128Atom' in locals():
+        __all__.extend(('Float128Atom', 'Float128Col'))
+        __all__.extend(('Complex256Atom', 'Complex256Col'))    # XXX check
+
+else:
+
+    from tables import atom as _atom
+    from tables import description as _description
+    try:
+        del _atom.Float96Atom, _atom.Complex192Atom
+        del _description.Float96Col, _description.Complex192Col
+        _atom.all_types.discard('complex192')
+        _atom.ComplexAtom._isizes.remove(24)
+    except AttributeError:
+        del _atom.Float128Atom, _atom.Complex256Atom
+        del _description.Float128Col, _description.Complex256Col
+        _atom.all_types.discard('complex256')
+        _atom.ComplexAtom._isizes.remove(32)
+    del _atom, _description
+del _broken_hdf5_long_double
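
The two new exports make the compiled-in codecs introspectable.  A
sketch of what a full build (all internal compressors enabled) would
report; the exact list depends on how Blosc was built:

    >>> import tables
    >>> tables.blosc_compressor_list()
    ['blosclz', 'lz4', 'lz4hc', 'snappy', 'zlib']
    >>> tables.blosc_compcode_to_compname(1)
    'lz4'
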
diff --git a/tables/_comp_bzip2.pyx b/tables/_comp_bzip2.pyx
index e4da176..5831c3b 100644
--- a/tables/_comp_bzip2.pyx
+++ b/tables/_comp_bzip2.pyx
@@ -9,7 +9,8 @@ cdef extern from "H5Zbzip2.h":
 
 
 def register_():
-    cdef char *version, *date
+    cdef char *version
+    cdef char *date
 
     if not register_bzip2(&version, &date):
         return None
diff --git a/tables/_comp_lzo.pyx b/tables/_comp_lzo.pyx
index e465539..accd0bd 100644
--- a/tables/_comp_lzo.pyx
+++ b/tables/_comp_lzo.pyx
@@ -9,7 +9,8 @@ cdef extern from "H5Zlzo.h":
 
 
 def register_():
-    cdef char *version, *date
+    cdef char *version
+    cdef char *date
 
     if not register_lzo(&version, &date):
         return None
diff --git a/tables/_past.py b/tables/_past.py
index f3134c3..c518c92 100644
--- a/tables/_past.py
+++ b/tables/_past.py
@@ -10,8 +10,8 @@
 #
 ########################################################################
 
-"""A module with no PyTables dependencies that helps with deprecation warnings.
-"""
+"""A module with no PyTables dependencies that helps with deprecation
+warnings."""
 from inspect import getmembers, ismethod, isfunction
 from warnings import warn
 
@@ -31,7 +31,7 @@ def previous_api(obj):
     warnmsg = warnmsg.format(oldname, newname)
 
     def oldfunc(*args, **kwargs):
-        warn(warnmsg, PendingDeprecationWarning, stacklevel=2)
+        warn(warnmsg, DeprecationWarning, stacklevel=2)
         return obj(*args, **kwargs)
     oldfunc.__doc__ = (
         obj.__doc__ or '') + "\n\n.. warning::\n\n    " + warnmsg + "\n"
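
With the switch from PendingDeprecationWarning to DeprecationWarning,
the legacy camelCase aliases move one step closer to removal.  The
aliases themselves are built as before; e.g. tables/file.py does:

    from tables._past import previous_api
    # openFile is the legacy alias; calling it emits a DeprecationWarning
    # and forwards to open_file().
    openFile = previous_api(open_file)
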
@@ -46,11 +46,11 @@ def previous_api_property(newname):
     warnmsg = warnmsg.format(oldname, newname)
 
     def _getter(self):
-        warn(warnmsg, PendingDeprecationWarning, stacklevel=1)
+        warn(warnmsg, DeprecationWarning, stacklevel=1)
         return getattr(self, newname)
 
     def _setter(self, value):
-        warn(warnmsg, PendingDeprecationWarning, stacklevel=1)
+        warn(warnmsg, DeprecationWarning, stacklevel=1)
         return setattr(self, newname, value)
 
     _getter.__name__ = _setter.__name__ = oldname
@@ -170,10 +170,6 @@ old2newnames = dict([
     ('_getMarkID', '_get_mark_id'),
     ('_getFinalAction', '_get_final_action'),
     ('getCurrentMark', 'get_current_mark'),
-    ('_refNode', '_refnode'),
-    ('_unrefNode', '_unrefnode'),
-    ('_killNode', '_killnode'),
-    ('_reviveNode', '_revivenode'),
     ('_updateNodeLocations', '_update_node_locations'),
     # from group.py
     #('parentNode', 'parentnode'),                       # kwarg
@@ -300,7 +296,6 @@ old2newnames = dict([
     #('parentNode', 'parentnode'),                       # kwarg
     ('_g_logCreate', '_g_log_create'),
     ('_g_preKillHook', '_g_pre_kill_hook'),
-    ('_g_postReviveHook', '_g_post_revive_hook'),
     ('_g_checkOpen', '_g_check_open'),
     ('_g_setLocation', '_g_set_location'),
     ('_g_updateLocation', '_g_update_location'),
diff --git a/tables/array.py b/tables/array.py
index 919ab3d..91fbefd 100644
--- a/tables/array.py
+++ b/tables/array.py
@@ -21,7 +21,7 @@ from tables.filters import Filters
 from tables.flavor import flavor_of, array_as_internal, internal_to_flavor
 
 from tables.utils import (is_idx, convert_to_np_atom2, SizeType, lazyattr,
-                          byteorders)
+                          byteorders, quantize)
 from tables.leaf import Leaf
 
 from tables._past import previous_api, previous_api_property
@@ -319,7 +319,7 @@ class Array(hdf5extension.Array, Leaf):
         return self
 
     def _init_loop(self):
-        """Initialization for the __iter__ iterator"""
+        """Initialization for the __iter__ iterator."""
 
         self._nrowsread = self._start
         self._startb = self._start
@@ -340,6 +340,7 @@ class Array(hdf5extension.Array, Leaf):
         # listarr buffer
         if self._nrowsread >= self._stop:
             self._init = False
+            self.listarr = None        # fixes issue #308
             raise StopIteration        # end of iteration
         else:
             # Read a chunk of rows
@@ -456,7 +457,7 @@ class Array(hdf5extension.Array, Leaf):
         # Internal functions
 
         def validate_number(num, length):
-            """Validate a list member for the given axis length"""
+            """Validate a list member for the given axis length."""
 
             try:
                 num = long(num)
@@ -704,6 +705,12 @@ class Array(hdf5extension.Array, Leaf):
         if nparr.size == 0:
             return
 
+        # truncate data if least_significant_digit filter is set
+        # TODO: add the least_significant_digit attribute to the array on disk
+        if (self.filters.least_significant_digit is not None and
+                not numpy.issubdtype(nparr.dtype, int)):
+            nparr = quantize(nparr, self.filters.least_significant_digit)
+
         try:
             startl, stopl, stepl, shape = self._interpret_indexing(key)
             self._write_slice(startl, stopl, stepl, shape, nparr)
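
Quantization only applies to floating-point data (integer dtypes pass
through untouched) and happens before compression, which is the whole
point: zeroed-out mantissa bits compress far better.  A minimal sketch,
with illustrative file name and shape:

    import numpy
    import tables

    # keep about 3 decimal digits of precision
    filters = tables.Filters(complib='zlib', complevel=1,
                             least_significant_digit=3)
    with tables.open_file('quantized.h5', 'w') as h5f:
        arr = h5f.create_carray('/', 'data', atom=tables.Float64Atom(),
                                shape=(1000,), filters=filters)
        arr[:] = numpy.random.random(1000)   # truncated on assignment
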
@@ -724,19 +731,16 @@ class Array(hdf5extension.Array, Leaf):
 
         """
 
-        if nparr.shape != slice_shape:
+        if nparr.shape != (slice_shape + self.atom.dtype.shape):
             # Create an array compliant with the specified shape
             narr = numpy.empty(shape=slice_shape, dtype=self.atom.dtype)
-            # Assign the value to it
-            try:
-                narr[...] = nparr
-            except Exception, exc:  # XXX
-                raise ValueError("value parameter '%s' cannot be converted "
-                                 "into an array object compliant with %s: "
-                                 "'%r' The error was: <%s>" % (
-                                 nparr, self.__class__.__name__, self, exc))
+
+            # Assign the value to it. It will raise a ValueError exception
+            # if the objects cannot be broadcast to a single shape.
+            narr[...] = nparr
             return narr
-        return nparr
+        else:
+            return nparr
 
     _checkShape = previous_api(_check_shape)
 
@@ -769,7 +773,11 @@ class Array(hdf5extension.Array, Leaf):
     _readCoords = previous_api(_read_coords)
 
     def _read_selection(self, selection, reorder, shape):
-        """Read a `selection`.  Reorder if necessary."""
+        """Read a `selection`.
+
+        Reorder if necessary.
+
+        """
 
         # Create the container for the slice
         nparr = numpy.empty(dtype=self.atom.dtype, shape=shape)
@@ -809,7 +817,11 @@ class Array(hdf5extension.Array, Leaf):
     _writeCoords = previous_api(_write_coords)
 
     def _write_selection(self, selection, reorder, shape, nparr):
-        """Write `nparr` in `selection`.  Reorder if necessary."""
+        """Write `nparr` in `selection`.
+
+        Reorder if necessary.
+
+        """
 
         nparr = self._check_shape(nparr, tuple(shape))
         # Check whether we should reorder the array
@@ -894,7 +906,7 @@ class Array(hdf5extension.Array, Leaf):
 
     def _g_copy_with_stats(self, group, name, start, stop, step,
                            title, filters, chunkshape, _log, **kwargs):
-        """Private part of Leaf.copy() for each kind of leaf"""
+        """Private part of Leaf.copy() for each kind of leaf."""
 
         # Compute the correct indices.
         (start, stop, step) = self._process_range_read(start, stop, step)
@@ -932,9 +944,9 @@ class Array(hdf5extension.Array, Leaf):
 class ImageArray(Array):
     """Array containing an image.
 
-    This class has no additional behaviour or functionality compared
-    to that of an ordinary array.  It simply enables the user to open
-    an ``IMAGE`` HDF5 node as a normal `Array` node in PyTables.
+    This class has no additional behaviour or functionality compared to
+    that of an ordinary array.  It simply enables the user to open an
+    ``IMAGE`` HDF5 node as a normal `Array` node in PyTables.
 
     """
 
diff --git a/tables/atom.py b/tables/atom.py
index ac7c674..50bc303 100644
--- a/tables/atom.py
+++ b/tables/atom.py
@@ -69,6 +69,7 @@ def split_type(type):
         Traceback (most recent call last):
         ...
         ValueError: malformed type: 'foo bar'
+
     """
 
     match = _type_re.match(type)
@@ -183,6 +184,7 @@ class MetaAtom(type):
 
     This metaclass ensures that data about atom classes gets inserted
     into the suitable registries.
+
     """
 
     def __init__(class_, name, bases, dict_):
@@ -339,6 +341,7 @@ class Atom(object):
             ValueError: unknown NumPy scalar type: 'S5'
             >>> Atom.from_sctype('Float64')
             Float64Atom(shape=(), dflt=0.0)
+
         """
         if (not isinstance(sctype, type)
            or not issubclass(sctype, numpy.generic)):
@@ -360,6 +363,7 @@ class Atom(object):
             Int16Atom(shape=(2, 2), dflt=0)
             >>> Atom.from_dtype(numpy.dtype('Float64'))
             Float64Atom(shape=(), dflt=0.0)
+
         """
         basedtype = dtype.base
         if basedtype.names:
@@ -393,6 +397,7 @@ class Atom(object):
             Traceback (most recent call last):
             ...
             ValueError: unknown type: 'Float64'
+
         """
 
         if type not in all_types:
@@ -432,6 +437,7 @@ class Atom(object):
             Traceback (most recent call last):
             ...
             ValueError: the ``enum`` kind is not supported...
+
         """
 
         kwargs = {'shape': shape}
@@ -547,6 +553,7 @@ class Atom(object):
             Traceback (most recent call last):
             ...
             TypeError: __init__() got an unexpected keyword argument 'foobar'
+
         """
         newargs = self._get_init_args()
         newargs.update(override)
@@ -559,6 +566,7 @@ class Atom(object):
 
         This implementation works on classes which use the same names
         for both constructor arguments and instance attributes.
+
         """
 
         return dict((arg, getattr(self, arg))
@@ -577,6 +585,7 @@ class StringAtom(Atom):
     """Defines an atom of type string.
 
     The item size is the *maximum* length in characters of strings.
+
     """
 
     kind = 'string'
@@ -638,8 +647,7 @@ class FloatAtom(Atom):
 
 def _create_numeric_class(baseclass, itemsize):
     """Create a numeric atom class with the given `baseclass` and an
-    `itemsize`.
-    """
+    `itemsize`."""
 
     prefix = '%s%d' % (baseclass.prefix(), itemsize * 8)
     type_ = prefix.lower()
@@ -682,7 +690,7 @@ def _generate_floating_classes():
 # Create all numeric atom classes.
 for _classgen in [_generate_integral_classes, _generate_floating_classes]:
     for _newclass in _classgen():
-        exec '%s = _newclass' % _newclass.__name__
+        exec('%s = _newclass' % _newclass.__name__)
 del _classgen, _newclass
 
 
@@ -692,6 +700,7 @@ class ComplexAtom(Atom):
     Allowed item sizes are 8 (single precision) and 16 (double precision). This
     class must be used instead of more concrete ones to avoid confusion with
     numarray-like precision specifications used in PyTables 1.X.
+
     """
 
     # This definition is a little more complex (no pun intended)
@@ -738,7 +747,10 @@ class _ComplexErrorAtom(ComplexAtom):
             "where N=8 for single precision complex atoms, "
             "and N=16 for double precision complex atoms")
 Complex32Atom = Complex64Atom = Complex128Atom = _ComplexErrorAtom
-Complex192Atom = Complex256Atom = _ComplexErrorAtom  # XXX check
+if hasattr(numpy, 'complex192'):
+    Complex192Atom = _ComplexErrorAtom
+if hasattr(numpy, 'complex256'):
+    Complex256Atom = _ComplexErrorAtom
 
 
 class TimeAtom(Atom):
@@ -748,6 +760,7 @@ class TimeAtom(Atom):
     a 64 bit floating point value. Both of them reflect the number of seconds
     since the Unix epoch. This atom has the property of being stored using the
     HDF5 time datatypes.
+
     """
 
     kind = 'time'
@@ -833,15 +846,15 @@ class EnumAtom(Atom):
     The next C enum construction::
 
         enum myEnum {
-                    T0,
-                    T1,
-                    T2
-                    };
+            T0,
+            T1,
+            T2
+        };
 
     would correspond to the following PyTables
     declaration::
 
-        >>> myEnumAtom = EnumAtom(['T0', 'T1', 'T2'], 'T0', 'int32')
+        >>> my_enum_atom = EnumAtom(['T0', 'T1', 'T2'], 'T0', 'int32')
 
     Please note the dflt argument with a value of 'T0'. Since the concrete
     value matching T0 is unknown right now (we have not used explicit concrete
@@ -853,14 +866,15 @@ class EnumAtom(Atom):
     could be selected by using the base argument (this time with a full-blown
     storage atom)::
 
-        >>> myEnumAtom = EnumAtom(['T0', 'T1', 'T2'], 'T0', UInt8Atom())
+        >>> my_enum_atom = EnumAtom(['T0', 'T1', 'T2'], 'T0', UInt8Atom())
 
     You can also define multidimensional arrays for data elements::
 
-        >>> myEnumAtom = EnumAtom(
+        >>> my_enum_atom = EnumAtom(
         ...    ['T0', 'T1', 'T2'], 'T0', base='uint32', shape=(3,2))
 
     for 3x2 arrays of uint32.
+
     """
 
     # Registering this class in the class map may be a little wrong,
@@ -1052,6 +1066,7 @@ class VLStringAtom(_BufferedAtom):
     Variable-length string atoms do not accept parameters and they cause the
     reads of rows to always return Python strings.  You can regard vlstring
     atoms as an easy way to save generic variable length strings.
+
     """
 
     kind = 'vlstring'
@@ -1083,6 +1098,7 @@ class VLUnicodeAtom(_BufferedAtom):
     Variable-length Unicode atoms do not accept parameters and they cause the
     reads of rows to always return Python Unicode strings.  You can regard
     vlunicode atoms as an easy way to save variable length Unicode strings.
+
     """
 
     kind = 'vlunicode'
@@ -1133,6 +1149,7 @@ class ObjectAtom(_BufferedAtom):
     Object atoms do not accept parameters and they cause the reads of rows to
     always return Python objects. You can regard object atoms as an easy way to
     save an arbitrary number of generic Python objects in a VLArray dataset.
+
     """
 
     kind = 'object'
diff --git a/tables/attributeset.py b/tables/attributeset.py
index 88686f2..3efe71d 100644
--- a/tables/attributeset.py
+++ b/tables/attributeset.py
@@ -62,15 +62,15 @@ def issysattrname(name):
     "Check if a name is a system attribute or not"
 
     if (name in SYS_ATTRS or
-        numpy.prod([name.startswith(prefix)
-       for prefix in SYS_ATTRS_PREFIXES])):
+            numpy.prod([name.startswith(prefix)
+                        for prefix in SYS_ATTRS_PREFIXES])):
         return True
     else:
         return False
 
 
 class AttributeSet(hdf5extension.AttributeSet, object):
-    """Container for the HDF5 attributes of a Node
+    """Container for the HDF5 attributes of a Node.
 
     This class provides methods to create new HDF5 node attributes,
     and to get, rename or delete existing ones.
@@ -126,13 +126,13 @@ class AttributeSet(hdf5extension.AttributeSet, object):
         >>> h5fname = tempfile.mktemp(suffix='.h5')
         >>> h5f = tables.open_file(h5fname, 'w')
         >>> h5f.root._v_attrs.obj = myObject  # store the object
-        >>> print h5f.root._v_attrs.obj.foo  # retrieve it
+        >>> print(h5f.root._v_attrs.obj.foo)  # retrieve it
         bar
         >>> h5f.close()
         >>>
         >>> del MyClass, myObject  # delete class of object and reopen file
         >>> h5f = tables.open_file(h5fname, 'r')
-        >>> print repr(h5f.root._v_attrs.obj)
+        >>> print(repr(h5f.root._v_attrs.obj))
         'ccopy_reg\\n_reconstructor...
         >>> import pickle  # let's unpickle that to see what went wrong
         >>> pickle.loads(h5f.root._v_attrs.obj)
@@ -162,7 +162,7 @@ class AttributeSet(hdf5extension.AttributeSet, object):
     this::
 
         for name in :attr:`Node._v_attrs`._f_list():
-            print "name: %s, value: %s" % (name, :attr:`Node._v_attrs`[name])
+            print("name: %s, value: %s" % (name, :attr:`Node._v_attrs`[name]))
 
     Use whatever idiom you prefer to access the attributes.
 
@@ -269,6 +269,7 @@ class AttributeSet(hdf5extension.AttributeSet, object):
         'user' value returns only user attributes (this is the default).
         A 'sys' value returns only system attributes.  Finally, 'all'
         returns both system and user attributes.
+
         """
 
         if attrset == "user":
@@ -364,6 +365,7 @@ class AttributeSet(hdf5extension.AttributeSet, object):
         replaced.
 
         It does not log the change.
+
         """
 
         # Save this attribute to disk
@@ -424,6 +426,7 @@ class AttributeSet(hdf5extension.AttributeSet, object):
         the name is not a valid Python identifier.  A
         `PerformanceWarning` is issued when the recommended maximum
         number of attributes in a node is going to be exceeded.
+
         """
 
         nodeFile = self._v__nodefile
@@ -475,6 +478,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
         Deletes the specified existing PyTables attribute.
 
         It does not log the change.
+
         """
 
         # Delete the attribute from disk
@@ -497,6 +501,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
         Deletes the specified existing PyTables attribute from the
         attribute set.  If a nonexistent or system attribute is
         specified, an ``AttributeError`` is raised.
+
         """
 
         nodeFile = self._v__nodefile
@@ -547,6 +552,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
 
         A true value is returned if the attribute set has an attribute
         with the given name, false otherwise.
+
         """
 
         return name in self._v_attrnames
@@ -617,6 +623,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
         Copies all user and certain system attributes to the given where
         node (a Node instance - see :ref:`NodeClassDescr`), replacing
         the existing ones.
+
         """
 
         # AttributeSet must be defined in order to define a Node.
diff --git a/tables/carray.py b/tables/carray.py
index 9eb69ae..c457cb6 100644
--- a/tables/carray.py
+++ b/tables/carray.py
@@ -102,8 +102,8 @@ class CArray(Array):
 
         # Re-open a read another hyperslab
         h5f = tables.open_file(fileName)
-        print h5f
-        print h5f.root.carray[8:12, 18:22]
+        print(h5f)
+        print(h5f.root.carray[8:12, 18:22])
         h5f.close()
 
     The output for the previous script is something like::
@@ -249,7 +249,7 @@ class CArray(Array):
 
     def _g_copy_with_stats(self, group, name, start, stop, step,
                            title, filters, chunkshape, _log, **kwargs):
-        """Private part of Leaf.copy() for each kind of leaf"""
+        """Private part of Leaf.copy() for each kind of leaf."""
 
         (start, stop, step) = self._process_range_read(start, stop, step)
         maindim = self.maindim
diff --git a/tables/conditions.py b/tables/conditions.py
index 1777325..1757b7d 100644
--- a/tables/conditions.py
+++ b/tables/conditions.py
@@ -27,6 +27,7 @@ Functions:
     Compile a condition and extract usable index conditions.
 `call_on_recarr`
     Evaluate a function over a structured array.
+
 """
 
 import re
@@ -45,6 +46,7 @@ def _unsupported_operation_error(exception):
     """Make the \"no matching opcode\" Numexpr `exception` more clear.
 
     A new exception of the same kind is returned.
+
     """
 
     message = exception.args[0]
@@ -59,6 +61,7 @@ def _check_indexable_cmp(getidxcmp):
 
     This does some extra checking that Numexpr would perform later on
     the comparison if it was compiled within a complete condition.
+
     """
 
     def newfunc(exprnode, indexedcols):
@@ -66,7 +69,7 @@ def _check_indexable_cmp(getidxcmp):
         if result[0] is not None:
             try:
                 typeCompileAst(expressionToAST(exprnode))
-            except NotImplementedError, nie:
+            except NotImplementedError as nie:
                 # Try to make this Numexpr error less cryptic.
                 raise _unsupported_operation_error(nie)
         return result
@@ -142,15 +145,18 @@ def _get_indexable_cmp(exprnode, indexedcols):
 
 
 def _equiv_expr_node(x, y):
-    """Returns whether two ExpressionNodes are equivalent.  This is needed
-    because '==' is overridden on ExpressionNode to return a new ExpressionNode.
+    """Returns whether two ExpressionNodes are equivalent.
+
+    This is needed because '==' is overridden on ExpressionNode to
+    return a new ExpressionNode.
+
     """
     if not isinstance(x, ExpressionNode) and not isinstance(y, ExpressionNode):
         return x == y
-    elif type(x) is not type(y) or not isinstance(x, ExpressionNode) \
-                                or not isinstance(y, ExpressionNode) \
-                                or x.value != y.value or x.astKind != y.astKind \
-                                or len(x.children) != len(y.children):
+    elif (type(x) is not type(y) or not isinstance(x, ExpressionNode)
+            or not isinstance(y, ExpressionNode)
+            or x.value != y.value or x.astKind != y.astKind
+            or len(x.children) != len(y.children)):
         return False
     for xchild, ychild in zip(x.children, y.children):
         if not _equiv_expr_node(xchild, ychild):
@@ -162,12 +168,13 @@ def _get_idx_expr_recurse(exprnode, indexedcols, idxexprs, strexpr):
     """Here lives the actual implementation of the get_idx_expr() wrapper.
 
     'idxexprs' is a list of expressions in the form ``(var, (ops),
-    (limits))``. 'strexpr' is the indexable expression in string
-    format.  These parameters will be received empty (i.e. [], [''])
-    for the first time and populated during the different recursive
-    calls.  Finally, they are returned in the last level to the
-    original wrapper.  If 'exprnode' is not indexable, it will return
-    the tuple ([], ['']) so as to signal this.
+    (limits))``. 'strexpr' is the indexable expression in string format.
+    These parameters will be received empty (i.e. [], ['']) for the
+    first time and populated during the different recursive calls.
+    Finally, they are returned in the last level to the original
+    wrapper.  If 'exprnode' is not indexable, it will return the tuple
+    ([], ['']) so as to signal this.
+
     """
 
     not_indexable = ([], [''])
@@ -231,8 +238,8 @@ def _get_idx_expr_recurse(exprnode, indexedcols, idxexprs, strexpr):
     # ``(a <[=] x) & (x <[=] b)`` or ``(a >[=] x) & (x >[=] b)``
     # as ``a <[=] x <[=] b``, for the moment.
     op = exprnode.value
-    if lcolvar is not None and rcolvar is not None \
-      and _equiv_expr_node(lcolvar, rcolvar) and op == 'and':
+    if (lcolvar is not None and rcolvar is not None
+            and _equiv_expr_node(lcolvar, rcolvar) and op == 'and'):
         if lop in ['gt', 'ge'] and rop in ['lt', 'le']:  # l <= x <= r
             expr = (lcolvar, (lop, rop), (llim, rlim))
             return [expr]
@@ -299,6 +306,7 @@ def _get_idx_expr(expr, indexedcols):
 
     * ``a != 1`` and  ``c_bool != False``
     * ``~((a > 0) & (c_bool))``
+
     """
 
     return _get_idx_expr_recurse(expr, indexedcols, [], [''])
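
From the user's side all of this happens transparently inside
Table.where(): a conjunction of two comparisons on the same indexed
column collapses into one range lookup.  A self-contained sketch (names
and values are illustrative):

    import tables

    with tables.open_file('idx.h5', 'w') as h5f:
        tbl = h5f.create_table('/', 't', {'col': tables.Float64Col()})
        tbl.append([(float(i),) for i in range(100)])
        tbl.flush()
        tbl.cols.col.create_index()
        # served as a single range scan on the index
        hits = [r['col'] for r in tbl.where('(10 <= col) & (col < 20)')]
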
@@ -417,7 +425,7 @@ def compile_condition(condition, typemap, indexedcols):
         # reasons of inserting copy operators for unaligned,
         # *unidimensional* arrays.
         func = NumExpr(expr, signature)
-    except NotImplementedError, nie:
+    except NotImplementedError as nie:
         # Try to make this Numexpr error less cryptic.
         raise _unsupported_operation_error(nie)
     params = varnames
@@ -429,10 +437,11 @@ def compile_condition(condition, typemap, indexedcols):
 def call_on_recarr(func, params, recarr, param2arg=None):
     """Call `func` with `params` over `recarr`.
 
-    The `param2arg` function, when specified, is used to get an
-    argument given a parameter name; otherwise, the parameter itself
-    is used as an argument.  When the argument is a `Column` object,
-    the proper column from `recarr` is used as its value.
+    The `param2arg` function, when specified, is used to get an argument
+    given a parameter name; otherwise, the parameter itself is used as
+    an argument.  When the argument is a `Column` object, the proper
+    column from `recarr` is used as its value.
+
     """
 
     args = []
diff --git a/tables/definitions.pxd b/tables/definitions.pxd
index 60f9b64..0cb45b4 100644
--- a/tables/definitions.pxd
+++ b/tables/definitions.pxd
@@ -400,8 +400,8 @@ cdef extern from "hdf5.h" nogil:
   herr_t H5Pset_fapl_multi(hid_t fapl_id, H5FD_mem_t *memb_map,
                            hid_t *memb_fapl, char **memb_name,
                            haddr_t *memb_addr, hbool_t relax)
-  herr_t H5Pset_fapl_split(hid_t fapl_id, const_char *meta_ext,
-                           hid_t meta_plist_id, const_char *raw_ext,
+  herr_t H5Pset_fapl_split(hid_t fapl_id, char *meta_ext,
+                           hid_t meta_plist_id, char *raw_ext,
                            hid_t raw_plist_id)
   #herr_t H5Pget_fapl_mpio(hid_t fapl_id, MPI_Comm *comm, MPI_Info *info)
   #herr_t H5Pset_fapl_mpio(hid_t fapl_id, MPI_Comm comm, MPI_Info info)
diff --git a/tables/description.py b/tables/description.py
index 80b3871..923e9b9 100644
--- a/tables/description.py
+++ b/tables/description.py
@@ -14,6 +14,7 @@
 
 # Imports
 # =======
+from __future__ import print_function
 import sys
 import copy
 import warnings
@@ -258,9 +259,9 @@ def _generate_col_classes():
     # Bottom-level complex classes are not in the type map, of course.
     # We still want the user to get the compatibility warning, though.
     cprefixes.extend(['Complex32', 'Complex64', 'Complex128'])
-    if hasattr(numpy, 'complex192'):
+    if hasattr(atom, 'Complex192Atom'):
         cprefixes.append('Complex192')
-    if hasattr(numpy, 'complex256'):
+    if hasattr(atom, 'Complex256Atom'):
         cprefixes.append('Complex256')
 
     for cprefix in cprefixes:
@@ -269,7 +270,7 @@ def _generate_col_classes():
 
 # Create all column classes.
 for _newclass in _generate_col_classes():
-    exec '%s = _newclass' % _newclass.__name__
+    exec('%s = _newclass' % _newclass.__name__)
 del _newclass
 
 
@@ -404,6 +405,9 @@ class Description(object):
 
     def __init__(self, classdict, nestedlvl=-1, validate=True):
 
+        if not classdict:
+            raise ValueError("cannot create an empty data type")
+
         # Do a shallow copy of classdict just in case this is going to
         # be shared by other instances
         newdict = self.__dict__
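
The new guard turns an empty description into an immediate, explicit
error instead of a confusing failure further down:

    >>> from tables.description import Description
    >>> Description({})
    Traceback (most recent call last):
    ...
    ValueError: cannot create an empty data type
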
@@ -427,29 +431,29 @@ class Description(object):
         for (name, descr) in classdict.iteritems():
             if name.startswith('_v_'):
                 if name in newdict:
-                    # print "Warning!"
+                    # print("Warning!")
                     # special methods &c: copy to newdict, warn about conflicts
                     warnings.warn("Can't set attr %r in description class %r"
                                   % (name, self))
                 else:
-                    # print "Special variable!-->", name, classdict[name]
+                    # print("Special variable!-->", name, classdict[name])
                     newdict[name] = descr
                 continue  # This variable is not needed anymore
 
             columns = None
             if (type(descr) == type(IsDescription) and
                     issubclass(descr, IsDescription)):
-                # print "Nested object (type I)-->", name
+                # print("Nested object (type I)-->", name)
                 columns = descr().columns
             elif (type(descr.__class__) == type(IsDescription) and
                   issubclass(descr.__class__, IsDescription)):
-                # print "Nested object (type II)-->", name
+                # print("Nested object (type II)-->", name)
                 columns = descr.columns
             elif isinstance(descr, dict):
-                # print "Nested object (type III)-->", name
+                # print("Nested object (type III)-->", name)
                 columns = descr
             else:
-                # print "Nested object (type IV)-->", name
+                # print("Nested object (type IV)-->", name)
                 descr = copy.copy(descr)
             # The copies above and below ensure that the structures
             # provided by the user will remain unchanged even if we
@@ -677,10 +681,10 @@ type can only take the parameters 'All', 'Col' or 'Description'.""")
 
 
 class MetaIsDescription(type):
-    """Helper metaclass to return the class variables as a dictionary"""
+    """Helper metaclass to return the class variables as a dictionary."""
 
     def __new__(cls, classname, bases, classdict):
-        """Return a new class with a "columns" attribute filled"""
+        """Return a new class with a "columns" attribute filled."""
 
         newdict = {"columns": {}, }
         if '__doc__' in classdict:
@@ -805,7 +809,7 @@ def dtype_from_descr(descr, byteorder=None):
 
 
 if __name__ == "__main__":
-    """Test code"""
+    """Test code."""
 
     class Info(IsDescription):
         _v_pos = 2
@@ -813,7 +817,7 @@ if __name__ == "__main__":
         Value = Float64Col()
 
     class Test(IsDescription):
-        """A description that has several columns"""
+        """A description that has several columns."""
 
         x = Col.from_type("int32", 2, 0, pos=0)
         y = Col.from_kind('float', dflt=1, shape=(2, 3))
@@ -874,31 +878,31 @@ if __name__ == "__main__":
     klass = Test()
     # klass = Info()
     desc = Description(klass.columns)
-    print "Description representation (short) ==>", desc
-    print "Description representation (long) ==>", repr(desc)
-    print "Column names ==>", desc._v_names
-    print "Column x ==>", desc.x
-    print "Column Info ==>", desc.Info
-    print "Column Info.value ==>", desc.Info.Value
-    print "Nested column names  ==>", desc._v_nested_names
-    print "Defaults ==>", desc._v_dflts
-    print "Nested Formats ==>", desc._v_nested_formats
-    print "Nested Descriptions ==>", desc._v_nested_descr
-    print "Nested Descriptions (info) ==>", desc.info._v_nested_descr
-    print "Total size ==>", desc._v_dtype.itemsize
+    print("Description representation (short) ==>", desc)
+    print("Description representation (long) ==>", repr(desc))
+    print("Column names ==>", desc._v_names)
+    print("Column x ==>", desc.x)
+    print("Column Info ==>", desc.Info)
+    print("Column Info.value ==>", desc.Info.Value)
+    print("Nested column names  ==>", desc._v_nested_names)
+    print("Defaults ==>", desc._v_dflts)
+    print("Nested Formats ==>", desc._v_nested_formats)
+    print("Nested Descriptions ==>", desc._v_nested_descr)
+    print("Nested Descriptions (info) ==>", desc.info._v_nested_descr)
+    print("Total size ==>", desc._v_dtype.itemsize)
 
     # check _f_walk
     for object in desc._f_walk():
         if isinstance(object, Description):
-            print "******begin object*************",
-            print "name -->", object._v_name
-            # print "name -->", object._v_dtype.name
-            # print "object childs-->", object._v_names
-            # print "object nested childs-->", object._v_nested_names
-            print "totalsize-->", object._v_dtype.itemsize
+            print("******begin object*************", end=' ')
+            print("name -->", object._v_name)
+            # print("name -->", object._v_dtype.name)
+            # print("object childs-->", object._v_names)
+            # print("object nested childs-->", object._v_nested_names)
+            print("totalsize-->", object._v_dtype.itemsize)
         else:
             # pass
-            print "leaf -->", object._v_name, object.dtype
+            print("leaf -->", object._v_name, object.dtype)
 
     class testDescParent(IsDescription):
         c = Int32Col()
diff --git a/tables/earray.py b/tables/earray.py
index c16d41e..1465fa4 100644
--- a/tables/earray.py
+++ b/tables/earray.py
@@ -113,7 +113,7 @@ class EArray(CArray):
 
         # Read the string ``EArray`` we have created on disk.
         for s in array_c:
-            print 'array_c[%s] => %r' % (array_c.nrow, s)
+            print('array_c[%s] => %r' % (array_c.nrow, s))
         # Close the file.
         fileh.close()
 
diff --git a/tables/exceptions.py b/tables/exceptions.py
index 9bd1bdd..234fc1d 100644
--- a/tables/exceptions.py
+++ b/tables/exceptions.py
@@ -174,7 +174,7 @@ class HDF5ExtError(RuntimeError):
         return msg
 
     def format_h5_backtrace(self, backtrace=None):
-        """Convert the HDF5 trace back represented as a list of tuples
+        """Convert the HDF5 trace back represented as a list of tuples.
         (see :attr:`HDF5ExtError.h5backtrace`) into a string.
 
         .. versionadded:: 2.4
@@ -200,6 +200,7 @@ class ClosedNodeError(ValueError):
     """The operation can not be completed because the node is closed.
 
     For instance, listing the children of a closed group is not allowed.
+
     """
 
     pass
@@ -210,18 +211,19 @@ class ClosedFileError(ValueError):
 
     For instance, getting an existing node from a closed file is not
     allowed.
+
     """
 
     pass
 
 
 class FileModeError(ValueError):
-    """
-    The operation can not be carried out because the mode in which the
+    """The operation can not be carried out because the mode in which the
     hosting file is opened is not adequate.
 
     For instance, removing an existing leaf from a read-only file is not
     allowed.
+
     """
 
     pass
@@ -241,6 +243,7 @@ class NodeError(AttributeError, LookupError):
     before another one can take its place.  This is done to protect
     interactive users from inadvertently deleting whole trees of data by
     a single erroneous command.
+
     """
 
     pass
@@ -251,6 +254,7 @@ class NoSuchNodeError(NodeError):
 
     This exception is raised when an operation gets a path name or a
     ``(where, name)`` pair leading to a nonexistent node.
+
     """
 
     pass
@@ -262,6 +266,7 @@ class UndoRedoError(Exception):
     This exception indicates a problem related to the Undo/Redo
     mechanism, such as trying to undo or redo actions with this
     mechanism disabled, or going to a nonexistent mark.
+
     """
 
     pass
@@ -271,6 +276,7 @@ class UndoRedoWarning(Warning):
     """Issued when an action not supporting Undo/Redo is run.
 
     This warning is only shown when the Undo/Redo mechanism is enabled.
+
     """
 
     pass
@@ -294,6 +300,7 @@ class PerformanceWarning(Warning):
     This warning is issued when an operation is made on the database
     which may cause it to slow down on future operations (i.e. making
     the node tree grow too much).
+
     """
 
     pass
@@ -305,6 +312,7 @@ class FlavorError(ValueError):
     This exception is raised when an unsupported or unavailable flavor
     is given to a dataset, or when a conversion of data between two
     given flavors is not supported nor available.
+
     """
 
     pass
@@ -319,6 +327,7 @@ class FlavorWarning(Warning):
     flavor in a read-only file).
 
     See the `FlavorError` class for more information.
+
     """
 
     pass
@@ -330,6 +339,7 @@ class FiltersWarning(Warning):
     This warning is issued when a valid filter is specified but it is
     not available in the system.  It may mean that an available default
     filter is to be used instead.
+
     """
 
     pass
@@ -341,6 +351,7 @@ class OldIndexWarning(Warning):
     This warning is issued when an index in an unsupported format is
     found.  The index will be marked as invalid and will behave as if
     it doesn't exist.
+
     """
 
     pass
@@ -351,6 +362,7 @@ class DataTypeWarning(Warning):
 
     This warning is issued when an unsupported HDF5 data type is found
     (normally in a file created with a tool other than PyTables).
+
     """
 
     pass
@@ -361,6 +373,7 @@ class ExperimentalFeatureWarning(Warning):
 
     This warning is issued when using a functionality that is still
     experimental and that users have to use with care.
+
     """
     pass
 
diff --git a/tables/expression.py b/tables/expression.py
index 959a926..1d8c6ce 100644
--- a/tables/expression.py
+++ b/tables/expression.py
@@ -12,6 +12,7 @@
 
 """Here is defined the Expr class."""
 
+from __future__ import print_function
 import sys
 import warnings
 
@@ -313,11 +314,12 @@ class Expr(object):
     def set_inputs_range(self, start=None, stop=None, step=None):
         """Define a range for all inputs in expression.
 
-        The computation will only take place for the range defined by the
-        start, stop and step parameters in the main dimension of inputs (or the
-        leading one, if the object lacks the concept of main dimension, like a
-        NumPy container).  If not a common main dimension exists for all
-        inputs, the leading dimension will be used instead.
+        The computation will only take place for the range defined by
+        the start, stop and step parameters in the main dimension of
+        inputs (or the leading one, if the object lacks the concept of
+        main dimension, like a NumPy container).  If no common main
+        dimension exists for all inputs, the leading dimension will be
+        used instead.
 
         """
 
@@ -619,7 +621,7 @@ value of dimensions that are orthogonal (and preferably close) to the
                 out.append(rout)
             else:
                 # Compute the slice to be filled in output
-                start3 = o_start + (start2 - start) / step
+                start3 = o_start + (start2 - start) // step
                 stop3 = start3 + nrowsinbuf * o_step
                 if stop3 > o_stop:
                     stop3 = o_stop
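
The ``/`` to ``//`` change above keeps the computed slice start integral
once true division is in effect (Python 3, or ``from __future__ import
division``).  A standalone check with made-up values::

    # True division turns the offset into a float, which is unusable as
    # an index; floor division preserves integer arithmetic.
    start, start2, step, o_start = 0, 6, 4, 10

    true_div = o_start + (start2 - start) / step     # 11.5 on Python 3
    floor_div = o_start + (start2 - start) // step   # 11

    assert floor_div == 11
    assert true_div != floor_div   # on Python 3
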
@@ -708,8 +710,8 @@ if __name__ == "__main__":
     expr.set_output(out)
     d = expr.eval()
 
-    print "returned-->", repr(d)
-    # print `d[:]`
+    print("returned-->", repr(d))
+    # print(`d[:]`)
 
     f.close()
 
diff --git a/tables/file.py b/tables/file.py
index 7146bb5..ab236f6 100644
--- a/tables/file.py
+++ b/tables/file.py
@@ -14,16 +14,19 @@
 
 This module support importing generic HDF5 files, on top of which
 PyTables files are created, read or extended. If a file exists, an
-object tree mirroring their hierarchical structure is created in
-memory. File class offer methods to traverse the tree, as well as to
-create new nodes.
+object tree mirroring their hierarchical structure is created in memory.
+The File class offers methods to traverse the tree, as well as to
+create new nodes.
+
 """
 
+from __future__ import print_function
 import os
 import sys
 import time
 import weakref
 import warnings
+import collections
 
 import numexpr
 import numpy
@@ -33,7 +36,7 @@ from tables import hdf5extension
 from tables import utilsextension
 from tables import parameters
 from tables.exceptions import (ClosedFileError, FileModeError, NodeError,
-                               NoSuchNodeError, UndoRedoError,
+                               NoSuchNodeError, UndoRedoError, ClosedNodeError,
                                PerformanceWarning)
 from tables.registry import get_class_by_name
 from tables.path import join_path, split_path
@@ -74,15 +77,65 @@ from tables._past import previous_api, previous_api_property
 # format_version = "1.6"  # Support for NumPy objects and new flavors for
 #                         # objects.
 #                         # 1.6 was introduced in pytables 1.3
-#format_version = "2.0"  # Pickles are not used anymore in system attrs
-#                        # 2.0 was introduced in PyTables 2.0
+#format_version = "2.0"   # Pickles are not used anymore in system attrs
+#                         # 2.0 was introduced in PyTables 2.0
 format_version = "2.1"  # Numeric and numarray flavors are gone.
 
 compatible_formats = []  # Old format versions we can read
                          # Empty means that we support all the old formats
 
+
+class _FileRegistry(object):
+    def __init__(self):
+        self._name_mapping = collections.defaultdict(set)
+        self._handlers = set()
+
+    @property
+    def filenames(self):
+        return self._name_mapping.keys()
+
+    @property
+    def handlers(self):
+        #return set(self._handlers)  # return a copy
+        return self._handlers
+
+    def __len__(self):
+        return len(self._handlers)
+
+    def __contains__(self, filename):
+        return filename in self.filenames
+
+    def add(self, handler):
+        self._name_mapping[handler.filename].add(handler)
+        self._handlers.add(handler)
+
+    def remove(self, handler):
+        filename = handler.filename
+        self._name_mapping[filename].remove(handler)
+        # remove empty keys
+        if not self._name_mapping[filename]:
+            del self._name_mapping[filename]
+        self._handlers.remove(handler)
+
+    def get_handlers_by_name(self, filename):
+        #return set(self._name_mapping[filename])  # return a copy
+        return self._name_mapping[filename]
+
+    def close_all(self):
+        are_open_files = len(self._handlers) > 0
+        if are_open_files:
+            sys.stderr.write("Closing remaining open files:")
+        handlers = list(self._handlers)  # make a copy
+        for fileh in handlers:
+            sys.stderr.write("%s..." % fileh.filename)
+            fileh.close()
+            sys.stderr.write("done")
+        if are_open_files:
+            sys.stderr.write("\n")
+
+
 # Dict of opened files (keys are filenames and values filehandlers)
-_open_files = {}
+_open_files = _FileRegistry()
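
A small usage sketch of the registry above, with a hypothetical dummy
handler standing in for a File instance (only the ``filename``
attribute is exercised)::

    class _DummyHandler(object):
        def __init__(self, filename):
            self.filename = filename

    registry = _FileRegistry()
    h1, h2 = _DummyHandler('data.h5'), _DummyHandler('data.h5')
    registry.add(h1)
    registry.add(h2)
    assert 'data.h5' in registry and len(registry) == 2

    registry.remove(h1)
    assert registry.get_handlers_by_name('data.h5') == set([h2])

    registry.remove(h2)
    assert 'data.h5' not in registry   # empty keys are pruned
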
 
 # Opcodes for do-undo actions
 _op_to_code = {
@@ -147,18 +200,25 @@ def copy_file(srcfilename, dstfilename, overwrite=False, **kwargs):
     """
 
     # Open the source file.
-    srcFileh = open_file(srcfilename, mode="r")
+    srcfileh = open_file(srcfilename, mode="r")
 
     try:
         # Copy it to the destination file.
-        srcFileh.copy_file(dstfilename, overwrite=overwrite, **kwargs)
+        srcfileh.copy_file(dstfilename, overwrite=overwrite, **kwargs)
     finally:
         # Close the source file.
-        srcFileh.close()
+        srcfileh.close()
 
 copyFile = previous_api(copy_file)
 
 
+_hdf5version = utilsextension.get_hdf5_version().split('-')[0]
+if tuple(map(int, _hdf5version.split('.'))) < (1, 8, 7):
+    _FILE_OPEN_POLICY = 'strict'
+else:
+    _FILE_OPEN_POLICY = 'default'
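
The gate above reduces an HDF5 version string such as '1.8.4-patch1' to
a comparable tuple before choosing the policy.  The same reduction,
standalone (``_version_tuple`` is a hypothetical helper)::

    def _version_tuple(version_string):
        # Drop any '-patchN' suffix, then split on dots.
        return tuple(map(int, version_string.split('-')[0].split('.')))

    assert _version_tuple('1.8.4-patch1') == (1, 8, 4)
    assert _version_tuple('1.8.4-patch1') < (1, 8, 7)   # -> 'strict'
    assert _version_tuple('1.8.12') >= (1, 8, 7)        # -> 'default'
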
+
+
 def open_file(filename, mode="r", title="", root_uep="/", filters=None,
               **kwargs):
     """Open a PyTables (or generic HDF5) file and return a File object.
@@ -221,114 +281,305 @@ def open_file(filename, mode="r", title="", root_uep="/", filters=None,
 
     """
 
-    # Get the list of already opened files
-    ofiles = [fname for fname in _open_files]
-    if filename in ofiles:
-        filehandle = _open_files[filename]
-        omode = filehandle.mode
-        # 'r' is incompatible with everything except 'r' itself
-        if mode == 'r' and omode != 'r':
-            raise ValueError(
-                "The file '%s' is already opened, but "
-                "not in read-only mode (as requested)." % filename)
-        # 'a' and 'r+' are compatible with everything except 'r'
-        elif mode in ('a', 'r+') and omode == 'r':
-            raise ValueError(
-                "The file '%s' is already opened, but "
-                "in read-only mode.  Please close it before "
-                "reopening in append mode." % filename)
-        # 'w' means that we want to destroy existing contents
-        elif mode == 'w':
+    # XXX filename normalization ??
+
+    # Check already opened files
+    if _FILE_OPEN_POLICY == 'strict':
+        # This policy does not allow opening the same file multiple times
+        # even in read-only mode
+        if filename in _open_files:
             raise ValueError(
-                "The file '%s' is already opened.  Please "
-                "close it before reopening in write mode." % filename)
-        else:
-            # The file is already open and modes are compatible
-            # Increase the number of openings for this file
-            filehandle._open_count += 1
-            return filehandle
+                "The file '%s' is already opened.  "
+                "Please close it before reopening.  "
+                "HDF5 v.%s, FILE_OPEN_POLICY = '%s'" % (
+                    filename, utilsextension.get_hdf5_version(),
+                    _FILE_OPEN_POLICY))
+    else:
+        for filehandle in _open_files.get_handlers_by_name(filename):
+            omode = filehandle.mode
+            # 'r' is incompatible with everything except 'r' itself
+            if mode == 'r' and omode != 'r':
+                raise ValueError(
+                    "The file '%s' is already opened, but "
+                    "not in read-only mode (as requested)." % filename)
+            # 'a' and 'r+' are compatible with everything except 'r'
+            elif mode in ('a', 'r+') and omode == 'r':
+                raise ValueError(
+                    "The file '%s' is already opened, but "
+                    "in read-only mode.  Please close it before "
+                    "reopening in append mode." % filename)
+            # 'w' means that we want to destroy existing contents
+            elif mode == 'w':
+                raise ValueError(
+                    "The file '%s' is already opened.  Please "
+                    "close it before reopening in write mode." % filename)
+
     # Finally, create the File instance, and return it
     return File(filename, mode, title, root_uep, filters, **kwargs)
 
 openFile = previous_api(open_file)
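
Under the 'default' policy the checks above allow several read-only
handles on one file but refuse incompatible modes.  A sketch of the
observable behaviour (hypothetical file name, HDF5 >= 1.8.7 assumed)::

    import tables

    f = tables.open_file('demo.h5', mode='w')
    f.close()

    f1 = tables.open_file('demo.h5', mode='r')
    f2 = tables.open_file('demo.h5', mode='r')   # OK: both read-only
    try:
        tables.open_file('demo.h5', mode='w')    # clashes with open 'r'
    except ValueError as exc:
        print('refused: %s' % exc)

    f1.close()
    f2.close()
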
 
 
-class _AliveNodes(dict):
-    """Stores strong or weak references to nodes in a transparent way."""
+# A dumb class that doesn't keep anything at all
+class _NoCache(object):
+    def __len__(self):
+        return 0
 
-    def __init__(self, nodeCacheSlots):
-        if nodeCacheSlots > 0:
-            self.hasdeadnodes = True
-        else:
-            self.hasdeadnodes = False
-        if nodeCacheSlots >= 0:
-            self.hassoftlinks = True
-        else:
-            self.hassoftlinks = False
-        self.nodeCacheSlots = nodeCacheSlots
-        super(_AliveNodes, self).__init__()
+    def __contains__(self, key):
+        return False
 
-    def __getitem__(self, key):
-        if self.hassoftlinks:
-            ref = super(_AliveNodes, self).__getitem__(key)()
-        else:
-            ref = super(_AliveNodes, self).__getitem__(key)
-        return ref
+    def __iter__(self):
+        return iter([])
+
+    def __setitem__(self, key, value):
+        pass
+
+    __marker = object()
+
+    def pop(self, key, d=__marker):
+        if d is not self.__marker:
+            return d
+        raise KeyError(key)
+
+
+class _DictCache(dict):
+    def __init__(self, nslots):
+        if nslots < 1:
+            raise ValueError("Invalid number of slots: %d" % nslots)
+        self.nslots = nslots
+        super(_DictCache, self).__init__()
 
     def __setitem__(self, key, value):
-        if self.hassoftlinks:
-            ref = weakref.ref(value)
+        # Check if we are running out of space
+        if len(self) > self.nslots:
+            warnings.warn(
+                "the dictionary of node cache is exceeding the recommended "
+                "maximum number (%d); be ready to see PyTables asking for "
+                "*lots* of memory and possibly slow I/O." % (
+                    self.nslots), PerformanceWarning)
+        super(_DictCache, self).__setitem__(key, value)
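
Behaviour of the two fallback caches above, in isolation: ``_NoCache``
stores nothing, while ``_DictCache`` never evicts and only warns once it
grows past ``nslots``.  A small check, assuming both classes as defined
here::

    nc = _NoCache()
    nc['/x'] = 'node'
    assert len(nc) == 0 and '/x' not in nc

    dc = _DictCache(nslots=2)
    for i, key in enumerate(['/a', '/b', '/c', '/d']):
        dc[key] = i   # '/d' triggers a PerformanceWarning (len 3 > 2)
    assert len(dc) == 4   # nothing was evicted
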
+
+
+class NodeManager(object):
+    def __init__(self, nslots=64, node_factory=None):
+        super(NodeManager, self).__init__()
+
+        self.registry = weakref.WeakValueDictionary()
+
+        if nslots > 0:
+            cache = lrucacheextension.NodeCache(nslots)
+        elif nslots == 0:
+            cache = _NoCache()
         else:
-            ref = value
-            # Check if we are running out of space
-            if self.nodeCacheSlots < 0 and len(self) > -self.nodeCacheSlots:
-                warnings.warn("the dictionary of alive nodes is exceeding "
-                              "the recommended maximum number (%d); "
-                              "be ready to see PyTables asking for *lots* "
-                              "of memory and possibly slow I/O." % (
-                              -self.nodeCacheSlots), PerformanceWarning)
-        super(_AliveNodes, self).__setitem__(key, ref)
+            # nslots < 0
+            cache = _DictCache(-nslots)
 
+        self.cache = cache
 
-class _DeadNodes(lrucacheextension.NodeCache):
-    pass
+        # node_factory(node_path)
+        self.node_factory = node_factory
 
+    def register_node(self, node, key):
+        if key is None:
+            key = node._v_pathname
 
-# A dumb class that doesn't keep nothing at all
-class _NoDeadNodes(object):
-    def __len__(self):
-        return 0
+        if key in self.registry:
+            if not self.registry[key]._v_isopen:
+                del self.registry[key]
+            elif self.registry[key] is not node:
+                raise RuntimeError('trying to register a node with an '
+                                   'existing key: ``%s``' % key)
+        else:
+            self.registry[key] = node
+
+    def cache_node(self, node, key=None):
+        if key is None:
+            key = node._v_pathname
+
+        self.register_node(node, key)
+        if key in self.cache:
+            oldnode = self.cache.pop(key)
+            if oldnode is not node and oldnode._v_isopen:
+                raise RuntimeError('trying to cache a node with an '
+                                   'existing key: ``%s``' % key)
+
+        self.cache[key] = node
+
+    def get_node(self, key):
+        node = self.cache.pop(key, None)
+        if node is not None:
+            if node._v_isopen:
+                self.cache_node(node, key)
+                return node
+            else:
+                # this should not happen
+                warnings.warn("a closed node found in the cache: ``%s``" % key)
+
+        if key in self.registry:
+            node = self.registry[key]
+            if node is None:
+                # this should not happen since WeakValueDictionary drops all
+                # dead weakrefs
+                warnings.warn("None is stored in the registry for key: "
+                              "``%s``" % key)
+            elif node._v_isopen:
+                self.cache_node(node, key)
+                return node
+            else:
+                # this should not happen
+                warnings.warn("a closed node found in the registry: "
+                              "``%s``" % key)
+                del self.registry[key]
+                node = None
 
-    def __contains__(self, key):
-        return False
+        if self.node_factory:
+            node = self.node_factory(key)
+            self.cache_node(node, key)
 
-    def __iter__(self):
-        return iter([])
+        return node
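
The lookup cascade in ``get_node`` above tries the cache, then the
weak-value registry, then the ``node_factory`` fallback.  A sketch with
a hypothetical minimal node type (``nslots=0`` selects ``_NoCache``, so
the LRU extension is not needed; the registry still deduplicates)::

    class _FakeNode(object):
        def __init__(self, pathname):
            self._v_pathname = pathname
            self._v_isopen = True

    manager = NodeManager(nslots=0, node_factory=_FakeNode)
    node = manager.get_node('/demo')           # miss: factory builds it
    assert manager.get_node('/demo') is node   # hit: weak registry
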
 
+    def rename_node(self, oldkey, newkey):
+        for cache in (self.cache, self.registry):
+            if oldkey in cache:
+                node = cache.pop(oldkey)
+                cache[newkey] = node
 
-class _NodeDict(tables.misc.proxydict.ProxyDict):
-    """A proxy dictionary which is able to delegate access to missing items
-    to the container object (a `File`)."""
+    def drop_from_cache(self, nodepath):
+        """Remove the node from the cache."""
 
-    def _get_value_from_container(self, container, key):
-        return container.get_node(key)
+        # Remove the node from the cache.
+        self.cache.pop(nodepath, None)
 
-    _getValueFromContainer = previous_api(_get_value_from_container)
+    def drop_node(self, node, check_unregistered=True):
+        """Drop the `node`.
 
-    def _condition(self, node):
-        """Nodes fulfilling the condition are considered to belong here."""
-        raise NotImplementedError
+        Remove the node from the cache and, if it has no more references,
+        close it.
 
+        """
 
-    # def __len__(self):
-    #    return len(list(self.iterkeys()))
+        # Remove all references to the node.
+        nodepath = node._v_pathname
+
+        self.drop_from_cache(nodepath)
+
+        if nodepath in self.registry:
+            if not node._v_isopen:
+                del self.registry[nodepath]
+        elif check_unregistered:
+            # If the node is not in the registry (this should never happen)
+            # we close it forcibly since it is not ensured that the __del__
+            # method is called for objects that are still alive when the
+            # interpreter is shut down
+            if node._v_isopen:
+                warnings.warn("dropping a node that is not in the registry: "
+                              "``%s``" % nodepath)
+
+                node._g_pre_kill_hook()
+                node._f_close()
+
+    def flush_nodes(self):
+        # Only iterate over the nodes in the registry since nodes in the cache
+        # should always have an entry in the registry
+        closed_keys = []
+        for path, node in self.registry.items():
+            if not node._v_isopen:
+                closed_keys.append(path)
+            elif '/_i_' not in path:  # Indexes do not need to be flushed
+                if isinstance(node, Leaf):
+                    node.flush()
+
+        for path in closed_keys:
+            # self.cache.pop(path, None)
+            if path in self.cache:
+                warnings.warn("closed node the cache: ``%s``" % path)
+                self.cache.pop(path, None)
+            self.registry.pop(path)
+
+    @staticmethod
+    def _close_nodes(nodepaths, get_node):
+        for nodepath in nodepaths:
+            try:
+                node = get_node(nodepath)
+            except KeyError:
+                pass
+            else:
+                if not node._v_isopen or node._v__deleting:
+                    continue
+
+                try:
+                    # Prevent descendent nodes from also iterating over
+                    # their descendents, which this loop is already
+                    # going to close.
+                    if hasattr(node, '_f_get_child'):
+                        node._g_close()
+                    else:
+                        node._f_close()
+                    del node
+                except ClosedNodeError:
+                    #import traceback
+                    #type_, value, tb = sys.exc_info()
+                    #exception_dump = ''.join(
+                    #    traceback.format_exception(type_, value, tb))
+                    #warnings.warn(
+                    #    "A '%s' exception occurred trying to close a node "
+                    #    "that was supposed to be open.\n"
+                    #    "%s" % (type_.__name__, exception_dump))
+                    pass
+
+    def close_subtree(self, prefix='/'):
+        if not prefix.endswith('/'):
+            prefix = prefix + '/'
+
+        cache = self.cache
+        registry = self.registry
+
+        # Ensure tables are closed before their indices
+        paths = [
+            path for path in cache
+            if path.startswith(prefix) and '/_i_' not in path
+        ]
+        self._close_nodes(paths, cache.pop)
+
+        # Close everything else (i.e. indices)
+        paths = [path for path in cache if path.startswith(prefix)]
+        self._close_nodes(paths, cache.pop)
+
+        # Ensure tables are closed before their indices
+        paths = [
+            path for path in registry
+            if path.startswith(prefix) and '/_i_' not in path
+        ]
+        self._close_nodes(paths, registry.pop)
+
+        # Close everything else (i.e. indices)
+        paths = [path for path in registry if path.startswith(prefix)]
+        self._close_nodes(paths, registry.pop)
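
The two passes above ensure that tables are closed before their
'/_i_' index groups.  The same partition over a made-up path list::

    paths = ['/t1', '/t1/_i_t1', '/g/t2', '/g/t2/_i_t2']
    prefix = '/'
    tables_first = [p for p in paths
                    if p.startswith(prefix) and '/_i_' not in p]
    indices_last = [p for p in paths
                    if p.startswith(prefix) and '/_i_' in p]
    assert tables_first == ['/t1', '/g/t2']
    assert indices_last == ['/t1/_i_t1', '/g/t2/_i_t2']
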
+
+    def shutdown(self):
+        registry = self.registry
+        cache = self.cache
+
+        #self.close_subtree('/')
+
+        keys = list(cache)  # copy
+        for key in keys:
+            node = cache.pop(key)
+            if node._v_isopen:
+                registry.pop(node._v_pathname, None)
+                node._f_close()
+
+        while registry:
+            key, node = registry.popitem()
+            if node._v_isopen:
+                node._f_close()
 
 
 class File(hdf5extension.File, object):
     """The in-memory representation of a PyTables file.
 
     An instance of this class is returned when a PyTables file is
-    opened with the :func`tables.open_file` function. It offers methods
+    opened with the :func:`tables.open_file` function. It offers methods
     to manipulate (create, rename, delete...) nodes and handle their
     attributes, as well as methods to traverse the object tree.
     The *user entry point* to the object tree attached to the HDF5 file
@@ -488,7 +739,16 @@ class File(hdf5extension.File, object):
 
     open_count = property(
         lambda self: self._open_count, None, None,
-        "The number of times this file has been opened currently.")
+        """The number of times this file handle has been opened.
+
+        .. versionchanged:: 3.1
+           The mechanism for caching and sharing file handles has been
+           removed in PyTables 3.1.  Now this property should always
+           be 1 (or 0 for closed files).
+
+        .. deprecated:: 3.1
+
+        """)
 
     ## </properties>
 
@@ -501,6 +761,10 @@ class File(hdf5extension.File, object):
         self.mode = mode
         """The mode in which the file was opened."""
 
+        if mode not in ('r', 'r+', 'a', 'w'):
+            raise ValueError("invalid mode string ``%s``. Allowed modes are: "
+                             "'r', 'r+', 'a' and 'w'" % mode)
+
         # Get all the parameters in parameter file(s)
         params = dict([(k, v) for k, v in parameters.__dict__.iteritems()
                        if k.isupper() and not k.startswith('_')])
@@ -533,16 +797,11 @@ class File(hdf5extension.File, object):
             self.format_version = format_version
             """The PyTables version number of this file."""
 
-        # Nodes referenced by a variable are kept in `_aliveNodes`.
-        # When they are no longer referenced, they move themselves
-        # to `_deadNodes`, where they are kept until they are referenced again
-        # or they are preempted from it by other unreferenced nodes.
-        nodeCacheSlots = params['NODE_CACHE_SLOTS']
-        self._aliveNodes = _AliveNodes(nodeCacheSlots)
-        if nodeCacheSlots > 0:
-            self._deadNodes = _DeadNodes(nodeCacheSlots)
-        else:
-            self._deadNodes = _NoDeadNodes()
+        # The node manager must be initialized before the root group
+        # initialization, but the node_factory attribute is only set later
+        # because it is a bound method of the root group itself.
+        node_cache_slots = params['NODE_CACHE_SLOTS']
+        self._node_manager = NodeManager(nslots=node_cache_slots)
 
         # For the moment Undo/Redo is not enabled.
         self._undoEnabled = False
@@ -554,7 +813,7 @@ class File(hdf5extension.File, object):
         """True if the underlying file os open, False otherwise."""
 
         # Append the name of the file to the global dict of files opened.
-        _open_files[self.filename] = self
+        _open_files.add(self)
 
         # Set the number of times this file has been opened to 1
         self._open_count = 1
@@ -565,6 +824,7 @@ class File(hdf5extension.File, object):
         # Complete the creation of the root node
         # (see the explanation in ``RootGroup.__init__()``.
         root._g_post_init_hook()
+        self._node_manager.node_factory = self.root._g_load_child
 
         # Save the PyTables format version for this file.
         if new:
@@ -582,11 +842,14 @@ class File(hdf5extension.File, object):
         numexpr.set_vml_num_threads(params['MAX_NUMEXPR_THREADS'])
 
     def __get_root_group(self, root_uep, title, filters):
-        """Returns a Group instance which will act as the root group
-        in the hierarchical tree. If file is opened in "r", "r+" or
-        "a" mode, and the file already exists, this method dynamically
-        builds a python object tree emulating the structure present on
-        file."""
+        """Returns a Group instance which will act as the root group in the
+        hierarchical tree.
+
+        If the file is opened in "r", "r+" or "a" mode, and the file
+        already exists, this method dynamically builds a Python object
+        tree emulating the structure present in the file.
+
+        """
 
         self._v_objectid = self._get_file_id()
 
@@ -1205,7 +1468,7 @@ class File(hdf5extension.File, object):
     createVLArray = previous_api(create_vlarray)
 
     def create_hard_link(self, where, name, target, createparents=False):
-        """Create a hard link
+        """Create a hard link.
 
         Create a hard link to a `target` node with the given `name` in
         `where` location.  `target` can be a node object or a path
@@ -1228,15 +1491,18 @@ class File(hdf5extension.File, object):
     createHardLink = previous_api(create_hard_link)
 
     def create_soft_link(self, where, name, target, createparents=False):
-        """
-        Create a soft link (aka symbolic link) to a `target` node with
+        """Create a soft link (aka symbolic link) to a `target` node.
+
+        Create a soft link (aka symbolic link) to a `target` node with
         the given `name` in `where` location.  `target` can be a node
         object or a path string.  If `createparents` is true, the
-        intermediate groups required for reaching `where` are created
+        intermediate groups required for reaching `where` are created
+        (the default is not doing so).
 
-        The returned node is a SoftLink instance.  See the SoftLink class
-        (in :ref:`SoftLinkClassDescr`) for more information on soft links.
+        The returned node is a SoftLink instance.  See the SoftLink
+        class (in :ref:`SoftLinkClassDescr`) for more information on
+        soft links.
 
         """
 
@@ -1284,31 +1550,14 @@ class File(hdf5extension.File, object):
 
     createExternalLink = previous_api(create_external_link)
 
-    # There is another version of _get_node in cython space, but only
-    # marginally faster (5% or less, but sometimes slower!) than this one.
-    # So I think it is worth to use this one instead (much easier to debug).
-    def _get_node(self, nodePath):
+    def _get_node(self, nodepath):
         # The root node is always at hand.
-        if nodePath == '/':
+        if nodepath == '/':
             return self.root
 
-        aliveNodes = self._aliveNodes
-        deadNodes = self._deadNodes
-
-        if nodePath in aliveNodes:
-            # The parent node is in memory and alive, so get it.
-            node = aliveNodes[nodePath]
-            assert node is not None, \
-                "stale weak reference to dead node ``%s``" % nodePath
-            return node
-        if nodePath in deadNodes:
-            # The parent node is in memory but dead, so revive it.
-            node = self._revivenode(nodePath)
-            return node
-
-        # The node has not been found in alive or dead nodes.
-        # Open it directly from disk.
-        node = self.root._g_load_child(nodePath)
+        node = self._node_manager.get_node(nodepath)
+        assert node is not None, "unable to instantiate node ``%s``" % nodepath
+
         return node
 
     _getNode = previous_api(_get_node)
@@ -1346,11 +1595,11 @@ class File(hdf5extension.File, object):
         if isinstance(where, Node):
             node = where
             node._g_check_open()  # the node object must be open
-            nodePath = where._v_pathname
+            nodepath = where._v_pathname
         elif isinstance(where, (basestring, numpy.str_)):
             node = None
             if where.startswith('/'):
-                nodePath = where
+                nodepath = where
             else:
                 raise NameError(
                     "``where`` must start with a slash ('/')")
@@ -1361,27 +1610,27 @@ class File(hdf5extension.File, object):
         # Get the name of the child node.
         if name is not None:
             node = None
-            nodePath = join_path(nodePath, name)
+            nodepath = join_path(nodepath, name)
 
-        assert node is None or node._v_pathname == nodePath
+        assert node is None or node._v_pathname == nodepath
 
         # Now we have the definitive node path, let us try to get the node.
         if node is None:
-            node = self._get_node(nodePath)
+            node = self._get_node(nodepath)
 
         # Finally, check whether the desired node is an instance
         # of the expected class.
         if classname:
             class_ = get_class_by_name(classname)
             if not isinstance(node, class_):
-                nPathname = node._v_pathname
-                nClassname = node.__class__.__name__
+                npathname = node._v_pathname
+                nclassname = node.__class__.__name__
                 # This error message is right since it can never be shown
                 # for ``classname in [None, 'Node']``.
                 raise NoSuchNodeError(
                     "could not find a ``%s`` node at ``%s``; "
                     "instead, a ``%s`` node has been found there"
-                    % (classname, nPathname, nClassname))
+                    % (classname, npathname, nclassname))
 
         return node
 
@@ -1720,18 +1969,18 @@ class File(hdf5extension.File, object):
                            "argument") % dstfilename)
 
         # Create destination file, overwriting it.
-        dstFileh = open_file(
+        dstfileh = open_file(
             dstfilename, mode="w", title=title, filters=filters, **kwargs)
 
         try:
             # Maybe copy the user attributes of the root group.
             if copyuserattrs:
-                self.root._v_attrs._f_copy(dstFileh.root)
+                self.root._v_attrs._f_copy(dstfileh.root)
 
             # Copy the rest of the hierarchy.
-            self.root._f_copy_children(dstFileh.root, recursive=True, **kwargs)
+            self.root._f_copy_children(dstfileh.root, recursive=True, **kwargs)
         finally:
-            dstFileh.close()
+            dstfileh.close()
 
     copyFile = previous_api(copy_file)
 
@@ -1804,9 +2053,9 @@ class File(hdf5extension.File, object):
 
             # Recursively list all the nodes in the object tree.
             h5file = tables.open_file('vlarray1.h5')
-            print "All nodes in the object tree:"
+            print("All nodes in the object tree:")
             for node in h5file:
-                print node
+                print(node)
 
         """
 
@@ -1838,9 +2087,9 @@ class File(hdf5extension.File, object):
         ::
 
             # Recursively print all the nodes hanging from '/detector'.
-            print "Nodes hanging from group '/detector':"
+            print("Nodes hanging from group '/detector':")
             for node in h5file.walk_nodes('/detector', classname='EArray'):
-                print node
+                print(node)
 
         """
 
@@ -1922,9 +2171,10 @@ class File(hdf5extension.File, object):
     def is_undo_enabled(self):
         """Is the Undo/Redo mechanism enabled?
 
-        Returns True if the Undo/Redo mechanism has been enabled for this file,
-        False otherwise. Please note that this mechanism is persistent, so a
-        newly opened PyTables file may already have Undo/Redo support enabled.
+        Returns True if the Undo/Redo mechanism has been enabled for
+        this file, False otherwise. Please note that this mechanism is
+        persistent, so a newly opened PyTables file may already have
+        Undo/Redo support enabled.
 
         """
 
@@ -2191,14 +2441,14 @@ class File(hdf5extension.File, object):
                 or len(arg2) > maxundo):  # INTERNAL
             raise UndoRedoError("Parameter arg1 or arg2 is too long: "
                                 "(%r, %r)" % (arg1, arg2))
-        # print "Logging-->", (action, arg1, arg2)
+        # print("Logging-->", (action, arg1, arg2))
         self._actionlog.append([(_op_to_code[action],
                                  arg1.encode('utf-8'),
                                  arg2.encode('utf-8'))])
         self._curaction += 1
 
     def _get_mark_id(self, mark):
-        """Get an integer markid from a mark sequence number or name"""
+        """Get an integer markid from a mark sequence number or name."""
 
         if isinstance(mark, int):
             markid = mark
@@ -2212,14 +2462,17 @@ class File(hdf5extension.File, object):
         else:
             raise TypeError("Parameter mark can only be an integer or a "
                             "string, and you passed a type <%s>" % type(mark))
-        # print "markid, self._nmarks:", markid, self._nmarks
+        # print("markid, self._nmarks:", markid, self._nmarks)
         return markid
 
     _getMarkID = previous_api(_get_mark_id)
 
     def _get_final_action(self, markid):
-        """Get the action to go. It does not touch the self private
-        attributes"""
+        """Get the action to go.
+
+        It does not touch the self private attributes
+
+        """
 
         if markid > self._nmarks - 1:
             # The required mark is beyond the end of the action log
@@ -2235,7 +2488,7 @@ class File(hdf5extension.File, object):
     _getFinalAction = previous_api(_get_final_action)
 
     def _doundo(self, finalaction, direction):
-        """Undo/Redo actions up to final action in the specificed direction"""
+        """Undo/Redo actions up to final action in the specificed direction."""
 
         if direction < 0:
             actionlog = \
@@ -2244,17 +2497,17 @@ class File(hdf5extension.File, object):
             actionlog = self._actionlog[self._curaction:finalaction]
 
         # Uncomment this for debugging
-#         print "curaction, finalaction, direction", \
-#               self._curaction, finalaction, direction
+#         print("curaction, finalaction, direction", \
+#               self._curaction, finalaction, direction)
         for i in xrange(len(actionlog)):
             if actionlog['opcode'][i] != _op_to_code["MARK"]:
                 # undo/redo the action
                 if direction > 0:
                     # Uncomment this for debugging
-#                     print "redo-->", \
+#                     print("redo-->", \
 #                           _code_to_op[actionlog['opcode'][i]],\
 #                           actionlog['arg1'][i],\
-#                           actionlog['arg2'][i]
+#                           actionlog['arg2'][i])
                     undoredo.redo(self,
                                   # _code_to_op[actionlog['opcode'][i]],
                                   # The next is a workaround for python < 2.5
@@ -2263,10 +2516,10 @@ class File(hdf5extension.File, object):
                                   actionlog['arg2'][i].decode('utf8'))
                 else:
                     # Uncomment this for debugging
-                    # print "undo-->", \
+                    # print("undo-->", \
                     #       _code_to_op[actionlog['opcode'][i]],\
                     #       actionlog['arg1'][i].decode('utf8'),\
-                    #       actionlog['arg2'][i].decode('utf8')
+                    #       actionlog['arg2'][i].decode('utf8'))
                     undoredo.undo(self,
                                   # _code_to_op[actionlog['opcode'][i]],
                                   # The next is a workaround for python < 2.5
@@ -2301,8 +2554,8 @@ class File(hdf5extension.File, object):
         self._check_open()
         self._check_undo_enabled()
 
-#         print "(pre)UNDO: (curaction, curmark) = (%s,%s)" % \
-#               (self._curaction, self._curmark)
+#         print("(pre)UNDO: (curaction, curmark) = (%s,%s)" % \
+#               (self._curaction, self._curmark))
         if mark is None:
             markid = self._curmark
             # Correction if we are settled on top of a mark
@@ -2326,8 +2579,8 @@ class File(hdf5extension.File, object):
         if self._curaction < self._actionlog.nrows - 1:
             self._curaction += 1
         self._curmark = int(self._actionlog.cols.arg1[self._curaction])
-#         print "(post)UNDO: (curaction, curmark) = (%s,%s)" % \
-#               (self._curaction, self._curmark)
+#         print("(post)UNDO: (curaction, curmark) = (%s,%s)" % \
+#               (self._curaction, self._curmark))
 
     def redo(self, mark=None):
         """Go to a future state of the database.
@@ -2346,8 +2599,8 @@ class File(hdf5extension.File, object):
         self._check_open()
         self._check_undo_enabled()
 
-#         print "(pre)REDO: (curaction, curmark) = (%s, %s)" % \
-#               (self._curaction, self._curmark)
+#         print("(pre)REDO: (curaction, curmark) = (%s, %s)" % \
+#               (self._curaction, self._curmark))
         if self._curaction >= self._actionlog.nrows - 1:
             # We are at the end of log, so no action
             return
@@ -2376,8 +2629,8 @@ class File(hdf5extension.File, object):
             self._curmark += 1
         if self._curaction > self._actionlog.nrows - 1:
             self._curaction = self._actionlog.nrows - 1
-#         print "(post)REDO: (curaction, curmark) = (%s,%s)" % \
-#               (self._curaction, self._curmark)
+#         print("(post)REDO: (curaction, curmark) = (%s,%s)" % \
+#               (self._curaction, self._curmark))
 
     def goto(self, mark):
         """Go to a specific mark of the database.
@@ -2447,19 +2700,8 @@ class File(hdf5extension.File, object):
 
         self._check_open()
 
-        # First, flush PyTables buffers on alive leaves.
-        # Leaves that are dead should have been flushed already (at least,
-        # users are directed to do this through a PerformanceWarning!)
-        for path, refnode in self._aliveNodes.iteritems():
-            if '/_i_' not in path:  # Indexes are not necessary to be flushed
-                if (self._aliveNodes.hassoftlinks):
-                    node = refnode()
-                else:
-                    node = refnode
-                if isinstance(node, Leaf):
-                    node.flush()
-
         # Flush the cache to disk
+        self._node_manager.flush_nodes()
         self._flush_file(0)  # 0 means local scope, 1 global (virtual) scope
 
     def close(self):
@@ -2485,26 +2727,34 @@ class File(hdf5extension.File, object):
         # Close all loaded nodes.
         self.root._f_close()
 
+        self._node_manager.shutdown()
+
         # Post-conditions
-        assert len(self._deadNodes) == 0, \
-            ("dead nodes remain after closing dead nodes: %s"
-                % [path for path in self._deadNodes])
+        assert len(self._node_manager.cache) == 0, \
+            ("cached nodes remain after closing: %s"
+                % list(self._node_manager.cache))
 
         # No other nodes should have been revived.
-        assert len(self._aliveNodes) == 0, \
-            ("alive nodes remain after closing dead nodes: %s"
-                % [path for path in self._aliveNodes])
+        assert len(self._node_manager.registry) == 0, \
+            ("alive nodes remain after closing: %s"
+                % list(self._node_manager.registry))
 
         # Close the file
         self._close_file()
+
         # After the objects are disconnected, destroy the
         # object dictionary using the brute force ;-)
         # This should help to the garbage collector
         self.__dict__.clear()
+
         # Set the flag to indicate that the file is closed
         self.isopen = 0
-        # Delete the entry in the dictionary of opened files
-        del _open_files[filename]
+
+        # Restore the filename attribute that is used by _FileRegistry
+        self.filename = filename
+
+        # Delete the entry from the registry of opened files
+        _open_files.remove(self)
 
     def __enter__(self):
         """Enter a context and return the same file."""
@@ -2526,7 +2776,7 @@ class File(hdf5extension.File, object):
         ::
 
             >>> f = tables.open_file('data/test.h5')
-            >>> print f
+            >>> print(f)
             data/test.h5 (File) 'Table Benchmark'
             Last modif.: 'Mon Sep 20 12:40:47 2004'
             Object Tree:
@@ -2583,120 +2833,33 @@ class File(hdf5extension.File, object):
                     astring += repr(node) + '\n'
         return astring
 
-    def _refnode(self, node, nodePath):
-        """Register `node` as alive and insert references to it."""
-
-        if nodePath != '/':
-            # The root group does not participate in alive/dead stuff.
-            aliveNodes = self._aliveNodes
-            assert nodePath not in aliveNodes, \
-                "file already has a node with path ``%s``" % nodePath
-
-            # Add the node to the set of referenced ones.
-            aliveNodes[nodePath] = node
-
-    _refNode = previous_api(_refnode)
-
-    def _unrefnode(self, nodePath):
-        """Unregister `node` as alive and remove references to it."""
-
-        if nodePath != '/':
-            # The root group does not participate in alive/dead stuff.
-            aliveNodes = self._aliveNodes
-            assert nodePath in aliveNodes, \
-                "file does not have a node with path ``%s``" % nodePath
-
-            # Remove the node from the set of referenced ones.
-            del aliveNodes[nodePath]
-
-    _unrefNode = previous_api(_unrefnode)
-
-    def _killnode(self, node):
-        """Kill the `node`.
-
-        Moves the `node` from the set of alive, referenced nodes to the
-        set of dead, unreferenced ones.
-
-        """
-
-        nodePath = node._v_pathname
-        assert nodePath in self._aliveNodes, \
-            "trying to kill non-alive node ``%s``" % nodePath
-
-        node._g_pre_kill_hook()
-
-        # Remove all references to the node.
-        self._unrefnode(nodePath)
-        # Save the dead node in the limbo.
-        if self._aliveNodes.hasdeadnodes:
-            self._deadNodes[nodePath] = node
-        else:
-            # We have not a cache for dead nodes,
-            # so follow the usual deletion procedure.
-            node._v__deleting = True
-            node._f_close()
-
-    _killNode = previous_api(_killnode)
-
-    def _revivenode(self, nodePath):
-        """Revive the node under `nodePath` and return it.
-
-        Moves the node under `nodePath` from the set of dead,
-        unreferenced nodes to the set of alive, referenced ones.
-
-        """
-
-        assert nodePath in self._deadNodes, \
-            "trying to revive non-dead node ``%s``" % nodePath
-
-        # Take the node out of the limbo.
-        node = self._deadNodes.pop(nodePath)
-        # Make references to the node.
-        self._refnode(node, nodePath)
-
-        node._g_post_revive_hook()
-
-        return node
-
-    _reviveNode = previous_api(_revivenode)
-
-    def _update_node_locations(self, oldPath, newPath):
-        """Update location information of nodes under `oldPath`.
+    def _update_node_locations(self, oldpath, newpath):
+        """Update location information of nodes under `oldpath`.
 
         This only affects *already loaded* nodes.
+
         """
 
-        oldPrefix = oldPath + '/'  # root node can not be renamed, anyway
-        oldPrefixLen = len(oldPrefix)
+        oldprefix = oldpath + '/'  # root node can not be renamed, anyway
+        oldprefix_len = len(oldprefix)
 
         # Update alive and dead descendents.
-        for cache in [self._aliveNodes, self._deadNodes]:
-            for nodePath in cache:
-                if nodePath.startswith(oldPrefix) and nodePath != oldPrefix:
-                    nodeSuffix = nodePath[oldPrefixLen:]
-                    newNodePath = join_path(newPath, nodeSuffix)
-                    newNodePPath = split_path(newNodePath)[0]
-                    descendentNode = self._get_node(nodePath)
-                    descendentNode._g_update_location(newNodePPath)
+        for cache in [self._node_manager.cache, self._node_manager.registry]:
+            for nodepath in cache:
+                if nodepath.startswith(oldprefix) and nodepath != oldprefix:
+                    nodesuffix = nodepath[oldprefix_len:]
+                    newnodepath = join_path(newpath, nodesuffix)
+                    newnodeppath = split_path(newnodepath)[0]
+                    descendent_node = self._get_node(nodepath)
+                    descendent_node._g_update_location(newnodeppath)
 
     _updateNodeLocations = previous_api(_update_node_locations)
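
The prefix surgery in ``_update_node_locations`` above, in isolation:
strip the old prefix from a descendent path and graft the suffix onto
the new path (plain string operations stand in for ``join_path`` here)::

    oldpath, newpath = '/old', '/new'
    oldprefix = oldpath + '/'

    nodepath = '/old/group/leaf'
    assert nodepath.startswith(oldprefix) and nodepath != oldprefix

    nodesuffix = nodepath[len(oldprefix):]   # 'group/leaf'
    newnodepath = newpath + '/' + nodesuffix
    assert newnodepath == '/new/group/leaf'
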
 
 
 # If a user hits ^C during a run, it is wise to gracefully close the
 # opened files.
-def close_open_files():
-    are_open_files = len(_open_files) > 0
-    if are_open_files:
-        print >> sys.stderr, "Closing remaining open files:",
-    for fname, fileh in _open_files.items():
-        print >> sys.stderr, "%s..." % (fname,),
-        fileh.close()
-        print >> sys.stderr, "done",
-    if are_open_files:
-        print >> sys.stderr
-
 import atexit
-atexit.register(close_open_files)
+atexit.register(_open_files.close_all)
 
 
 ## Local Variables:
diff --git a/tables/filters.py b/tables/filters.py
index ccf5c82..fd5e8da 100644
--- a/tables/filters.py
+++ b/tables/filters.py
@@ -17,7 +17,8 @@
 import warnings
 import numpy
 
-from tables import utilsextension
+from tables import (
+    utilsextension, blosc_compressor_list, blosc_compcode_to_compname)
 from tables.exceptions import FiltersWarning
 
 
@@ -27,6 +28,9 @@ __docformat__ = 'reStructuredText'
 """The format of documentation strings in this module."""
 
 all_complibs = ['zlib', 'lzo', 'bzip2', 'blosc']
+all_complibs += ['blosc:%s' % cname for cname in blosc_compressor_list()]
+
+
 """List of all compression libraries."""
 
 foreign_complibs = ['szip']
@@ -40,6 +44,7 @@ default_complib = 'zlib'
 # =================
 _shuffle_flag = 0x1
 _fletcher32_flag = 0x2
+_rounding_flag = 0x4
 
 
 # Classes
@@ -60,11 +65,14 @@ class Filters(object):
         range is 0-9. A value of 0 (the default) disables
         compression.
     complib : str
-        Specifies the compression library to be used. Right
-        now, 'zlib' (the default), 'lzo', 'bzip2'
-        and 'blosc' are supported.  Specifying a
-        compression library which is not available in the system
-        issues a FiltersWarning and sets the library to the default one.
+        Specifies the compression library to be used. Right now, 'zlib' (the
+        default), 'lzo', 'bzip2' and 'blosc' are supported.  Additional
+        compressors for Blosc like 'blosc:blosclz' ('blosclz' is the default
+        in case the additional compressor is not specified), 'blosc:lz4',
+        'blosc:lz4hc', 'blosc:snappy' and 'blosc:zlib' are supported too.
+        Specifying a compression library which is not available in the
+        system issues a FiltersWarning and sets the library to the default
+        one.
     shuffle : bool
         Whether or not to use the *Shuffle*
         filter in the HDF5 library. This is normally used to improve
@@ -78,6 +86,19 @@ class Filters(object):
         *Fletcher32* filter in the HDF5 library.
         This is used to add a checksum on each data chunk. A false
         value (the default) disables the checksum.
+    least_significant_digit : int
+        If specified, data will be truncated (quantized). In conjunction
+        with enabling compression, this produces 'lossy', but
+        significantly more efficient compression. For example, if
+        *least_significant_digit=1*, data will be quantized using
+        ``around(scale*data)/scale``, where ``scale = 2**bits``, and
+        bits is determined so that a precision of 0.1 is retained (in
+        this case bits=4). Default is *None*, or no quantization.
+
+        .. note::
+
+            quantization is only applied if some form of compression is
+            enabled
 
     Examples
     --------
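
    A short usage sketch of the options documented above: a Blosc
    sub-compressor plus lossy quantization (hypothetical file name;
    'blosc:lz4' is only accepted when the underlying Blosc build ships
    LZ4)::

        import tables

        filters = tables.Filters(complevel=5, complib='blosc:lz4',
                                 shuffle=True, least_significant_digit=1)
        f = tables.open_file('quantized.h5', mode='w')
        f.create_carray('/', 'data', atom=tables.Float64Atom(),
                        shape=(1000,), filters=filters)
        f.close()
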
@@ -141,14 +162,14 @@ class Filters(object):
     def _from_leaf(class_, leaf):
         # Get a dictionary with all the filters
         parent = leaf._v_parent
-        filtersDict = utilsextension.get_filters(parent._v_objectid,
-                                                 leaf._v_name)
-        if filtersDict is None:
-            filtersDict = {}  # not chunked
+        filters_dict = utilsextension.get_filters(parent._v_objectid,
+                                                  leaf._v_name)
+        if filters_dict is None:
+            filters_dict = {}  # not chunked
 
         kwargs = dict(complevel=0, shuffle=False, fletcher32=False,  # all off
-                      _new=False)
-        for (name, values) in filtersDict.iteritems():
+                      least_significant_digit=None, _new=False)
+        for (name, values) in filters_dict.iteritems():
             if name == 'deflate':
                 name = 'zlib'
             if name in all_complibs:
@@ -158,6 +179,10 @@ class Filters(object):
                     # Shuffle filter is internal to blosc
                     if values[5]:
                         kwargs['shuffle'] = True
+                    # In Blosc 1.3 another parameter is used for the compressor
+                    if len(values) > 6:
+                        cname = blosc_compcode_to_compname(values[6])
+                        kwargs['complib'] = "blosc:%s" % cname
                 else:
                     kwargs['complevel'] = values[0]
             elif name in foreign_complibs:
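
With the extra compcode handling above, filters read back from an existing
leaf now report the Blosc sub-compressor as well.  Continuing the earlier
sketch (hypothetical file):

    f = tables.open_file('demo.h5')
    print(f.root.x.filters.complib)   # e.g. 'blosc:lz4'
    f.close()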
@@ -172,11 +197,11 @@ class Filters(object):
         """Create a new `Filters` object from a packed version.
 
         >>> Filters._unpack(0)
-        Filters(complevel=0, shuffle=False, fletcher32=False)
+        Filters(complevel=0, shuffle=False, fletcher32=False, least_significant_digit=None)
         >>> Filters._unpack(0x101)
-        Filters(complevel=1, complib='zlib', shuffle=False, fletcher32=False)
+        Filters(complevel=1, complib='zlib', shuffle=False, fletcher32=False, least_significant_digit=None)
         >>> Filters._unpack(0x30109)
-        Filters(complevel=9, complib='zlib', shuffle=True, fletcher32=True)
+        Filters(complevel=9, complib='zlib', shuffle=True, fletcher32=True, least_significant_digit=None)
         >>> Filters._unpack(0x3010A)
         Traceback (most recent call last):
           ...
@@ -185,12 +210,15 @@ class Filters(object):
         Traceback (most recent call last):
           ...
         ValueError: invalid compression library id: 0
+
         """
 
         kwargs = {'_new': False}
+
         # Byte 0: compression level.
         kwargs['complevel'] = complevel = packed & 0xff
         packed >>= 8
+
         # Byte 1: compression library id (0 for none).
         if complevel > 0:
             complib_id = int(packed & 0xff)
@@ -199,32 +227,55 @@ class Filters(object):
                                  % complib_id)
             kwargs['complib'] = all_complibs[complib_id - 1]
         packed >>= 8
+
         # Byte 2: parameterless filters.
         kwargs['shuffle'] = packed & _shuffle_flag
         kwargs['fletcher32'] = packed & _fletcher32_flag
+        has_rounding = packed & _rounding_flag
+        packed >>= 8
+
+        # Byte 3: least significant digit.
+        if has_rounding:
+            kwargs['least_significant_digit'] = numpy.int8(packed & 0xff)
+        else:
+            kwargs['least_significant_digit'] = None
+
         return class_(**kwargs)
 
     def _pack(self):
         """Pack the `Filters` object into a 64-bit NumPy integer."""
 
         packed = numpy.int64(0)
+
+        # Byte 3: least significant digit.
+        if self.least_significant_digit is not None:
+            #assert isinstance(self.least_significant_digit, numpy.int8)
+            packed |= self.least_significant_digit
+        packed <<= 8
+
         # Byte 2: parameterless filters.
         if self.shuffle:
             packed |= _shuffle_flag
         if self.fletcher32:
             packed |= _fletcher32_flag
+        if self.least_significant_digit is not None:
+            packed |= _rounding_flag
         packed <<= 8
+
         # Byte 1: compression library id (0 for none).
         if self.complevel > 0:
             packed |= all_complibs.index(self.complib) + 1
         packed <<= 8
+
         # Byte 0: compression level.
         packed |= self.complevel
+
         return packed
 
     def __init__(self, complevel=0, complib=default_complib,
                  shuffle=True, fletcher32=False,
-                 _new=True):
+                 least_significant_digit=None, _new=True):
+
         if not (0 <= complevel <= 9):
             raise ValueError("compression level must be between 0 and 9")
 
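The byte layout used by _unpack()/_pack() above is: byte 0 holds the
compression level, byte 1 the compression library id (0 for none), byte 2
the filter flags and byte 3 the least significant digit.  A standalone
sketch of the packing step, mirroring the code above:

    _SHUFFLE, _FLETCHER32, _ROUNDING = 0x1, 0x2, 0x4
    _COMPLIBS = ['zlib', 'lzo', 'bzip2', 'blosc']

    def pack(complevel, complib=None, shuffle=False, fletcher32=False,
             lsd=None):
        packed = 0
        if lsd is not None:               # byte 3
            packed |= lsd
        packed <<= 8
        if shuffle:                       # byte 2
            packed |= _SHUFFLE
        if fletcher32:
            packed |= _FLETCHER32
        if lsd is not None:
            packed |= _ROUNDING
        packed <<= 8
        if complevel > 0:                 # byte 1
            packed |= _COMPLIBS.index(complib) + 1
        packed <<= 8
        packed |= complevel               # byte 0
        return packed

    # pack(9, 'zlib', True, True) == 0x30109, matching the doctest above;
    # pack(9, 'zlib', True, True, lsd=1) == 0x1070109.
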
@@ -245,27 +296,35 @@ class Filters(object):
         complib = str(complib)
         shuffle = bool(shuffle)
         fletcher32 = bool(fletcher32)
+        if least_significant_digit is not None:
+            least_significant_digit = numpy.int8(least_significant_digit)
 
         if complevel == 0:
             # Override some inputs when compression is not enabled.
             complib = None  # make it clear there is no compression
             shuffle = False  # shuffling and not compressing makes no sense
+            least_significant_digit = None
         elif complib not in all_complibs:
             # Do not try to use a meaningful level for unsupported libs.
             complevel = -1
 
         self.complevel = complevel
         """The compression level (0 disables compression)."""
+
         self.complib = complib
-        """
-        The compression filter used (irrelevant when compression is
+        """The compression filter used (irrelevant when compression is
         not enabled).
         """
+
         self.shuffle = shuffle
         """Whether the *Shuffle* filter is active or not."""
+
         self.fletcher32 = fletcher32
         """Whether the *Fletcher32* filter is active or not."""
 
+        self.least_significant_digit = least_significant_digit
+        """The least significant digit to which data shall be truncated."""
+
     def __repr__(self):
         args, complevel = [], self.complevel
         if complevel >= 0:  # meaningful compression level
@@ -274,6 +333,8 @@ class Filters(object):
             args.append('complib=%r' % self.complib)
         args.append('shuffle=%s' % self.shuffle)
         args.append('fletcher32=%s' % self.fletcher32)
+        args.append(
+            'least_significant_digit=%s' % self.least_significant_digit)
         return '%s(%s)' % (self.__class__.__name__, ', '.join(args))
 
     def __str__(self):
@@ -315,14 +376,16 @@ class Filters(object):
             ValueError: compression library ``None`` is not supported...
             >>> filters3 = filters1.copy(complevel=1, complib='zlib')
             >>> print(filters1)
-            Filters(complevel=0, shuffle=False, fletcher32=False)
+            Filters(complevel=0, shuffle=False, fletcher32=False, least_significant_digit=None)
             >>> print(filters3)
-            Filters(complevel=1, complib='zlib', shuffle=False, fletcher32=False)
+            Filters(complevel=1, complib='zlib', shuffle=False, fletcher32=False, least_significant_digit=None)
             >>> filters1.copy(foobar=42)
             Traceback (most recent call last):
             ...
             TypeError: __init__() got an unexpected keyword argument 'foobar'
+
         """
+
         newargs = self.__dict__.copy()
         newargs.update(override)
         return self.__class__(**newargs)
diff --git a/tables/flavor.py b/tables/flavor.py
index 9f2e1fa..9bbf277 100644
--- a/tables/flavor.py
+++ b/tables/flavor.py
@@ -43,6 +43,7 @@ Variables
 
     See the `array_of_flavor()` and `flavor_to_flavor()` functions for
     friendlier interfaces to flavor conversion.
+
 """
 
 # Imports
@@ -117,6 +118,7 @@ def array_of_flavor2(array, src_flavor, dst_flavor):
     case.
 
     If the conversion is not supported, a ``FlavorError`` is raised.
+
     """
 
     convkey = (src_flavor, dst_flavor)
@@ -140,11 +142,12 @@ def flavor_to_flavor(array, src_flavor, dst_flavor):
 
     If the conversion is not supported, a `FlavorWarning` is issued
     and the input `array` is returned as is.
+
     """
 
     try:
         return array_of_flavor2(array, src_flavor, dst_flavor)
-    except FlavorError, fe:
+    except FlavorError as fe:
         warnings.warn("%s; returning an object of the ``%s`` flavor instead"
                       % (fe.args[0], src_flavor), FlavorWarning)
         return array
@@ -156,6 +159,7 @@ def internal_to_flavor(array, dst_flavor):
     The input `array` must be of the internal flavor, and the returned
     array will be of the given `dst_flavor`.  See `flavor_to_flavor()`
     for more information.
+
     """
 
     return flavor_to_flavor(array, internal_flavor, dst_flavor)
@@ -168,6 +172,7 @@ def array_as_internal(array, src_flavor):
     returned array will be of the internal flavor.
 
     If the conversion is not supported, a ``FlavorError`` is raised.
+
     """
 
     return array_of_flavor2(array, src_flavor, internal_flavor)
@@ -178,6 +183,7 @@ def flavor_of(array):
 
     If the `array` can not be matched with any flavor, a ``TypeError``
     is raised.
+
     """
 
     for flavor in all_flavors:
@@ -197,6 +203,7 @@ def array_of_flavor(array, dst_flavor):
     will be of the given `dst_flavor`.
 
     If the conversion is not supported, a ``FlavorError`` is raised.
+
     """
 
     return array_of_flavor2(array, flavor_of(array), dst_flavor)
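
The conversion helpers above are easy to exercise directly; a minimal
sketch:

    import numpy
    from tables.flavor import array_of_flavor, flavor_of

    a = numpy.arange(3)
    flavor_of(a)                  # -> 'numpy'
    array_of_flavor(a, 'python')  # -> [0, 1, 2]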
@@ -210,6 +217,7 @@ def restrict_flavors(keep=['python']):
     disabled.
 
     .. important:: Once you disable a flavor, it can not be enabled again.
+
     """
 
     keep = set(keep).union([internal_flavor])
diff --git a/tables/group.py b/tables/group.py
index 38e38e7..c6d4930 100644
--- a/tables/group.py
+++ b/tables/group.py
@@ -273,7 +273,7 @@ class Group(hdf5extension.Group, Node):
 
     def __del__(self):
         if (self._v_isopen and
-            self._v_pathname in self._v_file._aliveNodes and
+            self._v_pathname in self._v_file._node_manager.registry and
                 '_v_children' in self.__dict__):
             # The group is going to be killed.  Rebuild weak references
             # (that Python cancelled just before calling this method) so
@@ -348,9 +348,7 @@ class Group(hdf5extension.Group, Node):
 
     def _g_add_children_names(self):
         """Add children names to this group taking into account their
-        visibility and kind.
-
-        """
+        visibility and kind."""
 
         mydict = self.__dict__
 
@@ -420,9 +418,9 @@ class Group(hdf5extension.Group, Node):
         ::
 
             # Non-recursively list all the nodes hanging from '/detector'
-            print "Nodes in '/detector' group:"
+            print("Nodes in '/detector' group:")
             for node in h5file.root.detector:
-                print node
+                print(node)
 
         """
 
@@ -461,9 +459,9 @@ class Group(hdf5extension.Group, Node):
         ::
 
             # Recursively print all the arrays hanging from '/'
-            print "Arrays in the object tree '/':"
+            print("Arrays in the object tree '/':")
             for array in h5file.root._f_walknodes('Array', recursive=True):
-                print array
+                print(array)
 
         """
 
@@ -485,8 +483,8 @@ class Group(hdf5extension.Group, Node):
     _f_walkNodes = previous_api(_f_walknodes)
 
     def _g_join(self, name):
-        """Helper method to correctly concatenate a name child object
-        with the pathname of this group."""
+        """Helper method to correctly concatenate a name child object with the
+        pathname of this group."""
 
         if name == "/":
             # This case can happen when doing copies
@@ -540,7 +538,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
 
         # Check group width limits.
         if (len(self._v_children) + len(self._v_hidden) >=
-                                                    self._v_max_group_width):
+                self._v_max_group_width):
             self._g_width_warning()
 
         # Update members information.
@@ -581,8 +579,8 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
             if childname in self._v_children:
                 # Visible node.
                 members = self.__members__
-                memberIndex = members.index(childname)
-                del members[memberIndex]  # disables completion
+                member_index = members.index(childname)
+                del members[member_index]  # disables completion
 
                 del self._v_children[childname]  # remove node
                 self._v_unknown.pop(childname, None)
@@ -682,8 +680,8 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
 
         self._g_check_has_child(childname)
 
-        childPath = join_path(self._v_pathname, childname)
-        return self._v_file._get_node(childPath)
+        childpath = join_path(self._v_pathname, childname)
+        return self._v_file._get_node(childpath)
 
     _f_getChild = previous_api(_f_get_child)
 
@@ -790,7 +788,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
 
         try:
             super(Group, self).__delattr__(name)  # nothing particular
-        except AttributeError, ae:
+        except AttributeError as ae:
             hint = " (use ``node._f_remove()`` if you want to remove a node)"
             raise ae.__class__(str(ae) + hint)
 
@@ -862,7 +860,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
         super(Group, self).__setattr__(name, value)
 
     def _f_flush(self):
-        """Flush this Group"""
+        """Flush this Group."""
 
         self._g_check_open()
         self._g_flush_group()
@@ -870,62 +868,18 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
     def _g_close_descendents(self):
         """Close all the *loaded* descendent nodes of this group."""
 
-        def closenodes(prefix, nodepaths, get_node):
-            for nodepath in nodepaths:
-                if nodepath.startswith(prefix):
-                    try:
-                        node = get_node(nodepath)
-                        # Avoid descendent nodes to also iterate over
-                        # their descendents, which are already to be
-                        # closed by this loop.
-                        if hasattr(node, '_f_get_child'):
-                            node._g_close()
-                        else:
-                            node._f_close()
-                        del node
-                    except KeyError:
-                        pass
-
-        prefix = self._v_pathname + '/'
-        if prefix == '//':
-            prefix = '/'
-
-        # Close all loaded nodes.
-        alivenodes = self._v_file._aliveNodes
-        deadnodes = self._v_file._deadNodes
-        revivenode = self._v_file._revivenode
-        # First, close the alive nodes and delete them
-        # so they are not placed in the limbo again.
-        # These two steps ensure tables are closed *before* their indices.
-        closenodes(prefix,
-                   [path for path in alivenodes
-                        if '/_i_' not in path],  # not indices
-                   lambda path: alivenodes[path])
-        # Close everything else (i.e. indices)
-        closenodes(prefix,
-                   [path for path in alivenodes],
-                   lambda path: alivenodes[path])
-
-        # Next, revive the dead nodes, close and delete them
-        # so they are not placed in the limbo again.
-        # These two steps ensure tables are closed *before* their indices.
-        closenodes(prefix,
-                   [path for path in deadnodes
-                        if '/_i_' not in path],  # not indices
-                   lambda path: revivenode(path))
-        # Close everything else (i.e. indices)
-        closenodes(prefix,
-                   [path for path in deadnodes],
-                   lambda path: revivenode(path))
+        node_manager = self._v_file._node_manager
+        node_manager.close_subtree(self._v_pathname)
 
     _g_closeDescendents = previous_api(_g_close_descendents)
 
     def _g_close(self):
         """Close this (open) group."""
 
-        # hdf5extension operations:
-        #   Close HDF5 group.
-        self._g_close_group()
+        if self._v_isopen:
+            # hdf5extension operations:
+            #   Close HDF5 group.
+            self._g_close_group()
 
         # Close myself as a node.
         super(Group, self)._f_close()
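
The removed logic survives conceptually in NodeManager.close_subtree(): it
still has to close cached tables before their '/_i_' index nodes.  A rough
sketch of that behaviour (assumed, based on the code removed above; the
real manager also distinguishes group from leaf close):

    def close_subtree(self, prefix):
        # Cached node paths under `prefix`; plain nodes first so that
        # tables are closed before their '/_i_' index nodes.
        paths = [p for p in self.registry if p.startswith(prefix)]
        for path in sorted(paths, key=lambda p: '/_i_' in p):
            self.registry[path]._f_close()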
@@ -1051,22 +1005,22 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
         # `Node` objects when `createparents` is true.  Also, note that
         # there is no risk of creating parent nodes and failing later
         # because of destination nodes already existing.
-        dstParent = self._v_file._get_or_create_path(dstgroup, createparents)
-        self._g_check_group(dstParent)  # Is it a group?
+        dstparent = self._v_file._get_or_create_path(dstgroup, createparents)
+        self._g_check_group(dstparent)  # Is it a group?
 
         if not overwrite:
             # Abort as early as possible when destination nodes exist
             # and overwriting is not enabled.
             for childname in self._v_children:
-                if childname in dstParent:
+                if childname in dstparent:
                     raise NodeError(
                         "destination group ``%s`` already has "
                         "a node named ``%s``; "
                         "you may want to use the ``overwrite`` argument"
-                        % (dstParent._v_pathname, childname))
+                        % (dstparent._v_pathname, childname))
 
         for child in self._v_children.itervalues():
-            child._f_copy(dstParent, None, overwrite, recursive, **kwargs)
+            child._f_copy(dstparent, None, overwrite, recursive, **kwargs)
 
     _f_copyChildren = previous_api(_f_copy_children)
 
@@ -1079,7 +1033,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
         ::
 
             >>> f=tables.open_file('data/test.h5')
-            >>> print f.root.group0
+            >>> print(f.root.group0)
             /group0 (Group) 'First Group'
 
         """
@@ -1104,8 +1058,10 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O."""
 
         """
 
-        rep = ['%r (%s)' % (childname, child.__class__.__name__)
-                    for (childname, child) in self._v_children.iteritems()]
+        rep = [
+            '%r (%s)' % (childname, child.__class__.__name__)
+            for (childname, child) in self._v_children.iteritems()
+        ]
         childlist = '[%s]' % (', '.join(rep))
 
         return "%s\n  children := %s" % (str(self), childlist)
@@ -1142,7 +1098,7 @@ class RootGroup(Group):
         # Only the root node has the file as a parent.
         # Bypass __setattr__ to avoid the ``Node._v_parent`` property.
         mydict['_v_parent'] = ptfile
-        ptfile._refnode(self, '/')
+        ptfile._node_manager.register_node(self, '/')
 
         # hdf5extension operations (do before setting an AttributeSet):
         #   Update node attributes.
@@ -1192,7 +1148,7 @@ class RootGroup(Group):
             # return ChildClass(self, childname)  # uncomment for debugging
             try:
                 return ChildClass(self, childname)
-            except Exception, exc:  # XXX
+            except Exception as exc:  # XXX
                 warnings.warn(
                     "problems loading leaf ``%s``::\n\n"
                     "  %s\n\n"
@@ -1271,6 +1227,7 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
 
         This method empties all action storage kept in this node: nodes
         and attributes.
+
         """
 
         # Remove action storage nodes.
diff --git a/tables/hdf5Extension.py b/tables/hdf5Extension.py
index 02ed793..4c58935 100644
--- a/tables/hdf5Extension.py
+++ b/tables/hdf5Extension.py
@@ -3,4 +3,4 @@ from tables.hdf5extension import *
 
 _warnmsg = ("hdf5Extension is pending deprecation, import hdf5extension instead. "
             "You may use the pt2to3 tool to update your source code.")
-warn(_warnmsg, PendingDeprecationWarning, stacklevel=2)
+warn(_warnmsg, DeprecationWarning, stacklevel=2)
diff --git a/tables/hdf5extension.pyx b/tables/hdf5extension.pyx
index 7dbada4..001a99b 100644
--- a/tables/hdf5extension.pyx
+++ b/tables/hdf5extension.pyx
@@ -84,6 +84,7 @@ from definitions cimport (const_char, uintptr_t, hid_t, herr_t, hsize_t, hvl_t,
   H5Adelete, H5T_BITFIELD, H5T_INTEGER, H5T_FLOAT, H5T_STRING, H5Tget_order,
   H5Pcreate, H5Pset_cache, H5Pclose, H5Pget_userblock, H5Pset_userblock,
   H5Pset_fapl_sec2, H5Pset_fapl_log, H5Pset_fapl_stdio, H5Pset_fapl_core,
+  H5Pset_fapl_split,
   H5Sselect_all, H5Sselect_elements, H5Sselect_hyperslab,
   H5Screate_simple, H5Sclose,
   H5ATTRset_attribute, H5ATTRset_attribute_string,
@@ -281,7 +282,7 @@ _supported_drivers = (
     "H5FD_CORE",
     #"H5FD_FAMILY",
     #"H5FD_MULTI",
-    #"H5FD_SPLIT",
+    "H5FD_SPLIT",
     #"H5FD_MPIO",
     #"H5FD_MPIPOSIX",
     #"H5FD_STREAM",
@@ -301,6 +302,7 @@ cdef class File:
   def _g_new(self, name, pymode, **params):
     cdef herr_t err = 0
     cdef hid_t access_plist, create_plist = H5P_DEFAULT
+    cdef hid_t meta_plist_id = H5P_DEFAULT, raw_plist_id = H5P_DEFAULT
     cdef size_t img_buf_len = 0, user_block_size = 0
     cdef void *img_buf_p = NULL
     cdef bytes encname
@@ -310,6 +312,13 @@ cdef class File:
     driver = params["DRIVER"]
     if driver is not None and driver not in _supported_drivers:
       raise ValueError("Invalid or not supported driver: '%s'" % driver)
+    if driver == "H5FD_SPLIT":
+      meta_ext = params.get("DRIVER_SPLIT_META_EXT", "-m.h5")
+      raw_ext = params.get("DRIVER_SPLIT_RAW_EXT", "-r.h5")
+      meta_name = meta_ext % name if "%s" in meta_ext else name + meta_ext
+      raw_name = raw_ext % name if "%s" in raw_ext else name + raw_ext
+      enc_meta_ext = encode_filename(meta_ext)
+      enc_raw_ext = encode_filename(raw_ext)
 
     # Create a new file using default properties
     self.name = name
@@ -341,14 +350,19 @@ cdef class File:
 
     # After the following check we can be quite sure
     # that the file or directory exists and permissions are right.
-    # But only if we are using file backed storage.
-    backing_store = params.get("DRIVER_CORE_BACKING_STORE", 1)
-    if driver != "H5FD_CORE" or backing_store:
-      check_file_access(name, pymode)
+    if driver == "H5FD_SPLIT":
+      for n in meta_name, raw_name:
+        check_file_access(n, pymode)
+    else:
+      backing_store = params.get("DRIVER_CORE_BACKING_STORE", 1)
+      if driver != "H5FD_CORE" or backing_store:
+        check_file_access(name, pymode)
 
     # Should a new file be created?
     if image:
       exists = True
+    elif driver == "H5FD_SPLIT":
+      exists = os.path.exists(meta_name) and os.path.exists(raw_name)
     else:
       exists = os.path.exists(name)
     self._v_new = not (pymode in ('r', 'r+') or (pymode == 'a' and exists))
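
With H5FD_SPLIT enabled, metadata and raw data go to two files derived
from the base name and the two extensions (a '%s' in an extension is
substituted with the base name; otherwise the extension is appended).  A
minimal usage sketch (parameter names as read from params above; 'data'
is an illustrative base name, producing 'data-m.h5' and 'data-r.h5' with
the defaults):

    import tables

    f = tables.open_file('data', mode='w', driver='H5FD_SPLIT',
                         driver_split_meta_ext='-m.h5',
                         driver_split_raw_ext='-r.h5')
    f.close()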
@@ -430,9 +444,9 @@ cdef class File:
     #elif driver == "H5FD_MULTI":
     #  err = H5Pset_fapl_multi(access_plist, memb_map, memb_fapl, memb_name,
     #                          memb_addr, relax)
-    #elif driver == "H5FD_SPLIT":
-    #  err = H5Pset_fapl_split(access_plist, meta_ext, meta_plist_id, raw_ext,
-    #                          raw_plist_id)
+    elif driver == "H5FD_SPLIT":
+      err = H5Pset_fapl_split(access_plist, enc_meta_ext, meta_plist_id,
+                              enc_raw_ext, raw_plist_id)
     if err < 0:
       e = HDF5ExtError("Unable to set the file access property list")
       H5Pclose(create_plist)
@@ -707,7 +721,8 @@ cdef class AttributeSet:
     cdef hid_t mem_type, dset_id, type_id, native_type
     cdef int rank, ret, enumtype
     cdef void *rbuf
-    cdef char *str_value, **str_values = NULL
+    cdef char *str_value
+    cdef char **str_values = NULL
     cdef ndarray ndvalue
     cdef object shape, stype_atom, shape_atom, retvalue
     cdef int i, nelements
@@ -1449,7 +1464,9 @@ cdef class Array(Leaf):
   def _g_read_slice(self, ndarray startl, ndarray stopl, ndarray stepl,
                    ndarray nparr):
     cdef herr_t ret
-    cdef hsize_t *start, *stop, *step
+    cdef hsize_t *start
+    cdef hsize_t *stop
+    cdef hsize_t *step
     cdef void *rbuf
 
     # Get the pointer to the buffer data area of startl, stopl and stepl arrays
@@ -1541,7 +1558,9 @@ cdef class Array(Leaf):
 
     cdef int select_mode
     cdef ndarray start_, count_, step_
-    cdef hsize_t *startp, *countp, *stepp
+    cdef hsize_t *startp
+    cdef hsize_t *countp
+    cdef hsize_t *stepp
 
     # Build arrays for the selection parameters
     startl, countl, stepl = [], [], []
@@ -1627,8 +1646,11 @@ cdef class Array(Leaf):
     """Write a slice in an already created NumPy array."""
 
     cdef int ret
-    cdef void *rbuf, *temp
-    cdef hsize_t *start, *step, *count
+    cdef void *rbuf
+    cdef void *temp
+    cdef hsize_t *start
+    cdef hsize_t *step
+    cdef hsize_t *count
 
     # Get the pointer to the buffer data area
     rbuf = nparr.data
@@ -2017,6 +2039,40 @@ cdef class VLArray(Leaf):
 
   _readArray = previous_api(_read_array)
 
+  def get_row_size(self, row):
+    """Return the total size in bytes of all the elements contained in a given row."""
+
+    cdef hid_t space_id
+    cdef hsize_t size
+    cdef herr_t ret
+
+    cdef hsize_t offset[1]
+    cdef hsize_t count[1]
+
+    if row >= self.nrows:
+      raise HDF5ExtError(
+        "Asking for a range of rows exceeding the available ones!.",
+        h5bt=False)
+
+    # Get the dataspace handle
+    space_id = H5Dget_space(self.dataset_id)
+
+    offset[0] = row
+    count[0] = 1
+
+    ret = H5Sselect_hyperslab(space_id, H5S_SELECT_SET, offset, NULL, count, NULL)
+    if ret < 0:
+      size = -1
+
+    ret = H5Dvlen_get_buf_size(self.dataset_id, self.type_id, space_id, &size)
+    if ret < 0:
+      size = -1
+
+    # Terminate access to the dataspace
+    H5Sclose(space_id)
+
+    return size
+
 
 cdef class UnImplemented(Leaf):
 
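A short usage sketch for the new VLArray.get_row_size() (file and node
names are illustrative):

    import numpy
    import tables

    f = tables.open_file('vl.h5', mode='w')
    vla = f.create_vlarray('/', 'vla', atom=tables.Int64Atom())
    vla.append(numpy.arange(100, dtype='int64'))
    print(vla.get_row_size(0))   # 100 elements * 8 bytes -> 800
    f.close()
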
diff --git a/tables/idxutils.py b/tables/idxutils.py
index 8997cc4..0492cc6 100644
--- a/tables/idxutils.py
+++ b/tables/idxutils.py
@@ -91,6 +91,7 @@ def computeblocksize(expectedrows, compoundsize, lowercompoundsize):
 
     This is useful for computing the sizes of both blocks and
     superblocks (using the PyTables terminology for blocks in indexes).
+
     """
 
     nlowerblocks = (expectedrows // lowercompoundsize) + 1
@@ -108,10 +109,11 @@ def calc_chunksize(expectedrows, optlevel=6, indsize=4, memlevel=4):
     """Calculate the HDF5 chunk size for index and sorted arrays.
 
     The logic to do that is based purely in experiments playing with
-    different chunksizes and compression flag. It is obvious that
-    using big chunks optimizes the I/O speed, but if they are too
-    large, the uncompressor takes too much time. This might (should)
-    be further optimized by doing more experiments.
+    different chunksizes and compression flags. It is obvious that using
+    big chunks optimizes the I/O speed, but if they are too large, the
+    decompressor takes too much time. This might (should) be further
+    optimized by doing more experiments.
+
     """
 
     chunksize = computechunksize(expectedrows)
@@ -208,6 +210,7 @@ def calcoptlevels(nblocks, optlevel, indsize):
 
     The calculation is based on the number of blocks, optlevel and
     indexing mode.
+
     """
 
     if indsize == 2:  # light
@@ -314,16 +317,18 @@ def get_reduction_level(indsize, optlevel, slicesize, chunksize):
 #
 # Thanks to Shack Toms shack at livedata.com for NextAfter and NextAfterF
 # implementations in Python. 2004-10-01
-# epsilon  = math.ldexp(1.0, -53) # smallest double such that 0.5 + epsilon != 0.5
+# epsilon  = math.ldexp(1.0, -53) # smallest double such that
+#                                 # 0.5 + epsilon != 0.5
 # epsilonF = math.ldexp(1.0, -24) # smallest float such that 0.5 + epsilonF
 # != 0.5
 # maxFloat = float(2**1024 - 2**971)  # From the IEEE 754 standard
 # maxFloatF = float(2**128 - 2**104)  # From the IEEE 754 standard
 # minFloat  = math.ldexp(1.0, -1022) # min positive normalized double
 # minFloatF = math.ldexp(1.0, -126)  # min positive normalized float
-# smallEpsilon  = math.ldexp(1.0, -1074) # smallest increment for doubles < minFloat
-# smallEpsilonF = math.ldexp(1.0, -149)  # smallest increment for floats <
-# minFloatF
+# smallEpsilon  = math.ldexp(1.0, -1074) # smallest increment for
+#                                        # doubles < minFloat
+# smallEpsilonF = math.ldexp(1.0, -149)  # smallest increment for
+#                                        # floats < minFloatF
 infinity = math.ldexp(1.0, 1023) * 2
 infinityf = math.ldexp(1.0, 128)
 # Finf = float("inf")  # Infinite in the IEEE 754 standard (not avail in Win)
@@ -371,7 +376,7 @@ infinityF = infinityf
 
 
 def inftype(dtype, itemsize, sign=+1):
-    """Return a superior limit for maximum representable data type"""
+    """Return a superior limit for maximum representable data type."""
 
     assert sign in [-1, +1]
 
diff --git a/tables/index.py b/tables/index.py
index dcefddc..ea7c90e 100644
--- a/tables/index.py
+++ b/tables/index.py
@@ -12,6 +12,7 @@
 
 """Here is defined the Index class."""
 
+from __future__ import print_function
 import sys
 from bisect import bisect_left, bisect_right
 from time import time, clock
@@ -163,7 +164,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         "The kind of this index.")
 
     filters = property(
-        lambda self: self._v_filters, None, None, 
+        lambda self: self._v_filters, None, None,
         """Filter properties for this index - see Filters in
         :ref:`FiltersClassDescr`.""")
 
@@ -193,12 +194,13 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         """)
 
     def _getcolumn(self):
-        tablepath, columnpath = _table_column_pathname_of_index(self._v_pathname)
+        tablepath, columnpath = _table_column_pathname_of_index(
+            self._v_pathname)
         table = self._v_file._get_node(tablepath)
         column = table.cols._g_col(columnpath)
         return column
 
-    column = property(_getcolumn, None, None, 
+    column = property(_getcolumn, None, None,
         """The Column (see :ref:`ColumnClassDescr`) instance for the indexed
         column.""")
 
@@ -209,7 +211,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         return table
 
     table = property(_gettable, None, None,
-        "Accessor for the `Table` object of this index.")
+                     "Accessor for the `Table` object of this index.")
 
     nblockssuperblock = property(
         lambda self: self.superblocksize // self.blocksize, None, None,
@@ -450,8 +452,8 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
                 nboundsLR = 0  # correction for -1 bounds
             nboundsLR += 2  # bounds + begin + end
             # All bounds values (+begin + end) are at the end of sortedLR
-            self.bebounds = self.sortedLR[nelementsSLR:nelementsSLR +
-                                                                nboundsLR]
+            self.bebounds = self.sortedLR[
+                nelementsSLR:nelementsSLR + nboundsLR]
             return
 
         # The index is new. Initialize the values
@@ -474,7 +476,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         (self.superblocksize, self.blocksize,
          self.slicesize, self.chunksize) = self.blocksizes
         if debug:
-            print "blocksizes:", self.blocksizes
+            print("blocksizes:", self.blocksizes)
         # Compute the reduction level
         self.reduction = get_reduction_level(
             self.indsize, self.optlevel, self.slicesize, self.chunksize)
@@ -655,7 +657,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         return idx
 
     def append(self, xarr, update=False):
-        """Append the array to the index objects"""
+        """Append the array to the index objects."""
 
         if profile:
             tref = time()
@@ -720,7 +722,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
             show_stats("Exiting append", tref)
 
     def append_last_row(self, xarr, update=False):
-        """Append the array to the last row index objects"""
+        """Append the array to the last row index objects."""
 
         if profile:
             tref = time()
@@ -795,7 +797,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         optmedian, optstarts, optstops, optfull = opts
 
         if debug:
-            print "optvalues:", opts
+            print("optvalues:", opts)
 
         self.create_temp2()
         # Start the optimization process
@@ -956,7 +958,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         if self.verbose:
             t = round(time() - t1, 4)
             c = round(clock() - c1, 4)
-            print "time: %s. clock: %s" % (t, c)
+            print("time: %s. clock: %s" % (t, c))
 
     def swap(self, what, mode=None):
         """Swap chunks or slices using a certain bounds reference."""
@@ -985,7 +987,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         if self.verbose:
             t = round(time() - t1, 4)
             c = round(clock() - c1, 4)
-            print "time: %s. clock: %s" % (t, c)
+            print("time: %s. clock: %s" % (t, c))
         # Check that entropy is actually decreasing
         if what == "chunks" and self.last_tover > 0. and self.last_nover > 0:
             tover_var = (self.last_tover - tover) / self.last_tover
@@ -1097,7 +1099,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         """Copy the data and delete the temporaries for sorting purposes."""
 
         if self.verbose:
-            print "Copying temporary data..."
+            print("Copying temporary data...")
         # tmp -> index
         reduction = self.reduction
         cs = self.chunksize // reduction
@@ -1144,7 +1146,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         self.indicesLR.attrs.nelements = self.nelementsILR
 
         if self.verbose:
-            print "Deleting temporaries..."
+            print("Deleting temporaries...")
         self.tmp = None
         self.tmpfile.close()
         os.remove(self.tmpfilename)
@@ -1452,7 +1454,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
                 # so skip the reordering of this superblock
                 # (too expensive for such a little improvement)
                 if self.verbose:
-                    print "skipping reordering of superblock ->", sblock
+                    print("skipping reordering of superblock ->", sblock)
                 continue
             ns = sblock * nss2
             # Swap sorted and indices slices following the new order
@@ -1610,9 +1612,9 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
             if erange > 0:
                 toverlap = soverlap / erange
         if verbose and message != "init":
-            print "toverlap (%s):" % message, toverlap
-            print "multiplicity:\n", multiplicity, multiplicity.sum()
-            print "overlaps:\n", overlaps, overlaps.sum()
+            print("toverlap (%s):" % message, toverlap)
+            print("multiplicity:\n", multiplicity, multiplicity.sum())
+            print("overlaps:\n", overlaps, overlaps.sum())
         noverlaps = overlaps.sum()
         # For full indexes, set the 'is_csi' flag
         if self.indsize == 8 and self._v_file._iswritable():
@@ -1674,8 +1676,8 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
             if erange > 0:
                 toverlap = soverlap / erange
         if verbose:
-            print "overlaps (%s):" % message, noverlaps, toverlap
-            print multiplicity
+            print("overlaps (%s):" % message, noverlaps, toverlap)
+            print(multiplicity)
         # For full indexes, set the 'is_csi' flag
         if self.indsize == 8 and self._v_file._iswritable():
             self._v_attrs.is_csi = (noverlaps == 0)
@@ -1827,7 +1829,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
         self.dirtycache = False
 
     def search(self, item):
-        """Do a binary search in this index for an item"""
+        """Do a binary search in this index for an item."""
 
         if profile:
             tref = time()
@@ -2012,7 +2014,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
     searchLastRow = previous_api(search_last_row)
 
     def get_chunkmap(self):
-        """Compute a map with the interesting chunks in index"""
+        """Compute a map with the interesting chunks in index."""
 
         if profile:
             tref = time()
@@ -2122,7 +2124,7 @@ class Index(NotLoggedMixin, indexesextension.Index, Group):
     getLookupRange = previous_api(get_lookup_range)
 
     def _f_remove(self, recursive=False):
-        """Remove this Index object"""
+        """Remove this Index object."""
 
         # Index removal is always recursive,
         # no matter what `recursive` says.
diff --git a/tables/indexes.py b/tables/indexes.py
index 175591b..9901004 100644
--- a/tables/indexes.py
+++ b/tables/indexes.py
@@ -35,8 +35,8 @@ class CacheArray(NotLoggedMixin, EArray, indexesextension.CacheArray):
 
 
 class LastRowArray(NotLoggedMixin, CArray, indexesextension.LastRowArray):
-    """Container for keeping sorted and indices values of last row of
-    an index."""
+    """Container for keeping sorted and indices values of last row of an
+    index."""
 
     # Class identifier.
     _c_classid = 'LASTROWARRAY'
@@ -182,7 +182,7 @@ class IndexArray(NotLoggedMixin, EArray, indexesextension.IndexArray):
         return "IndexArray(path=%s)" % self._v_pathname
 
     def __repr__(self):
-        """A verbose representation of this class"""
+        """A verbose representation of this class."""
 
         return """%s
   atom = %r
diff --git a/tables/indexesExtension.py b/tables/indexesExtension.py
index 809ed35..3bb38ce 100644
--- a/tables/indexesExtension.py
+++ b/tables/indexesExtension.py
@@ -3,4 +3,4 @@ from tables.indexesextension import *
 
 _warnmsg = ("indexesExtension is pending deprecation, import indexesextension instead. "
             "You may use the pt2to3 tool to update your source code.")
-warn(_warnmsg, PendingDeprecationWarning, stacklevel=2)
+warn(_warnmsg, DeprecationWarning, stacklevel=2)
diff --git a/tables/indexesextension.pyx b/tables/indexesextension.pyx
index b0f6915..826feef 100644
--- a/tables/indexesextension.pyx
+++ b/tables/indexesextension.pyx
@@ -226,7 +226,11 @@ cdef class CacheArray(Array):
 cdef class IndexArray(Array):
   """Container for keeping sorted and indices values."""
 
-  cdef void    *rbufst, *rbufln, *rbufrv, *rbufbc, *rbuflb
+  cdef void    *rbufst
+  cdef void    *rbufln
+  cdef void    *rbufrv
+  cdef void    *rbufbc
+  cdef void    *rbuflb
   cdef hid_t   mem_space_id
   cdef int     l_chunksize, l_slicesize, nbounds, indsize
   cdef CacheArray bounds_ext
@@ -266,14 +270,16 @@ cdef class IndexArray(Array):
       self.rbuflb = self.bufferlb.data
       # Init structures for accelerating sorted array reads
       rank = 2
-      count[0] = 1; count[1] = self.chunksize
+      count[0] = 1
+      count[1] = self.chunksize
       self.mem_space_id = H5Screate_simple(rank, count, NULL)
       # Cache some counters in local extension variables
       self.l_chunksize = self.chunksize
       self.l_slicesize = self.slicesize
 
     # Get the addresses of buffer data
-    starts = index.starts;  lengths = index.lengths
+    starts = index.starts
+    lengths = index.lengths
     self.rbufst = starts.data
     self.rbufln = lengths.data
     # The 1st cache is loaded completely in memory and needs to be reloaded
@@ -409,7 +415,8 @@ cdef class IndexArray(Array):
       vpointer = self.sortedcache.getitem1_(nslot)
     else:
       # The sorted chunk is not in cache. Read it and put it in the LRU cache.
-      start = cs*nchunk;  stop = cs*(nchunk+1)
+      start = cs*nchunk
+      stop = cs*(nchunk+1)
       vpointer = self._g_read_sorted_slice(nrow, start, stop)
       self.sortedcache.setitem_(nckey, vpointer, 0)
     return vpointer
@@ -420,16 +427,28 @@ cdef class IndexArray(Array):
   def _search_bin_na_b(self, long item1, long item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_int8 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_int8 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_int8 *rbufrv
+    cdef npy_int8 *rbufbc = NULL
+    cdef npy_int8 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_int8 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
+
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -459,8 +478,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_b = previous_api(_search_bin_na_b)
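
Each of the typed _search_bin_na_* specializations below implements the
same per-slice window search; conceptually (pure Python sketch, ignoring
the chunk cache and bounds arrays):

    from bisect import bisect_left, bisect_right

    def search_slice(sorted_slice, item1, item2):
        # Offsets of the values falling in [item1, item2] within one
        # sorted slice; the C code stores start in rbufst and the match
        # length in rbufln, and accumulates the total in tlength.
        start = bisect_left(sorted_slice, item1)
        stop = bisect_right(sorted_slice, item2)
        return start, stop - start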
@@ -469,16 +490,28 @@ cdef class IndexArray(Array):
   def _search_bin_na_ub(self, long item1, long item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_uint8 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_uint8 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_uint8 *rbufrv
+    cdef npy_uint8 *rbufbc = NULL
+    cdef npy_uint8 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_uint8 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
+
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -508,8 +541,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_ub = previous_api(_search_bin_na_ub)
@@ -518,16 +553,27 @@ cdef class IndexArray(Array):
   def _search_bin_na_s(self, long item1, long item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_int16 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_int16 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_int16 *rbufrv
+    cdef npy_int16 *rbufbc = NULL
+    cdef npy_int16 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_int16 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -557,8 +603,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_s = previous_api(_search_bin_na_s)
@@ -567,16 +615,27 @@ cdef class IndexArray(Array):
   def _search_bin_na_us(self, long item1, long item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_uint16 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_uint16 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_uint16 *rbufrv
+    cdef npy_uint16 *rbufbc = NULL
+    cdef npy_uint16 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_uint16 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -606,8 +665,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_us = previous_api(_search_bin_na_us)
@@ -616,16 +677,27 @@ cdef class IndexArray(Array):
   def _search_bin_na_i(self, long item1, long item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_int32 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_int32 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_int32 *rbufrv
+    cdef npy_int32 *rbufbc = NULL
+    cdef npy_int32 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_int32 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -655,8 +727,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_i = previous_api(_search_bin_na_i)
@@ -665,16 +739,27 @@ cdef class IndexArray(Array):
   def _search_bin_na_ui(self, npy_uint32 item1, npy_uint32 item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_uint32 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_uint32 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_uint32 *rbufrv
+    cdef npy_uint32 *rbufbc = NULL
+    cdef npy_uint32 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_uint32 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -704,8 +789,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_ui = previous_api(_search_bin_na_ui)
@@ -714,16 +801,27 @@ cdef class IndexArray(Array):
   def _search_bin_na_ll(self, npy_int64 item1, npy_int64 item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_int64 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_int64 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_int64 *rbufrv
+    cdef npy_int64 *rbufbc = NULL
+    cdef npy_int64 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_int64 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -753,8 +851,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_ll = previous_api(_search_bin_na_ll)
@@ -763,16 +863,27 @@ cdef class IndexArray(Array):
   def _search_bin_na_ull(self, npy_uint64 item1, npy_uint64 item2):
     cdef int cs, ss, ncs, nrow, nrows, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_uint64 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize; ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
-    rbufrv = <npy_uint64 *>self.rbufrv; tlength = 0
+    # Variables with specific type
+    cdef npy_uint64 *rbufrv
+    cdef npy_uint64 *rbufbc = NULL
+    cdef npy_uint64 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
+    rbufrv = <npy_uint64 *>self.rbufrv
+    tlength = 0
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -802,8 +913,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_ull = previous_api(_search_bin_na_ull)
@@ -812,17 +925,29 @@ cdef class IndexArray(Array):
   def _search_bin_na_e(self, npy_float64 item1, npy_float64 item2):
     cdef int cs, ss, ncs, nrow, nrows, nrow2, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
-    # Variables with specific type
-    cdef npy_float16 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef int *rbufst
+    cdef int *rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize;  ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows;  tlength = 0
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
+    # Variables with specific type
+    cdef npy_float16 *rbufrv
+    cdef npy_float16 *rbufbc = NULL
+    cdef npy_float16 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    tlength = 0
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
     # Limits not in cache, do a lookup
     rbufrv = <npy_float16 *>self.rbufrv
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
+
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -852,8 +977,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_e = previous_api(_search_bin_na_e)
@@ -862,17 +989,28 @@ cdef class IndexArray(Array):
   def _search_bin_na_f(self, npy_float64 item1, npy_float64 item2):
     cdef int cs, ss, ncs, nrow, nrows, nrow2, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
+    cdef int *rbufst
+    cdef int *rbufln
     # Variables with specific type
-    cdef npy_float32 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef npy_float32 *rbufrv
+    cdef npy_float32 *rbufbc = NULL
+    cdef npy_float32 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    tlength = 0
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize;  ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows;  tlength = 0
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
     # Limits not in cache, do a lookup
     rbufrv = <npy_float32 *>self.rbufrv
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -902,8 +1040,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_f = previous_api(_search_bin_na_f)
@@ -912,17 +1052,30 @@ cdef class IndexArray(Array):
   def _search_bin_na_d(self, npy_float64 item1, npy_float64 item2):
     cdef int cs, ss, ncs, nrow, nrows, nrow2, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
+    cdef int *rbufst
+    cdef int *rbufln
+
     # Variables with specific type
-    cdef npy_float64 *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef npy_float64 *rbufrv
+    cdef npy_float64 *rbufbc = NULL
+    cdef npy_float64 *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    tlength = 0
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize;  ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows;  tlength = 0
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
     # Limits not in cache, do a lookup
     rbufrv = <npy_float64 *>self.rbufrv
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
+
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -952,8 +1105,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_d = previous_api(_search_bin_na_d)
@@ -962,17 +1117,30 @@ cdef class IndexArray(Array):
   def _search_bin_na_g(self, npy_longdouble item1, npy_longdouble item2):
     cdef int cs, ss, ncs, nrow, nrows, nrow2, nbounds, rvrow
     cdef int start, stop, tlength, length, bread, nchunk, nchunk2
-    cdef int *rbufst, *rbufln
+    cdef int *rbufst
+    cdef int *rbufln
+
     # Variables with specific type
-    cdef npy_longdouble *rbufrv, *rbufbc = NULL, *rbuflb = NULL
+    cdef npy_longdouble *rbufrv
+    cdef npy_longdouble *rbufbc = NULL
+    cdef npy_longdouble *rbuflb = NULL
+
+    cs = self.l_chunksize
+    ss = self.l_slicesize
+    ncs = ss / cs
+    nbounds = self.nbounds
+    nrows = self.nrows
+    tlength = 0
+    rbufst = <int *>self.rbufst
+    rbufln = <int *>self.rbufln
 
-    cs = self.l_chunksize;  ss = self.l_slicesize;  ncs = ss / cs
-    nbounds = self.nbounds;  nrows = self.nrows;  tlength = 0
-    rbufst = <int *>self.rbufst;  rbufln = <int *>self.rbufln
     # Limits not in cache, do a lookup
     rbufrv = <npy_longdouble *>self.rbufrv
     for nrow from 0 <= nrow < nrows:
-      rvrow = nrow*2;  bread = 0;  nchunk = -1
+      rvrow = nrow*2
+      bread = 0
+      nchunk = -1
+
       # Look if item1 is in this row
       if item1 > rbufrv[rvrow]:
         if item1 <= rbufrv[rvrow+1]:
@@ -1002,8 +1170,10 @@ cdef class IndexArray(Array):
           stop = ss
       else:
         stop = 0
-      length = stop - start;  tlength = tlength + length
-      rbufst[nrow] = start;  rbufln[nrow] = length;
+      length = stop - start
+      tlength = tlength + length
+      rbufst[nrow] = start
+      rbufln[nrow] = length
     return tlength
 
   _searchBinNA_g = previous_api(_search_bin_na_g)
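
All of the typed _search_bin_na_* variants above implement the same per-slice
range search, only specialized for a different element type. As a hedged,
pure-Python sketch of the result they compute (ignoring the chunked bounds
cache the Cython code uses for speed), with `sorted_rows` standing in for the
sorted index slices:

    from bisect import bisect_left, bisect_right

    def search_bin_na_sketch(sorted_rows, item1, item2):
        # For each sorted slice, find the [start, stop) range of values
        # lying inside [item1, item2] and accumulate the total hit count,
        # mirroring rbufst, rbufln and tlength in the Cython routines.
        starts, lengths, tlength = [], [], 0
        for row in sorted_rows:
            start = bisect_left(row, item1)    # first value >= item1
            stop = bisect_right(row, item2)    # one past last value <= item2
            length = stop - start
            starts.append(start)
            lengths.append(length)
            tlength += length
        return starts, lengths, tlength
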
diff --git a/tables/leaf.py b/tables/leaf.py
index c0df7e7..3788557 100644
--- a/tables/leaf.py
+++ b/tables/leaf.py
@@ -173,7 +173,7 @@ class Leaf(Node):
     # `````````````````````````
     @lazyattr
     def filters(self):
-        """Filter properties for this leaf
+        """Filter properties for this leaf.
 
         See Also
         --------
@@ -273,10 +273,8 @@ class Leaf(Node):
         return self.nrows
 
     def __str__(self):
-        """
-        The string representation for this object is its pathname in
-        the HDF5 object tree plus some additional metainfo.
-        """
+        """The string representation for this object is its pathname in the
+        HDF5 object tree plus some additional metainfo."""
 
         # Get this class name
         classname = self.__class__.__name__
@@ -300,6 +298,7 @@ class Leaf(Node):
         """Code to be run after node creation and before creation logging.
 
         This method gets or sets the flavor of the leaf.
+
         """
 
         super(Leaf, self)._g_post_init_hook()
@@ -372,7 +371,7 @@ class Leaf(Node):
         # equal to the chunksize.
         # See gh-206 and gh-238
         if self.chunkshape is not None:
-            chunksize = numpy.asarray(self.chunkshape).prod()
+            chunksize = self.chunkshape[self.maindim]
             if nrowsinbuf < chunksize:
                 nrowsinbuf = chunksize
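
The fix above changes the minimum buffer size from the total number of
elements per chunk to the chunk extent along the main dimension; a quick
worked example (numbers assumed) shows why the two differ:

    import numpy

    chunkshape = (1000, 10)          # assumed chunk shape, maindim == 0
    # Old computation: product over all dimensions -> 10000 "rows"
    old_chunksize = numpy.asarray(chunkshape).prod()
    # New computation: extent along the main dimension -> 1000 rows,
    # which is what matters for the row buffer (see gh-206 and gh-238)
    new_chunksize = chunkshape[0]
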
 
@@ -406,7 +405,8 @@ very small/large chunksize, you may want to increase/decrease it."""
         # The next function is a substitute for slice().indices in order to
         # support full 64-bit integer for slices even in 32-bit machines.
         # F. Alted 2005-05-08
-        start, stop, step = utilsextension.get_indices(start, stop, step, long(nrows))
+        start, stop, step = utilsextension.get_indices(start, stop, step,
+                                                       long(nrows))
         return (start, stop, step)
 
     _processRange = previous_api(_process_range)
@@ -431,7 +431,7 @@ very small/large chunksize, you may want to increase/decrease it."""
             else:
                 stop = start + 1
         # Finally, get the correct values (over the main dimension)
-        start, stop, step = self._process_range(start, stop, step, 
+        start, stop, step = self._process_range(start, stop, step,
                                                 warn_negstep=warn_negstep)
         return (start, stop, step)
 
@@ -550,7 +550,7 @@ very small/large chunksize, you may want to increase/decrease it."""
             # Get the True coordinates (64-bit indices!)
             coords = numpy.asarray(key.nonzero(), dtype='i8')
             coords = numpy.transpose(coords)
-        elif key.dtype.kind == 'i':
+        elif key.dtype.kind == 'i' or key.dtype.kind == 'u':
             if len(key.shape) > 2:
                 raise IndexError(
                     "Coordinate indexing array has incompatible shape")
@@ -725,10 +725,10 @@ very small/large chunksize, you may want to increase/decrease it."""
     def flush(self):
         """Flush pending data to disk.
 
-        Saves whatever remaining buffered data to disk. It also releases I/O
-        buffers, so if you are filling many datasets in the same PyTables
-        session, please call flush() extensively so as to help PyTables to keep
-        memory requirements low.
+        Saves whatever remaining buffered data to disk. It also releases
+        I/O buffers, so if you are filling many datasets in the same
+        PyTables session, please call flush() extensively so as to help
+        PyTables to keep memory requirements low.
 
         """
 
diff --git a/tables/link.py b/tables/link.py
index de0316f..43bd113 100644
--- a/tables/link.py
+++ b/tables/link.py
@@ -28,7 +28,7 @@ Misc variables:
 """
 
 import os
-import tables as t
+import tables
 from tables import linkextension
 from tables.node import Node
 from tables.utils import lazyattr
@@ -162,9 +162,9 @@ class SoftLink(linkextension.SoftLink, Link):
         ::
 
             >>> f=tables.open_file('data/test.h5')
-            >>> print f.root.link0
+            >>> print(f.root.link0)
             /link0 (SoftLink) -> /another/path
-            >>> print f.root.link0()
+            >>> print(f.root.link0())
             /another/path (Group) ''
 
         """
@@ -184,7 +184,7 @@ class SoftLink(linkextension.SoftLink, Link):
         ::
 
             >>> f=tables.open_file('data/test.h5')
-            >>> print f.root.link0
+            >>> print(f.root.link0)
             /link0 (SoftLink) -> /path/to/node
 
         """
@@ -252,12 +252,12 @@ class ExternalLink(linkextension.ExternalLink, Link):
         ::
 
             >>> f=tables.open_file('data1/test1.h5')
-            >>> print f.root.link2
+            >>> print(f.root.link2)
             /link2 (ExternalLink) -> data2/test2.h5:/path/to/node
             >>> plink2 = f.root.link2('a')  # open in 'a'ppend mode
-            >>> print plink2
+            >>> print(plink2)
             /path/to/node (Group) ''
-            >>> print plink2._v_filename
+            >>> print(plink2._v_filename)
             'data2/test2.h5'        # belongs to referenced file
 
         """
@@ -270,13 +270,13 @@ class ExternalLink(linkextension.ExternalLink, Link):
             base_directory = os.path.dirname(self._v_file.filename)
             filename = os.path.join(base_directory, filename)
 
-        # Fetch the external file and save a reference to it.
-        # Check first in already opened files.
-        open_files = tables.file._open_files
-        if filename in open_files:
-            self.extfile = open_files[filename]
+        if self.extfile is None or not self.extfile.isopen:
+            self.extfile = tables.open_file(filename, **kwargs)
         else:
-            self.extfile = t.open_file(filename, **kwargs)
+            # XXX: implement better consistency checks
+            assert self.extfile.filename == filename
+            assert self.extfile.mode == kwargs.get('mode', 'r')
+
         return self.extfile._get_node(target)
 
     def umount(self):
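
A consequence of the new logic is that repeated calls reuse the cached
external file handle instead of reopening the target file every time. A
hedged sketch, assuming `f` and `/link2` from the docstring example above:

    node1 = f.root.link2()      # opens data2/test2.h5 and caches the handle
    node2 = f.root.link2()      # the same File instance is reused
    assert node1._v_file is node2._v_file
    f.root.link2.umount()       # explicitly close the external file
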
@@ -303,7 +303,7 @@ class ExternalLink(linkextension.ExternalLink, Link):
         ::
 
             >>> f=tables.open_file('data1/test1.h5')
-            >>> print f.root.link2
+            >>> print(f.root.link2)
             /link2 (ExternalLink) -> data2/test2.h5:/path/to/node
 
         """
diff --git a/tables/linkExtension.py b/tables/linkExtension.py
index dc6a119..0fd306a 100644
--- a/tables/linkExtension.py
+++ b/tables/linkExtension.py
@@ -3,4 +3,4 @@ from tables.linkextension import *
 
 _warnmsg = ("linkExtension is pending deprecation, import linextension instead. "
             "You may use the pt2to3 tool to update your source code.")
-warn(_warnmsg, PendingDeprecationWarning, stacklevel=2)
+warn(_warnmsg, DeprecationWarning, stacklevel=2)
diff --git a/tables/linkextension.pyx b/tables/linkextension.pyx
index 481bb78..5c42ad1 100644
--- a/tables/linkextension.pyx
+++ b/tables/linkextension.pyx
@@ -236,7 +236,9 @@ cdef class ExternalLink(Link):
     cdef herr_t ret
     cdef H5L_info_t link_buff
     cdef size_t val_size
-    cdef char *clinkval, *cfilename, *c_obj_path
+    cdef char *clinkval
+    cdef char *cfilename
+    cdef char *c_obj_path
     cdef unsigned flags
     cdef bytes encoded_name
     cdef str filename, obj_path
diff --git a/tables/lrucacheExtension.py b/tables/lrucacheExtension.py
index 7315d71..374b3fd 100644
--- a/tables/lrucacheExtension.py
+++ b/tables/lrucacheExtension.py
@@ -3,4 +3,4 @@ from tables.lrucacheextension import *
 
 _warnmsg = ("lrucacheExtension is pending deprecation, import lrucacheextension instead. "
             "You may use the pt2to3 tool to update your source code.")
-warn(_warnmsg, PendingDeprecationWarning, stacklevel=2)
+warn(_warnmsg, DeprecationWarning, stacklevel=2)
diff --git a/tables/lrucacheextension.pxd b/tables/lrucacheextension.pxd
index f2fb698..bc3e78f 100644
--- a/tables/lrucacheextension.pxd
+++ b/tables/lrucacheextension.pxd
@@ -15,7 +15,8 @@ from numpy cimport ndarray
 # Declaration of instance variables for shared classes
 # The NodeCache class is useful for caching general objects (like Nodes).
 cdef class NodeCache:
-  cdef long nextslot, nslots
+  cdef readonly long nslots
+  cdef long nextslot
   cdef object nodes, paths
   cdef object setitem(self, object path, object node)
   cdef long getslot(self, object path)
diff --git a/tables/lrucacheextension.pyx b/tables/lrucacheextension.pyx
index 1f36444..7abf2c1 100644
--- a/tables/lrucacheextension.pyx
+++ b/tables/lrucacheextension.pyx
@@ -67,9 +67,6 @@ import_array()
 cdef class NodeCache:
   """Least-Recently-Used (LRU) cache for PyTables nodes."""
 
-  # This class variables are declared in utilsextension.pxd
-
-
   def __init__(self, nslots):
     """Maximum nslots of the cache.
 
@@ -143,16 +140,30 @@ cdef class NodeCache:
 
     return nslot
 
-  def pop(self, path):
-    return self.cpop(path)
+  __marker = object()
+
+  def pop(self, path, d=__marker):
+    try:
+      node = self.cpop(path)
+    except KeyError:
+      if d is not self.__marker:
+        return d
+      else:
+        raise
+    else:
+      return node
 
   cdef object cpop(self, object path):
     cdef long nslot
 
     nslot = self.getslot(path)
-    node = self.nodes[nslot]
-    del self.nodes[nslot];  del self.paths[nslot]
-    self.nextslot = self.nextslot - 1
+    if nslot == -1:
+        raise KeyError(path)
+    else:
+        node = self.nodes[nslot]
+        del self.nodes[nslot]
+        del self.paths[nslot]
+        self.nextslot = self.nextslot - 1
     return node
 
   def __iter__(self):
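
The reworked pop() now follows dict.pop semantics: an optional default is
returned on a miss, otherwise the KeyError propagates. A hedged sketch,
where `cache` is an assumed NodeCache instance:

    node = cache.pop('/group/leaf', None)   # returns None on a cache miss
    try:
        cache.pop('/missing/path')          # no default given: KeyError
    except KeyError:
        pass
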
diff --git a/tables/misc/enum.py b/tables/misc/enum.py
index 00a6d09..d8da8cc 100644
--- a/tables/misc/enum.py
+++ b/tables/misc/enum.py
@@ -25,6 +25,7 @@ value is not used directly, and frequently it is entirely irrelevant.
 For the same reason, an enumerated variable is not usually compared with
 concrete values out of its enumerated type.  For that kind of use,
 standard variables and constants are more adequate.
+
 """
 
 from tables._past import previous_api
@@ -111,6 +112,7 @@ class Enum(object):
     (If you ask, the __getitem__() method is
     not used for this purpose to avoid ambiguity in the case of using
     strings as concrete values.)
+
     """
 
     def __init__(self, enum):
@@ -163,8 +165,7 @@ sequences, mappings and other enumerations""")
     _checkAndSetPair = previous_api(_check_and_set_pair)
 
     def __getitem__(self, name):
-        """
-        Get the concrete value of the enumerated value with that name.
+        """Get the concrete value of the enumerated value with that name.
 
         The name of the enumerated value must be a string. If there is no value
         with that name in the enumeration, a KeyError is raised.
@@ -184,6 +185,7 @@ sequences, mappings and other enumerations""")
         Traceback (most recent call last):
           ...
         KeyError: "no enumerated value with that name: 'foo'"
+
         """
 
         try:
@@ -200,8 +202,7 @@ sequences, mappings and other enumerations""")
         raise IndexError("operation not allowed")
 
     def __getattr__(self, name):
-        """
-        Get the concrete value of the enumerated value with that name.
+        """Get the concrete value of the enumerated value with that name.
 
         The name of the enumerated value must be a string. If there is no value
         with that name in the enumeration, an AttributeError is raised.
@@ -220,11 +221,12 @@ sequences, mappings and other enumerations""")
         Traceback (most recent call last):
           ...
         AttributeError: no enumerated value with that name: 'foo'
+
         """
 
         try:
             return self[name]
-        except KeyError, ke:
+        except KeyError as ke:
             raise AttributeError(*ke.args)
 
     def __setattr__(self, name, value):
@@ -236,8 +238,7 @@ sequences, mappings and other enumerations""")
         raise AttributeError("operation not allowed")
 
     def __contains__(self, name):
-        """
-        Is there an enumerated value with that name in the type?
+        """Is there an enumerated value with that name in the type?
 
         If the enumerated type has an enumerated value with that name, True is
         returned.  Otherwise, False is returned. The name must be a string.
@@ -265,6 +266,7 @@ sequences, mappings and other enumerations""")
         Traceback (most recent call last):
           ...
         TypeError: name of enumerated value is not a string: 2
+
         """
 
         if not isinstance(name, basestring):
@@ -273,8 +275,7 @@ sequences, mappings and other enumerations""")
         return name in self._names
 
     def __call__(self, value, *default):
-        """
-        Get the name of the enumerated value with that concrete value.
+        """Get the name of the enumerated value with that concrete value.
 
         If there is no value with that concrete value in the enumeration and a
         second argument is given as a default, this is returned. Else, a
@@ -299,6 +300,7 @@ sequences, mappings and other enumerations""")
         Traceback (most recent call last):
           ...
         ValueError: no enumerated value with that concrete value: 42
+
         """
 
         try:
@@ -310,20 +312,19 @@ sequences, mappings and other enumerations""")
                 "no enumerated value with that concrete value: %r" % (value,))
 
     def __len__(self):
-        """
-        Return the number of enumerated values in the enumerated type.
+        """Return the number of enumerated values in the enumerated type.
 
         Examples
         --------
         >>> len(Enum(['e%d' % i for i in range(10)]))
         10
+
         """
 
         return len(self._names)
 
     def __iter__(self):
-        """
-        Iterate over the enumerated values.
+        """Iterate over the enumerated values.
 
         Enumerated values are returned as (name, value) pairs *in no particular
         order*.
@@ -335,14 +336,14 @@ sequences, mappings and other enumerations""")
         >>> enumdict = dict([(name, value) for (name, value) in enum])
         >>> enumvals == enumdict
         True
+
         """
 
         for name_value in self._names.iteritems():
             yield name_value
 
     def __eq__(self, other):
-        """
-        Is the other enumerated type equivalent to this one?
+        """Is the other enumerated type equivalent to this one?
 
         Two enumerated types are equivalent if they have exactly the same
         enumerated values (i.e. with the same names and concrete values).
@@ -381,6 +382,7 @@ sequences, mappings and other enumerations""")
         False
         >>> enum1 == 2
         False
+
         """
 
         if not isinstance(other, Enum):
@@ -388,8 +390,7 @@ sequences, mappings and other enumerations""")
         return self._names == other._names
 
     def __ne__(self, other):
-        """
-        Is the `other` enumerated type different from this one?
+        """Is the `other` enumerated type different from this one?
 
         Two enumerated types are different if they don't have exactly
         the same enumerated values (i.e. with the same names and
@@ -419,6 +420,7 @@ sequences, mappings and other enumerations""")
         True
         >>> enum1 != enum6
         True
+
         """
 
         return not self.__eq__(other)
@@ -428,8 +430,7 @@ sequences, mappings and other enumerations""")
     # def __hash__(self):
     #    return hash((self.__class__, tuple(self._names.items())))
     def __repr__(self):
-        """
-        Return the canonical string representation of the enumeration. The
+        """Return the canonical string representation of the enumeration. The
         output of this method can be evaluated to give a new enumeration object
         that will compare equal to this one.
 
@@ -437,6 +438,7 @@ sequences, mappings and other enumerations""")
         --------
         >>> repr(Enum({'name': 10}))
         "Enum({'name': 10})"
+
         """
 
         return 'Enum(%s)' % self._names
diff --git a/tables/misc/proxydict.py b/tables/misc/proxydict.py
index 89243b7..0dd3f5d 100644
--- a/tables/misc/proxydict.py
+++ b/tables/misc/proxydict.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""Proxy dictionary for objects stored in a container"""
+"""Proxy dictionary for objects stored in a container."""
 
 import weakref
 
diff --git a/tables/node.py b/tables/node.py
index 936e6cd..297ccf8 100644
--- a/tables/node.py
+++ b/tables/node.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""PyTables nodes"""
+"""PyTables nodes."""
 
 import warnings
 
@@ -310,19 +310,19 @@ class Node(object):
         if not self._v_isopen:
             return  # the node is already closed or not initialized
 
+        self._v__deleting = True
+
         # If we get here, the `Node` is still open.
-        file_ = self._v_file
-        if self._v_pathname in file_._aliveNodes:
-            # If the node is alive, kill it (to save it).
-            file_._killnode(self)
-        elif file_._aliveNodes.hasdeadnodes:
-            # The node is already dead and there are no references to it,
-            # so follow the usual deletion procedure.
-            # This means closing the (still open) node.
-            # `self._v__deleting` is asserted so that the node
-            # does not try to unreference itself again from the file.
-            self._v__deleting = True
-            self._f_close()
+        try:
+            node_manager = self._v_file._node_manager
+            node_manager.drop_node(self, check_unregistered=False)
+        finally:
+            # At this point the node can still be open if there is still some
+            # alive reference around (e.g. if the __del__ method is called
+            # explicitly by the user).
+            if self._v_isopen:
+                self._v__deleting = True
+                self._f_close()
 
     def _g_pre_kill_hook(self):
         """Code to be called before killing the node."""
@@ -330,12 +330,6 @@ class Node(object):
 
     _g_preKillHook = previous_api(_g_pre_kill_hook)
 
-    def _g_post_revive_hook(self):
-        """Code to be called after reviving the node."""
-        pass
-
-    _g_postReviveHook = previous_api(_g_post_revive_hook)
-
     def _g_create(self):
         """Create a new HDF5 node and return its object identifier."""
         raise NotImplementedError
@@ -400,7 +394,8 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
                           % (self._v_pathname, self._v_maxtreedepth),
                           PerformanceWarning)
 
-        file_._refnode(self, self._v_pathname)
+        if self._v_pathname != '/':
+            file_._node_manager.cache_node(self, self._v_pathname)
 
     _g_setLocation = previous_api(_g_set_location)
 
@@ -432,9 +427,8 @@ moved descendent node is exceeding the recommended maximum depth (%d);\
 be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
                           % (self._v_maxtreedepth,), PerformanceWarning)
 
-        file_ = self._v_file
-        file_._unrefnode(oldpath)
-        file_._refnode(self, newpath)
+        node_manager = self._v_file._node_manager
+        node_manager.rename_node(oldpath, newpath)
 
         # Tell dependent objects about the new location of this node.
         self._g_update_dependent()
@@ -448,20 +442,21 @@ be ready to see PyTables asking for *lots* of memory and possibly slow I/O"""
 
         """
 
-        file_ = self._v_file
+        node_manager = self._v_file._node_manager
         pathname = self._v_pathname
 
+        if not self._v__deleting:
+            node_manager.drop_from_cache(pathname)
+            # Note: node_manager.drop_node does not remove the node from the
+            # registry if it is still open.
+            node_manager.registry.pop(pathname, None)
+
         self._v_file = None
         self._v_isopen = False
         self._v_pathname = None
         self._v_name = None
         self._v_depth = None
 
-        # If the node object is being deleted,
-        # it has already been unreferenced from the file.
-        if not self._v__deleting:
-            file_._unrefnode(pathname)
-
     _g_delLocation = previous_api(_g_del_location)
 
     def _g_post_init_hook(self):
@@ -903,7 +898,8 @@ you may want to use the ``overwrite`` argument""" % (parent._v_pathname, name))
     def _f_getattr(self, name):
         """Get a PyTables attribute from this node.
 
-        If the named attribute does not exist, an AttributeError is raised.
+        If the named attribute does not exist, an AttributeError is
+        raised.
 
         """
 
@@ -926,7 +922,8 @@ you may want to use the ``overwrite`` argument""" % (parent._v_pathname, name))
     def _f_delattr(self, name):
         """Delete a PyTables attribute from this node.
 
-        If the named attribute does not exist, an AttributeError is raised.
+        If the named attribute does not exist, an AttributeError is
+        raised.
 
         """
 
diff --git a/tables/nodes/filenode.py b/tables/nodes/filenode.py
index 0fc8866..1d3422b 100644
--- a/tables/nodes/filenode.py
+++ b/tables/nodes/filenode.py
@@ -78,7 +78,7 @@ class RawPyTablesIO(io.RawIOBase):
     # read only attribute
     @property
     def mode(self):
-        '''File mode'''
+        """File mode."""
 
         return self._mode
 
@@ -128,8 +128,8 @@ class RawPyTablesIO(io.RawIOBase):
     def seekable(self):
         """Return whether object supports random access.
 
-        If False, seek(), tell() and truncate() will raise IOError.
-        This method may need to do a test seek().
+        If False, seek(), tell() and truncate() will raise IOError. This
+        method may need to do a test seek().
 
         """
 
@@ -139,7 +139,8 @@ class RawPyTablesIO(io.RawIOBase):
     def fileno(self):
         """Returns underlying file descriptor if one exists.
 
-        An IOError is raised if the IO object does not use a file descriptor.
+        An IOError is raised if the IO object does not use a file
+        descriptor.
 
         """
 
@@ -343,7 +344,8 @@ class RawPyTablesIO(io.RawIOBase):
     def write(self, b):
         """Write the given buffer to the IO stream.
 
-        Returns the number of bytes written, which may be less than len(b).
+        Returns the number of bytes written, which may be less than
+        len(b).
 
         """
 
@@ -376,9 +378,9 @@ class RawPyTablesIO(io.RawIOBase):
     def _checkClosed(self):
         """Checks if file node is open.
 
-        Checks whether the file node is open or has been closed.
-        In the second case, a ValueError is raised.
-        If the host PyTables has been closed, ValueError is also raised.
+        Checks whether the file node is open or has been closed. In the
+        second case, a ValueError is raised. If the host PyTables has
+        been closed, ValueError is also raised.
 
         """
 
@@ -478,12 +480,12 @@ class RawPyTablesIO(io.RawIOBase):
 
 
 class FileNodeMixin(object):
-    """Mixin class for FileNode objects
+    """Mixin class for FileNode objects.
 
     It provides access to the attribute set of the node that becomes
-    available via the attrs property.
-    You can add attributes there, but try to avoid attribute names in all
-    caps or starting with '_', since they may clash with internal attributes.
+    available via the attrs property. You can add attributes there, but
+    try to avoid attribute names in all caps or starting with '_', since
+    they may clash with internal attributes.
 
     """
 
@@ -678,11 +680,11 @@ newNode = previous_api(new_node)
 def open_node(node, mode='r'):
     """Opens an existing file node.
 
-    Returns a file node object from the existing specified PyTables node.
-    If mode is not specified or it is 'r', the file can only be read,
-    and the pointer is positioned at the beginning of the file.
-    If mode is 'a+', the file can be read and appended, and the pointer
-    is positioned at the end of the file.
+    Returns a file node object from the existing specified PyTables
+    node. If mode is not specified or it is 'r', the file can only be
+    read, and the pointer is positioned at the beginning of the file. If
+    mode is 'a+', the file can be read and appended, and the pointer is
+    positioned at the end of the file.
 
     """
 
diff --git a/tables/nodes/tests/__init__.py b/tables/nodes/tests/__init__.py
index 0e18473..993d0f8 100644
--- a/tables/nodes/tests/__init__.py
+++ b/tables/nodes/tests/__init__.py
@@ -10,4 +10,4 @@
 #
 ########################################################################
 
-"""Unit tests for special node behaviours"""
+"""Unit tests for special node behaviours."""
diff --git a/tables/nodes/tests/test_filenode.py b/tables/nodes/tests/test_filenode.py
index d9a7dbf..1986974 100644
--- a/tables/nodes/tests/test_filenode.py
+++ b/tables/nodes/tests/test_filenode.py
@@ -495,10 +495,10 @@ class ReadlineTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         self.fnode.seek(0)
 
-        line = self.fnode.next()
+        line = next(self.fnode)
         self.assertEqual(line, linesep)
 
-        line = self.fnode.next()
+        line = next(self.fnode)
         self.assertEqual(line, b'short line' + linesep)
 
     def test03_Readlines(self):
@@ -769,12 +769,12 @@ class ClosedH5FileTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
 
 class OldVersionTestCase(common.PyTablesTestCase):
-    """
-    Base class for old version compatibility test cases.
+    """Base class for old version compatibility test cases.
 
     It provides some basic tests for file operations and attribute handling.
     Sub-classes must provide the 'oldversion' attribute
     and the 'oldh5fname' attribute.
+
     """
 
     def setUp(self):
diff --git a/tables/parameters.py b/tables/parameters.py
index d0617e0..8b6c078 100644
--- a/tables/parameters.py
+++ b/tables/parameters.py
@@ -159,13 +159,14 @@ METADATA_CACHE_SIZE = 1 * _MB  # 1 MB is the default for HDF5
 # number of leaves, try increasing this value and see if it fits better
 # for you. Please report back your feedback.
 NODE_CACHE_SLOTS = 64
-"""Maximum number of unreferenced nodes to be kept in memory.
+"""Maximum number of nodes to be kept in the metadata cache.
 
-If positive, this is the number of *unreferenced* nodes to be kept in
-the metadata cache. Least recently used nodes are unloaded from memory
-when this number of loaded nodes is reached. To load a node again,
-simply access it as usual. Nodes referenced by user variables are not
-taken into account nor unloaded.
+It is the number of nodes to be kept in the metadata cache. Least recently
+used nodes are unloaded from memory when this number of loaded nodes is
+reached. To load a node again, simply access it as usual.
+Nodes referenced by user variables and, in general, all nodes that are still
+open are registered in the node manager and can be quickly accessed even
+if they are not in the cache.
 
 Negative value means that all the touched nodes will be kept in an
 internal dictionary.  This is the faster way to load/retrieve nodes.
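
A minimal sketch of tuning this parameter for a single file, assuming the
documented mechanism of overriding tables/parameters.py values as uppercase
keyword arguments to open_file:

    import tables

    # Larger metadata cache for a file with many distinct nodes
    h5 = tables.open_file('data.h5', 'r', NODE_CACHE_SLOTS=256)
    h5.close()
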
@@ -267,6 +268,11 @@ Following drivers are supported:
       memory until the file is closed. At closing, the memory version
       of the file can be written back to disk or abandoned.
 
+    * H5FD_SPLIT: this file driver splits a file into two parts.
+      One part stores metadata, and the other part stores raw data.
+      Splitting a file into two parts in this way is a limited case of
+      the Multi driver.
+
 The following drivers are not currently supported:
 
     * H5FD_LOG: this is the H5FD_SEC2 driver with logging capabilities.
@@ -282,11 +288,6 @@ The following drivers are not currently supported:
       data is stored in separate files based on the type of data.
       The Split driver is a special case of this driver.
 
-    * H5FD_SPLIT: this file driver splits a file into two parts.
-      One part stores metadata, and the other part stores raw data.
-      This splitting a file into two parts is a limited case of the
-      Multi driver.
-
     * H5FD_MPIO: this is the standard HDF5 file driver for parallel
       file systems. This driver uses the MPI standard for both
       communication and file I/O.
@@ -411,6 +412,32 @@ using the :meth:`tables.File.get_file_image` method.
 
 """
 
+DRIVER_SPLIT_META_EXT = '-m.h5'
+"""The extension for the metadata file used by the H5FD_SPLIT driver.
+
+If this option is passed to the :func:`tables.open_file` function along
+with driver='H5FD_SPLIT', the extension is appended to the name passed
+as the first parameter to form the name of the metadata file. If the
+string '%s' is used in the extension, the metadata file name is formed
+by replacing '%s' with the name passed as the first parameter instead.
+
+.. versionadded:: 3.1
+
+"""
+
+DRIVER_SPLIT_RAW_EXT = '-r.h5'
+"""The extension for the raw data file used by the H5FD_SPLIT driver.
+
+If this option is passed to the :func:`tables.open_file` function along
+with driver='H5FD_SPLIT', the extension is appended to the name passed
+as the first parameter to form the name of the raw data file. If the
+string '%s' is used in the extension, the raw data file name is formed
+by replacing '%s' with the name passed as the first parameter instead.
+
+.. versionadded:: 3.1
+
+"""
+
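
Putting the two new parameters together, a hedged sketch of opening a split
file ('sample' is an assumed base name):

    import tables

    h5 = tables.open_file('sample', 'w', driver='H5FD_SPLIT',
                          DRIVER_SPLIT_META_EXT='-m.h5',
                          DRIVER_SPLIT_RAW_EXT='-r.h5')
    # metadata goes to 'sample-m.h5', raw data to 'sample-r.h5'
    h5.close()
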
 
 ## Local Variables:
 ## mode: python
diff --git a/tables/path.py b/tables/path.py
index 37b2d65..b9624c7 100644
--- a/tables/path.py
+++ b/tables/path.py
@@ -72,6 +72,7 @@ def check_name_validity(name):
     If the name is not valid, a ``ValueError`` is raised.  If it is
     valid but it can not be used with natural naming, a
     `NaturalNameWarning` is issued.
+
     """
 
     warnInfo = (
diff --git a/tables/scripts/__init__.py b/tables/scripts/__init__.py
index f9424b7..942f2a0 100644
--- a/tables/scripts/__init__.py
+++ b/tables/scripts/__init__.py
@@ -10,8 +10,9 @@
 #
 ########################################################################
 
-"""Utility scripts for PyTables
+"""Utility scripts for PyTables.
 
 This package contains some modules which provide a ``main()`` function
 (with no arguments), so that they can be used as scripts.
+
 """
diff --git a/tables/scripts/pt2to3.py b/tables/scripts/pt2to3.py
index be1be69..60f0e6c 100644
--- a/tables/scripts/pt2to3.py
+++ b/tables/scripts/pt2to3.py
@@ -48,9 +48,11 @@ def main():
             '$ pt2to3 oldfile.py > newfile.py')
     parser = argparse.ArgumentParser(description=desc)
     parser.add_argument('-r', '--reverse', action='store_true', default=False,
-                        dest='reverse', help="reverts changes, going from 3.x -> 2.x.")
+                        dest='reverse',
+                        help="reverts changes, going from 3.x -> 2.x.")
     parser.add_argument('-p', '--no-ignore-previous', action='store_false',
-                        default=True, dest='ignore_previous', help="ignores previous_api() calls.")
+                        default=True, dest='ignore_previous',
+                        help="ignores previous_api() calls.")
     parser.add_argument('-o', default=None, dest='output',
                         help="output file to write to.")
     parser.add_argument('-i', '--inplace', action='store_true', default=False,
diff --git a/tables/scripts/ptdump.py b/tables/scripts/ptdump.py
index a408d57..06c634c 100644
--- a/tables/scripts/ptdump.py
+++ b/tables/scripts/ptdump.py
@@ -16,9 +16,9 @@ Pass the flag -h to this for help on usage.
 
 """
 
-import sys
-import os.path
-import getopt
+from __future__ import print_function
+
+import argparse
 
 from tables.file import open_file
 from tables.group import Group
@@ -28,29 +28,27 @@ from tables.unimplemented import UnImplemented
 from tables._past import previous_api
 
 # default options
-
-
-class Options(object):
-    rng = slice(None)
-    showattrs = 0
-    verbose = 0
-    dump = 0
-    colinfo = 0
-    idxinfo = 0
-
-options = Options()
+options = argparse.Namespace(
+    rng=slice(None),
+    showattrs=0,
+    verbose=0,
+    dump=0,
+    colinfo=0,
+    idxinfo=0,
+)
 
 
 def dump_leaf(leaf):
     if options.verbose:
-        print repr(leaf)
+        print(repr(leaf))
     else:
-        print str(leaf)
+        print(str(leaf))
     if options.showattrs:
-        print "  "+repr(leaf.attrs)
+        print("  "+repr(leaf.attrs))
     if options.dump and not isinstance(leaf, UnImplemented):
-        print "  Data dump:"
-        # print leaf.read(options.rng.start, options.rng.stop, options.rng.step)
+        print("  Data dump:")
+        # print(leaf.read(options.rng.start, options.rng.stop,
+        #                 options.rng.step))
         # This is better for large objects
         if options.rng.start is None:
             start = 0
@@ -66,15 +64,15 @@ def dump_leaf(leaf):
         else:
             step = options.rng.step
         if leaf.shape == ():
-            print "[SCALAR] %s" % (leaf[()])
+            print("[SCALAR] %s" % (leaf[()]))
         else:
             for i in range(start, stop, step):
-                print "[%s] %s" % (i, leaf[i])
+                print("[%s] %s" % (i, leaf[i]))
 
     if isinstance(leaf, Table) and options.colinfo:
         # Show info of columns
         for colname in leaf.colnames:
-            print repr(leaf.cols._f_col(colname))
+            print(repr(leaf.cols._f_col(colname)))
 
     if isinstance(leaf, Table) and options.idxinfo:
         # Show info of indexes
@@ -82,7 +80,7 @@ def dump_leaf(leaf):
             col = leaf.cols._f_col(colname)
             if isinstance(col, Column) and col.index is not None:
                 idx = col.index
-                print repr(idx)
+                print(repr(idx))
 
 dumpLeaf = previous_api(dump_leaf)
 
@@ -90,77 +88,78 @@ dumpLeaf = previous_api(dump_leaf)
 def dump_group(pgroup):
     node_kinds = pgroup._v_file._node_kinds[1:]
     for group in pgroup._f_walk_groups():
-        print str(group)
+        print(str(group))
         if options.showattrs:
-            print "  "+repr(group._v_attrs)
+            print("  "+repr(group._v_attrs))
         for kind in node_kinds:
             for node in group._f_list_nodes(kind):
                 if options.verbose or options.dump:
                     dump_leaf(node)
                 else:
-                    print str(node)
+                    print(str(node))
 
 
 dumpGroup = previous_api(dump_group)
 
 
+def _get_parser():
+    parser = argparse.ArgumentParser(
+        description='''The ptdump utility allows you to look into the contents
+        of your PyTables files. It lets you see not only the data but also
+        the metadata (that is, the *structure* and additional information in
+        the form of *attributes*).''')
+
+    parser.add_argument(
+        '-v', '--verbose', action='store_true',
+        help='dump more metainformation on nodes',
+    )
+    parser.add_argument(
+        '-d', '--dump', action='store_true',
+        help='dump data information on leaves',
+    )
+    parser.add_argument(
+        '-a', '--showattrs', action='store_true',
+        help='show attributes in nodes (only useful when -v or -d are active)',
+    )
+    parser.add_argument(
+        '-c', '--colinfo', action='store_true',
+        help='''show info of columns in tables (only useful when -v or -d
+        are active)''',
+    )
+    parser.add_argument(
+        '-i', '--idxinfo', action='store_true',
+        help='''show info of indexed columns (only useful when -v or -d are
+        active)''',
+    )
+    parser.add_argument(
+        '-R', '--range', dest='rng', metavar='RANGE',
+        help='''select a RANGE of rows (in the form "start,stop,step")
+        during the copy of *all* the leaves.
+        Default values are "None,None,1", which means a copy of all the
+        rows.''',
+    )
+    parser.add_argument('src', metavar='filename[:nodepath]',
+                        help='name of the HDF5 file to dump')
+
+    return parser
+
+
 def main():
-    usage = \
-        """usage: %s [-d] [-v] [-a] [-c] [-i] [-R start,stop,step] [-h] file[:nodepath]
-      -d -- Dump data information on leaves
-      -v -- Dump more metainformation on nodes
-      -a -- Show attributes in nodes (only useful when -v or -d are active)
-      -c -- Show info of columns in tables (only useful when -v or -d are active)
-      -i -- Show info of indexed columns (only useful when -v or -d are active)
-      -R RANGE -- Select a RANGE of rows in the form "start,stop,step"
-      -h -- Print help on usage
-                \n""" \
-    % os.path.basename(sys.argv[0])
-
-    try:
-        opts, pargs = getopt.getopt(sys.argv[1:], 'R:ahdvci')
-    except:
-        sys.stderr.write(usage)
-        sys.exit(0)
-
-    # if we pass too much parameters, abort
-    if len(pargs) != 1:
-        sys.stderr.write(usage)
-        sys.exit(0)
+    parser = _get_parser()
+
+    args = parser.parse_args(namespace=options)
 
     # Get the options
-    for option in opts:
-        if option[0] == '-R':
-            options.dump = 1
-            try:
-                options.rng = eval("slice("+option[1]+")")
-            except:
-                print "Error when getting the range parameter."
-                (type, value, traceback) = sys.exc_info()
-                print "  The error was:", value
-                sys.stderr.write(usage)
-                sys.exit(0)
-
-        elif option[0] == '-a':
-            options.showattrs = 1
-        elif option[0] == '-h':
-            sys.stderr.write(usage)
-            sys.exit(0)
-        elif option[0] == '-v':
-            options.verbose = 1
-        elif option[0] == '-d':
-            options.dump = 1
-        elif option[0] == '-c':
-            options.colinfo = 1
-        elif option[0] == '-i':
-            options.idxinfo = 1
+    if isinstance(args.rng, basestring):
+        try:
+            options.rng = eval("slice(" + args.rng + ")")
+        except Exception:
+            parser.error("Error when getting the range parameter.")
         else:
-            print option[0], ": Unrecognized option"
-            sys.stderr.write(usage)
-            sys.exit(0)
+            args.dump = 1
 
     # Catch the files passed as the last arguments
-    src = pargs[0].split(':')
+    src = args.src.split(':')
     if len(src) == 1:
         filename, nodename = src[0], "/"
     else:
@@ -180,7 +179,7 @@ def main():
         dump_leaf(nodeobject)
     else:
         # This should never happen
-        print "Unrecognized object:", nodeobject
+        print("Unrecognized object:", nodeobject)
 
     # Close the file
     h5file.close()
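
For reference, the -R handling above simply evaluates the option inside a
slice() constructor; a worked example with an assumed input:

    rng = eval("slice(" + "1,100,2" + ")")   # -R 1,100,2  ->  slice(1, 100, 2)
    assert (rng.start, rng.stop, rng.step) == (1, 100, 2)
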
diff --git a/tables/scripts/ptrepack.py b/tables/scripts/ptrepack.py
index 7afc97e..d2752f1 100644
--- a/tables/scripts/ptrepack.py
+++ b/tables/scripts/ptrepack.py
@@ -16,12 +16,14 @@ Pass the flag -h to this for help on usage.
 
 """
 
+from __future__ import print_function
 import sys
-import os.path
 import time
-import getopt
+import os.path
+import argparse
 import warnings
 
+
 from tables.file import open_file
 from tables.group import Group
 from tables.leaf import Filters
@@ -68,16 +70,16 @@ def recreate_indexes(table, dstfileh, dsttable):
     if listoldindexes != []:
         if not regoldindexes:
             if verbose:
-                print "[I]Not regenerating indexes for table: '%s:%s'" % \
-                      (dstfileh.filename, dsttable._v_pathname)
+                print("[I]Not regenerating indexes for table: '%s:%s'" %
+                      (dstfileh.filename, dsttable._v_pathname))
             return
         # Now, recreate the indexed columns
         if verbose:
-            print "[I]Regenerating indexes for table: '%s:%s'" % \
-                  (dstfileh.filename, dsttable._v_pathname)
+            print("[I]Regenerating indexes for table: '%s:%s'" %
+                  (dstfileh.filename, dsttable._v_pathname))
         for colname in listoldindexes:
             if verbose:
-                print "[I]Indexing column: '%s'. Please wait..." % colname
+                print("[I]Indexing column: '%s'. Please wait..." % colname)
             colobj = dsttable.cols._f_col(colname)
             # We don't specify the filters for the indexes
             colobj.create_index(filters=None)
@@ -97,8 +99,8 @@ def copy_leaf(srcfile, dstfile, srcnode, dstnode, title,
     # Get the destination node and its parent
     last_slash = dstnode.rindex('/')
     if last_slash == len(dstnode)-1:
-        # print "Detected a trailing slash in destination node. Interpreting it
-        # as a destination group."
+        # print("Detected a trailing slash in destination node. "
+        #       "Interpreting it as a destination group.")
         dstgroup = dstnode[:-1]
     elif last_slash > 0:
         dstgroup = dstnode[:last_slash]
@@ -148,10 +150,10 @@ def copy_leaf(srcfile, dstfile, srcnode, dstnode, title,
             sortby=sortby, checkCSI=checkCSI, propindexes=propindexes)
     except:
         (type, value, traceback) = sys.exc_info()
-        print "Problems doing the copy from '%s:%s' to '%s:%s'" % \
-              (srcfile, srcnode, dstfile, dstnode)
-        print "The error was --> %s: %s" % (type, value)
-        print "The destination file looks like:\n", dstfileh
+        print("Problems doing the copy from '%s:%s' to '%s:%s'" %
+              (srcfile, srcnode, dstfile, dstnode))
+        print("The error was --> %s: %s" % (type, value))
+        print("The destination file looks like:\n", dstfileh)
         # Close all the open files:
         srcfileh.close()
         dstfileh.close()
@@ -238,10 +240,10 @@ def copy_children(srcfile, dstfile, srcgroup, dstgroup, title,
             sortby=sortby, checkCSI=checkCSI, propindexes=propindexes)
     except:
         (type, value, traceback) = sys.exc_info()
-        print "Problems doing the copy from '%s:%s' to '%s:%s'" % \
-              (srcfile, srcgroup, dstfile, dstgroup)
-        print "The error was --> %s: %s" % (type, value)
-        print "The destination file looks like:\n", dstfileh
+        print("Problems doing the copy from '%s:%s' to '%s:%s'" %
+              (srcfile, srcgroup, dstfile, dstgroup))
+        print("The error was --> %s: %s" % (type, value))
+        print("The destination file looks like:\n", dstfileh)
         # Close all the open files:
         srcfileh.close()
         dstfileh.close()
@@ -273,162 +275,158 @@ def copy_children(srcfile, dstfile, srcgroup, dstgroup, title,
 copyChildren = previous_api(copy_children)
 
 
+def _get_parser():
+    parser = argparse.ArgumentParser(
+        description='''This utility is very powerful and lets you copy any
+        leaf, group or complete subtree into another file.
+        During the copy process you are allowed to change the filter
+        properties if you want to. Also, in the case of duplicated pathnames,
+        you can decide if you want to overwrite already existing nodes on the
+        destination file. Generally speaking, ptrepack can be useful in many
+        situations, like replicating a subtree in another file, changing the
+        filters in objects and seeing how this affects the compression degree
+        or I/O performance, consolidating specific data in repositories or
+        even *importing* generic HDF5 files and creating true PyTables
+        counterparts.''')
+
+    parser.add_argument(
+        '-v', '--verbose', action='store_true',
+        help='show verbose information',
+    )
+    parser.add_argument(
+        '-o', '--overwrite', action='store_true', dest='overwritefile',
+        help='overwrite destination file',
+    )
+    parser.add_argument(
+        '-R', '--range', dest='rng', metavar='RANGE',
+        help='''select a RANGE of rows (in the form "start,stop,step")
+        during the copy of *all* the leaves.
+        Default values are "None,None,1", which means a copy of all the
+        rows.''',
+    )
+    parser.add_argument(
+        '--non-recursive', action='store_false', default=True,
+        dest='recursive',
+        help='do not do a recursive copy. Default is to do it',
+    )
+    parser.add_argument(
+        '--dest-title', dest='title', default='',
+        help='title for the new file (if not specified, the source is copied)',
+    )
+    parser.add_argument(
+        '--dont-create-sysattrs', action='store_false', default=True,
+        dest='createsysattrs',
+        help='do not create sys attrs (default is to do it)',
+    )
+    parser.add_argument(
+        '--dont-copy-userattrs', action='store_false', default=True,
+        dest='copyuserattrs',
+        help='do not copy the user attrs (default is to do it)',
+    )
+    parser.add_argument(
+        '--overwrite-nodes', action='store_true', dest='overwrtnodes',
+        help='''overwrite destination nodes if they exist.
+        Default is to not overwrite them''',
+    )
+    parser.add_argument(
+        '--complevel', type=int, default=0,
+        help='''set a compression level (0 for no compression, which is the
+        default)''',
+    )
+    parser.add_argument(
+        '--complib', choices=(
+            "zlib", "lzo", "bzip2", "blosc", "blosc:blosclz",
+            "blosc:lz4", "blosc:lz4hc", "blosc:snappy",
+            "blosc:zlib"), default='zlib',
+        help='''set the compression library to be used during the copy.
+        Defaults to %(default)s''',
+    )
+    parser.add_argument(
+        '--shuffle', type=int, choices=(0, 1),
+        help='''activate or not the shuffling filter (default is active if
+        complevel > 0)''',
+    )
+    parser.add_argument(
+        '--fletcher32', type=int, choices=(0, 1),
+        help='''whether to activate or not the fletcher32 filter (not active
+        by default)''',
+    )
+    parser.add_argument(
+        '--keep-source-filters', action='store_true', dest='keepfilters',
+        help='''use the original filters in source files.
+        The default is not doing that if any of --complevel, --complib,
+        --shuffle or --fletcher32 option is specified''',
+    )
+    parser.add_argument(
+        '--chunkshape', default='keep',
+        help='''set a chunkshape.
+        Possible options are: "keep" | "auto" | int | tuple.
+        A value of "auto" computes a sensible value for the chunkshape of the
+        leaves copied.  The default is to "keep" the original value''',
+    )
+    parser.add_argument(
+        '--upgrade-flavors', action='store_true', dest='upgradeflavors',
+        help='''when repacking PyTables 1.x or PyTables 2.x files, the flavor
+        of leaves will be unset. With this, such leaves will be serialized
+        as objects with the internal flavor ('numpy' for 3.x series)''',
+    )
+    parser.add_argument(
+        '--dont-regenerate-old-indexes', action='store_false', default=True,
+        dest='regoldindexes',
+        help='''disable regenerating old indexes.
+        The default is to regenerate old indexes as they are found''',
+    )
+    parser.add_argument(
+        '--sortby', metavar='COLUMN',
+        help='''do a table copy sorted by the index in "column".
+        For reversing the order, use a negative value in the "step" part of
+        "RANGE" (see "-r" flag).  Only applies to table objects''',
+    )
+    parser.add_argument(
+        '--checkCSI', action='store_true',
+        help='Force the check for a CSI index for the --sortby column',
+    )
+    parser.add_argument(
+        '--propindexes', action='store_true',
+        help='''propagate the indexes existing in original tables. The default
+        is to not propagate them.  Only applies to table objects''',
+    )
+    parser.add_argument(
+        'src', metavar='sourcefile:sourcegroup', help='source file/group',
+    )
+    parser.add_argument(
+        'dst', metavar='destfile:destgroup', help='destination file/group',
+    )
+
+    return parser
+
+
 def main():
     global verbose
     global regoldindexes
     global createsysattrs
 
-    usage = """usage: %s [-h] [-v] [-o] [-R start,stop,step] [--non-recursive] [--dest-title=title] [--dont-create-sysattrs] [--dont-copy-userattrs] [--overwrite-nodes] [--complevel=(0-9)] [--complib=lib] [--shuffle=(0|1)] [--fletcher32=(0|1)] [--keep-source-filters] [--chunkshape=value] [--upgrade-flavors] [--dont-regenerate-old-indexes] [--sortby=column] [--checkCSI] [--propindexes] sourcefile:sourcegroup destfile:destgroup
-     -h -- Print usage message.
-     -v -- Show more information.
-     -o -- Overwrite destination file.
-     -R RANGE -- Select a RANGE of rows (in the form "start,stop,step")
-         during the copy of *all* the leaves.  Default values are
-         "None,None,1", which means a copy of all the rows.
-     --non-recursive -- Do not do a recursive copy. Default is to do it.
-     --dest-title=title -- Title for the new file (if not specified,
-         the source is copied).
-     --dont-create-sysattrs -- Do not create sys attrs (default is to do it).
-     --dont-copy-userattrs -- Do not copy the user attrs (default is to do it).
-     --overwrite-nodes -- Overwrite destination nodes if they exist. Default is
-         to not overwrite them.
-     --complevel=(0-9) -- Set a compression level (0 for no compression, which
-         is the default).
-     --complib=lib -- Set the compression library to be used during the copy.
-         lib can be set to "zlib", "lzo", "bzip2" or "blosc".  Defaults to
-         "zlib".
-     --shuffle=(0|1) -- Activate or not the shuffling filter (default is active
-         if complevel>0).
-     --fletcher32=(0|1) -- Whether to activate or not the fletcher32 filter
-        (not active by default).
-     --keep-source-filters -- Use the original filters in source files. The
-         default is not doing that if any of --complevel, --complib, --shuffle
-         or --fletcher32 option is specified.
-     --chunkshape=("keep"|"auto"|int|tuple) -- Set a chunkshape.  A value
-         of "auto" computes a sensible value for the chunkshape of the
-         leaves copied.  The default is to "keep" the original value.
-     --upgrade-flavors -- When repacking PyTables 1.x files, the flavor of
-         leaves will be unset. With this, such a leaves will be serialized
-         as objects with the internal flavor ('numpy' for 2.x series).
-     --dont-regenerate-old-indexes -- Disable regenerating old indexes. The
-         default is to regenerate old indexes as they are found.
-     --sortby=column -- Do a table copy sorted by the index in "column".
-         For reversing the order, use a negative value in the "step" part of
-         "RANGE" (see "-R" flag).  Only applies to table objects.
-     --checkCSI -- Force the check for a CSI index for the --sortby column.
-     --propindexes -- Propagate the indexes existing in original tables.  The
-         default is to not propagate them.  Only applies to table objects.
-    \n""" % os.path.basename(sys.argv[0])
+    parser = _get_parser()
+    args = parser.parse_args()
 
-    try:
-        opts, pargs = getopt.getopt(sys.argv[1:], 'hvoR:',
-                                    ['non-recursive',
-                                     'dest-title=',
-                                     'dont-create-sysattrs',
-                                     'dont-copy-userattrs',
-                                     'overwrite-nodes',
-                                     'complevel=',
-                                     'complib=',
-                                     'shuffle=',
-                                     'fletcher32=',
-                                     'keep-source-filters',
-                                     'chunkshape=',
-                                     'upgrade-flavors',
-                                     'dont-regenerate-old-indexes',
-                                     'sortby=',
-                                     'checkCSI',
-                                     'propindexes',
-                                     ])
-    except:
-        (type, value, traceback) = sys.exc_info()
-        print "Error parsing the options. The error was:", value
-        sys.stderr.write(usage)
-        sys.exit(0)
-
-    # default options
-    overwritefile = False
-    keepfilters = False
-    chunkshape = "keep"
-    complevel = None
-    complib = None
-    shuffle = None
-    fletcher32 = None
-    title = ""
-    copyuserattrs = True
-    rng = None
-    recursive = True
-    overwrtnodes = False
-    upgradeflavors = False
-    sortby = None
-    checkCSI = False
-    propindexes = False
-
-    # Get the options
-    for option in opts:
-        if option[0] == '-h':
-            sys.stderr.write(usage)
-            sys.exit(0)
-        elif option[0] == '-v':
-            verbose = True
-        elif option[0] == '-o':
-            overwritefile = True
-        elif option[0] == '-R':
-            try:
-                rng = eval("slice("+option[1]+")")
-            except:
-                print "Error when getting the range parameter."
-                (type, value, traceback) = sys.exc_info()
-                print "  The error was:", value
-                sys.stderr.write(usage)
-                sys.exit(0)
-        elif option[0] == '--dest-title':
-            title = option[1]
-        elif option[0] == '--dont-create-sysattrs':
-            createsysattrs = False
-        elif option[0] == '--dont-copy-userattrs':
-            copyuserattrs = False
-        elif option[0] == '--non-recursive':
-            recursive = False
-        elif option[0] == '--overwrite-nodes':
-            overwrtnodes = True
-        elif option[0] == '--keep-source-filters':
-            keepfilters = True
-        elif option[0] == '--chunkshape':
-            chunkshape = option[1]
-            if chunkshape.isdigit() or chunkshape.startswith('('):
-                chunkshape = eval(chunkshape)
-        elif option[0] == '--upgrade-flavors':
-            upgradeflavors = True
-        elif option[0] == '--dont-regenerate-old-indexes':
-            regoldindexes = False
-        elif option[0] == '--complevel':
-            complevel = int(option[1])
-        elif option[0] == '--complib':
-            complib = option[1]
-        elif option[0] == '--shuffle':
-            shuffle = int(option[1])
-        elif option[0] == '--fletcher32':
-            fletcher32 = int(option[1])
-        elif option[0] == '--sortby':
-            sortby = option[1]
-        elif option[0] == '--propindexes':
-            propindexes = True
-        elif option[0] == '--checkCSI':
-            checkCSI = True
-        else:
-            print option[0], ": Unrecognized option"
-            sys.stderr.write(usage)
-            sys.exit(0)
+    # check arguments
+    if args.rng:
+        try:
+            args.rng = eval("slice(" + args.rng + ")")
+        except Exception:
+            parser.error("Error when getting the range parameter.")
 
-    # if we pass a number of files different from 2, abort
-    if len(pargs) != 2:
-        print "You need to pass both source and destination!."
-        sys.stderr.write(usage)
-        sys.exit(0)
+    if args.chunkshape.isdigit() or args.chunkshape.startswith('('):
+        args.chunkshape = eval(args.chunkshape)
+
+    if args.complevel < 0 or args.complevel > 9:
+        parser.error(
+            'invalid "complevel" value, it should be in the range [0, 9]'
+        )
 
     # Catch the files passed as the last arguments
-    src = pargs[0].split(':')
-    dst = pargs[1].split(':')
+    src = args.src.split(':')
+    dst = args.dst.split(':')
     if len(src) == 1:
         srcfile, srcnode = src[0], "/"
     else:
@@ -449,53 +447,66 @@ def main():
     # Ignore the warnings for tables that contains oldindexes
     # (these will be handled by the copying routines)
     warnings.filterwarnings("ignore", category=OldIndexWarning)
+
     # Ignore the flavors warnings during upgrading flavor operations
-    if upgradeflavors:
+    if args.upgradeflavors:
         warnings.filterwarnings("ignore", category=FlavorWarning)
 
     # Build the Filters instance
-    if ((complevel, complib, shuffle, fletcher32) == (None,)*4 or keepfilters):
+    filter_params = (
+        args.complevel,
+        args.complib,
+        args.shuffle,
+        args.fletcher32,
+    )
+    if (filter_params == (None,) * 4 or args.keepfilters):
         filters = None
     else:
-        if complevel is None:
-            complevel = 0
-        if shuffle is None:
-            if complevel > 0:
-                shuffle = True
+        if args.complevel is None:
+            args.complevel = 0
+        if args.shuffle is None:
+            if args.complevel > 0:
+                args.shuffle = True
             else:
-                shuffle = False
-        if complib is None:
-            complib = "zlib"
-        if fletcher32 is None:
-            fletcher32 = False
-        filters = Filters(complevel=complevel, complib=complib,
-                          shuffle=shuffle, fletcher32=fletcher32)
+                args.shuffle = False
+        if args.complib is None:
+            args.complib = "zlib"
+        if args.fletcher32 is None:
+            args.fletcher32 = False
+        filters = Filters(complevel=args.complevel, complib=args.complib,
+                          shuffle=args.shuffle, fletcher32=args.fletcher32)
 
     # The start, stop and step params:
     start, stop, step = None, None, 1  # Defaults
-    if rng:
-        start, stop, step = rng.start, rng.stop, rng.step
+    if args.rng:
+        start, stop, step = args.rng.start, args.rng.stop, args.rng.step
+
+    # Set globals
+    verbose = args.verbose
+    regoldindexes = args.regoldindexes
+    createsysattrs = args.createsysattrs
 
     # Some timing
     t1 = time.time()
     cpu1 = time.clock()
     # Copy the file
     if verbose:
-        print "+=+"*20
-        print "Recursive copy:", recursive
-        print "Applying filters:", filters
-        if sortby is not None:
-            print "Sorting table(s) by column:", sortby
-            print "Forcing a CSI creation:", checkCSI
-        if propindexes:
-            print "Recreating indexes in copied table(s)"
-        print "Start copying %s:%s to %s:%s" % (srcfile, srcnode,
-                                                dstfile, dstnode)
-        print "+=+"*20
+        print("+=+" * 20)
+        print("Recursive copy:", args.recursive)
+        print("Applying filters:", filters)
+        if args.sortby is not None:
+            print("Sorting table(s) by column:", args.sortby)
+            print("Forcing a CSI creation:", args.checkCSI)
+        if args.propindexes:
+            print("Recreating indexes in copied table(s)")
+        print("Start copying %s:%s to %s:%s" % (srcfile, srcnode,
+                                                dstfile, dstnode))
+        print("+=+" * 20)
 
     # Check whether the specified source node is a group or a leaf
     h5srcfile = open_file(srcfile, 'r')
     srcnodeobject = h5srcfile.get_node(srcnode)
+
     # Close the file again
     h5srcfile.close()
 
@@ -503,29 +514,32 @@ def main():
     if isinstance(srcnodeobject, Group):
         copy_children(
             srcfile, dstfile, srcnode, dstnode,
-            title=title, recursive=recursive, filters=filters,
-            copyuserattrs=copyuserattrs, overwritefile=overwritefile,
-            overwrtnodes=overwrtnodes, stats=stats,
-            start=start, stop=stop, step=step, chunkshape=chunkshape,
-            sortby=sortby, checkCSI=checkCSI, propindexes=propindexes,
-            upgradeflavors=upgradeflavors)
+            title=args.title, recursive=args.recursive, filters=filters,
+            copyuserattrs=args.copyuserattrs, overwritefile=args.overwritefile,
+            overwrtnodes=args.overwrtnodes, stats=stats,
+            start=start, stop=stop, step=step, chunkshape=args.chunkshape,
+            sortby=args.sortby, checkCSI=args.checkCSI,
+            propindexes=args.propindexes,
+            upgradeflavors=args.upgradeflavors)
     else:
         # If not a Group, it should be a Leaf
         copy_leaf(
             srcfile, dstfile, srcnode, dstnode,
-            title=title, filters=filters, copyuserattrs=copyuserattrs,
-            overwritefile=overwritefile, overwrtnodes=overwrtnodes,
+            title=args.title, filters=filters,
+            copyuserattrs=args.copyuserattrs,
+            overwritefile=args.overwritefile, overwrtnodes=args.overwrtnodes,
             stats=stats, start=start, stop=stop, step=step,
-            chunkshape=chunkshape,
-            sortby=sortby, checkCSI=checkCSI, propindexes=propindexes,
-            upgradeflavors=upgradeflavors)
+            chunkshape=args.chunkshape,
+            sortby=args.sortby, checkCSI=args.checkCSI,
+            propindexes=args.propindexes,
+            upgradeflavors=args.upgradeflavors)
 
     # Gather some statistics
     t2 = time.time()
     cpu2 = time.clock()
-    tcopy = round(t2-t1, 3)
-    cpucopy = round(cpu2-cpu1, 3)
-    tpercent = int(round(cpucopy/tcopy, 2)*100)
+    tcopy = round(t2 - t1, 3)
+    cpucopy = round(cpu2 - cpu1, 3)
+    tpercent = int(round(cpucopy / tcopy, 2) * 100)
 
     if verbose:
         ngroups = stats['groups']
@@ -534,16 +548,17 @@ def main():
         nbytescopied = stats['bytes']
         nnodes = ngroups + nleaves + nlinks
 
-        print \
-            "Groups copied:", ngroups, \
-            " Leaves copied:", nleaves, \
-            " Links copied:", nlinks
-        if copyuserattrs:
-            print "User attrs copied"
+        print("Groups copied:", ngroups,
+              " Leaves copied:", nleaves,
+              " Links copied:", nlinks)
+        if args.copyuserattrs:
+            print("User attrs copied")
         else:
-            print "User attrs not copied"
-        print "KBytes copied:", round(nbytescopied/1024., 3)
-        print "Time copying: %s s (real) %s s (cpu)  %s%%" % \
-              (tcopy, cpucopy, tpercent)
-        print "Copied nodes/sec: ", round((nnodes) / float(tcopy), 1)
-        print "Copied KB/s :", int(nbytescopied / (tcopy * 1024))
+            print("User attrs not copied")
+        print("KBytes copied:", round(nbytescopied / 1024., 3))
+        print("Time copying: %s s (real) %s s (cpu)  %s%%" % (
+            tcopy, cpucopy, tpercent))
+        print("Copied nodes/sec: ", round((nnodes) / float(tcopy), 1))
+        print("Copied KB/s :", int(nbytescopied / (tcopy * 1024)))
diff --git a/tables/table.py b/tables/table.py
index 1030ec0..29262e1 100644
--- a/tables/table.py
+++ b/tables/table.py
@@ -242,7 +242,7 @@ def _table__where_indexed(self, compiled, condition, condvars,
         # Get the row sequence from the cache
         seq = self._seqcache.getitem(nslot)
         if len(seq) == 0:
-            return iter([])
+            return None
         seq = numpy.array(seq, dtype='int64')
         # Correct the ranges in cached sequence
         if (start, stop, step) != (0, self.nrows, 1):
@@ -284,14 +284,14 @@ def _table__where_indexed(self, compiled, condition, condvars,
 
     if index.reduction == 1 and tcoords == 0:
         # No candidates found in any indexed expression component, so leave now
-        return iter([])
+        return None
 
     # Compute the final chunkmap
     chunkmap = numexpr.evaluate(strexpr, cmvars)
     # Method .any() is twice as faster than method .sum()
     if not chunkmap.any():
         # The chunkmap is empty
-        return iter([])
+        return None
 
     if profile:
         show_stats("Exiting table_whereIndexed", tref)
@@ -1156,7 +1156,11 @@ class Table(tableextension.Table, Leaf):
     _check_column = _get_column_instance
 
     def _disable_indexing_in_queries(self):
-        """Force queries not to use indexing.  *Use only for testing.*"""
+        """Force queries not to use indexing.
+
+        *Use only for testing.*
+
+        """
 
         if not self._enabled_indexing_in_queries:
             return  # already disabled
@@ -1168,7 +1172,11 @@ class Table(tableextension.Table, Leaf):
     _disableIndexingInQueries = previous_api(_disable_indexing_in_queries)
 
     def _enable_indexing_in_queries(self):
-        """Allow queries to use indexing.  *Use only for testing.*"""
+        """Allow queries to use indexing.
+
+        *Use only for testing.*
+
+        """
 
         if self._enabled_indexing_in_queries:
             return  # already enabled
@@ -1258,7 +1266,7 @@ class Table(tableextension.Table, Leaf):
                         "a multidimensional column, "
                         "not yet supported in conditions, sorry" % var)
                 if (val._table_file is not tblfile or
-                    val._table_path != tblpath):
+                        val._table_path != tblpath):
                     raise ValueError("variable ``%s`` refers to a column "
                                      "which is not part of table ``%s``"
                                      % (var, tblpath))
@@ -1350,7 +1358,7 @@ class Table(tableextension.Table, Leaf):
 
             # Get the set of columns with usable indexes.
             if (self._enabled_indexing_in_queries  # not test in-kernel searches
-               and self.colindexed[col.pathname] and not col.index.dirty):
+                    and self.colindexed[col.pathname] and not col.index.dirty):
                 indexedcols.append(colname)
 
         indexedcols = frozenset(indexedcols)
@@ -1443,7 +1451,7 @@ class Table(tableextension.Table, Leaf):
             >>> passvalues = [ row['col3'] for row in
             ...                table.where('(col1 > 0) & (col2 <= 20)', step=5)
             ...                if your_function(row['col2']) ]
-            >>> print "Values that pass the cuts:", passvalues
+            >>> print("Values that pass the cuts:", passvalues)
 
         Note that, from PyTables 1.1 on, you can nest several
         iterators over the same table. For example::
@@ -1451,15 +1459,51 @@ class Table(tableextension.Table, Leaf):
             for p in rout.where('pressure < 16'):
                 for q in rout.where('pressure < 9'):
                     for n in rout.where('energy < 10'):
-                        print "pressure, energy:", p['pressure'], n['energy']
+                        print("pressure, energy:", p['pressure'], n['energy'])
 
         In this example, iterators returned by :meth:`Table.where` have been
         used, but you may as well use any of the other reading iterators that
         Table objects offer. See the file :file:`examples/nested-iter.py` for
         the full code.
 
+        .. note::
+
+            Special care should be taken when the query condition includes
+            string literals.  Python 2 string literals are strings of
+            bytes, while Python 3 strings are unicode objects.
+
+            Let's assume that the table ``table`` has the following
+            structure::
+
+                class Record(IsDescription):
+                    col1 = StringCol(4)  # 4-character String of bytes
+                    col2 = IntCol()
+                    col3 = FloatCol()
+
+            The type of "col1" does not change with the Python version
+            used (of course); it always corresponds to strings of bytes.
+
+            Any condition involving "col1" should be written using the
+            appropriate type for string literals in order to avoid
+            :exc:`TypeError`\ s.
+
+            The code below will work fine in Python 2 but will fail with a
+            :exc:`TypeError` in Python 3::
+
+                condition = 'col1 == "AAAA"'
+                for record in table.where(condition):  # TypeError in Python3
+                    # do something with "record"
+
+            The reason is that in Python 3 "condition" implies a comparison
+            between a string of bytes ("col1" contents) and a unicode literal
+            ("AAAA").
+
+            The correct way to write the condition is::
+
+                condition = 'col1 == b"AAAA"'
+
         .. versionchanged:: 3.0
-        The start, stop and step parameters now behave like in slice.
+           The start, stop and step parameters now behave like in slice.
 
         """
 
@@ -1493,7 +1537,8 @@ class Table(tableextension.Table, Leaf):
                 self._use_index = False
                 self._where_condition = None
                 # ...and return the iterator
-                return chunkmap
+                if chunkmap is not None:
+                    return chunkmap
         else:
             chunkmap = None  # default to an in-kernel query
 
@@ -1698,7 +1743,7 @@ class Table(tableextension.Table, Leaf):
         :meth:`Table.read`.
 
         .. versionchanged:: 3.0
-        The start, stop and step parameters now behave like in slice.
+           The start, stop and step parameters now behave like in slice.
 
         """
 
@@ -1933,6 +1978,8 @@ class Table(tableextension.Table, Leaf):
     def _read_coordinates(self, coords, field=None):
         """Private part of `read_coordinates()` with no flavor conversion."""
 
+        coords = self._point_selection(coords)
+
         ncoords = len(coords)
         # Create a read buffer only if needed
         if field is None or ncoords > 0:
@@ -2081,8 +2128,7 @@ class Table(tableextension.Table, Leaf):
             return self.read(start, stop, step)
         # Try with a boolean or point selection
         elif type(key) in (list, tuple) or isinstance(key, numpy.ndarray):
-            coords = self._point_selection(key)
-            return self._read_coordinates(coords, None)
+            return self._read_coordinates(key, None)
         else:
             raise IndexError("Invalid index or slice: %r" % (key,))
 
@@ -2155,7 +2201,7 @@ class Table(tableextension.Table, Leaf):
             raise IndexError("Invalid index or slice: %r" % (key,))
 
     def _save_buffered_rows(self, wbufRA, lenrows):
-        """Update the indexes after a flushing of rows"""
+        """Update the indexes after a flushing of rows."""
 
         self._open_append(wbufRA)
         self._append_records(lenrows)
@@ -2223,7 +2269,7 @@ class Table(tableextension.Table, Leaf):
             # Works for Python structures and always copies the original,
             # so the resulting object is safe for in-place conversion.
             wbufRA = numpy.rec.array(rows, dtype=self._v_dtype)
-        except Exception, exc:  # XXX
+        except Exception as exc:  # XXX
             raise ValueError("rows parameter cannot be converted into a "
                              "recarray object compliant with table '%s'. "
                              "The error was: <%s>" % (str(self), exc))
@@ -2250,7 +2296,7 @@ class Table(tableextension.Table, Leaf):
                 # Works for Python structures and always copies the original,
                 # so the resulting object is safe for in-place conversion.
                 recarr = numpy.rec.array(obj, dtype=self._v_dtype)
-        except Exception, exc:  # XXX
+        except Exception as exc:  # XXX
             raise ValueError("Object cannot be converted into a recarray "
                              "object compliant with table format '%s'. "
                              "The error was: <%s>" %
@@ -2259,7 +2305,7 @@ class Table(tableextension.Table, Leaf):
         return recarr
 
     def modify_coordinates(self, coords, rows):
-        """Modify a series of rows in positions specified in coords
+        """Modify a series of rows in positions specified in coords.
 
         The values in the selected rows will be modified with the data given in
         rows.  This method returns the number of rows modified.
@@ -2397,7 +2443,7 @@ class Table(tableextension.Table, Leaf):
                 # so the resulting object is safe for in-place conversion.
                 iflavor = flavor_of(column)
                 column = array_as_internal(column, iflavor)
-        except Exception, exc:  # XXX
+        except Exception as exc:  # XXX
             raise ValueError("column parameter cannot be converted into a "
                              "ndarray object compliant with specified column "
                              "'%s'. The error was: <%s>" % (str(column), exc))
@@ -2480,7 +2526,7 @@ class Table(tableextension.Table, Leaf):
                 recarray = numpy.rec.array(columns, dtype=descr)
             else:
                 recarray = numpy.rec.fromarrays(columns, dtype=descr)
-        except Exception, exc:  # XXX
+        except Exception as exc:  # XXX
             raise ValueError("columns parameter cannot be converted into a "
                              "recarray object compliant with table '%s'. "
                              "The error was: <%s>" % (str(self), exc))
@@ -2539,7 +2585,7 @@ class Table(tableextension.Table, Leaf):
     flushRowsToIndex = previous_api(flush_rows_to_index)
 
     def _add_rows_to_index(self, colname, start, nrows, lastrow, update):
-        """Add more elements to the existing index"""
+        """Add more elements to the existing index."""
 
         # This method really belongs to Column, but since it makes extensive
         # use of the table, it gets dangerous when closing the file, since the
@@ -2573,7 +2619,7 @@ class Table(tableextension.Table, Leaf):
         """Remove a range of rows in the table.
 
         .. versionchanged:: 3.0
-        The start, stop and step parameters now behave like in slice.
+           The start, stop and step parameters now behave like in slice.
 
         .. seealso:: remove_row()
 
@@ -2766,9 +2812,9 @@ class Table(tableextension.Table, Leaf):
     def reindex(self):
         """Recompute all the existing indexes in the table.
 
-        This can be useful when you suspect that, for any reason, the index
-        information for columns is no longer valid and want to rebuild the
-        indexes on it.
+        This can be useful when you suspect that, for any reason, the
+        index information for columns is no longer valid and want to
+        rebuild the indexes on it.
 
         """
 
@@ -2855,7 +2901,7 @@ class Table(tableextension.Table, Leaf):
 
     def _g_copy_with_stats(self, group, name, start, stop, step,
                            title, filters, chunkshape, _log, **kwargs):
-        """Private part of Leaf.copy() for each kind of leaf"""
+        """Private part of Leaf.copy() for each kind of leaf."""
 
         # Get the private args for the Table flavor of copy()
         sortby = kwargs.pop('sortby', None)
@@ -3068,8 +3114,9 @@ class Cols(object):
     def _g_gettable(self):
         return self._v__tableFile._get_node(self._v__tablePath)
 
-    _v_table = property(_g_gettable, None, None,
-                    "The parent Table instance (see :ref:`TableClassDescr`).")
+    _v_table = property(
+        _g_gettable, None, None,
+        "The parent Table instance (see :ref:`TableClassDescr`).")
 
     def __init__(self, table, desc):
 
@@ -3414,6 +3461,7 @@ class Column(object):
         """Get the number of elements in the column.
 
         This matches the length in rows of the parent table.
+
         """
 
         return self.table.nrows
@@ -3431,14 +3479,14 @@ class Column(object):
 
         ::
 
-            print "Column handlers:"
+            print("Column handlers:")
             for name in table.colnames:
-                print table.cols._f_col(name)
-                print "Select table.cols.name[1]-->", table.cols.name[1]
-                print "Select table.cols.name[1:2]-->", table.cols.name[1:2]
-                print "Select table.cols.name[:]-->", table.cols.name[:]
-                print "Select table.cols._f_col('name')[:]-->",
-                                                table.cols._f_col('name')[:]
+                print(table.cols._f_col(name))
+                print("Select table.cols.name[1]-->", table.cols.name[1])
+                print("Select table.cols.name[1:2]-->", table.cols.name[1:2])
+                print("Select table.cols.name[:]-->", table.cols.name[:])
+                print("Select table.cols._f_col('name')[:]-->",
+                                                table.cols._f_col('name')[:])
 
         The output of this for a certain arbitrary table is::
 
@@ -3609,7 +3657,7 @@ class Column(object):
         if kind not in kinds:
             raise ValueError("Kind must have any of these values: %s" % kinds)
         if (not isinstance(optlevel, (int, long)) or
-            (optlevel < 0 or optlevel > 9)):
+                (optlevel < 0 or optlevel > 9)):
             raise ValueError("Optimization level must be an integer in the "
                              "range 0-9")
         if filters is None:
@@ -3619,13 +3667,13 @@ class Column(object):
         else:
             if not os.path.isdir(tmp_dir):
                 raise ValueError("Temporary directory '%s' does not exist" %
-                                                                    tmp_dir)
+                                 tmp_dir)
         if (_blocksizes is not None and
-            (not isinstance(_blocksizes, tuple) or len(_blocksizes) != 4)):
+                (not isinstance(_blocksizes, tuple) or len(_blocksizes) != 4)):
             raise ValueError("_blocksizes must be a tuple with exactly 4 "
                              "elements")
         idxrows = _column__create_index(self, optlevel, kind, filters,
-                                       tmp_dir, _blocksizes, _verbose)
+                                        tmp_dir, _blocksizes, _verbose)
         return SizeType(idxrows)
 
     createIndex = previous_api(create_index)
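
For context, the parameters validated above belong to the public
Column.create_index() API; a hypothetical call (table layout and names
are illustrative):

    import tables

    class Particle(tables.IsDescription):
        energy = tables.FloatCol()

    h5file = tables.open_file('index_demo.h5', mode='w')
    table = h5file.create_table('/', 'particles', Particle)
    table.append([(float(i),) for i in range(1000)])
    table.flush()

    # optlevel must be an integer in [0, 9]; kind is one of 'ultralight',
    # 'light', 'medium' or 'full', as enforced by the checks above.
    nrows_indexed = table.cols.energy.create_index(optlevel=6, kind='medium')
    h5file.close()
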
@@ -3732,7 +3780,7 @@ class Column(object):
     removeIndex = previous_api(remove_index)
 
     def close(self):
-        """Close this column"""
+        """Close this column."""
 
         self.__dict__.clear()
 
diff --git a/tables/tableExtension.py b/tables/tableExtension.py
index 2fd3dbc..1e871ce 100644
--- a/tables/tableExtension.py
+++ b/tables/tableExtension.py
@@ -3,4 +3,4 @@ from tables.tableextension import *
 
 _warnmsg = ("tableExtension is pending deprecation, import tableextension instead. "
             "You may use the pt2to3 tool to update your source code.")
-warn(_warnmsg, PendingDeprecationWarning, stacklevel=2)
+warn(_warnmsg, DeprecationWarning, stacklevel=2)
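
Switching from PendingDeprecationWarning to DeprecationWarning signals
that removal of the compatibility shim is now scheduled.  Downstream test
suites can turn the warning into a hard error; a small sketch (not part
of the commit above):

    import warnings

    with warnings.catch_warnings():
        warnings.simplefilter('error', DeprecationWarning)
        # Raises the DeprecationWarning as an error; note that the
        # module-level warn() only fires on the first import.
        import tables.tableExtension
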
diff --git a/tables/tableextension.pyx b/tables/tableextension.pyx
index 489f58a..603fd98 100644
--- a/tables/tableextension.pyx
+++ b/tables/tableextension.pyx
@@ -162,7 +162,8 @@ cdef class Table(Leaf):
     cdef ndarray recarr
     cdef object  name
     cdef bytes encoded_title, encoded_complib, encoded_obversion
-    cdef char *ctitle = NULL, *cobversion = NULL
+    cdef char *ctitle = NULL
+    cdef char *cobversion = NULL
     cdef bytes encoded_name
     cdef char fieldname[128]
     cdef int i
@@ -264,7 +265,8 @@ cdef class Table(Leaf):
     """Open a nested type and return a nested dictionary as description."""
 
     cdef hid_t   member_type_id, native_member_type_id
-    cdef hsize_t nfields, dims[1]
+    cdef hsize_t nfields
+    cdef hsize_t dims[1]
     cdef size_t  itemsize
     cdef int     i
     cdef char    *c_colname
@@ -365,7 +367,8 @@ cdef class Table(Leaf):
 
     cdef hid_t   space_id, plist
     cdef size_t  type_size, size2
-    cdef hsize_t dims[1], chunksize[1]  # enough for unidimensional tables
+    cdef hsize_t dims[1]        # enough for unidimensional tables
+    cdef hsize_t chunksize[1]
     cdef H5D_layout_t layout
     cdef bytes encoded_name
 
@@ -531,7 +534,8 @@ cdef class Table(Leaf):
   def _update_elements(self, hsize_t nrecords, ndarray coords,
                        ndarray recarr):
     cdef herr_t ret
-    cdef void *rbuf, *rcoords
+    cdef void *rbuf
+    cdef void *rcoords
 
     # Get the chunk of the coords that correspond to a buffer
     rcoords = coords.data
@@ -609,7 +613,8 @@ cdef class Table(Leaf):
 
   def _read_elements(self, ndarray coords, ndarray recarr):
     cdef long nrecords
-    cdef void *rbuf, *rbuf2
+    cdef void *rbuf
+    cdef void *rbuf2
     cdef int ret
 
     # Get the chunk of the coords that correspond to a buffer
@@ -709,8 +714,10 @@ cdef class Row:
   cdef int     _bufferinfo_done, sss_on
   cdef int     iterseq_max_elements
   cdef ndarray bufcoords, indexvalid, indexvalues, chunkmap
-  cdef hsize_t *bufcoords_data, *index_values_data
-  cdef char    *chunkmap_data, *index_valid_data
+  cdef hsize_t *bufcoords_data
+  cdef hsize_t *index_values_data
+  cdef char    *chunkmap_data
+  cdef char    *index_valid_data
   cdef object  dtype
   cdef object  iobuf, iobufcpy
   cdef object  wrec, wreccpy
@@ -831,7 +838,7 @@ cdef class Row:
     self.step = step
     self.coords = coords
     self.startb = 0
-    if step > 0: 
+    if step > 0:
         self._row = -1  # a sentinel
         self.nrowsread = start
     elif step < 0:
@@ -1036,7 +1043,7 @@ cdef class Row:
         # All the elements have been read for this mode
         self._finish_riterator()
     elif 0 > self.step:
-      #print "self.nextelement = ", self.nextelement, self.start, self.nrowsread, self.nextelement <  self.start - self.nrowsread + 1
+      #print("self.nextelement = ", self.nextelement, self.start, self.nrowsread, self.nextelement <  self.start - self.nrowsread + 1)
       while self.nextelement - 1 > self.stop:
         if self.nextelement < self.start - (<long long> self.nrowsread) + 1:
           if 0 > self.nextelement - (<long long> self.nrowsinbuf) + 1:
@@ -1050,9 +1057,9 @@ cdef class Row:
           self._row = len(self.bufcoords) - 1
         else:
           self._row = (self._row + self.step) % len(self.bufcoords)
-            
+
         self._nrow = self.nextelement - self.step
-        self.nextelement = self.nextelement + self.step 
+        self.nextelement = self.nextelement + self.step
         # Return this value
         return self
       else:
@@ -1100,7 +1107,7 @@ cdef class Row:
               correct = (self.nextelement - self.start) % self.step
               self.nextelement = self.nextelement - correct
           continue
-      
+
       self._row = self._row + self.step
       self._nrow = self.nextelement
       if self._row + self.step >= self.stopb:
@@ -1151,15 +1158,15 @@ cdef class Row:
       while self.nextelement - 1 > self.stop:
         if self.nextelement < self.start - self.nrowsread + 1:
           # Read a chunk
-          recout = self.table._read_records(self.nextelement - self.nrowsinbuf + 1, 
+          recout = self.table._read_records(self.nextelement - self.nrowsinbuf + 1,
                                             self.nrowsinbuf, self.iobuf)
           self.nrowsread = self.nrowsread + self.nrowsinbuf
           self._row = self.nrowsinbuf - 1
         else:
           self._row = (self._row + self.step) % self.nrowsinbuf
-            
+
         self._nrow = self.nextelement - self.step
-        self.nextelement = self.nextelement + self.step 
+        self.nextelement = self.nextelement + self.step
         # Return this value
         return self
       else:
@@ -1223,13 +1230,13 @@ cdef class Row:
         i = i + inrowsinbuf
     elif 0 > istep:
       inrowsinbuf = self.nrowsinbuf
-      #istartb = self.startb 
+      #istartb = self.startb
       istartb = self.nrowsinbuf - 1
       #istopb = self.stopb - 1
       istopb = -1
       startr = 0
       i = istart
-      inextelement = istart  
+      inextelement = istart
       inrowsread = 0
       while i-1 > istop:
         #if (inextelement <= inrowsread + inrowsinbuf):
@@ -1240,7 +1247,7 @@ cdef class Row:
         # Compute the end for this iteration
         stopr = startr + ((istopb - istartb - 1) / istep)
         # Read a chunk
-        inrowsread = inrowsread + self.table._read_records(i - inrowsinbuf + 1, 
+        inrowsread = inrowsread + self.table._read_records(i - inrowsinbuf + 1,
                                                            inrowsinbuf, self.iobuf)
         # Assign the correct part to result
         fields = self.iobuf
@@ -1253,7 +1260,7 @@ cdef class Row:
 
         # Compute some indexes for the next iteration
         startr = stopr
-        istartb = (i - istartb)%inrowsinbuf 
+        istartb = (i - istartb)%inrowsinbuf
         inextelement = inextelement + istep
         i = i - inrowsinbuf
     self._riterator = 0  # out of iterator
diff --git a/tables/tests/__init__.py b/tables/tests/__init__.py
index 23af537..1512a6f 100644
--- a/tables/tests/__init__.py
+++ b/tables/tests/__init__.py
@@ -10,11 +10,11 @@
 #
 ########################################################################
 
-"""Unit tests for PyTables
+"""Unit tests for PyTables.
 
-This package contains some modules which provide a ``suite()``
-function (with no arguments) which returns a test suite for some
-PyTables functionality.
+This package contains some modules which provide a ``suite()`` function
+(with no arguments) which returns a test suite for some PyTables
+functionality.
 
 """
 
diff --git a/tables/tests/check_leaks.py b/tables/tests/check_leaks.py
index 89f2487..111d6dc 100644
--- a/tables/tests/check_leaks.py
+++ b/tables/tests/check_leaks.py
@@ -1,8 +1,9 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import os
-import popen2
 import time
+
 import tables
 
 tref = time.time()
@@ -12,35 +13,34 @@ trel = tref
 def show_mem(explain):
     global tref, trel
 
-    cmd = "cat /proc/%s/status" % os.getpid()
-    sout, sin = popen2.popen2(cmd)
-    for line in sout:
-        if line.startswith("VmSize:"):
-            vmsize = int(line.split()[1])
-        elif line.startswith("VmRSS:"):
-            vmrss = int(line.split()[1])
-        elif line.startswith("VmData:"):
-            vmdata = int(line.split()[1])
-        elif line.startswith("VmStk:"):
-            vmstk = int(line.split()[1])
-        elif line.startswith("VmExe:"):
-            vmexe = int(line.split()[1])
-        elif line.startswith("VmLib:"):
-            vmlib = int(line.split()[1])
-    sout.close()
-    sin.close()
-    print "\nMemory usage: ******* %s *******" % explain
-    print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
-    print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
-    print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
-    print "WallClock time:", time.time() - tref,
-    print "  Delta time:", time.time() - trel
+    filename = "/proc/%s/status" % os.getpid()
+    with open(filename) as fd:
+        for line in fd:
+            if line.startswith("VmSize:"):
+                vmsize = int(line.split()[1])
+            elif line.startswith("VmRSS:"):
+                vmrss = int(line.split()[1])
+            elif line.startswith("VmData:"):
+                vmdata = int(line.split()[1])
+            elif line.startswith("VmStk:"):
+                vmstk = int(line.split()[1])
+            elif line.startswith("VmExe:"):
+                vmexe = int(line.split()[1])
+            elif line.startswith("VmLib:"):
+                vmlib = int(line.split()[1])
+
+    print("\nMemory usage: ******* %s *******" % explain)
+    print("VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss))
+    print("VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk))
+    print("VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib))
+    print("WallClock time:", time.time() - tref, end=' ')
+    print("  Delta time:", time.time() - trel)
     trel = time.time()
 
 
-def write_group(file, nchildren, niter):
+def write_group(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             fileh.create_group(fileh.root, 'group' + str(child),
                                "child: %d" % child)
@@ -49,9 +49,9 @@ def write_group(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_group(file, nchildren, niter):
+def read_group(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'group' + str(child))
             assert node is not None
@@ -63,9 +63,9 @@ def read_group(file, nchildren, niter):
         show_mem("After close")
 
 
-def write_array(file, nchildren, niter):
+def write_array(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             fileh.create_array(fileh.root, 'array' + str(child),
                                [1, 1], "child: %d" % child)
@@ -74,9 +74,9 @@ def write_array(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_array(file, nchildren, niter):
+def read_array(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'array' + str(child))
             # flavor = node._v_attrs.FLAVOR
@@ -94,9 +94,9 @@ def read_array(file, nchildren, niter):
         show_mem("After close")
 
 
-def write_carray(file, nchildren, niter):
+def write_carray(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             fileh.create_carray(fileh.root, 'array' + str(child),
                                 tables.IntAtom(), (2,), "child: %d" % child)
@@ -105,23 +105,23 @@ def write_carray(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_carray(file, nchildren, niter):
+def read_carray(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'array' + str(child))
             # flavor = node._v_attrs.FLAVOR
             data = node[:]  # Read data
             assert data is not None
-            # print "data-->", data
+            # print("data-->", data)
         show_mem("After reading data. Iter %s" % i)
         fileh.close()
         show_mem("After close")
 
 
-def write_earray(file, nchildren, niter):
+def write_earray(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             ea = fileh.create_earray(fileh.root, 'array' + str(child),
                                      tables.IntAtom(), shape=(0,),
@@ -132,23 +132,23 @@ def write_earray(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_earray(file, nchildren, niter):
+def read_earray(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'array' + str(child))
             # flavor = node._v_attrs.FLAVOR
             data = node[:]  # Read data
             assert data is not None
-            # print "data-->", data
+            # print("data-->", data)
         show_mem("After reading data. Iter %s" % i)
         fileh.close()
         show_mem("After close")
 
 
-def write_vlarray(file, nchildren, niter):
+def write_vlarray(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             vl = fileh.create_vlarray(fileh.root, 'array' + str(child),
                                       tables.IntAtom(), "child: %d" % child)
@@ -158,21 +158,21 @@ def write_vlarray(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_vlarray(file, nchildren, niter):
+def read_vlarray(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'array' + str(child))
             # flavor = node._v_attrs.FLAVOR
             data = node[:]  # Read data
             assert data is not None
-            # print "data-->", data
+            # print("data-->", data)
         show_mem("After reading data. Iter %s" % i)
         fileh.close()
         show_mem("After close")
 
 
-def write_table(file, nchildren, niter):
+def write_table(filename, nchildren, niter):
 
     class Record(tables.IsDescription):
         var1 = tables.IntCol(pos=1)
@@ -180,7 +180,7 @@ def write_table(file, nchildren, niter):
         var3 = tables.FloatCol(pos=3)
 
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             t = fileh.create_table(fileh.root, 'table' + str(child),
                                    Record, "child: %d" % child)
@@ -190,21 +190,21 @@ def write_table(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_table(file, nchildren, niter):
+def read_table(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'table' + str(child))
             # klass = node._v_attrs.CLASS
             data = node[:]  # Read data
             assert data is not None
-            # print "data-->", data
+            # print("data-->", data)
         show_mem("After reading data. Iter %s" % i)
         fileh.close()
         show_mem("After close")
 
 
-def write_xtable(file, nchildren, niter):
+def write_xtable(filename, nchildren, niter):
 
     class Record(tables.IsDescription):
         var1 = tables.IntCol(pos=1)
@@ -212,7 +212,7 @@ def write_xtable(file, nchildren, niter):
         var3 = tables.FloatCol(pos=3)
 
     for i in range(niter):
-        fileh = tables.open_file(file, mode="w")
+        fileh = tables.open_file(filename, mode="w")
         for child in range(nchildren):
             t = fileh.create_table(fileh.root, 'table' + str(child),
                                    Record, "child: %d" % child)
@@ -223,14 +223,14 @@ def write_xtable(file, nchildren, niter):
         show_mem("After close")
 
 
-def read_xtable(file, nchildren, niter):
+def read_xtable(filename, nchildren, niter):
     for i in range(niter):
-        fileh = tables.open_file(file, mode="r")
+        fileh = tables.open_file(filename, mode="r")
         for child in range(nchildren):
             node = fileh.get_node(fileh.root, 'table' + str(child))
             # klass = node._v_attrs.CLASS
             # data = node[:]  # Read data
-            # print "data-->", data
+            # print("data-->", data)
         show_mem("After reading data. Iter %s" % i)
         fileh.close()
         show_mem("After close")
@@ -238,134 +238,105 @@ def read_xtable(file, nchildren, niter):
 
 
 if __name__ == '__main__':
-    import sys
-    import getopt
     import pstats
+    import argparse
     import profile as prof
 
-    usage = """usage: %s [-v] [-p] [-a] [-c] [-e] [-l] [-t] [-x] [-g] [-r] [-w] [-c nchildren] [-n iter] file
-            -v verbose
-            -p profile
-            -a create/read arrays  (default)
-            -c create/read carrays
-            -e create/read earrays
-            -l create/read vlrrays
-            -t create/read tables
-            -x create/read indexed tables
-            -g create/read groups
-            -r only read test
-            -w only write test
-            -n number of children (4000 is the default)
-            -i number of iterations (default is 3)
-            \n"""
-    try:
-        opts, pargs = getopt.getopt(sys.argv[1:], 'vpaceltxgrwn:i:')
-    except:
-        sys.stderr.write(usage)
-        sys.exit(0)
-
-    # if we pass too much parameters, abort
-    if len(pargs) != 1:
-        sys.stderr.write(usage)
-        sys.exit(0)
-
-    # default options
-    verbose = 0
-    profile = 0
-    array = 1
-    carray = 0
-    earray = 0
-    vlarray = 0
-    table = 0
-    xtable = 0
-    group = 0
-    write = 0
-    read = 0
-    nchildren = 1000
-    niter = 5
-
-    # Get the options
-    for option in opts:
-        if option[0] == '-v':
-            verbose = 1
-        elif option[0] == '-p':
-            profile = 1
-        elif option[0] == '-a':
-            carray = 1
-        elif option[0] == '-c':
-            array = 0
-            carray = 1
-        elif option[0] == '-e':
-            array = 0
-            earray = 1
-        elif option[0] == '-l':
-            array = 0
-            vlarray = 1
-        elif option[0] == '-t':
-            array = 0
-            table = 1
-        elif option[0] == '-x':
-            array = 0
-            xtable = 1
-        elif option[0] == '-g':
-            array = 0
-            cgroup = 1
-        elif option[0] == '-w':
-            write = 1
-        elif option[0] == '-r':
-            read = 1
-        elif option[0] == '-n':
-            nchildren = int(option[1])
-        elif option[0] == '-i':
-            niter = int(option[1])
-
-    # Catch the hdf5 file passed as the last argument
-    file = pargs[0]
-
-    if array:
+    def _get_parser():
+        parser = argparse.ArgumentParser(
+            description='Check for PyTables memory leaks.')
+        parser.add_argument('-v', '--verbose', action='store_true',
+                            help='enable verbose mode')
+        parser.add_argument('-p', '--profile', action='store_true',
+                            help='profile')
+        parser.add_argument('-a', '--array', action='store_true',
+                            help='create/read arrays (default)')
+        parser.add_argument('-c', '--carray', action='store_true',
+                            help='create/read carrays')
+        parser.add_argument('-e', '--earray', action='store_true',
+                            help='create/read earrays')
+        parser.add_argument('-l', '--vlarray', action='store_true',
+                            help='create/read vlarrays')
+        parser.add_argument('-t', '--table', action='store_true',
+                            help='create/read tables')
+        parser.add_argument('-x', '--indexed-table', action='store_true',
+                            dest='xtable', help='create/read indexed tables')
+        parser.add_argument('-g', '--group', action='store_true',
+                            help='create/read groups')
+        parser.add_argument('-r', '--read', action='store_true',
+                            help='only read test')
+        parser.add_argument('-w', '--write', action='store_true',
+                            help='only write test')
+        parser.add_argument('-n', '--nchildren', type=int, default=1000,
+                            help='number of children (%(default)d is the '
+                                 'default)')
+        parser.add_argument('-i', '--niter', type=int, default=3,
+                            help='number of iterations (default: %(default)d)')
+
+        parser.add_argument('filename', help='HDF5 file name')
+
+        return parser
+
+    parser = _get_parser()
+    args = parser.parse_args()
+
+    # set 'array' as the default when no other option has been specified
+    for name in ('carray', 'earray', 'vlarray', 'table', 'xtable', 'group'):
+        if getattr(args, name):
+            break
+    else:
+        args.array = True
+
+    filename = args.filename
+    nchildren = args.nchildren
+    niter = args.niter
+
+    if args.array:
         fwrite = 'write_array'
         fread = 'read_array'
-    elif carray:
+    elif args.carray:
         fwrite = 'write_carray'
         fread = 'read_carray'
-    elif earray:
+    elif args.earray:
         fwrite = 'write_earray'
         fread = 'read_earray'
-    elif vlarray:
+    elif args.vlarray:
         fwrite = 'write_vlarray'
         fread = 'read_vlarray'
-    elif table:
+    elif args.table:
         fwrite = 'write_table'
         fread = 'read_table'
-    elif xtable:
+    elif args.xtable:
         fwrite = 'write_xtable'
         fread = 'read_xtable'
-    elif group:
+    elif args.group:
         fwrite = 'write_group'
         fread = 'read_group'
 
     show_mem("Before open")
-    if write:
-        if profile:
-            prof.run(str(fwrite)+'(file, nchildren, niter)', 'write_file.prof')
+    if args.write:
+        if args.profile:
+            prof.run(str(fwrite)+'(filename, nchildren, niter)',
+                     'write_file.prof')
             stats = pstats.Stats('write_file.prof')
             stats.strip_dirs()
             stats.sort_stats('time', 'calls')
-            if verbose:
+            if args.verbose:
                 stats.print_stats()
             else:
                 stats.print_stats(20)
         else:
-            eval(fwrite+'(file, nchildren, niter)')
-    if read:
-        if profile:
-            prof.run(fread+'(file, nchildren, niter)', 'read_file.prof')
+            eval(fwrite+'(filename, nchildren, niter)')
+    if args.read:
+        if args.profile:
+            prof.run(fread+'(filename, nchildren, niter)', 'read_file.prof')
             stats = pstats.Stats('read_file.prof')
             stats.strip_dirs()
             stats.sort_stats('time', 'calls')
-            if verbose:
+            if args.verbose:
+                print('profile -verbose')
                 stats.print_stats()
             else:
                 stats.print_stats(20)
         else:
-            eval(fread+'(file, nchildren, niter)')
+            eval(fread+'(filename, nchildren, niter)')
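
The default-option logic above relies on Python's for/else construct: the
else suite runs only when the loop finishes without hitting break.  A
stripped-down illustration of the same idiom:

    class Args(object):
        carray = False
        earray = False
        table = False
        array = False

    args = Args()
    for name in ('carray', 'earray', 'table'):
        if getattr(args, name):
            break          # an explicit mode was requested
    else:
        args.array = True  # no break: fall back to the default mode

    assert args.array
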
diff --git a/tables/tests/common.py b/tables/tests/common.py
index e9cd66a..83e842e 100644
--- a/tables/tests/common.py
+++ b/tables/tests/common.py
@@ -10,28 +10,18 @@
 #
 ########################################################################
 
-"""Utilities for PyTables' test suites"""
+"""Utilities for PyTables' test suites."""
 
+from __future__ import print_function
 import os
 import sys
 import time
 import unittest
 import tempfile
 import warnings
-
 import os.path
 
-try:
-    # collections.Callable is new in python 2.6
-    from collections import Callable
-except ImportError:
-    is_callable = callable
-else:
-    def is_callable(x):
-        return isinstance(x, Callable)
-
 import numpy
-
 import tables
 
 verbose = False
@@ -61,24 +51,24 @@ def verbosePrint(string, nonl=False):
     if not verbose:
         return
     if nonl:
-        print string,
+        print(string, end=' ')
     else:
-        print string
+        print(string)
 
 
 def cleanup(klass):
     # klass.__dict__.clear()     # This is too hard. Don't do that
-#    print "Class attributes deleted"
+#    print("Class attributes deleted")
     for key in klass.__dict__:
         if not klass.__dict__[key].__class__.__name__ in ('instancemethod'):
             klass.__dict__[key] = None
 
 
 def allequal(a, b, flavor="numpy"):
-    """Checks if two numerical objects are equal"""
+    """Checks if two numerical objects are equal."""
 
-    # print "a-->", repr(a)
-    # print "b-->", repr(b)
+    # print("a-->", repr(a))
+    # print("b-->", repr(b))
     if not hasattr(b, "shape"):
         # Scalar case
         return a == b
@@ -89,13 +79,13 @@ def allequal(a, b, flavor="numpy"):
 
     if a.shape != b.shape:
         if verbose:
-            print "Shape is not equal:", a.shape, "!=", b.shape
+            print("Shape is not equal:", a.shape, "!=", b.shape)
         return 0
 
     # Way to check the type equality without byteorder considerations
     if hasattr(b, "dtype") and a.dtype.str[1:] != b.dtype.str[1:]:
         if verbose:
-            print "dtype is not equal:", a.dtype, "!=", b.dtype
+            print("dtype is not equal:", a.dtype, "!=", b.dtype)
         return 0
 
     # Rank-0 case
@@ -104,7 +94,7 @@ def allequal(a, b, flavor="numpy"):
             return 1
         else:
             if verbose:
-                print "Shape is not equal:", a.shape, "!=", b.shape
+                print("Shape is not equal:", a.shape, "!=", b.shape)
             return 0
 
     # null arrays
@@ -113,27 +103,27 @@ def allequal(a, b, flavor="numpy"):
             return 1
         else:
             if verbose:
-                print "length is not equal"
-                print "len(a.data) ==>", len(a.data)
-                print "len(b.data) ==>", len(b.data)
+                print("length is not equal")
+                print("len(a.data) ==>", len(a.data))
+                print("len(b.data) ==>", len(b.data))
             return 0
 
     # Multidimensional case
     result = (a == b)
     result = numpy.all(result)
     if not result and verbose:
-        print "Some of the elements in arrays are not equal"
+        print("Some of the elements in arrays are not equal")
 
     return result
 
 
 def areArraysEqual(arr1, arr2):
-    """
-    Are both `arr1` and `arr2` equal arrays?
+    """Are both `arr1` and `arr2` equal arrays?
 
     Arguments can be regular NumPy arrays, chararray arrays or
-    structured arrays (including structured record arrays).
-    They are checked for type and value equality.
+    structured arrays (including structured record arrays). They are
+    checked for type and value equality.
+
     """
 
     t1 = type(arr1)
@@ -152,20 +142,20 @@ def pyTablesTest(oldmethod):
         try:
             try:
                 return oldmethod(self, *args, **kwargs)
-            except SkipTest, se:
+            except SkipTest as se:
                 if se.args:
                     msg = se.args[0]
                 else:
                     msg = "<skipped>"
                 verbosePrint("\nSkipped test: %s" % msg)
-            except self.failureException, fe:
+            except self.failureException as fe:
                 if fe.args:
                     msg = fe.args[0]
                 else:
                     msg = "<failed>"
                 verbosePrint("\nTest failed: %s" % msg)
                 raise
-            except Exception, exc:
+            except Exception as exc:
                 cname = exc.__class__.__name__
                 verbosePrint("\nError in test::\n\n  %s: %s" % (cname, exc))
                 raise
@@ -190,7 +180,7 @@ class MetaPyTablesTestCase(type):
     def __new__(class_, name, bases, dict_):
         newdict = {}
         for (aname, avalue) in dict_.iteritems():
-            if is_callable(avalue) and aname.startswith('test'):
+            if callable(avalue) and aname.startswith('test'):
                 avalue = pyTablesTest(avalue)
             newdict[aname] = avalue
         return type.__new__(class_, name, bases, newdict)
@@ -218,14 +208,12 @@ class PyTablesTestCase(unittest.TestCase):
             methodName = self._getMethodName()
 
             title = "Running %s.%s" % (name, methodName)
-            print '%s\n%s' % (title, '-' * len(title))
+            print('%s\n%s' % (title, '-' * len(title)))
 
     @classmethod
     def _testFilename(class_, filename):
-        """
-        Returns an absolute version of the `filename`, taking care of
-        the location of the calling test case class.
-        """
+        """Returns an absolute version of the `filename`, taking care of the
+        location of the calling test case class."""
         modname = class_.__module__
         # When the definitive switch to ``setuptools`` is made,
         # this should definitely use the ``pkg_resouces`` API::
@@ -237,8 +225,7 @@ class PyTablesTestCase(unittest.TestCase):
         return os.path.join(dirname, filename)
 
     def failUnlessWarns(self, warnClass, callableObj, *args, **kwargs):
-        """
-        Fail unless a warning of class `warnClass` is issued.
+        """Fail unless a warning of class `warnClass` is issued.
 
         This method will fail if no warning belonging to the given
         `warnClass` is issued when invoking `callableObj` with arguments
@@ -247,6 +234,7 @@ class PyTablesTestCase(unittest.TestCase):
 
         This method returns the value returned by the call to
         `callableObj`.
+
         """
 
         issued = [False]  # let's avoid scoping problems ;)
@@ -302,12 +290,12 @@ class PyTablesTestCase(unittest.TestCase):
 
         try:
             callableObj(*args, **kwargs)
-        except excClass, exc:
-            print (
+        except excClass as exc:
+            print((
                 "Great!  The following ``%s`` was caught::\n"
                 "\n"
                 "  %s\n"
-                % (exc.__class__.__name__, exc))
+                % (exc.__class__.__name__, exc)))
         else:
             raise self.failureException(
                 "``%s`` was not raised" % excClass.__name__)
@@ -316,8 +304,8 @@ class PyTablesTestCase(unittest.TestCase):
 
     def _checkEqualityGroup(self, node1, node2, hardlink=False):
         if verbose:
-            print "Group 1:", node1
-            print "Group 2:", node2
+            print("Group 1:", node1)
+            print("Group 2:", node2)
         if hardlink:
             self.assertTrue(node1._v_pathname != node2._v_pathname,
                             "node1 and node2 have the same pathnames.")
@@ -329,8 +317,8 @@ class PyTablesTestCase(unittest.TestCase):
 
     def _checkEqualityLeaf(self, node1, node2, hardlink=False):
         if verbose:
-            print "Leaf 1:", node1
-            print "Leaf 2:", node2
+            print("Leaf 1:", node1)
+            print("Leaf 2:", node2)
         if hardlink:
             self.assertTrue(node1._v_pathname != node2._v_pathname,
                 "node1 and node2 have the same pathnames.")
@@ -343,11 +331,11 @@ class PyTablesTestCase(unittest.TestCase):
 
 class TempFileMixin:
     def setUp(self):
-        """
-        Set ``h5file`` and ``h5fname`` instance attributes.
+        """Set ``h5file`` and ``h5fname`` instance attributes.
 
         * ``h5fname``: the name of the temporary HDF5 file.
         * ``h5file``: the writable, empty, temporary HDF5 file.
+
         """
 
         self.h5fname = tempfile.mktemp(suffix='.h5')
@@ -366,6 +354,7 @@ class TempFileMixin:
 
         Returns a true or false value depending on whether the file was
         reopened or not.  If not, nothing is changed.
+
         """
 
         self.h5file.close()
@@ -394,11 +383,11 @@ class ShowMemTime(PyTablesTestCase):
                 vmexe = int(line.split()[1])
             elif line.startswith("VmLib:"):
                 vmlib = int(line.split()[1])
-        print "\nWallClock time:", time.time() - self.tref
-        print "Memory usage: ******* %s *******" % self._getName()
-        print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
-        print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
-        print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
+        print("\nWallClock time:", time.time() - self.tref)
+        print("Memory usage: ******* %s *******" % self._getName())
+        print("VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss))
+        print("VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk))
+        print("VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib))
 
 
 ## Local Variables:
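
The hunks above apply the stock Python 2/3 source-compatibility recipe to
tables/tests/common.py: print statements become print() calls and the old
comma form of exception binding becomes ``except ... as``, the only spelling
Python 3 accepts.  A minimal standalone sketch of the target syntax, with
illustrative names only (not part of the patch):

    from __future__ import print_function  # print() is a function on Python 2.6+ too

    def run_checked(func):
        try:
            return func()
        except ValueError as exc:    # 'except ValueError, exc:' is a SyntaxError on Python 3
            print("caught:", exc)    # function-call form works on both major versions
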
diff --git a/tables/tests/test_all.py b/tables/tests/test_all.py
index 22c23d3..e0564f7 100644
--- a/tables/tests/test_all.py
+++ b/tables/tests/test_all.py
@@ -2,11 +2,12 @@
 
 """Run all test cases."""
 
-import os
+from __future__ import print_function
 import re
 import sys
 import locale
 import unittest
+import platform
 
 import numpy
 
@@ -54,7 +55,7 @@ def suite():
         'tables.nodes.tests.test_filenode',
     ]
 
-    # print '-=' * 38
+    # print('-=' * 38)
 
     # The test for garbage must be run *in the last place*.
     # Else, it is not as useful.
@@ -80,10 +81,11 @@ def suite():
 
 def print_versions():
     """Print all the versions of software that PyTables relies on."""
-    print '-=' * 38
-    print "PyTables version:  %s" % tables.__version__
-    print "HDF5 version:      %s" % tables.which_lib_version("hdf5")[1]
-    print "NumPy version:     %s" % numpy.__version__
+
+    print('-=' * 38)
+    print("PyTables version:  %s" % tables.__version__)
+    print("HDF5 version:      %s" % tables.which_lib_version("hdf5")[1])
+    print("NumPy version:     %s" % numpy.__version__)
     tinfo = tables.which_lib_version("zlib")
     if numexpr.use_vml:
         # Get only the main version number and strip out all the rest
@@ -92,46 +94,54 @@ def print_versions():
         vml_avail = "using VML/MKL %s" % vml_version
     else:
         vml_avail = "not using Intel's VML/MKL"
-    print "Numexpr version:   %s (%s)" % (numexpr.__version__, vml_avail)
+    print("Numexpr version:   %s (%s)" % (numexpr.__version__, vml_avail))
     if tinfo is not None:
-        print "Zlib version:      %s (%s)" % (tinfo[1], "in Python interpreter")
+        print("Zlib version:      %s (%s)" % (tinfo[1],
+                                              "in Python interpreter"))
     tinfo = tables.which_lib_version("lzo")
     if tinfo is not None:
-        print "LZO version:       %s (%s)" % (tinfo[1], tinfo[2])
+        print("LZO version:       %s (%s)" % (tinfo[1], tinfo[2]))
     tinfo = tables.which_lib_version("bzip2")
     if tinfo is not None:
-        print "BZIP2 version:     %s (%s)" % (tinfo[1], tinfo[2])
+        print("BZIP2 version:     %s (%s)" % (tinfo[1], tinfo[2]))
     tinfo = tables.which_lib_version("blosc")
     if tinfo is not None:
         blosc_date = tinfo[2].split()[1]
-        print "Blosc version:     %s (%s)" % (tinfo[1], blosc_date)
+        print("Blosc version:     %s (%s)" % (tinfo[1], blosc_date))
+        blosc_cnames = tables.blosc_compressor_list()
+        print("Blosc compressors: %s" % (blosc_cnames,))
     try:
         from Cython.Compiler.Main import Version as Cython_Version
-        print 'Cython version:    %s' % Cython_Version.version
+        print('Cython version:    %s' % Cython_Version.version)
     except:
         pass
-    print 'Python version:    %s' % sys.version
-    if os.name == 'posix':
-        (sysname, nodename, release, version, machine) = os.uname()
-        print 'Platform:          %s-%s' % (sys.platform, machine)
-    print 'Byte-ordering:     %s' % sys.byteorder
-    print 'Detected cores:    %s' % detect_number_of_cores()
-    print 'Default encoding:  %s' % sys.getdefaultencoding()
-    print '-=' * 38
+    print('Python version:    %s' % sys.version)
+    print('Platform:          %s' % platform.platform())
+    #if os.name == 'posix':
+    #    (sysname, nodename, release, version, machine) = os.uname()
+    #    print('Platform:          %s-%s' % (sys.platform, machine))
+    print('Byte-ordering:     %s' % sys.byteorder)
+    print('Detected cores:    %s' % detect_number_of_cores())
+    print('Default encoding:  %s' % sys.getdefaultencoding())
+    print('Default locale:    (%s, %s)' % locale.getdefaultlocale())
+    print('-=' * 38)
+
+    # This should improve readability when tests are run by CI tools
+    sys.stdout.flush()
 
 
 def print_heavy(heavy):
     if heavy:
-        print """\
-Performing the complete test suite!"""
+        print("""\
+Performing the complete test suite!""")
     else:
-        print """\
+        print("""\
 Performing only a light (yet comprehensive) subset of the test suite.
 If you want a more complete test, try passing the --heavy flag to this script
 (or set the 'heavy' parameter if you are using the tables.test() call).
 The whole suite will take more than 4 hours to complete on a relatively
-modern CPU and around 512 MB of main memory."""
-    print '-=' * 38
+modern CPU and around 512 MB of main memory.""")
+    print('-=' * 38)
 
 
 def test(verbose=False, heavy=False):
@@ -146,6 +156,7 @@ def test(verbose=False, heavy=False):
     resources from your computer).
 
     Return 0 (os.EX_OK) if all tests pass, 1 in case of failure
+
     """
 
     print_versions()
@@ -169,12 +180,12 @@ if __name__ == '__main__':
 
     hdf5_version = get_tuple_version(tables.which_lib_version("hdf5")[0])
     if hdf5_version < min_hdf5_version:
-        print "*Warning*: HDF5 version is lower than recommended: %s < %s" % \
-              (hdf5_version, min_hdf5_version)
+        print("*Warning*: HDF5 version is lower than recommended: %s < %s" %
+              (hdf5_version, min_hdf5_version))
 
     if numpy.__version__ < min_numpy_version:
-        print "*Warning*: NumPy version is lower than recommended: %s < %s" % \
-              (numpy.__version__, min_numpy_version)
+        print("*Warning*: NumPy version is lower than recommended: %s < %s" %
+              (numpy.__version__, min_numpy_version))
 
     # Handle some global flags (i.e. only useful for test_all.py)
     only_versions = 0
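
The print_versions() rewrite above swaps the POSIX-only os.uname() call for
platform.platform(), which also works on Windows, and flushes stdout so that
CI tools which capture output line by line show the banner before the tests
start.  A condensed sketch of that reporting style (the function name is
illustrative, not part of the patch):

    from __future__ import print_function
    import sys
    import locale
    import platform

    def report_environment():
        print('Platform:          %s' % platform.platform())  # portable, unlike os.uname()
        print('Byte-ordering:     %s' % sys.byteorder)
        print('Default encoding:  %s' % sys.getdefaultencoding())
        print('Default locale:    (%s, %s)' % locale.getdefaultlocale())
        sys.stdout.flush()  # make the banner visible to line-buffered CI captures
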
diff --git a/tables/tests/test_array.py b/tables/tests/test_array.py
index 65e1393..cea4774 100644
--- a/tables/tests/test_array.py
+++ b/tables/tests/test_array.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -22,16 +23,19 @@ warnings.resetwarnings()
 
 class BasicTestCase(unittest.TestCase):
     """Basic test for all the supported typecodes present in numpy.
+
     All of them are included on pytables.
+
     """
     endiancheck = False
 
     def write_read(self, testarray):
         a = testarray
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running test for array with type '%s'" % a.dtype.type,
-            print "for class check:", self.title
+            print('\n', '-=' * 30)
+            print("Running test for array with type '%s'" % a.dtype.type,
+                  end=' ')
+            print("for class check:", self.title)
 
         # Create an instance of HDF5 file
         filename = tempfile.mktemp(".h5")
@@ -56,18 +60,18 @@ class BasicTestCase(unittest.TestCase):
 
                 # Compare them. They should be equal.
                 if common.verbose and not allequal(a, b):
-                    print "Write and read arrays differ!"
-                    # print "Array written:", a
-                    print "Array written shape:", a.shape
-                    print "Array written itemsize:", a.itemsize
-                    print "Array written type:", a.dtype.type
-                    # print "Array read:", b
-                    print "Array read shape:", b.shape
-                    print "Array read itemsize:", b.itemsize
-                    print "Array read type:", b.dtype.type
+                    print("Write and read arrays differ!")
+                    # print("Array written:", a)
+                    print("Array written shape:", a.shape)
+                    print("Array written itemsize:", a.itemsize)
+                    print("Array written type:", a.dtype.type)
+                    # print("Array read:", b)
+                    print("Array read shape:", b.shape)
+                    print("Array read itemsize:", b.itemsize)
+                    print("Array read type:", b.dtype.type)
                     if a.dtype.kind != "S":
-                        print "Array written byteorder:", a.dtype.byteorder
-                        print "Array read byteorder:", b.dtype.byteorder
+                        print("Array written byteorder:", a.dtype.byteorder)
+                        print("Array read byteorder:", b.dtype.byteorder)
 
                 # Check strictly the array equality
                 self.assertEqual(a.shape, b.shape)
@@ -108,9 +112,10 @@ class BasicTestCase(unittest.TestCase):
         a = testarray
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running test for array with type '%s'" % a.dtype.type,
-            print "for class check:", self.title
+            print('\n', '-=' * 30)
+            print("Running test for array with type '%s'" % a.dtype.type,
+                  end=' ')
+            print("for class check:", self.title)
 
         # Create an instance of HDF5 file
         filename = tempfile.mktemp(".h5")
@@ -163,9 +168,10 @@ class BasicTestCase(unittest.TestCase):
         byteorder = None
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running test for array with type '%s'" % a.dtype.type,
-            print "for class check:", self.title
+            print('\n', '-=' * 30)
+            print("Running test for array with type '%s'" % a.dtype.type,
+                  end=' ')
+            print("for class check:", self.title)
 
         # Create an instance of HDF5 file
         filename = tempfile.mktemp(".h5")
@@ -201,18 +207,18 @@ class BasicTestCase(unittest.TestCase):
 
                 # Compare them. They should be equal.
                 if common.verbose and not allequal(a, b):
-                    print "Write and read arrays differ!"
-                    # print "Array written:", a
-                    print "Array written shape:", a.shape
-                    print "Array written itemsize:", a.itemsize
-                    print "Array written type:", a.dtype.type
-                    # print "Array read:", b
-                    print "Array read shape:", b.shape
-                    print "Array read itemsize:", b.itemsize
-                    print "Array read type:", b.dtype.type
+                    print("Write and read arrays differ!")
+                    # print("Array written:", a)
+                    print("Array written shape:", a.shape)
+                    print("Array written itemsize:", a.itemsize)
+                    print("Array written type:", a.dtype.type)
+                    # print("Array read:", b)
+                    print("Array read shape:", b.shape)
+                    print("Array read itemsize:", b.itemsize)
+                    print("Array read type:", b.dtype.type)
                     if a.dtype.kind != "S":
-                        print "Array written byteorder:", a.dtype.byteorder
-                        print "Array read byteorder:", b.dtype.byteorder
+                        print("Array written byteorder:", a.dtype.byteorder)
+                        print("Array read byteorder:", b.dtype.byteorder)
 
                 # Check strictly the array equality
                 self.assertEqual(a.shape, b.shape)
@@ -399,7 +405,8 @@ class BasicTestCase(unittest.TestCase):
 
         for name in ('float16', 'float96', 'float128',
                      'complex192', 'complex256'):
-            if hasattr(numpy, name):
+            atomname = name.capitalize() + 'Atom'
+            if atomname in globals():
                 typecodes.append(name)
 
         for typecode in typecodes:
@@ -420,7 +427,8 @@ class BasicTestCase(unittest.TestCase):
 
         for name in ('float16', 'float96', 'float128',
                      'complex192', 'complex256'):
-            if hasattr(numpy, name):
+            atomname = name.capitalize() + 'Atom'
+            if atomname in globals():
                 typecodes.append(name)
 
         for typecode in typecodes:
@@ -638,7 +646,9 @@ class SizeOnDiskInMemoryPropertyTestCase(unittest.TestCase):
 
 class UnalignedAndComplexTestCase(unittest.TestCase):
     """Basic test for all the supported typecodes present in numpy.
+
     Most of them are included in PyTables.
+
     """
 
     def setUp(self):
@@ -656,9 +666,9 @@ class UnalignedAndComplexTestCase(unittest.TestCase):
 
     def write_read(self, testArray):
         if common.verbose:
-            print '\n', '-=' * 30
-            print "\nRunning test for array with type '%s'" % \
-                  testArray.dtype.type
+            print('\n', '-=' * 30)
+            print("\nRunning test for array with type '%s'" %
+                  testArray.dtype.type)
 
         # Create the array under root and name 'somearray'
         a = testArray
@@ -684,15 +694,15 @@ class UnalignedAndComplexTestCase(unittest.TestCase):
 
         # Compare them. They should be equal.
         if not allequal(c, b) and common.verbose:
-            print "Write and read arrays differ!"
-            print "Array written:", a
-            print "Array written shape:", a.shape
-            print "Array written itemsize:", a.itemsize
-            print "Array written type:", a.dtype.type
-            print "Array read:", b
-            print "Array read shape:", b.shape
-            print "Array read itemsize:", b.itemsize
-            print "Array read type:", b.dtype.type
+            print("Write and read arrays differ!")
+            print("Array written:", a)
+            print("Array written shape:", a.shape)
+            print("Array written itemsize:", a.itemsize)
+            print("Array written type:", a.dtype.type)
+            print("Array read:", b)
+            print("Array read shape:", b.shape)
+            print("Array read itemsize:", b.itemsize)
+            print("Array read type:", b.dtype.type)
 
         # Check strictly the array equality
         self.assertEqual(a.shape, b.shape)
@@ -817,10 +827,10 @@ class UnalignedAndComplexTestCase(unittest.TestCase):
         # Check that the array is back in the correct byteorder
         c = array[...]
         if common.verbose:
-            print "byteorder of array on disk-->", array.byteorder
-            print "byteorder of subarray-->", b.dtype.byteorder
-            print "subarray-->", b
-            print "retrieved array-->", c
+            print("byteorder of array on disk-->", array.byteorder)
+            print("byteorder of subarray-->", b.dtype.byteorder)
+            print("subarray-->", b)
+            print("retrieved array-->", c)
         self.assertTrue(allequal(a, c))
         # Close the file
         fileh.close()
@@ -851,10 +861,10 @@ class UnalignedAndComplexTestCase(unittest.TestCase):
         # Check that the array is back in the correct byteorder
         c = array[...]
         if common.verbose:
-            print "byteorder of array on disk-->", array.byteorder
-            print "byteorder of subarray-->", b.dtype.byteorder
-            print "subarray-->", b
-            print "retrieved array-->", c
+            print("byteorder of array on disk-->", array.byteorder)
+            print("byteorder of subarray-->", b.dtype.byteorder)
+            print("subarray-->", b)
+            print("retrieved array-->", c)
         self.assertTrue(allequal(a, c))
         # Close the file
         fileh.close()
@@ -889,9 +899,9 @@ class GroupsArrayTestCase(unittest.TestCase):
         """Checking combinations of arrays with groups."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_iterativeGroups..." % \
-                  self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_iterativeGroups..." %
+                  self.__class__.__name__)
 
         # Open a new empty HDF5 file
         file = tempfile.mktemp(".h5")
@@ -906,18 +916,18 @@ class GroupsArrayTestCase(unittest.TestCase):
         # http://projects.scipy.org/scipy/numpy/ticket/290
         typecodes = ['b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'f', 'd',
                      'F', 'D']
-        if hasattr(numpy, 'float16'):
+        if 'Float16Atom' in globals():
             typecodes.append('e')
-        if hasattr(numpy, 'float96') or hasattr(numpy, 'float128'):
+        if 'Float96Atom' in globals() or 'Float128Atom' in globals():
             typecodes.append('g')
-        if hasattr(numpy, 'complex192') or hasattr(numpy, 'complex256'):
+        if 'Complex192Atom' in globals() or 'Complex256Atom' in globals():
             typecodes.append('G')
 
         for i, typecode in enumerate(typecodes):
             a = numpy.ones((3,), typecode)
             dsetname = 'array_' + typecode
             if common.verbose:
-                print "Creating dataset:", group._g_join(dsetname)
+                print("Creating dataset:", group._g_join(dsetname))
             fileh.create_array(group, dsetname, a, "Large array")
             group = fileh.create_group(group, 'group' + str(i))
 
@@ -938,11 +948,11 @@ class GroupsArrayTestCase(unittest.TestCase):
             # Get the actual array
             b = dset.read()
             if common.verbose:
-                print "Info from dataset:", dset._v_pathname
-                print "  shape ==>", dset.shape,
-                print "  type ==> %s" % dset.atom.dtype
-                print "Array b read from file. Shape: ==>", b.shape,
-                print ". Type ==> %s" % b.dtype
+                print("Info from dataset:", dset._v_pathname)
+                print("  shape ==>", dset.shape, end=' ')
+                print("  type ==> %s" % dset.atom.dtype)
+                print("Array b read from file. Shape: ==>", b.shape, end=' ')
+                print(". Type ==> %s" % b.dtype)
             self.assertEqual(a.shape, b.shape)
             self.assertEqual(a.dtype, b.dtype)
             self.assertTrue(allequal(a, b))
@@ -969,22 +979,22 @@ class GroupsArrayTestCase(unittest.TestCase):
         maxrank = 32
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_largeRankArrays..." % \
-                  self.__class__.__name__
-            print "Maximum rank for tested arrays:", maxrank
+            print('\n', '-=' * 30)
+            print("Running %s.test01_largeRankArrays..." %
+                  self.__class__.__name__)
+            print("Maximum rank for tested arrays:", maxrank)
         # Open a new empty HDF5 file
         # file = tempfile.mktemp(".h5")
         file = "test_array.h5"
         fileh = open_file(file, mode="w")
         group = fileh.root
         if common.verbose:
-            print "Rank array writing progress: ",
+            print("Rank array writing progress: ", end=' ')
         for rank in range(minrank, maxrank + 1):
             # Create an array of integers, with incrementally bigger ranges
             a = numpy.ones((1,) * rank, numpy.int32)
             if common.verbose:
-                print "%3d," % (rank),
+                print("%3d," % (rank), end=' ')
             fileh.create_array(group, "array", a, "Rank: %s" % rank)
             group = fileh.create_group(group, 'group' + str(rank))
         # Flush the buffers
@@ -996,8 +1006,8 @@ class GroupsArrayTestCase(unittest.TestCase):
         fileh = open_file(file, mode="r")
         group = fileh.root
         if common.verbose:
-            print
-            print "Rank array reading progress: "
+            print()
+            print("Rank array reading progress: ")
         # Get the metadata on the previously saved arrays
         for rank in range(minrank, maxrank + 1):
             # Create an array for later comparison
@@ -1005,24 +1015,24 @@ class GroupsArrayTestCase(unittest.TestCase):
             # Get the actual array
             b = group.array.read()
             if common.verbose:
-                print "%3d," % (rank),
+                print("%3d," % (rank), end=' ')
             if common.verbose and not allequal(a, b):
-                print "Info from dataset:", dset._v_pathname
-                print "  Shape: ==>", dset.shape,
-                print "  typecode ==> %c" % dset.typecode
-                print "Array b read from file. Shape: ==>", b.shape,
-                print ". Type ==> %c" % b.dtype
+                print("Info from dataset:", dset._v_pathname)
+                print("  Shape: ==>", dset.shape, end=' ')
+                print("  typecode ==> %c" % dset.typecode)
+                print("Array b read from file. Shape: ==>", b.shape, end=' ')
+                print(". Type ==> %c" % b.dtype)
 
             self.assertEqual(a.shape, b.shape)
             self.assertEqual(a.dtype, b.dtype)
             self.assertTrue(allequal(a, b))
 
-            # print fileh
+            # print(fileh)
             # Iterate over the next group
             group = fileh.get_node(group, 'group' + str(rank))
 
         if common.verbose:
-            print  # This flush the stdout buffer
+            print()  # This flushes the stdout buffer
         # Close the file
         fileh.close()
 
@@ -1033,11 +1043,11 @@ class GroupsArrayTestCase(unittest.TestCase):
 class CopyTestCase(unittest.TestCase):
 
     def test01_copy(self):
-        """Checking Array.copy() method """
+        """Checking Array.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 file
         file = tempfile.mktemp(".h5")
@@ -1052,18 +1062,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1082,8 +1092,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Array.copy() method (where specified)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 file
         file = tempfile.mktemp(".h5")
@@ -1099,18 +1109,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.group1.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1129,8 +1139,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Array.copy() method (checking title copying)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 file
         file = tempfile.mktemp(".h5")
@@ -1147,7 +1157,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
@@ -1155,7 +1165,7 @@ class CopyTestCase(unittest.TestCase):
 
         # Assert user attributes
         if common.verbose:
-            print "title of destination array-->", array2.title
+            print("title of destination array-->", array2.title)
         self.assertEqual(array2.title, "title array2")
 
         # Close the file
@@ -1166,8 +1176,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Array.copy() method (user attributes copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 file
         file = tempfile.mktemp(".h5")
@@ -1184,15 +1194,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(array2.attrs.attr1, "attr1")
@@ -1206,8 +1216,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Array.copy() method (user attributes not copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 file
         file = tempfile.mktemp(".h5")
@@ -1224,15 +1234,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(hasattr(array2.attrs, "attr1"), 0)
@@ -1254,11 +1264,11 @@ class OpenCopyTestCase(CopyTestCase):
 class CopyIndexTestCase(unittest.TestCase):
 
     def test01_index(self):
-        """Checking Array.copy() method with indexes"""
+        """Checking Array.copy() method with indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_index..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_index..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -1276,10 +1286,10 @@ class CopyIndexTestCase(unittest.TestCase):
                              stop=self.stop,
                              step=self.step)
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -1287,8 +1297,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in array
         if common.verbose:
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], array2.nrows)
 
         # Close the file
@@ -1299,8 +1309,8 @@ class CopyIndexTestCase(unittest.TestCase):
         """Checking Array.copy() method with indexes (close file version)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_indexclosef..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_indexclosef..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -1324,10 +1334,10 @@ class CopyIndexTestCase(unittest.TestCase):
         array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -1335,8 +1345,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in array
         if common.verbose:
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], array2.nrows)
 
         # Close the file
@@ -1434,8 +1444,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original first element:", a[0], type(a[0])
-            print "Read first element:", arr[0], type(arr[0])
+            print("Original first element:", a[0], type(a[0]))
+            print("Read first element:", arr[0], type(arr[0]))
         self.assertTrue(allequal(a[0], arr[0]))
         self.assertEqual(type(a[0]), type(arr[0]))
 
@@ -1461,8 +1471,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original first element:", a[0], type(a[0])
-            print "Read first element:", arr[0], type(arr[0])
+            print("Original first element:", a[0], type(a[0]))
+            print("Read first element:", arr[0], type(arr[0]))
         self.assertEqual(a[0], arr[0])
         self.assertEqual(type(a[0]), type(arr[0]))
 
@@ -1488,8 +1498,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4]
-            print "Read elements:", arr[1:4]
+            print("Original elements:", a[1:4])
+            print("Read elements:", arr[1:4])
         self.assertTrue(allequal(a[1:4], arr[1:4]))
 
         # Close the file
@@ -1514,8 +1524,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4]
-            print "Read elements:", arr[1:4]
+            print("Original elements:", a[1:4])
+            print("Read elements:", arr[1:4])
         self.assertTrue(allequal(a[1:4], arr[1:4]))
 
         # Close the file
@@ -1540,8 +1550,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4:2]
-            print "Read elements:", arr[1:4:2]
+            print("Original elements:", a[1:4:2])
+            print("Read elements:", arr[1:4:2])
         self.assertTrue(allequal(a[1:4:2], arr[1:4:2]))
 
         # Close the file
@@ -1566,8 +1576,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4:2]
-            print "Read elements:", arr[1:4:2]
+            print("Original elements:", a[1:4:2])
+            print("Read elements:", arr[1:4:2])
         self.assertTrue(allequal(a[1:4:2], arr[1:4:2]))
         # Close the file
         fileh.close()
@@ -1591,8 +1601,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last element:", a[-1]
-            print "Read last element:", arr[-1]
+            print("Original last element:", a[-1])
+            print("Read last element:", arr[-1])
         self.assertTrue(allequal(a[-1], arr[-1]))
 
         # Close the file
@@ -1617,8 +1627,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original before last element:", a[-2]
-            print "Read before last element:", arr[-2]
+            print("Original before last element:", a[-2])
+            print("Read before last element:", arr[-2])
         if isinstance(a[-2], numpy.ndarray):
             self.assertTrue(allequal(a[-2], arr[-2]))
         else:
@@ -1646,8 +1656,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertTrue(allequal(a[-4:-1], arr[-4:-1]))
         # Close the file
         fileh.close()
@@ -1671,8 +1681,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertTrue(allequal(a[-4:-1], arr[-4:-1]))
 
         # Close the file
@@ -1751,8 +1761,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original first element:", a[0]
-            print "Read first element:", arr[0]
+            print("Original first element:", a[0])
+            print("Read first element:", arr[0])
         self.assertTrue(allequal(a[0], arr[0]))
 
         # Close the file
@@ -1781,8 +1791,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original first element:", a[0]
-            print "Read first element:", arr[0]
+            print("Original first element:", a[0])
+            print("Read first element:", arr[0])
         self.assertEqual(a[0], arr[0])
 
         # Close the file
@@ -1811,8 +1821,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4]
-            print "Read elements:", arr[1:4]
+            print("Original elements:", a[1:4])
+            print("Read elements:", arr[1:4])
         self.assertTrue(allequal(a[1:4], arr[1:4]))
 
         # Close the file
@@ -1844,8 +1854,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4]
-            print "Read elements:", arr[1:4]
+            print("Original elements:", a[1:4])
+            print("Read elements:", arr[1:4])
         self.assertTrue(allequal(a[1:4], arr[1:4]))
 
         # Close the file
@@ -1875,8 +1885,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4:2]
-            print "Read elements:", arr[1:4:2]
+            print("Original elements:", a[1:4:2])
+            print("Read elements:", arr[1:4:2])
         self.assertTrue(allequal(a[1:4:2], arr[1:4:2]))
 
         # Close the file
@@ -1908,8 +1918,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4:2]
-            print "Read elements:", arr[1:4:2]
+            print("Original elements:", a[1:4:2])
+            print("Read elements:", arr[1:4:2])
         self.assertTrue(allequal(a[1:4:2], arr[1:4:2]))
 
         # Close the file
@@ -1939,8 +1949,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last element:", a[-1]
-            print "Read last element:", arr[-1]
+            print("Original last element:", a[-1])
+            print("Read last element:", arr[-1])
         self.assertTrue(allequal(a[-1], arr[-1]))
 
         # Close the file
@@ -1970,8 +1980,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original before last element:", a[-2]
-            print "Read before last element:", arr[-2]
+            print("Original before last element:", a[-2])
+            print("Read before last element:", arr[-2])
         if isinstance(a[-2], numpy.ndarray):
             self.assertTrue(allequal(a[-2], arr[-2]))
         else:
@@ -2004,8 +2014,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertTrue(allequal(a[-4:-1], arr[-4:-1]))
 
         # Close the file
@@ -2037,8 +2047,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertTrue(allequal(a[-4:-1], arr[-4:-1]))
 
         # Close the file
@@ -2073,8 +2083,8 @@ class SetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertTrue(allequal(a[-4:-1], arr[-4:-1]))
 
         # Close the file
@@ -2151,8 +2161,8 @@ class GeneratorTestCase(unittest.TestCase):
         ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         self.assertEqual(ga, garr)
 
         # Close the file
@@ -2180,8 +2190,8 @@ class GeneratorTestCase(unittest.TestCase):
         garr = [i for i in arr]
 
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         for i in range(len(ga)):
             self.assertTrue(allequal(ga[i], garr[i]))
 
@@ -2209,8 +2219,8 @@ class GeneratorTestCase(unittest.TestCase):
         ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         self.assertEqual(ga, garr)
 
         # Close the file
@@ -2237,8 +2247,8 @@ class GeneratorTestCase(unittest.TestCase):
         ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         for i in range(len(ga)):
             self.assertTrue(allequal(ga[i], garr[i]))
 
@@ -2344,12 +2354,12 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = (nparr >= value1) & (nparr < value2)
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             a = nparr[key]
             b = tbarr[key]
 #             if common.verbose:
-#                 print "NumPy selection:", a
-#                 print "PyTables selection:", b
+#                 print("NumPy selection:", a)
+#                 print("PyTables selection:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables selections does not match.")
@@ -2361,12 +2371,12 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = numpy.where((nparr >= value1) & (nparr < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             a = nparr[key]
             b = tbarr[key]
 #             if common.verbose:
-#                 print "NumPy selection:", a
-#                 print "PyTables selection:", b
+#                 print("NumPy selection:", a)
+#                 print("PyTables selection:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables selections does not match.")
@@ -2378,7 +2388,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = numpy.where((nparr >= value1) & (nparr < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             # a = nparr[key]
             fkey = numpy.array(key, "f4")
             self.assertRaises(IndexError, tbarr.__getitem__, fkey)
@@ -2390,15 +2400,15 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = (nparr >= value1) & (nparr < value2)
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             s = nparr[key]
             nparr[key] = s * 2
             tbarr[key] = s * 2
             a = nparr[:]
             b = tbarr[:]
 #             if common.verbose:
-#                 print "NumPy modified array:", a
-#                 print "PyTables modifyied array:", b
+#                 print("NumPy modified array:", a)
+#                 print("PyTables modifyied array:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables modifications does not match.")
@@ -2410,15 +2420,15 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = numpy.where((nparr >= value1) & (nparr < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             s = nparr[key]
             nparr[key] = s * 2
             tbarr[key] = s * 2
             a = nparr[:]
             b = tbarr[:]
 #             if common.verbose:
-#                 print "NumPy modified array:", a
-#                 print "PyTables modifyied array:", b
+#                 print("NumPy modified array:", a)
+#                 print("PyTables modifyied array:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables modifications does not match.")
@@ -2430,15 +2440,15 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = numpy.where((nparr >= value1) & (nparr < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             # s = nparr[key]
             nparr[key] = 2   # force a broadcast
             tbarr[key] = 2   # force a broadcast
             a = nparr[:]
             b = tbarr[:]
 #             if common.verbose:
-#                 print "NumPy modified array:", a
-#                 print "PyTables modifyied array:", b
+#                 print("NumPy modified array:", a)
+#                 print("PyTables modifyied array:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables modifications does not match.")
@@ -2483,11 +2493,12 @@ class FancySelectionTestCase(common.PyTablesTestCase):
                 [-1, 2], 'i8'), 2, -1),  # array 64-bit instead of list
         ]
 
+        # Using booleans instead of ints is deprecated since numpy 1.8
         # Tests for keys that have to support the __index__ attribute
-        if (sys.version_info[0] >= 2 and sys.version_info[1] >= 5):
-            self.working_keyset.append(
-                (False, True),  # equivalent to (0,1) ;-)
-            )
+        #if (sys.version_info[0] >= 2 and sys.version_info[1] >= 5):
+        #    self.working_keyset.append(
+        #        (False, True),  # equivalent to (0,1) ;-)
+        #    )
 
         # Valid selections for NumPy, but not for PyTables (yet)
         # The next should raise an IndexError
@@ -2497,6 +2508,7 @@ class FancySelectionTestCase(common.PyTablesTestCase):
             ([1, 2], 2, [1, 2]),  # several lists
             ([], 2, 1),         # empty selections
             (Ellipsis, [1, 2], Ellipsis),  # several ellipsis
+            # Using booleans instead of ints is deprecated since numpy 1.8
             ([False, True]),    # boolean values with incompatible shape
         ]
 
@@ -2533,12 +2545,12 @@ class FancySelectionTestCase(common.PyTablesTestCase):
         tbarr = self.tbarr
         for key in self.working_keyset:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             a = nparr[key]
             b = tbarr[key]
 #             if common.verbose:
-#                 print "NumPy selection:", a
-#                 print "PyTables selection:", b
+#                 print("NumPy selection:", a)
+#                 print("PyTables selection:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables selections does not match.")
@@ -2549,7 +2561,7 @@ class FancySelectionTestCase(common.PyTablesTestCase):
         tbarr = self.tbarr
         for key in self.not_working_keyset:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             # a = nparr[key]
             self.assertRaises(IndexError, tbarr.__getitem__, key)
 
@@ -2559,7 +2571,7 @@ class FancySelectionTestCase(common.PyTablesTestCase):
         tbarr = self.tbarr
         for key in self.not_working_oob:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             self.assertRaises(IndexError, nparr.__getitem__, key)
             self.assertRaises(IndexError, tbarr.__getitem__, key)
 
@@ -2569,7 +2581,7 @@ class FancySelectionTestCase(common.PyTablesTestCase):
         tbarr = self.tbarr
         for key in self.not_working_too_many:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             # ValueError for numpy 1.6.x and earlier
             # IndexError in numpy > 1.8.0
             self.assertRaises((ValueError, IndexError), nparr.__getitem__, key)
@@ -2581,15 +2593,15 @@ class FancySelectionTestCase(common.PyTablesTestCase):
         tbarr = self.tbarr
         for key in self.working_keyset:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             s = nparr[key]
             nparr[key] = s * 2
             tbarr[key] = s * 2
             a = nparr[:]
             b = tbarr[:]
 #             if common.verbose:
-#                 print "NumPy modified array:", a
-#                 print "PyTables modifyied array:", b
+#                 print("NumPy modified array:", a)
+#                 print("PyTables modifyied array:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables modifications does not match.")
@@ -2600,15 +2612,15 @@ class FancySelectionTestCase(common.PyTablesTestCase):
         tbarr = self.tbarr
         for key in self.working_keyset:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             # s = nparr[key]
             nparr[key] = 2   # broadcast value
             tbarr[key] = 2   # broadcast value
             a = nparr[:]
             b = tbarr[:]
 #             if common.verbose:
-#                 print "NumPy modified array:", a
-#                 print "PyTables modifyied array:", b
+#                 print("NumPy modified array:", a)
+#                 print("PyTables modifyied array:", b)
             self.assertTrue(
                 numpy.alltrue(a == b),
                 "NumPy array and PyTables modifications does not match.")
@@ -2683,6 +2695,26 @@ class AccessClosedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertRaises(ClosedNodeError, self.array.__setitem__, 0, 0)
 
 
+class BroadcastTest(common.TempFileMixin, common.PyTablesTestCase):
+
+    def test(self):
+        """Test correct broadcasting when the array atom is not scalar."""
+
+        array_shape = (2, 3)
+        element_shape = (3,)
+
+        dtype = numpy.dtype((numpy.int, element_shape))
+        atom = Atom.from_dtype(dtype)
+        h5arr = self.h5file.create_carray(self.h5file.root, 'array',
+                                          atom, array_shape)
+
+        size = numpy.prod(element_shape)
+        nparr = numpy.arange(size).reshape(element_shape)
+
+        h5arr[0] = nparr
+        self.assertTrue(numpy.all(h5arr[0] == nparr))
+
+
 class TestCreateArrayArgs(common.TempFileMixin, common.PyTablesTestCase):
     where = '/'
     name = 'array'
@@ -2950,6 +2982,7 @@ def suite():
         theSuite.addTest(unittest.makeSuite(CopyNativeHDF5MDAtom))
         theSuite.addTest(unittest.makeSuite(AccessClosedTestCase))
         theSuite.addTest(unittest.makeSuite(TestCreateArrayArgs))
+        theSuite.addTest(unittest.makeSuite(BroadcastTest))
 
     return theSuite
 
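A recurring change in test_array.py above is the feature-detection idiom:
instead of asking NumPy whether an extended dtype exists
(hasattr(numpy, 'float16')), the tests now check whether the matching
PyTables Atom class was exported, since support for these types depends on
how PyTables itself was built, not just on NumPy.  A condensed sketch of the
pattern, assuming the star import from tables used by the test modules:

    from tables import *  # exports Float16Atom etc. only when support is compiled in

    typecodes = ['b', 'h', 'i', 'l', 'f', 'd']
    for name in ('float16', 'float96', 'float128', 'complex192', 'complex256'):
        atomname = name.capitalize() + 'Atom'  # e.g. 'Float16Atom'
        if atomname in globals():              # present only in capable builds
            typecodes.append(name)
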
diff --git a/tables/tests/test_attributes.py b/tables/tests/test_attributes.py
index 1c5151b..305f89e 100644
--- a/tables/tests/test_attributes.py
+++ b/tables/tests/test_attributes.py
@@ -2,6 +2,7 @@
 
 """This test unit checks node attributes that are persistent (AttributeSet)."""
 
+from __future__ import print_function
 import os
 import sys
 import unittest
@@ -34,7 +35,7 @@ class CreateTestCase(unittest.TestCase):
         # Create an instance of HDF5 Table
         self.file = tempfile.mktemp(".h5")
         self.fileh = open_file(
-            self.file, mode="w", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="w", node_cache_slots=self.node_cache_slots)
         self.root = self.fileh.root
 
         # Create a table object
@@ -69,10 +70,10 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         self.assertEqual(self.fileh.get_node_attr(self.root.agroup, 'attr1'),
@@ -96,10 +97,10 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         self.assertEqual(self.root.agroup._f_getattr(
@@ -120,10 +121,10 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         # This should work even when the node cache is disabled
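
Every "(closing file version)" branch in these tests applies the same persistence idiom: write, close, reopen the file in "r+" mode, and assert that the data survived the round trip to disk rather than merely living in the node cache. A hedged sketch with illustrative names:

    import tables

    h5 = tables.open_file('roundtrip.h5', 'w')
    h5.root._v_attrs.attr1 = 'value'
    h5.close()                                   # flush everything to disk
    h5 = tables.open_file('roundtrip.h5', 'r+')  # reopen read-write
    assert h5.root._v_attrs.attr1 == 'value'     # the attribute persisted
    h5.close()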
@@ -132,35 +133,35 @@ class CreateTestCase(unittest.TestCase):
         self.assertEqual(self.root.anarray.attrs.attr1, "n" * attrlength)
 
     def test04_listAttributes(self):
-        """Checking listing attributes """
+        """Checking listing attributes."""
 
         # With a Group object
         self.group._v_attrs.pq = "1"
         self.group._v_attrs.qr = "2"
         self.group._v_attrs.rs = "3"
         if common.verbose:
-            print "Attribute list:", self.group._v_attrs._f_list()
+            print("Attribute list:", self.group._v_attrs._f_list())
 
         # Now, try with a Table object
         self.table.attrs.a = "1"
         self.table.attrs.c = "2"
         self.table.attrs.b = "3"
         if common.verbose:
-            print "Attribute list:", self.table.attrs._f_list()
+            print("Attribute list:", self.table.attrs._f_list())
 
         # Finally, try with an Array object
         self.array.attrs.k = "1"
         self.array.attrs.j = "2"
         self.array.attrs.i = "3"
         if common.verbose:
-            print "Attribute list:", self.array.attrs._f_list()
+            print("Attribute list:", self.array.attrs._f_list())
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         agroup = self.root.agroup
@@ -194,13 +195,15 @@ class CreateTestCase(unittest.TestCase):
 
         anarray = self.root.anarray
         self.assertEqual(anarray.attrs._f_list(), ["i", "j", "k"])
-        self.assertEqual(anarray.attrs._f_list("sys"),
-                         ['CLASS', 'FLAVOR', 'TITLE', 'VERSION'])
-        self.assertEqual(anarray.attrs._f_list("all"),
-                         ['CLASS', 'FLAVOR', 'TITLE', 'VERSION', "i", "j", "k"])
+        self.assertEqual(
+            anarray.attrs._f_list("sys"),
+            ['CLASS', 'FLAVOR', 'TITLE', 'VERSION'])
+        self.assertEqual(
+            anarray.attrs._f_list("all"),
+            ['CLASS', 'FLAVOR', 'TITLE', 'VERSION', "i", "j", "k"])
 
     def test05_removeAttributes(self):
-        """Checking removing attributes """
+        """Checking removing attributes."""
 
         # With a Group object
         self.group._v_attrs.pq = "1"
@@ -211,20 +214,19 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         agroup = self.root.agroup
         if common.verbose:
-            print "Attribute list:", agroup._v_attrs._f_list()
+            print("Attribute list:", agroup._v_attrs._f_list())
         # Check the local attributes names
         self.assertEqual(agroup._v_attrs._f_list(), ["qr", "rs"])
         if common.verbose:
-            print "Attribute list in disk:", \
-                  agroup._v_attrs._f_list("all")
+            print("Attribute list in disk:", agroup._v_attrs._f_list("all"))
         # Check the disk attribute names
         self.assertEqual(agroup._v_attrs._f_list("all"),
                          ['CLASS', 'TITLE', 'VERSION', "qr", "rs"])
@@ -232,18 +234,17 @@ class CreateTestCase(unittest.TestCase):
         # delete an attribute (__delattr__ method)
         del agroup._v_attrs.qr
         if common.verbose:
-            print "Attribute list:", agroup._v_attrs._f_list()
+            print("Attribute list:", agroup._v_attrs._f_list())
         # Check the local attributes names
         self.assertEqual(agroup._v_attrs._f_list(), ["rs"])
         if common.verbose:
-            print "Attribute list in disk:", \
-                  agroup._v_attrs._f_list()
+            print("Attribute list in disk:", agroup._v_attrs._f_list())
         # Check the disk attribute names
         self.assertEqual(agroup._v_attrs._f_list("all"),
                          ['CLASS', 'TITLE', 'VERSION', "rs"])
 
     def test05b_removeAttributes(self):
-        """Checking removing attributes (using File.del_node_attr()) """
+        """Checking removing attributes (using File.del_node_attr())."""
 
         # With a Group object
         self.group._v_attrs.pq = "1"
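
For reference, the _f_list() calls these tests keep printing take a scope argument that separates user attributes from HDF5 system attributes. A small sketch of the listing and removal API, assuming PyTables 3.x (names illustrative):

    import tables

    with tables.open_file('attrs-demo.h5', 'w') as h5:
        group = h5.create_group('/', 'agroup')
        group._v_attrs.qr = "2"
        group._v_attrs.rs = "3"
        print(group._v_attrs._f_list())        # user attrs only: ['qr', 'rs']
        print(group._v_attrs._f_list("sys"))   # system attrs, e.g. ['CLASS', 'TITLE', 'VERSION']
        print(group._v_attrs._f_list("all"))   # system attributes followed by user ones
        del group._v_attrs.qr                  # same effect as File.del_node_attr()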
@@ -254,20 +255,19 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         agroup = self.root.agroup
         if common.verbose:
-            print "Attribute list:", agroup._v_attrs._f_list()
+            print("Attribute list:", agroup._v_attrs._f_list())
         # Check the local attributes names
         self.assertEqual(agroup._v_attrs._f_list(), ["qr", "rs"])
         if common.verbose:
-            print "Attribute list in disk:", \
-                  agroup._v_attrs._f_list("all")
+            print("Attribute list in disk:", agroup._v_attrs._f_list("all"))
         # Check the disk attribute names
         self.assertEqual(agroup._v_attrs._f_list("all"),
                          ['CLASS', 'TITLE', 'VERSION', "qr", "rs"])
@@ -275,32 +275,31 @@ class CreateTestCase(unittest.TestCase):
         # delete an attribute (File.del_node_attr method)
         self.fileh.del_node_attr(self.root, "qr", "agroup")
         if common.verbose:
-            print "Attribute list:", agroup._v_attrs._f_list()
+            print("Attribute list:", agroup._v_attrs._f_list())
         # Check the local attributes names
         self.assertEqual(agroup._v_attrs._f_list(), ["rs"])
         if common.verbose:
-            print "Attribute list in disk:", \
-                  agroup._v_attrs._f_list()
+            print("Attribute list in disk:", agroup._v_attrs._f_list())
         # Check the disk attribute names
         self.assertEqual(agroup._v_attrs._f_list("all"),
                          ['CLASS', 'TITLE', 'VERSION', "rs"])
 
     def test06_removeAttributes(self):
-        """Checking removing system attributes """
+        """Checking removing system attributes."""
 
         # remove a system attribute
         if common.verbose:
-            print "Before removing CLASS attribute"
-            print "System attrs:", self.group._v_attrs._v_attrnamessys
+            print("Before removing CLASS attribute")
+            print("System attrs:", self.group._v_attrs._v_attrnamessys)
         del self.group._v_attrs.CLASS
         self.assertEqual(self.group._v_attrs._f_list("sys"),
                          ['TITLE', 'VERSION'])
         if common.verbose:
-            print "After removing CLASS attribute"
-            print "System attrs:", self.group._v_attrs._v_attrnamessys
+            print("After removing CLASS attribute")
+            print("System attrs:", self.group._v_attrs._v_attrnamessys)
 
     def test07_renameAttributes(self):
-        """Checking renaming attributes """
+        """Checking renaming attributes."""
 
         # With a Group object
         self.group._v_attrs.pq = "1"
@@ -311,34 +310,34 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         agroup = self.root.agroup
         if common.verbose:
-            print "Attribute list:", agroup._v_attrs._f_list()
+            print("Attribute list:", agroup._v_attrs._f_list())
         # Check the local attributes names (alphabetically sorted)
         self.assertEqual(agroup._v_attrs._f_list(), ["op", "qr", "rs"])
         if common.verbose:
-            print "Attribute list in disk:", agroup._v_attrs._f_list("all")
+            print("Attribute list in disk:", agroup._v_attrs._f_list("all"))
         # Check the disk attribute names (not sorted)
         self.assertEqual(agroup._v_attrs._f_list("all"),
                          ['CLASS', 'TITLE', 'VERSION', "op", "qr", "rs"])
 
     def test08_renameAttributes(self):
-        """Checking renaming system attributes """
+        """Checking renaming system attributes."""
 
         if common.verbose:
-            print "Before renaming CLASS attribute"
-            print "All attrs:", self.group._v_attrs._v_attrnames
+            print("Before renaming CLASS attribute")
+            print("All attrs:", self.group._v_attrs._v_attrnames)
         # rename a system attribute
         self.group._v_attrs._f_rename("CLASS", "op")
         if common.verbose:
-            print "After renaming CLASS attribute"
-            print "All attrs:", self.group._v_attrs._v_attrnames
+            print("After renaming CLASS attribute")
+            print("All attrs:", self.group._v_attrs._v_attrnames)
 
         # Check the disk attribute names (not sorted)
         agroup = self.root.agroup
@@ -346,7 +345,7 @@ class CreateTestCase(unittest.TestCase):
                          ['TITLE', 'VERSION', "op"])
 
     def test09_overwriteAttributes(self):
-        """Checking overwriting attributes """
+        """Checking overwriting attributes."""
 
         # With a Group object
         self.group._v_attrs.pq = "1"
@@ -359,28 +358,27 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         agroup = self.root.agroup
         if common.verbose:
-            print "Value of Attribute pq:", agroup._v_attrs.pq
+            print("Value of Attribute pq:", agroup._v_attrs.pq)
         # Check the local attributes names (alphabetically sorted)
         self.assertEqual(agroup._v_attrs.pq, "4")
         self.assertEqual(agroup._v_attrs.qr, 2)
         self.assertEqual(agroup._v_attrs.rs, [1, 2, 3])
         if common.verbose:
-            print "Attribute list in disk:", \
-                  agroup._v_attrs._f_list("all")
+            print("Attribute list in disk:", agroup._v_attrs._f_list("all"))
         # Check the disk attribute names (not sorted)
         self.assertEqual(agroup._v_attrs._f_list("all"),
                          ['CLASS', 'TITLE', 'VERSION', "pq", "qr", "rs"])
 
     def test10a_copyAttributes(self):
-        """Checking copying attributes """
+        """Checking copying attributes."""
 
         # With a Group object
         self.group._v_attrs.pq = "1"
@@ -391,19 +389,19 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         atable = self.root.atable
         if common.verbose:
-            print "Attribute list:", atable._v_attrs._f_list()
+            print("Attribute list:", atable._v_attrs._f_list())
         # Check the local attributes names (alphabetically sorted)
         self.assertEqual(atable._v_attrs._f_list(), ["pq", "qr", "rs"])
         if common.verbose:
-            print "Complete attribute list:", atable._v_attrs._f_list("all")
+            print("Complete attribute list:", atable._v_attrs._f_list("all"))
         # Check the disk attribute names (not sorted)
         self.assertEqual(atable._v_attrs._f_list("all"),
                          ['CLASS',
@@ -428,19 +426,19 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         atable = self.root.atable
         if common.verbose:
-            print "Attribute list:", atable._v_attrs._f_list()
+            print("Attribute list:", atable._v_attrs._f_list())
         # Check the local attributes names (alphabetically sorted)
         self.assertEqual(atable._v_attrs._f_list(), ["pq", "qr", "rs"])
         if common.verbose:
-            print "Complete attribute list:", atable._v_attrs._f_list("all")
+            print("Complete attribute list:", atable._v_attrs._f_list("all"))
         # Check the disk attribute names (not sorted)
         self.assertEqual(atable._v_attrs._f_list("all"),
                          ['CLASS',
@@ -454,7 +452,7 @@ class CreateTestCase(unittest.TestCase):
                           "pq", "qr", "rs"])
 
     def test10c_copyAttributes(self):
-        """Checking copying attributes during group copies"""
+        """Checking copying attributes during group copies."""
 
         # With a Group object
         self.group._v_attrs['CLASS'] = "GROUP2"
@@ -464,20 +462,20 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         agroup2 = self.root.agroup2
         if common.verbose:
-            print "Complete attribute list:", agroup2._v_attrs._f_list("all")
+            print("Complete attribute list:", agroup2._v_attrs._f_list("all"))
         self.assertEqual(agroup2._v_attrs['CLASS'], "GROUP2")
         self.assertEqual(agroup2._v_attrs['VERSION'], "1.3")
 
     def test10d_copyAttributes(self):
-        """Checking copying attributes during leaf copies"""
+        """Checking copying attributes during leaf copies."""
 
         # With a Group object
         atable = self.root.atable
@@ -488,15 +486,15 @@ class CreateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
-                self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+                self.file, mode="r+", node_cache_slots=self.node_cache_slots)
             self.root = self.fileh.root
 
         atable2 = self.root.atable2
         if common.verbose:
-            print "Complete attribute list:", atable2._v_attrs._f_list("all")
+            print("Complete attribute list:", atable2._v_attrs._f_list("all"))
         self.assertEqual(atable2._v_attrs['CLASS'], "TABLE2")
         self.assertEqual(atable2._v_attrs['VERSION'], "1.3")
 
@@ -556,32 +554,32 @@ class CreateTestCase(unittest.TestCase):
 
 class NotCloseCreate(CreateTestCase):
     close = False
-    nodeCacheSlots = NODE_CACHE_SLOTS
+    node_cache_slots = NODE_CACHE_SLOTS
 
 
 class CloseCreate(CreateTestCase):
     close = True
-    nodeCacheSlots = NODE_CACHE_SLOTS
+    node_cache_slots = NODE_CACHE_SLOTS
 
 
 class NoCacheNotCloseCreate(CreateTestCase):
     close = False
-    nodeCacheSlots = 0
+    node_cache_slots = 0
 
 
 class NoCacheCloseCreate(CreateTestCase):
     close = True
-    nodeCacheSlots = 0
+    node_cache_slots = 0
 
 
 class DictCacheNotCloseCreate(CreateTestCase):
     close = False
-    nodeCacheSlots = -NODE_CACHE_SLOTS
+    node_cache_slots = -NODE_CACHE_SLOTS
 
 
 class DictCacheCloseCreate(CreateTestCase):
     close = True
-    nodeCacheSlots = -NODE_CACHE_SLOTS
+    node_cache_slots = -NODE_CACHE_SLOTS
 
 
 class TypesTestCase(unittest.TestCase):
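
The six subclasses above sweep node_cache_slots through its three regimes which, per the open_file() documentation, are: a positive value bounds an LRU cache of recently used nodes, zero disables node caching altogether, and a negative value keeps every node alive in a plain dictionary with no eviction. Sketch (file name illustrative):

    import tables

    tables.open_file('cache-demo.h5', 'w').close()                     # create the file
    h5 = tables.open_file('cache-demo.h5', 'r', node_cache_slots=64)   # LRU cache, 64 slots
    h5.close()
    h5 = tables.open_file('cache-demo.h5', 'r', node_cache_slots=0)    # node cache disabled
    h5.close()
    h5 = tables.open_file('cache-demo.h5', 'r', node_cache_slots=-64)  # unbounded dict cache
    h5.close()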
@@ -615,13 +613,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -640,13 +638,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -668,13 +666,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -693,13 +691,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -719,13 +717,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -751,11 +749,12 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -778,7 +777,7 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -786,7 +785,8 @@ class TypesTestCase(unittest.TestCase):
 
         for dtype in checktypes:
             if common.verbose:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
             assert_array_equal(getattr(self.array.attrs, dtype),
                                numpy.array([1, 2], dtype=dtype))
 
@@ -804,7 +804,7 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -813,7 +813,8 @@ class TypesTestCase(unittest.TestCase):
         for dtype in checktypes:
             arr = numpy.array([1, 2, 3, 4], dtype=dtype)[::2]
             if common.verbose:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
             assert_array_equal(getattr(self.array.attrs, dtype), arr)
 
     def test01e_setIntAttributes(self):
@@ -829,7 +830,7 @@ class TypesTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -838,12 +839,13 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         for dtype in checktypes:
             if common.verbose:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
             assert_array_equal(getattr(self.array.attrs, dtype),
                                numpy.array([[1, 2], [2, 3]], dtype=dtype))
 
     def test02a_setFloatAttributes(self):
-        """Checking setting Float (double) attributes"""
+        """Checking setting Float (double) attributes."""
 
         # Set some attrs
         self.array.attrs.pq = 1.0
@@ -852,13 +854,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -883,11 +885,12 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -910,11 +913,12 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -925,7 +929,8 @@ class TypesTestCase(unittest.TestCase):
                                numpy.array([1.1, 2.1], dtype=dtype))
 
     def test02d_setFloatAttributes(self):
-        """Checking setting Float attributes (unidimensional, non-contiguous)"""
+        """Checking setting Float attributes (unidimensional,
+        non-contiguous)"""
 
         checktypes = ['Float32', 'Float64']
 
@@ -936,11 +941,12 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -962,22 +968,24 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
             self.array = self.fileh.root.anarray
 
         for dtype in checktypes:
-            assert_array_equal(getattr(self.array.attrs, dtype),
-                               numpy.array([[1.1, 2.1], [2.1, 3.1]], dtype=dtype))
+            assert_array_equal(
+                getattr(self.array.attrs, dtype),
+                numpy.array([[1.1, 2.1], [2.1, 3.1]], dtype=dtype))
 
     def test03_setObjectAttributes(self):
-        """Checking setting Object attributes"""
+        """Checking setting Object attributes."""
 
         # Set some attrs
         self.array.attrs.pq = [1.0, 2]
@@ -986,13 +994,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1011,13 +1019,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1037,11 +1045,11 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1051,23 +1059,24 @@ class TypesTestCase(unittest.TestCase):
                            numpy.array(['foo']))
 
     def test04c_setStringAttributes(self):
-        """Checking setting string attributes (empty unidimensional 1-elem case)"""
+        """Checking setting string attributes (empty unidimensional
+        1-elem case)"""
 
         self.array.attrs.pq = numpy.array([''])
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
             self.array = self.fileh.root.anarray
             if common.verbose:
-                print "pq -->", self.array.attrs.pq
+                print("pq -->", self.array.attrs.pq)
 
         assert_array_equal(self.root.anarray.attrs.pq,
                            numpy.array(['']))
@@ -1079,11 +1088,11 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1093,17 +1102,18 @@ class TypesTestCase(unittest.TestCase):
                            numpy.array(['foo', 'bar3']))
 
     def test04e_setStringAttributes(self):
-        """Checking setting string attributes (empty unidimensional 2-elem case)"""
+        """Checking setting string attributes (empty unidimensional
+        2-elem case)"""
 
         self.array.attrs.pq = numpy.array(['', ''])
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1120,11 +1130,11 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1135,7 +1145,7 @@ class TypesTestCase(unittest.TestCase):
                                         ['foo3', 'foo4']]))
 
     def test05a_setComplexAttributes(self):
-        """Checking setting Complex (python) attributes"""
+        """Checking setting Complex (python) attributes."""
 
         # Set some attrs
         self.array.attrs.pq = 1.0 + 2j
@@ -1144,13 +1154,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1175,11 +1185,12 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1202,11 +1213,12 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->", dtype,
+                      getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1228,19 +1240,21 @@ class TypesTestCase(unittest.TestCase):
         # Check the results
         if common.verbose:
             for dtype in checktypes:
-                print "type, value-->", dtype, getattr(self.array.attrs, dtype)
+                print("type, value-->",
+                      dtype, getattr(self.array.attrs, dtype))
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
             self.array = self.fileh.root.anarray
 
         for dtype in checktypes:
-            assert_array_equal(getattr(self.array.attrs, dtype),
-                               numpy.array([[1.1, 2.1], [2.1, 3.1]], dtype=dtype))
+            assert_array_equal(
+                getattr(self.array.attrs, dtype),
+                numpy.array([[1.1, 2.1], [2.1, 3.1]], dtype=dtype))
 
     def test06a_setUnicodeAttributes(self):
         """Checking setting unicode attributes (scalar case)"""
@@ -1253,15 +1267,15 @@ class TypesTestCase(unittest.TestCase):
         if common.verbose:
             if sys.platform != 'win32':
                 # It seems that Windows cannot print this
-                print "pq -->", repr(self.array.attrs.pq)
+                print("pq -->", repr(self.array.attrs.pq))
                 # XXX: try to use repr instead
-                # print "pq -->", repr(self.array.attrs.pq)
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+                # print("pq -->", repr(self.array.attrs.pq))
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1281,11 +1295,11 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1295,7 +1309,8 @@ class TypesTestCase(unittest.TestCase):
                            numpy.array([u'para\u0140lel']))
 
     def test06c_setUnicodeAttributes(self):
-        """Checking setting unicode attributes (empty unidimensional 1-elem case)"""
+        """Checking setting unicode attributes (empty unidimensional
+        1-elem case)"""
 
         # The next raises a `TypeError` when unpickled. See:
         # http://projects.scipy.org/numpy/ticket/1037
@@ -1304,17 +1319,17 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
             self.array = self.fileh.root.anarray
             if common.verbose:
-                print "pq -->", repr(self.array.attrs.pq)
+                print("pq -->", repr(self.array.attrs.pq))
 
         assert_array_equal(self.array.attrs.pq,
                            numpy.array([u''], dtype="U1"))
@@ -1326,11 +1341,11 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1340,17 +1355,18 @@ class TypesTestCase(unittest.TestCase):
                            numpy.array([u'para\u0140lel', u'bar3']))
 
     def test06e_setUnicodeAttributes(self):
-        """Checking setting unicode attributes (empty unidimensional 2-elem case)"""
+        """Checking setting unicode attributes (empty unidimensional
+        2-elem case)"""
 
         self.array.attrs.pq = numpy.array(['', ''], dtype="U1")
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1367,11 +1383,11 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
+            print("pq -->", self.array.attrs.pq)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1382,7 +1398,7 @@ class TypesTestCase(unittest.TestCase):
                                         ['foo3', u'para\u0140lel4']]))
 
     def test07a_setRecArrayAttributes(self):
-        """Checking setting RecArray (NumPy) attributes"""
+        """Checking setting RecArray (NumPy) attributes."""
 
         dt = numpy.dtype('i4,f8')
         # Set some attrs
@@ -1392,13 +1408,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1412,7 +1428,7 @@ class TypesTestCase(unittest.TestCase):
         assert_array_equal(self.array.attrs.rs, numpy.array([(1, 2.)], dt))
 
     def test07b_setRecArrayAttributes(self):
-        """Checking setting nested RecArray (NumPy) attributes"""
+        """Checking setting nested RecArray (NumPy) attributes."""
 
         # Build a nested dtype
         dt = numpy.dtype([('f1', [('f1', 'i2'), ('f2', 'f8')])])
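
test07b stores a nested NumPy record array as a node attribute and expects it back intact. A minimal sketch under the same nested dtype (file and node names illustrative):

    import numpy
    import tables

    dt = numpy.dtype([('f1', [('f1', 'i2'), ('f2', 'f8')])])
    with tables.open_file('recattr-demo.h5', 'w') as h5:
        arr = h5.create_array('/', 'anarray', [1, 2])
        arr.attrs.rs = numpy.array([((1, 2.0),)], dtype=dt)  # nested record array
        numpy.testing.assert_array_equal(
            arr.attrs.rs, numpy.array([((1, 2.0),)], dtype=dt))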
@@ -1423,13 +1439,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1443,7 +1459,7 @@ class TypesTestCase(unittest.TestCase):
         assert_array_equal(self.array.attrs.rs, numpy.array([((1, 2),)], dt))
 
     def test07c_setRecArrayAttributes(self):
-        """Checking setting multidim nested RecArray (NumPy) attributes"""
+        """Checking setting multidim nested RecArray (NumPy) attributes."""
 
         # Build a nested dtype
         dt = numpy.dtype([('f1', [('f1', 'i2', (2,)), ('f2', 'f8')])])
@@ -1454,13 +1470,13 @@ class TypesTestCase(unittest.TestCase):
 
         # Check the results
         if common.verbose:
-            print "pq -->", self.array.attrs.pq
-            print "qr -->", self.array.attrs.qr
-            print "rs -->", self.array.attrs.rs
+            print("pq -->", self.array.attrs.pq)
+            print("qr -->", self.array.attrs.qr)
+            print("rs -->", self.array.attrs.rs)
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r+")
             self.root = self.fileh.root
@@ -1515,25 +1531,25 @@ class NoSysAttrsTestCase(unittest.TestCase):
         self.group._v_attrs.qr = "2"
         self.group._v_attrs.rs = "3"
         if common.verbose:
-            print "Attribute list:", self.group._v_attrs._f_list()
+            print("Attribute list:", self.group._v_attrs._f_list())
 
         # Now, try with a Table object
         self.table.attrs.a = "1"
         self.table.attrs.c = "2"
         self.table.attrs.b = "3"
         if common.verbose:
-            print "Attribute list:", self.table.attrs._f_list()
+            print("Attribute list:", self.table.attrs._f_list())
 
         # Finally, try with an Array object
         self.array.attrs.k = "1"
         self.array.attrs.j = "2"
         self.array.attrs.i = "3"
         if common.verbose:
-            print "Attribute list:", self.array.attrs._f_list()
+            print("Attribute list:", self.array.attrs._f_list())
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(
                 self.file, mode="r+")
@@ -1577,7 +1593,7 @@ class SegFaultPythonTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertEqual(self.h5file.root._v_attrs.trouble1, "0")
         self.assertEqual(self.h5file.root._v_attrs.trouble2, "0.")
         if common.verbose:
-            print "Great! '0' and '0.' values can be safely retrieved."
+            print("Great! '0' and '0.' values can be safely retrieved.")
 
 
 class VlenStrAttrTestCase(PyTablesTestCase):
@@ -1637,7 +1653,7 @@ class SpecificAttrsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         "Testing EArray specific attrs (create)."
         ea = self.h5file.create_earray('/', 'ea', Int32Atom(), (2, 0, 4))
         if common.verbose:
-            print "EXTDIM-->", ea.attrs.EXTDIM
+            print("EXTDIM-->", ea.attrs.EXTDIM)
         self.assertEqual(ea.attrs.EXTDIM, 1)
 
     def test01_earray(self):
@@ -1646,7 +1662,7 @@ class SpecificAttrsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._reopen('r')
         ea = self.h5file.root.ea
         if common.verbose:
-            print "EXTDIM-->", ea.attrs.EXTDIM
+            print("EXTDIM-->", ea.attrs.EXTDIM)
         self.assertEqual(ea.attrs.EXTDIM, 0)
 
 
diff --git a/tables/tests/test_backcompat.py b/tables/tests/test_backcompat.py
index bf71fb1..859dec9 100644
--- a/tables/tests/test_backcompat.py
+++ b/tables/tests/test_backcompat.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import os
 import shutil
 import tempfile
@@ -24,11 +25,11 @@ class BackCompatTablesTestCase(common.PyTablesTestCase):
     #----------------------------------------
 
     def test01_readTable(self):
-        """Checking backward compatibility of old formats of tables"""
+        """Checking backward compatibility of old formats of tables."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         warnings.filterwarnings("ignore", category=UserWarning)
@@ -40,9 +41,9 @@ class BackCompatTablesTestCase(common.PyTablesTestCase):
         # Read the 100 records
         result = [rec['var2'] for rec in table]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==> ", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==> ", len(result))
 
         self.assertEqual(len(result), 100)
         self.fileh.close()
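
The read loop above is the standard PyTables row-iteration idiom: iterating a Table yields row objects whose columns are indexed by name. Sketch (the file and table path here are hypothetical):

    import tables

    with tables.open_file('oldformat.h5', 'r') as fileh:
        table = fileh.get_node('/tuple0')        # hypothetical table path
        result = [rec['var2'] for rec in table]  # one var2 value per row
        print("Total selected records in table ==>", len(result))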
@@ -74,11 +75,11 @@ class BackCompatAttrsTestCase(common.PyTablesTestCase):
     file = "zerodim-attrs-%s.h5"
 
     def test01_readAttr(self):
-        """Checking backward compatibility of old formats for attributes"""
+        """Checking backward compatibility of old formats for attributes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_readAttr..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_readAttr..." % self.__class__.__name__)
 
         # Read old formats
         filename = self._testFilename(self.file)
@@ -109,7 +110,7 @@ class Attrs_1_4(BackCompatAttrsTestCase):
 class VLArrayTestCase(common.PyTablesTestCase):
 
     def test01_backCompat(self):
-        """Checking backward compatibility with old flavors of VLArray"""
+        """Checking backward compatibility with old flavors of VLArray."""
 
         # Open a PYTABLES_FORMAT_VERSION=1.6 file
         filename = self._testFilename("flavored_vlarrays-format1.6.h5")
@@ -219,6 +220,7 @@ class OldFlavorsTestCase02(common.PyTablesTestCase):
 
 #----------------------------------------------------------------------
 
+
 def suite():
     theSuite = unittest.TestSuite()
     niter = 1
diff --git a/tables/tests/test_basics.py b/tables/tests/test_basics.py
index 2269147..a7dd0af 100644
--- a/tables/tests/test_basics.py
+++ b/tables/tests/test_basics.py
@@ -1,11 +1,14 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import os
-import shutil
 import sys
-import unittest
+import Queue
+import shutil
 import tempfile
+import unittest
 import warnings
+import threading
 import subprocess
 
 try:
@@ -35,8 +38,8 @@ class OpenFileFailureTestCase(common.PyTablesTestCase):
         self.N = len(tables.file._open_files)
         self.open_files = tables.file._open_files
 
-    def test01_openFile(self):
-        """Checking opening of a non existing file"""
+    def test01_open_file(self):
+        """Checking opening of a non existing file."""
 
         filename = tempfile.mktemp(".h5")
         try:
@@ -47,8 +50,8 @@ class OpenFileFailureTestCase(common.PyTablesTestCase):
         else:
             self.fail("IOError exception not raised")
 
-    def test02_openFile(self):
-        """Checking opening of an existing non HDF5 file"""
+    def test02_open_file(self):
+        """Checking opening of an existing non HDF5 file."""
 
         # create a dummy file
         filename = tempfile.mktemp(".h5")
@@ -66,6 +69,21 @@ class OpenFileFailureTestCase(common.PyTablesTestCase):
         finally:
             os.remove(filename)
 
+    def test03_open_file(self):
+        """Checking opening of an existing file with invalid mode."""
+
+        # See gh-318
+
+        # create a dummy file
+        filename = tempfile.mktemp(".h5")
+        fileh = tables.open_file(filename, "w")
+        fileh.close()
+
+        # Try to open the dummy file
+        self.assertRaises(ValueError, tables.open_file, filename, "ab")
+
+        os.remove(filename)
+
 
 class OpenFileTestCase(common.PyTablesTestCase):
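
Per the new test03_open_file above (gh-318), open_file() now validates its mode argument, accepting only 'r', 'r+', 'a' and 'w'; any other string raises ValueError instead of silently misbehaving. Sketch (file name illustrative):

    import tables

    tables.open_file('mode-demo.h5', 'w').close()  # create a valid HDF5 file
    try:
        tables.open_file('mode-demo.h5', 'ab')     # invalid mode string
    except ValueError as exc:
        print("rejected:", exc)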
 
@@ -73,11 +91,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         # Create an HDF5 file
         self.file = tempfile.mktemp(".h5")
         fileh = open_file(self.file, mode="w", title="File title",
-                          node_cache_slots=self.nodeCacheSlots)
+                          node_cache_slots=self.node_cache_slots)
         root = fileh.root
         # Create an array
-        fileh.create_array(root, 'array', [1, 2],
-                           title="Array example")
+        fileh.create_array(root, 'array', [1, 2], title="Array example")
         fileh.create_table(root, 'table', {'var1': IntCol()}, "Table example")
         root._v_attrs.testattr = 41
 
@@ -121,14 +138,14 @@ class OpenFileTestCase(common.PyTablesTestCase):
         common.cleanup(self)
 
     def test00_newFile(self):
-        """Checking creation of a new file"""
+        """Checking creation of a new file."""
 
         # Create an HDF5 file
         file = tempfile.mktemp(".h5")
         fileh = open_file(
-            file, mode="w", node_cache_slots=self.nodeCacheSlots)
-        fileh.create_array(fileh.root, 'array', [
-                           1, 2], title="Array example")
+            file, mode="w", node_cache_slots=self.node_cache_slots)
+        fileh.create_array(fileh.root, 'array', [1, 2],
+                           title="Array example")
         # Get the CLASS attribute of the arr object
         class_ = fileh.root.array.attrs.CLASS
 
@@ -160,11 +177,11 @@ class OpenFileTestCase(common.PyTablesTestCase):
         shutil.rmtree(temp_dir)
 
     def test01_openFile(self):
-        """Checking opening of an existing file"""
+        """Checking opening of an existing file."""
 
         # Open the old HDF5 file
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Get the CLASS attribute of the arr object
         title = fileh.root.array.get_attr("TITLE")
 
@@ -172,18 +189,18 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test02_appendFile(self):
-        """Checking appending objects to an existing file"""
+        """Checking appending objects to an existing file."""
 
         # Append a new array to the existing file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.create_array(fileh.root, 'array2', [3, 4],
                            title="Title example 2")
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Get the CLASS attribute of the arr object
         title = fileh.root.array2.get_attr("TITLE")
 
@@ -195,14 +212,14 @@ class OpenFileTestCase(common.PyTablesTestCase):
 
         # Append a new array to the existing file
         fileh = open_file(
-            self.file, mode="a", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="a", node_cache_slots=self.node_cache_slots)
         fileh.create_array(fileh.root, 'array2', [3, 4],
                            title="Title example 2")
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Get the CLASS attribute of the arr object
         title = fileh.root.array2.get_attr("TITLE")
 
@@ -212,19 +229,19 @@ class OpenFileTestCase(common.PyTablesTestCase):
     # Begin to raise errors...
 
     def test03_appendErrorFile(self):
-        """Checking appending objects to an existing file in "w" mode"""
+        """Checking appending objects to an existing file in "w" mode."""
 
         # Append a new array to the existing file but in write mode
         # so, the existing file should be deleted!
         fileh = open_file(
-            self.file, mode="w", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="w", node_cache_slots=self.node_cache_slots)
         fileh.create_array(fileh.root, 'array2', [3, 4],
                            title="Title example 2")
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
 
         try:
             # Try to get the 'array' object in the old existing file
@@ -232,8 +249,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
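
Note: the print conversions throughout these hunks rewrite Python 2
print statements as print() calls. On Python 2 a multi-argument call
such as print("\nFile tree dump:", fileh) only behaves as a function
call if the print function is enabled first, presumably via a
__future__ import at the top of the module (not shown in this excerpt):

    # A sketch of the expected module header; without this import,
    # Python 2 parses print("a", "b") as a print statement whose
    # argument is the tuple ('a', 'b').
    from __future__ import print_function
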
@@ -243,24 +260,24 @@ class OpenFileTestCase(common.PyTablesTestCase):
 
         try:
             open_file("nonexistent.h5", mode="r",
-                      node_cache_slots=self.nodeCacheSlots)
+                      node_cache_slots=self.node_cache_slots)
         except IOError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next IOError was catched!"
-                print value
+                print("\nGreat!, the next IOError was catched!")
+                print(value)
         else:
             self.fail("expected an IOError")
 
     def test04b_alternateRootFile(self):
-        """Checking alternate root access to the object tree"""
+        """Checking alternate root access to the object tree."""
 
         # Open the existing HDF5 file
         fileh = open_file(self.file, mode="r", root_uep="/agroup",
-                          node_cache_slots=self.nodeCacheSlots)
+                          node_cache_slots=self.node_cache_slots)
         # Get the TITLE attribute of the anarray1 object
         if common.verbose:
-            print "\nFile tree dump:", fileh
+            print("\nFile tree dump:", fileh)
         title = fileh.root.anarray1.get_attr("TITLE")
         # Get the node again, as this can trigger errors in some situations
         anarray1 = fileh.root.anarray1
@@ -276,29 +293,29 @@ class OpenFileTestCase(common.PyTablesTestCase):
 
         try:
             open_file(self.file, mode="r", root_uep="/nonexistent",
-                      node_cache_slots=self.nodeCacheSlots)
+                      node_cache_slots=self.node_cache_slots)
         except RuntimeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next RuntimeError was catched!"
-                print value
+                print("\nGreat!, the next RuntimeError was catched!")
+                print(value)
         else:
             self.fail("expected an IOError")
 
     def test05a_removeGroupRecursively(self):
-        """Checking removing a group recursively"""
+        """Checking removing a group recursively."""
 
         # Delete a group with leafs
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         try:
             fileh.remove_node(fileh.root.agroup)
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected a NodeError")
 
@@ -309,15 +326,15 @@ class OpenFileTestCase(common.PyTablesTestCase):
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Try to get the removed object
         try:
             fileh.root.agroup
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         # Try to get a child of the removed object
@@ -326,30 +343,32 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test05b_removeGroupRecursively(self):
-        """Checking removing a group recursively and access to it immediately"""
+        """Checking removing a group recursively and access to it
+        immediately."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_removeGroupRecursively..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_removeGroupRecursively..." %
+                  self.__class__.__name__)
 
         # Delete a group with leafs
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         try:
             fileh.remove_node(fileh.root, 'agroup')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected a NodeError")
 
@@ -362,8 +381,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         # Try to get a child of the removed object
@@ -372,8 +391,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
@@ -382,7 +401,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         """Checking removing a node using ``__delattr__()``"""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         try:
             # This should fail because there is no *Python attribute*
@@ -391,64 +410,64 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except AttributeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next AttributeError was catched!"
-                print value
+                print("\nGreat!, the next AttributeError was catched!")
+                print(value)
         else:
             self.fail("expected an AttributeError")
 
         fileh.close()
 
     def test06a_removeGroup(self):
-        """Checking removing a lonely group from an existing file"""
+        """Checking removing a lonely group from an existing file."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.remove_node(fileh.root, 'agroup2')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Try to get the removed object
         try:
             fileh.root.agroup2
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test06b_removeLeaf(self):
-        """Checking removing Leaves from an existing file"""
+        """Checking removing Leaves from an existing file."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.remove_node(fileh.root, 'anarray')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Try to get the removed object
         try:
             fileh.root.anarray
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test06c_removeLeaf(self):
-        """Checking removing Leaves and access it immediately"""
+        """Checking removing Leaves and access it immediately."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.remove_node(fileh.root, 'anarray')
 
         # Try to get the removed object
@@ -457,8 +476,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
@@ -467,7 +486,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         """Checking removing a non-existent node"""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # Try to remove a non-existent node
         try:
@@ -475,46 +494,46 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test06e_removeTable(self):
-        """Checking removing Tables from an existing file"""
+        """Checking removing Tables from an existing file."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.remove_node(fileh.root, 'atable')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Try to get the removed object
         try:
             fileh.root.atable
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test07_renameLeaf(self):
-        """Checking renaming a leave and access it after a close/open"""
+        """Checking renaming a leave and access it after a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.anarray, 'anarray2')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Ensure that the new name exists
         array_ = fileh.root.anarray2
         self.assertEqual(array_.name, "anarray2")
@@ -526,17 +545,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test07b_renameLeaf(self):
-        """Checking renaming Leaves and accesing them immediately"""
+        """Checking renaming Leaves and accesing them immediately."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.anarray, 'anarray2')
 
         # Ensure that the new name exists
@@ -550,17 +569,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test07c_renameLeaf(self):
-        """Checking renaming Leaves and modify attributes after that"""
+        """Checking renaming Leaves and modify attributes after that."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.anarray, 'anarray2')
         array_ = fileh.root.anarray2
         array_.attrs.TITLE = "hello"
@@ -570,10 +589,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test07d_renameLeaf(self):
-        """Checking renaming a Group under a nested group"""
+        """Checking renaming a Group under a nested group."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.agroup.anarray2, 'anarray3')
 
         # Ensure that we can access the attributes in the new group
@@ -582,19 +601,19 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test08_renameToExistingLeaf(self):
-        """Checking renaming a node to an existing name"""
+        """Checking renaming a node to an existing name."""
 
         # Open this file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         # Try to rename the node to an existing name
         try:
             fileh.rename_node(fileh.root.anarray, 'array')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         # Now overwrite the destination node.
@@ -609,7 +628,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
 
         # Open this file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         warnings.filterwarnings("error", category=NaturalNameWarning)
         # Try to rename the node to a non-natural name
         try:
@@ -617,8 +636,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except NaturalNameWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NaturalNameWarning was catched!"
-                print value
+                print("\nGreat!, the next NaturalNameWarning was catched!")
+                print(value)
         else:
             self.fail("expected an NaturalNameWarning")
         # Reset the warning
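
The hunk above uses warnings.filterwarnings("error", ...) to promote
NaturalNameWarning to an exception, so the ordinary try/except flow of
the test can assert on it. A minimal, self-contained sketch of the same
pattern, using the built-in UserWarning category instead:

    import warnings

    warnings.filterwarnings("error", category=UserWarning)
    try:
        warnings.warn("'array 2' is not a natural name", UserWarning)
    except UserWarning as exc:
        print("caught: %s" % exc)
    finally:
        # Restore the default filter so later code is unaffected.
        warnings.filterwarnings("default", category=UserWarning)
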
@@ -626,16 +645,16 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test09_renameGroup(self):
-        """Checking renaming a Group and access it after a close/open"""
+        """Checking renaming a Group and access it after a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.agroup, 'agroup3')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Ensure that the new name exists
         group = fileh.root.agroup3
         self.assertEqual(group._v_name, "agroup3")
@@ -651,8 +670,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         # Try to get a child with the old pathname
@@ -661,17 +680,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test09b_renameGroup(self):
-        """Checking renaming a Group and access it immediately"""
+        """Checking renaming a Group and access it immediately."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.agroup, 'agroup3')
 
         # Ensure that the new name exists
@@ -689,8 +708,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         # Try to get a child with the old pathname
@@ -699,17 +718,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test09c_renameGroup(self):
-        """Checking renaming a Group and modify attributes afterwards"""
+        """Checking renaming a Group and modify attributes afterwards."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.agroup, 'agroup3')
 
         # Ensure that we can modify attributes in the new group
@@ -720,10 +739,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test09d_renameGroup(self):
-        """Checking renaming a Group under a nested group"""
+        """Checking renaming a Group under a nested group."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         fileh.rename_node(fileh.root.agroup.agroup3, 'agroup4')
 
         # Ensure that we can access the attributes in the new group
@@ -732,11 +751,11 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test09e_renameGroup(self):
-        """Checking renaming a Group with nested groups in the LRU cache"""
+        """Checking renaming a Group with nested groups in the LRU cache."""
         # This checks for ticket #126.
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         # Load intermediate groups and keep a nested one alive.
         g = fileh.root.agroup.agroup3.agroup4
         self.assertTrue(g is not None)
@@ -750,17 +769,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test10_moveLeaf(self):
-        """Checking moving a leave and access it after a close/open"""
+        """Checking moving a leave and access it after a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         fileh.move_node(fileh.root.anarray, newgroup, 'anarray2')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Ensure that the new name exists
         array_ = fileh.root.newgroup.anarray2
         self.assertEqual(array_.name, "anarray2")
@@ -772,17 +791,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test10b_moveLeaf(self):
-        """Checking moving a leave and access it without a close/open"""
+        """Checking moving a leave and access it without a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         fileh.move_node(fileh.root.anarray, newgroup, 'anarray2')
 
@@ -797,17 +816,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test10c_moveLeaf(self):
-        """Checking moving Leaves and modify attributes after that"""
+        """Checking moving Leaves and modify attributes after that."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         fileh.move_node(fileh.root.anarray, newgroup, 'anarray2')
         array_ = fileh.root.newgroup.anarray2
@@ -818,35 +837,35 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test10d_moveToExistingLeaf(self):
-        """Checking moving a leaf to an existing name"""
+        """Checking moving a leaf to an existing name."""
 
         # Open this file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         # Try to move the node to an existing name
         try:
             fileh.move_node(fileh.root.anarray, fileh.root, 'array')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
 
     def test10_2_moveTable(self):
-        """Checking moving a table and access it after a close/open"""
+        """Checking moving a table and access it after a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         fileh.move_node(fileh.root.atable, newgroup, 'atable2')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Ensure that the new name exists
         table_ = fileh.root.newgroup.atable2
         self.assertEqual(table_.name, "atable2")
@@ -858,17 +877,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test10_2b_moveTable(self):
-        """Checking moving a table and access it without a close/open"""
+        """Checking moving a table and access it without a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         fileh.move_node(fileh.root.atable, newgroup, 'atable2')
 
@@ -883,17 +902,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test10_2b_bis_moveTable(self):
-        """Checking moving a table and use cached row without a close/open"""
+        """Checking moving a table and use cached row without a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         # Cache the Row attribute prior to the move
         row = fileh.root.atable.row
@@ -915,10 +934,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test10_2c_moveTable(self):
-        """Checking moving tables and modify attributes after that"""
+        """Checking moving tables and modify attributes after that."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group("/", "newgroup")
         fileh.move_node(fileh.root.atable, newgroup, 'atable2')
         table_ = fileh.root.newgroup.atable2
@@ -929,28 +948,28 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test10_2d_moveToExistingTable(self):
-        """Checking moving a table to an existing name"""
+        """Checking moving a table to an existing name."""
 
         # Open this file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         # Try to move the table to an existing name
         try:
             fileh.move_node(fileh.root.atable, fileh.root, 'table')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
 
     def test10_2e_moveToExistingTableOverwrite(self):
-        """Checking moving a table to an existing name, overwriting it"""
+        """Checking moving a table to an existing name, overwriting it."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         srcNode = fileh.root.atable
         fileh.move_node(srcNode, fileh.root, 'table', overwrite=True)
@@ -960,17 +979,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test11_moveGroup(self):
-        """Checking moving a Group and access it after a close/open"""
+        """Checking moving a Group and access it after a close/open."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group(fileh.root, 'newgroup')
         fileh.move_node(fileh.root.agroup, newgroup, 'agroup3')
         fileh.close()
 
         # Open this file in read-only mode
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Ensure that the new name exists
         group = fileh.root.newgroup.agroup3
         self.assertEqual(group._v_name, "agroup3")
@@ -988,8 +1007,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         # Try to get a child with the old pathname
@@ -998,17 +1017,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test11b_moveGroup(self):
-        """Checking moving a Group and access it immediately"""
+        """Checking moving a Group and access it immediately."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group(fileh.root, 'newgroup')
         fileh.move_node(fileh.root.agroup, newgroup, 'agroup3')
         # Ensure that the new name exists
@@ -1028,8 +1047,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         # Try to get a child with the old pathname
@@ -1038,17 +1057,17 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except LookupError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next LookupError was catched!"
-                print value
+                print("\nGreat!, the next LookupError was catched!")
+                print(value)
         else:
             self.fail("expected an LookupError")
         fileh.close()
 
     def test11c_moveGroup(self):
-        """Checking moving a Group and modify attributes afterwards"""
+        """Checking moving a Group and modify attributes afterwards."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         newgroup = fileh.create_group(fileh.root, 'newgroup')
         fileh.move_node(fileh.root.agroup, newgroup, 'agroup3')
 
@@ -1062,28 +1081,28 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test11d_moveToExistingGroup(self):
-        """Checking moving a group to an existing name"""
+        """Checking moving a group to an existing name."""
 
         # Open this file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         # Try to move the group to an existing name
         try:
             fileh.move_node(fileh.root.agroup, fileh.root, 'agroup2')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
 
     def test11e_moveToExistingGroupOverwrite(self):
-        """Checking moving a group to an existing name, overwriting it"""
+        """Checking moving a group to an existing name, overwriting it."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup2 -> agroup
         srcNode = fileh.root.agroup2
@@ -1094,10 +1113,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test12a_moveNodeOverItself(self):
-        """Checking moving a node over itself"""
+        """Checking moving a node over itself."""
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # array -> array
         srcNode = fileh.root.array
@@ -1108,19 +1127,19 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test12b_moveGroupIntoItself(self):
-        """Checking moving a group into itself"""
+        """Checking moving a group into itself."""
 
         # Open this file
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         try:
             # agroup2 -> agroup2/
             fileh.move_node(fileh.root.agroup2, fileh.root.agroup2)
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
@@ -1129,7 +1148,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying a leaf."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # array => agroup2/
         new_node = fileh.copy_node(fileh.root.array, fileh.root.agroup2)
@@ -1142,7 +1161,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying a group."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup2 => agroup/
         new_node = fileh.copy_node(fileh.root.agroup2, fileh.root.agroup)
@@ -1155,7 +1174,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying a group into itself."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup2 => agroup2/
         new_node = fileh.copy_node(fileh.root.agroup2, fileh.root.agroup2)
@@ -1168,7 +1187,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Recursively copying a group."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup => agroup2/
         new_node = fileh.copy_node(
@@ -1188,10 +1207,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Recursively copying the root group into the root of another file."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(
-            file2, mode="w", node_cache_slots=self.nodeCacheSlots)
+            file2, mode="w", node_cache_slots=self.node_cache_slots)
 
         # fileh.root => fileh2.root
         new_node = fileh.copy_node(
@@ -1211,10 +1230,10 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Recursively copying the root group into a group in another file."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(
-            file2, mode="w", node_cache_slots=self.nodeCacheSlots)
+            file2, mode="w", node_cache_slots=self.node_cache_slots)
         fileh2.create_group('/', 'agroup2')
 
         # fileh.root => fileh2.root.agroup2
@@ -1235,7 +1254,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Recursively copying the root group into itself."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         agroup2 = fileh.root
         self.assertTrue(agroup2 is not None)
 
@@ -1248,15 +1267,15 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying over an existing node."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         try:
             # agroup2 => agroup
             fileh.copy_node(fileh.root.agroup2, newname='agroup')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
@@ -1265,7 +1284,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying over an existing node, overwriting it."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup2 => agroup
         new_node = fileh.copy_node(fileh.root.agroup2, newname='agroup',
@@ -1279,11 +1298,11 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying over an existing node in other file, overwriting it."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(
-            file2, mode="w", node_cache_slots=self.nodeCacheSlots)
+            file2, mode="w", node_cache_slots=self.node_cache_slots)
 
         # file1:/anarray1 => file2:/anarray1
         new_node = fileh.copy_node(fileh.root.agroup.anarray1,
@@ -1302,15 +1321,15 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying over self."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         try:
             # agroup => agroup
             fileh.copy_node(fileh.root.agroup, newname='agroup')
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
@@ -1319,7 +1338,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying over self, trying to overwrite."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         try:
             # agroup => agroup
             fileh.copy_node(
@@ -1327,8 +1346,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
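
The copy tests in this region exercise File.copy_node() with the
newname, overwrite and recursive keyword arguments seen above. A
minimal sketch of the call pattern (the file and node names here are
hypothetical):

    import tables

    with tables.open_file("example.h5", mode="a") as h5f:
        # Copy /agroup under the root as 'agroup_copy', replacing any
        # existing node of that name and descending into all children.
        h5f.copy_node("/agroup", newparent=h5f.root,
                      newname="agroup_copy", overwrite=True,
                      recursive=True)
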
@@ -1337,7 +1356,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Recursively copying a group into itself."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         try:
             # agroup => agroup/
             fileh.copy_node(
@@ -1345,8 +1364,8 @@ class OpenFileTestCase(common.PyTablesTestCase):
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NodeError was catched!"
-                print value
+                print("\nGreat!, the next NodeError was catched!")
+                print(value)
         else:
             self.fail("expected an NodeError")
         fileh.close()
@@ -1355,7 +1374,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Moving and renaming a node in a single action."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # anarray1 -> agroup/array
         srcNode = fileh.root.anarray1
@@ -1369,7 +1388,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying and renaming a node in a single action."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # anarray1 => agroup/array
         new_node = fileh.copy_node(
@@ -1383,7 +1402,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying full data and user attributes."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup => groupcopy
         srcNode = fileh.root.agroup
@@ -1402,7 +1421,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying partial data and no user attributes."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         # agroup => groupcopy
         srcNode = fileh.root.agroup
@@ -1423,11 +1442,11 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying full data and user attributes (from file to file)."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
 
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(
-            file2, mode="w", node_cache_slots=self.nodeCacheSlots)
+            file2, mode="w", node_cache_slots=self.node_cache_slots)
 
         # file1:/ => file2:groupcopy
         srcNode = fileh.root
@@ -1451,7 +1470,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying dataset with a chunkshape."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         srcTable = fileh.root.table
         newTable = fileh.copy_node(
             srcTable, newname='tablecopy', chunkshape=11)
@@ -1464,7 +1483,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying dataset with a chunkshape with 'keep' value."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         srcTable = fileh.root.table
         newTable = fileh.copy_node(
             srcTable, newname='tablecopy', chunkshape='keep')
@@ -1476,7 +1495,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
         "Copying dataset with a chunkshape with 'auto' value."
 
         fileh = open_file(
-            self.file, mode="r+", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r+", node_cache_slots=self.node_cache_slots)
         srcTable = fileh.root.table
         newTable = fileh.copy_node(
             srcTable, newname='tablecopy', chunkshape=11)
@@ -1489,7 +1508,7 @@ class OpenFileTestCase(common.PyTablesTestCase):
     def test18_closedRepr(self):
         "Representing a closed node as a string."
         fileh = open_file(
-            self.file, node_cache_slots=self.nodeCacheSlots)
+            self.file, node_cache_slots=self.node_cache_slots)
         for node in [fileh.root.agroup, fileh.root.anarray]:
             node._f_close()
             self.assertTrue('closed' in str(node))
@@ -1497,29 +1516,29 @@ class OpenFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test19_fileno(self):
-        """Checking that the 'fileno()' method works"""
+        """Checking that the 'fileno()' method works."""
 
         # Open the old HDF5 file
         fileh = open_file(
-            self.file, mode="r", node_cache_slots=self.nodeCacheSlots)
+            self.file, mode="r", node_cache_slots=self.node_cache_slots)
         # Get the file descriptor for this file
         fd = fileh.fileno()
         if common.verbose:
-            print "Value of fileno():", fd
+            print("Value of fileno():", fd)
         self.assertTrue(fd >= 0)
         fileh.close()
 
 
 class NodeCacheOpenFile(OpenFileTestCase):
-    nodeCacheSlots = NODE_CACHE_SLOTS
+    node_cache_slots = NODE_CACHE_SLOTS
 
 
 class NoNodeCacheOpenFile(OpenFileTestCase):
-    nodeCacheSlots = 0
+    node_cache_slots = 0
 
 
 class DictNodeCacheOpenFile(OpenFileTestCase):
-    nodeCacheSlots = -NODE_CACHE_SLOTS
+    node_cache_slots = -NODE_CACHE_SLOTS
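
These three small subclasses re-run the whole OpenFileTestCase suite
under different node-cache configurations: a positive node_cache_slots
selects an LRU cache with that many slots, 0 disables node caching, and
a negative value is documented to keep every node cached in a plain
dictionary, without eviction. A minimal sketch of how the parameter is
passed (the file name is hypothetical):

    import tables

    # LRU node cache with 64 slots; pass 0 to disable the cache.
    h5f = tables.open_file("cached.h5", mode="w", node_cache_slots=64)
    h5f.close()
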
 
 
 class CheckFileTestCase(common.PyTablesTestCase):
@@ -1528,8 +1547,8 @@ class CheckFileTestCase(common.PyTablesTestCase):
         """Checking is_hdf5_file function (TRUE case)"""
 
         # Create a PyTables file (and thus an HDF5 file)
-        file = tempfile.mktemp(".h5")
-        fileh = open_file(file, mode="w")
+        filename = tempfile.mktemp(".h5")
+        fileh = open_file(filename, mode="w")
         fileh.create_array(fileh.root, 'array', [1, 2],
                            title="Title example")
 
@@ -1538,11 +1557,12 @@ class CheckFileTestCase(common.PyTablesTestCase):
 
         # When the file has an HDF5 format, is_hdf5_file() always returns 1
         if common.verbose:
-            print "\nisHDF5File(%s) ==> %d" % (file, is_hdf5_file(file))
-        self.assertEqual(is_hdf5_file(file), 1)
+            print("\nisHDF5File(%s) ==> %d" % (filename,
+                                               is_hdf5_file(filename)))
+        self.assertEqual(is_hdf5_file(filename), 1)
 
         # Then, delete the file
-        os.remove(file)
+        os.remove(filename)
 
     def test01_isHDF5File(self):
         """Checking is_hdf5_file function (FALSE case)"""
@@ -1593,9 +1613,8 @@ class CheckFileTestCase(common.PyTablesTestCase):
         # When the file has a PyTables format, a version string of
         # "1.0" or greater is always returned
         if common.verbose:
-            print
-            print "\nPyTables format version number ==> %s" % \
-                version
+            print()
+            print("\nPyTables format version number ==> %s" % version)
         self.assertTrue(version >= "1.0")
 
         # Then, delete the file
@@ -1614,16 +1633,15 @@ class CheckFileTestCase(common.PyTablesTestCase):
         # When the file does not have a PyTables format, None is
         # returned instead of a version string
         if common.verbose:
-            print
-            print "\nPyTables format version number ==> %s" % \
-                version
+            print()
+            print("\nPyTables format version number ==> %s" % version)
         self.assertTrue(version is None)
 
         # Then, delete the file
         os.remove(file)
 
     def test04_openGenericHDF5File(self):
-        """Checking opening of a generic HDF5 file"""
+        """Checking opening of a generic HDF5 file."""
 
         # Open an existing generic HDF5 file
         fileh = open_file(self._testFilename("ex-noattr.h5"), mode="r")
@@ -1644,8 +1662,8 @@ class CheckFileTestCase(common.PyTablesTestCase):
         ui = fileh.get_node(columns, "pressure", classname="Array")
         self.assertEqual(ui._v_name, "pressure")
         if common.verbose:
-            print "Array object with type H5T_ARRAY -->", repr(ui)
-            print "Array contents -->", ui[:]
+            print("Array object with type H5T_ARRAY -->", repr(ui))
+            print("Array contents -->", ui[:])
 
         # A Table
         table = fileh.get_node("/detector", "table", classname="Table")
@@ -1654,7 +1672,7 @@ class CheckFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
     def test04b_UnImplementedOnLoading(self):
-        """Checking failure loading resulting in an ``UnImplemented`` node"""
+        """Checking failure loading resulting in an ``UnImplemented`` node."""
 
         ############### Note for developers ###############################
         # This test fails if you have the line:                           #
@@ -1672,7 +1690,7 @@ class CheckFileTestCase(common.PyTablesTestCase):
 
     def test04c_UnImplementedScalar(self):
         """Checking opening of HDF5 files containing scalar dataset of
-        UnImlemented type"""
+        UnImplemented type."""
 
         h5file = open_file(self._testFilename("scalar.h5"))
         try:
@@ -1683,7 +1701,7 @@ class CheckFileTestCase(common.PyTablesTestCase):
             h5file.close()
 
     def test05_copyUnimplemented(self):
-        """Checking that an UnImplemented object cannot be copied"""
+        """Checking that an UnImplemented object cannot be copied."""
 
         # Open an existing generic HDF5 file
         fileh = open_file(self._testFilename("smpl_unsupptype.h5"), mode="r")
@@ -1691,7 +1709,7 @@ class CheckFileTestCase(common.PyTablesTestCase):
             UserWarning, fileh.get_node, '/CompoundChunked')
         self.assertEqual(ui._v_name, 'CompoundChunked')
         if common.verbose:
-            print "UnImplement object -->", repr(ui)
+            print("UnImplement object -->", repr(ui))
 
         # Check that it cannot be copied to another file
         file2 = tempfile.mktemp(".h5")
@@ -1703,8 +1721,8 @@ class CheckFileTestCase(common.PyTablesTestCase):
         except UserWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next UserWarning was catched:"
-                print value
+                print("\nGreat!, the next UserWarning was catched:")
+                print(value)
         else:
             self.fail("expected an UserWarning")
 
@@ -1724,7 +1742,7 @@ class CheckFileTestCase(common.PyTablesTestCase):
     # The following can be used to check the copy of Array objects with
     # H5T_ARRAY in the future
     def _test05_copyUnimplemented(self):
-        """Checking that an UnImplemented object cannot be copied"""
+        """Checking that an UnImplemented object cannot be copied."""
 
         # Open an existing generic HDF5 file
         # We don't need to wrap this in a try clause because
@@ -1735,7 +1753,7 @@ class CheckFileTestCase(common.PyTablesTestCase):
         ui = fileh.get_node(fileh.root.columns, "pressure")
         self.assertEqual(ui._v_name, "pressure")
         if common.verbose:
-            print "UnImplement object -->", repr(ui)
+            print("UnImplement object -->", repr(ui))
 
         # Check that it cannot be copied to another file
         file2 = tempfile.mktemp(".h5")
@@ -1747,8 +1765,8 @@ class CheckFileTestCase(common.PyTablesTestCase):
         except UserWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next UserWarning was catched:"
-                print value
+                print("\nGreat!, the next UserWarning was catched:")
+                print(value)
         else:
             self.fail("expected an UserWarning")
 
@@ -1766,6 +1784,41 @@ class CheckFileTestCase(common.PyTablesTestCase):
         fileh.close()
 
 
+class ThreadingTestCase(common.TempFileMixin, common.PyTablesTestCase):
+    def setUp(self):
+        super(ThreadingTestCase, self).setUp()
+        self.h5file.create_carray('/', 'test_array', tables.Int64Atom(),
+                                  (200, 300))
+        self.h5file.close()
+
+    def test(self):
+        filename = self.h5fname
+
+        def run(filename, q):
+            try:
+                f = tables.open_file(filename, mode='r')
+                arr = f.root.test_array[8:12, 18:22]
+                assert arr.max() == arr.min() == 0
+                f.close()
+            except Exception:
+                q.put(sys.exc_info())
+            else:
+                q.put('OK')
+
+        threads = []
+        q = Queue.Queue()
+        for i in xrange(10):
+            t = threading.Thread(target=run, args=(filename, q))
+            t.start()
+            threads.append(t)
+
+        for i in xrange(10):
+            self.assertEqual(q.get(), 'OK')
+
+        for t in threads:
+            t.join()
+
+
 class PythonAttrsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     """Test interactions of Python attributes and child nodes."""
@@ -1867,10 +1920,8 @@ class PythonAttrsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
 class StateTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
-    """
-    Test that ``File`` and ``Node`` operations check their state (open
-    or closed, readable or writable) before proceeding.
-    """
+    """Test that ``File`` and ``Node`` operations check their state (open or
+    closed, readable or writable) before proceeding."""
 
     def test00_fileCopyFileClosed(self):
         """Test copying a closed file."""
@@ -2145,28 +2196,30 @@ class StateTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         file1 = open_file(self.h5fname, "r")
         self.assertEqual(file1.open_count, 1)
-        file2 = open_file(self.h5fname, "r")
-        self.assertEqual(file1.open_count, 2)
-        self.assertEqual(file2.open_count, 2)
-        if common.verbose:
-            print "(file1) open_count:", file1.open_count
-            print "(file1) test[1]:", file1.root.test[1]
-        self.assertEqual(file1.root.test[1], 2)
-        file1.close()
-        self.assertEqual(file2.open_count, 1)
-        if common.verbose:
-            print "(file2) open_count:", file2.open_count
-            print "(file2) test[1]:", file2.root.test[1]
-        self.assertEqual(file2.root.test[1], 2)
-        file2.close()
+        if tables.file._FILE_OPEN_POLICY == 'strict':
+            self.assertRaises(ValueError, tables.open_file, self.h5fname, "r")
+            file1.close()
+        else:
+            file2 = open_file(self.h5fname, "r")
+            self.assertEqual(file1.open_count, 1)
+            self.assertEqual(file2.open_count, 1)
+            if common.verbose:
+                print("(file1) open_count:", file1.open_count)
+                print("(file1) test[1]:", file1.root.test[1])
+            self.assertEqual(file1.root.test[1], 2)
+            file1.close()
+            self.assertEqual(file2.open_count, 1)
+            if common.verbose:
+                print("(file2) open_count:", file2.open_count)
+                print("(file2) test[1]:", file2.root.test[1])
+            self.assertEqual(file2.root.test[1], 2)
+            file2.close()
 
 
 class FlavorTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
-    """
-    Test that setting, getting and changing the ``flavor`` attribute
-    of a leaf works as expected.
-    """
+    """Test that setting, getting and changing the ``flavor`` attribute of a
+    leaf works as expected."""
 
     array_data = numpy.arange(10)
     scalar_data = numpy.int32(10)
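
The hunk above reflects the new file-open policy: when the underlying HDF5
library cannot safely open the same file twice (the 'strict' policy), a
second open_file() on an already open file now raises ValueError; otherwise
each call returns an independent handle whose open_count stays at 1. A
sketch of the user-visible behaviour ('data.h5' is a hypothetical existing
file):

    import tables

    f1 = tables.open_file('data.h5', mode='r')
    try:
        # Raises ValueError under the 'strict' policy; returns a second,
        # independent handle under the default policy.
        f2 = tables.open_file('data.h5', mode='r')
        f2.close()
    except ValueError:
        pass
    f1.close()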
@@ -2295,9 +2348,9 @@ class UnicodeFilename(common.PyTablesTestCase):
 
         test = self.h5file.root.test
         if common.verbose:
-            print "Filename:", self.h5fname
-            print "Array:", test[:]
-            print "Should look like:", [1, 2]
+            print("Filename:", self.h5fname)
+            print("Array:", test[:])
+            print("Should look like:", [1, 2])
         self.assertEqual(test[:], [1, 2], "Values does not match.")
 
     def test02(self):
@@ -2305,8 +2358,8 @@ class UnicodeFilename(common.PyTablesTestCase):
 
         self.h5file.close()
         if common.verbose:
-            print "Filename:", self.h5fname
-            print "is_hdf5_file?:", tables.is_hdf5_file(self.h5fname)
+            print("Filename:", self.h5fname)
+            print("is_hdf5_file?:", tables.is_hdf5_file(self.h5fname))
         self.assertTrue(tables.is_hdf5_file(self.h5fname))
 
     def test03(self):
@@ -2314,8 +2367,8 @@ class UnicodeFilename(common.PyTablesTestCase):
 
         self.h5file.close()
         if common.verbose:
-            print "Filename:", self.h5fname
-            print "is_pytables_file?:", tables.is_pytables_file(self.h5fname)
+            print("Filename:", self.h5fname)
+            print("is_pytables_file?:", tables.is_pytables_file(self.h5fname))
         self.assertNotEqual(tables.is_pytables_file(self.h5fname), False)
 
 
@@ -2419,13 +2472,13 @@ class BloscBigEndian(common.PyTablesTestCase):
 def _worker(fn, qout=None):
     fp = tables.open_file(fn)
     if common.verbose:
-        print "About to load: ", fn
+        print("About to load: ", fn)
     rows = fp.root.table.where('(f0 < 10)')
     if common.verbose:
-        print "Got the iterator, about to iterate"
+        print("Got the iterator, about to iterate")
     next(rows)
     if common.verbose:
-        print "Succeeded in one iteration\n"
+        print("Succeeded in one iteration\n")
     fp.close()
 
     if qout is not None:
@@ -2458,16 +2511,16 @@ class BloscSubprocess(common.PyTablesTestCase):
         fp.close()
 
         if common.verbose:
-            print "**** Running from main process:"
+            print("**** Running from main process:")
         _worker(fn)
 
         if common.verbose:
-            print "**** Running from subprocess:"
+            print("**** Running from subprocess:")
 
         try:
             qout = mp.Queue()
         except OSError:
-            print "Permission denied due to /dev/shm settings"
+            print("Permission denied due to /dev/shm settings")
         else:
             ps = mp.Process(target=_worker, args=(fn, qout,))
             ps.daemon = True
@@ -2475,7 +2528,7 @@ class BloscSubprocess(common.PyTablesTestCase):
 
             result = qout.get()
             if common.verbose:
-                print result
+                print(result)
 
         os.remove(fn)
 
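
The _worker() helper above is exercised both in the main process and from a
multiprocessing child; the same pattern applies whenever a query should run
with full process-level isolation. A sketch, assuming a file 'data.h5' whose
/table node has an f0 column:

    import multiprocessing as mp
    import tables

    def query(fn, qout):
        # Open the file, run an in-kernel query, ship the first hit back.
        with tables.open_file(fn) as fp:
            row = next(fp.root.table.where('(f0 < 10)'))
            qout.put(row['f0'])

    if __name__ == '__main__':
        qout = mp.Queue()
        p = mp.Process(target=query, args=('data.h5', qout))
        p.start()
        print(qout.get())
        p.join()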
@@ -2555,7 +2608,7 @@ except tables.HDF5ExtError as e:
 
         try:
             self._raise_exterror()
-        except tables.HDF5ExtError, e:
+        except tables.HDF5ExtError as e:
             self.assertFalse(e.h5backtrace is None)
         else:
             self.fail("HDF5ExtError exception not raised")
@@ -2565,7 +2618,7 @@ except tables.HDF5ExtError as e:
 
         try:
             self._raise_exterror()
-        except tables.HDF5ExtError, e:
+        except tables.HDF5ExtError as e:
             self.assertFalse(e.h5backtrace is None)
             msg = str(e)
             self.assertTrue(e.h5backtrace[-1][-1] in msg)
@@ -2577,7 +2630,7 @@ except tables.HDF5ExtError as e:
 
         try:
             self._raise_exterror()
-        except tables.HDF5ExtError, e:
+        except tables.HDF5ExtError as e:
             self.assertTrue(e.h5backtrace is None)
         else:
             self.fail("HDF5ExtError exception not raised")
@@ -2783,10 +2836,13 @@ def suite():
     blosc_avail = which_lib_version("blosc") is not None
 
     for i in range(niter):
+        theSuite.addTest(unittest.makeSuite(OpenFileFailureTestCase))
         theSuite.addTest(unittest.makeSuite(NodeCacheOpenFile))
         theSuite.addTest(unittest.makeSuite(NoNodeCacheOpenFile))
         theSuite.addTest(unittest.makeSuite(DictNodeCacheOpenFile))
         theSuite.addTest(unittest.makeSuite(CheckFileTestCase))
+        if tables.file._FILE_OPEN_POLICY != 'strict':
+            theSuite.addTest(unittest.makeSuite(ThreadingTestCase))
         theSuite.addTest(unittest.makeSuite(PythonAttrsTestCase))
         theSuite.addTest(unittest.makeSuite(StateTestCase))
         theSuite.addTest(unittest.makeSuite(FlavorTestCase))
diff --git a/tables/tests/test_carray.py b/tables/tests/test_carray.py
index 5f7da1a..0dfc888 100644
--- a/tables/tests/test_carray.py
+++ b/tables/tests/test_carray.py
@@ -1,11 +1,13 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import unittest
 import os
 import tempfile
 
 import numpy
 
+import tables
 from tables import *
 from tables.tests import common
 from tables.tests.common import allequal
@@ -76,7 +78,7 @@ class BasicTestCase(unittest.TestCase):
                 object = numpy.arange(self.objsize, dtype=carray.atom.dtype)
                 object.shape = carray.shape
         if common.verbose:
-            print "Object to append -->", repr(object)
+            print("Object to append -->", repr(object))
 
         carray[...] = object
 
@@ -110,11 +112,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(obj.atom.type, self.type)
 
     def test01_readCArray(self):
-        """Checking read() of chunked layout arrays"""
+        """Checking read() of chunked layout arrays."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_readCArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_readCArray..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         if self.reopen:
@@ -124,9 +126,9 @@ class BasicTestCase(unittest.TestCase):
         # Choose a small value for buffer size
         carray.nrowsinbuf = 3
         if common.verbose:
-            print "CArray descr:", repr(carray)
-            print "shape of read array ==>", carray.shape
-            print "reopening?:", self.reopen
+            print("CArray descr:", repr(carray))
+            print("shape of read array ==>", carray.shape)
+            print("reopening?:", self.reopen)
 
         shape = self._get_shape()
 
@@ -169,9 +171,9 @@ class BasicTestCase(unittest.TestCase):
 
         if common.verbose:
             if hasattr(object, "shape"):
-                print "shape should look as:", object.shape
-            print "Object read ==>", repr(data)
-            print "Should look like ==>", repr(object)
+                print("shape should look as:", object.shape)
+            print("Object read ==>", repr(data))
+            print("Should look like ==>", repr(object))
 
         if hasattr(data, "shape"):
             self.assertEqual(len(data.shape), len(shape))
@@ -182,7 +184,7 @@ class BasicTestCase(unittest.TestCase):
         self.assertTrue(allequal(data, object, self.flavor))
 
     def test01_readCArray_out_argument(self):
-        """Checking read() of chunked layout arrays"""
+        """Checking read() of chunked layout arrays."""
 
         # Create an instance of an HDF5 Table
         if self.reopen:
@@ -241,11 +243,12 @@ class BasicTestCase(unittest.TestCase):
         self.assertTrue(allequal(data, object, self.flavor))
 
     def test02_getitemCArray(self):
-        """Checking chunked layout array __getitem__ special method"""
+        """Checking chunked layout array __getitem__ special method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_getitemCArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_getitemCArray..." %
+                  self.__class__.__name__)
 
         if not hasattr(self, "slices"):
             # If there is not a slices attribute, create it
@@ -257,9 +260,9 @@ class BasicTestCase(unittest.TestCase):
         carray = self.fileh.get_node("/carray1")
 
         if common.verbose:
-            print "CArray descr:", repr(carray)
-            print "shape of read array ==>", carray.shape
-            print "reopening?:", self.reopen
+            print("CArray descr:", repr(carray))
+            print("shape of read array ==>", carray.shape)
+            print("reopening?:", self.reopen)
 
         shape = self._get_shape()
 
@@ -280,19 +283,19 @@ class BasicTestCase(unittest.TestCase):
         try:
             data = carray.__getitem__(self.slices)
         except IndexError:
-            print "IndexError!"
+            print("IndexError!")
             if self.flavor == "numpy":
                 data = numpy.empty(shape=self.shape, dtype=self.type)
             else:
                 data = numpy.empty(shape=self.shape, dtype=self.type)
 
         if common.verbose:
-            print "Object read:\n", repr(data)  # , data.info()
-            print "Should look like:\n", repr(object)  # , object.info()
+            print("Object read:\n", repr(data))  # , data.info()
+            print("Should look like:\n", repr(object))  # , object.info()
             if hasattr(object, "shape"):
-                print "Original object shape:", self.shape
-                print "Shape read:", data.shape
-                print "shape should look as:", object.shape
+                print("Original object shape:", self.shape)
+                print("Shape read:", data.shape)
+                print("shape should look as:", object.shape)
 
         if not hasattr(data, "shape"):
             # Scalar case
@@ -301,14 +304,15 @@ class BasicTestCase(unittest.TestCase):
         self.assertTrue(allequal(data, object, self.flavor))
 
     def test03_setitemCArray(self):
-        """Checking chunked layout array __setitem__ special method"""
+        """Checking chunked layout array __setitem__ special method."""
 
         if self.__class__.__name__ == "Ellipsis6CArrayTestCase":
             # see test_earray.py BasicTestCase.test03_setitemEArray
             return
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_setitemCArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_setitemCArray..." %
+                  self.__class__.__name__)
 
         if not hasattr(self, "slices"):
             # If there is not a slices attribute, create it
@@ -320,9 +324,9 @@ class BasicTestCase(unittest.TestCase):
         carray = self.fileh.get_node("/carray1")
 
         if common.verbose:
-            print "CArray descr:", repr(carray)
-            print "shape of read array ==>", carray.shape
-            print "reopening?:", self.reopen
+            print("CArray descr:", repr(carray))
+            print("shape of read array ==>", carray.shape)
+            print("reopening?:", self.reopen)
 
         shape = self._get_shape()
 
@@ -361,19 +365,19 @@ class BasicTestCase(unittest.TestCase):
         try:
             data = carray.__getitem__(self.slices)
         except IndexError:
-            print "IndexError!"
+            print("IndexError!")
             if self.flavor == "numpy":
                 data = numpy.empty(shape=self.shape, dtype=self.type)
             else:
                 data = numpy.empty(shape=self.shape, dtype=self.type)
 
         if common.verbose:
-            print "Object read:\n", repr(data)  # , data.info()
-            print "Should look like:\n", repr(object)  # , object.info()
+            print("Object read:\n", repr(data))  # , data.info()
+            print("Should look like:\n", repr(object))  # , object.info()
             if hasattr(object, "shape"):
-                print "Original object shape:", self.shape
-                print "Shape read:", data.shape
-                print "shape should look as:", object.shape
+                print("Original object shape:", self.shape)
+                print("Shape read:", data.shape)
+                print("shape should look as:", object.shape)
 
         if not hasattr(data, "shape"):
             # Scalar case
@@ -513,15 +517,17 @@ class Slices3CArrayTestCase(BasicTestCase):
     chunkshape = (5, 5, 5, 5)
     slices = (slice(1, 2, 1), slice(
         0, None, None), slice(1, 4, 2))  # Don't work
-    # slices = (slice(None, None, None), slice(0, None, None), slice(1,4,1)) # W
-    # slices = (slice(None, None, None), slice(None, None, None), slice(1,4,2)) # N
-    # slices = (slice(1,2,1), slice(None, None, None), slice(1,4,2)) # N
+    # slices = (slice(None, None, None), slice(0, None, None),
+    #           slice(1,4,1))  # W
+    # slices = (slice(None, None, None), slice(None, None, None),
+    #           slice(1,4,2))  # N
+    # slices = (slice(1,2,1), slice(None, None, None), slice(1,4,2))  # N
     # Disable the failing test temporarily with a working test case
     slices = (slice(1, 2, 1), slice(1, 4, None), slice(1, 4, 2))  # Y
-    # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,1)) # Y
+    # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,1))  # Y
     slices = (slice(1, 2, 1), slice(0, 4, None), slice(1, 4, 2))  # N
-    # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,2), slice(0,100,1))
-    # # N
+    # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,2),
+    #           slice(0,100,1))  # N
 
 
 class Slices4CArrayTestCase(BasicTestCase):
@@ -666,6 +672,74 @@ class BloscShuffleTestCase(BasicTestCase):
     step = 7
 
 
+class BloscFletcherTestCase(BasicTestCase):
+    # see gh-21
+    shape = (200, 300)
+    compress = 1
+    shuffle = 1
+    fletcher32 = 1
+    complib = "blosc"
+    chunkshape = (100, 100)
+    start = 3
+    stop = 10
+    step = 7
+
+
+class BloscBloscLZTestCase(BasicTestCase):
+    shape = (20, 30)
+    compress = 1
+    shuffle = 1
+    complib = "blosc:blosclz"
+    chunkshape = (200, 100)
+    start = 2
+    stop = 11
+    step = 7
+
+
+class BloscLZ4TestCase(BasicTestCase):
+    shape = (20, 30)
+    compress = 1
+    shuffle = 1
+    complib = "blosc:lz4"
+    chunkshape = (100, 100)
+    start = 3
+    stop = 10
+    step = 7
+
+
+class BloscLZ4HCTestCase(BasicTestCase):
+    shape = (20, 30)
+    compress = 1
+    shuffle = 1
+    complib = "blosc:lz4hc"
+    chunkshape = (100, 100)
+    start = 3
+    stop = 10
+    step = 7
+
+
+class BloscSnappyTestCase(BasicTestCase):
+    shape = (20, 30)
+    compress = 1
+    shuffle = 1
+    complib = "blosc:snappy"
+    chunkshape = (100, 100)
+    start = 3
+    stop = 10
+    step = 7
+
+
+class BloscZlibTestCase(BasicTestCase):
+    shape = (20, 30)
+    compress = 1
+    shuffle = 1
+    complib = "blosc:zlib"
+    chunkshape = (100, 100)
+    start = 3
+    stop = 10
+    step = 7
+
+
 class LZOComprTestCase(BasicTestCase):
     compress = 1  # sss
     complib = "lzo"
@@ -1065,12 +1139,12 @@ class OffsetStrideTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test01a_String(self):
-        """Checking carray with offseted NumPy strings appends"""
+        """Checking carray with offseted NumPy strings appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_String..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_String..." % self.__class__.__name__)
 
         shape = (3, 2, 2)
         # Create an string atom
@@ -1089,9 +1163,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the data:
         data = carray.read()
         if common.verbose:
-            print "Object read:", data
-            print "Nrows in", carray._v_pathname, ":", carray.nrows
-            print "Second row in carray ==>", data[1].tolist()
+            print("Object read:", data)
+            print("Nrows in", carray._v_pathname, ":", carray.nrows)
+            print("Second row in carray ==>", data[1].tolist())
 
         self.assertEqual(carray.nrows, 3)
         self.assertEqual(data[0].tolist(), [[b"123", b"45"], [b"45", b"123"]])
@@ -1100,12 +1174,12 @@ class OffsetStrideTestCase(unittest.TestCase):
         self.assertEqual(len(data[1]), 2)
 
     def test01b_String(self):
-        """Checking carray with strided NumPy strings appends"""
+        """Checking carray with strided NumPy strings appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_String..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_String..." % self.__class__.__name__)
 
         shape = (3, 2, 2)
         # Create an string atom
@@ -1124,9 +1198,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         data = carray.read()
         if common.verbose:
-            print "Object read:", data
-            print "Nrows in", carray._v_pathname, ":", carray.nrows
-            print "Second row in carray ==>", data[1].tolist()
+            print("Object read:", data)
+            print("Nrows in", carray._v_pathname, ":", carray.nrows)
+            print("Second row in carray ==>", data[1].tolist())
 
         self.assertEqual(carray.nrows, 3)
         self.assertEqual(data[0].tolist(), [[b"a", b"b"], [b"45", b"123"]])
@@ -1135,12 +1209,12 @@ class OffsetStrideTestCase(unittest.TestCase):
         self.assertEqual(len(data[1]), 2)
 
     def test02a_int(self):
-        """Checking carray with offseted NumPy ints appends"""
+        """Checking carray with offseted NumPy ints appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02a_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02a_int..." % self.__class__.__name__)
 
         shape = (3, 3)
         # Create an string atom
@@ -1157,9 +1231,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         data = carray.read()
         if common.verbose:
-            print "Object read:", data
-            print "Nrows in", carray._v_pathname, ":", carray.nrows
-            print "Third row in carray ==>", data[2]
+            print("Object read:", data)
+            print("Nrows in", carray._v_pathname, ":", carray.nrows)
+            print("Third row in carray ==>", data[2])
 
         self.assertEqual(carray.nrows, 3)
         self.assertTrue(allequal(data[
@@ -1170,12 +1244,12 @@ class OffsetStrideTestCase(unittest.TestCase):
                         2], numpy.array([-1, 0, 0], dtype='int32')))
 
     def test02b_int(self):
-        """Checking carray with strided NumPy ints appends"""
+        """Checking carray with strided NumPy ints appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_int..." % self.__class__.__name__)
 
         shape = (3, 3)
         # Create an string atom
@@ -1192,9 +1266,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         data = carray.read()
         if common.verbose:
-            print "Object read:", data
-            print "Nrows in", carray._v_pathname, ":", carray.nrows
-            print "Third row in carray ==>", data[2]
+            print("Object read:", data)
+            print("Nrows in", carray._v_pathname, ":", carray.nrows)
+            print("Third row in carray ==>", data[2])
 
         self.assertEqual(carray.nrows, 3)
         self.assertTrue(allequal(data[
@@ -1208,11 +1282,11 @@ class OffsetStrideTestCase(unittest.TestCase):
 class CopyTestCase(unittest.TestCase):
 
     def test01a_copy(self):
-        """Checking CArray.copy() method """
+        """Checking CArray.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1228,7 +1302,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1238,18 +1312,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1273,11 +1347,11 @@ class CopyTestCase(unittest.TestCase):
         os.remove(file)
 
     def test01b_copy(self):
-        """Checking CArray.copy() method """
+        """Checking CArray.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1293,7 +1367,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1303,18 +1377,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1336,11 +1410,11 @@ class CopyTestCase(unittest.TestCase):
         os.remove(file)
 
     def test01c_copy(self):
-        """Checking CArray.copy() method """
+        """Checking CArray.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01c_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01c_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1356,7 +1430,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1366,18 +1440,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1404,8 +1478,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (where specified)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1421,7 +1495,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1432,18 +1506,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.group1.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1470,8 +1544,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (python flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03c_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03c_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1487,7 +1561,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1497,15 +1571,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -1531,8 +1605,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (string python flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03d_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03d_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1548,7 +1622,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1558,17 +1632,17 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "type value-->", type(array2[:][0][0])
-            print "value-->", array2[:]
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("type value-->", type(array2[:][0][0]))
+            print("value-->", array2[:])
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -1595,8 +1669,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (chararray flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03e_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03e_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1611,7 +1685,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1621,15 +1695,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1655,8 +1729,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (checking title copying)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1675,7 +1749,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1685,7 +1759,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
@@ -1693,7 +1767,7 @@ class CopyTestCase(unittest.TestCase):
 
         # Assert user attributes
         if common.verbose:
-            print "title of destination array-->", array2.title
+            print("title of destination array-->", array2.title)
         self.assertEqual(array2.title, "title array2")
 
         # Close the file
@@ -1704,8 +1778,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (user attributes copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1724,7 +1798,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1734,15 +1808,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(array2.attrs.attr1, "attr1")
@@ -1756,8 +1830,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking CArray.copy() method (user attributes not copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1776,7 +1850,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1786,15 +1860,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(hasattr(array2.attrs, "attr1"), 0)
@@ -1817,11 +1891,11 @@ class CopyIndexTestCase(unittest.TestCase):
     nrowsinbuf = 2
 
     def test01_index(self):
-        """Checking CArray.copy() method with indexes"""
+        """Checking CArray.copy() method with indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_index..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_index..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -1846,10 +1920,10 @@ class CopyIndexTestCase(unittest.TestCase):
                              stop=self.stop,
                              step=self.step)
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -1857,8 +1931,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in array
         if common.verbose:
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", r2.shape[0])
 
         # The next line is commented out because a copy should not
         # keep the same chunkshape anymore.
@@ -1874,8 +1948,8 @@ class CopyIndexTestCase(unittest.TestCase):
         """Checking CArray.copy() method with indexes (close file version)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_indexclosef..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_indexclosef..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -1906,10 +1980,10 @@ class CopyIndexTestCase(unittest.TestCase):
         array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -1918,8 +1992,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in array
         if common.verbose:
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], array2.nrows)
 
         # Close the file
@@ -2051,30 +2125,30 @@ class Rows64bitsTestCase(unittest.TestCase):
         if self.close:
             if common.verbose:
                 # Check how many entries there are in the array
-                print "Before closing"
-                print "Entries:", array.nrows, type(array.nrows)
-                print "Entries:", array.nrows / (1000 * 1000), "Millions"
-                print "Shape:", array.shape
+                print("Before closing")
+                print("Entries:", array.nrows, type(array.nrows))
+                print("Entries:", array.nrows / (1000 * 1000), "Millions")
+                print("Shape:", array.shape)
             # Close the file
             fileh.close()
             # Re-open the file
             fileh = self.fileh = open_file(self.file)
             array = fileh.root.array
             if common.verbose:
-                print "After re-open"
+                print("After re-open")
 
         # Check how many entries there are in the array
         if common.verbose:
-            print "Entries:", array.nrows, type(array.nrows)
-            print "Entries:", array.nrows / (1000 * 1000), "Millions"
-            print "Shape:", array.shape
-            print "Last 10 elements-->", array[-10:]
+            print("Entries:", array.nrows, type(array.nrows))
+            print("Entries:", array.nrows / (1000 * 1000), "Millions")
+            print("Shape:", array.shape)
+            print("Last 10 elements-->", array[-10:])
             stop = self.narows % 256
             if stop > 127:
                 stop -= 256
             start = stop - 10
-            # print "start, stop-->", start, stop
-            print "Should look like:", numpy.arange(start, stop, dtype='int8')
+            # print("start, stop-->", start, stop)
+            print("Should look like:", numpy.arange(start, stop, dtype='int8'))
 
         nrows = self.narows * self.nanumber
         # check nrows
@@ -2150,7 +2224,7 @@ class DfltAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Check the values
         values = self.h5file.root.bar[:]
         if common.verbose:
-            print "Read values:", values
+            print("Read values:", values)
         self.assertTrue(
             allequal(values, numpy.array(["abdef"]*100, "S5").reshape(10, 10)))
 
@@ -2167,7 +2241,7 @@ class DfltAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Check the values
         values = self.h5file.root.bar[:]
         if common.verbose:
-            print "Read values:", values
+            print("Read values:", values)
         self.assertTrue(allequal(values, numpy.ones((10, 10), "i4")))
 
     def test02_dflt(self):
@@ -2183,7 +2257,7 @@ class DfltAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Check the values
         values = self.h5file.root.bar[:]
         if common.verbose:
-            print "Read values:", values
+            print("Read values:", values)
         self.assertTrue(allequal(values, numpy.ones((10, 10), "f8")*1.134))
 
 
@@ -2208,8 +2282,8 @@ class AtomDefaultReprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", repr(ca[0])
-            print "Defaults-->", repr(ca.atom.dflt)
+            print("First row-->", repr(ca[0]))
+            print("Defaults-->", repr(ca.atom.dflt))
         self.assertTrue(allequal(ca[0], numpy.zeros(N, 'S3')))
         self.assertTrue(allequal(ca.atom.dflt, numpy.zeros(N, 'S3')))
 
@@ -2223,8 +2297,8 @@ class AtomDefaultReprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", ca[0]
-            print "Defaults-->", ca.atom.dflt
+            print("First row-->", ca[0])
+            print("Defaults-->", ca.atom.dflt)
         self.assertTrue(allequal(ca[0], numpy.zeros(N, 'S3')))
         self.assertTrue(allequal(ca.atom.dflt, numpy.zeros(N, 'S3')))
 
@@ -2238,8 +2312,8 @@ class AtomDefaultReprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", ca[0]
-            print "Defaults-->", ca.atom.dflt
+            print("First row-->", ca[0])
+            print("Defaults-->", ca.atom.dflt)
         self.assertTrue(allequal(ca[0], numpy.ones(N, 'i4')))
         self.assertTrue(allequal(ca.atom.dflt, numpy.ones(N, 'i4')))
 
@@ -2254,8 +2328,8 @@ class AtomDefaultReprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", ca[0]
-            print "Defaults-->", ca.atom.dflt
+            print("First row-->", ca[0])
+            print("Defaults-->", ca.atom.dflt)
         self.assertTrue(allequal(ca[0], numpy.ones(N, 'f4')*generic))
         self.assertTrue(allequal(ca.atom.dflt, numpy.ones(N, 'f4')*generic))
 
@@ -2269,8 +2343,8 @@ class AtomDefaultReprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", repr(ca[0])
-            print "Defaults-->", repr(ca.atom.dflt)
+            print("First row-->", repr(ca[0]))
+            print("Defaults-->", repr(ca.atom.dflt))
         self.assertTrue(allequal(ca.atom.dflt, numpy.zeros(N, 'i4')))
 
     def test02b_None(self):
@@ -2283,8 +2357,8 @@ class AtomDefaultReprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", ca[0]
-            print "Defaults-->", ca.atom.dflt
+            print("First row-->", ca[0])
+            print("Defaults-->", ca.atom.dflt)
         self.assertTrue(allequal(ca.atom.dflt, numpy.zeros(N, 'i4')))
 
 
@@ -2318,7 +2392,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ca[0] = [[1, 3], [4, 5]]
         self.assertEqual(ca.nrows, 1)
         if common.verbose:
-            print "First row-->", ca[0]
+            print("First row-->", ca[0])
         self.assertTrue(allequal(ca[0], numpy.array([[1, 3], [4, 5]], 'i4')))
 
     def test01b_assign(self):
@@ -2333,7 +2407,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ca[:] = [[[1]], [[2]], [[3]]]   # Simple broadcast
         self.assertEqual(ca.nrows, 3)
         if common.verbose:
-            print "Third row-->", ca[2]
+            print("Third row-->", ca[2])
         self.assertTrue(allequal(ca[2], numpy.array([[3, 3], [3, 3]], 'i4')))
 
     def test02a_assign(self):
@@ -2348,7 +2422,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ca[:] = [[[1, 3], [4, 5], [7, 9]]]
         self.assertEqual(ca.nrows, 1)
         if common.verbose:
-            print "First row-->", ca[0]
+            print("First row-->", ca[0])
         self.assertTrue(allequal(ca[0], numpy.array(
             [[1, 3], [4, 5], [7, 9]], 'i4')))
 
@@ -2366,7 +2440,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
                  [[-2, 3], [-5, 5], [7, -9]]]
         self.assertEqual(ca.nrows, 3)
         if common.verbose:
-            print "Third row-->", ca[2]
+            print("Third row-->", ca[2])
         self.assertTrue(
             allequal(ca[2], numpy.array([[-2, 3], [-5, 5], [7, -9]], 'i4')))
 
@@ -2384,7 +2458,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ca[:] = [a * 1, a*2, a*3]
         self.assertEqual(ca.nrows, 3)
         if common.verbose:
-            print "Third row-->", ca[2]
+            print("Third row-->", ca[2])
         self.assertTrue(allequal(ca[2], a * 3))
 
     def test03b_MDMDMD(self):
@@ -2401,7 +2475,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ca[:] = a
         self.assertEqual(ca.nrows, 2)
         if common.verbose:
-            print "Third row-->", ca[:, 2, ...]
+            print("Third row-->", ca[:, 2, ...])
         self.assertTrue(allequal(ca[:, 2, ...], a[:, 2, ...]))
 
     def test03c_MDMDMD(self):
@@ -2418,7 +2492,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ca[:] = a
         self.assertEqual(ca.nrows, 3)
         if common.verbose:
-            print "Second row-->", ca[:, :, 1, ...]
+            print("Second row-->", ca[:, :, 1, ...])
         self.assertTrue(allequal(ca[:, :, 1, ...], a[:, :, 1, ...]))
 
 
@@ -2443,7 +2517,7 @@ class MDLargeAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
             ca = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", ca[0]
+            print("First row-->", ca[0])
         self.assertTrue(allequal(ca[0], numpy.zeros(N, 'i4')))
 
 
@@ -2746,6 +2820,15 @@ def suite():
         theSuite.addTest(unittest.makeSuite(ZlibShuffleTestCase))
         theSuite.addTest(unittest.makeSuite(BloscComprTestCase))
         theSuite.addTest(unittest.makeSuite(BloscShuffleTestCase))
+        theSuite.addTest(unittest.makeSuite(BloscFletcherTestCase))
+        theSuite.addTest(unittest.makeSuite(BloscBloscLZTestCase))
+        if 'lz4' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(BloscLZ4TestCase))
+            theSuite.addTest(unittest.makeSuite(BloscLZ4HCTestCase))
+        if 'snappy' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(BloscSnappyTestCase))
+        if 'zlib' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(BloscZlibTestCase))
         theSuite.addTest(unittest.makeSuite(LZOComprTestCase))
         theSuite.addTest(unittest.makeSuite(LZOShuffleTestCase))
         theSuite.addTest(unittest.makeSuite(Bzip2ComprTestCase))
@@ -2758,19 +2841,19 @@ def suite():
         theSuite.addTest(unittest.makeSuite(Int8TestCase))
         theSuite.addTest(unittest.makeSuite(Int16TestCase))
         theSuite.addTest(unittest.makeSuite(Int32TestCase))
-        if hasattr(numpy, 'float16'):
+        if 'Float16Atom' in globals():
             theSuite.addTest(unittest.makeSuite(Float16TestCase))
         theSuite.addTest(unittest.makeSuite(Float32TestCase))
         theSuite.addTest(unittest.makeSuite(Float64TestCase))
-        if hasattr(numpy, 'float96'):
+        if 'Float96Atom' in globals():
             theSuite.addTest(unittest.makeSuite(Float96TestCase))
-        if hasattr(numpy, 'float128'):
+        if 'Float128Atom' in globals():
             theSuite.addTest(unittest.makeSuite(Float128TestCase))
         theSuite.addTest(unittest.makeSuite(Complex64TestCase))
         theSuite.addTest(unittest.makeSuite(Complex128TestCase))
-        if hasattr(numpy, 'complex192'):
+        if 'Complex192Atom' in globals():
             theSuite.addTest(unittest.makeSuite(Complex192TestCase))
-        if hasattr(numpy, 'complex256'):
+        if 'Complex256Atom' in globals():
             theSuite.addTest(unittest.makeSuite(Complex256TestCase))
         theSuite.addTest(unittest.makeSuite(ComprTestCase))
         theSuite.addTest(unittest.makeSuite(OffsetStrideTestCase))
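
The availability checks above now probe for the Atom classes themselves
(brought into scope by "from tables import *") rather than for numpy
attributes, since an extended-precision Atom is only defined when both
numpy and the build environment support the type. The equivalent check
against the package namespace would be, as a sketch:

    import tables

    for name in ('Float16Atom', 'Float96Atom', 'Float128Atom',
                 'Complex192Atom', 'Complex256Atom'):
        print(name, hasattr(tables, name))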
diff --git a/tables/tests/test_create.py b/tables/tests/test_create.py
index eae7062..c658868 100644
--- a/tables/tests/test_create.py
+++ b/tables/tests/test_create.py
@@ -11,6 +11,7 @@ It also checks:
 
 """
 
+from __future__ import print_function
 import os
 import sys
 import hashlib
@@ -26,6 +27,7 @@ from tables import Group, Leaf, Table, Array, hdf5_version
 from tables.tests import common
 from tables.parameters import MAX_COLUMNS
 from tables.hdf5extension import HAVE_DIRECT_DRIVER, HAVE_WINDOWS_DRIVER
+from tables.utils import quantize
 
 import tables
 
@@ -73,14 +75,14 @@ class createTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test00_isClass(self):
-        """Testing table creation"""
+        """Testing table creation."""
         self.assertTrue(isinstance(self.table, Table))
         self.assertTrue(isinstance(self.array, Array))
         self.assertTrue(isinstance(self.array, Leaf))
         self.assertTrue(isinstance(self.group, Group))
 
     def test01_overwriteNode(self):
-        """Checking protection against node overwriting"""
+        """Checking protection against node overwriting."""
 
         try:
             self.array = self.fileh.create_array(self.root, 'anarray',
@@ -88,13 +90,13 @@ class createTestCase(unittest.TestCase):
         except NodeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NameError was catched!"
-                print value
+                print("\nGreat!, the next NameError was catched!")
+                print(value)
         else:
             self.fail("expected a NodeError")
 
     def test02_syntaxname(self):
-        """Checking syntax in object tree names"""
+        """Checking syntax in object tree names."""
 
         # Now, try to attach an array to the object tree with
         # a not allowed Python variable name
@@ -105,8 +107,8 @@ class createTestCase(unittest.TestCase):
         except NaturalNameWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NaturalNameWarning was catched!"
-                print value
+                print("\nGreat!, the next NaturalNameWarning was catched!")
+                print(value)
         else:
             self.fail("expected a NaturalNameWarning")
 
@@ -117,8 +119,8 @@ class createTestCase(unittest.TestCase):
         except NaturalNameWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NaturalNameWarning was catched!"
-                print value
+                print("\nGreat!, the next NaturalNameWarning was catched!")
+                print(value)
         else:
             self.fail("expected a NaturalNameWarning")
 
@@ -129,15 +131,15 @@ class createTestCase(unittest.TestCase):
         except NaturalNameWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NaturalNameWarning was catched!"
-                print value
+                print("\nGreat!, the next NaturalNameWarning was catched!")
+                print(value)
         else:
             self.fail("expected a NaturalNameWarning")
         # Reset the warning
         warnings.filterwarnings("default", category=NaturalNameWarning)
 
     def test03a_titleAttr(self):
-        """Checking the self.title attr in nodes"""
+        """Checking the self.title attr in nodes."""
 
         # Close the opened file to destroy the object tree
         self.fileh.close()
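
The three warning checks above rely on NaturalNameWarning having been
escalated to an error beforehand (note the filter reset to "default" once
they finish). A self-contained sketch of that pattern, with a made-up file
name and node name:

    import warnings
    import tables
    from tables import NaturalNameWarning

    h5 = tables.open_file("names.h5", "w")
    # Escalate the warning so an invalid node name raises instead of warning.
    warnings.filterwarnings("error", category=NaturalNameWarning)
    try:
        h5.create_array(h5.root, "2bad", [1, 2])  # leading digit: not an identifier
    except NaturalNameWarning as exc:
        print("caught:", exc)
    finally:
        warnings.filterwarnings("default", category=NaturalNameWarning)
        h5.close()
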
@@ -212,8 +214,8 @@ class createTestCase(unittest.TestCase):
         # Compare the input rowlist and output row list. They should
         # be equal.
         if common.verbose:
-            print "Original row list:", listrows[-1]
-            print "Retrieved row list:", listout[-1]
+            print("Original row list:", listrows[-1])
+            print("Retrieved row list:", listout[-1])
         self.assertEqual(listrows, listout)
 
     # The next limitation has been lifted. A warning is still there, though
@@ -247,8 +249,8 @@ class createTestCase(unittest.TestCase):
         except PerformanceWarning:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next PerformanceWarning was catched!"
-                print value
+                print("\nGreat!, the next PerformanceWarning was catched!")
+                print(value)
         else:
             self.fail("expected an PerformanceWarning")
         # Reset the warning
@@ -260,8 +262,8 @@ class createTestCase(unittest.TestCase):
 
         # Build a dictionary with the types as values and varnames as keys
         recordDict = {}
-        recordDict["a"*255] = IntCol(dflt=1)
-        recordDict["b"*256] = IntCol(dflt=1)  # Should trigger a ValueError
+        recordDict["a" * 255] = IntCol(dflt=1)
+        recordDict["b" * 256] = IntCol(dflt=1)  # Should trigger a ValueError
 
         # Now, create a table with this record object
         # This way of creating node objects has been deprecated
@@ -276,8 +278,8 @@ class createTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat!, the next ValueError was catched!")
+                print(value)
         else:
             self.fail("expected an ValueError")
 
@@ -286,15 +288,15 @@ class createTestCase(unittest.TestCase):
 
         # Build a dictionary with the types as values and varnames as keys
         recordDict = {}
-        recordDict["a"*255] = IntCol(dflt=1, pos=0)
-        recordDict["b"*1024] = IntCol(dflt=1, pos=1)  # Should work well
+        recordDict["a" * 255] = IntCol(dflt=1, pos=0)
+        recordDict["b" * 1024] = IntCol(dflt=1, pos=1)  # Should work well
 
         # Attach the table to object tree
         # No error should be raised here, even with very long column names.
         table = self.fileh.create_table(self.root, 'table',
                                         recordDict, "MetaRecord instance")
-        self.assertEqual(table.colnames[0], "a"*255)
-        self.assertEqual(table.colnames[1], "b"*1024)
+        self.assertEqual(table.colnames[0], "a" * 255)
+        self.assertEqual(table.colnames[1], "b" * 1024)
 
 
 class Record2(IsDescription):
@@ -350,14 +352,23 @@ class FiltersTreeTestCase(unittest.TestCase):
             ea1.append(var1List)
             ea2.append(var3List)
 
+            # Finally a couple of VLArrays too
+            vla1 = self.h5file.create_vlarray(group, 'vlarray1',
+                                              StringAtom(itemsize=4), "col 1")
+            vla2 = self.h5file.create_vlarray(group, 'vlarray2',
+                                              Int16Atom(), "col 3")
+            # And fill them with some values
+            vla1.append(var1List)
+            vla2.append(var3List)
+
             # Create a new group (descendant of group)
             if j == 1:  # The second level
-                group2 = self.h5file.create_group(group, 'group'+str(j),
+                group2 = self.h5file.create_group(group, 'group' + str(j),
                                                   filters=self.gfilters)
             elif j == 2:  # third level
-                group2 = self.h5file.create_group(group, 'group'+str(j))
+                group2 = self.h5file.create_group(group, 'group' + str(j))
             else:   # The rest of levels
-                group2 = self.h5file.create_group(group, 'group'+str(j),
+                group2 = self.h5file.create_group(group, 'group' + str(j),
                                                   filters=self.filters)
             # Iterate over this new group (group2)
             group = group2
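
The new VLArray nodes extend the filter-inheritance checks to variable-length
data. For reference, a standalone sketch of the same calls (file name and row
contents are made up):

    import tables

    h5 = tables.open_file("vl.h5", "w")
    vla = h5.create_vlarray(h5.root, "vlarray1",
                            tables.StringAtom(itemsize=4), "col 1")
    # Each append() stores one variable-length row.
    vla.append(["a", "bb", "cccc"])
    vla.append(["dd"])
    print(vla[0], vla[1])
    h5.close()
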
@@ -376,22 +387,25 @@ class FiltersTreeTestCase(unittest.TestCase):
         "Checking inheritance of filters on trees (open file version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_checkFilters..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_checkFilters..." %
+                  self.__class__.__name__)
 
         # First level check
         if common.verbose:
-            print "Test filter:", repr(self.filters)
-            print "Filters in file:", repr(self.h5file.filters)
+            print("Test filter:", repr(self.filters))
+            print("Filters in file:", repr(self.h5file.filters))
 
-        if self.filters == None:
+        if self.filters is None:
             filters = Filters()
         else:
             filters = self.filters
         self.assertEqual(repr(filters), repr(self.h5file.filters))
         # The next nodes have to have the same filter properties as
         # self.filters
-        nodelist = ['/table1', '/group0/earray1', '/group0']
+        nodelist = [
+            '/table1', '/group0/earray1', '/group0/vlarray1', '/group0',
+        ]
         for node in nodelist:
             object = self.h5file.get_node(node)
             if isinstance(object, Group):
@@ -401,21 +415,22 @@ class FiltersTreeTestCase(unittest.TestCase):
 
         # Second and third level check
         group1 = self.h5file.root.group0.group1
-        if self.gfilters == None:
-            if self.filters == None:
+        if self.gfilters is None:
+            if self.filters is None:
                 gfilters = Filters()
             else:
                 gfilters = self.filters
         else:
             gfilters = self.gfilters
         if common.verbose:
-            print "Test gfilter:", repr(gfilters)
-            print "Filters in file:", repr(group1._v_filters)
+            print("Test gfilter:", repr(gfilters))
+            print("Filters in file:", repr(group1._v_filters))
 
         self.assertEqual(repr(gfilters), repr(group1._v_filters))
         # The next nodes have to have the same filter properties as
         # gfilters
         nodelist = ['/group0/group1', '/group0/group1/earray1',
+                    '/group0/group1/vlarray1',
                     '/group0/group1/table1', '/group0/group1/group2/table1']
         for node in nodelist:
             object = self.h5file.get_node(node)
@@ -425,9 +440,9 @@ class FiltersTreeTestCase(unittest.TestCase):
                 self.assertEqual(repr(gfilters), repr(object.filters))
 
         # Fourth and fifth level check
-        if self.filters == None:
+        if self.filters is None:
             # If None, the filters are inherited!
-            if self.gfilters == None:
+            if self.gfilters is None:
                 filters = Filters()
             else:
                 filters = self.gfilters
@@ -435,14 +450,15 @@ class FiltersTreeTestCase(unittest.TestCase):
             filters = self.filters
         group3 = self.h5file.root.group0.group1.group2.group3
         if common.verbose:
-            print "Test filter:", repr(filters)
-            print "Filters in file:", repr(group3._v_filters)
+            print("Test filter:", repr(filters))
+            print("Filters in file:", repr(group3._v_filters))
 
         self.assertEqual(repr(filters), repr(group3._v_filters))
         # The next nodes have to have the same filter properties as
         # self.filter
         nodelist = ['/group0/group1/group2/group3',
                     '/group0/group1/group2/group3/earray1',
+                    '/group0/group1/group2/group3/vlarray1',
                     '/group0/group1/group2/group3/table1',
                     '/group0/group1/group2/group3/group4']
         for node in nodelist:
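
These assertions encode the rule the whole test class is about: a node
created without an explicit `filters` argument picks up the filters of the
closest ancestor that set them, falling back to the file-wide default
(a plain Filters() when nothing was given). A small sketch of the rule, with
made-up names; like the tests, it compares repr() strings:

    import tables

    f = tables.Filters(complevel=1, complib="zlib")
    h5 = tables.open_file("inherit.h5", "w", filters=f)
    g = h5.create_group("/", "g")            # no filters: inherits from the file
    arr = h5.create_carray(g, "a", tables.Int32Atom(), (10,))
    assert repr(arr.filters) == repr(f)      # same settings all the way down
    h5.close()
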
@@ -469,8 +485,9 @@ class FiltersTreeTestCase(unittest.TestCase):
         "Checking inheritance of filters on trees (close file version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_checkFilters..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_checkFilters..." %
+                  self.__class__.__name__)
 
         # Close the file
         self.h5file.close()
@@ -478,18 +495,20 @@ class FiltersTreeTestCase(unittest.TestCase):
         self.h5file = open_file(self.file, "r")
 
         # First level check
-        if self.filters == None:
+        if self.filters is None:
             filters = Filters()
         else:
             filters = self.filters
         if common.verbose:
-            print "Test filter:", repr(filters)
-            print "Filters in file:", repr(self.h5file.filters)
+            print("Test filter:", repr(filters))
+            print("Filters in file:", repr(self.h5file.filters))
 
         self.assertEqual(repr(filters), repr(self.h5file.filters))
         # The next nodes have to have the same filter properties as
         # self.filters
-        nodelist = ['/table1', '/group0/earray1', '/group0']
+        nodelist = [
+            '/table1', '/group0/earray1', '/group0/vlarray1', '/group0',
+        ]
         for node in nodelist:
             object_ = self.h5file.get_node(node)
             if isinstance(object_, Group):
@@ -499,21 +518,22 @@ class FiltersTreeTestCase(unittest.TestCase):
 
         # Second and third level check
         group1 = self.h5file.root.group0.group1
-        if self.gfilters == None:
-            if self.filters == None:
+        if self.gfilters is None:
+            if self.filters is None:
                 gfilters = Filters()
             else:
                 gfilters = self.filters
         else:
             gfilters = self.gfilters
         if common.verbose:
-            print "Test filter:", repr(gfilters)
-            print "Filters in file:", repr(group1._v_filters)
+            print("Test filter:", repr(gfilters))
+            print("Filters in file:", repr(group1._v_filters))
 
         self.assertEqual(repr(gfilters), repr(group1._v_filters))
         # The next nodes have to have the same filter properties as
         # gfilters
         nodelist = ['/group0/group1', '/group0/group1/earray1',
+                    '/group0/group1/vlarray1',
                     '/group0/group1/table1', '/group0/group1/group2/table1']
         for node in nodelist:
             object_ = self.h5file.get_node(node)
@@ -523,8 +543,8 @@ class FiltersTreeTestCase(unittest.TestCase):
                 self.assertEqual(repr(gfilters), repr(object_.filters))
 
         # Fourth and fifth level check
-        if self.filters == None:
-            if self.gfilters == None:
+        if self.filters is None:
+            if self.gfilters is None:
                 filters = Filters()
             else:
                 filters = self.gfilters
@@ -532,14 +552,15 @@ class FiltersTreeTestCase(unittest.TestCase):
             filters = self.filters
         group3 = self.h5file.root.group0.group1.group2.group3
         if common.verbose:
-            print "Test filter:", repr(filters)
-            print "Filters in file:", repr(group3._v_filters)
+            print("Test filter:", repr(filters))
+            print("Filters in file:", repr(group3._v_filters))
 
         self.assertEqual(repr(filters), repr(group3._v_filters))
         # The next nodes have to have the same filter properties as
         # self.filters
         nodelist = ['/group0/group1/group2/group3',
                     '/group0/group1/group2/group3/earray1',
+                    '/group0/group1/group2/group3/vlarray1',
                     '/group0/group1/group2/group3/table1',
                     '/group0/group1/group2/group3/group4']
         for node in nodelist:
@@ -613,6 +634,31 @@ class FiltersCase10(FiltersTreeTestCase):
     gfilters = Filters(complevel=5, shuffle=True, complib="blosc")
 
 
+class FiltersCaseBloscBloscLZ(FiltersTreeTestCase):
+    filters = Filters(shuffle=False, complevel=1, complib="blosc:blosclz")
+    gfilters = Filters(complevel=5, shuffle=True, complib="blosc:blosclz")
+
+
+class FiltersCaseBloscLZ4(FiltersTreeTestCase):
+    filters = Filters(shuffle=False, complevel=1, complib="blosc:lz4")
+    gfilters = Filters(complevel=5, shuffle=True, complib="blosc:lz4")
+
+
+class FiltersCaseBloscLZ4HC(FiltersTreeTestCase):
+    filters = Filters(shuffle=False, complevel=1, complib="blosc:lz4hc")
+    gfilters = Filters(complevel=5, shuffle=True, complib="blosc:lz4hc")
+
+
+class FiltersCaseBloscSnappy(FiltersTreeTestCase):
+    filters = Filters(shuffle=False, complevel=1, complib="blosc:snappy")
+    gfilters = Filters(complevel=5, shuffle=True, complib="blosc:snappy")
+
+
+class FiltersCaseBloscZlib(FiltersTreeTestCase):
+    filters = Filters(shuffle=False, complevel=1, complib="blosc:zlib")
+    gfilters = Filters(complevel=5, shuffle=True, complib="blosc:zlib")
+
+
 class CopyGroupTestCase(unittest.TestCase):
     title = "A title"
     nrows = 10
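
The five new cases exercise the complib="blosc:<codec>" spelling introduced
in 3.1: the part after the colon pins the compressor that Blosc runs
internally, while plain "blosc" keeps Blosc's default codec (blosclz). For
example:

    import tables

    default = tables.Filters(complevel=5, complib="blosc")          # blosclz
    pinned = tables.Filters(complevel=5, complib="blosc:blosclz")   # explicit

Codecs other than blosclz may be missing from a given build, which is why the
suite() additions further down guard them with tables.blosc_compressor_list().
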
@@ -636,7 +682,7 @@ class CopyGroupTestCase(unittest.TestCase):
         for j in range(5):
             for i in range(2):
                 # Create a new group (brother of group)
-                group2 = self.h5file.create_group(group, 'bgroup'+str(i),
+                group2 = self.h5file.create_group(group, 'bgroup' + str(i),
                                                   filters=None)
 
                 # Create a table
@@ -679,7 +725,7 @@ class CopyGroupTestCase(unittest.TestCase):
                 ea2.append(var3List)
 
             # Create a new group (descendant of group)
-            group3 = self.h5file.create_group(group, 'group'+str(j),
+            group3 = self.h5file.create_group(group, 'group' + str(j),
                                               filters=None)
             # Iterate over this new group (group3)
             group = group3
@@ -704,8 +750,9 @@ class CopyGroupTestCase(unittest.TestCase):
         "Checking non-recursive copy of a Group"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_nonRecursive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_nonRecursive..." %
+                  self.__class__.__name__)
 
         # Copy a group non-recursively
         srcgroup = self.h5file.root.group0.group1
@@ -728,8 +775,8 @@ class CopyGroupTestCase(unittest.TestCase):
         nodelist1.sort()
         nodelist2.sort()
         if common.verbose:
-            print "The origin node list -->", nodelist1
-            print "The copied node list -->", nodelist2
+            print("The origin node list -->", nodelist1)
+            print("The copied node list -->", nodelist2)
         self.assertEqual(srcgroup._v_nchildren, dstgroup._v_nchildren)
         self.assertEqual(nodelist1, nodelist2)
 
@@ -737,8 +784,9 @@ class CopyGroupTestCase(unittest.TestCase):
         "Checking non-recursive copy of a Group (attributes copied)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_nonRecursiveAttrs..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print(("Running %s.test01_nonRecursiveAttrs..." %
+                   self.__class__.__name__))
 
         # Copy a group non-recursively with attrs
         srcgroup = self.h5file.root.group0.group1
@@ -771,13 +819,13 @@ class CopyGroupTestCase(unittest.TestCase):
                 dstattrskeys.remove('FILTERS')
             # These lists should already be ordered
             if common.verbose:
-                print "srcattrskeys for node %s: %s" % (srcnode._v_name,
-                                                        srcattrskeys)
-                print "dstattrskeys for node %s: %s" % (dstnode._v_name,
-                                                        dstattrskeys)
+                print("srcattrskeys for node %s: %s" % (srcnode._v_name,
+                                                        srcattrskeys))
+                print("dstattrskeys for node %s: %s" % (dstnode._v_name,
+                                                        dstattrskeys))
             self.assertEqual(srcattrskeys, dstattrskeys)
             if common.verbose:
-                print "The attrs names has been copied correctly"
+                print("The attrs names has been copied correctly")
 
             # Now, for the contents of attributes
             for srcattrname in srcattrskeys:
@@ -788,14 +836,14 @@ class CopyGroupTestCase(unittest.TestCase):
                 self.assertEqual(dstattrs.FILTERS, self.filters)
 
             if common.verbose:
-                print "The attrs contents has been copied correctly"
+                print("The attrs contents has been copied correctly")
 
     def test02_Recursive(self):
         "Checking recursive copy of a Group"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_Recursive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_Recursive..." % self.__class__.__name__)
 
         # Create the destination node
         group = self.h5file2.root
@@ -842,16 +890,17 @@ class CopyGroupTestCase(unittest.TestCase):
             nodelist2.append(node._v_pathname[lenDstGroup:])
 
         if common.verbose:
-            print "The origin node list -->", nodelist1
-            print "The copied node list -->", nodelist2
+            print("The origin node list -->", nodelist1)
+            print("The copied node list -->", nodelist2)
         self.assertEqual(nodelist1, nodelist2)
 
     def test03_RecursiveFilters(self):
         "Checking recursive copy of a Group (cheking Filters)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_RecursiveFilters..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print(("Running %s.test03_RecursiveFilters..." %
+                   self.__class__.__name__))
 
         # Create the destination node
         group = self.h5file2.root
@@ -977,7 +1026,7 @@ class CopyFileTestCase(unittest.TestCase):
         for j in range(5):
             for i in range(2):
                 # Create a new group (brother of group)
-                group2 = self.h5file.create_group(group, 'bgroup'+str(i),
+                group2 = self.h5file.create_group(group, 'bgroup' + str(i),
                                                   filters=None)
 
                 # Create a table
@@ -1021,7 +1070,7 @@ class CopyFileTestCase(unittest.TestCase):
                 ea2.append(var3List)
 
             # Create a new group (descendant of group)
-            group3 = self.h5file.create_group(group, 'group'+str(j),
+            group3 = self.h5file.create_group(group, 'group' + str(j),
                                               filters=None)
             # Iterate over this new group (group3)
             group = group3
@@ -1047,8 +1096,8 @@ class CopyFileTestCase(unittest.TestCase):
         "Checking copy of a File (overwriting file)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_overwrite..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_overwrite..." % self.__class__.__name__)
 
         # Create a temporary file
         file2h = open(self.file2, "w")
@@ -1077,8 +1126,8 @@ class CopyFileTestCase(unittest.TestCase):
         nodelist1.sort()
         nodelist2.sort()
         if common.verbose:
-            print "The origin node list -->", nodelist1
-            print "The copied node list -->", nodelist2
+            print("The origin node list -->", nodelist1)
+            print("The copied node list -->", nodelist2)
         self.assertEqual(srcgroup._v_nchildren, dstgroup._v_nchildren)
         self.assertEqual(nodelist1, nodelist2)
         self.assertEqual(self.h5file2.title, self.title)
@@ -1087,8 +1136,9 @@ class CopyFileTestCase(unittest.TestCase):
         "Checking copy of a File (srcfile == dstfile)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00a_srcdstequal..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00a_srcdstequal..." %
+                  self.__class__.__name__)
 
         # Copy the file to the destination
         self.assertRaises(IOError, self.h5file.copy_file, self.h5file.filename)
@@ -1097,8 +1147,8 @@ class CopyFileTestCase(unittest.TestCase):
         "Checking copy of a File (first-class function)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00b_firstclass..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00b_firstclass..." % self.__class__.__name__)
 
         # Close the temporary file
         self.h5file.close()
@@ -1120,8 +1170,8 @@ class CopyFileTestCase(unittest.TestCase):
         nodelist1.sort()
         nodelist2.sort()
         if common.verbose:
-            print "The origin node list -->", nodelist1
-            print "The copied node list -->", nodelist2
+            print("The origin node list -->", nodelist1)
+            print("The copied node list -->", nodelist2)
         self.assertEqual(srcgroup._v_nchildren, dstgroup._v_nchildren)
         self.assertEqual(nodelist1, nodelist2)
         self.assertEqual(self.h5file2.title, self.title)
@@ -1130,8 +1180,8 @@ class CopyFileTestCase(unittest.TestCase):
         "Checking copy of a File (attributes not copied)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_copy..." % self.__class__.__name__)
 
         # Copy the file to the destination
         self.h5file.copy_file(self.file2, title=self.title,
@@ -1156,12 +1206,12 @@ class CopyFileTestCase(unittest.TestCase):
         nodelist1.sort()
         nodelist2.sort()
         if common.verbose:
-            print "The origin node list -->", nodelist1
-            print "The copied node list -->", nodelist2
+            print("The origin node list -->", nodelist1)
+            print("The copied node list -->", nodelist2)
         self.assertEqual(srcgroup._v_nchildren, dstgroup._v_nchildren)
         self.assertEqual(nodelist1, nodelist2)
-        # print "_v_attrnames-->", self.h5file2.root._v_attrs._v_attrnames
-        # print "--> <%s,%s>" % (self.h5file2.title, self.title)
+        # print("_v_attrnames-->", self.h5file2.root._v_attrs._v_attrnames)
+        # print("--> <%s,%s>" % (self.h5file2.title, self.title))
         self.assertEqual(self.h5file2.title, self.title)
 
         # Check that user attributes have not been copied
@@ -1176,13 +1226,13 @@ class CopyFileTestCase(unittest.TestCase):
                 dstattrskeys.remove('FILTERS')
             # These lists should already be ordered
             if common.verbose:
-                print "srcattrskeys for node %s: %s" % (srcnode._v_name,
-                                                        srcattrskeys)
-                print "dstattrskeys for node %s: %s" % (dstnode._v_name,
-                                                        dstattrskeys)
+                print("srcattrskeys for node %s: %s" % (srcnode._v_name,
+                                                        srcattrskeys))
+                print("dstattrskeys for node %s: %s" % (dstnode._v_name,
+                                                        dstattrskeys))
             self.assertEqual(srcattrskeys, dstattrskeys)
             if common.verbose:
-                print "The attrs names has been copied correctly"
+                print("The attrs names has been copied correctly")
 
             # Now, for the contents of attributes
             for srcattrname in srcattrskeys:
@@ -1193,14 +1243,14 @@ class CopyFileTestCase(unittest.TestCase):
                 self.assertEqual(dstattrs.FILTERS, self.filters)
 
             if common.verbose:
-                print "The attrs contents has been copied correctly"
+                print("The attrs contents has been copied correctly")
 
     def test02_Attrs(self):
         "Checking copy of a File (attributes copied)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_Attrs..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_Attrs..." % self.__class__.__name__)
 
         # Copy the file to the destination
         self.h5file.copy_file(self.file2, title=self.title,
@@ -1227,16 +1277,16 @@ class CopyFileTestCase(unittest.TestCase):
             dstattrskeys = dstattrs._f_list("all")
             # These lists should already be ordered
             if common.verbose:
-                print "srcattrskeys for node %s: %s" % (srcnode._v_name,
-                                                        srcattrskeys)
-                print "dstattrskeys for node %s: %s" % (dstnode._v_name,
-                                                        dstattrskeys)
+                print("srcattrskeys for node %s: %s" % (srcnode._v_name,
+                                                        srcattrskeys))
+                print("dstattrskeys for node %s: %s" % (dstnode._v_name,
+                                                        dstattrskeys))
             # Filters may differ, do not take into account
             if self.filters is not None:
                 dstattrskeys.remove('FILTERS')
             self.assertEqual(srcattrskeys, dstattrskeys)
             if common.verbose:
-                print "The attrs names has been copied correctly"
+                print("The attrs names has been copied correctly")
 
             # Now, for the contents of attributes
             for srcattrname in srcattrskeys:
@@ -1247,7 +1297,7 @@ class CopyFileTestCase(unittest.TestCase):
                 self.assertEqual(dstattrs.FILTERS, self.filters)
 
             if common.verbose:
-                print "The attrs contents has been copied correctly"
+                print("The attrs contents has been copied correctly")
 
 
 class CopyFileCase1(CopyFileTestCase):
@@ -1304,8 +1354,9 @@ class CopyFileCase10(unittest.TestCase):
         "Checking copy of a File (checking not overwriting)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_notoverwrite..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_notoverwrite..." %
+                  self.__class__.__name__)
 
         # Create two empty files:
         file = tempfile.mktemp(".h5")
@@ -1319,8 +1370,8 @@ class CopyFileCase10(unittest.TestCase):
         except IOError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next IOError was catched!"
-                print value
+                print("\nGreat!, the next IOError was catched!")
+                print(value)
         else:
             self.fail("expected a IOError")
 
@@ -1431,23 +1482,24 @@ class GroupFiltersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._test_change('/explicit_yes', del_filters, tables.Filters())
 
 
-class SetBloscMaxThreadsTestCase(common.TempFileMixin, common.PyTablesTestCase):
+class SetBloscMaxThreadsTestCase(common.TempFileMixin,
+                                 common.PyTablesTestCase):
     filters = tables.Filters(complevel=4, complib="blosc")
 
     def test00(self):
         """Checking set_blosc_max_threads()"""
         nthreads_old = tables.set_blosc_max_threads(4)
         if common.verbose:
-            print "Previous max threads:", nthreads_old
-            print "Should be:", self.h5file.params['MAX_BLOSC_THREADS']
+            print("Previous max threads:", nthreads_old)
+            print("Should be:", self.h5file.params['MAX_BLOSC_THREADS'])
         self.assertEqual(nthreads_old, self.h5file.params['MAX_BLOSC_THREADS'])
         self.h5file.create_carray('/', 'some_array',
                                   atom=tables.Int32Atom(), shape=(3, 3),
                                  filters=self.filters)
         nthreads_old = tables.set_blosc_max_threads(1)
         if common.verbose:
-            print "Previous max threads:", nthreads_old
-            print "Should be:", 4
+            print("Previous max threads:", nthreads_old)
+            print("Should be:", 4)
         self.assertEqual(nthreads_old, 4)
 
     def test01(self):
@@ -1459,24 +1511,69 @@ class SetBloscMaxThreadsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._reopen()
         nthreads_old = tables.set_blosc_max_threads(4)
         if common.verbose:
-            print "Previous max threads:", nthreads_old
-            print "Should be:", self.h5file.params['MAX_BLOSC_THREADS']
+            print("Previous max threads:", nthreads_old)
+            print("Should be:", self.h5file.params['MAX_BLOSC_THREADS'])
         self.assertEqual(nthreads_old, self.h5file.params['MAX_BLOSC_THREADS'])
 
 
 class FilterTestCase(common.PyTablesTestCase):
-    def test_filter_01(self):
+    def test_filter_pack_type(self):
         self.assertEqual(type(Filters()._pack()), numpy.int64)
 
-    def test_filter_02(self):
+    @staticmethod
+    def _hexl(n):
         if sys.version_info[0] > 2:
-            hexl = lambda n: hex(int(n))
+            return hex(int(n))
         else:
-            hexl = lambda n: hex(long(n))
-            self.assertEqual(hexl(Filters()._pack()), '0x0L')
-            self.assertEqual(hexl(Filters(1, shuffle=False)._pack()), '0x101L')
-            filter_ = Filters(9, 'zlib', shuffle=True, fletcher32=True)
-            self.assertEqual(hexl(filter_._pack()), '0x30109L')
+            return hex(long(n)).rstrip('L')
+
+    def test_filter_pack_01(self):
+        filter_ = Filters()
+        self.assertEqual(self._hexl(filter_._pack()), '0x0')
+
+    def test_filter_pack_02(self):
+        filter_ = Filters(1, shuffle=False)
+        self.assertEqual(self._hexl(filter_._pack()), '0x101')
+
+    def test_filter_pack_03(self):
+        filter_ = Filters(9, 'zlib', shuffle=True, fletcher32=True)
+        self.assertEqual(self._hexl(filter_._pack()), '0x30109')
+
+    def test_filter_pack_04(self):
+        filter_ = Filters(1, shuffle=False, least_significant_digit=5)
+        self.assertEqual(self._hexl(filter_._pack()), '0x5040101')
+
+    def test_filter_unpack_01(self):
+        filter_ = Filters._unpack(numpy.int64(0x0))
+        self.assertFalse(filter_.shuffle)
+        self.assertFalse(filter_.fletcher32)
+        self.assertEqual(filter_.least_significant_digit, None)
+        self.assertEqual(filter_.complevel, 0)
+        self.assertEqual(filter_.complib, None)
+
+    def test_filter_unpack_02(self):
+        filter_ = Filters._unpack(numpy.int64(0x101))
+        self.assertFalse(filter_.shuffle)
+        self.assertFalse(filter_.fletcher32)
+        self.assertEqual(filter_.least_significant_digit, None)
+        self.assertEqual(filter_.complevel, 1)
+        self.assertEqual(filter_.complib, 'zlib')
+
+    def test_filter_unpack_03(self):
+        filter_ = Filters._unpack(numpy.int64(0x30109))
+        self.assertTrue(filter_.shuffle)
+        self.assertTrue(filter_.fletcher32)
+        self.assertEqual(filter_.least_significant_digit, None)
+        self.assertEqual(filter_.complevel, 9)
+        self.assertEqual(filter_.complib, 'zlib')
+
+    def test_filter_unpack_04(self):
+        filter_ = Filters._unpack(numpy.int64(0x5040101))
+        self.assertFalse(filter_.shuffle)
+        self.assertFalse(filter_.fletcher32)
+        self.assertEqual(filter_.least_significant_digit, 5)
+        self.assertEqual(filter_.complevel, 1)
+        self.assertEqual(filter_.complib, 'zlib')
 
 
 class DefaultDriverTestCase(common.PyTablesTestCase):
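
Read together, the pack/unpack vectors above pin down the layout of the
packed int64: complevel in the lowest byte, the complib index (1 = zlib) in
the next, flag bits in the third (0x1 shuffle, 0x2 fletcher32, and apparently
0x4 marking that a least-significant-digit value follows), and that digit in
the fourth byte. The sketch below reproduces the four expected constants; it
is inferred from the test vectors, not taken from the library source:

    def pack_filters(complevel, complib_index, shuffle, fletcher32, lsd=None):
        # Inferred layout (high to low): |lsd|flags|complib|complevel|.
        # A zero complevel packs as plain 0x0, so pass complib_index=0 then.
        flags = (0x1 if shuffle else 0) | (0x2 if fletcher32 else 0)
        value = complevel | (complib_index << 8) | (flags << 16)
        if lsd is not None:
            value |= (0x4 << 16) | (lsd << 24)
        return value

    assert pack_filters(0, 0, False, False) == 0x0
    assert pack_filters(1, 1, False, False) == 0x101
    assert pack_filters(9, 1, True, True) == 0x30109
    assert pack_filters(1, 1, False, False, lsd=5) == 0x5040101
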
@@ -1486,7 +1583,8 @@ class DefaultDriverTestCase(common.PyTablesTestCase):
     def setUp(self):
         self.h5fname = tempfile.mktemp(suffix=".h5")
         self.h5file = tables.open_file(self.h5fname, mode="w",
-                                       driver=self.DRIVER, **self.DRIVER_PARAMS)
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
 
         # Create an HDF5 file and contents
         root = self.h5file.root
@@ -1502,19 +1600,23 @@ class DefaultDriverTestCase(common.PyTablesTestCase):
         if os.path.isfile(self.h5fname):
             os.remove(self.h5fname)
 
+    def assertIsFile(self):
+        self.assertTrue(os.path.isfile(self.h5fname))
+
     def test_newFile(self):
         self.assertTrue(isinstance(self.h5file, tables.File))
-        self.assertTrue(os.path.isfile(self.h5fname))
+        self.assertIsFile()
 
     def test_readFile(self):
         self.h5file.close()
         self.h5file = None
 
-        self.assertTrue(os.path.isfile(self.h5fname))
+        self.assertIsFile()
 
         # Open an existing HDF5 file
         self.h5file = tables.open_file(self.h5fname, mode="r",
-                                       driver=self.DRIVER, **self.DRIVER_PARAMS)
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
 
         # check contents
         root = self.h5file.root
@@ -1533,11 +1635,12 @@ class DefaultDriverTestCase(common.PyTablesTestCase):
         self.h5file.close()
         self.h5file = None
 
-        self.assertTrue(os.path.isfile(self.h5fname))
+        self.assertIsFile()
 
         # Open an existing HDF5 file in append mode
         self.h5file = tables.open_file(self.h5fname, mode="a",
-                                       driver=self.DRIVER, **self.DRIVER_PARAMS)
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
 
         # check contents
         root = self.h5file.root
@@ -1562,7 +1665,8 @@ class DefaultDriverTestCase(common.PyTablesTestCase):
 
         # check contents
         self.h5file = tables.open_file(self.h5fname, mode="a",
-                                       driver=self.DRIVER, **self.DRIVER_PARAMS)
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
 
         root = self.h5file.root
 
@@ -1589,11 +1693,12 @@ class DefaultDriverTestCase(common.PyTablesTestCase):
         self.h5file.close()
         self.h5file = None
 
-        self.assertTrue(os.path.isfile(self.h5fname))
+        self.assertIsFile()
 
         # Open an existing HDF5 file in append mode
         self.h5file = tables.open_file(self.h5fname, mode="r+",
-                                       driver=self.DRIVER, **self.DRIVER_PARAMS)
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
 
         # check contents
         root = self.h5file.root
@@ -1617,7 +1722,8 @@ class DefaultDriverTestCase(common.PyTablesTestCase):
 
         # check contents
         self.h5file = tables.open_file(self.h5fname, mode="r+",
-                                       driver=self.DRIVER, **self.DRIVER_PARAMS)
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
 
         root = self.h5file.root
 
@@ -1694,15 +1800,16 @@ class CoreDriverNoBackingStoreTestCase(common.PyTablesTestCase):
         if self.h5file:
             self.h5file.close()
         elif self.h5fname in tables.file._open_files:
-            h5file = tables.file._open_files[self.h5fname]
-            h5file.close()
+            open_files = tables.file._open_files
+            for h5file in open_files.get_handlers_by_name(self.h5fname):
+                h5file.close()
 
         self.h5file = None
         if os.path.isfile(self.h5fname):
             os.remove(self.h5fname)
 
     def test_newFile(self):
-        """Ensure that nothing is written to file"""
+        """Ensure that nothing is written to file."""
 
         self.assertFalse(os.path.isfile(self.h5fname))
 
@@ -1911,7 +2018,7 @@ class CoreDriverNoBackingStoreTestCase(common.PyTablesTestCase):
             self.h5file.set_node_attr(root, "testattr", 41)
             self.h5file.create_array(root, "array", [1, 2], title="array")
             self.h5file.create_table(root, "table", {"var1": tables.IntCol()},
-                                    title="table")
+                                     title="table")
 
             image = self.h5file.get_file_image()
 
@@ -1923,6 +2030,39 @@ class CoreDriverNoBackingStoreTestCase(common.PyTablesTestCase):
                 self.assertEqual([i for i in image[:4]], [137, 72, 68, 70])
 
 
+class SplitDriverTestCase(DefaultDriverTestCase):
+    DRIVER = "H5FD_SPLIT"
+    DRIVER_PARAMS = {
+        "driver_split_meta_ext": "-xm.h5",
+        "driver_split_raw_ext": "-xr.h5",
+    }
+
+    def setUp(self):
+        self.h5fname = tempfile.mktemp()
+        self.h5fnames = [self.h5fname + self.DRIVER_PARAMS[k] for k in
+                         ("driver_split_meta_ext", "driver_split_raw_ext")]
+        self.h5file = tables.open_file(self.h5fname, mode="w",
+                                       driver=self.DRIVER,
+                                       **self.DRIVER_PARAMS)
+        root = self.h5file.root
+        self.h5file.set_node_attr(root, "testattr", 41)
+        self.h5file.create_array(root, "array", [1, 2], title="array")
+        self.h5file.create_table(root, "table", {"var1": tables.IntCol()},
+                                 title="table")
+
+    def tearDown(self):
+        if self.h5file:
+            self.h5file.close()
+        self.h5file = None
+        for fname in self.h5fnames:
+            if os.path.isfile(fname):
+                os.remove(fname)
+
+    def assertIsFile(self):
+        for fname in self.h5fnames:
+            self.assertTrue(os.path.isfile(fname))
+
+
 class NotSpportedDriverTestCase(common.PyTablesTestCase):
     DRIVER = None
     DRIVER_PARAMS = {}
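
With this class H5FD_SPLIT graduates from the "not supported" list (its old
stub subclass of NotSpportedDriverTestCase is deleted further down) to a
tested driver: metadata and raw data live in two separate files named by the
two extension parameters. A standalone sketch of the same setup, with an
illustrative base name:

    import tables

    h5 = tables.open_file("split-demo", mode="w", driver="H5FD_SPLIT",
                          driver_split_meta_ext="-xm.h5",
                          driver_split_raw_ext="-xr.h5")
    h5.create_array(h5.root, "array", [1, 2], title="array")
    h5.close()
    # HDF5 now keeps the metadata in "split-demo-xm.h5"
    # and the raw data in "split-demo-xr.h5".
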
@@ -1932,9 +2072,10 @@ class NotSpportedDriverTestCase(common.PyTablesTestCase):
         self.h5fname = tempfile.mktemp(suffix=".h5")
 
     def tearDown(self):
-        if self.h5fname in tables.file._open_files:
-            h5file = tables.file._open_files[self.h5fname]
-            h5file.close()
+        open_files = tables.file._open_files
+        if self.h5fname in open_files:
+            for h5file in open_files.get_handlers_by_name(self.h5fname):
+                h5file.close()
         if os.path.exists(self.h5fname):
             os.remove(self.h5fname)
 
@@ -1996,10 +2137,6 @@ class MultiDriverTestCase(NotSpportedDriverTestCase):
     DRIVER = "H5FD_MULTI"
 
 
-class SplitDriverTestCase(NotSpportedDriverTestCase):
-    DRIVER = "H5FD_SPLIT"
-
-
 class MpioDriverTestCase(NotSpportedDriverTestCase):
     DRIVER = "H5FD_MPIO"
 
@@ -2029,7 +2166,7 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
     def _create_image(self, filename="in-memory", title="Title", mode='w'):
         fileh = open_file(filename, mode=mode, title=title,
-                         driver=self.DRIVER, driver_core_backing_store=0)
+                          driver=self.DRIVER, driver_core_backing_store=0)
 
         try:
             fileh.create_array(fileh.root, 'array', [1, 2], title="Array")
@@ -2067,9 +2204,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="r",
-                               driver=self.DRIVER,
-                               driver_core_image=image,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2088,9 +2225,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="r+",
-                               driver=self.DRIVER,
-                               driver_core_image=image,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2104,7 +2241,7 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
         self.assertEqual(self.h5file.root.array.read(), [1, 2])
 
         self.h5file.create_array(self.h5file.root, 'array2', range(10000),
-                                title="Array2")
+                                 title="Array2")
         self.h5file.root._v_attrs.testattr2 = 42
 
         self.h5file.close()
@@ -2118,9 +2255,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="r+",
-                               driver=self.DRIVER,
-                               driver_core_image=image1,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image1,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2135,7 +2272,7 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         data = range(2 * tables.parameters.DRIVER_CORE_INCREMENT)
         self.h5file.create_array(self.h5file.root, 'array2', data,
-                                title="Array2")
+                                 title="Array2")
         self.h5file.root._v_attrs.testattr2 = 42
 
         image2 = self.h5file.get_file_image()
@@ -2149,9 +2286,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="r",
-                               driver=self.DRIVER,
-                               driver_core_image=image2,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image2,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2181,9 +2318,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="a",
-                               driver=self.DRIVER,
-                               driver_core_image=image,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2207,9 +2344,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="a",
-                               driver=self.DRIVER,
-                               driver_core_image=image1,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image1,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2224,7 +2361,7 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         data = range(2 * tables.parameters.DRIVER_CORE_INCREMENT)
         self.h5file.create_array(self.h5file.root, 'array2', data,
-                                title="Array2")
+                                 title="Array2")
         self.h5file.root._v_attrs.testattr2 = 42
 
         image2 = self.h5file.get_file_image()
@@ -2238,9 +2375,9 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
         # Open an existing file
         self.h5file = open_file(self.h5fname, mode="r",
-                               driver=self.DRIVER,
-                               driver_core_image=image2,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_image=image2,
+                                driver_core_backing_store=0)
 
         # Get the CLASS attribute of the arr object
         self.assertTrue(hasattr(self.h5file.root._v_attrs, "TITLE"))
@@ -2266,13 +2403,13 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
 
     def test_str(self):
         self.h5file = open_file(self.h5fname, mode="w", title="Title",
-                               driver=self.DRIVER,
-                               driver_core_backing_store=0)
+                                driver=self.DRIVER,
+                                driver_core_backing_store=0)
 
         self.h5file.create_array(self.h5file.root, 'array', [1, 2],
-                                title="Array")
+                                 title="Array")
         self.h5file.create_table(self.h5file.root, 'table', {'var1': IntCol()},
-                                "Table")
+                                 "Table")
         self.h5file.root._v_attrs.testattr = 41
 
         # ensure that the __str__ method works even if there is no physical
@@ -2284,6 +2421,123 @@ class InMemoryCoreDriverTestCase(common.PyTablesTestCase):
         self.assertFalse(os.path.exists(self.h5fname))
 
 
+class QuantizeTestCase(unittest.TestCase):
+    mode = "w"
+    title = "This is the table title"
+    expectedrows = 10
+    appendrows = 5
+
+    def setUp(self):
+        self.data = numpy.linspace(-5., 5., 41)
+        self.randomdata = numpy.random.random_sample(1000000)
+        self.randomints = numpy.random.random_integers(
+            -1000000, 1000000, 1000000).astype('int64')
+        # Create a temporary file
+        self.file = tempfile.mktemp(".h5")
+        # Create an instance of HDF5 Table
+        self.h5file = open_file(self.file, self.mode, self.title)
+        self.populateFile()
+        self.h5file.close()
+        self.quantizeddata_0 = numpy.asarray(
+            [-5.] * 2 + [-4.] * 5 + [-3.] * 3 + [-2.] * 5 + [-1.] * 3 +
+            [0.] * 5 + [1.] * 3 + [2.] * 5 + [3.] * 3 + [4.] * 5 + [5.] * 2)
+        self.quantizeddata_m1 = numpy.asarray(
+            [-8.] * 4 + [0.] * 33 + [8.] * 4)
+
+    def populateFile(self):
+        root = self.h5file.root
+        filters = Filters(complevel=1, complib="blosc",
+                          least_significant_digit=1)
+        ints = self.h5file.create_carray(root, "integers", Int64Atom(),
+                                         (1000000,), filters=filters)
+        ints[:] = self.randomints
+        floats = self.h5file.create_carray(root, "floats", Float32Atom(),
+                                           (1000000,), filters=filters)
+        floats[:] = self.randomdata
+        data1 = self.h5file.create_carray(root, "data1", Float64Atom(),
+                                          (41,), filters=filters)
+        data1[:] = self.data
+        filters = Filters(complevel=1, complib="blosc",
+                          least_significant_digit=0)
+        data0 = self.h5file.create_carray(root, "data0", Float64Atom(),
+                                          (41,), filters=filters)
+        data0[:] = self.data
+        filters = Filters(complevel=1, complib="blosc",
+                          least_significant_digit=2)
+        data2 = self.h5file.create_carray(root, "data2", Float64Atom(),
+                                          (41,), filters=filters)
+        data2[:] = self.data
+        filters = Filters(complevel=1, complib="blosc",
+                          least_significant_digit=-1)
+        datam1 = self.h5file.create_carray(root, "datam1", Float64Atom(),
+                                           (41,), filters=filters)
+        datam1[:] = self.data
+
+    def tearDown(self):
+        # Close the file
+        if self.h5file.isopen:
+            self.h5file.close()
+
+        os.remove(self.file)
+        common.cleanup(self)
+
+    #----------------------------------------
+
+    def test00_quantizeData(self):
+        """Checking the quantize() function."""
+
+        quantized_0 = quantize(self.data, 0)
+        quantized_1 = quantize(self.data, 1)
+        quantized_2 = quantize(self.data, 2)
+        quantized_m1 = quantize(self.data, -1)
+        numpy.testing.assert_array_equal(quantized_0, self.quantizeddata_0)
+        numpy.testing.assert_array_equal(quantized_1, self.data)
+        numpy.testing.assert_array_equal(quantized_2, self.data)
+        numpy.testing.assert_array_equal(quantized_m1, self.quantizeddata_m1)
+
+    def test01_quantizeDataMaxError(self):
+        """Checking the maximum error introduced by the quantize() function."""
+
+        quantized_0 = quantize(self.randomdata, 0)
+        quantized_1 = quantize(self.randomdata, 1)
+        quantized_2 = quantize(self.randomdata, 2)
+        quantized_m1 = quantize(self.randomdata, -1)
+        # assertLess is new in Python 2.7
+        #self.assertLess(numpy.abs(quantized_0 - self.randomdata).max(), 0.5)
+        #self.assertLess(numpy.abs(quantized_1 - self.randomdata).max(), 0.05)
+        #self.assertLess(numpy.abs(quantized_2 - self.randomdata).max(), 0.005)
+        #self.assertLess(numpy.abs(quantized_m1 - self.randomdata).max(), 1.)
+
+        self.assertTrue(numpy.abs(quantized_0 - self.randomdata).max() < 0.5)
+        self.assertTrue(numpy.abs(quantized_1 - self.randomdata).max() < 0.05)
+        self.assertTrue(numpy.abs(quantized_2 - self.randomdata).max() < 0.005)
+        self.assertTrue(numpy.abs(quantized_m1 - self.randomdata).max() < 1.)
+
+    def test02_array(self):
+        """Checking quantized data as written to disk."""
+
+        self.h5file = open_file(self.file, "r")
+        numpy.testing.assert_array_equal(self.h5file.root.data1[:], self.data)
+        numpy.testing.assert_array_equal(self.h5file.root.data2[:], self.data)
+        numpy.testing.assert_array_equal(self.h5file.root.data0[:],
+                                         self.quantizeddata_0)
+        numpy.testing.assert_array_equal(self.h5file.root.datam1[:],
+                                         self.quantizeddata_m1)
+        numpy.testing.assert_array_equal(self.h5file.root.integers[:],
+                                         self.randomints)
+        self.assertEqual(self.h5file.root.integers[:].dtype,
+                         self.randomints.dtype)
+        # assertLess is new in Python 2.7
+        #self.assertLess(
+        #    numpy.abs(self.h5file.root.floats[:] - self.randomdata).max(),
+        #    0.05
+        #)
+        self.assertTrue(
+            numpy.abs(self.h5file.root.floats[:] - self.randomdata).max() <
+            0.05
+        )
+
+
 #----------------------------------------------------------------------
 
 def suite():
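
QuantizeTestCase covers the new lossy least_significant_digit option: values
are truncated so that roughly the requested number of decimal digits
survives, which bounds the absolute error (below 0.05 for one digit, as the
assertions above check) and makes floating-point data far more compressible.
A small sketch using the same quantize() helper imported at the top of the
file:

    import numpy
    import tables
    from tables.utils import quantize

    data = numpy.random.random_sample(1000)
    q = quantize(data, 1)                      # keep ~1 decimal digit
    assert numpy.abs(q - data).max() < 0.05    # the bound tested above

    # The same quantization applied transparently on write:
    filters = tables.Filters(complevel=1, complib="blosc",
                             least_significant_digit=1)
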
@@ -2297,6 +2551,14 @@ def suite():
         theSuite.addTest(unittest.makeSuite(FiltersCase1))
         theSuite.addTest(unittest.makeSuite(FiltersCase2))
         theSuite.addTest(unittest.makeSuite(FiltersCase10))
+        theSuite.addTest(unittest.makeSuite(FiltersCaseBloscBloscLZ))
+        if 'lz4' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(FiltersCaseBloscLZ4))
+            theSuite.addTest(unittest.makeSuite(FiltersCaseBloscLZ4HC))
+        if 'snappy' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(FiltersCaseBloscSnappy))
+        if 'zlib' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(FiltersCaseBloscZlib))
         theSuite.addTest(unittest.makeSuite(CopyGroupCase1))
         theSuite.addTest(unittest.makeSuite(CopyGroupCase2))
         theSuite.addTest(unittest.makeSuite(CopyFileCase1))
@@ -2311,6 +2573,7 @@ def suite():
         theSuite.addTest(unittest.makeSuite(StdioDriverTestCase))
         theSuite.addTest(unittest.makeSuite(CoreDriverTestCase))
         theSuite.addTest(unittest.makeSuite(CoreDriverNoBackingStoreTestCase))
+        theSuite.addTest(unittest.makeSuite(SplitDriverTestCase))
 
         theSuite.addTest(unittest.makeSuite(LogDriverTestCase))
         theSuite.addTest(unittest.makeSuite(DirectDriverTestCase))
@@ -2318,7 +2581,6 @@ def suite():
 
         theSuite.addTest(unittest.makeSuite(FamilyDriverTestCase))
         theSuite.addTest(unittest.makeSuite(MultiDriverTestCase))
-        theSuite.addTest(unittest.makeSuite(SplitDriverTestCase))
         theSuite.addTest(unittest.makeSuite(MpioDriverTestCase))
         theSuite.addTest(unittest.makeSuite(MpiPosixDriverTestCase))
         theSuite.addTest(unittest.makeSuite(StreamDriverTestCase))
@@ -2326,6 +2588,8 @@ def suite():
         if hdf5_version >= "1.8.9":
             theSuite.addTest(unittest.makeSuite(InMemoryCoreDriverTestCase))
 
+        theSuite.addTest(unittest.makeSuite(QuantizeTestCase))
+
     if common.heavy:
         theSuite.addTest(unittest.makeSuite(createTestCase))
         theSuite.addTest(unittest.makeSuite(FiltersCase3))
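
The guarded makeSuite() calls above follow the same pattern user code can use: tables.blosc_compressor_list() reports which sub-compressors the bundled Blosc library was built with, so a missing one can be detected before constructing a Filters instance. A small sketch:

    import tables

    # Prefer LZ4 when the bundled Blosc provides it, else fall back to BloscLZ.
    available = tables.blosc_compressor_list()
    complib = 'blosc:lz4' if 'lz4' in available else 'blosc:blosclz'
    filters = tables.Filters(complevel=5, complib=complib, shuffle=True)
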
diff --git a/tables/tests/test_do_undo.py b/tables/tests/test_do_undo.py
index 7a9cfc6..ba3797a 100644
--- a/tables/tests/test_do_undo.py
+++ b/tables/tests/test_do_undo.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
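
This __future__ import is the mechanical change repeated throughout the hunks below: with it in place, print is a function on Python 2 as well, so a single spelling works on every supported interpreter. For example:

    from __future__ import print_function  # harmless no-op on Python 3

    print('\n', '-=' * 30)   # the old statement form is a SyntaxError on 3.x
    print("Running %s..." % "BasicTestCase")
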
@@ -62,11 +63,11 @@ class BasicTestCase(unittest.TestCase):
         common.cleanup(self)
 
     def test00_simple(self):
-        """Checking simple do/undo"""
+        """Checking simple do/undo."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_simple..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_simple..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -82,7 +83,7 @@ class BasicTestCase(unittest.TestCase):
         self._doReopen()
         self.fileh.redo()
         if common.verbose:
-            print "Object tree after redo:", self.fileh
+            print("Object tree after redo:", self.fileh)
         # Check that otherarray has come back to life in a sane state
         self.assertTrue("/otherarray" in self.fileh)
         self.assertEqual(self.fileh.root.otherarray.read(), [3, 4])
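
For readers unfamiliar with the API under test here, a minimal do/undo round trip looks roughly like this (file name illustrative):

    import tables

    h5 = tables.open_file('undo-demo.h5', mode='w')
    h5.enable_undo()                      # start recording operations
    h5.create_array('/', 'otherarray', [3, 4], "Another array")
    h5.undo()                             # '/otherarray' is removed...
    assert '/otherarray' not in h5
    h5.redo()                             # ...and restored, data intact
    assert h5.root.otherarray.read() == [3, 4]  # lists round-trip ('python' flavor)
    h5.disable_undo()
    h5.close()
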
@@ -94,8 +95,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking do/undo (twice operations intertwined)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_twice..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_twice..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -122,11 +123,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(self.fileh._curmark, 0)
 
     def test02_twice2(self):
-        """Checking twice ops and two marks"""
+        """Checking twice ops and two marks."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_twice2..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_twice2..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -169,11 +170,12 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(self.fileh._curmark, 1)
 
     def test03_6times3marks(self):
-        """Checking with six ops and three marks"""
+        """Checking with six ops and three marks."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_6times3marks..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_6times3marks..." %
+                  self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -252,11 +254,13 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray6.title, "Another array 6")
 
     def test04_6times3marksro(self):
-        """Checking with six operations, three marks and do/undo in random order"""
+        """Checking with six operations, three marks and do/undo in random
+        order."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_6times3marksro..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_6times3marksro..." %
+                  self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -276,7 +280,7 @@ class BasicTestCase(unittest.TestCase):
         self.assertTrue("/otherarray4" not in self.fileh)
         # Put a mark in the middle of stack
         if common.verbose:
-            print "All nodes:", self.fileh.walk_nodes()
+            print("All nodes:", self.fileh.walk_nodes())
         self.fileh.mark()
         self._doReopen()
         self.fileh.create_array('/', 'otherarray5', [7, 8], "Another array 5")
@@ -322,11 +326,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray6.title, "Another array 6")
 
     def test05_destructive(self):
-        """Checking with a destructive action during undo"""
+        """Checking with a destructive action during undo."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_destructive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_destructive..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -354,8 +358,9 @@ class BasicTestCase(unittest.TestCase):
         """Checking with a destructive action during undo (II)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_destructive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_destructive..." %
+                  self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -390,8 +395,9 @@ class BasicTestCase(unittest.TestCase):
         """Checking with a destructive action during undo (III)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05c_destructive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05c_destructive..." %
+                  self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -424,8 +430,9 @@ class BasicTestCase(unittest.TestCase):
         """Checking with a destructive action during undo (IV)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05d_destructive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05d_destructive..." %
+                  self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -456,8 +463,9 @@ class BasicTestCase(unittest.TestCase):
         """Checking with a destructive action during undo (V)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05e_destructive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05e_destructive..." %
+                  self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -483,8 +491,9 @@ class BasicTestCase(unittest.TestCase):
         "Checking with a destructive creation of existing node during undo"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05f_destructive..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05f_destructive..." %
+                  self.__class__.__name__)
 
         self.fileh.enable_undo()
         self.fileh.create_array('/', 'newarray', [1])
@@ -504,8 +513,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking do/undo (total unwind)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06_totalunwind..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06_totalunwind..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -523,8 +532,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking do/undo (total rewind)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_totalunwind..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_totalunwind..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -546,11 +555,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray2.title, "Another array 2")
 
     def test08_marknames(self):
-        """Checking mark names"""
+        """Checking mark names."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_marknames..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_marknames..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -595,11 +604,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray4.read(), [6, 7])
 
     def test08_initialmark(self):
-        """Checking initial mark"""
+        """Checking initial mark."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_initialmark..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_initialmark..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -628,8 +637,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking mark names (wrong direction)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09_marknames..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09_marknames..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -650,8 +659,8 @@ class BasicTestCase(unittest.TestCase):
         except UndoRedoError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next UndoRedoError was catched!"
-                print value
+                print("\nGreat! The next UndoRedoError was caught!")
+                print(value)
         else:
             self.fail("expected an UndoRedoError")
         # Now go to mark "third"
@@ -663,8 +672,8 @@ class BasicTestCase(unittest.TestCase):
         except UndoRedoError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next UndoRedoError was catched!"
-                print value
+                print("\nGreat! The next UndoRedoError was caught!")
+                print(value)
         else:
             self.fail("expected an UndoRedoError")
         # Final checks
@@ -677,8 +686,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking mark names (goto)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10_goto..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10_goto..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
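
The mark-related tests below drive File.mark() and File.goto(); a compact sketch of that navigation, assuming redo() replays forward to the end of the log when no later mark exists (names are illustrative):

    import tables

    h5 = tables.open_file('marks-demo.h5', mode='w')
    h5.enable_undo()
    h5.create_array('/', 'a1', [1])
    h5.mark('middle')                 # a named mark in the action log
    h5.create_array('/', 'a2', [2])
    h5.goto('middle')                 # rewind to the mark: '/a2' vanishes
    assert '/a1' in h5 and '/a2' not in h5
    h5.redo()                         # replay forward past the mark again
    assert '/a2' in h5
    h5.close()
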
@@ -728,8 +737,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking mark sequential ids (goto)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10_gotoint..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10_gotoint..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -786,8 +795,8 @@ class BasicTestCase(unittest.TestCase):
         "Creating contiguous marks"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11_contiguous..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11_contiguous..." % self.__class__.__name__)
 
         self.fileh.enable_undo()
         m1 = self.fileh.mark()
@@ -812,8 +821,8 @@ class BasicTestCase(unittest.TestCase):
         "Ensuring the mark is kept after an UNDO operation"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test12_keepMark..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test12_keepMark..." % self.__class__.__name__)
 
         self.fileh.enable_undo()
         self.fileh.create_array('/', 'newarray1', [1])
@@ -831,8 +840,9 @@ class BasicTestCase(unittest.TestCase):
         "Checking that successive enable/disable Undo works"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13_severalEnableDisable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13_severalEnableDisable..." %
+                  self.__class__.__name__)
 
         self.fileh.enable_undo()
         self.fileh.create_array('/', 'newarray1', [1])
@@ -930,11 +940,11 @@ class createArrayTestCase(unittest.TestCase):
         common.cleanup(self)
 
     def test00(self):
-        """Checking one action"""
+        """Checking one action."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -952,11 +962,11 @@ class createArrayTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray1.read(), [1, 2])
 
     def test01(self):
-        """Checking two actions"""
+        """Checking two actions."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -979,11 +989,11 @@ class createArrayTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray2.read(), [2, 3])
 
     def test02(self):
-        """Checking three actions"""
+        """Checking three actions."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1011,11 +1021,11 @@ class createArrayTestCase(unittest.TestCase):
         self.assertEqual(self.fileh.root.otherarray3.read(), [3, 4])
 
     def test03(self):
-        """Checking three actions in different depth levels"""
+        """Checking three actions in different depth levels."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1084,11 +1094,11 @@ class createGroupTestCase(unittest.TestCase):
         common.cleanup(self)
 
     def test00(self):
-        """Checking one action"""
+        """Checking one action."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1106,11 +1116,11 @@ class createGroupTestCase(unittest.TestCase):
                          "Another group 1")
 
     def test01(self):
-        """Checking two actions"""
+        """Checking two actions."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1133,11 +1143,11 @@ class createGroupTestCase(unittest.TestCase):
                          "Another group 2")
 
     def test02(self):
-        """Checking three actions"""
+        """Checking three actions."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1165,11 +1175,11 @@ class createGroupTestCase(unittest.TestCase):
                          "Another group 3")
 
     def test03(self):
-        """Checking three actions in different depth levels"""
+        """Checking three actions in different depth levels."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1232,9 +1242,9 @@ def populateTable(where, name):
     # Do not index the var4 column
     # indexrows = table.cols.var4.create_index()
     if common.verbose:
-        print "Number of written rows:", nrows
-        print "Number of indexed rows:", table.cols.var1.index.nelements
-        print "Number of indexed rows(2):", indexrows
+        print("Number of written rows:", nrows)
+        print("Number of indexed rows:", table.cols.var1.index.nelements)
+        print("Number of indexed rows(2):", indexrows)
 
 
 class renameNodeTestCase(unittest.TestCase):
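
populateTable() above pairs a filled table with a column index via Column.create_index(); the relevant calls, sketched on a hypothetical two-column description:

    import tables

    h5 = tables.open_file('index-demo.h5', mode='w')

    class Record(tables.IsDescription):
        var1 = tables.StringCol(itemsize=4)   # will be indexed
        var4 = tables.Float64Col()            # deliberately left unindexed

    table = h5.create_table('/', 'demo', Record, "indexed table")
    row = table.row
    for i in range(100):
        row['var1'] = str(i % 10)
        row['var4'] = float(i)
        row.append()
    table.flush()
    indexrows = table.cols.var1.create_index()   # number of rows just indexed
    h5.close()
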
@@ -1279,8 +1289,8 @@ class renameNodeTestCase(unittest.TestCase):
         """Checking rename_node (over Groups without children)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1303,8 +1313,8 @@ class renameNodeTestCase(unittest.TestCase):
         """Checking rename_node (over Groups with children)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1335,8 +1345,8 @@ class renameNodeTestCase(unittest.TestCase):
         """Checking rename_node (over Groups with children 2)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1368,8 +1378,8 @@ class renameNodeTestCase(unittest.TestCase):
         """Checking rename_node (over Leaves)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1392,8 +1402,8 @@ class renameNodeTestCase(unittest.TestCase):
         """Checking rename_node (over Tables)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1471,8 +1481,8 @@ class moveNodeTestCase(unittest.TestCase):
         """Checking move_node (over Leaf)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1496,8 +1506,8 @@ class moveNodeTestCase(unittest.TestCase):
         """Checking move_node (over Groups with children)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1529,8 +1539,8 @@ class moveNodeTestCase(unittest.TestCase):
         """Checking move_node (over Groups with children 2)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1563,8 +1573,8 @@ class moveNodeTestCase(unittest.TestCase):
         """Checking move_node (over Leaves)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1587,8 +1597,8 @@ class moveNodeTestCase(unittest.TestCase):
         """Checking move_node (over Tables)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1667,8 +1677,8 @@ class removeNodeTestCase(unittest.TestCase):
         """Checking remove_node (over Leaf)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1688,8 +1698,8 @@ class removeNodeTestCase(unittest.TestCase):
         """Checking remove_node (over several Leaves)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00b..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1714,8 +1724,8 @@ class removeNodeTestCase(unittest.TestCase):
         """Checking remove_node (over Tables)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00c..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1743,8 +1753,8 @@ class removeNodeTestCase(unittest.TestCase):
         """Checking remove_node (over Groups with children)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1770,8 +1780,8 @@ class removeNodeTestCase(unittest.TestCase):
         """Checking remove_node (over Groups with children 2)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1841,8 +1851,8 @@ class copyNodeTestCase(unittest.TestCase):
         """Checking copy_node (over Leaves)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_copyLeaf..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_copyLeaf..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -1864,8 +1874,8 @@ class copyNodeTestCase(unittest.TestCase):
         """Checking copy_node (over Tables)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00b_copyTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00b_copyTable..." % self.__class__.__name__)
 
         # open the do/undo
         self.fileh.enable_undo()
@@ -1916,8 +1926,8 @@ class copyNodeTestCase(unittest.TestCase):
         "Copying a group (recursively)."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_copyGroup..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_copyGroup..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -1946,8 +1956,9 @@ class copyNodeTestCase(unittest.TestCase):
         "Copying a leaf, overwriting destination."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_copyLeafOverwrite..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_copyLeafOverwrite..." %
+                  self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -1972,8 +1983,9 @@ class copyNodeTestCase(unittest.TestCase):
         "Copying the children of a group."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_copyChildren..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_copyChildren..." %
+                  self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -2031,12 +2043,12 @@ class ComplexTestCase(unittest.TestCase):
         common.cleanup(self)
 
     def test00(self):
-        """Mix of create_array, create_group, renameNone, move_node, remove_node,
-           copy_node and copy_children."""
+        """Mix of create_array, create_group, rename_node, move_node,
+        remove_node, copy_node and copy_children."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -2078,8 +2090,8 @@ class ComplexTestCase(unittest.TestCase):
         "Test with multiple generations (Leaf case)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -2112,8 +2124,8 @@ class ComplexTestCase(unittest.TestCase):
         "Test with multiple generations (Group case)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -2147,8 +2159,8 @@ class ComplexTestCase(unittest.TestCase):
         "Test with multiple generations (Group case, recursive remove)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -2188,8 +2200,8 @@ class ComplexTestCase(unittest.TestCase):
         "Test with multiple generations (Group case, recursive remove, case 2)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b..." % self.__class__.__name__)
 
         # Enable undo/redo.
         self.fileh.enable_undo()
@@ -2240,8 +2252,8 @@ class AttributesTestCase(unittest.TestCase):
         "Setting a nonexistent attribute."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_setAttr..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_setAttr..." % self.__class__.__name__)
 
         array = self.fileh.root.array
         attrs = array.attrs
@@ -2260,8 +2272,9 @@ class AttributesTestCase(unittest.TestCase):
         "Setting an existing attribute."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_setAttrExisting..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_setAttrExisting..." %
+                  self.__class__.__name__)
 
         array = self.fileh.root.array
         attrs = array.attrs
@@ -2281,8 +2294,8 @@ class AttributesTestCase(unittest.TestCase):
         "Removing an attribute."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_delAttr..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_delAttr..." % self.__class__.__name__)
 
         array = self.fileh.root.array
         attrs = array.attrs
@@ -2300,8 +2313,9 @@ class AttributesTestCase(unittest.TestCase):
         "Copying an attribute set."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_copyNodeAttrs..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_copyNodeAttrs..." %
+                  self.__class__.__name__)
 
         rattrs = self.fileh.root._v_attrs
         rattrs.attr_0 = 0
@@ -2331,8 +2345,8 @@ class AttributesTestCase(unittest.TestCase):
         "Replacing a node with a rewritten attribute."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_replaceNode..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_replaceNode..." % self.__class__.__name__)
 
         array = self.fileh.root.array
         attrs = array.attrs
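
The AttributesTestCase hunks above show that AttributeSet operations go through the same undo log as node operations; the core pattern, on an illustrative file:

    import tables

    h5 = tables.open_file('attrs-demo.h5', mode='w')
    array = h5.create_array('/', 'array', [1])
    h5.enable_undo()
    array.attrs.string = "case1"      # set_attr, recorded in the undo log
    h5.undo()
    assert 'string' not in array.attrs._v_attrnames
    h5.redo()
    assert array.attrs.string == "case1"
    h5.disable_undo()
    h5.close()
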
diff --git a/tables/tests/test_earray.py b/tables/tests/test_earray.py
index a01e40b..27323de 100644
--- a/tables/tests/test_earray.py
+++ b/tables/tests/test_earray.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -89,9 +90,9 @@ class BasicTestCase(unittest.TestCase):
 
         if common.verbose:
             if self.flavor == "numpy":
-                print "Object to append -->", object
+                print("Object to append -->", object)
             else:
-                print "Object to append -->", repr(object)
+                print("Object to append -->", repr(object))
         for i in range(self.nappends):
             if self.type == "string":
                 earray.append(object)
@@ -132,11 +133,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(obj.atom.type, self.type)
 
     def test01_iterEArray(self):
-        """Checking enlargeable array iterator"""
+        """Checking enlargeable array iterator."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_iterEArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_iterEArray..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         if self.reopen:
@@ -146,9 +147,9 @@ class BasicTestCase(unittest.TestCase):
         # Choose a small value for buffer size
         earray.nrowsinbuf = 3
         if common.verbose:
-            print "EArray descr:", repr(earray)
-            print "shape of read array ==>", earray.shape
-            print "reopening?:", self.reopen
+            print("EArray descr:", repr(earray))
+            print("shape of read array ==>", earray.shape)
+            print("reopening?:", self.reopen)
 
         # Build the array to do comparisons
         if self.type == "string":
@@ -185,11 +186,11 @@ class BasicTestCase(unittest.TestCase):
             object = object__[chunk]
             # The next adds much more verbosity
             if common.verbose and 0:
-                print "number of row ==>", earray.nrow
+                print("number of row ==>", earray.nrow)
                 if hasattr(object, "shape"):
-                    print "shape should look as:", object.shape
-                print "row in earray ==>", repr(row)
-                print "Should look like ==>", repr(object)
+                    print("shape should look as:", object.shape)
+                print("row in earray ==>", repr(row))
+                print("Should look like ==>", repr(object))
 
             self.assertEqual(initialrows + self.nappends * self.chunksize,
                              earray.nrows)
@@ -202,26 +203,27 @@ class BasicTestCase(unittest.TestCase):
 
             # Check filters:
             if self.compress != earray.filters.complevel and common.verbose:
-                print "Error in compress. Class:", self.__class__.__name__
-                print "self, earray:", self.compress, earray.filters.complevel
+                print("Error in compress. Class:", self.__class__.__name__)
+                print("self, earray:", self.compress, earray.filters.complevel)
             self.assertEqual(earray.filters.complevel, self.compress)
             if self.compress > 0 and which_lib_version(self.complib):
                 self.assertEqual(earray.filters.complib, self.complib)
             if self.shuffle != earray.filters.shuffle and common.verbose:
-                print "Error in shuffle. Class:", self.__class__.__name__
-                print "self, earray:", self.shuffle, earray.filters.shuffle
+                print("Error in shuffle. Class:", self.__class__.__name__)
+                print("self, earray:", self.shuffle, earray.filters.shuffle)
             self.assertEqual(self.shuffle, earray.filters.shuffle)
             if self.fletcher32 != earray.filters.fletcher32 and common.verbose:
-                print "Error in fletcher32. Class:", self.__class__.__name__
-                print "self, earray:", self.fletcher32, earray.filters.fletcher32
+                print("Error in fletcher32. Class:", self.__class__.__name__)
+                print("self, earray:", self.fletcher32,
+                      earray.filters.fletcher32)
             self.assertEqual(self.fletcher32, earray.filters.fletcher32)
 
     def test02_sssEArray(self):
         """Checking enlargeable array iterator with (start, stop, step)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_sssEArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_sssEArray..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         if self.reopen:
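
The four checks above map one-to-one onto the knobs of tables.Filters; a typical construction, with values chosen purely for illustration:

    import tables

    h5 = tables.open_file('filters-demo.h5', mode='w')
    filters = tables.Filters(complevel=1, complib='zlib',
                             shuffle=True, fletcher32=False)
    earray = h5.create_earray('/', 'compressed', atom=tables.Int32Atom(),
                              shape=(0, 3), filters=filters)
    # The settings persist with the node and can be read back, which is
    # what the assertions above verify:
    print(earray.filters.complevel, earray.filters.complib,
          earray.filters.shuffle, earray.filters.fletcher32)
    h5.close()
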
@@ -231,9 +233,9 @@ class BasicTestCase(unittest.TestCase):
         # Choose a small value for buffer size
         earray.nrowsinbuf = 3
         if common.verbose:
-            print "EArray descr:", repr(earray)
-            print "shape of read array ==>", earray.shape
-            print "reopening?:", self.reopen
+            print("EArray descr:", repr(earray))
+            print("shape of read array ==>", earray.shape)
+            print("reopening?:", self.reopen)
 
         # Build the array to do comparisons
         if self.type == "string":
@@ -275,11 +277,11 @@ class BasicTestCase(unittest.TestCase):
 
             # The next adds much more verbosity
             if common.verbose and 0:
-                print "number of row ==>", earray.nrow
+                print("number of row ==>", earray.nrow)
                 if hasattr(object, "shape"):
-                    print "shape should look as:", object.shape
-                print "row in earray ==>", repr(row)
-                print "Should look like ==>", repr(object)
+                    print("shape should look as:", object.shape)
+                print("row in earray ==>", repr(row))
+                print("Should look like ==>", repr(object))
 
             self.assertEqual(initialrows + self.nappends * self.chunksize,
                              earray.nrows)
@@ -291,11 +293,11 @@ class BasicTestCase(unittest.TestCase):
                 self.assertEqual(len(shape), 1)
 
     def test03_readEArray(self):
-        """Checking read() of enlargeable arrays"""
+        """Checking read() of enlargeable arrays."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_readEArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_readEArray..." % self.__class__.__name__)
 
         # This conversion made just in case indices are numpy scalars
         if self.start is not None:
@@ -313,9 +315,9 @@ class BasicTestCase(unittest.TestCase):
         # Choose a small value for buffer size
         earray.nrowsinbuf = 3
         if common.verbose:
-            print "EArray descr:", repr(earray)
-            print "shape of read array ==>", earray.shape
-            print "reopening?:", self.reopen
+            print("EArray descr:", repr(earray))
+            print("shape of read array ==>", earray.shape)
+            print("reopening?:", self.reopen)
 
         # Build the array to do comparisons
         if self.type == "string":
@@ -386,9 +388,9 @@ class BasicTestCase(unittest.TestCase):
 
         if common.verbose:
             if hasattr(object, "shape"):
-                print "shape should look as:", object.shape
-            print "Object read ==>", repr(row)
-            print "Should look like ==>", repr(object)
+                print("shape should look as:", object.shape)
+            print("Object read ==>", repr(row))
+            print("Should look like ==>", repr(object))
 
         self.assertEqual(initialrows + self.nappends * self.chunksize,
                          earray.nrows)
@@ -404,7 +406,7 @@ class BasicTestCase(unittest.TestCase):
             self.assertEqual(len(shape), 1)
 
     def test03_readEArray_out_argument(self):
-        """Checking read() of enlargeable arrays"""
+        """Checking read() of enlargeable arrays with the out argument."""
 
         # This conversion made just in case indices are numpy scalars
         if self.start is not None:
@@ -495,9 +497,9 @@ class BasicTestCase(unittest.TestCase):
 
         if common.verbose:
             if hasattr(object, "shape"):
-                print "shape should look as:", object.shape
-            print "Object read ==>", repr(row)
-            print "Should look like ==>", repr(object)
+                print("shape should look as:", object.shape)
+            print("Object read ==>", repr(row))
+            print("Should look like ==>", repr(object))
 
         self.assertEqual(initialrows + self.nappends * self.chunksize,
                          earray.nrows)
@@ -513,11 +515,12 @@ class BasicTestCase(unittest.TestCase):
             self.assertEqual(len(shape), 1)
 
     def test04_getitemEArray(self):
-        """Checking enlargeable array __getitem__ special method"""
+        """Checking enlargeable array __getitem__ special method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_getitemEArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_getitemEArray..." %
+                  self.__class__.__name__)
 
         if not hasattr(self, "slices"):
             # If there is not a slices attribute, create it
@@ -538,9 +541,9 @@ class BasicTestCase(unittest.TestCase):
         # Choose a small value for buffer size
         # earray.nrowsinbuf = 3   # this does not really changes the chunksize
         if common.verbose:
-            print "EArray descr:", repr(earray)
-            print "shape of read array ==>", earray.shape
-            print "reopening?:", self.reopen
+            print("EArray descr:", repr(earray))
+            print("shape of read array ==>", earray.shape)
+            print("reopening?:", self.reopen)
 
         # Build the array to do comparisons
         if self.type == "string":
@@ -597,12 +600,12 @@ class BasicTestCase(unittest.TestCase):
             row = numpy.empty(shape=self.shape, dtype=self.dtype)
 
         if common.verbose:
-            print "Object read:\n", repr(row)
-            print "Should look like:\n", repr(object)
+            print("Object read:\n", repr(row))
+            print("Should look like:\n", repr(object))
             if hasattr(object, "shape"):
-                print "Original object shape:", self.shape
-                print "Shape read:", row.shape
-                print "shape should look as:", object.shape
+                print("Original object shape:", self.shape)
+                print("Shape read:", row.shape)
+                print("shape should look as:", object.shape)
 
         self.assertEqual(initialrows + self.nappends * self.chunksize,
                          earray.nrows)
@@ -612,7 +615,7 @@ class BasicTestCase(unittest.TestCase):
             self.assertEqual(len(self.shape), 1)
 
     def test05_setitemEArray(self):
-        """Checking enlargeable array __setitem__ special method"""
+        """Checking enlargeable array __setitem__ special method."""
 
         if self.__class__.__name__ == "Ellipsis6EArrayTestCase":
             # We have a problem with test design here, but I think
@@ -621,8 +624,9 @@ class BasicTestCase(unittest.TestCase):
             return
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_setitemEArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_setitemEArray..." %
+                  self.__class__.__name__)
 
         if not hasattr(self, "slices"):
             # If there is not a slices attribute, create it
@@ -643,9 +647,9 @@ class BasicTestCase(unittest.TestCase):
         # Choose a small value for buffer size
         # earray.nrowsinbuf = 3   # this does not really changes the chunksize
         if common.verbose:
-            print "EArray descr:", repr(earray)
-            print "shape of read array ==>", earray.shape
-            print "reopening?:", self.reopen
+            print("EArray descr:", repr(earray))
+            print("shape of read array ==>", earray.shape)
+            print("reopening?:", self.reopen)
 
         # Build the array to do comparisons
         if self.type == "string":
@@ -726,16 +730,16 @@ class BasicTestCase(unittest.TestCase):
         try:
             row = earray.__getitem__(self.slices)
         except IndexError:
-            print "IndexError!"
+            print("IndexError!")
             row = numpy.empty(shape=self.shape, dtype=self.dtype)
 
         if common.verbose:
-            print "Object read:\n", repr(row)
-            print "Should look like:\n", repr(object)
+            print("Object read:\n", repr(row))
+            print("Should look like:\n", repr(object))
             if hasattr(object, "shape"):
-                print "Original object shape:", self.shape
-                print "Shape read:", row.shape
-                print "shape should look as:", object.shape
+                print("Original object shape:", self.shape)
+                print("Shape read:", row.shape)
+                print("shape should look as:", object.shape)
 
         self.assertEqual(initialrows + self.nappends * self.chunksize,
                          earray.nrows)
@@ -901,17 +905,19 @@ class Slices3EArrayTestCase(BasicTestCase):
     shape = (2, 3, 4, 0)
     chunksize = 5
     nappends = 20
-    slices = (slice(1, 2, 1), slice(
-        0, None, None), slice(1, 4, 2))  # Don't work
-    # slices = (slice(None, None, None), slice(0, None, None), slice(1,4,1)) # W
-    # slices = (slice(None, None, None), slice(None, None, None), slice(1,4,2)) # N
+    slices = (slice(1, 2, 1), slice(0, None, None),
+              slice(1, 4, 2))  # Doesn't work
+    # slices = (slice(None, None, None), slice(0, None, None),
+    #           slice(1,4,1)) # W
+    # slices = (slice(None, None, None), slice(None, None, None),
+    #           slice(1,4,2)) # N
     # slices = (slice(1,2,1), slice(None, None, None), slice(1,4,2)) # N
     # Disable the failing test temporarily with a working test case
     slices = (slice(1, 2, 1), slice(1, 4, None), slice(1, 4, 2))  # Y
     # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,1)) # Y
     slices = (slice(1, 2, 1), slice(0, 4, None), slice(1, 4, 2))  # N
-    # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,2), slice(0,100,1))
-    # # N
+    # slices = (slice(1,2,1), slice(0, 4, None), slice(1,4,2),
+    #           slice(0,100,1))  # N
 
 
 class Slices4EArrayTestCase(BasicTestCase):
@@ -1292,12 +1298,12 @@ class OffsetStrideTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test01a_String(self):
-        """Checking earray with offseted numpy strings appends"""
+        """Checking earray with offseted numpy strings appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_StringAtom..." % self.__class__.__name__)
 
         earray = self.fileh.create_earray(root, 'strings',
                                           atom=StringAtom(itemsize=3),
@@ -1313,9 +1319,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         row = earray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", earray._v_pathname, ":", earray.nrows
-            print "Second row in earray ==>", row[1].tolist()
+            print("Object read:", row)
+            print("Nrows in", earray._v_pathname, ":", earray.nrows)
+            print("Second row in earray ==>", row[1].tolist())
 
         self.assertEqual(earray.nrows, 2)
         self.assertEqual(row[0].tolist(), [[b"123", b"45"], [b"45", b"123"]])
@@ -1324,12 +1330,12 @@ class OffsetStrideTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test01b_String(self):
-        """Checking earray with strided numpy strings appends"""
+        """Checking earray with strided numpy strings appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_StringAtom..." % self.__class__.__name__)
 
         earray = self.fileh.create_earray(root, 'strings',
                                           atom=StringAtom(itemsize=3),
@@ -1345,9 +1351,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         row = earray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", earray._v_pathname, ":", earray.nrows
-            print "Second row in earray ==>", row[1].tolist()
+            print("Object read:", row)
+            print("Nrows in", earray._v_pathname, ":", earray.nrows)
+            print("Second row in earray ==>", row[1].tolist())
 
         self.assertEqual(earray.nrows, 2)
         self.assertEqual(row[0].tolist(), [[b"a", b"b"], [b"45", b"123"]])
@@ -1356,12 +1362,12 @@ class OffsetStrideTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test02a_int(self):
-        """Checking earray with offseted NumPy ints appends"""
+        """Checking earray with offseted NumPy ints appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02a_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02a_int..." % self.__class__.__name__)
 
         # Create an string atom
         earray = self.fileh.create_earray(root, 'EAtom',
@@ -1376,9 +1382,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         row = earray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", earray._v_pathname, ":", earray.nrows
-            print "Third row in vlarray ==>", row[2]
+            print("Object read:", row)
+            print("Nrows in", earray._v_pathname, ":", earray.nrows)
+            print("Third row in vlarray ==>", row[2])
 
         self.assertEqual(earray.nrows, 3)
         self.assertTrue(allequal(row[
@@ -1389,12 +1395,12 @@ class OffsetStrideTestCase(unittest.TestCase):
                         2], numpy.array([-1, 0, 0], dtype='int32')))
 
     def test02b_int(self):
-        """Checking earray with strided NumPy ints appends"""
+        """Checking earray with strided NumPy ints appends."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_int..." % self.__class__.__name__)
 
         earray = self.fileh.create_earray(root, 'EAtom',
                                           atom=Int32Atom(), shape=(0, 3),
@@ -1408,9 +1414,9 @@ class OffsetStrideTestCase(unittest.TestCase):
         # Read all the rows:
         row = earray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", earray._v_pathname, ":", earray.nrows
-            print "Third row in vlarray ==>", row[2]
+            print("Object read:", row)
+            print("Nrows in", earray._v_pathname, ":", earray.nrows)
+            print("Third row in vlarray ==>", row[2])
 
         self.assertEqual(earray.nrows, 3)
         self.assertTrue(allequal(row[
@@ -1425,8 +1431,8 @@ class OffsetStrideTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03a_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03a_int..." % self.__class__.__name__)
 
         earray = self.fileh.create_earray(root, 'EAtom',
                                           atom=Int32Atom(), shape=(0, 3),
@@ -1445,10 +1451,10 @@ class OffsetStrideTestCase(unittest.TestCase):
         native = earray[:4, :]
         swapped = earray[4:, :]
         if common.verbose:
-            print "Native rows:", native
-            print "Byteorder native rows:", native.dtype.byteorder
-            print "Swapped rows:", swapped
-            print "Byteorder swapped rows:", swapped.dtype.byteorder
+            print("Native rows:", native)
+            print("Byteorder native rows:", native.dtype.byteorder)
+            print("Swapped rows:", swapped)
+            print("Byteorder swapped rows:", swapped.dtype.byteorder)
 
         self.assertTrue(allequal(native, swapped))
 
@@ -1457,8 +1463,8 @@ class OffsetStrideTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b_float..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b_float..." % self.__class__.__name__)
 
         earray = self.fileh.create_earray(root, 'EAtom',
                                           atom=Float64Atom(), shape=(0, 3),
@@ -1477,10 +1483,10 @@ class OffsetStrideTestCase(unittest.TestCase):
         native = earray[:4, :]
         swapped = earray[4:, :]
         if common.verbose:
-            print "Native rows:", native
-            print "Byteorder native rows:", native.dtype.byteorder
-            print "Swapped rows:", swapped
-            print "Byteorder swapped rows:", swapped.dtype.byteorder
+            print("Native rows:", native)
+            print("Byteorder native rows:", native.dtype.byteorder)
+            print("Swapped rows:", swapped)
+            print("Byteorder swapped rows:", swapped.dtype.byteorder)
 
         self.assertTrue(allequal(native, swapped))
 
@@ -1489,8 +1495,8 @@ class OffsetStrideTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04a_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04a_int..." % self.__class__.__name__)
 
         byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
         earray = self.fileh.create_earray(root, 'EAtom',
@@ -1511,8 +1517,8 @@ class OffsetStrideTestCase(unittest.TestCase):
         native = earray[:4, :]
         swapped = earray[4:, :]
         if common.verbose:
-            print "Byteorder native rows:", byteorders[native.dtype.byteorder]
-            print "Byteorder earray on-disk:", earray.byteorder
+            print("Byteorder native rows:", byteorders[native.dtype.byteorder])
+            print("Byteorder earray on-disk:", earray.byteorder)
 
         self.assertEqual(byteorders[native.dtype.byteorder], sys.byteorder)
         self.assertEqual(earray.byteorder, byteorder)
@@ -1523,8 +1529,8 @@ class OffsetStrideTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04b_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04b_int..." % self.__class__.__name__)
 
         byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
         earray = self.fileh.create_earray(root, 'EAtom',
@@ -1548,8 +1554,8 @@ class OffsetStrideTestCase(unittest.TestCase):
         native = earray[:4, :]
         swapped = earray[4:, :]
         if common.verbose:
-            print "Byteorder native rows:", byteorders[native.dtype.byteorder]
-            print "Byteorder earray on-disk:", earray.byteorder
+            print("Byteorder native rows:", byteorders[native.dtype.byteorder])
+            print("Byteorder earray on-disk:", earray.byteorder)
 
         self.assertEqual(byteorders[native.dtype.byteorder], sys.byteorder)
         self.assertEqual(earray.byteorder, byteorder)
@@ -1560,8 +1566,8 @@ class OffsetStrideTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04c_float..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04c_float..." % self.__class__.__name__)
 
         byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
         earray = self.fileh.create_earray(root, 'EAtom',
@@ -1582,8 +1588,8 @@ class OffsetStrideTestCase(unittest.TestCase):
         native = earray[:4, :]
         swapped = earray[4:, :]
         if common.verbose:
-            print "Byteorder native rows:", byteorders[native.dtype.byteorder]
-            print "Byteorder earray on-disk:", earray.byteorder
+            print("Byteorder native rows:", byteorders[native.dtype.byteorder])
+            print("Byteorder earray on-disk:", earray.byteorder)
 
         self.assertEqual(byteorders[native.dtype.byteorder], sys.byteorder)
         self.assertEqual(earray.byteorder, byteorder)
@@ -1594,8 +1600,8 @@ class OffsetStrideTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04d_float..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04d_float..." % self.__class__.__name__)
 
         byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
         earray = self.fileh.create_earray(root, 'EAtom',
@@ -1619,8 +1625,8 @@ class OffsetStrideTestCase(unittest.TestCase):
         native = earray[:4, :]
         swapped = earray[4:, :]
         if common.verbose:
-            print "Byteorder native rows:", byteorders[native.dtype.byteorder]
-            print "Byteorder earray on-disk:", earray.byteorder
+            print("Byteorder native rows:", byteorders[native.dtype.byteorder])
+            print("Byteorder earray on-disk:", earray.byteorder)
 
         self.assertEqual(byteorders[native.dtype.byteorder], sys.byteorder)
         self.assertEqual(earray.byteorder, byteorder)
@@ -1630,11 +1636,11 @@ class OffsetStrideTestCase(unittest.TestCase):
 class CopyTestCase(unittest.TestCase):
 
     def test01_copy(self):
-        """Checking EArray.copy() method """
+        """Checking EArray.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1649,7 +1655,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1659,18 +1665,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1694,8 +1700,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (where specified)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1710,7 +1716,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1721,18 +1727,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.group1.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            # print "dirs-->", dir(array1), dir(array2)
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            # print("dirs-->", dir(array1), dir(array2))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1756,8 +1762,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (python flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1772,7 +1778,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1782,15 +1788,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -1813,8 +1819,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (python string flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03d_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03d_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1829,7 +1835,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1839,15 +1845,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -1871,8 +1877,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (String flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03e_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03e_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1887,7 +1893,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1897,15 +1903,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all elements are equal
         self.assertTrue(allequal(array1.read(), array2.read()))
@@ -1928,8 +1934,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (checking title copying)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1947,7 +1953,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -1957,7 +1963,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
@@ -1965,7 +1971,7 @@ class CopyTestCase(unittest.TestCase):
 
         # Assert user attributes
         if common.verbose:
-            print "title of destination array-->", array2.title
+            print("title of destination array-->", array2.title)
         self.assertEqual(array2.title, "title array2")
 
         # Close the file
@@ -1976,8 +1982,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (user attributes copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -1995,7 +2001,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -2005,15 +2011,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(array2.attrs.attr1, "attr1")
@@ -2027,8 +2033,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking EArray.copy() method (user attributes not copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -2046,7 +2052,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -2056,15 +2062,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(hasattr(array2.attrs, "attr1"), 0)
@@ -2087,11 +2093,11 @@ class CopyIndexTestCase(unittest.TestCase):
     nrowsinbuf = 2
 
     def test01_index(self):
-        """Checking EArray.copy() method with indexes"""
+        """Checking EArray.copy() method with indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_index..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_index..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -2115,10 +2121,10 @@ class CopyIndexTestCase(unittest.TestCase):
                              stop=self.stop,
                              step=self.step)
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -2126,8 +2132,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in array
         if common.verbose:
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], array2.nrows)
 
         # Close the file
@@ -2138,8 +2144,8 @@ class CopyIndexTestCase(unittest.TestCase):
         """Checking EArray.copy() method with indexes (close file version)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_indexclosef..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_indexclosef..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -2169,10 +2175,10 @@ class CopyIndexTestCase(unittest.TestCase):
         array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -2180,8 +2186,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in array
         if common.verbose:
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], array2.nrows)
 
         # Close the file
@@ -2298,13 +2304,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertTrue(allequal(
             array1[:], numpy.array([], dtype='Int16').reshape(0, 2)))
@@ -2318,13 +2324,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertTrue(allequal(
             array1.read(), numpy.array([[456, 2]], dtype='Int16')))
@@ -2338,17 +2344,17 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
-        self.assertTrue(allequal(array1.read(),
-                                 numpy.array([[456, 2], [3, 457]],
-                                 dtype='Int16')))
+        self.assertTrue(
+            allequal(array1.read(),
+                     numpy.array([[456, 2], [3, 457]], dtype='Int16')))
 
     def test03_truncate(self):
         """Checking EArray.truncate() method (truncating to > self.nrows)"""
@@ -2359,13 +2365,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertEqual(array1.nrows, 4)
         # Check the original values
@@ -2425,30 +2431,30 @@ class Rows64bitsTestCase(unittest.TestCase):
         if self.close:
             if common.verbose:
                 # Check how many entries there are in the array
-                print "Before closing"
-                print "Entries:", array.nrows, type(array.nrows)
-                print "Entries:", array.nrows / (1000 * 1000), "Millions"
-                print "Shape:", array.shape
+                print("Before closing")
+                print("Entries:", array.nrows, type(array.nrows))
+                print("Entries:", array.nrows / (1000 * 1000), "Millions")
+                print("Shape:", array.shape)
             # Close the file
             fileh.close()
             # Re-open the file
             fileh = self.fileh = open_file(self.file)
             array = fileh.root.array
             if common.verbose:
-                print "After re-open"
+                print("After re-open")
 
         # Check how many entries there are in the array
         if common.verbose:
-            print "Entries:", array.nrows, type(array.nrows)
-            print "Entries:", array.nrows / (1000 * 1000), "Millions"
-            print "Shape:", array.shape
-            print "Last 10 elements-->", array[-10:]
+            print("Entries:", array.nrows, type(array.nrows))
+            print("Entries:", array.nrows / (1000 * 1000), "Millions")
+            print("Shape:", array.shape)
+            print("Last 10 elements-->", array[-10:])
             stop = self.narows % 256
             if stop > 127:
                 stop -= 256
             start = stop - 10
-            print "Should look like-->", numpy.arange(start, stop,
-                                                      dtype='Int8')
+            print("Should look like-->", numpy.arange(start, stop,
+                                                      dtype='Int8'))
 
         nrows = self.narows * self.nanumber
         # check nrows
@@ -2524,7 +2530,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ea.append([[[1, 3], [4, 5]]])
         self.assertEqual(ea.nrows, 1)
         if common.verbose:
-            print "First row-->", ea[0]
+            print("First row-->", ea[0])
         self.assertTrue(allequal(ea[0], numpy.array([[1, 3], [4, 5]], 'i4')))
 
     def test01b_append(self):
@@ -2539,7 +2545,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ea.append([[[1]], [[2]], [[3]]])   # Simple broadcast
         self.assertEqual(ea.nrows, 3)
         if common.verbose:
-            print "Third row-->", ea[2]
+            print("Third row-->", ea[2])
         self.assertTrue(allequal(ea[2], numpy.array([[3, 3], [3, 3]], 'i4')))
 
     def test02a_append(self):
@@ -2554,7 +2560,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ea.append([[[1, 3], [4, 5], [7, 9]]])
         self.assertEqual(ea.nrows, 1)
         if common.verbose:
-            print "First row-->", ea[0]
+            print("First row-->", ea[0])
         self.assertTrue(allequal(ea[0], numpy.array(
             [[1, 3], [4, 5], [7, 9]], 'i4')))
 
@@ -2572,7 +2578,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
                    [[-2, 3], [-5, 5], [7, -9]]])
         self.assertEqual(ea.nrows, 3)
         if common.verbose:
-            print "Third row-->", ea[2]
+            print("Third row-->", ea[2])
         self.assertTrue(allequal(
             ea[2], numpy.array([[-2, 3], [-5, 5], [7, -9]], 'i4')))
 
@@ -2590,7 +2596,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ea.append([a * 1, a*2, a*3])
         self.assertEqual(ea.nrows, 3)
         if common.verbose:
-            print "Third row-->", ea[2]
+            print("Third row-->", ea[2])
         self.assertTrue(allequal(ea[2], a * 3))
 
     def test03b_MDMDMD(self):
@@ -2609,7 +2615,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ea.append(a * 3)
         self.assertEqual(ea.nrows, 3)
         if common.verbose:
-            print "Third row-->", ea[:, 2, ...]
+            print("Third row-->", ea[:, 2, ...])
         self.assertTrue(allequal(ea[:, 2, ...], a.reshape((2, 3, 2, 4))*3))
 
     def test03c_MDMDMD(self):
@@ -2628,7 +2634,7 @@ class MDAtomTestCase(common.TempFileMixin, common.PyTablesTestCase):
         ea.append(a * 3)
         self.assertEqual(ea.nrows, 3)
         if common.verbose:
-            print "Third row-->", ea[:, :, 2, ...]
+            print("Third row-->", ea[:, :, 2, ...])
         self.assertTrue(allequal(ea[:, :, 2, ...], a.reshape((2, 3, 2, 4))*3))
 
 
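Nearly every hunk in the test_earray.py diff above is the same mechanical change: Python 2 print statements rewritten as print() function calls, which keeps the verbose test output identical while making the module importable on Python 3. A minimal sketch of the pattern, with a stand-in verbose flag (the names below are illustrative, not taken from the commit):

    from __future__ import print_function  # on Python 2, makes print() a function

    import sys

    verbose = True  # stands in for tables.tests.common.verbose

    if verbose:
        # With the __future__ import these calls behave the same on Python 2
        # and Python 3, mirroring the converted lines in the hunks above.
        print('\n', '-=' * 30)
        print("Running %s.test_example..." % "ExampleCase")
        print("Byteorder native rows:", sys.byteorder)

The corresponding __future__ import is added at the top of each converted module; it is visible further below in the test_expression.py hunk.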
diff --git a/tables/tests/test_enum.py b/tables/tests/test_enum.py
index dbb00e3..c1616a3 100644
--- a/tables/tests/test_enum.py
+++ b/tables/tests/test_enum.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""Test module for enumerated types under PyTables"""
+"""Test module for enumerated types under PyTables."""
 
 import unittest
 import operator
@@ -21,7 +21,6 @@ from tables.tests import common
 
 
 class CreateColTestCase(common.PyTablesTestCase):
-
     """Test creating enumerated column descriptions."""
 
     def _createCol(self, enum, dflt, base='uint32', shape=()):
@@ -82,9 +81,14 @@ class CreateColTestCase(common.PyTablesTestCase):
         colors = tables.Enum(['red', 'green', 'blue'])
         enumcol = tables.EnumCol(colors, 'red', base='uint32', shape=())
         # needed due to "Hash randomization" (default on python 3.3)
-        template = """EnumCol(enum=Enum({%s}), dflt='red', base=UInt32Atom(shape=(), dflt=0), shape=(), pos=None)"""
-        permitations = [template % ', '.join(items) for items in itertools.permutations(
-            ("'blue': 2", "'green': 1", "'red': 0"))]
+        template = (
+            "EnumCol(enum=Enum({%s}), dflt='red', base=UInt32Atom(shape=(), "
+            "dflt=0), shape=(), pos=None)"
+        )
+        permutations = [
+            template % ', '.join(items) for items in itertools.permutations(
+                ("'blue': 2", "'green': 1", "'red': 0"))
+        ]
+        self.assertTrue(repr(enumcol) in permutations)
 
     def test99a_nonIntEnum(self):
@@ -94,7 +98,11 @@ class CreateColTestCase(common.PyTablesTestCase):
                           base=tables.FloatAtom())
 
     def test99b_nonIntDtype(self):
-        """Describing an enumerated column encoded as floats (not implemented)."""
+        """Describing an enumerated column encoded as floats.
+
+        (not implemented).
+
+        """
         colors = ['red', 'green', 'blue']
         self.assertRaises(
             NotImplementedError, self._createCol, colors, 'red', 'float64')
@@ -107,7 +115,6 @@ class CreateColTestCase(common.PyTablesTestCase):
 
 
 class CreateAtomTestCase(common.PyTablesTestCase):
-
     """Test creating enumerated atoms."""
 
     def _createAtom(self, enum, dflt, base='uint32', shape=()):
@@ -162,7 +169,11 @@ class CreateAtomTestCase(common.PyTablesTestCase):
                           base=tables.FloatAtom())
 
     def test99b_nonIntDtype(self):
-        """Describing an enumerated atom encoded as a float (not implemented)."""
+        """Describing an enumerated atom encoded as a float.
+
+        (not implemented).
+
+        """
         colors = ['red', 'green', 'blue']
         self.assertRaises(
             NotImplementedError, self._createAtom, colors, 'red', 'float64')
@@ -175,7 +186,6 @@ class CreateAtomTestCase(common.PyTablesTestCase):
 
 
 class EnumTableTestCase(common.TempFileMixin, common.PyTablesTestCase):
-
     """Test tables with enumerated columns."""
 
     enum = tables.Enum({'red': 4, 'green': 2, 'blue': 1, 'black': 0})
@@ -347,7 +357,6 @@ class EnumTableTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
 
 class EnumEArrayTestCase(common.TempFileMixin, common.PyTablesTestCase):
-
     """Test extendable arrays of enumerated values."""
 
     enum = tables.Enum({'red': 4, 'green': 2, 'blue': 1, 'black': 0})
@@ -390,6 +399,78 @@ class EnumEArrayTestCase(common.TempFileMixin, common.PyTablesTestCase):
             self.h5file.root.test.get_enum(), self.enum,
             "Enumerated type was not restored correctly from disk.")
 
+    def test_enum_default_persistence_red(self):
+        dflt = 'red'
+        atom = tables.EnumAtom(
+            self.enum, dflt, base=self.enumType, shape=())
+
+        self.h5file.create_earray('/', 'test', atom, shape=(0,),
+                                  title=self._getMethodName())
+        self._reopen()
+
+        self.assertEqual(
+            self.h5file.root.test.get_enum(), self.enum,
+            "Enumerated type was not restored correctly from disk.")
+
+        self.assertEqual(
+            self.h5file.root.test.atom.dflt, self.enum[dflt],
+            "The default value of enumerated type was not restored correctly "
+            "from disk.")
+
+    def test_enum_default_persistence_green(self):
+        dflt = 'green'
+        atom = tables.EnumAtom(
+            self.enum, dflt, base=self.enumType, shape=())
+
+        self.h5file.create_earray('/', 'test', atom, shape=(0,),
+                                  title=self._getMethodName())
+        self._reopen()
+
+        self.assertEqual(
+            self.h5file.root.test.get_enum(), self.enum,
+            "Enumerated type was not restored correctly from disk.")
+
+        self.assertEqual(
+            self.h5file.root.test.atom.dflt, self.enum[dflt],
+            "The default value of enumerated type was not restored correctly "
+            "from disk.")
+
+    def test_enum_default_persistence_blue(self):
+        dflt = 'blue'
+        atom = tables.EnumAtom(
+            self.enum, dflt, base=self.enumType, shape=())
+
+        self.h5file.create_earray('/', 'test', atom, shape=(0,),
+                                  title=self._getMethodName())
+        self._reopen()
+
+        self.assertEqual(
+            self.h5file.root.test.get_enum(), self.enum,
+            "Enumerated type was not restored correctly from disk.")
+
+        self.assertEqual(
+            self.h5file.root.test.atom.dflt, self.enum[dflt],
+            "The default value of enumerated type was not restored correctly "
+            "from disk.")
+
+    def test_enum_default_persistence_black(self):
+        dflt = 'black'
+        atom = tables.EnumAtom(
+            self.enum, dflt, base=self.enumType, shape=())
+
+        self.h5file.create_earray('/', 'test', atom, shape=(0,),
+                                  title=self._getMethodName())
+        self._reopen()
+
+        self.assertEqual(
+            self.h5file.root.test.get_enum(), self.enum,
+            "Enumerated type was not restored correctly from disk.")
+
+        self.assertEqual(
+            self.h5file.root.test.atom.dflt, self.enum[dflt],
+            "The default value of enumerated type was not restored correctly "
+            "from disk.")
+
     def test01_append(self):
         """Appending scalar elements of enumerated values."""
 
@@ -440,7 +521,6 @@ class EnumEArrayTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
 
 class EnumVLArrayTestCase(common.TempFileMixin, common.PyTablesTestCase):
-
     """Test variable-length arrays of enumerated values."""
 
     enum = tables.Enum({'red': 4, 'green': 2, 'blue': 1, 'black': 0})
@@ -453,7 +533,8 @@ class EnumVLArrayTestCase(common.TempFileMixin, common.PyTablesTestCase):
             self.enum, 'red', base=self.enumType, shape=shape)
 
     def test00a_reopen(self):
-        """Reopening a file with variable-length arrays using enumerated data."""
+        """Reopening a file with variable-length arrays using
+        enumerated data."""
 
         self.h5file.create_vlarray(
             '/', 'test', self._atom(),
@@ -574,7 +655,6 @@ if __name__ == '__main__':
     unittest.main(defaultTest='suite')
 
 
-
 ## Local Variables:
 ## mode: python
 ## py-indent-offset: 4
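The four test_enum_default_persistence_* methods added in the test_enum.py hunk above each perform the same check for a different default value: after writing an empty EArray of enumerated values and reopening the file, both the Enum mapping and the atom's default must come back from disk. A condensed sketch of that round trip, using only API shown in the hunk (the temporary file name and the 'uint8' base type are assumptions):

    import tempfile

    import tables

    enum = tables.Enum({'red': 4, 'green': 2, 'blue': 1, 'black': 0})
    atom = tables.EnumAtom(enum, 'green', base='uint8', shape=())

    path = tempfile.mktemp('.h5')  # mktemp mirrors the style used in these tests

    h5file = tables.open_file(path, mode='w')
    h5file.create_earray('/', 'test', atom, shape=(0,))
    h5file.close()

    h5file = tables.open_file(path, mode='r')
    node = h5file.root.test
    # The enumerated type itself is restored from disk...
    assert node.get_enum() == enum
    # ...and so is the default, stored as its concrete (integer) value.
    assert node.atom.dflt == enum['green']
    h5file.close()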
diff --git a/tables/tests/test_expression.py b/tables/tests/test_expression.py
index 2a92ca4..375d70b 100644
--- a/tables/tests/test_expression.py
+++ b/tables/tests/test_expression.py
@@ -10,8 +10,9 @@
 #
 ########################################################################
 
-"""Test module for evaluating expressions under PyTables"""
+"""Test module for evaluating expressions under PyTables."""
 
+from __future__ import print_function
 import unittest
 
 import numpy as np
@@ -113,14 +114,14 @@ class ExprTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.vars = {"a": self.a, "b": self.b, "c": self.c, }
 
     def test00_simple(self):
-        """Checking that expression is correctly evaluated"""
+        """Checking that expression is correctly evaluated."""
 
         expr = tb.Expr(self.expr, self.vars)
         r1 = expr.eval()
         r2 = eval(self.expr, self.npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1)
-            print "Should look like:", repr(r2)
+            print("Computed expression:", repr(r1))
+            print("Should look like:", repr(r2))
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -134,8 +135,8 @@ class ExprTestCase(common.TempFileMixin, common.PyTablesTestCase):
             r1 = r1[:]
         r2 = eval(self.expr, self.npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1)
-            print "Should look like:", repr(r2)
+            print("Computed expression:", repr(r1))
+            print("Should look like:", repr(r2))
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -221,28 +222,28 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
                      "e": self.e, "f": self.f, "g": self.g, }
 
     def test00a_simple(self):
-        """Checking expressions with mixed objects"""
+        """Checking expressions with mixed objects."""
 
         expr = tb.Expr(self.expr, self.vars)
         r1 = expr.eval()
         r2 = eval(self.expr, self.npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
 
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
     def test00b_simple_scalars(self):
-        """Checking that scalars in expression evaluate correctly"""
+        """Checking that scalars in expression evaluate correctly."""
 
         expr_str = "2 * f + g"
         expr = tb.Expr(expr_str, self.vars)
         r1 = expr.eval()
         r2 = eval(expr_str, self.npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(
             r1.shape == r2.shape and r1.dtype == r2.dtype and r1 == r2,
             "Evaluate is returning a wrong value.")
@@ -253,15 +254,15 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         expr = tb.Expr(self.expr, self.vars)
         for r1 in self.rnda, self.rarr, self.rcarr, self.rearr, self.rcol:
             if common.verbose:
-                print "Checking output container:", type(r1)
+                print("Checking output container:", type(r1))
             expr.set_output(r1)
             r1 = expr.eval()
             if not isinstance(r1, type(self.rnda)):
                 r1 = r1[:]
             r2 = eval(self.expr, self.npvars)
             if common.verbose:
-                print "Computed expression:", repr(r1), r1.dtype
-                print "Should look like:", repr(r2), r2.dtype
+                print("Computed expression:", repr(r1), r1.dtype)
+                print("Should look like:", repr(r2), r2.dtype)
             self.assertTrue(common.areArraysEqual(r1, r2),
                             "Evaluate is returning a wrong value.")
 
@@ -275,14 +276,14 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         expr = tb.Expr(expr_str, self.vars)
         for r1 in self.rnda, self.rarr, self.rcarr, self.rearr, self.rcol:
             if common.verbose:
-                print "Checking output container:", type(r1)
+                print("Checking output container:", type(r1))
             expr.set_output(r1)
             r1 = expr.eval()
             r1 = r1[()]  # convert a 0-dim array into a scalar
             r2 = eval(expr_str, self.npvars)
             if common.verbose:
-                print "Computed expression:", repr(r1), r1.dtype
-                print "Should look like:", repr(r2), r2.dtype
+                print("Computed expression:", repr(r1), r1.dtype)
+                print("Should look like:", repr(r2), r2.dtype)
             self.assertTrue(common.areArraysEqual(r1, r2),
                             "Evaluate is returning a wrong value.")
 
@@ -296,8 +297,8 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         npvars = get_sliced_vars(self.npvars, start, stop, step)
         r2 = eval(self.expr, npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -311,8 +312,8 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         npvars = get_sliced_vars(self.npvars, start, stop, step)
         r2 = eval(self.expr, npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -326,13 +327,13 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         npvars = get_sliced_vars(self.npvars, start, stop, step)
         r2 = eval(self.expr, npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
     def test03_sss(self):
-        """Checking start, stop, step as numpy.int64"""
+        """Checking start, stop, step as numpy.int64."""
 
         start, stop, step = [np.int64(i) for i in
                                      (self.start, self.stop, self.step)]
@@ -342,8 +343,8 @@ class MixedContainersTestCase(common.TempFileMixin, common.PyTablesTestCase):
         npvars = get_sliced_vars(self.npvars, start, stop, step)
         r2 = eval(self.expr, npvars)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -372,7 +373,7 @@ class MixedContainers3(MixedContainersTestCase):
 class UnalignedObject(common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking expressions with unaligned objects"""
+        """Checking expressions with unaligned objects."""
 
         # Build unaligned arrays
         a0 = np.empty(10, dtype="int8")
@@ -391,8 +392,8 @@ class UnalignedObject(common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval(sexpr)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -416,8 +417,8 @@ class UnalignedObject(common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval(sexpr)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -440,8 +441,8 @@ class NonContiguousObject(common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval(sexpr)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -460,8 +461,8 @@ class NonContiguousObject(common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval(sexpr)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -480,8 +481,8 @@ class NonContiguousObject(common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval(sexpr)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -502,7 +503,7 @@ class ExprError(common.TempFileMixin, common.PyTablesTestCase):
         self.r1 = np.empty(N, dtype='int64').reshape(self.shape)
 
     def _test00_shape(self):
-        """Checking that inconsistent shapes are detected"""
+        """Checking that inconsistent shapes are detected."""
 
         self.b = self.b.reshape(self.shape+(1,))
         expr = "a * b + c"
@@ -511,7 +512,7 @@ class ExprError(common.TempFileMixin, common.PyTablesTestCase):
         self.assertRaises(ValueError, expr.eval)
 
     def test02_uint64(self):
-        """Checking that uint64 arrays in expression are detected"""
+        """Checking that uint64 arrays in expression are detected."""
 
         self.b = self.b.view('uint64')
         expr = "a * b + c"
@@ -519,7 +520,7 @@ class ExprError(common.TempFileMixin, common.PyTablesTestCase):
         self.assertRaises(NotImplementedError, tb.Expr, expr, vars_)
 
     def test03_table(self):
-        """Checking that tables in expression are detected"""
+        """Checking that tables in expression are detected."""
 
         class Rec(tb.IsDescription):
             col1 = tb.Int32Col()
@@ -531,7 +532,7 @@ class ExprError(common.TempFileMixin, common.PyTablesTestCase):
         self.assertRaises(TypeError, tb.Expr, expr, vars_)
 
     def test04_nestedcols(self):
-        """Checking that nested cols in expression are detected"""
+        """Checking that nested cols in expression are detected."""
 
         class Nested(tb.IsDescription):
             col1 = tb.Int32Col()
@@ -553,7 +554,7 @@ class ExprError(common.TempFileMixin, common.PyTablesTestCase):
         self.assertRaises(TypeError, tb.Expr, expr, vars_)
 
     def test05_vlarray(self):
-        """Checking that VLArrays in expression are detected"""
+        """Checking that VLArrays in expression are detected."""
 
         vla = self.h5file.create_vlarray("/", "a", tb.Int32Col())
         expr = "a * b + c"
@@ -565,7 +566,7 @@ class ExprError(common.TempFileMixin, common.PyTablesTestCase):
 class BroadcastTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking broadcast in expression"""
+        """Checking broadcast in expression."""
 
         shapes = (self.shape1, self.shape2, self.shape3)
         # Build arrays with different shapes as inputs
@@ -588,9 +589,9 @@ class BroadcastTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shapes:", self.shape1, self.shape2, self.shape3
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shapes:", self.shape1, self.shape2, self.shape3)
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -635,7 +636,7 @@ class Broadcast5(BroadcastTestCase):
 class DiffLengthTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking different length inputs in expression"""
+        """Checking different length inputs in expression."""
 
         shapes = (list(self.shape1), list(self.shape2), list(self.shape3))
         # Build arrays with different shapes as inputs
@@ -661,9 +662,9 @@ class DiffLengthTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue(c is not None)
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shapes:", self.shape1, self.shape2, self.shape3
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shapes:", self.shape1, self.shape2, self.shape3)
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -702,7 +703,7 @@ class DiffLength4(DiffLengthTestCase):
 class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_bool(self):
-        """Checking booleans in expression"""
+        """Checking booleans in expression."""
 
         # Build arrays with different shapes as inputs
         a = np.array([True, False, True])
@@ -716,17 +717,17 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval("a | b")
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
     def test01_shortint(self):
-        """Checking int8,uint8,int16,uint16 and int32 in expression"""
+        """Checking int8,uint8,int16,uint16 and int32 in expression."""
 
         for dtype in 'int8', 'uint8', 'int16', 'uint16', 'int32':
             if common.verbose:
-                print "Checking type:", dtype
+                print("Checking type:", dtype)
             # Build arrays with different shapes as inputs
             a = np.array([1, 2, 3], dtype)
             b = np.array([3, 4, 5], dtype)
@@ -740,8 +741,8 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b = np.array([3, 4, 5], 'int32')
             r2 = eval("two * a-b")
             if common.verbose:
-                print "Computed expression:", repr(r1), r1.dtype
-                print "Should look like:", repr(r2), r2.dtype
+                print("Computed expression:", repr(r1), r1.dtype)
+                print("Should look like:", repr(r2), r2.dtype)
             self.assertEqual(r1.dtype, r2.dtype)
             self.assertTrue(common.areArraysEqual(r1, r2),
                             "Evaluate is returning a wrong value.")
@@ -750,11 +751,11 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b1.remove()
 
     def test02_longint(self):
-        """Checking uint32 and int64 in expression"""
+        """Checking uint32 and int64 in expression."""
 
         for dtype in 'uint32', 'int64':
             if common.verbose:
-                print "Checking type:", dtype
+                print("Checking type:", dtype)
             # Build arrays with different shapes as inputs
             a = np.array([1, 2, 3], dtype)
             b = np.array([3, 4, 5], dtype)
@@ -767,8 +768,8 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b = np.array([3, 4, 5], 'int64')
             r2 = eval("2 * a-b")
             if common.verbose:
-                print "Computed expression:", repr(r1), r1.dtype
-                print "Should look like:", repr(r2), r2.dtype
+                print("Computed expression:", repr(r1), r1.dtype)
+                print("Should look like:", repr(r2), r2.dtype)
             self.assertEqual(r1.dtype, r2.dtype)
             self.assertTrue(common.areArraysEqual(r1, r2),
                             "Evaluate is returning a wrong value.")
@@ -777,11 +778,11 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b1.remove()
 
     def test03_float(self):
-        """Checking float32 and float64 in expression"""
+        """Checking float32 and float64 in expression."""
 
         for dtype in 'float32', 'float64':
             if common.verbose:
-                print "Checking type:", dtype
+                print("Checking type:", dtype)
             # Build arrays with different shapes as inputs
             a = np.array([1, 2, 3], dtype)
             b = np.array([3, 4, 5], dtype)
@@ -794,8 +795,8 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b = np.array([3, 4, 5], dtype)
             r2 = eval("2 * a-b")
             if common.verbose:
-                print "Computed expression:", repr(r1), r1.dtype
-                print "Should look like:", repr(r2), r2.dtype
+                print("Computed expression:", repr(r1), r1.dtype)
+                print("Should look like:", repr(r2), r2.dtype)
             self.assertEqual(r1.dtype, r2.dtype)
             self.assertTrue(common.areArraysEqual(r1, r2),
                             "Evaluate is returning a wrong value.")
@@ -804,11 +805,11 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b1.remove()
 
     def test04_complex(self):
-        """Checking complex64 and complex128 in expression"""
+        """Checking complex64 and complex128 in expression."""
 
         for dtype in 'complex64', 'complex128':
             if common.verbose:
-                print "Checking type:", dtype
+                print("Checking type:", dtype)
             # Build arrays with different shapes as inputs
             a = np.array([1, 2j, 3 + 2j], dtype)
             b = np.array([3, 4j, 5 + 1j], dtype)
@@ -821,8 +822,8 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b = np.array([3, 4j, 5 + 1j], 'complex128')
             r2 = eval("2 * a-b")
             if common.verbose:
-                print "Computed expression:", repr(r1), r1.dtype
-                print "Should look like:", repr(r2), r2.dtype
+                print("Computed expression:", repr(r1), r1.dtype)
+                print("Should look like:", repr(r2), r2.dtype)
             self.assertEqual(r1.dtype, r2.dtype)
             self.assertTrue(common.areArraysEqual(r1, r2),
                             "Evaluate is returning a wrong value.")
@@ -831,7 +832,7 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
             b1.remove()
 
     def test05_string(self):
-        """Checking strings in expression"""
+        """Checking strings in expression."""
 
         # Build arrays with different shapes as inputs
         a = np.array(['a', 'bd', 'cd'], 'S')
@@ -845,8 +846,8 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval("(a > b'a') | ( b > b'b')")
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -855,7 +856,7 @@ class TypesTestCase(common.TempFileMixin, common.PyTablesTestCase):
 class FunctionsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking some math functions in expression"""
+        """Checking some math functions in expression."""
 
         # Build arrays with different shapes as inputs
         a = np.array([.1, .2, .3])
@@ -870,8 +871,8 @@ class FunctionsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = np.sin(a) * np.sqrt(b)
         if common.verbose:
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -880,7 +881,7 @@ class FunctionsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 class MaindimTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking other dimensions than 0 as main dimension"""
+        """Checking other dimensions than 0 as main dimension."""
 
         shape = list(self.shape)
         # Build input arrays
@@ -903,9 +904,9 @@ class MaindimTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -937,14 +938,14 @@ class MaindimTestCase(common.TempFileMixin, common.PyTablesTestCase):
         expr.eval()
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
     def test02_diff_in_maindims(self):
-        """Checking different main dimensions in inputs"""
+        """Checking different main dimensions in inputs."""
 
         shape = list(self.shape)
         # Build input arrays
@@ -974,14 +975,14 @@ class MaindimTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = expr.eval()
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
     def test03_diff_in_out_maindims(self):
-        """Checking different maindims in inputs and output"""
+        """Checking different maindims in inputs and output."""
 
         shape = list(self.shape)
         # Build input arrays
@@ -1012,14 +1013,14 @@ class MaindimTestCase(common.TempFileMixin, common.PyTablesTestCase):
         expr.eval()
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
     def test04_diff_in_out_maindims_lengths(self):
-        """Checking different maindims and lengths in inputs and output"""
+        """Checking different maindims and lengths in inputs and output."""
 
         shape = list(self.shape)
         # Build input arrays
@@ -1106,9 +1107,9 @@ class AppendModeTestCase(common.TempFileMixin, common.PyTablesTestCase):
         expr.eval()
         r2 = eval("2 * a + b-c")
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
@@ -1148,15 +1149,15 @@ class iterTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.sexpr = "2 * a + b-c"
 
     def test00_iter(self):
-        """Checking the __iter__ iterator"""
+        """Checking the __iter__ iterator."""
 
         expr = tb.Expr(self.sexpr, self.vars)
         r1 = np.array([row for row in expr])
         r2 = eval(self.sexpr, self.npvars)
         if common.verbose:
-            print "Tested shape, maindim:", self.shape, self.maindim
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape, maindim:", self.shape, self.maindim)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
@@ -1171,9 +1172,9 @@ class iterTestCase(common.TempFileMixin, common.PyTablesTestCase):
             self.npvars, start, stop, step, self.shape, self.maindim)
         r2 = eval(self.sexpr, npvars)
         if common.verbose:
-            print "Tested shape, maindim:", self.shape, self.maindim
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape, maindim:", self.shape, self.maindim)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
@@ -1188,9 +1189,9 @@ class iterTestCase(common.TempFileMixin, common.PyTablesTestCase):
             self.npvars, start, stop, step, self.shape, self.maindim)
         r2 = eval(self.sexpr, npvars)
         if common.verbose:
-            print "Tested shape, maindim:", self.shape, self.maindim
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape, maindim:", self.shape, self.maindim)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
@@ -1205,9 +1206,9 @@ class iterTestCase(common.TempFileMixin, common.PyTablesTestCase):
             self.npvars, start, stop, step, self.shape, self.maindim)
         r2 = eval(self.sexpr, npvars)
         if common.verbose:
-            print "Tested shape, maindim:", self.shape, self.maindim
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape, maindim:", self.shape, self.maindim)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r2),
                         "Evaluate is returning a wrong value.")
 
@@ -1252,7 +1253,7 @@ class iter5(iterTestCase):
 class setOutputRangeTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking the range selection for output"""
+        """Checking the range selection for output."""
 
         shape = list(self.shape)
         start, stop, step = self.range_
@@ -1274,9 +1275,9 @@ class setOutputRangeTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r2 = eval("a-b-1")
         r[start:stop:step] = r2[:len(xrange(start, stop, step))]
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r), r.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r), r.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r),
                         "Evaluate is returning a wrong value.")
 
@@ -1312,9 +1313,9 @@ class setOutputRangeTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r.__setitem__(lsl + (slice(start, stop, step),),
                       r2.__getitem__(lsl + (slice(0, l),)))
         if common.verbose:
-            print "Tested shape:", shape
-            print "Computed expression:", repr(r1[:]), r1.dtype
-            print "Should look like:", repr(r), r.dtype
+            print("Tested shape:", shape)
+            print("Computed expression:", repr(r1[:]), r1.dtype)
+            print("Should look like:", repr(r), r.dtype)
         self.assertTrue(common.areArraysEqual(r1[:], r),
                         "Evaluate is returning a wrong value.")
 
@@ -1383,7 +1384,7 @@ class setOutputRange9(setOutputRangeTestCase):
 class VeryLargeInputsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00_simple(self):
-        """Checking very large inputs"""
+        """Checking very large inputs."""
 
         shape = self.shape
         # Use filters so as not to use too much space
@@ -1410,9 +1411,9 @@ class VeryLargeInputsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         r1 = r1[-10:]  # Get the last ten rows
         r2 = np.zeros(10, dtype='float64')
         if common.verbose:
-            print "Tested shape:", shape
-            print "Ten last rows:", repr(r1), r1.dtype
-            print "Should look like:", repr(r2), r2.dtype
+            print("Tested shape:", shape)
+            print("Ten last rows:", repr(r1), r1.dtype)
+            print("Should look like:", repr(r2), r2.dtype)
         self.assertTrue(common.areArraysEqual(r1, r2),
                         "Evaluate is returning a wrong value.")
 
@@ -1424,7 +1425,7 @@ class VeryLargeInputsTestCase(common.TempFileMixin, common.PyTablesTestCase):
             # The iterator is much slower, so don't run it for
             # extremely large arrays.
             if common.verbose:
-                print "Skipping this *very* long test"
+                print("Skipping this *very* long test")
             return
         # Use filters so as not to use too much space
         if tb.which_lib_version("lzo") is not None:
@@ -1445,9 +1446,9 @@ class VeryLargeInputsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         expr = tb.Expr("a-b + 1")
         r1 = sum(expr)     # Should give 0
         if common.verbose:
-            print "Tested shape:", shape
-            print "Cummulated sum:", r1
-            print "Should look like:", 0
+            print("Tested shape:", shape)
+            print("Cummulated sum:", r1)
+            print("Should look like:", 0)
         self.assertEqual(r1, 0, "Evaluate is returning a wrong value.")
 
 # The next can go on regular tests, as it should be light enough
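
The hunks above are dominated by the mechanical print-statement -> print()
conversion (made portable by ``from __future__ import print_function``)
around the ``tables.Expr`` out-of-core evaluator.  A minimal standalone
sketch of the pattern being tested, not part of the patch (array names are
arbitrary):

    from __future__ import print_function
    import numpy as np
    import tables as tb

    # Plain NumPy inputs; tb.Expr looks the names up in the local scope.
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])
    c = np.array([7.0, 8.0, 9.0])

    expr = tb.Expr("2 * a + b - c")  # compiled and evaluated via numexpr
    r1 = expr.eval()                 # potentially out-of-core evaluation
    r2 = 2 * a + b - c               # in-memory reference result
    print("Computed expression:", repr(r1), r1.dtype)
    print("Should look like:", repr(r2), r2.dtype)
    assert np.allclose(r1, r2)
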
diff --git a/tables/tests/test_garbage.py b/tables/tests/test_garbage.py
index cdb5285..72497c0 100644
--- a/tables/tests/test_garbage.py
+++ b/tables/tests/test_garbage.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""Test module for detecting uncollectable garbage in PyTables
+"""Test module for detecting uncollectable garbage in PyTables.
 
 This test module *must* be loaded last.  It just checks for
 the existence of uncollectable garbage in ``gc.garbage`` after running
@@ -18,6 +18,7 @@ all the tests.
 
 """
 
+from __future__ import print_function
 import unittest
 import gc
 
@@ -46,7 +47,7 @@ class GarbageTestCase(common.PyTablesTestCase):
                     classCount[objClass] = 1
             incidence = ['``%s``: %d' % (cls, cnt)
                          for (cls, cnt) in classCount.iteritems()]
-            print "Class incidence:", ', '.join(incidence)
+            print("Class incidence:", ', '.join(incidence))
         self.fail("Possible leak: %d uncollected objects." % garbageLen)
 
 
diff --git a/tables/tests/test_hdf5compat.py b/tables/tests/test_hdf5compat.py
index de27a40..1a066cc 100644
--- a/tables/tests/test_hdf5compat.py
+++ b/tables/tests/test_hdf5compat.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""Test module for compatibility with plain HDF files"""
+"""Test module for compatibility with plain HDF files."""
 
 import unittest
 import tempfile
@@ -26,12 +26,12 @@ from tables.tests.common import allequal
 
 class HDF5CompatibilityTestCase(common.PyTablesTestCase):
 
-    """
-    Base class for HDF5 compatibility tests.
+    """Base class for HDF5 compatibility tests.
 
     Test cases deriving from this class must define an ``h5fname``
     attribute with the name of the file to be opened, and a ``_test()``
     method with checks on the opened file.
+
     """
 
     def setUp(self):
@@ -50,10 +50,10 @@ class HDF5CompatibilityTestCase(common.PyTablesTestCase):
 
 class EnumTestCase(HDF5CompatibilityTestCase):
 
-    """
-    Test for enumerated datatype.
+    """Test for enumerated datatype.
 
     See ftp://ftp.hdfgroup.org/HDF5/current/src/unpacked/test/enum.c.
+
     """
 
     h5fname = 'smpl_enum.h5'
@@ -78,11 +78,12 @@ class EnumTestCase(HDF5CompatibilityTestCase):
 
 class NumericTestCase(HDF5CompatibilityTestCase):
 
-    """
-    Test for several numeric datatypes.
+    """Test for several numeric datatypes.
 
-    See ftp://ftp.ncsa.uiuc.edu/HDF/files/hdf5/samples/[fiu]l?{8,16,32,64}{be,le}.c
+    See
+    ftp://ftp.ncsa.uiuc.edu/HDF/files/hdf5/samples/[fiu]l?{8,16,32,64}{be,le}.c
     (they seem to be no longer available).
+
     """
 
     def _test(self):
@@ -144,11 +145,11 @@ class I32LETestCase(NumericTestCase):
 
 class ChunkedCompoundTestCase(HDF5CompatibilityTestCase):
 
-    """
-    Test for a more complex and chunked compound structure.
+    """Test for a more complex and chunked compound structure.
 
     This is generated by a chunked version of the example in
     ftp://ftp.ncsa.uiuc.edu/HDF/files/hdf5/samples/compound2.c.
+
     """
 
     h5fname = 'smpl_compound_chunked.h5'
@@ -200,10 +201,10 @@ class ChunkedCompoundTestCase(HDF5CompatibilityTestCase):
 
 class ContiguousCompoundTestCase(HDF5CompatibilityTestCase):
 
-    """
-    Test for support of native contiguous compound datasets.
+    """Test for support of native contiguous compound datasets.
 
     This example has been provided by Dav Clark.
+
     """
 
     h5fname = 'non-chunked-table.h5'
@@ -242,9 +243,7 @@ class ContiguousCompoundTestCase(HDF5CompatibilityTestCase):
 
 class ContiguousCompoundAppendTestCase(HDF5CompatibilityTestCase):
 
-    """
-    Test for appending data to native contiguous compound datasets.
-    """
+    """Test for appending data to native contiguous compound datasets."""
 
     h5fname = 'non-chunked-table.h5'
 
@@ -273,10 +272,10 @@ class ContiguousCompoundAppendTestCase(HDF5CompatibilityTestCase):
 
 class ExtendibleTestCase(HDF5CompatibilityTestCase):
 
-    """
-    Test for extendible datasets.
+    """Test for extendible datasets.
 
     See the example programs in the Introduction to HDF5.
+
     """
 
     h5fname = 'smpl_SDSextendible.h5'
@@ -310,9 +309,7 @@ class ExtendibleTestCase(HDF5CompatibilityTestCase):
 
 
 class SzipTestCase(HDF5CompatibilityTestCase):
-    """
-    Test for native HDF5 files with datasets compressed with szip.
-    """
+    """Test for native HDF5 files with datasets compressed with szip."""
 
     h5fname = 'test_szip.h5'
 
@@ -320,7 +317,8 @@ class SzipTestCase(HDF5CompatibilityTestCase):
         self.assertTrue('/dset_szip' in self.h5file)
 
         arr = self.h5file.get_node('/dset_szip')
-        filters = "Filters(complib='szip', shuffle=False, fletcher32=False)"
+        filters = ("Filters(complib='szip', shuffle=False, fletcher32=False, "
+                   "least_significant_digit=None)")
         self.assertEqual(repr(arr.filters), filters)
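
The widened expected string reflects that, starting with PyTables 3.1,
``repr(Filters)`` also reports the ``least_significant_digit`` argument
(``None`` unless truncation is requested).  A quick sketch to see the new
field, not part of the patch:

    import tables as tb

    # least_significant_digit now appears in the repr, None by default.
    filters = tb.Filters(complevel=1, complib='zlib',
                         shuffle=False, fletcher32=False)
    print(repr(filters))
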
 
 
diff --git a/tables/tests/test_index_backcompat.py b/tables/tests/test_index_backcompat.py
index d4dedb7..7d46f62 100644
--- a/tables/tests/test_index_backcompat.py
+++ b/tables/tests/test_index_backcompat.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import unittest
 
 from tables import *
@@ -32,11 +33,11 @@ class IndexesTestCase(common.PyTablesTestCase):
             self.assertEqual(t1var1.index._v_version, "2.1")
 
     def test01_string(self):
-        """Checking string indexes"""
+        """Checking string indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_string..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_string..." % self.__class__.__name__)
 
         table1 = self.table1
         table2 = self.table2
@@ -57,18 +58,18 @@ class IndexesTestCase(common.PyTablesTestCase):
         if verbose:
 #             print "Superior & inferior limits:", il, sl
 #             print "Selection results (index):", results1
-            print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Should look like:", results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test02_bool(self):
-        """Checking bool indexes"""
+        """Checking bool indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_bool..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_bool..." % self.__class__.__name__)
 
         table1 = self.table1
         table2 = self.table2
@@ -77,22 +78,21 @@ class IndexesTestCase(common.PyTablesTestCase):
         t1var2 = table1.cols.var2
         self.assertTrue(t1var2 is not None)
         results1 = [p["var2"] for p in table1.where('t1var2 == True')]
-        results2 = [p["var2"] for p in table2
-                    if p["var2"] == True]
+        results2 = [p["var2"] for p in table2 if p["var2"] is True]
         if verbose:
-            print "Selection results (index):", results1
-            print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Selection results (index):", results1)
+            print("Should look like:", results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test03_int(self):
-        """Checking int indexes"""
+        """Checking int indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_int..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_int..." % self.__class__.__name__)
 
         table1 = self.table1
         table2 = self.table2
@@ -116,17 +116,17 @@ class IndexesTestCase(common.PyTablesTestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test04_float(self):
-        """Checking float indexes"""
+        """Checking float indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_float..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_float..." % self.__class__.__name__)
 
         table1 = self.table1
         table2 = self.table2
@@ -150,8 +150,8 @@ class IndexesTestCase(common.PyTablesTestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(sorted(results1), sorted(results2))
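
A note on the boolean comparisons rewritten in these test modules: row
fields read from a PyTables table come back as NumPy scalars, and a NumPy
boolean is never identical to Python's ``True``/``False`` singletons, so an
identity test (``is True``) would silently match nothing.  Plain
truth-testing, as used in the hunks here, is the safe idiom.  A sketch of
the pitfall:

    import numpy as np

    v = np.bool_(True)
    print(v == True)   # True  -- value comparison works
    print(v is True)   # False -- np.bool_ is not the Python True object
    print(bool(v))     # True  -- truth-testing is the safe idiom
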
 
diff --git a/tables/tests/test_indexes.py b/tables/tests/test_indexes.py
index 5dd1af8..9d318ef 100644
--- a/tables/tests/test_indexes.py
+++ b/tables/tests/test_indexes.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import unittest
 import os
 import tempfile
@@ -69,8 +70,8 @@ class BasicTestCase(PyTablesTestCase):
         for col in table.colinstances.itervalues():
             indexrows = col.create_index(_blocksizes=small_blocksizes)
         if verbose:
-            print "Number of written rows:", self.nrows
-            print "Number of indexed rows:", indexrows
+            print("Number of written rows:", self.nrows)
+            print("Number of indexed rows:", indexrows)
 
         return
 
@@ -86,8 +87,9 @@ class BasicTestCase(PyTablesTestCase):
         """Checking flushing an Index incrementing only the last row."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_flushLastRow..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_flushLastRow..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in append mode
         self.fileh = open_file(self.file, mode="a")
@@ -99,10 +101,10 @@ class BasicTestCase(PyTablesTestCase):
         table.flush()  # redo the indexes
         idxcol = table.cols.var1.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
-            print "Elements in last row:", idxcol.indicesLR[-1]
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
+            print("Elements in last row:", idxcol.indicesLR[-1])
 
         # Do a selection
         results = [p["var1"] for p in table.where('var1 == b"1"')]
@@ -113,8 +115,8 @@ class BasicTestCase(PyTablesTestCase):
         """Checking automatic re-indexing after an update operation."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_update..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_update..." % self.__class__.__name__)
 
         # Open the HDF5 file in append mode
         self.fileh = open_file(self.file, mode="a")
@@ -128,8 +130,8 @@ class BasicTestCase(PyTablesTestCase):
         idxcol1 = table.cols.var1.index
         idxcol3 = table.cols.var3.index
         if verbose:
-            print "Dirtyness of var1 col:", idxcol1.dirty
-            print "Dirtyness of var3 col:", idxcol3.dirty
+            print("Dirtyness of var1 col:", idxcol1.dirty)
+            print("Dirtyness of var3 col:", idxcol3.dirty)
         self.assertEqual(idxcol1.dirty, False)
         self.assertEqual(idxcol3.dirty, False)
 
@@ -145,17 +147,17 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index (string flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_readIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_readIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-only mode
         self.fileh = open_file(self.file, mode="r")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         results = [p["var1"] for p in table.where('var1 == b"1"')]
@@ -166,23 +168,23 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index (bool flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_readIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_readIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-only mode
         self.fileh = open_file(self.file, mode="r")
         table = self.fileh.root.table
         idxcol = table.cols.var2.index
         if verbose:
-            print "Rows in table:", table.nrows
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Rows in table:", table.nrows)
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         results = [p["var2"] for p in table.where('var2 == True')]
         if verbose:
-            print "Selected values:", results
+            print("Selected values:", results)
         self.assertEqual(len(results), self.nrows // 2)
         self.assertEqual(results, [True]*(self.nrows // 2))
 
@@ -190,22 +192,22 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_readIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_readIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-only mode
         self.fileh = open_file(self.file, mode="r")
         table = self.fileh.root.table
         idxcol = table.cols.var3.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         results = [p["var3"] for p in table.where('(1<var3)&(var3<10)')]
         if verbose:
-            print "Selected values:", results
+            print("Selected values:", results)
         self.assertEqual(len(results), min(10, table.nrows) - 2)
         self.assertEqual(results, range(2, min(10, table.nrows)))
 
@@ -213,24 +215,24 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index (float flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_readIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_readIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-only mode
         self.fileh = open_file(self.file, mode="r")
         table = self.fileh.root.table
         idxcol = table.cols.var4.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of rows in table:", table.nrows
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of rows in table:", table.nrows)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         results = [p["var4"] for p in table.where('var4 < 10')]
         # results = [p["var4"] for p in table.where('(1<var4)&(var4<10)')]
         if verbose:
-            print "Selected values:", results
+            print("Selected values:", results)
         self.assertEqual(len(results), min(10, table.nrows))
         self.assertEqual(results, [float(i) for i in
                                    reversed(range(min(10, table.nrows)))])
@@ -239,25 +241,26 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index with get_where_list (string flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_getWhereList..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_getWhereList..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var4.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         table.flavor = "python"
         rowList1 = table.get_where_list('var1 < b"10"')
         rowList2 = [p.nrow for p in table if p['var1'] < b"10"]
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
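
The get_where_list hunks all follow one pattern: an indexed in-kernel query
cross-checked against a brute-force Python scan of the same condition.  A
condensed sketch (hypothetical file and column names, mirroring the tests):

    import tables as tb

    h5file = tb.open_file('indexed.h5', mode='r')  # hypothetical file
    table = h5file.root.table
    # Indexed, in-kernel selection of the matching row numbers...
    rowList1 = table.get_where_list('var1 < b"10"', sort=True)
    # ...validated against a full scan applying the same condition.
    rowList2 = [p.nrow for p in table if p['var1'] < b"10"]
    assert len(rowList1) == len(rowList2)
    h5file.close()
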
 
@@ -265,28 +268,29 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index with get_where_list (bool flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06_getWhereList..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06_getWhereList..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var2.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Rows in tables:", table.nrows
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Rows in tables:", table.nrows)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         table.flavor = "numpy"
         rowList1 = table.get_where_list('var2 == False', sort=True)
-        rowList2 = [p.nrow for p in table if p['var2'] == False]
+        rowList2 = [p.nrow for p in table if not p['var2']]
         # Convert to a NumPy object
         rowList2 = numpy.array(rowList2, numpy.int64)
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertTrue(allequal(rowList1, rowList2))
 
@@ -294,25 +298,26 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index with get_where_list (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_getWhereList..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_getWhereList..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var4.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         table.flavor = "python"
         rowList1 = table.get_where_list('var3 < 15', sort=True)
         rowList2 = [p.nrow for p in table if p["var3"] < 15]
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
 
@@ -320,50 +325,52 @@ class BasicTestCase(PyTablesTestCase):
         """Checking reading an Index with get_where_list (float flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_getWhereList..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_getWhereList..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var4.index
         if verbose:
-            print "Max rows in buf:", table.nrowsinbuf
-            print "Number of elements per slice:", idxcol.slicesize
-            print "Chunk size:", idxcol.sorted.chunksize
+            print("Max rows in buf:", table.nrowsinbuf)
+            print("Number of elements per slice:", idxcol.slicesize)
+            print("Chunk size:", idxcol.sorted.chunksize)
 
         # Do a selection
         table.flavor = "python"
         rowList1 = table.get_where_list('var4 < 10', sort=True)
         rowList2 = [p.nrow for p in table if p['var4'] < 10]
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
 
     def test09a_removeIndex(self):
-        """Checking removing an index"""
+        """Checking removing an index."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09a_removeIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09a_removeIndex..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before deletion"
-            print "var1 column:", table.cols.var1
+            print("Before deletion")
+            print("var1 column:", table.cols.var1)
         self.assertEqual(table.colindexed["var1"], 1)
         self.assertTrue(idxcol is not None)
 
         # delete the index
         table.cols.var1.remove_index()
         if verbose:
-            print "After deletion"
-            print "var1 column:", table.cols.var1
+            print("After deletion")
+            print("var1 column:", table.cols.var1)
         self.assertTrue(table.cols.var1.index is None)
         self.assertEqual(table.colindexed["var1"], 0)
 
@@ -372,8 +379,8 @@ class BasicTestCase(PyTablesTestCase):
         self.assertTrue(indexrows is not None)
         idxcol = table.cols.var1.index
         if verbose:
-            print "After re-creation"
-            print "var1 column:", table.cols.var1
+            print("After re-creation")
+            print("var1 column:", table.cols.var1)
         self.assertTrue(idxcol is not None)
         self.assertEqual(table.colindexed["var1"], 1)
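
The removal/re-creation dance above uses the Column-level index API; in
outline (a sketch assuming an already-populated table at ``/table``):

    import tables as tb

    h5file = tb.open_file('indexed.h5', mode='a')  # hypothetical file
    col = h5file.root.table.cols.var1
    col.create_index()   # build the index; returns the number of indexed rows
    assert col.index is not None
    col.remove_index()   # drop it; queries fall back to full scans
    assert col.index is None
    h5file.close()
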
 
@@ -381,16 +388,17 @@ class BasicTestCase(PyTablesTestCase):
         """Checking removing an index (persistent version)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09b_removeIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09b_removeIndex..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before deletion"
-            print "var1 index column:", table.cols.var1
+            print("Before deletion")
+            print("var1 index column:", table.cols.var1)
         self.assertTrue(idxcol is not None)
         self.assertEqual(table.colindexed["var1"], 1)
         # delete the index
@@ -403,8 +411,8 @@ class BasicTestCase(PyTablesTestCase):
         idxcol = table.cols.var1.index
 
         if verbose:
-            print "After deletion"
-            print "var1 column:", table.cols.var1
+            print("After deletion")
+            print("var1 column:", table.cols.var1)
         self.assertTrue(table.cols.var1.index is None)
         self.assertEqual(table.colindexed["var1"], 0)
 
@@ -413,25 +421,25 @@ class BasicTestCase(PyTablesTestCase):
         self.assertTrue(indexrows is not None)
         idxcol = table.cols.var1.index
         if verbose:
-            print "After re-creation"
-            print "var1 column:", table.cols.var1
+            print("After re-creation")
+            print("var1 column:", table.cols.var1)
         self.assertTrue(idxcol is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
     def test10a_moveIndex(self):
-        """Checking moving a table with an index"""
+        """Checking moving a table with an index."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10a_moveIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10a_moveIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before move"
-            print "var1 column:", idxcol
+            print("Before move")
+            print("var1 column:", idxcol)
         self.assertEqual(table.colindexed["var1"], 1)
         self.assertTrue(idxcol is not None)
 
@@ -441,8 +449,8 @@ class BasicTestCase(PyTablesTestCase):
         # move the table to "agroup"
         table.move(agroup, "table2")
         if verbose:
-            print "After move"
-            print "var1 column:", idxcol
+            print("After move")
+            print("var1 column:", idxcol)
         self.assertTrue(table.cols.var1.index is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
@@ -451,8 +459,8 @@ class BasicTestCase(PyTablesTestCase):
         rowList1 = table.get_where_list('var1 < b"10"')
         rowList2 = [p.nrow for p in table if p['var1'] < b"10"]
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
 
@@ -460,16 +468,16 @@ class BasicTestCase(PyTablesTestCase):
         """Checking moving a table with an index (persistent version)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10b_moveIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10b_moveIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before move"
-            print "var1 index column:", idxcol
+            print("Before move")
+            print("var1 index column:", idxcol)
         self.assertTrue(idxcol is not None)
         self.assertEqual(table.colindexed["var1"], 1)
         # Create a new group called "agroup"
@@ -485,8 +493,8 @@ class BasicTestCase(PyTablesTestCase):
         idxcol = table.cols.var1.index
 
         if verbose:
-            print "After move"
-            print "var1 column:", idxcol
+            print("After move")
+            print("var1 column:", idxcol)
         self.assertTrue(table.cols.var1.index is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
@@ -495,8 +503,8 @@ class BasicTestCase(PyTablesTestCase):
         rowList1 = table.get_where_list('var1 < b"10"')
         rowList2 = [p.nrow for p in table if p['var1'] < b"10"]
         if verbose:
-            print "Selected values:", rowList1, type(rowList1)
-            print "Should look like:", rowList2, type(rowList2)
+            print("Selected values:", rowList1, type(rowList1))
+            print("Should look like:", rowList2, type(rowList2))
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
 
@@ -504,16 +512,16 @@ class BasicTestCase(PyTablesTestCase):
         """Checking moving a table with an index (small node cache)."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10c_moveIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10c_moveIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a", node_cache_slots=10)
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before move"
-            print "var1 column:", idxcol
+            print("Before move")
+            print("var1 column:", idxcol)
         self.assertEqual(table.colindexed["var1"], 1)
         self.assertTrue(idxcol is not None)
 
@@ -523,8 +531,8 @@ class BasicTestCase(PyTablesTestCase):
         # move the table to "agroup"
         table.move(agroup, "table2")
         if verbose:
-            print "After move"
-            print "var1 column:", idxcol
+            print("After move")
+            print("var1 column:", idxcol)
         self.assertTrue(table.cols.var1.index is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
@@ -533,8 +541,8 @@ class BasicTestCase(PyTablesTestCase):
         rowList1 = table.get_where_list('var1 < b"10"')
         rowList2 = [p.nrow for p in table if p['var1'] < b"10"]
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
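
The 10c/10d variants repeat the move-with-index check under a constrained
node cache; the knob is the ``node_cache_slots`` argument of ``open_file``
(sketch, with a hypothetical file name):

    import tables as tb

    # A positive value caps the LRU node cache at that many nodes;
    # 0 disables node caching entirely (the test10d case).
    h5file = tb.open_file('indexed.h5', mode='a', node_cache_slots=10)
    h5file.close()
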
 
@@ -542,16 +550,16 @@ class BasicTestCase(PyTablesTestCase):
         """Checking moving a table with an index (no node cache)."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10d_moveIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10d_moveIndex..." % self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a", node_cache_slots=0)
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before move"
-            print "var1 column:", idxcol
+            print("Before move")
+            print("var1 column:", idxcol)
         self.assertEqual(table.colindexed["var1"], 1)
         self.assertTrue(idxcol is not None)
 
@@ -561,8 +569,8 @@ class BasicTestCase(PyTablesTestCase):
         # move the table to "agroup"
         table.move(agroup, "table2")
         if verbose:
-            print "After move"
-            print "var1 column:", idxcol
+            print("After move")
+            print("var1 column:", idxcol)
         self.assertTrue(table.cols.var1.index is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
@@ -571,32 +579,33 @@ class BasicTestCase(PyTablesTestCase):
         rowList1 = table.get_where_list('var1 < b"10"')
         rowList2 = [p.nrow for p in table if p['var1'] < b"10"]
         if verbose:
-            print "Selected values:", rowList1
-            print "Should look like:", rowList2
+            print("Selected values:", rowList1)
+            print("Should look like:", rowList2)
         self.assertEqual(len(rowList1), len(rowList2))
         self.assertEqual(rowList1, rowList2)
 
     def test11a_removeTableWithIndex(self):
-        """Checking removing a table with indexes"""
+        """Checking removing a table with indexes."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11a_removeTableWithIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11a_removeTableWithIndex..." %
+                  self.__class__.__name__)
 
         # Open the HDF5 file in read-write mode
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before deletion"
-            print "var1 column:", table.cols.var1
+            print("Before deletion")
+            print("var1 column:", table.cols.var1)
         self.assertEqual(table.colindexed["var1"], 1)
         self.assertTrue(idxcol is not None)
 
         # delete the table
         self.fileh.remove_node("/table")
         if verbose:
-            print "After deletion"
+            print("After deletion")
         self.assertTrue("table" not in self.fileh.root)
 
         # re-create the table and the index again
@@ -615,8 +624,8 @@ class BasicTestCase(PyTablesTestCase):
             self.assertTrue(indexrows is not None)
         idxcol = table.cols.var1.index
         if verbose:
-            print "After re-creation"
-            print "var1 column:", table.cols.var1
+            print("After re-creation")
+            print("var1 column:", table.cols.var1)
         self.assertTrue(idxcol is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
@@ -624,22 +633,23 @@ class BasicTestCase(PyTablesTestCase):
         """Checking removing a table with indexes (persistent version 2)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11b_removeTableWithIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11b_removeTableWithIndex..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, mode="a")
         table = self.fileh.root.table
         idxcol = table.cols.var1.index
         if verbose:
-            print "Before deletion"
-            print "var1 column:", table.cols.var1
+            print("Before deletion")
+            print("var1 column:", table.cols.var1)
         self.assertEqual(table.colindexed["var1"], 1)
         self.assertTrue(idxcol is not None)
 
         # delete the table
         self.fileh.remove_node("/table")
         if verbose:
-            print "After deletion"
+            print("After deletion")
         self.assertTrue("table" not in self.fileh.root)
 
         # close and reopen the file
@@ -662,8 +672,8 @@ class BasicTestCase(PyTablesTestCase):
             self.assertTrue(indexrows is not None)
         idxcol = table.cols.var1.index
         if verbose:
-            print "After re-creation"
-            print "var1 column:", table.cols.var1
+            print("After re-creation")
+            print("var1 column:", table.cols.var1)
         self.assertTrue(idxcol is not None)
         self.assertEqual(table.colindexed["var1"], 1)
 
@@ -672,8 +682,9 @@ class BasicTestCase(PyTablesTestCase):
         """Checking removing a table with indexes (persistent version 3)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11c_removeTableWithIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11c_removeTableWithIndex..." %
+                  self.__class__.__name__)
 
         class Distance(IsDescription):
             frame = Int32Col(pos=0)
@@ -979,8 +990,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test01_attrs(self):
         "Checking indexing attributes (part1)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_attrs..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_attrs..." % self.__class__.__name__)
 
         table = self.table
         if self.iprops is DefaultProps:
@@ -1011,16 +1022,16 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test02_attrs(self):
         "Checking indexing attributes (part2)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_attrs..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_attrs..." % self.__class__.__name__)
 
         table = self.table
         # Check the policy parameters
         if verbose:
             if table.indexed:
-                print "index props:", table.autoindex
+                print("index props:", table.autoindex)
             else:
-                print "Table is not indexed"
+                print("Table is not indexed")
         # Check non-default values for index saving policy
         if self.iprops is NoAutoProps:
             self.assertFalse(table.autoindex)
@@ -1042,19 +1053,19 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test03_counters(self):
         "Checking indexing counters"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_counters..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_counters..." % self.__class__.__name__)
         table = self.table
         # Check the counters for indexes
         if verbose:
             if table.indexed:
-                print "indexedrows:", table._indexedrows
-                print "unsavedindexedrows:", table._unsaved_indexedrows
+                print("indexedrows:", table._indexedrows)
+                print("unsavedindexedrows:", table._unsaved_indexedrows)
                 index = table.cols.var1.index
-                print "table rows:", table.nrows
-                print "computed indexed rows:", index.nrows * index.slicesize
+                print("table rows:", table.nrows)
+                print("computed indexed rows:", index.nrows * index.slicesize)
             else:
-                print "Table is not indexed"
+                print("Table is not indexed")
         if self.iprops is not DefaultProps:
             index = table.cols.var1.index
             indexedrows = index.nelements
@@ -1066,20 +1077,20 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test04_noauto(self):
         "Checking indexing counters (non-automatic mode)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_noauto..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_noauto..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
         # Check the counters for indexes
         if verbose:
             if table.indexed:
-                print "indexedrows:", table._indexedrows
-                print "unsavedindexedrows:", table._unsaved_indexedrows
+                print("indexedrows:", table._indexedrows)
+                print("unsavedindexedrows:", table._unsaved_indexedrows)
                 index = table.cols.var1.index
-                print "computed indexed rows:", index.nelements
+                print("computed indexed rows:", index.nelements)
             else:
-                print "Table is not indexed"
+                print("Table is not indexed")
 
         # No unindexed rows should remain
         index = table.cols.var1.index
@@ -1100,8 +1111,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test05_icounters(self):
         "Checking indexing counters (remove_rows)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_icounters..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_icounters..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1118,14 +1129,14 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         # Check the counters for indexes
         if verbose:
             if table.indexed:
-                print "indexedrows:", table._indexedrows
-                print "original indexedrows:", indexedrows
-                print "unsavedindexedrows:", table._unsaved_indexedrows
-                print "original unsavedindexedrows:", unsavedindexedrows
+                print("indexedrows:", table._indexedrows)
+                print("original indexedrows:", indexedrows)
+                print("unsavedindexedrows:", table._unsaved_indexedrows)
+                print("original unsavedindexedrows:", unsavedindexedrows)
                 # index = table.cols.var1.index
-                print "index dirty:", table.cols.var1.index.dirty
+                print("index dirty:", table.cols.var1.index.dirty)
             else:
-                print "Table is not indexed"
+                print("Table is not indexed")
 
         # Check the counters
         self.assertEqual(table.nrows, self.nrows - 2)
@@ -1141,8 +1152,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test06_dirty(self):
         "Checking dirty flags (remove_rows action)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06_dirty..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06_dirty..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1154,11 +1165,11 @@ class AutomaticIndexingTestCase(unittest.TestCase):
             table = self.fileh.root.table
         # Check the dirty flag for indexes
         if verbose:
-            print "auto flag:", table.autoindex
+            print("auto flag:", table.autoindex)
             for colname in table.colnames:
                 if table.cols._f_col(colname).index:
-                    print "dirty flag col %s: %s" % \
-                          (colname, table.cols._f_col(colname).index.dirty)
+                    print("dirty flag col %s: %s" %
+                          (colname, table.cols._f_col(colname).index.dirty))
         # Check the flags
         for colname in table.colnames:
             if table.cols._f_col(colname).index:
@@ -1172,8 +1183,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test07_noauto(self):
         "Checking indexing counters (modify_rows, no-auto mode)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_noauto..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_noauto..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1190,14 +1201,14 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         # Check the counters for indexes
         if verbose:
             if table.indexed:
-                print "indexedrows:", table._indexedrows
-                print "original indexedrows:", indexedrows
-                print "unsavedindexedrows:", table._unsaved_indexedrows
-                print "original unsavedindexedrows:", unsavedindexedrows
+                print("indexedrows:", table._indexedrows)
+                print("original indexedrows:", indexedrows)
+                print("unsavedindexedrows:", table._unsaved_indexedrows)
+                print("original unsavedindexedrows:", unsavedindexedrows)
                 index = table.cols.var1.index
-                print "computed indexed rows:", index.nelements
+                print("computed indexed rows:", index.nelements)
             else:
-                print "Table is not indexed"
+                print("Table is not indexed")
 
         # Check the counters
         self.assertEqual(table.nrows, self.nrows)
@@ -1208,8 +1219,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         if verbose:
             for colname in table.colnames:
                 if table.cols._f_col(colname).index:
-                    print "dirty flag col %s: %s" % \
-                          (colname, table.cols._f_col(colname).index.dirty)
+                    print("dirty flag col %s: %s" %
+                          (colname, table.cols._f_col(colname).index.dirty))
         for colname in table.colnames:
             if table.cols._f_col(colname).index:
                 if not table.autoindex:
@@ -1222,8 +1233,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test07b_noauto(self):
         "Checking indexing queries (modify in iterator, no-auto mode)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07b_noauto..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07b_noauto..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1247,17 +1258,17 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         resq = [row.nrow for row in table.where('(var2 == True) & (var3 > 0)')]
         res_ = res + [3]
         if verbose:
-            print "AutoIndex?:", table.autoindex
-            print "Query results (original):", res
-            print "Query results (after modifying table):", resq
-            print "Should look like:", res_
+            print("AutoIndex?:", table.autoindex)
+            print("Query results (original):", res)
+            print("Query results (after modifying table):", resq)
+            print("Should look like:", res_)
         self.assertEqual(res_, resq)
 
     def test07c_noauto(self):
         "Checking indexing queries (append, no-auto mode)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07c_noauto..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07c_noauto..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1277,17 +1288,17 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         resq = [row.nrow for row in table.where('(var2 == True) & (var3 > 0)')]
         res_ = res + [table.nrows-3, table.nrows-2, table.nrows-1]
         if verbose:
-            print "AutoIndex?:", table.autoindex
-            print "Query results (original):", res
-            print "Query results (after modifying table):", resq
-            print "Should look like:", res_
+            print("AutoIndex?:", table.autoindex)
+            print("Query results (original):", res)
+            print("Query results (after modifying table):", resq)
+            print("Should look like:", res_)
         self.assertEqual(res_, resq)
 
     def test08_dirty(self):
         "Checking dirty flags (modify_columns)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_dirty..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_dirty..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1314,8 +1325,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         if verbose:
             for colname in table.colnames:
                 if table.cols._f_col(colname).index:
-                    print "dirty flag col %s: %s" % \
-                          (colname, table.cols._f_col(colname).index.dirty)
+                    print("dirty flag col %s: %s" %
+                          (colname, table.cols._f_col(colname).index.dirty))
         for colname in table.colnames:
             if table.cols._f_col(colname).index:
                 if not table.autoindex:
@@ -1332,8 +1343,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test09a_propIndex(self):
         "Checking propagate Index feature in Table.copy() (attrs)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09a_propIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09a_propIndex..." % self.__class__.__name__)
         table = self.table
         # Don't force a sync in indexes
         # table.flush_rows_to_index()
@@ -1356,11 +1367,11 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         index1 = table.cols.var1.index
         index2 = table2.cols.var1.index
         if verbose:
-            print "Copied index:", index2
-            print "Original index:", index1
+            print("Copied index:", index2)
+            print("Original index:", index1)
             if index1:
-                print "Elements in copied index:", index2.nelements
-                print "Elements in original index:", index1.nelements
+                print("Elements in copied index:", index2.nelements)
+                print("Elements in original index:", index1.nelements)
         # Check the counters
         self.assertEqual(table.nrows, table2.nrows)
         if table.indexed:
@@ -1376,8 +1387,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         if verbose:
             for colname in table2.colnames:
                 if table2.cols._f_col(colname).index:
-                    print "dirty flag col %s: %s" % \
-                          (colname, table2.cols._f_col(colname).index.dirty)
+                    print("dirty flag col %s: %s" %
+                          (colname, table2.cols._f_col(colname).index.dirty))
         for colname in table2.colnames:
             if table2.cols._f_col(colname).index:
                 self.assertEqual(table2.cols._f_col(colname).index.dirty,
@@ -1386,8 +1397,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test09b_propIndex(self):
         "Checking that propindexes=False works"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09b_propIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09b_propIndex..." % self.__class__.__name__)
         table = self.table
         # Don't force a sync in indexes
         # table.flush_rows_to_index()
@@ -1408,9 +1419,9 @@ class AutomaticIndexingTestCase(unittest.TestCase):
             table2 = self.fileh.root.table2
 
         if verbose:
-            print "autoindex?:", self.iprops.auto
-            print "Copied index indexed?:", table2.cols.var1.is_indexed
-            print "Original index indexed?:", table.cols.var1.is_indexed
+            print("autoindex?:", self.iprops.auto)
+            print("Copied index indexed?:", table2.cols.var1.is_indexed)
+            print("Original index indexed?:", table.cols.var1.is_indexed)
         if self.iprops is DefaultProps:
             # No index: the index should not exist
             self.assertFalse(table2.cols.var1.is_indexed)
@@ -1422,8 +1433,8 @@ class AutomaticIndexingTestCase(unittest.TestCase):
     def test10_propIndex(self):
         "Checking propagate Index feature in Table.copy() (values)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10_propIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10_propIndex..." % self.__class__.__name__)
         table = self.table
         # Don't force a sync in indexes
         # table.flush_rows_to_index()
@@ -1446,17 +1457,17 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         index1 = table.cols.var3.index
         index2 = table2.cols.var3.index
         if verbose:
-            print "Copied index:", index2
-            print "Original index:", index1
+            print("Copied index:", index2)
+            print("Original index:", index1)
             if index1:
-                print "Elements in copied index:", index2.nelements
-                print "Elements in original index:", index1.nelements
+                print("Elements in copied index:", index2.nelements)
+                print("Elements in original index:", index1.nelements)
 
     def test11_propIndex(self):
         "Checking propagate Index feature in Table.copy() (dirty flags)"
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11_propIndex..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11_propIndex..." % self.__class__.__name__)
         table = self.table
         # Force a sync in indexes
         table.flush_rows_to_index()
@@ -1481,18 +1492,18 @@ class AutomaticIndexingTestCase(unittest.TestCase):
         index1 = table.cols.var1.index
         index2 = table2.cols.var1.index
         if verbose:
-            print "Copied index:", index2
-            print "Original index:", index1
+            print("Copied index:", index2)
+            print("Original index:", index1)
             if index1:
-                print "Elements in copied index:", index2.nelements
-                print "Elements in original index:", index1.nelements
+                print("Elements in copied index:", index2.nelements)
+                print("Elements in original index:", index1.nelements)
 
         # Check the dirty flag for indexes
         if verbose:
             for colname in table2.colnames:
                 if table2.cols._f_col(colname).index:
-                    print "dirty flag col %s: %s" % \
-                          (colname, table2.cols._f_col(colname).index.dirty)
+                    print("dirty flag col %s: %s" %
+                          (colname, table2.cols._f_col(colname).index.dirty))
         for colname in table2.colnames:
             if table2.cols._f_col(colname).index:
                 if table2.autoindex:
@@ -1707,9 +1718,9 @@ class IndexFiltersTestCase(TempFileMixin, PyTablesTestCase):
         icol.reindex()
         ni = icol.index
         if verbose:
-            print "Old parameters: %s, %s, %s" % (kind, optlevel, filters)
-            print "New parameters: %s, %s, %s" % (
-                ni.kind, ni.optlevel, ni.filters)
+            print("Old parameters: %s, %s, %s" % (kind, optlevel, filters))
+            print("New parameters: %s, %s, %s" % (
+                ni.kind, ni.optlevel, ni.filters))
         self.assertEqual(ni.kind, kind)
         self.assertEqual(ni.optlevel, optlevel)
         self.assertEqual(ni.filters, filters)
@@ -1777,8 +1788,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedcol = numpy.sort(icol[:])
         sortedcol2 = icol.index.read_sorted()
         if verbose:
-            print "Original sorted column:", sortedcol
-            print "The values from the index:", sortedcol2
+            print("Original sorted column:", sortedcol)
+            print("The values from the index:", sortedcol2)
         self.assertTrue(allequal(sortedcol, sortedcol2))
 
     def test01_readSorted2(self):
@@ -1787,8 +1798,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedcol = numpy.sort(icol[:])[30:55]
         sortedcol2 = icol.index.read_sorted(30, 55)
         if verbose:
-            print "Original sorted column:", sortedcol
-            print "The values from the index:", sortedcol2
+            print("Original sorted column:", sortedcol)
+            print("The values from the index:", sortedcol2)
         self.assertTrue(allequal(sortedcol, sortedcol2))
 
     def test01_readSorted3(self):
@@ -1797,8 +1808,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedcol = numpy.sort(icol[:])[33:97]
         sortedcol2 = icol.index.read_sorted(33, 97)
         if verbose:
-            print "Original sorted column:", sortedcol
-            print "The values from the index:", sortedcol2
+            print("Original sorted column:", sortedcol)
+            print("The values from the index:", sortedcol2)
         self.assertTrue(allequal(sortedcol, sortedcol2))
 
     def test02_readIndices1(self):
@@ -1807,8 +1818,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:]).astype('uint64')
         indicescol2 = icol.index.read_indices()
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test02_readIndices2(self):
@@ -1817,8 +1828,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[30:55].astype('uint64')
         indicescol2 = icol.index.read_indices(30, 55)
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test02_readIndices3(self):
@@ -1827,8 +1838,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[33:97].astype('uint64')
         indicescol2 = icol.index.read_indices(33, 97)
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test02_readIndices4(self):
@@ -1837,8 +1848,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[33:97:2].astype('uint64')
         indicescol2 = icol.index.read_indices(33, 97, 2)
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test02_readIndices5(self):
@@ -1847,8 +1858,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[33:55:5].astype('uint64')
         indicescol2 = icol.index.read_indices(33, 55, 5)
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test02_readIndices6(self):
@@ -1857,8 +1868,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[::3].astype('uint64')
         indicescol2 = icol.index.read_indices(step=3)
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test03_getitem1(self):
@@ -1867,8 +1878,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:]).astype('uint64')
         indicescol2 = icol.index[:]
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test03_getitem2(self):
@@ -1877,8 +1888,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[31].astype('uint64')
         indicescol2 = icol.index[31]
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test03_getitem3(self):
@@ -1887,8 +1898,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         indicescol = numpy.argsort(icol[:])[2:16].astype('uint64')
         indicescol2 = icol.index[2:16]
         if verbose:
-            print "Original indices column:", indicescol
-            print "The values from the index:", indicescol2
+            print("Original indices column:", indicescol)
+            print("The values from the index:", indicescol2)
         self.assertTrue(allequal(indicescol, indicescol2))
 
     def test04_itersorted1(self):
@@ -1899,8 +1910,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol')], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted2(self):
@@ -1911,8 +1922,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', start=15)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted3(self):
@@ -1923,8 +1934,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', stop=20)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted4(self):
@@ -1935,32 +1946,34 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', start=15, stop=20)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted5(self):
-        """Testing the Table.itersorted() method with a start, stop and step."""
+        """Testing the Table.itersorted() method with a start, stop and
+        step."""
         table = self.table
         sortedtable = numpy.sort(table[:], order='icol')[15:45:4]
         sortedtable2 = numpy.array(
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', start=15, stop=45, step=4)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted6(self):
-        """Testing the Table.itersorted() method with a start, stop and step."""
+        """Testing the Table.itersorted() method with a start, stop and
+        step."""
         table = self.table
         sortedtable = numpy.sort(table[:], order='icol')[33:55:5]
         sortedtable2 = numpy.array(
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', start=33, stop=55, step=5)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted7(self):
@@ -1971,8 +1984,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', checkCSI=True)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted8(self):
@@ -1985,8 +1998,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', start=55, stop=33, step=-5)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test04_itersorted9(self):
@@ -1998,8 +2011,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
             [row.fetch_all_fields() for row in table.itersorted(
              'icol', step=-5)], dtype=table._v_dtype)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from the iterator:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from the iterator:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted1(self):
@@ -2008,8 +2021,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = table.read_sorted('icol')
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted2(self):
@@ -2018,8 +2031,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[16:17]
         sortedtable2 = table.read_sorted('icol', start=16)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted3(self):
@@ -2028,18 +2041,19 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[16:33]
         sortedtable2 = table.read_sorted('icol', start=16, stop=33)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted4(self):
-        """Testing the Table.read_sorted() method with a start, stop and step."""
+        """Testing the Table.read_sorted() method with a start, stop and
+        step."""
         table = self.table
         sortedtable = numpy.sort(table[:], order='icol')[33:55:5]
         sortedtable2 = table.read_sorted('icol', start=33, stop=55, step=5)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted5(self):
@@ -2048,8 +2062,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[::3]
         sortedtable2 = table.read_sorted('icol', step=3)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted6(self):
@@ -2058,8 +2072,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[::-1]
         sortedtable2 = table.read_sorted('icol', step=-1)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted7(self):
@@ -2068,8 +2082,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[::-2]
         sortedtable2 = table.read_sorted('icol', step=-2)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted8(self):
@@ -2080,8 +2094,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[sstart:sstop:-1]
         sortedtable2 = table.read_sorted('icol', start=24, stop=54, step=-1)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted9(self):
@@ -2092,8 +2106,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[sstart:sstop:-3]
         sortedtable2 = table.read_sorted('icol', start=14, stop=54, step=-3)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted10(self):
@@ -2104,8 +2118,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[sstart:sstop:-2]
         sortedtable2 = table.read_sorted('icol', start=24, stop=25, step=-2)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05_readSorted11(self):
@@ -2116,8 +2130,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[sstart:sstop:-2]
         sortedtable2 = table.read_sorted('icol', start=137, stop=25, step=-2)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05a_readSorted12(self):
@@ -2126,8 +2140,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = table.read_sorted('icol', checkCSI=True)
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test05b_readSorted12(self):
@@ -2145,8 +2159,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = table2[:]
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from copy:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test06_copy_sorted2(self):
@@ -2158,8 +2172,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[::-1]
         sortedtable2 = table2[:]
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from copy:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test06_copy_sorted3(self):
@@ -2171,8 +2185,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[3:4]
         sortedtable2 = table2[:]
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from copy:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test06_copy_sorted4(self):
@@ -2184,8 +2198,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[3:40]
         sortedtable2 = table2[:]
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from copy:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test06_copy_sorted5(self):
@@ -2198,8 +2212,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')[3:33:5]
         sortedtable2 = table2[:]
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from copy:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test06_copy_sorted6(self):
@@ -2212,8 +2226,8 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = table2[:]
         if verbose:
-            print "Original sorted table:", sortedtable
-            print "The values from copy:", sortedtable2
+            print("Original sorted table:", sortedtable)
+            print("The values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test06_copy_sorted7(self):
@@ -2241,7 +2255,7 @@ class CompletelySortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         t2 = self.h5file.create_table('/', 't2', self.MyDescription)
         irows = t2.cols.rcol.create_csindex()
         if verbose:
-            print "repr(t2)-->\n", repr(t2)
+            print("repr(t2)-->\n", repr(t2))
         self.assertEqual(irows, 0)
         self.assertEqual(t2.colindexes['rcol'].is_csi, False)
 
@@ -2278,22 +2292,23 @@ class ReadSortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = table.read_sorted('icol')
         if verbose:
-            print "Sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         # Compare with the sorted read table because we have no
         # guarantees that read_sorted returns a completely sorted table
         self.assertTrue(allequal(sortedtable,
                                  numpy.sort(sortedtable2, order="icol")))
 
     def test01_readSorted2(self):
-        """Testing the Table.read_sorted() method with no arguments (re-open)."""
+        """Testing the Table.read_sorted() method with no arguments (re-open).
+        """
         self._reopen()
         table = self.h5file.root.table
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = table.read_sorted('icol')
         if verbose:
-            print "Sorted table:", sortedtable
-            print "The values from read_sorted:", sortedtable2
+            print("Sorted table:", sortedtable)
+            print("The values from read_sorted:", sortedtable2)
         # Compare with the sorted read table because we have no
         # guarantees that read_sorted returns a completely sorted table
         self.assertTrue(allequal(sortedtable,
@@ -2308,8 +2323,8 @@ class ReadSortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = numpy.sort(table2[:], order='icol')
         if verbose:
-            print "Original table:", table2[:]
-            print "The sorted values from copy:", sortedtable2
+            print("Original table:", table2[:])
+            print("The sorted values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
     def test02_copy_sorted2(self):
@@ -2322,8 +2337,8 @@ class ReadSortedIndexTestCase(TempFileMixin, PyTablesTestCase):
         sortedtable = numpy.sort(table[:], order='icol')
         sortedtable2 = numpy.sort(table2[:], order='icol')
         if verbose:
-            print "Original table:", table2[:]
-            print "The sorted values from copy:", sortedtable2
+            print("Original table:", table2[:])
+            print("The sorted values from copy:", sortedtable2)
         self.assertTrue(allequal(sortedtable, sortedtable2))
 
 
@@ -2393,7 +2408,8 @@ class Issue156TestBase(PyTablesTestCase):
 
         # check column is sorted
         self.assertTrue(numpy.all(
-            new_node.col(self.sort_field) == sorted(oldNode.col(self.sort_field))))
+            new_node.col(self.sort_field) ==
+            sorted(oldNode.col(self.sort_field))))
         # check index is available
         self.assertTrue(self.sort_field in new_node.colindexes)
         # check CSI was propagated
@@ -2411,7 +2427,7 @@ class Issue156TestCase02(Issue156TestBase):
 
 
 class Issue119Time32ColTestCase(PyTablesTestCase):
-    """ TimeCol not properly indexing """
+    """TimeCol not properly indexing."""
 
     col_typ = Time32Col
     values = [
@@ -2427,15 +2443,14 @@ class Issue119Time32ColTestCase(PyTablesTestCase):
         0.75127635627046820,
     ]
 
-
     def setUp(self):
         # create hdf5 file
         self.filename = tempfile.mktemp(".hdf5")
         self.file = open_file(self.filename, mode="w")
 
         class Descr(IsDescription):
-            when = self.col_typ(pos = 1)
-            value = Float32Col(pos = 2)
+            when = self.col_typ(pos=1)
+            value = Float32Col(pos=2)
 
         self.table = self.file.create_table('/', 'test', Descr)
 
@@ -2452,11 +2467,11 @@ class Issue119Time32ColTestCase(PyTablesTestCase):
         tbl = self.table
         t = self.t
 
-        wherestr = '(when >= %d) & (when < %d)'%(t, t+5)
+        wherestr = '(when >= %d) & (when < %d)' % (t, t + 5)
 
         no_index = tbl.read_where(wherestr)
 
-        tbl.cols.when.create_index(_verbose = False)
+        tbl.cols.when.create_index(_verbose=False)
         with_index = tbl.read_where(wherestr)
 
         self.assertTrue((no_index == with_index).all())
@@ -2466,8 +2481,51 @@ class Issue119Time64ColTestCase(Issue119Time32ColTestCase):
     col_typ = Time64Col
 
 
+class TestIndexingNans(TempFileMixin, PyTablesTestCase):
+    def test_issue_282(self):
+        trMap = {'index': Int64Col(), 'values': FloatCol()}
+        table = self.h5file.create_table('/', 'table', trMap)
+
+        r = table.row
+        for i in range(5):
+            r['index'] = i
+            r['values'] = numpy.nan if i == 0 else i
+            r.append()
+        table.flush()
+
+        table.cols.values.create_index()
+
+        # NaN never satisfies an ordered comparison, so only rows 1-4 match
+        result = table.read_where('(values >= 0)')
+        self.assertEqual(len(result), 4)
+
+    def test_issue_327(self):
+        table = self.h5file.create_table('/', 'table', dict(
+            index=Int64Col(),
+            values=FloatCol(shape=()),
+            values2=FloatCol(shape=()),
+        ))
+
+        r = table.row
+        for i in range(5):
+            r['index'] = i
+            r['values'] = numpy.nan if i == 2 or i == 3 else i
+            r['values2'] = i
+            r.append()
+        table.flush()
+
+        table.cols.values.create_index()
+        table.cols.values2.create_index()
+
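+        # values2 holds no NaN, so all four positive rows must come back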
+        results2 = table.read_where('(values2 > 0)')
+        self.assertEqual(len(results2), 4)
+
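+        # rows 2 and 3 hold NaN in "values" and must not match the query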
+        results = table.read_where('(values > 0)')
+        self.assertEqual(len(results), 2)
+
 #----------------------------------------------------------------------
 
+
 def suite():
     theSuite = unittest.TestSuite()
 
@@ -2503,6 +2561,7 @@ def suite():
         theSuite.addTest(unittest.makeSuite(Issue156TestCase02))
         theSuite.addTest(unittest.makeSuite(Issue119Time32ColTestCase))
         theSuite.addTest(unittest.makeSuite(Issue119Time64ColTestCase))
+        theSuite.addTest(unittest.makeSuite(TestIndexingNans))
     if heavy:
         # These are too heavy for normal testing
         theSuite.addTest(unittest.makeSuite(AI4bTestCase))
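
The new TestIndexingNans cases pin down how NaN values interact with
indexed queries: NaN fails every ordered comparison, so creating an index
on a float column must not let NaN rows leak into range results.  A
minimal sketch of the behaviour under test, assuming PyTables 3.x with
NumPy available (the file and column names are illustrative only):

    import numpy
    import tables

    h5 = tables.open_file("nan_index_demo.h5", mode="w")
    table = h5.create_table("/", "demo", {"idx": tables.Int64Col(),
                                          "val": tables.FloatCol()})
    row = table.row
    for i in range(5):
        row["idx"] = i
        # one NaN among otherwise positive values
        row["val"] = numpy.nan if i == 0 else float(i)
        row.append()
    table.flush()

    table.cols.val.create_index()

    # the NaN row is stored but can never satisfy "val >= 0"
    assert len(table.read_where("val >= 0")) == 4
    h5.close()

The result must be the same with and without the index, which is exactly
what issues 282 and 327 were about.
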
diff --git a/tables/tests/test_indexvalues.py b/tables/tests/test_indexvalues.py
index 64b5c3f..b72a667 100644
--- a/tables/tests/test_indexvalues.py
+++ b/tables/tests/test_indexvalues.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import os
 import random
 import unittest
@@ -48,7 +49,7 @@ class SelectValuesTestCase(unittest.TestCase):
     def setUp(self):
         # Create an instance of an HDF5 Table
         if verbose:
-            print "Checking index kind-->", self.kind
+            print("Checking index kind-->", self.kind)
         self.file = tempfile.mktemp(".h5")
         self.fileh = open_file(self.file, "w")
         self.rootgroup = self.fileh.root
@@ -67,11 +68,11 @@ class SelectValuesTestCase(unittest.TestCase):
                           shuffle=self.shuffle,
                           fletcher32=self.fletcher32)
         table1 = self.fileh.create_table(group, 'table1', Small, title,
-                                        filters, self.nrows,
-                                        chunkshape=(self.chunkshape,))
+                                         filters, self.nrows,
+                                         chunkshape=(self.chunkshape,))
         table2 = self.fileh.create_table(group, 'table2', Small, title,
-                                        filters, self.nrows,
-                                        chunkshape=(self.chunkshape,))
+                                         filters, self.nrows,
+                                         chunkshape=(self.chunkshape,))
         count = 0
         for i in xrange(0, self.nrows, self.nrep):
             for j in range(self.nrep):
@@ -106,8 +107,8 @@ class SelectValuesTestCase(unittest.TestCase):
             indexrows = col.create_index(
                 kind=self.kind, _blocksizes=self.blocksizes)
         if verbose:
-            print "Number of written rows:", table1.nrows
-            print "Number of indexed rows:", indexrows
+            print("Number of written rows:", table1.nrows)
+            print("Number of indexed rows:", indexrows)
 
         if self.reopen:
             self.fileh.close()
@@ -126,8 +127,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (string flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -148,9 +149,9 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Superior & inferior limits:", il, sl
 #             print "Selection results (index):", results1
-            print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Should look like:", results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -165,8 +166,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -181,8 +182,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -198,8 +199,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -207,8 +208,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (string flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -226,11 +227,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -242,11 +243,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -258,11 +259,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -275,11 +276,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -287,8 +288,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (bool flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -297,13 +298,12 @@ class SelectValuesTestCase(unittest.TestCase):
         t1var2 = table1.cols.var2
         self.assertTrue(t1var2 is not None)
         results1 = [p["var2"] for p in table1.where('t1var2 == True')]
-        results2 = [p["var2"] for p in table2
-                    if p["var2"] == True]
+        results2 = [p["var2"] for p in table2 if p["var2"] is True]
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -311,8 +311,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (bool flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -321,13 +321,12 @@ class SelectValuesTestCase(unittest.TestCase):
         t1var2 = table1.cols.var2
         self.assertTrue(t1var2 is not None)
         results1 = [p["var2"] for p in table1.where('t1var2 == False')]
-        results2 = [p["var2"] for p in table2
-                    if p["var2"] == False]
+        results2 = [p["var2"] for p in table2 if p["var2"] is False]
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -335,8 +334,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -360,8 +359,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -376,8 +375,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -392,8 +391,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -408,8 +407,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -417,8 +416,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -440,11 +439,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -457,11 +456,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -474,11 +473,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -491,11 +490,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -503,8 +502,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (long flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03c..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -526,11 +525,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -543,11 +542,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -560,11 +559,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -577,11 +576,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -589,8 +588,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (long and int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03d..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03d..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -612,11 +611,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -629,11 +628,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -646,11 +645,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -663,11 +662,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -675,8 +674,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (float flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -700,8 +699,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1.sort(), results2.sort())
 
@@ -716,8 +715,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -732,8 +731,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
@@ -750,8 +749,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -759,8 +758,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index (float flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -782,11 +781,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -799,11 +798,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -816,11 +815,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -833,11 +832,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -845,8 +844,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking get_where_list & itersequence (string, python flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -860,7 +859,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
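(Each selection block in these tests follows the same four-step pattern:
build a condition string, assert via will_query_use_indexing() that the
index will actually be used, fetch matching coordinates with
get_where_list(), and compare against a brute-force Python loop over the
same rows. A self-contained sketch of that pattern -- the file name demo.h5
and the one-column description are illustrative assumptions, not part of
the patch:

    import tables

    class Record(tables.IsDescription):
        var1 = tables.Int32Col()

    h5 = tables.open_file("demo.h5", mode="w")
    table = h5.create_table(h5.root, "table1", Record)
    row = table.row
    for i in range(100):
        row["var1"] = i
        row.append()
    table.flush()
    table.cols.var1.create_index()   # make "var1 < sl" an indexable query

    sl = 50  # picked up from the caller's namespace by the condition below
    rows_idx = table.get_where_list("var1 < sl")          # indexed lookup
    rows_ref = [r.nrow for r in table if r["var1"] < sl]  # reference loop
    # Indexing does not guarantee row order, hence the sorts.
    assert sorted(rows_idx.tolist()) == sorted(rows_ref)
    h5.close()
)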
@@ -873,15 +873,16 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1.sort(), results2.sort())
 
         # Second selection
         condition = '(il<=t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
@@ -894,15 +895,16 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il<t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
@@ -915,8 +917,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
@@ -925,7 +927,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # Fourth selection
         condition = '(il<t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
@@ -938,17 +941,18 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test05b(self):
-        """Checking get_where_list & itersequence (numpy string lims & python flavor)"""
+        """Checking get_where_list & itersequence (numpy string lims & python
+        flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -963,7 +967,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't1col<sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
@@ -974,18 +979,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = 't1col<=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
         results2 = [p["var1"] for p in table2
@@ -995,18 +1001,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = 't1col>sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
         results2 = [p["var1"] for p in table2
@@ -1016,32 +1023,32 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var1'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var1"] for p in table2
-                    if p["var1"] >= sl]
+        results2 = [p["var1"] for p in table2 if p["var1"] >= sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1049,8 +1056,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking get_where_list & itersequence (bool flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1059,26 +1066,27 @@ class SelectValuesTestCase(unittest.TestCase):
         t1var2 = table1.cols.var2
         condition = 't1var2==True'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1var2.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1var2.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var2'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var2"] for p in table2
-                    if p["var2"] == True]
+        results2 = [p["var2"] for p in table2 if p["var2"] is True]
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test06b(self):
-        """Checking get_where_list & itersequence (numpy bool limits & flavor)"""
+        """Checking get_where_list & itersequence (numpy bool limits &
+        flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1089,17 +1097,17 @@ class SelectValuesTestCase(unittest.TestCase):
         self.assertFalse(false)     # silence pyflakes
         condition = 't1var2==false'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1var2.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1var2.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var2'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var2"] for p in table2
-                    if p["var2"] == False]
+        results2 = [p["var2"] for p in table2 if p["var2"] is False]
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
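(One behavioral detail in the hunk above: the reference loop now tests
identity (is True / is False) rather than equality. The two agree for plain
Python bools but diverge for NumPy booleans, so whether this preserves
behavior depends on what the row accessor yields here. A two-line caveat
sketch, not taken from the patch:

    import numpy as np

    assert np.bool_(True) == True      # value equality holds
    assert np.bool_(True) is not True  # ...but identity does not
    assert True is True                # plain bool is a singleton
)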
 
@@ -1107,8 +1115,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking get_where_list & itersequence (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1122,7 +1130,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
@@ -1135,15 +1144,16 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1.sort(), results2.sort())
 
         # Second selection
         condition = '(il<=t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
@@ -1156,15 +1166,16 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il<t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
@@ -1177,8 +1188,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
@@ -1187,7 +1198,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # Fourth selection
         condition = '(il<t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
@@ -1200,17 +1212,18 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test07b(self):
-        """Checking get_where_list & itersequence (numpy int limits & flavor)"""
+        """Checking get_where_list & itersequence (numpy int limits &
+        flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1225,7 +1238,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't1col<sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
@@ -1236,18 +1250,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = 't1col<=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
         results2 = [p["var3"] for p in table2
@@ -1257,18 +1272,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = 't1col>sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
         results2 = [p["var3"] for p in table2
@@ -1278,18 +1294,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var3'] for p in table1.itersequence(rowList1)]
         results2 = [p["var3"] for p in table2
@@ -1299,11 +1316,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1311,8 +1328,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking get_where_list & itersequence (float flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1327,7 +1344,8 @@ class SelectValuesTestCase(unittest.TestCase):
         condition = '(il<=t1col)&(t1col<=sl)'
         # results1 = [p["var4"] for p in table1.where(condition)]
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
@@ -1340,15 +1358,16 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1.sort(), results2.sort())
 
         # Second selection
         condition = '(il<=t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
@@ -1361,20 +1380,20 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il<t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var4"] for p in table2
-                    if il < p["var4"] <= sl]
+        results2 = [p["var4"] for p in table2 if il < p["var4"] <= sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
@@ -1382,8 +1401,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
@@ -1392,12 +1411,12 @@ class SelectValuesTestCase(unittest.TestCase):
         # Fourth selection
         condition = '(il<t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         table1.flavor = "python"
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var4"] for p in table2
-                    if il < p["var4"] < sl]
+        results2 = [p["var4"] for p in table2 if il < p["var4"] < sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
@@ -1405,17 +1424,18 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test08b(self):
-        """Checking get_where_list & itersequence (numpy float limits & flavor)"""
+        """Checking get_where_list & itersequence (numpy float limits &
+        flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1430,84 +1450,84 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't1col<sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var4"] for p in table2
-                    if p["var4"] < sl]
+        results2 = [p["var4"] for p in table2 if p["var4"] < sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = 't1col<=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var4"] for p in table2
-                    if p["var4"] <= sl]
+        results2 = [p["var4"] for p in table2 if p["var4"] <= sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = 't1col>sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var4"] for p in table2
-                    if p["var4"] > sl]
+        results2 = [p["var4"] for p in table2 if p["var4"] > sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         rowList1 = table1.get_where_list(condition)
         results1 = [p['var4'] for p in table1.itersequence(rowList1)]
-        results2 = [p["var4"] for p in table2
-                    if p["var4"] >= sl]
+        results2 = [p["var4"] for p in table2 if p["var4"] >= sl]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1515,8 +1535,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking non-indexed where() (string flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1539,11 +1559,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results2 = [p["var1"] for p in table2.iterrows(2, 10)
                     if p["var1"] <= sl]
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
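(The test09* cases repeat the comparison for non-indexed, in-kernel queries
over row ranges: where() takes the same start/stop/step bounds as
iterrows(), including negative stops counted from the end, and since
in-kernel selection preserves row order the results compare equal without
sorting. A minimal sketch under the same assumptions as the earlier one,
this time without creating an index:

    # Assumes `table` and `sl` as in the earlier sketch, but with no index
    # on var1, so the query runs in-kernel.
    cond = "var1 <= sl"
    assert not table.will_query_use_indexing(cond)  # empty frozenset
    res1 = [r["var1"] for r in table.where(cond, start=2, stop=-5)]
    res2 = [r["var1"] for r in table.iterrows(2, -5) if r["var1"] <= sl]
    assert res1 == res2
)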
 
@@ -1555,35 +1575,39 @@ class SelectValuesTestCase(unittest.TestCase):
         results2 = [p["var1"] for p in table2.iterrows(2, 30, 2)
                     if il < p["var1"] < sl]
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il>t1col)&(t1col>sl)'
         self.assertTrue(not table1.will_query_use_indexing(condition))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=-5)]
-        results2 = [p["var1"] for p in table2.iterrows(2, -5)  # Negative indices
-                    if (il > p["var1"] > sl)]
-        if verbose:
-            print "Limits:", il, sl
-            print "Limit:", sl
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=-5)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, -5)  # Negative indices
+            if (il > p["var1"] > sl)
+        ]
+        if verbose:
+            print("Limits:", il, sl)
+            print("Limit:", sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # This selection to be commented out
 #         condition = 't1col>=sl'
 #         self.assertTrue(not table1.will_query_use_indexing(condition))
-#         results1 = [p['var1'] for p in table1.where(condition,start=2,stop=-1,step=1)]
+#         results1 = [p['var1'] for p in table1.where(condition,start=2,
+#                                                     stop=-1,step=1)]
 #         results2 = [p["var1"] for p in table2.iterrows(2, -1, 1)
 #                     if p["var1"] >= sl]
 #         if verbose:
@@ -1605,11 +1629,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results2 = [p["var1"] for p in table2.iterrows(2, -1, 3)
                     if p["var1"] >= sl]
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1622,8 +1646,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking non-indexed where() (float flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1646,11 +1670,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results2 = [p["var4"] for p in table2.iterrows(2, 5)
                     if p["var4"] < sl]
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1662,27 +1686,30 @@ class SelectValuesTestCase(unittest.TestCase):
         results2 = [p["var4"] for p in table2.iterrows(2, -1, 2)
                     if il < p["var4"] <= sl]
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(not table1.will_query_use_indexing(condition))
-        results1 = [p['var4'] for p in
-                    table1.where(condition, start=2, stop=-5)]
-        results2 = [p["var4"] for p in table2.iterrows(2, -5)  # Negative indices
-                    if il <= p["var4"] <= sl]
-        if verbose:
-            print "Limit:", sl
+        results1 = [
+            p['var4'] for p in table1.where(condition, start=2, stop=-5)
+        ]
+        results2 = [
+            p["var4"] for p in table2.iterrows(2, -5)  # Negative indices
+            if il <= p["var4"] <= sl
+        ]
+        if verbose:
+            print("Limit:", sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1694,11 +1721,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results2 = [p["var4"] for p in table2.iterrows(0, -1, 3)
                     if p["var4"] >= sl]
         if verbose:
-            print "Limit:", sl
+            print("Limit:", sl)
 #             print "Selection results (in-kernel):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1711,8 +1738,8 @@ class SelectValuesTestCase(unittest.TestCase):
         "Check non-indexed where() w/ ranges, changing step (string flavor)"
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09c..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1739,11 +1766,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1759,11 +1786,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1779,11 +1806,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1799,11 +1826,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1816,8 +1843,8 @@ class SelectValuesTestCase(unittest.TestCase):
         "Checking non-indexed where() w/ ranges, changing step (int flavor)"
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09d..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09d..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1844,11 +1871,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1864,11 +1891,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1884,11 +1911,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1904,11 +1931,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -1921,8 +1948,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking indexed where() with ranges (string flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -1936,105 +1963,127 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't1col<=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=10)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 10)
-                    if p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=10)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 10) if p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=30, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 30, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=30,
+                                            step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 30, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Repeat second selection (testing caches)
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=30, step=2)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 30, 2)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=30,
+                                            step=2)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 30, 2)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
-            print "Selection results (indexed):", results1
-            print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Limits:", il, sl)
+            print("Selection results (indexed):", results1)
+            print("Should look like:", results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il<t1col)&(t1col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=-5)]
-        results2 = [p["var1"] for p in table2.iterrows(2, -5)  # Negative indices
-                    if (il < p["var1"] < sl)]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=-5)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, -5)  # Negative indices
+            if (il < p["var1"] < sl)
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=1, stop=-1, step=3)]
-        results2 = [p["var1"] for p in table2.iterrows(1, -1, 3)
-                    if p["var1"] >= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=1, stop=-1,
+                                            step=3)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(1, -1, 3)
+            if p["var1"] >= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
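The assertions above compare will_query_use_indexing() against a frozenset
of column pathnames: the method reports which indexed columns can serve a
given condition. A standalone sketch of that call (hypothetical file and
table names, assuming the PyTables 3.x API):

    import os
    import tables
    from tables import IsDescription, Int32Col

    class Row(IsDescription):
        var3 = Int32Col()

    fileh = tables.open_file('demo.h5', 'w')      # hypothetical scratch file
    table = fileh.create_table('/', 'table', Row)
    table.append([(i,) for i in range(100)])
    table.cols.var3.create_index()                # index the queried column

    sl = 50  # condition variables are resolved from the caller's namespace
    # Expected: a frozenset containing the pathname 'var3'
    print(table.will_query_use_indexing('var3 <= sl'))

    fileh.close()
    os.remove('demo.h5')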
@@ -2042,8 +2091,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking indexed where() with ranges (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2057,70 +2106,84 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't3col<=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
-        results1 = [p['var3'] for p in
-                    table1.where(condition, start=2, stop=10)]
-        results2 = [p["var3"] for p in table2.iterrows(2, 10)
-                    if p["var3"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
+        results1 = [
+            p['var3'] for p in table1.where(condition, start=2, stop=10)
+        ]
+        results2 = [
+            p["var3"] for p in table2.iterrows(2, 10)
+            if p["var3"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = '(il<=t3col)&(t3col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
-        results1 = [p['var3'] for p in
-                    table1.where(condition, start=2, stop=30, step=2)]
-        results2 = [p["var3"] for p in table2.iterrows(2, 30, 2)
-                    if il <= p["var3"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
+        results1 = [
+            p['var3'] for p in table1.where(condition, start=2, stop=30,
+                                            step=2)
+        ]
+        results2 = [
+            p["var3"] for p in table2.iterrows(2, 30, 2)
+            if il <= p["var3"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = '(il<t3col)&(t3col<sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
-        results1 = [p['var3'] for p in
-                    table1.where(condition, start=2, stop=-5)]
-        results2 = [p["var3"] for p in table2.iterrows(2, -5)  # Negative indices
-                    if (il < p["var3"] < sl)]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
+        results1 = [
+            p['var3'] for p in table1.where(condition, start=2, stop=-5)
+        ]
+        results2 = [
+            p["var3"] for p in table2.iterrows(2, -5)  # Negative indices
+            if (il < p["var3"] < sl)
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't3col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
         results1 = [p['var3'] for p in
                     table1.where(condition, start=1, stop=-1, step=3)]
         results2 = [p["var3"] for p in table2.iterrows(1, -1, 3)
@@ -2130,20 +2193,21 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test10c(self):
-        """Checking indexed where() with ranges, changing step (string flavor)"""
+        """Checking indexed where() with ranges, changing step (string
+        flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10c..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2158,7 +2222,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=2, stop=-1, step=3)]
         results2 = [p["var1"] for p in table2.iterrows(2, -1, 3)
@@ -2168,18 +2233,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=5, stop=-1, step=10)]
         results2 = [p["var1"] for p in table2.iterrows(5, -1, 10)
@@ -2189,18 +2255,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=5, stop=-3, step=11)]
         results2 = [p["var1"] for p in table2.iterrows(5, -3, 11)
@@ -2210,18 +2277,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't1col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=2, stop=-1, step=300)]
         results2 = [p["var1"] for p in table2.iterrows(2, -1, 300)
@@ -2231,11 +2299,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2243,8 +2311,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking indexed where() with ranges, changing step (int flavor)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10d..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10d..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2259,7 +2327,8 @@ class SelectValuesTestCase(unittest.TestCase):
         # First selection
         condition = 't3col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
         results1 = [p['var3'] for p in
                     table1.where(condition, start=2, stop=-1, step=3)]
         results2 = [p["var3"] for p in table2.iterrows(2, -1, 3)
@@ -2269,18 +2338,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection
         condition = 't3col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
         results1 = [p['var3'] for p in
                     table1.where(condition, start=5, stop=-1, step=10)]
         results2 = [p["var3"] for p in table2.iterrows(5, -1, 10)
@@ -2290,18 +2360,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Third selection
         condition = 't3col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
         results1 = [p['var3'] for p in
                     table1.where(condition, start=5, stop=-3, step=11)]
         results2 = [p["var3"] for p in table2.iterrows(5, -3, 11)
@@ -2311,18 +2382,19 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Fourth selection
         condition = 't3col>=sl'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t3col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t3col.pathname]))
         results1 = [p['var3'] for p in
                     table1.where(condition, start=2, stop=-1, step=300)]
         results2 = [p["var3"] for p in table2.iterrows(2, -1, 300)
@@ -2332,11 +2404,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2344,8 +2416,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values from an Index via read_coordinates()"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2358,7 +2430,9 @@ class SelectValuesTestCase(unittest.TestCase):
         t1var1 = table1.cols.var1
         condition = '(il<=t1var1)&(t1var1<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1var1.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1var1.pathname])
+        )
         coords1 = table1.get_where_list(condition)
         table1.flavor = "python"
         results1 = table1.read_coordinates(coords1, field="var1")
@@ -2370,8 +2444,8 @@ class SelectValuesTestCase(unittest.TestCase):
 #             print "Superior & inferior limits:", il, sl
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2379,8 +2453,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking selecting values after a Table.append() operation."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test12a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test12a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2416,10 +2490,10 @@ class SelectValuesTestCase(unittest.TestCase):
         t1var2 = table1.cols.var2
         t1var3 = table1.cols.var3
         t1var4 = table1.cols.var4
-        self.assertTrue(t1var1.index.dirty == False)
-        self.assertTrue(t1var2.index.dirty == False)
-        self.assertTrue(t1var3.index.dirty == False)
-        self.assertTrue(t1var4.index.dirty == False)
+        self.assertFalse(t1var1.index.dirty)
+        self.assertFalse(t1var2.index.dirty)
+        self.assertFalse(t1var3.index.dirty)
+        self.assertFalse(t1var4.index.dirty)
 
         # Do some selections and check the results
         # First selection: string
@@ -2436,21 +2510,20 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Superior & inferior limits:", il, sl
 #             print "Selection results (index):", results1
-            print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Should look like:", results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Second selection: bool
         results1 = [p["var2"] for p in table1.where('t1var2 == True')]
-        results2 = [p["var2"] for p in table2
-                    if p["var2"] == True]
+        results2 = [p["var2"] for p in table2 if p["var2"]]
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
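A note on the boolean selections: PyTables rows hand back numpy scalars,
and a numpy boolean is never the Python True singleton, so an identity
test would always fail where a truthiness or equality test matches. A
quick illustration:

    import numpy as np

    flag = np.bool_(True)   # what a Bool column row value typically is
    print(flag == True)     # True  -> value comparison works
    print(flag is True)     # False -> not the Python singleton object
    print(bool(flag))       # True  -> truthiness is the portable test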
@@ -2471,8 +2544,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2493,8 +2566,8 @@ class SelectValuesTestCase(unittest.TestCase):
         if verbose:
 #             print "Selection results (index):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1.sort(), results2.sort())
 
@@ -2502,8 +2575,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking repeated queries (checking caches)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13a..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2516,42 +2589,52 @@ class SelectValuesTestCase(unittest.TestCase):
         t1col = table1.cols.var1
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=30, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 30, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=30,
+                                            step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 30, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Repeat the selection (testing caches)
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=30, step=2)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 30, 2)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=30,
+                                            step=2)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 30, 2)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2559,8 +2642,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking repeated queries, varying step (checking caches)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13b..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2573,51 +2656,61 @@ class SelectValuesTestCase(unittest.TestCase):
         t1col = table1.cols.var1
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=30, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 30, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=30,
+                                            step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 30, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Repeat the selection (testing caches)
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=2, stop=30, step=2)]
-        results2 = [p["var1"] for p in table2.iterrows(2, 30, 2)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=2, stop=30,
+                                            step=2)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(2, 30, 2)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test13c(self):
-        """Checking repeated queries, varying start, stop, step"""
+        """Checking repeated queries, varying start, stop, step."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13c..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2630,51 +2723,60 @@ class SelectValuesTestCase(unittest.TestCase):
         t1col = table1.cols.var1
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=0, stop=1, step=2)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 1, 2)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=0, stop=1, step=2)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 1, 2)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Repeat the selection (testing caches)
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=0, stop=5, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 5, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=0, stop=5, step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 5, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
     def test13d(self):
-        """Checking repeated queries, varying start, stop, step (another twist)"""
+        """Checking repeated queries, varying start, stop, step (another
+        twist)"""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13d..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13d..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2687,42 +2789,51 @@ class SelectValuesTestCase(unittest.TestCase):
         t1col = table1.cols.var1
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=0, stop=1, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 1, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname])
+        )
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=0, stop=1, step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 1, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Repeat the selection (testing caches)
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=0, stop=1, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 1, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=0, stop=1, step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 1, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #            print "Selection results (indexed):", results1
 #            print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2730,8 +2841,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking repeated queries, with varying condition."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13e..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13e..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2744,21 +2855,26 @@ class SelectValuesTestCase(unittest.TestCase):
         t1col = table1.cols.var1
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=0, stop=10, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
-                    if il <= p["var1"] <= sl]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
+        results1 = [
+            p['var1'] for p in table1.where(condition, start=0, stop=10,
+                                            step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 10, 1)
+            if il <= p["var1"] <= sl
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2766,22 +2882,26 @@ class SelectValuesTestCase(unittest.TestCase):
         t2col = table1.cols.var2
         condition = '(il<=t1col)&(t1col<=sl)&(t2col==True)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname,
-                                                             t2col.pathname]))
-        results1 = [p['var1'] for p in
-                    table1.where(condition, start=0, stop=10, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
-                    if il <= p["var1"] <= sl and p["var2"] == True]
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname, t2col.pathname]))
+        results1 = [
+            p['var1'] for p in
+            table1.where(condition, start=0, stop=10, step=1)
+        ]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 10, 1)
+            if il <= p["var1"] <= sl and p["var2"]
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2789,8 +2909,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking repeated queries, with varying condition."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13f..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13f..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2809,28 +2929,32 @@ class SelectValuesTestCase(unittest.TestCase):
         self.assertTrue(t2col is not None)
         condition = '(il<=t1col)&(t1col<=sl)&(t2col==True)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=0, stop=10, step=1)]
-        results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
-                    if il <= p["var1"] <= sl and p["var2"] == True]
+        results2 = [
+            p["var1"] for p in table2.iterrows(0, 10, 1)
+            if il <= p["var1"] <= sl and p["var2"]
+        ]
         # sort lists (indexing does not guarantee that rows are returned in
         # order)
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
         # Repeat the selection with a simpler condition
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=0, stop=10, step=1)]
         results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
@@ -2840,11 +2964,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2852,7 +2976,8 @@ class SelectValuesTestCase(unittest.TestCase):
         constant = True
         condition = '(il<=t1col)&(t1col<=sl)&(t2col==constant)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=0, stop=10, step=1)]
         results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
@@ -2862,11 +2987,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2874,8 +2999,8 @@ class SelectValuesTestCase(unittest.TestCase):
         """Checking repeated queries, with different limits."""
 
         if verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13g..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13g..." % self.__class__.__name__)
 
         table1 = self.fileh.root.table1
         table2 = self.fileh.root.table2
@@ -2888,7 +3013,8 @@ class SelectValuesTestCase(unittest.TestCase):
         t1col = table1.cols.var1
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=0, stop=10, step=1)]
         results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
@@ -2898,11 +3024,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2913,7 +3039,8 @@ class SelectValuesTestCase(unittest.TestCase):
         self.assertTrue(t2col is not None)
         condition = '(il<=t1col)&(t1col<=sl)'
         self.assertTrue(
-            table1.will_query_use_indexing(condition) == fzset([t1col.pathname]))
+            table1.will_query_use_indexing(condition) ==
+            fzset([t1col.pathname]))
         results1 = [p['var1'] for p in
                     table1.where(condition, start=0, stop=10, step=1)]
         results2 = [p["var1"] for p in table2.iterrows(0, 10, 1)
@@ -2923,11 +3050,11 @@ class SelectValuesTestCase(unittest.TestCase):
         results1.sort()
         results2.sort()
         if verbose:
-            print "Limits:", il, sl
+            print("Limits:", il, sl)
 #             print "Selection results (indexed):", results1
 #             print "Should look like:", results2
-            print "Length results:", len(results1)
-            print "Should be:", len(results2)
+            print("Length results:", len(results1))
+            print("Should be:", len(results2))
         self.assertEqual(len(results1), len(results2))
         self.assertEqual(results1, results2)
 
@@ -2954,7 +3081,8 @@ class SV2aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 2
     buffersize = 2
-    ss = blocksizes[2]; nrows = ss * 2-1
+    ss = blocksizes[2]
+    nrows = ss * 2 - 1
     reopen = 1
     nrep = 1
     il = 0
@@ -2971,7 +3099,8 @@ class SV3aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 2
     buffersize = 3
-    ss = blocksizes[2]; nrows = ss * 5-1
+    ss = blocksizes[2]
+    nrows = ss * 5 - 1
     reopen = 1
     nrep = 3
     il = 0
@@ -2989,7 +3118,8 @@ class SV3bTestCase(SV3aTestCase):
 class SV4aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     buffersize = 10
-    ss = blocksizes[2]; nrows = ss * 3
+    ss = blocksizes[2]
+    nrows = ss * 3
     reopen = 0
     nrep = 1
     # il = nrows-cs
@@ -3005,7 +3135,8 @@ class SV4bTestCase(SV4aTestCase):
 
 class SV5aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
-    ss = blocksizes[2]; nrows = ss * 5
+    ss = blocksizes[2]
+    nrows = ss * 5
     reopen = 0
     nrep = 1
     il = 0
@@ -3018,7 +3149,8 @@ class SV5bTestCase(SV5aTestCase):
 
 class SV6aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
-    ss = blocksizes[2]; nrows = ss * 5 + 1
+    ss = blocksizes[2]
+    nrows = ss * 5 + 1
     reopen = 0
     cs = blocksizes[3]
     nrep = cs + 1
@@ -3033,7 +3165,8 @@ class SV6bTestCase(SV6aTestCase):
 class SV7aTestCase(SelectValuesTestCase):
     random = 1
     blocksizes = small_blocksizes
-    ss = blocksizes[2]; nrows = ss * 5 + 3
+    ss = blocksizes[2]
+    nrows = ss * 5 + 3
     reopen = 0
     cs = blocksizes[3]
     nrep = cs-1
@@ -3049,7 +3182,8 @@ class SV8aTestCase(SelectValuesTestCase):
     random = 0
     chunkshape = 1
     blocksizes = small_blocksizes
-    ss = blocksizes[2]; nrows = ss * 5-3
+    ss = blocksizes[2]
+    nrows = ss * 5 - 3
     reopen = 0
     cs = blocksizes[3]
     nrep = cs-1
@@ -3065,7 +3199,8 @@ class SV8bTestCase(SV8aTestCase):
 class SV9aTestCase(SelectValuesTestCase):
     random = 1
     blocksizes = small_blocksizes
-    ss = blocksizes[2]; nrows = ss * 5 + 11
+    ss = blocksizes[2]
+    nrows = ss * 5 + 11
     reopen = 0
     cs = blocksizes[3]
     nrep = cs-1
@@ -3082,7 +3217,8 @@ class SV10aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 1
     buffersize = 1
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 0
     nrep = ss
     il = 0
@@ -3103,7 +3239,8 @@ class SV11aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 1
     buffersize = 1
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 0
     nrep = ss
     il = 0
@@ -3118,7 +3255,8 @@ class SV11bTestCase(SelectValuesTestCase):
     chunkshape = 2
     buffersize = 2
     blocksizes = calc_chunksize(minRowIndex, memlevel=1)
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 0
     nrep = ss
     il = 0
@@ -3134,7 +3272,8 @@ class SV12aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 1
     buffersize = 1
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 0
     nrep = ss
     il = 0
@@ -3150,7 +3289,8 @@ class SV12bTestCase(SelectValuesTestCase):
     blocksizes = calc_chunksize(minRowIndex, memlevel=1)
     chunkshape = 2
     buffersize = 2
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 1
     nrep = ss
     il = 0
@@ -3162,7 +3302,8 @@ class SV13aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 3
     buffersize = 5
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 0
     nrep = ss
     il = 0
@@ -3174,7 +3315,8 @@ class SV13bTestCase(SelectValuesTestCase):
     blocksizes = calc_chunksize(minRowIndex, memlevel=1)
     chunkshape = 5
     buffersize = 10
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 1
     nrep = ss
     il = 0
@@ -3186,7 +3328,8 @@ class SV14aTestCase(SelectValuesTestCase):
     blocksizes = small_blocksizes
     chunkshape = 2
     buffersize = 5
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 0
     cs = blocksizes[3]
     nrep = cs
@@ -3199,7 +3342,8 @@ class SV14bTestCase(SelectValuesTestCase):
     blocksizes = calc_chunksize(minRowIndex, memlevel=1)
     chunkshape = 9
     buffersize = 10
-    ss = blocksizes[2]; nrows = ss
+    ss = blocksizes[2]
+    nrows = ss
     reopen = 1
     nrep = 9
     il = 0
@@ -3216,7 +3360,8 @@ class SV15aTestCase(SelectValuesTestCase):
     # seed = 1885
     seed = 183
     blocksizes = small_blocksizes
-    ss = blocksizes[2]; nrows = ss * 5 + 1
+    ss = blocksizes[2]
+    nrows = ss * 5 + 1
     reopen = 0
     cs = blocksizes[3]
     nrep = cs-1
@@ -3233,7 +3378,8 @@ class SV15bTestCase(SelectValuesTestCase):
     seed = 1885
     # seed = 183
     blocksizes = calc_chunksize(minRowIndex, memlevel=1)
-    ss = blocksizes[2]; nrows = ss * 5 + 1
+    ss = blocksizes[2]
+    nrows = ss * 5 + 1
     reopen = 1
     cs = blocksizes[3]
     nrep = cs-1
@@ -3245,7 +3391,8 @@ class LastRowReuseBuffers(common.PyTablesTestCase):
     # Test that checks for possible reuse of buffers coming
     # from last row in the sorted part of indexes
     nelem = 1221
-    numpy.random.seed(1); random.seed(1)
+    numpy.random.seed(1)
+    random.seed(1)
 
     class Record(IsDescription):
         id1 = Int16Col()
@@ -3265,8 +3412,9 @@ class LastRowReuseBuffers(common.PyTablesTestCase):
             idx = ta.get_where_list('id1 == %s' % value)
             self.assertTrue(len(idx) > 0,
                             "idx--> %s %s %s %s" % (idx, i, nrow, value))
-            self.assertTrue(nrow in idx,
-                            "nrow not found: %s != %s, %s" % (idx, nrow, value))
+            self.assertTrue(
+                nrow in idx,
+                "nrow not found: %s != %s, %s" % (idx, nrow, value))
 
         fp.close()
         os.remove(filename)
@@ -3286,8 +3434,9 @@ class LastRowReuseBuffers(common.PyTablesTestCase):
             idx = ta.get_where_list('id1 == %s' % value)
             self.assertTrue(len(idx) > 0,
                             "idx--> %s %s %s %s" % (idx, i, nrow, value))
-            self.assertTrue(nrow in idx,
-                            "nrow not found: %s != %s, %s" % (idx, nrow, value))
+            self.assertTrue(
+                nrow in idx,
+                "nrow not found: %s != %s, %s" % (idx, nrow, value))
 
         fp.close()
         os.remove(filename)
@@ -3307,8 +3456,9 @@ class LastRowReuseBuffers(common.PyTablesTestCase):
             idx = ta.get_where_list('id1 == %s' % value)
             self.assertTrue(len(idx) > 0,
                             "idx--> %s %s %s %s" % (idx, i, nrow, value))
-            self.assertTrue(nrow in idx,
-                            "nrow not found: %s != %s, %s" % (idx, nrow, value))
+            self.assertTrue(
+                nrow in idx,
+                "nrow not found: %s != %s, %s" % (idx, nrow, value))
 
         fp.close()
         os.remove(filename)
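
The three near-identical tests above all exercise the same query path. A compact
sketch of that pattern with illustrative file and table names (the demo file
name is made up, not taken from the suite):

    import tables

    class Record(tables.IsDescription):
        id1 = tables.Int16Col()

    with tables.open_file('reuse-demo.h5', 'w') as fp:
        ta = fp.create_table('/', 'table', Record,
                             filters=tables.Filters(complevel=1))
        ta.append([(i,) for i in range(100)])
        ta.flush()
        # get_where_list() returns the coordinates of all matching rows.
        idx = ta.get_where_list('id1 == 42')
        assert len(idx) > 0 and 42 in idx
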
@@ -3375,7 +3525,7 @@ def iclassdata():
 for (cname, cbasenames, cdict) in iclassdata():
     cbases = tuple(eval(cbase) for cbase in cbasenames)
     class_ = type(cname, cbases, cdict)
-    exec '%s = class_' % cname
+    exec('%s = class_' % cname)
 
 
 # -----------------------------
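
The loop above stamps out every parameterised SV*TestCase at import time, which
is why only the exec statement needed touching: Python 3 dropped the statement
form, while exec(...) parses under both interpreters. A minimal sketch of the
same idiom with a made-up class triple; the globals() assignment shown here is
an exec-free equivalent, not what the patch itself uses:

    import unittest

    def make_classes(classdata):
        for cname, cbases, cdict in classdata:
            class_ = type(cname, cbases, cdict)
            # exec('%s = class_' % cname) binds the name in this module;
            # writing into globals() does the same without exec.
            globals()[cname] = class_

    make_classes([('SVDemoTestCase', (unittest.TestCase,), {'reopen': 0})])
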
diff --git a/tables/tests/test_links.py b/tables/tests/test_links.py
index 53e9e5e..8c1b55f 100644
--- a/tables/tests/test_links.py
+++ b/tables/tests/test_links.py
@@ -10,18 +10,16 @@
 #
 ########################################################################
 
-"""Test module for diferent kind of links under PyTables"""
+"""Test module for diferent kind of links under PyTables."""
 
+from __future__ import print_function
 import os
 import unittest
 import tempfile
-import shutil
 
-import tables as t
+import tables
 from tables.tests import common
 
-from tables.link import ExternalLink
-
 
 # Test for hard links
 class HardLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
@@ -30,18 +28,15 @@ class HardLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.h5file.create_array('/', 'arr1', [1, 2])
         group1 = self.h5file.create_group('/', 'group1')
         arr2 = self.h5file.create_array(group1, 'arr2', [1, 2, 3])
-        lgroup1 = self.h5file.create_hard_link(
-            '/', 'lgroup1', '/group1')
+        lgroup1 = self.h5file.create_hard_link('/', 'lgroup1', '/group1')
         self.assertTrue(lgroup1 is not None)
-        larr1 = self.h5file.create_hard_link(
-            group1, 'larr1', '/arr1')
+        larr1 = self.h5file.create_hard_link(group1, 'larr1', '/arr1')
         self.assertTrue(larr1 is not None)
-        larr2 = self.h5file.create_hard_link(
-            '/', 'larr2', arr2)
+        larr2 = self.h5file.create_hard_link('/', 'larr2', arr2)
         self.assertTrue(larr2 is not None)
 
     def test00_create(self):
-        """Creating hard links"""
+        """Creating hard links."""
         self._createFile()
         self._checkEqualityGroup(self.h5file.root.group1,
                                  self.h5file.root.lgroup1,
@@ -54,7 +49,7 @@ class HardLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
                                 hardlink=True)
 
     def test01_open(self):
-        """Opening a file with hard links"""
+        """Opening a file with hard links."""
 
         self._createFile()
         self._reopen()
@@ -69,7 +64,7 @@ class HardLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
                                 hardlink=True)
 
     def test02_removeLeaf(self):
-        """Removing a hard link to a Leaf"""
+        """Removing a hard link to a Leaf."""
 
         self._createFile()
         # First delete the initial link
@@ -77,31 +72,31 @@ class HardLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('/arr1' not in self.h5file)
         # The second link should still be there
         if common.verbose:
-            print "Remaining link:", self.h5file.root.group1.larr1
+            print("Remaining link:", self.h5file.root.group1.larr1)
         self.assertTrue('/group1/larr1' in self.h5file)
         # Remove the second link
         self.h5file.root.group1.larr1.remove()
         self.assertTrue('/group1/larr1' not in self.h5file)
 
     def test03_removeGroup(self):
-        """Removing a hard link to a Group"""
+        """Removing a hard link to a Group."""
 
         self._createFile()
         if common.verbose:
-            print "Original object tree:", self.h5file
+            print("Original object tree:", self.h5file)
         # First delete the initial link
         self.h5file.root.group1._f_remove(force=True)
         self.assertTrue('/group1' not in self.h5file)
         # The second link should still be there
         if common.verbose:
-            print "Remaining link:", self.h5file.root.lgroup1
-            print "Object tree:", self.h5file
+            print("Remaining link:", self.h5file.root.lgroup1)
+            print("Object tree:", self.h5file)
         self.assertTrue('/lgroup1' in self.h5file)
         # Remove the second link
         self.h5file.root.lgroup1._g_remove(recursive=True)
         self.assertTrue('/lgroup1' not in self.h5file)
         if common.verbose:
-            print "Final object tree:", self.h5file
+            print("Final object tree:", self.h5file)
 
 
 # Test for soft links
@@ -111,18 +106,15 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.h5file.create_array('/', 'arr1', [1, 2])
         group1 = self.h5file.create_group('/', 'group1')
         arr2 = self.h5file.create_array(group1, 'arr2', [1, 2, 3])
-        lgroup1 = self.h5file.create_soft_link(
-            '/', 'lgroup1', '/group1')
+        lgroup1 = self.h5file.create_soft_link('/', 'lgroup1', '/group1')
         self.assertTrue(lgroup1 is not None)
-        larr1 = self.h5file.create_soft_link(
-            group1, 'larr1', '/arr1')
+        larr1 = self.h5file.create_soft_link(group1, 'larr1', '/arr1')
         self.assertTrue(larr1 is not None)
-        larr2 = self.h5file.create_soft_link(
-            '/', 'larr2', arr2)
+        larr2 = self.h5file.create_soft_link('/', 'larr2', arr2)
         self.assertTrue(larr2 is not None)
 
     def test00_create(self):
-        """Creating soft links"""
+        """Creating soft links."""
         self._createFile()
         self._checkEqualityGroup(self.h5file.root.group1,
                                  self.h5file.root.lgroup1())
@@ -132,7 +124,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
                                 self.h5file.root.larr2())
 
     def test01_open(self):
-        """Opening a file with soft links"""
+        """Opening a file with soft links."""
 
         self._createFile()
         self._reopen()
@@ -152,7 +144,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('/arr1' not in self.h5file)
         # The soft link should still be there (but dangling)
         if common.verbose:
-            print "Dangling link:", self.h5file.root.group1.larr1
+            print("Dangling link:", self.h5file.root.group1.larr1)
         self.assertTrue('/group1/larr1' in self.h5file)
         # Remove the soft link itself
         self.h5file.root.group1.larr1.remove()
@@ -171,7 +163,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('lgroup2' in root._v_children)
         self.assertTrue('lgroup2' in root._v_links)
         if common.verbose:
-            print "Copied link:", lgroup2
+            print("Copied link:", lgroup2)
         # Remove the first link
         lgroup1.remove()
         self._checkEqualityGroup(self.h5file.root.group1,
@@ -191,7 +183,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('lgroup2' in root._v_children)
         self.assertTrue('lgroup2' in root._v_links)
         if common.verbose:
-            print "Copied link:", lgroup2
+            print("Copied link:", lgroup2)
         # Remove the first link
         lgroup1.remove()
         self._checkEqualityGroup(self.h5file.root.group1,
@@ -207,7 +199,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         lgroup1.move(group2, 'lgroup2')
         lgroup2 = self.h5file.root.group2.lgroup2
         if common.verbose:
-            print "Moved link:", lgroup2
+            print("Moved link:", lgroup2)
         self.assertTrue('/lgroup1' not in self.h5file)
         self.assertTrue('/group2/lgroup2' in self.h5file)
         self._checkEqualityGroup(self.h5file.root.group1,
@@ -222,7 +214,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         lgroup1.rename('lgroup2')
         lgroup2 = self.h5file.root.lgroup2
         if common.verbose:
-            print "Moved link:", lgroup2
+            print("Moved link:", lgroup2)
         self.assertTrue('/lgroup1' not in self.h5file)
         self.assertTrue('/lgroup2' in self.h5file)
         self._checkEqualityGroup(self.h5file.root.group1,
@@ -238,7 +230,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         lgroup3 = self.h5file.create_soft_link(
             '/group1', 'lgroup3', 'group3')
         if common.verbose:
-            print "Relative path link:", lgroup3
+            print("Relative path link:", lgroup3)
         self.assertTrue('/group1/lgroup3' in self.h5file)
         self._checkEqualityGroup(self.h5file.root.group1.group3,
                                  self.h5file.root.group1.lgroup3())
@@ -253,7 +245,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         lgroup3 = self.h5file.create_soft_link(
             '/group1', 'lgroup3', './group3')
         if common.verbose:
-            print "Relative path link:", lgroup3
+            print("Relative path link:", lgroup3)
         self.assertTrue('/group1/lgroup3' in self.h5file)
         self._checkEqualityGroup(self.h5file.root.group1.group3,
                                  self.h5file.root.group1.lgroup3())
@@ -265,12 +257,12 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         links = [node._v_pathname for node in
                  self.h5file.walk_nodes('/', classname="Link")]
         if common.verbose:
-            print "detected links (classname='Link'):", links
+            print("detected links (classname='Link'):", links)
         self.assertEqual(links, ['/larr2', '/lgroup1', '/group1/larr1'])
         links = [node._v_pathname for node in
                  self.h5file.walk_nodes('/', classname="SoftLink")]
         if common.verbose:
-            print "detected links (classname='SoftLink'):", links
+            print("detected links (classname='SoftLink'):", links)
         self.assertEqual(links, ['/larr2', '/lgroup1', '/group1/larr1'])
 
     def test08__v_links(self):
@@ -279,11 +271,11 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._createFile()
         links = [node for node in self.h5file.root._v_links]
         if common.verbose:
-            print "detected links (under root):", links
+            print("detected links (under root):", links)
         self.assertEqual(len(links), 2)
         links = [node for node in self.h5file.root.group1._v_links]
         if common.verbose:
-            print "detected links (under /group1):", links
+            print("detected links (under /group1):", links)
         self.assertEqual(links, ['larr1'])
 
     def test09_link_to_link(self):
@@ -295,17 +287,17 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Dereference it once:
         self.assertTrue(lgroup2() is self.h5file.get_node('/lgroup1'))
         if common.verbose:
-            print "First dereference is correct:", lgroup2()
+            print("First dereference is correct:", lgroup2())
         # Dereference it twice:
         self.assertTrue(lgroup2()() is self.h5file.get_node('/group1'))
         if common.verbose:
-            print "Second dereference is correct:", lgroup2()()
+            print("Second dereference is correct:", lgroup2()())
 
     def test10_copy_link_to_file(self):
         """Checking copying a link to another file."""
         self._createFile()
         fname = tempfile.mktemp(".h5")
-        h5f = t.open_file(fname, "a")
+        h5f = tables.open_file(fname, "a")
         h5f.create_array('/', 'arr1', [1, 2])
         h5f.create_group('/', 'group1')
         lgroup1 = self.h5file.root.lgroup1
@@ -314,7 +306,7 @@ class SoftLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('/lgroup1' in h5f)
         self.assertTrue(lgroup1_ in h5f)
         if common.verbose:
-            print "Copied link:", lgroup1_, 'in:', lgroup1_._v_file.filename
+            print("Copied link:", lgroup1_, 'in:', lgroup1_._v_file.filename)
         h5f.close()
         os.remove(fname)
 
@@ -325,16 +317,23 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
     def tearDown(self):
         """Remove ``extfname``."""
         self.exth5file.close()
-        os.remove(self.extfname)   # comment this for debugging purposes only
         super(ExternalLinkTestCase, self).tearDown()
 
+        #open_files = tables.file._open_files
+        #if self.extfname in open_files:
+        #    #assert False
+        #    for handler in open_files.get_handlers_by_name(self.extfname):
+        #        handler.close()
+
+        os.remove(self.extfname)   # comment this out for debugging purposes only
+
     def _createFile(self):
         self.h5file.create_array('/', 'arr1', [1, 2])
         group1 = self.h5file.create_group('/', 'group1')
         self.h5file.create_array(group1, 'arr2', [1, 2, 3])
         # The external file
         self.extfname = tempfile.mktemp(".h5")
-        self.exth5file = t.open_file(self.extfname, "w")
+        self.exth5file = tables.open_file(self.extfname, "w")
         extarr1 = self.exth5file.create_array('/', 'arr1', [1, 2])
         self.assertTrue(extarr1 is not None)
         extgroup1 = self.exth5file.create_group('/', 'group1')
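
The commented-out block in the tearDown hunk above points at the file-handle
registry that accompanies the relaxed open policy: one path may have several
live handlers, so a robust cleanup closes them all before unlinking. A sketch
using only the private names the commented code itself references
(tables.file._open_files and its get_handlers_by_name method):

    import tables

    def close_all_handlers(filename):
        open_files = tables.file._open_files   # private registry of open files
        if filename in open_files:
            # Several handlers may exist for one path under the default
            # (non-strict) open policy; close each of them.
            for handler in list(open_files.get_handlers_by_name(filename)):
                handler.close()
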
@@ -350,10 +349,10 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue(larr2 is not None)
         # Re-open the external file in 'r'ead-only mode
         self.exth5file.close()
-        self.exth5file = t.open_file(self.extfname, "r")
+        self.exth5file = tables.open_file(self.extfname, "r")
 
     def test00_create(self):
-        """Creating soft links"""
+        """Creating soft links."""
         self._createFile()
         self._checkEqualityGroup(self.exth5file.root.group1,
                                  self.h5file.root.lgroup1())
@@ -363,7 +362,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
                                 self.h5file.root.larr2())
 
     def test01_open(self):
-        """Opening a file with soft links"""
+        """Opening a file with soft links."""
 
         self._createFile()
         self._reopen()
@@ -380,13 +379,13 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._createFile()
         # Re-open the external file in 'a'ppend mode
         self.exth5file.close()
-        self.exth5file = t.open_file(self.extfname, "a")
+        self.exth5file = tables.open_file(self.extfname, "a")
         # First delete the referred link
         self.exth5file.root.arr1.remove()
         self.assertTrue('/arr1' not in self.exth5file)
         # The external link should still be there (but dangling)
         if common.verbose:
-            print "Dangling link:", self.h5file.root.group1.larr1
+            print("Dangling link:", self.h5file.root.group1.larr1)
         self.assertTrue('/group1/larr1' in self.h5file)
         # Remove the external link itself
         self.h5file.root.group1.larr1.remove()
@@ -405,7 +404,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('lgroup2' in root._v_children)
         self.assertTrue('lgroup2' in root._v_links)
         if common.verbose:
-            print "Copied link:", lgroup2
+            print("Copied link:", lgroup2)
         # Remove the first link
         lgroup1.remove()
         self._checkEqualityGroup(self.exth5file.root.group1,
@@ -425,7 +424,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('lgroup2' in root._v_children)
         self.assertTrue('lgroup2' in root._v_links)
         if common.verbose:
-            print "Copied link:", lgroup2
+            print("Copied link:", lgroup2)
         # Remove the first link
         lgroup1.remove()
         self._checkEqualityGroup(self.exth5file.root.group1,
@@ -441,7 +440,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         lgroup1.move(group2, 'lgroup2')
         lgroup2 = self.h5file.root.group2.lgroup2
         if common.verbose:
-            print "Moved link:", lgroup2
+            print("Moved link:", lgroup2)
         self.assertTrue('/lgroup1' not in self.h5file)
         self.assertTrue('/group2/lgroup2' in self.h5file)
         self._checkEqualityGroup(self.exth5file.root.group1,
@@ -456,7 +455,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         lgroup1.rename('lgroup2')
         lgroup2 = self.h5file.root.lgroup2
         if common.verbose:
-            print "Moved link:", lgroup2
+            print("Moved link:", lgroup2)
         self.assertTrue('/lgroup1' not in self.h5file)
         self.assertTrue('/lgroup2' in self.h5file)
         self._checkEqualityGroup(self.exth5file.root.group1,
@@ -471,13 +470,13 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         links = [node._v_pathname for node in
                  self.h5file.walk_nodes('/', classname="Link")]
         if common.verbose:
-            print "detected links (classname='Link'):", links
+            print("detected links (classname='Link'):", links)
         self.assertEqual(links, ['/larr2', '/lgroup1',
                                  '/group1/larr1', '/group1/lgroup3'])
         links = [node._v_pathname for node in
                  self.h5file.walk_nodes('/', classname="ExternalLink")]
         if common.verbose:
-            print "detected links (classname='ExternalLink'):", links
+            print("detected links (classname='ExternalLink'):", links)
         self.assertEqual(links, ['/larr2', '/lgroup1', '/group1/larr1'])
 
     def test08__v_links(self):
@@ -486,11 +485,11 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._createFile()
         links = [node for node in self.h5file.root._v_links]
         if common.verbose:
-            print "detected links (under root):", links
+            print("detected links (under root):", links)
         self.assertEqual(len(links), 2)
         links = [node for node in self.h5file.root.group1._v_links]
         if common.verbose:
-            print "detected links (under /group1):", links
+            print("detected links (under /group1):", links)
         self.assertEqual(links, ['larr1'])
 
     def test09_umount(self):
@@ -510,7 +509,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         """Checking copying a link to another file."""
         self._createFile()
         fname = tempfile.mktemp(".h5")
-        h5f = t.open_file(fname, "a")
+        h5f = tables.open_file(fname, "a")
         h5f.create_array('/', 'arr1', [1, 2])
         h5f.create_group('/', 'group1')
         lgroup1 = self.h5file.root.lgroup1
@@ -519,7 +518,7 @@ class ExternalLinkTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self.assertTrue('/lgroup1' in h5f)
         self.assertTrue(lgroup1_ in h5f)
         if common.verbose:
-            print "Copied link:", lgroup1_, 'in:', lgroup1_._v_file.filename
+            print("Copied link:", lgroup1_, 'in:', lgroup1_._v_file.filename)
         h5f.close()
         os.remove(fname)
 
@@ -536,7 +535,8 @@ def suite():
     for i in range(niter):
         theSuite.addTest(unittest.makeSuite(HardLinkTestCase))
         theSuite.addTest(unittest.makeSuite(SoftLinkTestCase))
-        theSuite.addTest(unittest.makeSuite(ExternalLinkTestCase))
+        if tables.file._FILE_OPEN_POLICY != 'strict':
+            theSuite.addTest(unittest.makeSuite(ExternalLinkTestCase))
 
     return theSuite
 
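The suite() hunk above registers ExternalLinkTestCase only when PyTables is not
enforcing its strict open policy, because those tests reopen extfname while
another handle can still be alive. The guard in isolation, assuming nothing
beyond the private tables.file._FILE_OPEN_POLICY attribute read in the patch:

    import unittest
    import tables

    def build_suite(case, niter=1):
        suite = unittest.TestSuite()
        for _ in range(niter):
            # External-link tests need a second handle on the same file,
            # which the 'strict' policy forbids.
            if tables.file._FILE_OPEN_POLICY != 'strict':
                suite.addTest(unittest.makeSuite(case))
        return suite
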
diff --git a/tables/tests/test_lists.py b/tables/tests/test_lists.py
index ebb6c1e..d78cceb 100644
--- a/tables/tests/test_lists.py
+++ b/tables/tests/test_lists.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -14,9 +15,8 @@ unittest.TestCase.tearDown = common.cleanup
 
 def WriteRead(filename, testTuple):
     if common.verbose:
-        print '\n', '-=' * 30
-        print "Running test for object %s" % \
-            (type(testTuple))
+        print('\n', '-=' * 30)
+        print("Running test for object %s" % type(testTuple))
 
     # Create an instance of HDF5 Table
     fileh = open_file(filename, mode="w")
@@ -38,9 +38,9 @@ def WriteRead(filename, testTuple):
         b = root.somearray.read()
         # Compare them. They should be equal.
         if not a == b and common.verbose:
-            print "Write and read lists/tuples differ!"
-            print "Object written:", a
-            print "Object read:", b
+            print("Write and read lists/tuples differ!")
+            print("Object written:", a)
+            print("Object read:", b)
 
         # Check strictly the array equality
         assert a == b
@@ -126,9 +126,8 @@ class ExceptionTestCase(unittest.TestCase):
         "Non suppported lists objects (character objects)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running test for %s" % \
-                  (self.title)
+            print('\n', '-=' * 30)
+            print("Running test for %s" % (self.title))
         a = self.charList
         try:
             fname = tempfile.mktemp(".h5")
@@ -139,8 +138,8 @@ class ExceptionTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next error was catched!"
-                print type, ":", value
+                print("\nGreat!, the next error was catched!")
+                print(type, ":", value)
         else:
             self.fail("expected a ValueError")
 
@@ -157,8 +156,8 @@ class ExceptionTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next was catched!"
-                print value
+                print("\nGreat!, the next was catched!")
+                print(value)
         else:
             self.fail("expected an ValueError")
 
@@ -181,8 +180,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original first element:", a[0]
-            print "Read first element:", arr[0]
+            print("Original first element:", a[0])
+            print("Read first element:", arr[0])
         self.assertEqual(a[0], arr[0])
 
         # Close the file
@@ -201,8 +200,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original first element:", a[0]
-            print "Read first element:", arr[0]
+            print("Original first element:", a[0])
+            print("Read first element:", arr[0])
         self.assertEqual(a[0], arr[0])
 
         # Close the file
@@ -221,8 +220,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4]
-            print "Read elements:", arr[1:4]
+            print("Original elements:", a[1:4])
+            print("Read elements:", arr[1:4])
         self.assertEqual(a[1:4], arr[1:4])
 
         # Close the file
@@ -241,8 +240,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4]
-            print "Read elements:", arr[1:4]
+            print("Original elements:", a[1:4])
+            print("Read elements:", arr[1:4])
         self.assertEqual(a[1:4], arr[1:4])
 
         # Close the file
@@ -261,8 +260,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4:2]
-            print "Read elements:", arr[1:4:2]
+            print("Original elements:", a[1:4:2])
+            print("Read elements:", arr[1:4:2])
         self.assertEqual(a[1:4:2], arr[1:4:2])
 
         # Close the file
@@ -281,8 +280,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original elements:", a[1:4:2]
-            print "Read elements:", arr[1:4:2]
+            print("Original elements:", a[1:4:2])
+            print("Read elements:", arr[1:4:2])
         self.assertEqual(a[1:4:2], arr[1:4:2])
 
         # Close the file
@@ -301,8 +300,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last element:", a[-1]
-            print "Read last element:", arr[-1]
+            print("Original last element:", a[-1])
+            print("Read last element:", arr[-1])
         self.assertEqual(a[-1], arr[-1])
 
         # Close the file
@@ -321,8 +320,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original before last element:", a[-2]
-            print "Read before last element:", arr[-2]
+            print("Original before last element:", a[-2])
+            print("Read before last element:", arr[-2])
         self.assertEqual(a[-2], arr[-2])
 
         # Close the file
@@ -341,8 +340,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertEqual(a[-4:-1], arr[-4:-1])
 
         # Close the file
@@ -361,8 +360,8 @@ class GetItemTestCase(unittest.TestCase):
 
         # Get and compare an element
         if common.verbose:
-            print "Original last elements:", a[-4:-1]
-            print "Read last elements:", arr[-4:-1]
+            print("Original last elements:", a[-4:-1])
+            print("Read last elements:", arr[-4:-1])
         self.assertEqual(a[-4:-1], arr[-4:-1])
 
         # Close the file
@@ -390,13 +389,14 @@ class GI2ListTestCase(GetItemTestCase):
                        [3, 2, 1, 0, 4, 5, 6]]
 
     charList = [b"a", b"b"]
-    charListME = [[b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
-                  [b"21", b"21", b"11", b"02", b"42", b"21", b"61"],
-                  [b"31", b"21", b"12", b"21", b"41", b"51", b"621"],
-                  [b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
-                  [b"3241", b"2321", b"13216",
-                      b"0621", b"4421", b"5421", b"a621"],
-                  [b"a321", b"s221", b"d121", b"g021", b"b421", b"5vvv21", b"6zxzxs21"]]
+    charListME = [
+        [b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
+        [b"21", b"21", b"11", b"02", b"42", b"21", b"61"],
+        [b"31", b"21", b"12", b"21", b"41", b"51", b"621"],
+        [b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
+        [b"3241", b"2321", b"13216", b"0621", b"4421", b"5421", b"a621"],
+        [b"a321", b"s221", b"d121", b"g021", b"b421", b"5vvv21", b"6zxzxs21"],
+    ]
 
 
 class GeneratorTestCase(unittest.TestCase):
@@ -413,8 +413,8 @@ class GeneratorTestCase(unittest.TestCase):
         ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         self.assertEqual(ga, garr)
 
         # Close the file
@@ -438,8 +438,8 @@ class GeneratorTestCase(unittest.TestCase):
             ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         self.assertEqual(ga, garr)
 
         # Close the file
@@ -460,8 +460,8 @@ class GeneratorTestCase(unittest.TestCase):
         ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         self.assertEqual(ga, garr)
 
         # Close the file
@@ -485,8 +485,8 @@ class GeneratorTestCase(unittest.TestCase):
             ga = [i for i in a]
         garr = [i for i in arr]
         if common.verbose:
-            print "Result of original iterator:", ga
-            print "Result of read generator:", garr
+            print("Result of original iterator:", ga)
+            print("Result of read generator:", garr)
         self.assertEqual(ga, garr)
 
         # Close the file
@@ -515,13 +515,14 @@ class GE2ListTestCase(GeneratorTestCase):
                        [3, 2, 1, 0, 4, 5, 6]]
 
     charList = [b"a", b"b"]
-    charListME = [[b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
-                  [b"21", b"21", b"11", b"02", b"42", b"21", b"61"],
-                  [b"31", b"21", b"12", b"21", b"41", b"51", b"621"],
-                  [b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
-                  [b"3241", b"2321", b"13216",
-                      b"0621", b"4421", b"5421", b"a621"],
-                  [b"a321", b"s221", b"d121", b"g021", b"b421", b"5vvv21", b"6zxzxs21"]]
+    charListME = [
+        [b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
+        [b"21", b"21", b"11", b"02", b"42", b"21", b"61"],
+        [b"31", b"21", b"12", b"21", b"41", b"51", b"621"],
+        [b"321", b"221", b"121", b"021", b"421", b"521", b"621"],
+        [b"3241", b"2321", b"13216", b"0621", b"4421", b"5421", b"a621"],
+        [b"a321", b"s221", b"d121", b"g021", b"b421", b"5vvv21", b"6zxzxs21"],
+    ]
 
 
 def suite():
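
Most of the churn in this file is the print migration, and each converted
module gains the __future__ import for a reason: without it, Python 2 parses
print('a', 'b') as the print statement applied to a tuple and emits ('a', 'b');
with it, print is the Python 3 function under both interpreters. A two-line
illustration:

    from __future__ import print_function

    # Prints: Original first element: 1   (not a tuple) on Python 2 and 3.
    print('Original first element:', [1, 2, 3][0])
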
diff --git a/tables/tests/test_nestedtypes.py b/tables/tests/test_nestedtypes.py
index 6e38b1c..5dc8fd9 100644
--- a/tables/tests/test_nestedtypes.py
+++ b/tables/tests/test_nestedtypes.py
@@ -11,8 +11,9 @@
 #
 ########################################################################
 
-"""Test module for nested types under PyTables"""
+"""Test module for nested types under PyTables."""
 
+from __future__ import print_function
 import sys
 import unittest
 import itertools
@@ -129,11 +130,11 @@ testCondition = '(2 < col) & (col < 9)'
 
 
 def areDescriptionsEqual(desc1, desc2):
-    """
-    Are both `desc1` and `desc2` equivalent descriptions?
+    """Are both `desc1` and `desc2` equivalent descriptions?
 
     The arguments may be description objects (``IsDescription``,
     ``Description``) or dictionaries.
+
     """
 
     if isinstance(desc1, t.Col):
@@ -192,8 +193,8 @@ class DescriptionTestCase(common.PyTablesTestCase):
 
         descr = Description(self._TestTDescr().columns)
         if common.verbose:
-            print "Generated description:", descr._v_nested_descr
-            print "Should look like:", self._testADescr2
+            print("Generated description:", descr._v_nested_descr)
+            print("Should look like:", self._testADescr2)
         self.assertEqual(self._testADescr2, descr._v_nested_descr,
                          "Description._v_nested_descr does not match.")
 
@@ -206,9 +207,7 @@ class CreateTestCase(common.TempFileMixin, common.PyTablesTestCase):
     _testAData = testAData
 
     def _checkColumns(self, cols, desc):
-        """
-        Check that `cols` has all the accessors for `self._TestTDescr`.
-        """
+        """Check that `cols` has all the accessors for `self._TestTDescr`."""
 
         # ``_desc`` is a leaf column and ``cols`` a ``Column``.
         if isinstance(desc, t.Col):
@@ -226,9 +225,7 @@ class CreateTestCase(common.TempFileMixin, common.PyTablesTestCase):
         return True
 
     def _checkDescription(self, table):
-        """
-        Check that description of `table` matches `self._TestTDescr`.
-        """
+        """Check that description of `table` matches `self._TestTDescr`."""
 
         # Compare descriptions.
         self.assertTrue(
@@ -238,9 +235,7 @@ class CreateTestCase(common.TempFileMixin, common.PyTablesTestCase):
         self._checkColumns(table.cols, table.description)
 
     def _checkColinstances(self, table):
-        """
-        Check that ``colinstances`` and ``cols`` of `table` match.
-        """
+        """Check that ``colinstances`` and ``cols`` of `table` match."""
         for colpathname in table.description._v_pathnames:
             self.assertTrue(table.colinstances[colpathname]
                             is table.cols._f_col(colpathname))
@@ -281,8 +276,8 @@ class CreateTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tbl.flush()
         readAData = tbl.read()
         if common.verbose:
-            print "Read data:", readAData
-            print "Should look like:", self._testAData
+            print("Read data:", readAData)
+            print("Should look like:", self._testAData)
         self.assertTrue(common.areArraysEqual(self._testAData, readAData),
                         "Written and read values differ.")
 
@@ -458,8 +453,8 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         raReadTable = tbl.read()
         if common.verbose:
-            print "Table read:", raReadTable
-            print "Should look like:", raTable
+            print("Table read:", raReadTable)
+            print("Should look like:", raTable)
 
         # Compare it to the written one.
         self.assertTrue(common.areArraysEqual(raTable, raReadTable),
@@ -491,8 +486,8 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         raReadTable = tbl.read()
         if common.verbose:
-            print "Table read:", raReadTable
-            print "Should look like:", raTable
+            print("Table read:", raReadTable)
+            print("Should look like:", raTable)
 
         # Compare it to the written one.
         self.assertTrue(common.areArraysEqual(raTable, raReadTable),
@@ -508,13 +503,14 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         # Get the nested column data and swap the first and last rows.
         colnames = ['x', 'color']  # Get the first two columns
-        raCols = numpy.rec.fromarrays([self._testAData['x'].copy(),
-                                self._testAData['color'].copy()],
-                                dtype=[('x', '(2,)i4'), ('color', '1a2')])
-                               # descr=tbl.description._v_nested_descr[0:2])
-                               # or...
-                               # names=tbl.description._v_nested_names[0:2],
-                               # formats=tbl.description._v_nested_formats[0:2])
+        raCols = numpy.rec.fromarrays([
+            self._testAData['x'].copy(),
+            self._testAData['color'].copy()],
+            dtype=[('x', '(2,)i4'), ('color', '1a2')])
+            # descr=tbl.description._v_nested_descr[0:2])
+            # or...
+            # names=tbl.description._v_nested_names[0:2],
+            # formats=tbl.description._v_nested_formats[0:2])
         (raCols[0], raCols[-1]) = (raCols[-1].copy(), raCols[0].copy())
 
         # Write the resulting columns
@@ -530,8 +526,8 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
                                         tbl.cols._f_col('color')],
                                        dtype=raCols.dtype)
         if common.verbose:
-            print "Table read:", raCols2
-            print "Should look like:", raCols
+            print("Table read:", raCols2)
+            print("Should look like:", raCols)
 
         # Compare it to the written one.
         self.assertTrue(common.areArraysEqual(raCols, raCols2),
@@ -559,15 +555,15 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         raReadTable = tbl.read()
         if common.verbose:
-            print "Table read:", raReadTable
-            print "Should look like:", raTable
+            print("Table read:", raReadTable)
+            print("Should look like:", raTable)
 
         # Compare it to the written one.
         self.assertTrue(common.areArraysEqual(raTable, raReadTable),
                         "Written and read values differ.")
 
     def test07_index(self):
-        """Checking indexes of nested columns"""
+        """Checking indexes of nested columns."""
 
         tbl = self.h5file.create_table(
             '/', 'test', self._TestTDescr, title=self._getMethodName(),
@@ -585,8 +581,8 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
             coltoindex = tbl.cols._f_col(self._testCondCol)
 
         if common.verbose:
-            print "Number of written rows:", tbl.nrows
-            print "Number of indexed rows:", coltoindex.index.nelements
+            print("Number of written rows:", tbl.nrows)
+            print("Number of indexed rows:", coltoindex.index.nelements)
 
         # Check indexing flags:
         self.assertEqual(tbl.indexed, True, "Table not indexed")
@@ -600,8 +596,8 @@ class WriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         expectedCoords = numpy.arange(0, minRowIndex * 2, 2, SizeType)
         if common.verbose:
-            print "Searched coords:", searchedCoords
-            print "Expected coords:", expectedCoords
+            print("Searched coords:", searchedCoords)
+            print("Expected coords:", expectedCoords)
         # All even rows match the condition.
         self.assertEqual(searchedCoords.tolist(), expectedCoords.tolist(),
                          "Search returned incorrect results.")
@@ -643,7 +639,7 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
     _testNestedCol = testNestedCol
 
     def test00a_repr(self):
-        """Checking representation of a nested Table"""
+        """Checking representation of a nested Table."""
 
         tbl = self.h5file.create_table(
             '/', 'test', self._TestTDescr, title="test00")
@@ -654,8 +650,8 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
             tbl = self.h5file.root.test
 
         if common.verbose:
-            print "str(tbl)-->", str(tbl)
-            print "repr(tbl)-->", repr(tbl)
+            print("str(tbl)-->", str(tbl))
+            print("repr(tbl)-->", repr(tbl))
 
         self.assertEqual(str(tbl), "/test (Table(2,)) 'test00'")
         tblrepr = repr(tbl)
@@ -719,11 +715,15 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
         #
         # Also in this case it is genereted a representation string for each
         # of the possible default values.
-        enums = [', '.join(items) for items in
-                        itertools.permutations(("'r': 4", "'b': 1", "'g': 2"))]
+        enums = [
+            ', '.join(items) for items in itertools.permutations(
+                ("'r': 4", "'b': 1", "'g': 2"))
+        ]
         defaults = ('r', 'b', 'g')
-        values = [template % {'value': v, 'default': d}
-                                for v, d in itertools.product(enums, defaults)]
+        values = [
+            template % {'value': v, 'default': d}
+            for v, d in itertools.product(enums, defaults)
+        ]
         self.assertTrue(tblrepr in values)
 
     def test00b_repr(self):
@@ -738,8 +738,8 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
             tbl = self.h5file.root.test
 
         if common.verbose:
-            print "str(tbl.cols.y)-->'%s'" % str(tbl.cols.y)
-            print "repr(tbl.cols.y)-->'%s'" % repr(tbl.cols.y)
+            print("str(tbl.cols.y)-->'%s'" % str(tbl.cols.y))
+            print("repr(tbl.cols.y)-->'%s'" % repr(tbl.cols.y))
 
         self.assertEqual(str(tbl.cols.y),
                          "/test.cols.y (Column(2, 2, 2), float64, idx=None)")
@@ -758,8 +758,8 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
             tbl = self.h5file.root.test
 
         if common.verbose:
-            print "str(tbl.cols.Info.z2)-->'%s'" % str(tbl.cols.Info.z2)
-            print "repr(tbl.cols.Info.z2)-->'%s'" % repr(tbl.cols.Info.z2)
+            print("str(tbl.cols.Info.z2)-->'%s'" % str(tbl.cols.Info.z2))
+            print("repr(tbl.cols.Info.z2)-->'%s'" % repr(tbl.cols.Info.z2))
 
         self.assertEqual(str(tbl.cols.Info.z2),
                          "/test.cols.Info.z2 (Column(2,), uint8, idx=None)")
@@ -782,8 +782,8 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.read(start=0, step=2, field='Info')
         nrarrcols = nrarr['Info'][0::2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -806,8 +806,8 @@ class ReadTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.read(start=0, step=2, field='Info', out=all_cols)
         nrarrcols = nrarr['Info'][0::2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
         self.assertTrue(common.areArraysEqual(nrarr[0::2], all_cols),
@@ -878,8 +878,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
             tbl = self.h5file.root.test
 
         if common.verbose:
-            print "str(tbl.cols)-->", str(tbl.cols)
-            print "repr(tbl.cols)-->", repr(tbl.cols)
+            print("str(tbl.cols)-->", str(tbl.cols))
+            print("repr(tbl.cols)-->", repr(tbl.cols))
 
         self.assertEqual(str(tbl.cols), "/test.cols (Cols), 6 columns")
         try:
@@ -902,8 +902,7 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
   info (Cols(), Description)
   y (Column(0, 2, 2), ('%s', (2, 2)))
   z (Column(0,), uint8)
-""" % (numpy.int32(0).dtype.str, numpy.float64(0).dtype.str)
-                             )
+""" % (numpy.int32(0).dtype.str, numpy.float64(0).dtype.str))
 
     def test00b_repr(self):
         """Checking string representation of nested Cols."""
@@ -916,8 +915,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
             tbl = self.h5file.root.test
 
         if common.verbose:
-            print "str(tbl.cols.Info)-->", str(tbl.cols.Info)
-            print "repr(tbl.cols.Info)-->", repr(tbl.cols.Info)
+            print("str(tbl.cols.Info)-->", str(tbl.cols.Info))
+            print("repr(tbl.cols.Info)-->", repr(tbl.cols.Info))
 
         self.assertEqual(str(
             tbl.cols.Info), "/test.cols.Info (Cols), 5 columns")
@@ -942,7 +941,7 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         tblcol = tbl.cols._f_col(self._testNestedCol)
         if common.verbose:
-            print "Column group name:", tblcol._v_desc._v_pathname
+            print("Column group name:", tblcol._v_desc._v_pathname)
         self.assertEqual(tblcol._v_desc._v_pathname, self._testNestedCol,
                          "Column group name doesn't match.")
 
@@ -958,7 +957,7 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         tblcol = tbl.cols._f_col(self._testNestedCol + "/name")
         if common.verbose:
-            print "Column name:", tblcol.name
+            print("Column name:", tblcol.name)
         self.assertEqual(tblcol.name, "name", "Column name doesn't match.")
 
     def test01c_f_col(self):
@@ -969,7 +968,7 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         tblcol = tbl.cols._f_col(self._testNestedCol + "/Info2")
         if common.verbose:
-            print "Column group name:", tblcol._v_desc._v_pathname
+            print("Column group name:", tblcol._v_desc._v_pathname)
         self.assertEqual(tblcol._v_desc._v_pathname,
                          self._testNestedCol + "/Info2",
                          "Column group name doesn't match.")
@@ -986,7 +985,7 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         length = len(tbl.cols)
         if common.verbose:
-            print "Column group length:", length
+            print("Column group length:", length)
         self.assertEqual(length, len(tbl.colnames),
                          "Column group length doesn't match.")
 
@@ -1002,7 +1001,7 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         length = len(tbl.cols.Info)
         if common.verbose:
-            print "Column group length:", length
+            print("Column group length:", length)
         self.assertEqual(length, len(tbl.cols.Info._v_colnames),
                          "Column group length doesn't match.")
 
@@ -1021,8 +1020,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols[1]
         nrarrcols = nrarr[1]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -1041,8 +1040,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols[0:2]
         nrarrcols = nrarr[0:2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -1061,8 +1060,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols[0::2]
         nrarrcols = nrarr[0::2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -1081,8 +1080,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols._f_col('Info')[1]
         nrarrcols = nrarr['Info'][1]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -1101,14 +1100,14 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols._f_col('Info')[0:2]
         nrarrcols = nrarr['Info'][0:2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
     def test04c__getitem__(self):
-        """Checking cols.__getitem__() with subgroups with a range
-        index with step."""
+        """Checking cols.__getitem__() with subgroups with a range index with
+        step."""
 
         tbl = self.h5file.create_table(
             '/', 'test', self._TestTDescr, title=self._getMethodName())
@@ -1122,8 +1121,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols._f_col('Info')[0::2]
         nrarrcols = nrarr['Info'][0::2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -1142,8 +1141,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols._f_col('Info/value')[1]
         nrarrcols = nrarr['Info']['value'][1]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertEqual(nrarrcols, tblcols,
                          "Original array are retrieved doesn't match.")
 
@@ -1162,14 +1161,14 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols._f_col('Info/value')[0:2]
         nrarrcols = nrarr['Info']['value'][0:2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
     def test05c__getitem__(self):
-        """Checking cols.__getitem__() with a column with a range index
-        with step."""
+        """Checking cols.__getitem__() with a column with a range index with
+        step."""
 
         tbl = self.h5file.create_table(
             '/', 'test', self._TestTDescr, title=self._getMethodName())
@@ -1183,8 +1182,8 @@ class ColsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         tblcols = tbl.cols._f_col('Info/value')[0::2]
         nrarrcols = nrarr['Info']['value'][0::2]
         if common.verbose:
-            print "Read cols:", tblcols
-            print "Should look like:", nrarrcols
+            print("Read cols:", tblcols)
+            print("Should look like:", nrarrcols)
         self.assertTrue(common.areArraysEqual(nrarrcols, tblcols),
                         "Original array are retrieved doesn't match.")
 
@@ -1270,8 +1269,8 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         names = [col._v_pathname for col in tbl.description._f_walk(
             type="All")]
         if common.verbose:
-            print "Pathnames of columns:", names
-            print "Should look like:", self.correct_names
+            print("Pathnames of columns:", names)
+            print("Should look like:", self.correct_names)
         self.assertEqual(names, self.correct_names,
                          "Column nested names doesn't match.")
 
@@ -1288,8 +1287,8 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         names = [col._v_pathname for col in tbl.description._f_walk(
             type="All")]
         if common.verbose:
-            print "Pathnames of columns:", names
-            print "Should look like:", self.correct_names
+            print("Pathnames of columns:", names)
+            print("Should look like:", self.correct_names)
         self.assertEqual(names, self.correct_names,
                          "Column nested names doesn't match.")
 
@@ -1306,8 +1305,8 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         names = [col._v_pathname for col in tbl.description._f_walk(
             type="All")]
         if common.verbose:
-            print "Pathnames of columns:", names
-            print "Should look like:", self.correct_names
+            print("Pathnames of columns:", names)
+            print("Should look like:", self.correct_names)
         self.assertEqual(names, self.correct_names,
                          "Column nested names doesn't match.")
 
@@ -1324,8 +1323,8 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         names = [col._v_pathname for col in tbl.description._f_walk(
             type="All")]
         if common.verbose:
-            print "Pathnames of columns:", names
-            print "Should look like:", self.correct_names
+            print("Pathnames of columns:", names)
+            print("Should look like:", self.correct_names)
         self.assertEqual(names, self.correct_names,
                          "Column nested names doesn't match.")
 
@@ -1342,8 +1341,8 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         names = [col._v_pathname for col in tbl.description._f_walk(
             type="All")]
         if common.verbose:
-            print "Pathnames of columns:", names
-            print "Should look like:", self.correct_names
+            print("Pathnames of columns:", names)
+            print("Should look like:", self.correct_names)
         self.assertEqual(names, self.correct_names,
                          "Column nested names doesn't match.")
 
@@ -1360,8 +1359,8 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         names = [col._v_pathname for col in tbl.description._f_walk(
             type="All")]
         if common.verbose:
-            print "Pathnames of columns:", names
-            print "Should look like:", self.correct_names
+            print("Pathnames of columns:", names)
+            print("Should look like:", self.correct_names)
         self.assertEqual(names, self.correct_names,
                          "Column nested names doesn't match.")
 
@@ -1370,8 +1369,10 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         desc = {
             'nested': {
-            'i1': t.Int32Col(),
-            'i2': t.Int32Col()}}
+                'i1': t.Int32Col(),
+                'i2': t.Int32Col()
+            }
+        }
 
         i1 = 'nested/i1'
         i2 = 'nested/i2'
@@ -1401,10 +1402,10 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         i2res = [row[i2] for row in tbl.where('i2 < 10', cols)]
 
         if common.verbose:
-            print "Retrieved values (i1):", i1res
-            print "Should look like:", range(10)
-            print "Retrieved values (i2):", i2res
-            print "Should look like:", range(0, 10, 2)
+            print("Retrieved values (i1):", i1res)
+            print("Should look like:", range(10))
+            print("Retrieved values (i2):", i2res)
+            print("Should look like:", range(0, 10, 2))
 
         self.assertEqual(i1res, range(10),
                          "Select for nested column (i1) doesn't match.")
@@ -1416,10 +1417,14 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
         desc = {
             'nested1': {
-            'nested2': {
-            'nested3': {
-            'i1': t.Int32Col(),
-            'i2': t.Int32Col()}}}}
+                'nested2': {
+                    'nested3': {
+                        'i1': t.Int32Col(),
+                        'i2': t.Int32Col()
+                    }
+                }
+            }
+        }
 
         i1 = 'nested1/nested2/nested3/i1'
         i2 = 'nested1/nested2/nested3/i2'
@@ -1450,10 +1455,10 @@ class SameNestedTestCase(common.TempFileMixin, common.PyTablesTestCase):
         i2res = [row[i2] for row in tbl.where('i2 < 10', cols)]
 
         if common.verbose:
-            print "Retrieved values (i1):", i1res
-            print "Should look like:", range(10)
-            print "Retrieved values (i2):", i2res
-            print "Should look like:", range(0, 10, 2)
+            print("Retrieved values (i1):", i1res)
+            print("Should look like:", range(10))
+            print("Retrieved values (i2):", i2res)
+            print("Should look like:", range(0, 10, 2))
 
         self.assertEqual(i1res, range(10),
                          "Select for nested column (i1) doesn't match.")
@@ -1485,16 +1490,16 @@ class NestedTypesWithGaps(common.PyTablesTestCase):
         tbl = h5file.get_node('/nestedtype')
         type_descr = repr(tbl.description)
         if common.verbose:
-            print "Type size with no gaps:", tbl.description._v_itemsize
-            print "And should be: 13"
-            print "Representation of the nested type:\n", type_descr
-            print "And should be:\n", self.correct_descr
+            print("Type size with no gaps:", tbl.description._v_itemsize)
+            print("And should be: 13")
+            print("Representation of the nested type:\n", type_descr)
+            print("And should be:\n", self.correct_descr)
 
         self.assertEqual(tbl.description._v_itemsize, 13)
         self.assertEqual(type_descr, self.correct_descr)
 
         if common.verbose:
-            print "Great!  Nested types with gaps recognized correctly."
+            print("Great!  Nested types with gaps recognized correctly.")
 
         h5file.close()
 
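The 13-byte assertion above relies on PyTables reporting the packed
on-disk itemsize instead of a padded, aligned one.  A small sketch of
the packed case, assuming a NumPy dtype is accepted as the table
description (file name is made up):

    import numpy as np
    import tables

    # int8 + float32 + nested (int32 + int8) = 10 bytes, no padding
    dt = np.dtype([('a', 'i1'), ('b', 'f4'),
                   ('sub', [('c', 'i4'), ('d', 'i1')])])
    with tables.open_file("itemsize_demo.h5", "w") as h5f:
        tbl = h5f.create_table("/", "demo", description=dt)
        print(tbl.description._v_itemsize)  # 10
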
diff --git a/tables/tests/test_numpy.py b/tables/tests/test_numpy.py
index f78aeb5..314fa4a 100644
--- a/tables/tests/test_numpy.py
+++ b/tables/tests/test_numpy.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -24,11 +25,11 @@ else:
     typecodes += ['B', 'H', 'I', 'L', 'F', 'D']
 typecodes += ['b1']   # boolean
 
-if 'float16' in typeDict:
+if 'Float16Atom' in globals():
     typecodes.append('e')
-if 'float96' in typeDict or 'float128' in typeDict:
+if 'Float96Atom' in globals() or 'Float128Atom' in globals():
     typecodes.append('g')
-if 'complex192' in typeDict or 'conplex256' in typeDict:
+if 'Complex192Atom' in globals() or 'Complex256Atom' in globals():
     typecodes.append('G')
 
 byteorder = {'little': '<', 'big': '>'}[sys.byteorder]
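
The typeDict probes being replaced asked NumPy whether a scalar type
exists; the new checks ask the tables namespace instead, since the
extended-precision Atom classes are only defined on builds whose
NumPy provides the matching types.  Outside a "from tables import *"
module the same capability test can be spelled with hasattr (a
sketch, not upstream code):

    import tables

    have_float16 = hasattr(tables, 'Float16Atom')
    have_longdouble = any(hasattr(tables, name)
                          for name in ('Float96Atom', 'Float128Atom'))
    print(have_float16, have_longdouble)
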
@@ -36,16 +37,18 @@ byteorder = {'little': '<', 'big': '>'}[sys.byteorder]
 
 class BasicTestCase(unittest.TestCase):
     """Basic test for all the supported typecodes present in NumPy.
+
     All of them are included in PyTables.
+
     """
     endiancheck = 0
 
     def WriteRead(self, testArray):
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running test for array with typecode '%s'" % \
-                  testArray.dtype.char,
-            print "for class check:", self.title
+            print('\n', '-=' * 30)
+            print("Running test for array with typecode '%s'" %
+                  testArray.dtype.char, end=' ')
+            print("for class check:", self.title)
 
         # Create an instance of HDF5 Table
         self.file = tempfile.mktemp(".h5")
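
Many hunks in this file translate Python 2's trailing-comma print,
which suppresses the newline and appends a space, into the function
form.  The equivalence, runnable on both 2.6+ and 3.x:

    from __future__ import print_function

    for rank in range(1, 4):
        print("%3d," % rank, end=' ')  # was: print "%3d," % rank,
    print()  # finally terminate the progress line
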
@@ -71,14 +74,14 @@ class BasicTestCase(unittest.TestCase):
         # Compare them. They should be equal.
         # if not allequal(a,b, "numpy") and common.verbose:
         if common.verbose:
-            print "Array written:", a
-            print "Array written shape:", a.shape
-            print "Array written itemsize:", a.itemsize
-            print "Array written type:", a.dtype.char
-            print "Array read:", b
-            print "Array read shape:", b.shape
-            print "Array read itemsize:", b.itemsize
-            print "Array read type:", b.dtype.char
+            print("Array written:", a)
+            print("Array written shape:", a.shape)
+            print("Array written itemsize:", a.itemsize)
+            print("Array written type:", a.dtype.char)
+            print("Array read:", b)
+            print("Array read shape:", b.shape)
+            print("Array read itemsize:", b.itemsize)
+            print("Array read type:", b.dtype.char)
 
         type_ = self.root.somearray.atom.type
         # Check strictly the array equality
@@ -217,7 +220,9 @@ class Basic10DTestCase(BasicTestCase):
 
 class GroupsArrayTestCase(unittest.TestCase):
     """This test class checks combinations of arrays with groups.
+
     It also uses arrays with ranks ranging up to 10.
+
     """
 
     def test00_iterativeGroups(self):
@@ -226,9 +231,9 @@ class GroupsArrayTestCase(unittest.TestCase):
         """
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_iterativeGroups..." % \
-                  self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_iterativeGroups..." %
+                  self.__class__.__name__)
 
         # Open a new empty HDF5 file
         file = tempfile.mktemp(".h5")
@@ -243,7 +248,7 @@ class GroupsArrayTestCase(unittest.TestCase):
             # Save it on the HDF5 file
             dsetname = 'array_' + typecode
             if common.verbose:
-                print "Creating dataset:", group._g_join(dsetname)
+                print("Creating dataset:", group._g_join(dsetname))
             fileh.create_array(group, dsetname, a, "Large array")
             # Create a new group
             group = fileh.create_group(group, 'group' + str(i))
@@ -267,13 +272,13 @@ class GroupsArrayTestCase(unittest.TestCase):
             # Get the actual array
             b = dset.read()
             if not allequal(a, b, "numpy") and common.verbose:
-                print "Array a original. Shape: ==>", a.shape
-                print "Array a original. Data: ==>", a
-                print "Info from dataset:", dset._v_pathname
-                print "  shape ==>", dset.shape,
-                print "  dtype ==> %s" % dset.dtype
-                print "Array b read from file. Shape: ==>", b.shape,
-                print ". Type ==> %s" % b.dtype.char
+                print("Array a original. Shape: ==>", a.shape)
+                print("Array a original. Data: ==>", a)
+                print("Info from dataset:", dset._v_pathname)
+                print("  shape ==>", dset.shape, end=' ')
+                print("  dtype ==> %s" % dset.dtype)
+                print("Array b read from file. Shape: ==>", b.shape, end=' ')
+                print(". Type ==> %s" % b.dtype.char)
 
             self.assertEqual(a.shape, b.shape)
             if dtype('l').itemsize == 4:
@@ -324,21 +329,21 @@ class GroupsArrayTestCase(unittest.TestCase):
         maxrank = 32
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_largeRankArrays..." % \
-                  self.__class__.__name__
-            print "Maximum rank for tested arrays:", maxrank
+            print('\n', '-=' * 30)
+            print("Running %s.test01_largeRankArrays..." %
+                  self.__class__.__name__)
+            print("Maximum rank for tested arrays:", maxrank)
         # Open a new empty HDF5 file
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, mode="w")
         group = fileh.root
         if common.verbose:
-            print "Rank array writing progress: ",
+            print("Rank array writing progress: ", end=' ')
         for rank in range(minrank, maxrank + 1):
             # Create an array of integers, with incrementally bigger ranges
             a = ones((1,) * rank, 'i')
             if common.verbose:
-                print "%3d," % (rank),
+                print("%3d," % (rank), end=' ')
             fileh.create_array(group, "array", a, "Rank: %s" % rank)
             group = fileh.create_group(group, 'group' + str(rank))
         # Flush the buffers
@@ -350,8 +355,8 @@ class GroupsArrayTestCase(unittest.TestCase):
         fileh = open_file(file, mode="r")
         group = fileh.root
         if common.verbose:
-            print
-            print "Rank array reading progress: "
+            print()
+            print("Rank array reading progress: ")
         # Get the metadata on the previously saved arrays
         for rank in range(minrank, maxrank + 1):
             # Create an array for later comparison
@@ -359,13 +364,13 @@ class GroupsArrayTestCase(unittest.TestCase):
             # Get the actual array
             b = group.array.read()
             if common.verbose:
-                print "%3d," % (rank),
+                print("%3d," % (rank), end=' ')
             if not a.tolist() == b.tolist() and common.verbose:
-                print "Info from dataset:", dset._v_pathname
-                print "  Shape: ==>", dset.shape,
-                print "  typecode ==> %c" % dset.typecode
-                print "Array b read from file. Shape: ==>", b.shape,
-                print ". Type ==> %c" % b.dtype.char
+                print("Info from dataset:", dset._v_pathname)
+                print("  Shape: ==>", dset.shape, end=' ')
+                print("  typecode ==> %c" % dset.typecode)
+                print("Array b read from file. Shape: ==>", b.shape, end=' ')
+                print(". Type ==> %c" % b.dtype.char)
             self.assertEqual(a.shape, b.shape)
             if a.dtype.char == "i":
                 # Special exception. We have no way to distinguish between
@@ -381,7 +386,7 @@ class GroupsArrayTestCase(unittest.TestCase):
             group = fileh.get_node(group, 'group' + str(rank))
 
         if common.verbose:
-            print  # This flush the stdout buffer
+            print()  # This flushes the stdout buffer
         # Close the file
         fileh.close()
         # Delete the file
@@ -404,15 +409,15 @@ class Record(IsDescription):
     var12 = Float64Col(dflt=1.0)
     var13 = ComplexCol(itemsize=8, dflt=(1.+0.j))
     var14 = ComplexCol(itemsize=16, dflt=(1.+0.j))
-    if 'float16' in typeDict:
+    if 'Float16Col' in globals():
         var15 = Float16Col(dflt=1.0)
-    if 'float96' in typeDict:
+    if 'Float96Col' in globals():
         var16 = Float96Col(dflt=1.0)
-    if 'float128' in typeDict:
+    if 'Float128Col' in globals():
         var17 = Float128Col(dflt=1.0)
-    if 'complex196' in typeDict:
+    if 'Complex192Col' in globals():
         var18 = ComplexCol(itemsize=24, dflt=(1.+0.j))
-    if 'complex256' in typeDict:
+    if 'Complex256Col' in globals():
         var19 = ComplexCol(itemsize=32, dflt=(1.+0.j))
 
 
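The same presence test can guard optional columns when a description
is assembled as a dict; a minimal sketch using hasattr instead of
globals():

    import tables as t

    desc = {'var1': t.Float64Col(dflt=1.0)}
    # Extended-precision columns exist only on builds whose NumPy
    # provides the matching scalar type.
    if hasattr(t, 'Float128Col'):
        desc['var2'] = t.Float128Col(dflt=1.0)
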
@@ -436,7 +441,11 @@ class TableReadTestCase(common.PyTablesTestCase):
         common.cleanup(self)
 
     def test01_readTableChar(self):
-        """Checking column conversion into NumPy in read(). Char flavor"""
+        """Checking column conversion into NumPy in read().
+
+        Char flavor
+
+        """
 
         table = self.fileh.root.table
         table.flavor = "numpy"
@@ -451,17 +460,21 @@ class TableReadTestCase(common.PyTablesTestCase):
                 else:
                     orignumcol = array(['a']*self.nrows, dtype='S1')
                 if common.verbose:
-                    print "Typecode of NumPy column read:", nctypecode
-                    print "Should look like:", 'c'
-                    print "Itemsize of column:", itemsizecol
-                    print "Shape of NumPy column read:", numcol.shape
-                    print "Should look like:", orignumcol.shape
-                    print "First 3 elements of read col:", numcol[:3]
+                    print("Typecode of NumPy column read:", nctypecode)
+                    print("Should look like:", 'c')
+                    print("Itemsize of column:", itemsizecol)
+                    print("Shape of NumPy column read:", numcol.shape)
+                    print("Should look like:", orignumcol.shape)
+                    print("First 3 elements of read col:", numcol[:3])
                 # Check that both NumPy objects are equal
                 self.assertTrue(allequal(numcol, orignumcol, "numpy"))
 
     def test01_readTableNum(self):
-        """Checking column conversion into NumPy in read(). NumPy flavor"""
+        """Checking column conversion into NumPy in read().
+
+        NumPy flavor
+
+        """
 
         table = self.fileh.root.table
         table.flavor = "numpy"
@@ -471,18 +484,22 @@ class TableReadTestCase(common.PyTablesTestCase):
             nctypecode = typeNA[numcol.dtype.char[0]]
             if typecol != "string":
                 if common.verbose:
-                    print "Typecode of NumPy column read:", nctypecode
-                    print "Should look like:", typecol
+                    print("Typecode of NumPy column read:", nctypecode)
+                    print("Should look like:", typecol)
                 orignumcol = ones(shape=self.nrows, dtype=numcol.dtype.char)
                 # Check that both NumPy objects are equal
                 self.assertTrue(allequal(numcol, orignumcol, "numpy"))
 
     def test02_readCoordsChar(self):
-        """Column conversion into NumPy in readCoords(). Chars"""
+        """Column conversion into NumPy in readCoords().
+
+        Chars
+
+        """
 
         table = self.fileh.root.table
         table.flavor = "numpy"
-        coords = (1, 2, 3)
+        coords = [1, 2, 3]
         self.nrows = len(coords)
         for colname in table.colnames:
             numcol = table.read_coordinates(coords, field=colname)
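
read_coordinates() takes a sequence of row numbers and can narrow the
result to one field; the fixture change above from a tuple to a list
does not change that.  A self-contained sketch (file and node names
are made up):

    import tables

    with tables.open_file("coords_demo.h5", "w") as h5f:
        tbl = h5f.create_table("/", "demo", {'v': tables.Int32Col()})
        tbl.append([(i,) for i in range(10)])
        # field= returns just that column for the selected rows.
        print(tbl.read_coordinates([1, 2, 3], field='v'))  # [1 2 3]
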
@@ -495,21 +512,25 @@ class TableReadTestCase(common.PyTablesTestCase):
                 else:
                     orignumcol = array(['a']*self.nrows, dtype='S1')
                 if common.verbose:
-                    print "Typecode of NumPy column read:", nctypecode
-                    print "Should look like:", 'c'
-                    print "Itemsize of column:", itemsizecol
-                    print "Shape of NumPy column read:", numcol.shape
-                    print "Should look like:", orignumcol.shape
-                    print "First 3 elements of read col:", numcol[:3]
+                    print("Typecode of NumPy column read:", nctypecode)
+                    print("Should look like:", 'c')
+                    print("Itemsize of column:", itemsizecol)
+                    print("Shape of NumPy column read:", numcol.shape)
+                    print("Should look like:", orignumcol.shape)
+                    print("First 3 elements of read col:", numcol[:3])
                 # Check that both NumPy objects are equal
                 self.assertTrue(allequal(numcol, orignumcol, "numpy"))
 
     def test02_readCoordsNum(self):
-        """Column conversion into NumPy in read_coordinates(). NumPy."""
+        """Column conversion into NumPy in read_coordinates().
+
+        NumPy.
+
+        """
 
         table = self.fileh.root.table
         table.flavor = "numpy"
-        coords = (1, 2, 3)
+        coords = [1, 2, 3]
         self.nrows = len(coords)
         for colname in table.colnames:
             numcol = table.read_coordinates(coords, field=colname)
@@ -519,8 +540,8 @@ class TableReadTestCase(common.PyTablesTestCase):
                 if typecol == "int64":
                     return
                 if common.verbose:
-                    print "Type of read NumPy column:", type_
-                    print "Should look like:", typecol
+                    print("Type of read NumPy column:", type_)
+                    print("Should look like:", typecol)
                 orignumcol = ones(shape=self.nrows, dtype=numcol.dtype.char)
                 # Check that both NumPy objects are equal
                 self.assertTrue(allequal(numcol, orignumcol, "numpy"))
@@ -539,8 +560,8 @@ class TableReadTestCase(common.PyTablesTestCase):
                 numcol = numpy.array(numcol, typecol)
                 if common.verbose:
                     type_ = numcol.dtype.type
-                    print "Type of read NumPy column:", type_
-                    print "Should look like:", typecol
+                    print("Type of read NumPy column:", type_)
+                    print("Should look like:", typecol)
                 orignumcol = ones(shape=len(numcol), dtype=numcol.dtype.char)
                 # Check that both NumPy objects are equal
                 self.assertTrue(allequal(numcol, orignumcol, "numpy"))
@@ -564,10 +585,10 @@ class TableReadTestCase(common.PyTablesTestCase):
         # record = list(table[coords[0]])
         record = table.read(coords[0], coords[0] + 1)
         if common.verbose:
-            print """Original row:
+            print("""Original row:
 ['aasa', 'x', True, -24, 232, 232, 232, 232, 232L, 232, 232.0, 232.0, (232 + 0j), (232+0j), 232.0, (232+0j)]
-"""
-            print "Read row:\n", record
+""")
+            print("Read row:\n", record)
         self.assertEqual(record['var1'], b'aasa')
         self.assertEqual(record['var2'], b'x')
         self.assertEqual(record['var3'], True)
@@ -636,9 +657,9 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         table = self.fileh.root.table
         data = table[:]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the value of some columns
@@ -661,8 +682,8 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         npcol = zeros((3,), dtype=dtype)
         self.assertEqual(col.dtype.descr, npcol.dtype.descr)
         if common.verbose:
-            print "col-->", col
-            print "npcol-->", npcol
+            print("col-->", col)
+            print("npcol-->", npcol)
         # A copy() is needed in case the buffer can be in different segments
         self.assertEqual(bytes(col.copy().data), bytes(npcol.data))
 
@@ -675,9 +696,9 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         table = self.fileh.root.table
         data = table[::3]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the value of some columns
@@ -700,8 +721,8 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         npcol = zeros((3,), dtype=dtype)
         self.assertEqual(col.dtype.descr, npcol.dtype.descr)
         if common.verbose:
-            print "col-->", col
-            print "npcol-->", npcol
+            print("col-->", col)
+            print("npcol-->", npcol)
         # A copy() is needed in case the buffer can be in different segments
         self.assertEqual(bytes(col.copy().data), bytes(npcol.data))
 
@@ -714,9 +735,9 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         table = self.fileh.root.table
         data = table.get_where_list('z == 1')
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
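
get_where_list() returns the matching row numbers while read_where()
returns the matching rows themselves, so the two bracket the same
in-kernel query.  A sketch with illustrative names:

    import tables

    with tables.open_file("query_demo.h5", "w") as h5f:
        tbl = h5f.create_table("/", "demo", {'z': tables.Int32Col()})
        tbl.append([(i % 2,) for i in range(6)])
        print(tbl.get_where_list('z == 1'))    # row numbers: [1 3 5]
        print(tbl.read_where('z == 1')['z'])   # row values:  [1 1 1]
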
@@ -735,8 +756,8 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table.read_where('color == b"ab"')
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
@@ -753,8 +774,8 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table.read_where('z == 0')
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
@@ -779,17 +800,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table2
         data = table[:]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, npdata.dtype.descr)
         if common.verbose:
-            print "npdata-->", npdata
-            print "data-->", data
+            print("npdata-->", npdata)
+            print("data-->", data)
         # A copy() is needed in case the buffer would be in different segments
         self.assertEqual(bytes(data.copy().data), bytes(npdata.data))
 
@@ -805,17 +826,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table[-3:]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "Last 3 elements of read:", data[-3:]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("Last 3 elements of read:", data[-3:])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, npdata.dtype.descr)
         if common.verbose:
-            print "npdata-->", npdata
-            print "data-->", data
+            print("npdata-->", npdata)
+            print("data-->", data)
         # A copy() is needed in case the buffer would be in different segments
         self.assertEqual(bytes(data.copy().data), bytes(npdata.data))
 
@@ -830,10 +851,10 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table.cols.z[:]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
@@ -855,17 +876,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table.cols.y[3:6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, ycol.dtype.descr)
         if common.verbose:
-            print "ycol-->", ycol
-            print "data-->", data
+            print("ycol-->", ycol)
+            print("data-->", data)
         # A copy() is needed in case the buffer would be in different segments
         self.assertEqual(data.copy().data, ycol.data)
 
@@ -883,17 +904,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         ycol = zeros((3, 2, 2), 'float64')
         data = table.cols.y[3:6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, ycol.dtype.descr)
         if common.verbose:
-            print "ycol-->", ycol
-            print "data-->", data
+            print("ycol-->", ycol)
+            print("data-->", data)
         # A copy() is needed in case the buffer would be in different segments
         self.assertEqual(data.copy().data, ycol.data)
 
@@ -918,17 +939,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table.cols.Info[3:6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, npdata.dtype.descr)
         if common.verbose:
-            print "npdata-->", npdata
-            print "data-->", data
+            print("npdata-->", npdata)
+            print("data-->", data)
         # A copy() is needed in case the buffer would be in different segments
         self.assertEqual(bytes(data.copy().data), bytes(npdata.data))
 
@@ -954,17 +975,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             table = self.fileh.root.table
         data = table.cols.Info[3:6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, npdata.dtype.descr)
         if common.verbose:
-            print "npdata-->", npdata
-            print "data-->", data
+            print("npdata-->", npdata)
+            print("data-->", data)
         # A copy() is needed in case the buffer would be in different segments
         self.assertEqual(bytes(data.copy().data), bytes(npdata.data))
 
@@ -984,17 +1005,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         ycol = zeros((3, 2, 2), 'float64')-1
         data = table.cols.y[3:6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, ycol.dtype.descr)
         if common.verbose:
-            print "ycol-->", ycol
-            print "data-->", data
+            print("ycol-->", ycol)
+            print("data-->", data)
         self.assertTrue(allequal(ycol, data, "numpy"))
 
     def test07b_modifyingRows(self):
@@ -1014,17 +1035,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         ycol = zeros((3, 2, 2), 'float64')-1
         data = table.cols.y[3:6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, ycol.dtype.descr)
         if common.verbose:
-            print "ycol-->", ycol
-            print "data-->", data
+            print("ycol-->", ycol)
+            print("data-->", data)
         self.assertTrue(allequal(ycol, data, "numpy"))
 
     def test08a_modifyingRows(self):
@@ -1044,17 +1065,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         ycol = zeros((2, 2), 'float64')-1
         data = table.cols.y[6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, ycol.dtype.descr)
         if common.verbose:
-            print "ycol-->", ycol
-            print "data-->", data
+            print("ycol-->", ycol)
+            print("data-->", data)
         self.assertTrue(allequal(ycol, data, "numpy"))
 
     def test08b_modifyingRows(self):
@@ -1074,17 +1095,17 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         ycol = zeros((2, 2), 'float64')-1
         data = table.cols.y[6]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
-            print "Length of the data read:", len(data)
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
+            print("Length of the data read:", len(data))
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check the type
         self.assertEqual(data.dtype.descr, ycol.dtype.descr)
         if common.verbose:
-            print "ycol-->", ycol
-            print "data-->", data
+            print("ycol-->", ycol)
+            print("data-->", data)
         self.assertTrue(allequal(ycol, data, "numpy"))
 
     def test09a_getStrings(self):
@@ -1097,9 +1118,9 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         rdata = table.get_where_list('color == b"ab"')
         data = table.read_coordinates(rdata)
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
@@ -1109,7 +1130,11 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             self.assertEqual(idata, array("ab", dtype="|S4"))
 
     def test09b_getStrings(self):
-        """Checking the return of string columns with spaces. (modify)"""
+        """Checking the return of string columns with spaces.
+
+        (modify)
+
+        """
 
         if self.close:
             self.fileh.close()
@@ -1120,9 +1145,9 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
         table.flush()
         data = table[:]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
@@ -1136,7 +1161,11 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
                 self.assertEqual(idata, array("a  ", dtype="|S4"))
 
     def test09c_getStrings(self):
-        """Checking the return of string columns with spaces. (append)"""
+        """Checking the return of string columns with spaces.
+
+        (append)
+
+        """
 
         if self.close:
             self.fileh.close()
@@ -1152,9 +1181,9 @@ class TableNativeFlavorTestCase(common.PyTablesTestCase):
             self.fileh = open_file(self.file, "a")
         data = self.fileh.root.table[:]
         if common.verbose:
-            print "Type of read:", type(data)
-            print "Description of the record:", data.dtype.descr
-            print "First 3 elements of read:", data[:3]
+            print("Type of read:", type(data))
+            print("Description of the record:", data.dtype.descr)
+            print("First 3 elements of read:", data[:3])
         # Check that both NumPy objects are equal
         self.assertTrue(isinstance(data, ndarray))
         # Check that all columns have been selected
@@ -1209,8 +1238,8 @@ class AttributesTestCase(common.PyTablesTestCase):
         # Check the type
         self.assertEqual(data.dtype.descr, npcomp.dtype.descr)
         if common.verbose:
-            print "npcomp-->", npcomp
-            print "data-->", data
+            print("npcomp-->", npcomp)
+            print("data-->", data)
         self.assertTrue(allequal(npcomp, data, "numpy"))
 
     def test02_updateAttribute(self):
@@ -1234,8 +1263,8 @@ class AttributesTestCase(common.PyTablesTestCase):
         # Check the type
         self.assertEqual(data.dtype.descr, npcomp.dtype.descr)
         if common.verbose:
-            print "npcomp-->", npcomp
-            print "data-->", data
+            print("npcomp-->", npcomp)
+            print("data-->", data)
         self.assertTrue(allequal(npcomp, data, "numpy"))
 
 
@@ -1280,8 +1309,8 @@ class StrlenTestCase(common.PyTablesTestCase):
         str1 = self.table.col('Text')[0]
         str2 = self.table.col('Text')[1]
         if common.verbose:
-            print "string1-->", str1
-            print "string2-->", str2
+            print("string1-->", str1)
+            print("string2-->", str2)
         # Check that both NumPy objects are equal
         self.assertEqual(len(str1), len(b'Hello Francesc!'))
         self.assertEqual(len(str2), len(b'Hola Francesc!'))
diff --git a/tables/tests/test_queries.py b/tables/tests/test_queries.py
index 57f2856..457f14d 100644
--- a/tables/tests/test_queries.py
+++ b/tables/tests/test_queries.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""Test module for queries on datasets"""
+"""Test module for queries on datasets."""
 
 import re
 import sys
@@ -94,11 +94,11 @@ enum = tables.Enum(dict(('n%d' % i, i) for i in range(_maxnvalue)))
 # Table description
 # -----------------
 def append_columns(classdict, shape=()):
-    """
-    Append a ``Col`` of each PyTables data type to the `classdict`.
+    """Append a ``Col`` of each PyTables data type to the `classdict`.
 
     A column of a certain TYPE gets called ``c_TYPE``.  The number of
     added columns is returned.
+
     """
     heavy = common.heavy
     for (itype, type_) in enumerate(sorted(type_info.iterkeys())):
@@ -119,11 +119,11 @@ def append_columns(classdict, shape=()):
 
 
 def nested_description(classname, pos, shape=()):
-    """
-    Return a nested column description with all PyTables data types.
+    """Return a nested column description with all PyTables data types.
 
     A column of a certain TYPE gets called ``c_TYPE``.  The nested
     column will be placed in the position indicated by `pos`.
+
     """
     classdict = {}
     append_columns(classdict, shape=shape)
@@ -132,8 +132,7 @@ def nested_description(classname, pos, shape=()):
 
 
 def table_description(classname, nclassname, shape=()):
-    """
-    Return a table description for testing queries.
+    """Return a table description for testing queries.
 
     The description consists of all PyTables data types, both in the
     top level and in the ``c_nested`` nested column.  A column of a
@@ -142,6 +141,7 @@ def table_description(classname, nclassname, shape=()):
     used for all columns.  Finally, an extra indexed column
     ``c_idxextra`` is added as well in order to provide some basic
     tests for multi-index queries.
+
     """
     classdict = {}
     colpos = append_columns(classdict, shape)
@@ -177,8 +177,7 @@ table_data = {}
 
 
 def fill_table(table, shape, nrows):
-    """
-    Fill the given `table` with `nrows` rows of data.
+    """Fill the given `table` with `nrows` rows of data.
 
     Values in the i-th row (where 0 <= i < `row_period`) for a
     multidimensional field with M elements span from i to i + M-1.  For
@@ -186,6 +185,7 @@ def fill_table(table, shape, nrows):
 
     The same goes for the ``c_extra`` column, but values range from
     -`row_period`/2 to +`row_period`/2.
+
     """
     # Reuse already computed data if possible.
     tdata = table_data.get((shape, nrows))
@@ -230,8 +230,7 @@ def fill_table(table, shape, nrows):
 # ---------------
 class BaseTableQueryTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
-    """
-    Base test case for querying tables.
+    """Base test case for querying tables.
 
     Sub-classes must define the following attributes:
 
@@ -249,6 +248,7 @@ class BaseTableQueryTestCase(common.TempFileMixin, common.PyTablesTestCase):
         to index them.
     ``optlevel``
         The level of optimisation of column indexes.  Default is 0.
+
     """
 
     indexed = False
@@ -257,7 +257,7 @@ class BaseTableQueryTestCase(common.TempFileMixin, common.PyTablesTestCase):
     colNotIndexable_re = re.compile(r"\bcan not be indexed\b")
     condNotBoolean_re = re.compile(r"\bdoes not have a boolean type\b")
 
-    def createIndexes(self, colname, ncolname, extracolname):
+    def create_indexes(self, colname, ncolname, extracolname):
         if not self.indexed:
             return
         try:
@@ -269,7 +269,7 @@ class BaseTableQueryTestCase(common.TempFileMixin, common.PyTablesTestCase):
                     kind=self.kind, optlevel=self.optlevel,
                     _blocksizes=small_blocksizes, _testmode=True)
 
-        except TypeError, te:
+        except TypeError as te:
             if self.colNotIndexable_re.search(str(te)):
                 raise common.SkipTest(
                     "Columns of this type can not be indexed.")
@@ -318,13 +318,13 @@ extra_conditions = [
 
 
 class TableDataTestCase(BaseTableQueryTestCase):
-    """
-    Base test case for querying table data.
+    """Base test case for querying table data.
 
     Automatically created test method names have the format
     ``test_XNNNN``, where ``NNNN`` is the zero-padded test number and
     ``X`` indicates whether the test belongs to the light (``l``) or
     heavy (``h``) set.
+
     """
     _testfmt_light = 'test_l%04d'
     _testfmt_heavy = 'test_h%04d'
@@ -371,7 +371,7 @@ def create_test_method(type_, op, extracond):
         pycond = compile(pycond, '<string>', 'eval')
 
         table = self.table
-        self.createIndexes(colname, ncolname, 'c_idxextra')
+        self.create_indexes(colname, ncolname, 'c_idxextra')
 
         table_slice = dict(start=1, stop=table.nrows - 5, step=3)
         rownos, fvalues = None, None
@@ -415,10 +415,12 @@ def create_test_method(type_, op, extracond):
                 ptrownos = [table.get_where_list(cond, condvars, sort=True,
                                                  **table_slice)
                             for _ in range(2)]
-                ptfvalues = [table.read_where(cond, condvars, field=acolname,
-                                              **table_slice)
-                                             for _ in range(2)]
-            except TypeError, te:
+                ptfvalues = [
+                    table.read_where(cond, condvars, field=acolname,
+                                     **table_slice)
+                    for _ in range(2)
+                ]
+            except TypeError as te:
                 if self.condNotBoolean_re.search(str(te)):
                     raise common.SkipTest("The condition is not boolean.")
                 raise
@@ -577,7 +579,7 @@ for cdatafunc in [niclassdata, iclassdata]:
     for (cname, cbasenames, cdict) in cdatafunc():
         cbases = tuple(eval(cbase) for cbase in cbasenames)
         class_ = type(cname, cbases, cdict)
-        exec '%s = class_' % cname
+        exec('%s = class_' % cname)
 
 
 # Test cases on query usage
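
exec as a statement is a SyntaxError on Python 3, while the call form
used above parses on both lines.  A standalone sketch of the dynamic
class binding (the class name is illustrative):

    cname = 'GeneratedTestCase'
    class_ = type(cname, (object,), {'nrows': 50})
    exec('%s = class_' % cname)  # binds the name at module level
    print(GeneratedTestCase.nrows)  # -> 50
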
@@ -591,10 +593,10 @@ _gvar = None
 
 class ScalarTableUsageTestCase(ScalarTableMixin, BaseTableUsageTestCase):
 
-    """
-    Test case for query usage on scalar tables.
+    """Test case for query usage on scalar tables.
 
     This also tests for most usage errors and situations.
+
     """
 
     def test_empty_condition(self):
@@ -723,11 +725,11 @@ class MDTableUsageTestCase(MDTableMixin, BaseTableUsageTestCase):
 
 class IndexedTableUsage(ScalarTableMixin, BaseTableUsageTestCase):
 
-    """
-    Test case for query usage on indexed tables.
+    """Test case for query usage on indexed tables.
+
+    Indexing could be used in more cases, but it is expected to kick in
+    at least in the cases tested here.
 
-    Indexing could be used in more cases, but it is expected to kick
-    in at least in the cases tested here.
     """
     nrows = 50
     indexed = True
@@ -1070,8 +1072,9 @@ class IndexedTableUsage25(IndexedTableUsage):
         '~~~c_bool',
         '~~(~c_bool) & (c_extra != 2)',
     ]
-    idx_expr = [('c_bool', ('eq',), (False,)),
-                 ]
+    idx_expr = [
+        ('c_bool', ('eq',), (False,)),
+    ]
     str_expr = 'e0'
 
 
@@ -1092,10 +1095,11 @@ class IndexedTableUsage27(IndexedTableUsage):
         '(((c_int32 == 3) | (c_bool == True)) | (c_int32 == 5))' +
         ' & (c_extra > 0)',
         ]
-    idx_expr = [('c_int32', ('eq',), (3,)),
-                 ('c_bool', ('eq',), (True,)),
-                 ('c_int32', ('eq',), (5,)),
-                 ]
+    idx_expr = [
+        ('c_int32', ('eq',), (3,)),
+        ('c_bool', ('eq',), (True,)),
+        ('c_int32', ('eq',), (5,)),
+    ]
     str_expr = '((e0 | e1) | e2)'
 
 
@@ -1105,10 +1109,11 @@ class IndexedTableUsage28(IndexedTableUsage):
         '(((c_int32 == 3) | (c_bool == True)) & (c_int32 == 5))' +
         ' & (c_extra > 0)',
         ]
-    idx_expr = [('c_int32', ('eq',), (3,)),
-                 ('c_bool', ('eq',), (True,)),
-                 ('c_int32', ('eq',), (5,)),
-                 ]
+    idx_expr = [
+        ('c_int32', ('eq',), (3,)),
+        ('c_bool', ('eq',), (True,)),
+        ('c_int32', ('eq',), (5,)),
+    ]
     str_expr = '((e0 | e1) & e2)'
 
 
@@ -1118,10 +1123,11 @@ class IndexedTableUsage29(IndexedTableUsage):
         '((c_int32 == 3) | ((c_int32 == 4) & (c_int32 == 5)))' +
         ' & (c_extra > 0)',
         ]
-    idx_expr = [('c_int32', ('eq',), (4,)),
-                 ('c_int32', ('eq',), (5,)),
-                 ('c_int32', ('eq',), (3,)),
-                 ]
+    idx_expr = [
+        ('c_int32', ('eq',), (4,)),
+        ('c_int32', ('eq',), (5,)),
+        ('c_int32', ('eq',), (3,)),
+    ]
     str_expr = '((e0 & e1) | e2)'
 
 
@@ -1131,10 +1137,11 @@ class IndexedTableUsage30(IndexedTableUsage):
         '((c_int32 == 3) | (c_int32 == 4)) & (c_int32 == 5)' +
         ' & (c_extra > 0)',
         ]
-    idx_expr = [('c_int32', ('eq',), (3,)),
-                 ('c_int32', ('eq',), (4,)),
-                 ('c_int32', ('eq',), (5,)),
-                 ]
+    idx_expr = [
+        ('c_int32', ('eq',), (3,)),
+        ('c_int32', ('eq',), (4,)),
+        ('c_int32', ('eq',), (5,)),
+    ]
     str_expr = '((e0 | e1) & e2)'
 
 
@@ -1144,8 +1151,9 @@ class IndexedTableUsage31(IndexedTableUsage):
         '(c_extra > 0) & ((c_bool == True) & (c_extra < 5))',
         '((c_int32 > 0) | (c_extra > 0)) & (c_bool == True)',
         ]
-    idx_expr = [('c_bool', ('eq',), (True,)),
-                 ]
+    idx_expr = [
+        ('c_bool', ('eq',), (True,)),
+    ]
     str_expr = 'e0'
 
 
diff --git a/tables/tests/test_tables.py b/tables/tests/test_tables.py
index dd911de..c6396d1 100644
--- a/tables/tests/test_tables.py
+++ b/tables/tests/test_tables.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -10,6 +11,7 @@ import numpy as np
 from numpy import rec as records
 from numpy import testing as npt
 
+import tables
 from tables import *
 from tables.utils import SizeType, byteorders
 from tables.tests import common
@@ -35,18 +37,18 @@ class Record(IsDescription):
         0.+1.j), pos=8)  # Complex single precision
     var10 = ComplexCol(itemsize=16, dflt=(
         1.-0.j), pos=9)  # Complex double precision
-    if 'float16' in np.typeDict:
+    if 'Float16Col' in globals():
         var11 = Float16Col(dflt=6.4)               # float  (half-precision)
-    if 'float96' in np.typeDict:
+    if 'Float96Col' in globals():
         var12 = Float96Col(
             dflt=6.4)               # float  (extended precision)
-    if 'float128' in np.typeDict:
+    if 'Float128Col' in globals():
         var13 = Float128Col(
             dflt=6.4)              # float  (extended precision)
-    if 'complex192' in np.typeDict:
+    if 'Complex192Col' in globals():
         var14 = ComplexCol(itemsize=24, dflt=(
             1.-0.j))  # Complex double (extended precision)
-    if 'complex256' in np.typeDict:
+    if 'Complex256Col' in globals():
         var15 = ComplexCol(itemsize=32, dflt=(
             1.-0.j))  # Complex double (extended precision)
 
@@ -60,23 +62,25 @@ RecordDescriptionDict = {
     'var6': UInt16Col(dflt=5, pos=5),           # unsigned short integer
     'var7': StringCol(itemsize=1, dflt=b"e", pos=6),  # 1-character String
     'var8': BoolCol(dflt=True, pos=7),          # boolean
-    'var9': ComplexCol(itemsize=8, dflt=(0.+1.j), pos=8),   # Complex single precision
-    'var10': ComplexCol(itemsize=16, dflt=(1.-0.j), pos=9),  # Complex double precision
+    'var9': ComplexCol(itemsize=8, dflt=(0.+1.j), pos=8),
+                                                # Complex single precision
+    'var10': ComplexCol(itemsize=16, dflt=(1.-0.j), pos=9),
+                                                # Complex double precision
 }
 
-if 'float16' in np.typeDict:
+if 'Float16Col' in globals():
     RecordDescriptionDict['var11'] = Float16Col(
         dflt=6.4)    # float  (half-precision)
-if 'float96' in np.typeDict:
+if 'Float96Col' in globals():
     RecordDescriptionDict['var12'] = Float96Col(
         dflt=6.4)    # float  (extended precision)
-if 'float128' in np.typeDict:
+if 'Float128Col' in globals():
     RecordDescriptionDict['var13'] = Float128Col(
         dflt=6.4)   # float  (extended precision)
-if 'complex192' in np.typeDict:
+if 'Complex192Col' in globals():
     RecordDescriptionDict['var14'] = ComplexCol(itemsize=24, dflt=(
         1.-0.j))  # Complex double (extended precision)
-if 'complex256' in np.typeDict:
+if 'Complex256Col' in globals():
     RecordDescriptionDict['var15'] = ComplexCol(itemsize=32, dflt=(
         1.-0.j))  # Complex double (extended precision)
 
@@ -93,15 +97,15 @@ class OldRecord(IsDescription):
     var8 = Col.from_type("bool", shape=(), dflt=1, pos=7)
     var9 = ComplexCol(itemsize=8, shape=(), dflt=(0.+1.j), pos=8)
     var10 = ComplexCol(itemsize=16, shape=(), dflt=(1.-0.j), pos = 9)
-    if 'float16' in np.typeDict:
+    if 'Float16Col' in globals():
         var11 = Col.from_type("float16", (), 6.4)
-    if 'float96' in np.typeDict:
+    if 'Float96Col' in globals():
         var12 = Col.from_type("float96", (), 6.4)
-    if 'float128' in np.typeDict:
+    if 'Float128Col' in globals():
         var13 = Col.from_type("float128", (), 6.4)
-    if 'complex192' in np.typeDict:
+    if 'Complex192Col' in globals():
         var14 = ComplexCol(itemsize=24, shape=(), dflt=(1.-0.j))
-    if 'complex256' in np.typeDict:
+    if 'Complex256Col' in globals():
         var15 = ComplexCol(itemsize=32, shape=(), dflt=(1.-0.j))
 
 
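Col.from_type(), used by OldRecord above, builds a column from a
PyTables type name rather than a concrete *Col class, which lets the
optional types be referenced as plain strings.  A sketch:

    import tables as t

    c1 = t.Col.from_type("int32", shape=(), dflt=0, pos=0)
    c2 = t.Col.from_type("float64", (), 6.4)
    print(type(c1).__name__, type(c2).__name__)  # Int32Col Float64Col
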
@@ -164,27 +168,27 @@ class BasicTestCase(common.PyTablesTestCase):
                 tmplist.append([float(i)+0j, 1 + float(i)*1j])
             else:
                 tmplist.append(1 + float(i)*1j)
-            if 'float16' in np.typeDict:
+            if 'Float16Col' in globals():
                 if isinstance(row['var11'], np.ndarray):
                     tmplist.append(np.array((float(i),)*4))
                 else:
                     tmplist.append(float(i))
-            if 'float96' in np.typeDict:
+            if 'Float96Col' in globals():
                 if isinstance(row['var12'], np.ndarray):
                     tmplist.append(np.array((float(i),)*4))
                 else:
                     tmplist.append(float(i))
-            if 'float128' in np.typeDict:
+            if 'Float128Col' in globals():
                 if isinstance(row['var13'], np.ndarray):
                     tmplist.append(np.array((float(i),)*4))
                 else:
                     tmplist.append(float(i))
-            if 'complex192' in np.typeDict:
+            if 'Complex192Col' in globals():
                 if isinstance(row['var14'], np.ndarray):
                     tmplist.append([float(i)+0j, 1 + float(i)*1j])
                 else:
                     tmplist.append(1 + float(i)*1j)
-            if 'complex256' in np.typeDict:
+            if 'Complex256Col' in globals():
                 if isinstance(row['var15'], np.ndarray):
                     tmplist.append([float(i)+0j, 1 + float(i)*1j])
                 else:
@@ -247,27 +251,27 @@ class BasicTestCase(common.PyTablesTestCase):
                         row['var5'] = np.array((float(i),)*4)
                     else:
                         row['var5'] = float(i)
-                    if 'float16' in np.typeDict:
+                    if 'Float16Col' in globals():
                         if isinstance(row['var11'], np.ndarray):
                             row['var11'] = np.array((float(i),)*4)
                         else:
                             row['var11'] = float(i)
-                    if 'float96' in np.typeDict:
+                    if 'Float96Col' in globals():
                         if isinstance(row['var12'], np.ndarray):
                             row['var12'] = np.array((float(i),)*4)
                         else:
                             row['var12'] = float(i)
-                    if 'float128' in np.typeDict:
+                    if 'Float128Col' in globals():
                         if isinstance(row['var13'], np.ndarray):
                             row['var13'] = np.array((float(i),)*4)
                         else:
                             row['var13'] = float(i)
-                    if 'complex192' in np.typeDict:
+                    if 'Complex192Col' in globals():
                         if isinstance(row['var14'], np.ndarray):
                             row['var14'] = [float(i)+0j, 1 + float(i)*1j]
                         else:
                             row['var14'] = 1 + float(i)*1j
-                    if 'complex256' in np.typeDict:
+                    if 'Complex256Col' in globals():
                         if isinstance(row['var15'], np.ndarray):
                             row['var15'] = [float(i)+0j, 1 + float(i)*1j]
                         else:
@@ -294,7 +298,7 @@ class BasicTestCase(common.PyTablesTestCase):
     #----------------------------------------
 
     def test00_description(self):
-        """Checking table description and descriptive fields"""
+        """Checking table description and descriptive fields."""
 
         self.fileh = open_file(self.file)
 
@@ -321,7 +325,8 @@ class BasicTestCase(common.PyTablesTestCase):
         expectedNames = ['var%d' % n for n in range(1, fix_n_column + 1)]
         types = ("float16", "float96", "float128", "complex192", "complex256")
         for n, typename in enumerate(types, fix_n_column + 1):
-            if typename in np.typeDict:
+            name = typename.capitalize() + 'Col'
+            if name in globals():
                 expectedNames.append('var%d' % n)
 
         self.assertEqual(expectedNames, list(tbl.colnames))
@@ -351,9 +356,10 @@ class BasicTestCase(common.PyTablesTestCase):
         # Column defaults.
         for v in expectedNames:
             if common.verbose:
-                print "dflt-->", columns[v].dflt, type(columns[v].dflt)
-                print "coldflts-->", tbl.coldflts[v], type(tbl.coldflts[v])
-                print "desc.dflts-->", desc._v_dflts[v], type(desc._v_dflts[v])
+                print("dflt-->", columns[v].dflt, type(columns[v].dflt))
+                print("coldflts-->", tbl.coldflts[v], type(tbl.coldflts[v]))
+                print("desc.dflts-->", desc._v_dflts[v],
+                      type(desc._v_dflts[v]))
             self.assertTrue(areArraysEqual(tbl.coldflts[v], columns[v].dflt))
             self.assertTrue(areArraysEqual(desc._v_dflts[v], columns[v].dflt))
 
@@ -369,11 +375,11 @@ class BasicTestCase(common.PyTablesTestCase):
             self.assertEqual(expectedCol.type, col.type)
 
     def test01_readTable(self):
-        """Checking table read"""
+        """Checking table read."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -384,9 +390,9 @@ class BasicTestCase(common.PyTablesTestCase):
         # Read the records and select those with "var2" field less than 20
         result = [rec['var2'] for rec in table.iterrows() if rec['var2'] < 20]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==> ", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==> ", len(result))
         nrows = self.expectedrows - 1
         rec = list(table.iterrows())[-1]
         self.assertEqual((rec['var1'], rec['var2'], rec['var7']),
@@ -409,8 +415,9 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking table read (using Row.fetch_all_fields)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_fetch_all_fields..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_fetch_all_fields..." %
+                  self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
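
Note: Row.fetch_all_fields() copies the whole current row out of the iterator's internal buffer, so the returned records remain valid after the iteration advances, which is what lets the test keep result[-1] around afterwards. A minimal sketch, assuming tbl is an open Table handle:

    # keep standalone copies of the matching rows (hypothetical handle tbl)
    records = [r.fetch_all_fields() for r in tbl.iterrows() if r['var2'] < 20]
    last = records[-1]   # still usable; not tied to the live Row object
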
@@ -423,9 +430,9 @@ class BasicTestCase(common.PyTablesTestCase):
                   if rec['var2'] < 20]
         rec = result[-1]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==> ", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==> ", len(result))
         nrows = 20 - 1
         strnrows = "%04d" % (self.expectedrows - nrows)
         strnrows = strnrows.encode('ascii')
@@ -449,8 +456,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking table read (using Row[integer])"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_integer..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_integer..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -462,9 +469,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [rec[1] for rec in table.iterrows()
                   if rec['var2'] < 20]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Total selected records in table ==> ", len(result)
-            print "All results ==>", result
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Total selected records in table ==> ", len(result))
+            print("All results ==>", result)
         self.assertEqual(len(result), 20)
         self.assertEqual(result, range(20))
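
Note: the print() calls above behave identically on Python 2 and 3 provided the module enables the print function via a __future__ import (assumed to be added near the top of the file, outside this excerpt). A minimal sketch of the idiom, with placeholder values:

    from __future__ import print_function

    # one call form for both Python 2.6+ and Python 3
    print("Nrows in", "/table0", ":", 100)
    # end=' ' is the function-call spelling of Python 2's trailing comma
    print("Last record in table ==>", (b'0001', 0), end=' ')
    print("(continued on the same line)")
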
 
@@ -472,8 +479,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking table read (using Row[::2])"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_extslice..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_extslice..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -486,9 +493,9 @@ class BasicTestCase(common.PyTablesTestCase):
                   if rec['var2'] < 20]
         rec = result[-1]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==> ", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==> ", len(result))
         nrows = 20 - 1
         strnrows = "%04d" % (self.expectedrows - nrows)
         strnrows = strnrows.encode('ascii')
@@ -512,8 +519,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking table read (using Row['no-field'])"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_nofield..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_nofield..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -527,18 +534,19 @@ class BasicTestCase(common.PyTablesTestCase):
         except KeyError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next KeyError was catched!"
-                print value
+                print("\nGreat!, the next KeyError was catched!")
+                print(value)
         else:
-            print result
+            print(result)
             self.fail("expected a KeyError")
 
     def test01a_badtypefield(self):
         """Checking table read (using Row[{}])"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_badtypefield..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_badtypefield..." %
+                  self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -552,18 +560,18 @@ class BasicTestCase(common.PyTablesTestCase):
         except TypeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next TypeError was catched!"
-                print value
+                print("\nGreat!, the next TypeError was catched!")
+                print(value)
         else:
-            print result
+            print(result)
             self.fail("expected a TypeError")
 
     def test01b_readTable(self):
         """Checking table read and cuts (multidimensional columns case)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -575,11 +583,11 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [rec['var5'] for rec in table.iterrows()
                   if rec['var2'] < 20]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "rec['var5'] ==>", rec['var5'],
-            print "nrows ==>", table.nrows
-            print "Total selected records in table ==> ", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("rec['var5'] ==>", rec['var5'], end=' ')
+            print("nrows ==>", table.nrows)
+            print("Total selected records in table ==> ", len(result))
         nrows = table.nrows
         rec = list(table.iterrows())[-1]
         if isinstance(rec['var5'], np.ndarray):
@@ -628,8 +636,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking nested iterators (reading)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01c_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01c_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -642,7 +650,7 @@ class BasicTestCase(common.PyTablesTestCase):
                 if rec2['var2'] < 20:
                     result.append([rec['var2'], rec2['var2']])
         if common.verbose:
-            print "result ==>", result
+            print("result ==>", result)
 
         self.assertEqual(result, [[0, 0], [0, 1], [1, 0], [1, 1]])
 
@@ -650,8 +658,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking nested iterators (reading, mixed conditions)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01d_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01d_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -663,7 +671,7 @@ class BasicTestCase(common.PyTablesTestCase):
             for rec2 in table.where('var2 < 20', stop=2):
                 result.append([rec['var2'], rec2['var2']])
         if common.verbose:
-            print "result ==>", result
+            print("result ==>", result)
 
         self.assertEqual(result, [[0, 0], [0, 1], [1, 0], [1, 1]])
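
Note: Table.where() runs the selection in-kernel (the condition string is compiled with numexpr) and accepts the same start/stop/step bounds as iterrows(), which is what these mixed-condition tests combine. A minimal sketch, assuming tbl is an open Table handle:

    # iterate the first two rows, and for each one run a bounded query
    pairs = [(r1['var2'], r2['var2'])
             for r1 in tbl.iterrows(stop=2)
             for r2 in tbl.where('var2 < 20', stop=2)]
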
 
@@ -671,8 +679,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking nested iterators (reading, both conditions)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01e_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01e_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -684,7 +692,7 @@ class BasicTestCase(common.PyTablesTestCase):
             for rec2 in table.where('var2 < 3'):
                 result.append([rec['var2'], rec2['var3']])
         if common.verbose:
-            print "result ==>", result
+            print("result ==>", result)
 
         self.assertEqual(result,
                          [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]])
@@ -693,8 +701,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking nested iterators (reading, break in the loop)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01f_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01f_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -708,7 +716,7 @@ class BasicTestCase(common.PyTablesTestCase):
                     break
                 result.append([rec['var2'], rec2['var3']])
         if common.verbose:
-            print "result ==>", result
+            print("result ==>", result)
 
         self.assertEqual(result,
                          [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]])
@@ -717,8 +725,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking iterator with an evanescent table."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01g_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01g_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -730,23 +738,23 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(len(result), 20)
 
     def test02_AppendRows(self):
-        """Checking whether appending record rows works or not"""
+        """Checking whether appending record rows works or not."""
 
         # Now, open it, but in "append" mode
         self.fileh = open_file(self.file, mode="a")
         self.rootgroup = self.fileh.root
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_AppendRows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_AppendRows..." % self.__class__.__name__)
 
         # Get a table
         table = self.fileh.get_node("/group0/table1")
         # Get their row object
         row = table.row
         if common.verbose:
-            print "Nrows in old", table._v_pathname, ":", table.nrows
-            print "Record Format ==>", table.description._v_nested_formats
-            print "Record Size ==>", table.rowsize
+            print("Nrows in old", table._v_pathname, ":", table.nrows)
+            print("Record Format ==>", table.description._v_nested_formats)
+            print("Record Size ==>", table.rowsize)
         # Append some rows
         for i in xrange(self.appendrows):
             s = '%04d' % (self.appendrows - i)
@@ -774,27 +782,27 @@ class BasicTestCase(common.PyTablesTestCase):
                 row['var5'] = np.array((float(i),)*4)
             else:
                 row['var5'] = float(i)
-            if 'float16' in np.typeDict:
+            if 'Float16Col' in globals():
                 if isinstance(row['var11'], np.ndarray):
                     row['var11'] = np.array((float(i),)*4)
                 else:
                     row['var11'] = float(i)
-            if 'float96' in np.typeDict:
+            if 'Float96Col' in globals():
                 if isinstance(row['var12'], np.ndarray):
                     row['var12'] = np.array((float(i),)*4)
                 else:
                     row['var12'] = float(i)
-            if 'float128' in np.typeDict:
+            if 'Float128Col' in globals():
                 if isinstance(row['var13'], np.ndarray):
                     row['var13'] = np.array((float(i),)*4)
                 else:
                     row['var13'] = float(i)
-            if 'complex192' in np.typeDict:
+            if 'Complex192Col' in globals():
                 if isinstance(row['var14'], np.ndarray):
                     row['var14'] = [float(i)+0j, 1 + float(i)*1j]
                 else:
                     row['var14'] = 1 + float(i)*1j
-            if 'complex256' in np.typeDict:
+            if 'Complex256Col' in globals():
                 if isinstance(row['var15'], np.ndarray):
                     row['var15'] = [float(i)+0j, 1 + float(i)*1j]
                 else:
@@ -825,14 +833,14 @@ class BasicTestCase(common.PyTablesTestCase):
     # flushing them explicitly is being warned about from now on.
     # F. Alted 2006-08-03
     def _test02a_AppendRows(self):
-        """Checking appending records without flushing explicitely"""
+        """Checking appending records without flushing explicitely."""
 
         # Now, open it, but in "append" mode
         self.fileh = open_file(self.file, mode="a")
         self.rootgroup = self.fileh.root
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02a_AppendRows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02a_AppendRows..." % self.__class__.__name__)
 
         group = self.rootgroup
         for i in range(3):
@@ -843,9 +851,9 @@ class BasicTestCase(common.PyTablesTestCase):
             # Get their row object
             row = table.row
             if common.verbose:
-                print "Nrows in old", table._v_pathname, ":", table.nrows
-                print "Record Format ==>", table.description._v_nested_formats
-                print "Record Size ==>", table.rowsize
+                print("Nrows in old", table._v_pathname, ":", table.nrows)
+                print("Record Format ==>", table.description._v_nested_formats)
+                print("Record Size ==>", table.rowsize)
             # Append some rows
             for i in xrange(self.appendrows):
                 row['var1'] = '%04d' % (self.appendrows - i)
@@ -872,27 +880,27 @@ class BasicTestCase(common.PyTablesTestCase):
                     row['var5'] = np.array((float(i),)*4)
                 else:
                     row['var5'] = float(i)
-                if 'float16' in np.typeDict:
+                if 'Float16Col' in globals():
                     if isinstance(row['var11'], np.ndarray):
                         row['var11'] = np.array((float(i),)*4)
                     else:
                         row['var11'] = float(i)
-                if 'float96' in np.typeDict:
+                if 'Float96Col' in globals():
                     if isinstance(row['var12'], np.ndarray):
                         row['var12'] = np.array((float(i),)*4)
                     else:
                         row['var12'] = float(i)
-                if 'float128' in np.typeDict:
+                if 'Float128Col' in globals():
                     if isinstance(row['var13'], np.ndarray):
                         row['var13'] = np.array((float(i),)*4)
                     else:
                         row['var13'] = float(i)
-                if 'complex192' in np.typeDict:
+                if 'Complex192Col' in globals():
                     if isinstance(row['var14'], np.ndarray):
                         row['var14'] = [float(i)+0j, 1 + float(i)*1j]
                     else:
                         row['var14'] = 1 + float(i)*1j
-                if 'complex256' in np.typeDict:
+                if 'Complex256Col' in globals():
                     if isinstance(row['var15'], np.ndarray):
                         row['var15'] = [float(i)+0j, 1 + float(i)*1j]
                     else:
@@ -931,15 +939,15 @@ class BasicTestCase(common.PyTablesTestCase):
         self.fileh = open_file(self.file, mode="a")
         self.rootgroup = self.fileh.root
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_AppendRows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_AppendRows..." % self.__class__.__name__)
 
         # Get a table
         table = self.fileh.get_node("/group0/table1")
         if common.verbose:
-            print "Nrows in old", table._v_pathname, ":", table.nrows
-            print "Record Format ==>", table.description._v_nested_formats
-            print "Record Size ==>", table.rowsize
+            print("Nrows in old", table._v_pathname, ":", table.nrows)
+            print("Record Format ==>", table.description._v_nested_formats)
+            print("Record Size ==>", table.rowsize)
         # Set a small buffer size to make this test faster
         table.nrowsinbuf = 3
         # Get their row object
@@ -973,27 +981,27 @@ class BasicTestCase(common.PyTablesTestCase):
                 row['var5'] = np.array((float(i),)*4)
             else:
                 row['var5'] = float(i)
-            if 'float16' in np.typeDict:
+            if 'Float16Col' in globals():
                 if isinstance(row['var11'], np.ndarray):
                     row['var11'] = np.array((float(i),)*4)
                 else:
                     row['var11'] = float(i)
-            if 'float96' in np.typeDict:
+            if 'Float96Col' in globals():
                 if isinstance(row['var12'], np.ndarray):
                     row['var12'] = np.array((float(i),)*4)
                 else:
                     row['var12'] = float(i)
-            if 'float128' in np.typeDict:
+            if 'Float128Col' in globals():
                 if isinstance(row['var13'], np.ndarray):
                     row['var13'] = np.array((float(i),)*4)
                 else:
                     row['var13'] = float(i)
-            if 'complex192' in np.typeDict:
+            if 'Complex192Col' in globals():
                 if isinstance(row['var14'], np.ndarray):
                     row['var14'] = [float(i)+0j, 1 + float(i)*1j]
                 else:
                     row['var14'] = 1 + float(i)*1j
-            if 'complex256' in np.typeDict:
+            if 'Complex256Col' in globals():
                 if isinstance(row['var15'], np.ndarray):
                     row['var15'] = [float(i)+0j, 1 + float(i)*1j]
                 else:
@@ -1015,8 +1023,8 @@ class BasicTestCase(common.PyTablesTestCase):
         table.flush()
         result = [row['var2'] for row in table.iterrows() if row['var2'] < 20]
         if common.verbose:
-            print "Result length ==>", len(result)
-            print "Result contents ==>", result
+            print("Result length ==>", len(result))
+            print("Result contents ==>", result)
         self.assertEqual(len(result), 20 + 3 * table.nrowsinbuf)
         self.assertEqual(result, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
                                   10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
@@ -1027,8 +1035,8 @@ class BasicTestCase(common.PyTablesTestCase):
         # row['var7'] = row['var1'][-1]
         result7 = [row['var7'] for row in table.iterrows() if row['var2'] < 20]
         if common.verbose:
-            print "Result7 length ==>", len(result7)
-            print "Result7 contents ==>", result7
+            print("Result7 length ==>", len(result7))
+            print("Result7 contents ==>", result7)
         self.assertEqual(
             result7,
             [b'0', b'9', b'8', b'7', b'6', b'5', b'4', b'3', b'2', b'1',
@@ -1039,7 +1047,7 @@ class BasicTestCase(common.PyTablesTestCase):
     # the new policy of not doing a flush in the middle of a __del__
     # operation. F. Alted 2006-08-24
     def _test02c_AppendRows(self):
-        """Checking appending with evanescent table objects"""
+        """Checking appending with evanescent table objects."""
 
         # This test is kind of magic, but it is a good sanity check anyway.
 
@@ -1047,15 +1055,15 @@ class BasicTestCase(common.PyTablesTestCase):
         self.fileh = open_file(self.file, mode="a")
         self.rootgroup = self.fileh.root
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02c_AppendRows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02c_AppendRows..." % self.__class__.__name__)
 
         # Get a table
         table = self.fileh.get_node("/group0/table1")
         if common.verbose:
-            print "Nrows in old", table._v_pathname, ":", table.nrows
-            print "Record Format ==>", table.description._v_nested_formats
-            print "Record Size ==>", table.rowsize
+            print("Nrows in old", table._v_pathname, ":", table.nrows)
+            print("Record Format ==>", table.description._v_nested_formats)
+            print("Record Size ==>", table.rowsize)
         # Set a small buffer size to make this test faster
         table.nrowsinbuf = 3
         # Get their row object
@@ -1090,8 +1098,8 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [row['var2'] for row in table.iterrows()
                   if 100 <= row['var2'] < 122]
         if common.verbose:
-            print "Result length ==>", len(result)
-            print "Result contents ==>", result
+            print("Result length ==>", len(result))
+            print("Result contents ==>", result)
         self.assertEqual(len(result), 22)
         self.assertEqual(
             result,
@@ -1107,15 +1115,15 @@ class BasicTestCase(common.PyTablesTestCase):
         self.fileh = open_file(self.file, mode="a")
         self.rootgroup = self.fileh.root
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02d_AppendRows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02d_AppendRows..." % self.__class__.__name__)
 
         # Get a table
         table = self.fileh.get_node("/group0/table1")
         if common.verbose:
-            print "Nrows in old", table._v_pathname, ":", table.nrows
-            print "Record Format ==>", table.description._v_nested_formats
-            print "Record Size ==>", table.rowsize
+            print("Nrows in old", table._v_pathname, ":", table.nrows)
+            print("Record Format ==>", table.description._v_nested_formats)
+            print("Record Size ==>", table.rowsize)
         # Set a small buffer size to make this test faster
         table.nrowsinbuf = 3
         # Get their row object
@@ -1134,8 +1142,8 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [row['var2'] for row in table.iterrows()
                   if 100 <= row['var2'] < 120]
         if common.verbose:
-            print "Result length ==>", len(result)
-            print "Result contents ==>", result
+            print("Result length ==>", len(result))
+            print("Result contents ==>", result)
         if table.nrows > 119:
             # Case for big tables
             self.assertEqual(len(result), 39)
@@ -1171,7 +1179,7 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(newnrows, oldnrows + 1,
                          "Append to alive table failed.")
 
-        if self.fileh._aliveNodes.nodeCacheSlots == 0:
+        if self.fileh._node_manager.cache.nslots == 0:
             # Skip this test from here on because the second case
             # won't work when there is no node cache.
             return
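
Note: the guard above follows an internal reorganization: the File's LRU node cache now lives behind a NodeManager object, and its size is read from _node_manager.cache.nslots. The size itself is still set through the NODE_CACHE_SLOTS parameter, which open_file accepts as a keyword argument; a minimal sketch (file name hypothetical, and _node_manager is an internal attribute that may change):

    import tables

    # NODE_CACHE_SLOTS=0 disables node caching -- the case skipped above
    h5 = tables.open_file("demo.h5", mode="a", NODE_CACHE_SLOTS=0)
    assert h5._node_manager.cache.nslots == 0
    h5.close()
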
@@ -1187,11 +1195,11 @@ class BasicTestCase(common.PyTablesTestCase):
 
     # CAVEAT: The next test only works for tables with rows < 2**15
     def test03_endianess(self):
-        """Checking if table is endianess aware"""
+        """Checking if table is endianess aware."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_endianess..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_endianess..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -1200,11 +1208,11 @@ class BasicTestCase(common.PyTablesTestCase):
         # Read the records and select the ones with "var3" column less than 20
         result = [rec['var2'] for rec in table.iterrows() if rec['var3'] < 20]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "On-disk byteorder ==>", table.byteorder
-            print "Last record in table ==>", rec
-            print "Selected records ==>", result
-            print "Total selected records in table ==>", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("On-disk byteorder ==>", table.byteorder)
+            print("Last record in table ==>", rec)
+            print("Selected records ==>", result)
+            print("Total selected records in table ==>", len(result))
         nrows = self.expectedrows - 1
         self.assertEqual(table.byteorder,
                          {"little": "big", "big": "little"}[sys.byteorder])
@@ -1213,11 +1221,11 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(len(result), 20)
 
     def test04_delete(self):
-        """Checking whether a single row can be deleted"""
+        """Checking whether a single row can be deleted."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_delete..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_delete..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "a")
@@ -1227,9 +1235,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result[-1]
-            print "Total selected records in table ==>", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result[-1])
+            print("Total selected records in table ==>", len(result))
 
         nrows = table.nrows
         table.nrowsinbuf = 3  # small value of the buffer
@@ -1240,9 +1248,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result2 = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result2[-1]
-            print "Total selected records in table ==>", len(result2)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result2[-1])
+            print("Total selected records in table ==>", len(result2))
 
         self.assertEqual(table.nrows, nrows - 1)
         self.assertEqual(table.shape, (nrows - 1,))
@@ -1251,11 +1259,11 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(result[:-1], result2)
 
     def test04a_delete(self):
-        """Checking whether a single row can be deleted"""
+        """Checking whether a single row can be deleted."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_delete..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_delete..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "a")
@@ -1265,9 +1273,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result[-1]
-            print "Total selected records in table ==>", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result[-1])
+            print("Total selected records in table ==>", len(result))
 
         nrows = table.nrows
         table.nrowsinbuf = 3  # small value of the buffer
@@ -1278,9 +1286,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result2 = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result2[-1]
-            print "Total selected records in table ==>", len(result2)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result2[-1])
+            print("Total selected records in table ==>", len(result2))
 
         self.assertEqual(table.nrows, nrows - 1)
         self.assertEqual(table.shape, (nrows - 1,))
@@ -1289,11 +1297,11 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(result[:-1], result2)
 
     def test04b_delete(self):
-        """Checking whether a range of rows can be deleted"""
+        """Checking whether a range of rows can be deleted."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04b_delete..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04b_delete..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "a")
@@ -1303,9 +1311,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result[-1]
-            print "Total selected records in table ==>", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result[-1])
+            print("Total selected records in table ==>", len(result))
 
         nrows = table.nrows
         table.nrowsinbuf = 4  # small value of the buffer
@@ -1316,9 +1324,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result2 = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result2[-1]
-            print "Total selected records in table ==>", len(result2)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result2[-1])
+            print("Total selected records in table ==>", len(result2))
 
         self.assertEqual(table.nrows, nrows - 10)
         self.assertEqual(table.shape, (nrows - 10,))
@@ -1327,11 +1335,11 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(result[:10], result2)
 
     def test04c_delete(self):
-        """Checking whether removing a bad range of rows is detected"""
+        """Checking whether removing a bad range of rows is detected."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04c_delete..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04c_delete..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "a")
@@ -1349,9 +1357,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result2 = [r['var2'] for r in table.iterrows() if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result2[-1]
-            print "Total selected records in table ==>", len(result2)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result2[-1])
+            print("Total selected records in table ==>", len(result2))
 
         self.assertEqual(table.nrows, 10)
         self.assertEqual(table.shape, (10,))
@@ -1360,11 +1368,11 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(result[:10], result2)
 
     def test04d_delete(self):
-        """Checking whether removing rows several times at once is working"""
+        """Checking whether removing rows several times at once is working."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04d_delete..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04d_delete..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "a")
@@ -1410,27 +1418,27 @@ class BasicTestCase(common.PyTablesTestCase):
                 row['var5'] = np.array((float(i),)*4)
             else:
                 row['var5'] = float(i)
-            if 'float16' in np.typeDict:
+            if 'Float16Col' in globals():
                 if isinstance(row['var11'], np.ndarray):
                     row['var11'] = np.array((float(i),)*4)
                 else:
                     row['var11'] = float(i)
-            if 'float96' in np.typeDict:
+            if 'Float96Col' in globals():
                 if isinstance(row['var12'], np.ndarray):
                     row['var12'] = np.array((float(i),)*4)
                 else:
                     row['var12'] = float(i)
-            if 'float128' in np.typeDict:
+            if 'Float128Col' in globals():
                 if isinstance(row['var13'], np.ndarray):
                     row['var13'] = np.array((float(i),)*4)
                 else:
                     row['var13'] = float(i)
-            if 'complex192' in np.typeDict:
+            if 'Complex192Col' in globals():
                 if isinstance(row['var14'], np.ndarray):
                     row['var14'] = [float(i)+0j, 1 + float(i)*1j]
                 else:
                     row['var14'] = 1 + float(i)*1j
-            if 'complex256' in np.typeDict:
+            if 'Complex256Col' in globals():
                 if isinstance(row['var15'], np.ndarray):
                     row['var15'] = [float(i)+0j, 1 + float(i)*1j]
                 else:
@@ -1447,9 +1455,9 @@ class BasicTestCase(common.PyTablesTestCase):
         result2 = [r['var2'] for r in table if r['var2'] < 20]
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last selected value ==>", result2[-1]
-            print "Total selected records in table ==>", len(result2)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last selected value ==>", result2[-1])
+            print("Total selected records in table ==>", len(result2))
 
         self.assertEqual(table.nrows, nrows - 5)
         self.assertEqual(table.shape, (nrows - 5,))
@@ -1459,11 +1467,12 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(result[10:15], result2[10:15])
 
     def test05_filtersTable(self):
-        """Checking tablefilters"""
+        """Checking tablefilters."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_filtersTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_filtersTable..." %
+                  self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -1471,18 +1480,18 @@ class BasicTestCase(common.PyTablesTestCase):
 
         # Check filters:
         if self.compress != table.filters.complevel and common.verbose:
-            print "Error in compress. Class:", self.__class__.__name__
-            print "self, table:", self.compress, table.filters.complevel
+            print("Error in compress. Class:", self.__class__.__name__)
+            print("self, table:", self.compress, table.filters.complevel)
         self.assertEqual(table.filters.complevel, self.compress)
         if self.compress > 0 and which_lib_version(self.complib):
             self.assertEqual(table.filters.complib, self.complib)
         if self.shuffle != table.filters.shuffle and common.verbose:
-            print "Error in shuffle. Class:", self.__class__.__name__
-            print "self, table:", self.shuffle, table.filters.shuffle
+            print("Error in shuffle. Class:", self.__class__.__name__)
+            print("self, table:", self.shuffle, table.filters.shuffle)
         self.assertEqual(self.shuffle, table.filters.shuffle)
         if self.fletcher32 != table.filters.fletcher32 and common.verbose:
-            print "Error in fletcher32. Class:", self.__class__.__name__
-            print "self, table:", self.fletcher32, table.filters.fletcher32
+            print("Error in fletcher32. Class:", self.__class__.__name__)
+            print("self, table:", self.fletcher32, table.filters.fletcher32)
         self.assertEqual(self.fletcher32, table.filters.fletcher32)
 
     def test06_attributes(self):
@@ -1527,19 +1536,19 @@ class NumPyDTWriteTestCase(BasicTestCase):
     formats = "a4,i4,i2,2f8,f4,i2,a1,b1,c8,c16".split(',')
     names = 'var1,var2,var3,var4,var5,var6,var7,var8,var9,var10'.split(',')
 
-    if 'float16' in np.typeDict:
+    if 'Float16Col' in globals():
         formats.append('f2')
         names.append('var11')
-    if 'float96' in np.typeDict:
+    if 'Float96Col' in globals():
         formats.append('f12')
         names.append('var12')
-    if 'float128' in np.typeDict:
+    if 'Float128Col' in globals():
         formats.append('f16')
         names.append('var13')
-    if 'complex192' in np.typeDict:
+    if 'Complex192Col' in globals():
         formats.append('c24')
         names.append('var14')
-    if 'complex256' in np.typeDict:
+    if 'Complex256Col' in globals():
         formats.append('c32')
         names.append('var15')
 
@@ -1552,19 +1561,19 @@ class RecArrayOneWriteTestCase(BasicTestCase):
     formats = "a4,i4,i2,2f8,f4,i2,a1,b1,c8,c16".split(',')
     names = 'var1,var2,var3,var4,var5,var6,var7,var8,var9,var10'.split(',')
 
-    if 'float16' in np.typeDict:
+    if 'Float16Col' in globals():
         formats.append('f2')
         names.append('var11')
-    if 'float96' in np.typeDict:
+    if 'Float96Col' in globals():
         formats.append('f12')
         names.append('var12')
-    if 'float128' in np.typeDict:
+    if 'Float128Col' in globals():
         formats.append('f16')
         names.append('var13')
-    if 'complex192' in np.typeDict:
+    if 'Complex192Col' in globals():
         formats.append('c24')
         names.append('var14')
-    if 'complex256' in np.typeDict:
+    if 'Complex256Col' in globals():
         formats.append('c32')
         names.append('var15')
 
@@ -1579,19 +1588,19 @@ class RecArrayTwoWriteTestCase(BasicTestCase):
     formats = "a4,i4,i2,2f8,f4,i2,a1,b1,c8,c16".split(',')
     names = 'var1,var2,var3,var4,var5,var6,var7,var8,var9,var10'.split(',')
 
-    if 'float16' in np.typeDict:
+    if 'Float16Col' in globals():
         formats.append('f2')
         names.append('var11')
-    if 'float96' in np.typeDict:
+    if 'Float96Col' in globals():
         formats.append('f12')
         names.append('var12')
-    if 'float128' in np.typeDict:
+    if 'Float128Col' in globals():
         formats.append('f16')
         names.append('var13')
-    if 'complex192' in np.typeDict:
+    if 'Complex192Col' in globals():
         formats.append('c24')
         names.append('var14')
-    if 'complex256' in np.typeDict:
+    if 'Complex256Col' in globals():
         formats.append('c32')
         names.append('var15')
 
@@ -1606,19 +1615,19 @@ class RecArrayThreeWriteTestCase(BasicTestCase):
     formats = "a4,i4,i2,2f8,f4,i2,a1,b1,c8,c16".split(',')
     names = 'var1,var2,var3,var4,var5,var6,var7,var8,var9,var10'.split(',')
 
-    if 'float16' in np.typeDict:
+    if 'Float16Col' in globals():
         formats.append('f2')
         names.append('var11')
-    if 'float96' in np.typeDict:
+    if 'Float96Col' in globals():
         formats.append('f12')
         names.append('var12')
-    if 'float128' in np.typeDict:
+    if 'Float128Col' in globals():
         formats.append('f16')
         names.append('var13')
-    if 'complex192' in np.typeDict:
+    if 'Complex192Col' in globals():
         formats.append('c24')
         names.append('var14')
-    if 'complex256' in np.typeDict:
+    if 'Complex256Col' in globals():
         formats.append('c32')
         names.append('var15')
 
@@ -1639,6 +1648,41 @@ class CompressBloscShuffleTablesTestCase(BasicTestCase):
     complib = "blosc"
 
 
+class CompressBloscBloscLZTablesTestCase(BasicTestCase):
+    title = "CompressBloscLZTables"
+    compress = 1
+    shuffle = 1
+    complib = "blosc:blosclz"
+
+
+class CompressBloscLZ4TablesTestCase(BasicTestCase):
+    title = "CompressLZ4Tables"
+    compress = 1
+    shuffle = 1
+    complib = "blosc:lz4"
+
+
+class CompressBloscLZ4HCTablesTestCase(BasicTestCase):
+    title = "CompressLZ4HCTables"
+    compress = 1
+    shuffle = 1
+    complib = "blosc:lz4hc"
+
+
+class CompressBloscSnappyTablesTestCase(BasicTestCase):
+    title = "CompressSnappyTables"
+    compress = 1
+    shuffle = 1
+    complib = "blosc:snappy"
+
+
+class CompressBloscZlibTablesTestCase(BasicTestCase):
+    title = "CompressZlibTables"
+    compress = 1
+    shuffle = 1
+    complib = "blosc:zlib"
+
+
 class CompressLZOTablesTestCase(BasicTestCase):
     title = "CompressLZOTables"
     compress = 1
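
Note: the five new classes exercise the Blosc sub-compressor selection added in 3.1.0: the codec name is appended to the complib string as "blosc:<codec>". Which codecs actually work depends on how the bundled or system Blosc library was built; tables.blosc_compressor_list() reports the available ones. A minimal usage sketch (file and array names hypothetical):

    import numpy as np
    import tables

    print(tables.blosc_compressor_list())   # e.g. ['blosclz', 'lz4', ...]
    filters = tables.Filters(complevel=1, shuffle=True, complib="blosc:lz4")
    with tables.open_file("demo.h5", mode="w") as h5:
        h5.create_carray(h5.root, "data", obj=np.arange(1000), filters=filters)
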
@@ -2082,11 +2126,19 @@ class BasicRangeTestCase(unittest.TestCase):
                     result.append(column[nrec])
         else:
             if 0 < self.step:
-                result = [rec['var2'] for rec in table.iterrows(self.start,
-                            self.stop, self.step) if rec['var2'] < self.nrows]
+                result = [
+                    rec['var2'] for rec in table.iterrows(self.start,
+                                                          self.stop,
+                                                          self.step)
+                    if rec['var2'] < self.nrows
+                ]
             elif 0 > self.step:
-                result = [rec['var2'] for rec in table.iterrows(self.start,
-                            self.stop, self.step) if rec['var2'] > self.nrows]
+                result = [
+                    rec['var2'] for rec in table.iterrows(self.start,
+                                                          self.stop,
+                                                          self.step)
+                    if rec['var2'] > self.nrows
+                ]
 
         if self.start < 0:
             startr = self.expectedrows + self.start
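
Note: the start/stop normalization above mirrors Python slice semantics: negative start or stop values count from the end of the table (hence self.expectedrows + self.start), and stop=None means up to and including the last row. A minimal sketch, assuming tbl is an open Table handle:

    # the last ten rows, slice-style bounds
    tail = [r['var2'] for r in tbl.iterrows(start=-10, stop=None)]
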
@@ -2109,41 +2161,47 @@ class BasicRangeTestCase(unittest.TestCase):
             stopr = self.nrows
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
+            print("Nrows in", table._v_pathname, ":", table.nrows)
             if reslength:
                 if self.checkrecarray:
-                    print "Last record *read* in recarray ==>", recarray[-1]
+                    print("Last record *read* in recarray ==>", recarray[-1])
                 elif self.checkgetCol:
-                    print "Last value *read* in getCol ==>", column[-1]
+                    print("Last value *read* in getCol ==>", column[-1])
                 else:
-                    print "Last record *read* in table range ==>", rec
-            print "Total number of selected records ==>", len(result)
-            print "Selected records:\n", result
-            print "Selected records should look like:\n", \
-                  range(startr, stopr, self.step)
-            print "start, stop, step ==>", self.start, self.stop, self.step
-            print "startr, stopr, step ==>", startr, stopr, self.step
+                    print("Last record *read* in table range ==>", rec)
+            print("Total number of selected records ==>", len(result))
+            print("Selected records:\n", result)
+            print("Selected records should look like:\n",
+                  range(startr, stopr, self.step))
+            print("start, stop, step ==>", self.start, self.stop, self.step)
+            print("startr, stopr, step ==>", startr, stopr, self.step)
 
         self.assertEqual(result, range(startr, stopr, self.step))
         if not (self.checkrecarray or self.checkgetCol):
             if startr < stopr and 0 < self.step:
-                rec = [r for r in table.iterrows(self.start, self.stop, self.step)
+                rec = [r for r in table.iterrows(self.start, self.stop,
+                                                 self.step)
                        if r['var2'] < self.nrows][-1]
                 if self.nrows < self.expectedrows:
-                    self.assertEqual(rec['var2'],
-                                     range(self.start, self.stop, self.step)[-1])
+                    self.assertEqual(
+                        rec['var2'],
+                        range(self.start, self.stop, self.step)[-1])
                 else:
-                    self.assertEqual(rec['var2'],
-                                     range(startr, stopr, self.step)[-1])
+                    self.assertEqual(
+                        rec['var2'],
+                        range(startr, stopr, self.step)[-1])
             elif startr > stopr and 0 > self.step:
-                rec = [r['var2'] for r in table.iterrows(self.start, self.stop, self.step)
+                rec = [r['var2'] for r in table.iterrows(self.start, self.stop,
+                                                         self.step)
                        if r['var2'] > self.nrows][0]
                 if self.nrows < self.expectedrows:
-                    self.assertEqual(rec,
-                                     range(self.start, self.stop or -1, self.step)[0])
+                    self.assertEqual(
+                        rec,
+                        range(self.start, self.stop or -1, self.step)[0])
                 else:
-                    self.assertEqual(rec,
-                                     range(startr, stopr or -1, self.step)[0])
+                    self.assertEqual(
+                        rec,
+                        range(startr, stopr or -1, self.step)[0])
 
         # Close the file
         self.fileh.close()
@@ -2152,8 +2210,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case1)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 2 * step
         self.nrows = 21
@@ -2168,8 +2226,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case1)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 2 * step
         self.nrows = 21
@@ -2184,8 +2242,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case2)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 10 * step
         self.nrows = 21
@@ -2200,8 +2258,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case3)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 1.1 * step
         self.nrows = self.expectedrows
@@ -2216,8 +2274,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case4)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_range..." % self.__class__.__name__)
 
         # Case where step == nrowsinbuf
         self.nrows = self.expectedrows
@@ -2232,8 +2290,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case5)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_range..." % self.__class__.__name__)
 
         # Case where step > 1.1 * nrowsinbuf
         self.nrows = 21
@@ -2248,8 +2306,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case6)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06_range..." % self.__class__.__name__)
 
         # Case where step > 3 * nrowsinbuf
         self.nrows = 3
@@ -2264,8 +2322,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case7)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_range..." % self.__class__.__name__)
 
         # Case where start == stop
         self.nrows = 2
@@ -2280,8 +2338,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case8)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_range..." % self.__class__.__name__)
 
         # Case where start > stop
         self.nrows = 2
@@ -2296,8 +2354,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case9)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09_range..." % self.__class__.__name__)
 
         # Case where stop = None (last row)
         self.nrows = 100
@@ -2312,8 +2370,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case10)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop = None (last row)
         self.nrows = self.expectedrows
@@ -2330,8 +2388,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case10a)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10a_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10a_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop = 0
         self.nrows = self.expectedrows
@@ -2348,8 +2406,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case11)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop < 0
         self.nrows = self.expectedrows
@@ -2366,8 +2424,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case12)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test12_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test12_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop < 0 and start > stop
         self.nrows = self.expectedrows
@@ -2384,8 +2442,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case13)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13_range..." % self.__class__.__name__)
 
         # Case where step < 0
         self.step = -11
@@ -2394,8 +2452,8 @@ class BasicRangeTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat! The next ValueError was caught!")
+                print(value)
             self.fileh.close()
         #else:
         #    print rec
@@ -2408,8 +2466,8 @@ class BasicRangeTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat! The next ValueError was caught!")
+                print(value)
             self.fileh.close()
         #else:
         #    print rec
@@ -2431,8 +2489,9 @@ class getColRangeTestCase(BasicRangeTestCase):
         """Checking non-existing Field in getCol method """
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_nonexistentField..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_nonexistentField..." %
+                  self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -2445,11 +2504,11 @@ class getColRangeTestCase(BasicRangeTestCase):
         except KeyError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next KeyError was catched!"
-                print value
+                print("\nGreat! The next KeyError was caught!")
+                print(value)
             self.fileh.close()
         else:
-            print rec
+            print(rec)
             self.fail("expected a KeyError")
 
 
@@ -2525,11 +2584,11 @@ class getItemTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test01a_singleItem(self):
-        """Checking __getitem__ method with single parameter (int) """
+        """Checking __getitem__ method with single parameter (int)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_singleItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_singleItem..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2541,11 +2600,15 @@ class getItemTestCase(unittest.TestCase):
         self.assertEqual(result["var2"], self.expectedrows - 1)
 
     def test01b_singleItem(self):
-        """Checking __getitem__ method with single parameter (neg. int)"""
+        """Checking __getitem__ method with single parameter (neg. int)."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_singleItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_singleItem..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2560,8 +2623,8 @@ class getItemTestCase(unittest.TestCase):
         """Checking __getitem__ method with single parameter (long)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01c_singleItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01c_singleItem..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2573,11 +2636,15 @@ class getItemTestCase(unittest.TestCase):
         self.assertEqual(result["var2"], self.expectedrows - 1)
 
     def test01d_singleItem(self):
-        """Checking __getitem__ method with single parameter (neg. long)"""
+        """Checking __getitem__ method with single parameter (neg. long)."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01d_singleItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01d_singleItem..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2592,8 +2659,8 @@ class getItemTestCase(unittest.TestCase):
         """Checking __getitem__ method with single parameter (rank-0 ints)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01e_singleItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01e_singleItem..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2605,11 +2672,11 @@ class getItemTestCase(unittest.TestCase):
         self.assertEqual(result["var2"], self.expectedrows - 1)
 
     def test02_twoItems(self):
-        """Checking __getitem__ method with start, stop parameters """
+        """Checking __getitem__ method with start, stop parameters."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_twoItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_twoItems..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2625,11 +2692,11 @@ class getItemTestCase(unittest.TestCase):
                          list(range(self.expectedrows-2, self.expectedrows)))
 
     def test03_threeItems(self):
-        """Checking __getitem__ method with start, stop, step parameters """
+        """Checking __getitem__ method with start, stop, step parameters."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_threeItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_threeItems..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2645,11 +2712,12 @@ class getItemTestCase(unittest.TestCase):
             0, self.expectedrows, 1))
 
     def test04_negativeStep(self):
-        """Checking __getitem__ method with negative step parameter"""
+        """Checking __getitem__ method with negative step parameter."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_negativeStep..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_negativeStep..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2658,17 +2726,18 @@ class getItemTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat! The next ValueError was caught!")
+                print(value)
         else:
             self.fail("expected a ValueError")
 
     def test06a_singleItemCol(self):
-        """Checking __getitem__ method in Col with single parameter """
+        """Checking __getitem__ method in Col with single parameter."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06a_singleItemCol..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06a_singleItemCol..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2678,12 +2747,15 @@ class getItemTestCase(unittest.TestCase):
         self.assertEqual(colvar2[self.expectedrows-1], self.expectedrows - 1)
 
     def test06b_singleItemCol(self):
-        """Checking __getitem__ method in Col with single parameter
-        (negative)"""
+        """Checking __getitem__ method in Col with single parameter
+        (negative)."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06b_singleItem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06b_singleItemCol..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2693,11 +2765,11 @@ class getItemTestCase(unittest.TestCase):
         self.assertEqual(colvar2[-self.expectedrows], 0)
 
     def test07_twoItemsCol(self):
-        """Checking __getitem__ method in Col with start, stop parameters """
+        """Checking __getitem__ method in Col with start, stop parameters."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_twoItemCol..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_twoItemsCol..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2710,11 +2782,12 @@ class getItemTestCase(unittest.TestCase):
 
     def test08_threeItemsCol(self):
         """Checking __getitem__ method in Col with start, stop, step
-        parameters"""
+        parameters."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_threeItemCol..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_threeItemsCol..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2726,11 +2799,12 @@ class getItemTestCase(unittest.TestCase):
         self.assertEqual(colvar2[::].tolist(),
                          list(range(0, self.expectedrows, 1)))
 
     def test09_negativeStep(self):
-        """Checking __getitem__ method in Col with negative step parameter"""
+        """Checking __getitem__ method in Col with negative step parameter."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09_negativeStep..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09_negativeStep..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         table = self.fileh.root.table0
@@ -2740,11 +2814,39 @@ class getItemTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat! The next ValueError was caught!")
+                print(value)
         else:
             self.fail("expected a ValueError")
 
+    def test10_list_integers(self):
+        """Checking accessing Table with a list of integers."""
+
+        self.fileh = open_file(self.file, "r")
+        table = self.fileh.root.table0
+        idx = list(range(10, 70, 11))
+
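+        # Fancy indexing with a list of row numbers must return the
+        # same rows as an explicit read_coordinates() call.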
+        result = table[idx]
+        self.assertEqual(result["var2"].tolist(), idx)
+
+        result = table.read_coordinates(idx)
+        self.assertEqual(result["var2"].tolist(), idx)
+
+    def test11_list_booleans(self):
+        """Checking accessing Table with a list of boolean values."""
+
+        self.fileh = open_file(self.file, "r")
+        table = self.fileh.root.table0
+        idx = list(range(10, 70, 11))
+
+        selection = [n in idx for n in range(self.expectedrows)]
+
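+        # A boolean mask with one entry per row behaves like a list of
+        # coordinates: True entries mark the rows to be read.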
+        result = table[selection]
+        self.assertEqual(result["var2"].tolist(), idx)
+
+        result = table.read_coordinates(selection)
+        self.assertEqual(result["var2"].tolist(), idx)
+
 
 class Rec(IsDescription):
     col1 = IntCol(pos=1)
@@ -2790,8 +2892,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2826,8 +2928,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2863,8 +2965,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2902,8 +3004,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2941,8 +3043,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2977,8 +3079,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3013,8 +3115,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3043,8 +3145,8 @@ class setItem(common.PyTablesTestCase):
         except NotImplementedError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NotImplementedError was catched!"
-                print value
+                print("\nGreat! The next NotImplementedError was caught!")
+                print(value)
         else:
             self.fail("expected a NotImplementedError")
 
@@ -3096,8 +3198,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3132,8 +3234,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3171,8 +3273,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3236,8 +3338,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3277,8 +3379,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3318,8 +3420,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3359,8 +3461,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3397,8 +3499,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3435,8 +3537,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3473,8 +3575,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3521,8 +3623,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -3569,8 +3671,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -3622,8 +3724,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -3676,8 +3778,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -3707,8 +3809,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking saving a regular recarray"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3734,8 +3836,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking saving a recarray with an offset in its buffer"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3765,8 +3867,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking saving a large recarray with an offset in its buffer"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3795,8 +3897,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking saving a strided recarray with an offset in its buffer"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3827,8 +3929,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking appending several rows at once"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3858,8 +3960,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = fileh.root.recarray.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3870,8 +3972,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking appending several rows at once (close file version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3900,8 +4002,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = fileh.root.recarray.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3912,8 +4014,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one table row (list version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06a..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3940,8 +4042,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3952,8 +4054,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one table row (recarray version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06b..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -3981,8 +4083,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -3993,8 +4095,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying several rows at once (list version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07a..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4021,8 +4123,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4033,8 +4135,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying several rows at once (recarray version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07b..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4063,8 +4165,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4075,8 +4177,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying several rows with a mismatching value"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07c..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4101,8 +4203,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one column (single column version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08a..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4130,8 +4232,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4142,8 +4244,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one column (single column version, modify_column)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08a2..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08a2..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4171,8 +4273,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4183,8 +4285,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one column (single column version, recarray)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08b..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4213,8 +4315,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4226,8 +4328,8 @@ class RecArrayIO(unittest.TestCase):
         modify_column)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08b2..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08b2..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4256,8 +4358,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4268,8 +4370,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one column (single column version, single element)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08c..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4299,8 +4401,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4311,8 +4413,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying table columns (multiple column version)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09a..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4342,8 +4444,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4354,8 +4456,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying table columns (multiple columns, recarray)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09b..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4385,8 +4487,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4397,8 +4499,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying table columns (single column, step)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09c..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09c..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4428,8 +4530,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4440,8 +4542,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying table columns (multiple columns, step)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09d..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09d..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4472,8 +4574,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4484,8 +4586,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying rows using coordinates (readCoords/modifyCoords)."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10a..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4520,8 +4622,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4532,8 +4634,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying rows using coordinates (getitem/setitem)."
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10b..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -4568,8 +4670,8 @@ class RecArrayIO(unittest.TestCase):
             table = fileh.root.recarray
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -4604,11 +4706,11 @@ class CopyTestCase(unittest.TestCase):
                 self.assertEqual(col1._v_colpathnames, col2._v_colpathnames)
 
     def test01_copy(self):
-        """Checking Table.copy() method """
+        """Checking Table.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -4622,7 +4724,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -4632,18 +4734,18 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
             table2 = fileh.root.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
             # print "dirs-->", dir(table1), dir(table2)
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         for row1 in table1:
@@ -4683,8 +4785,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Table.copy() method (where specified)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -4698,7 +4800,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -4709,17 +4811,17 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             table1 = fileh.root.table1
             table2 = fileh.root.group1.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         for row1 in table1:
@@ -4753,8 +4855,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Table.copy() method (table larger than buffer)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -4772,7 +4874,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -4783,17 +4885,17 @@ class CopyTestCase(unittest.TestCase):
         table2 = table1.copy(group1, 'table2', title="title table2")
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             table1 = fileh.root.table1
             table2 = fileh.root.group1.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         for row1 in table1:
@@ -4827,8 +4929,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Table.copy() method (different compress level)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -4842,7 +4944,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -4854,17 +4956,17 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             table1 = fileh.root.table1
             table2 = fileh.root.group1.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         for row1 in table1:
@@ -4897,8 +4999,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Table.copy() method (user attributes copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -4915,7 +5017,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -4928,17 +5030,17 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             table1 = fileh.root.table1
             table2 = fileh.root.group1.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         for row1 in table1:
@@ -4973,8 +5075,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking Table.copy() method (user attributes not copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -4991,7 +5093,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -5004,17 +5106,17 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             table1 = fileh.root.table1
             table2 = fileh.root.group1.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         for row1 in table1:
@@ -5058,11 +5160,11 @@ class OpenCopyTestCase(CopyTestCase):
 
 class CopyIndexTestCase(unittest.TestCase):
     def test01_index(self):
-        """Checking Table.copy() method with indexes"""
+        """Checking Table.copy() method with indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_index..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_index..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -5079,7 +5181,7 @@ class CopyIndexTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -5091,10 +5193,10 @@ class CopyIndexTestCase(unittest.TestCase):
                              stop=self.stop,
                              step=self.step)
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -5105,8 +5207,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in table
         if common.verbose:
-            print "nrows in table2-->", table2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in table2-->", table2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], table2.nrows)
 
         # Close the file
@@ -5117,8 +5219,8 @@ class CopyIndexTestCase(unittest.TestCase):
         """Checking Table.copy() method with indexes (close file version)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_indexclosef..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_indexclosef..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -5132,7 +5234,7 @@ class CopyIndexTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             table1 = fileh.root.table1
@@ -5150,10 +5252,10 @@ class CopyIndexTestCase(unittest.TestCase):
         table2 = fileh.root.table2
 
         if common.verbose:
-            print "table1-->", table1.read()
-            print "table2-->", table2.read()
-            print "attrs table1-->", repr(table1.attrs)
-            print "attrs table2-->", repr(table2.attrs)
+            print("table1-->", table1.read())
+            print("table2-->", table2.read())
+            print("attrs table1-->", repr(table1.attrs))
+            print("attrs table2-->", repr(table2.attrs))
 
         # Check that all the elements are equal
         r2 = r[self.start:self.stop:self.step]
@@ -5164,8 +5266,8 @@ class CopyIndexTestCase(unittest.TestCase):
 
         # Assert the number of rows in table
         if common.verbose:
-            print "nrows in table2-->", table2.nrows
-            print "and it should be-->", r2.shape[0]
+            print("nrows in table2-->", table2.nrows)
+            print("and it should be-->", r2.shape[0])
         self.assertEqual(r2.shape[0], table2.nrows)
 
         # Close the file
@@ -5351,19 +5453,19 @@ class DefaultValues(unittest.TestCase):
         values = [b"abcd", 1, 2, 3.1, 4.2, 5, "e", 1, 1j, 1 + 0j]
         formats = 'a4,i4,i2,f8,f4,u2,a1,b1,c8,c16'.split(',')
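+        # The extended floating point and complex column types are
+        # optional: exercise them only when the corresponding Col
+        # classes exist, i.e. when this NumPy build provides them.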
 
-        if 'float16' in np.typeDict:
+        if 'Float16Col' in globals():
             values.append(6.4)
             formats.append('f2')
-        if 'float96' in np.typeDict:
+        if 'Float96Col' in globals():
             values.append(6.4)
             formats.append('f12')
-        if 'float128' in np.typeDict:
+        if 'Float128Col' in globals():
             values.append(6.4)
             formats.append('f16')
-        if 'complex192' in np.typeDict:
+        if 'Complex192Col' in globals():
             values.append(1.-0.j)
             formats.append('c24')
-        if 'complex256' in np.typeDict:
+        if 'Complex256Col' in globals():
             values.append(1.-0.j)
             formats.append('c32')
 
@@ -5380,13 +5482,13 @@ class DefaultValues(unittest.TestCase):
         # This generates too much output. Activate only when
         # self.nrowsinbuf is very small (<10)
         if common.verbose:
-            print "First 10 table values:"
+            print("First 10 table values:")
             for row in table.iterrows(0, 10):
-                print row
-            print "The first 5 read recarray values:"
-            print r2[:5]
-            print "Records should look like:"
-            print r[:5]
+                print(row)
+            print("The first 5 read recarray values:")
+            print(r2[:5])
+            print("Records should look like:")
+            print(r[:5])
 
         for name1, name2 in zip(r.dtype.names, r2.dtype.names):
             self.assertTrue(allequal(r[name1], r2[name2]))
@@ -5430,19 +5532,19 @@ class DefaultValues(unittest.TestCase):
         values = [b"abcd", 1, 2, 3.1, 4.2, 5, "e", 1, 1j, 1 + 0j]
         formats = 'a4,i4,i2,f8,f4,u2,a1,b1,c8,c16'.split(',')
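+        # As above, exercise the optional extended types only when
+        # the matching Col classes are defined.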
 
-        if 'float16' in np.typeDict:
+        if 'Float16Col' in globals():
             values.append(6.4)
             formats.append('f2')
-        if 'float96' in np.typeDict:
+        if 'Float96Col' in globals():
             values.append(6.4)
             formats.append('f12')
-        if 'float128' in np.typeDict:
+        if 'Float128Col' in globals():
             values.append(6.4)
             formats.append('f16')
-        if 'complex192' in np.typeDict:
+        if 'Complex192Col' in globals():
             values.append(1.-0.j)
             formats.append('c24')
-        if 'complex256' in np.typeDict:
+        if 'Complex256Col' in globals():
             values.append(1.-0.j)
             formats.append('c32')
 
@@ -5459,13 +5561,13 @@ class DefaultValues(unittest.TestCase):
         # This generates too much output. Activate only when
         # self.nrowsinbuf is very small (<10)
         if common.verbose:
-            print "First 10 table values:"
+            print("First 10 table values:")
             for row in table.iterrows(0, 10):
-                print row
-            print "The first 5 read recarray values:"
-            print r2[:5]
-            print "Records should look like:"
-            print r[:5]
+                print(row)
+            print("The first 5 read recarray values:")
+            print(r2[:5])
+            print("Records should look like:")
+            print(r[:5])
 
         for name1, name2 in zip(r.dtype.names, r2.dtype.names):
             self.assertTrue(allequal(r[name1], r2[name2]))
@@ -5528,21 +5630,21 @@ class LengthTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test01_lengthrows(self):
-        """Checking __length__ in Table"""
+        """Checking __length__ in Table."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_lengthrows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_lengthrows..." % self.__class__.__name__)
 
         # Number of rows
         len(self.table) == self.nrows
 
     def test02_lengthcols(self):
-        """Checking __length__ in Cols"""
+        """Checking __length__ in Cols."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_lengthcols..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_lengthcols..." % self.__class__.__name__)
 
         # Number of columns
         if self.record is Record:
@@ -5551,11 +5653,11 @@ class LengthTestCase(unittest.TestCase):
             len(self.table.cols) == 4
 
     def test03_lengthcol(self):
-        """Checking __length__ in Column"""
+        """Checking __length__ in Column."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_lengthcol..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_lengthcol..." % self.__class__.__name__)
 
         # Number of rows for all columns column
         for colname in self.table.colnames:
@@ -5611,7 +5713,7 @@ class WhereAppendTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Rows resulting from the query are those in the new table.
         it2 = iter(tbl2)
         for r1 in tbl1.where('id > 1'):
-            r2 = it2.next()
+            r2 = next(it2)
             self.assertTrue(r1['id'] == r2['id'] and r1['v1'] == r2['v1']
                             and r1['v2'] == r2['v2'])
 
@@ -5635,7 +5737,7 @@ class WhereAppendTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Rows resulting from the query are those in the new table.
         it2 = iter(tbl2)
         for r1 in tbl1.where('id > 1'):
-            r2 = it2.next()
+            r2 = next(it2)
             self.assertTrue(r1['id'] == r2['id'] and r1['v1'] == r2['v1']
                             and r1['v2'] == r2['v2'])
 
@@ -5658,7 +5760,7 @@ class WhereAppendTestCase(common.TempFileMixin, common.PyTablesTestCase):
         # Rows resulting from the query are those in the new table.
         it2 = iter(tbl2)
         for r1 in tbl1.where('id > 1'):
-            r2 = it2.next()
+            r2 = next(it2)
             self.assertTrue(r1['id'] == r2['id'] and int(r1['v1']) == r2['v1']
                             and r1['v2'] == r2['v2'])
 
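The it2.next() -> next(it2) changes in the three hunks above are the standard iterator fix for the 2-to-3 migration: Python 3 renamed the iterator protocol method to __next__(), so hand-advancing an iterator portably goes through the next() builtin, available since Python 2.6. A throwaway illustration:

it = iter([10, 20, 30])
assert next(it) == 10  # works on Python 2.6+ and on Python 3
assert next(it) == 20
# it.next() raises AttributeError on Python 3, where the protocol
# method is spelled it.__next__().
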
@@ -5763,7 +5865,7 @@ class ChunkshapeTestCase(unittest.TestCase):
 
         tbl = self.fileh.root.table
         if common.verbose:
-            print "chunkshape-->", tbl.chunkshape
+            print("chunkshape-->", tbl.chunkshape)
         self.assertEqual(tbl.chunkshape, (13,))
 
     def test01(self):
@@ -5773,7 +5875,7 @@ class ChunkshapeTestCase(unittest.TestCase):
         self.fileh = open_file(self.file, 'r')
         tbl = self.fileh.root.table
         if common.verbose:
-            print "chunkshape-->", tbl.chunkshape
+            print("chunkshape-->", tbl.chunkshape)
         self.assertEqual(tbl.chunkshape, (13,))
 
 
@@ -5834,8 +5936,8 @@ class IrregularStrideTestCase(unittest.TestCase):
         coords1 = table.get_where_list('c1<5')
         coords2 = table.get_where_list('c2<5')
         if common.verbose:
-            print "\nSelected coords1-->", coords1
-            print "Selected coords2-->", coords2
+            print("\nSelected coords1-->", coords1)
+            print("Selected coords2-->", coords2)
         self.assertTrue(allequal(coords1, np.arange(5, dtype=SizeType)))
         self.assertTrue(allequal(coords2, np.arange(5, dtype=SizeType)))
 
@@ -5880,10 +5982,10 @@ class Issue262TestCase(unittest.TestCase):
         data = data[np.where((data['c1'] > 5) & (data['c2'] < 30))]
 
         if common.verbose:
-            print
-            print "Selected coords1-->", coords1
-            print "Selected coords2-->", coords2
-            print "Selected data-->", data
+            print()
+            print("Selected coords1-->", coords1)
+            print("Selected coords2-->", coords2)
+            print("Selected data-->", data)
         self.assertEqual(len(coords1) + len(coords2), len(data))
 
     def test_gh262_01(self):
@@ -5893,8 +5995,8 @@ class Issue262TestCase(unittest.TestCase):
         data = table.get_where_list('(c1>5)&(~(c1>5))', start=0, step=1)
 
         if common.verbose:
-            print
-            print "data -->", data
+            print()
+            print("data -->", data)
         self.assertEqual(len(data), 0)
 
     def test_gh262_02(self):
@@ -5904,8 +6006,8 @@ class Issue262TestCase(unittest.TestCase):
         data = table.get_where_list('(c1>5)&(~(c1>5))', start=1, step=1)
 
         if common.verbose:
-            print
-            print "data -->", data
+            print()
+            print("data -->", data)
         self.assertEqual(len(data), 0)
 
     def test_gh262_03(self):
@@ -5915,8 +6017,8 @@ class Issue262TestCase(unittest.TestCase):
         data = table.get_where_list('(c1>5)&(~(c1>5))', start=0, step=2)
 
         if common.verbose:
-            print
-            print "data -->", data
+            print()
+            print("data -->", data)
         self.assertEqual(len(data), 0)
 
     def test_gh262_04(self):
@@ -5926,8 +6028,8 @@ class Issue262TestCase(unittest.TestCase):
         data = table.get_where_list('(c1>5)&(~(c1>5))', start=1, step=2)
 
         if common.verbose:
-            print
-            print "data -->", data
+            print()
+            print("data -->", data)
         self.assertEqual(len(data), 0)
 
 
@@ -5960,13 +6062,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             table = self.fileh.root.table
 
         if common.verbose:
-            print "table-->", table.read()
+            print("table-->", table.read())
 
         self.assertEqual(table.nrows, 0)
         for row in table:
@@ -5981,13 +6083,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             table = self.fileh.root.table
 
         if common.verbose:
-            print "table-->", table.read()
+            print("table-->", table.read())
 
         self.assertEqual(table.nrows, 1)
         for row in table:
@@ -6002,13 +6104,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             table = self.fileh.root.table
 
         if common.verbose:
-            print "table-->", table.read()
+            print("table-->", table.read())
 
         self.assertEqual(table.nrows, 2)
         for row in table:
@@ -6023,13 +6125,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             table = self.fileh.root.table
 
         if common.verbose:
-            print "table-->", table.read()
+            print("table-->", table.read())
 
         self.assertEqual(table.nrows, 4)
         # Check the original values
@@ -6104,12 +6206,12 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = (data >= value1) & (data < value2)
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             a = recarr[key]
             b = table[key]
             if common.verbose:
-                print "NumPy selection:", a
-                print "PyTables selection:", b
+                print("NumPy selection:", a)
+                print("PyTables selection:", b)
             npt.assert_array_equal(
                 a, b, "NumPy array and PyTables selections does not match.")
 
@@ -6121,7 +6223,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = np.where((data >= value1) & (data < value2))
             if common.verbose:
-                print "Selection to test:", key, type(key)
+                print("Selection to test:", key, type(key))
             a = recarr[key]
             b = table[key]
 #             if common.verbose:
@@ -6138,7 +6240,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = np.where((data >= value1) & (data < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             recarr[key]
             fkey = np.array(key, "f4")
             self.assertRaises(TypeError, table.__getitem__, fkey)
@@ -6151,7 +6253,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = np.where((data >= value1) & (data < value2))[0]
             if common.verbose:
-                print "Selection to test:", key, type(key)
+                print("Selection to test:", key, type(key))
             a = recarr[key]
             b = table[key]
 #             if common.verbose:
@@ -6168,7 +6270,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = np.where((data >= value1) & (data < value2))[0].tolist()
             if common.verbose:
-                print "Selection to test:", key, type(key)
+                print("Selection to test:", key, type(key))
             a = recarr[key]
             b = table[key]
 #             if common.verbose:
@@ -6185,7 +6287,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = np.where((data >= value1) & (data < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             s = recarr[key]
             # Modify the s recarray
             s["f0"][:] = data[:len(s)]*2
@@ -6209,7 +6311,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         for value1, value2 in self.limits:
             key = np.where((data >= value1) & (data < value2))
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             s = recarr[key]
             # Modify the s recarray
             s["f0"][:] = data[:len(s)]*2
@@ -6240,7 +6342,7 @@ class MDLargeColTestCase(common.TempFileMixin, common.PyTablesTestCase):
             tbl = self.h5file.root.test
         # Check the value
         if common.verbose:
-            print "First row-->", tbl[0]['col1']
+            print("First row-->", tbl[0]['col1'])
         npt.assert_array_equal(tbl[0]['col1'], np.zeros(N, 'i1'))
 
 
@@ -6256,7 +6358,7 @@ class MDLargeColReopen(MDLargeColTestCase):
 # See ticket #264.
 class ExhaustedIter(common.PyTablesTestCase):
     def setUp(self):
-        """Create small database"""
+        """Create small database."""
         class Observations(IsDescription):
             market_id = IntCol(pos=0)
             scenario_id = IntCol(pos=1)
@@ -6299,11 +6401,15 @@ class ExhaustedIter(common.PyTablesTestCase):
             vals = [row['value'] for row in rows_grouped]
             scenario_means.append(self.average(vals))
         if common.verbose:
-            print 'Means -->', scenario_means
+            print('Means -->', scenario_means)
         self.assertEqual(scenario_means, [112.0, 112.0, 112.0])
 
     def test01_groupby(self):
-        """Checking iterating an exhausted iterator (ticket #264). Reopen."""
+        """Checking iterating an exhausted iterator (ticket #264).
+
+        Reopen.
+
+        """
         from itertools import groupby
         self.fileh.close()
         self.fileh = open_file(self.file, 'r')
@@ -6313,7 +6419,7 @@ class ExhaustedIter(common.PyTablesTestCase):
             vals = [row['value'] for row in rows_grouped]
             scenario_means.append(self.average(vals))
         if common.verbose:
-            print 'Means -->', scenario_means
+            print('Means -->', scenario_means)
         self.assertEqual(scenario_means, [112.0, 112.0, 112.0])
 
 
@@ -6325,7 +6431,7 @@ class SpecialColnamesTestCase(common.TempFileMixin, common.PyTablesTestCase):
         t = f.create_table(f.root, "test", a)
         self.assertEqual(len(t.colnames), 3, "Number of columns incorrect")
         if common.verbose:
-            print "colnames -->", t.colnames
+            print("colnames -->", t.colnames)
         for name, name2 in zip(t.colnames, ("a", "_b", "__c")):
             self.assertEqual(name, name2)
 
@@ -6337,7 +6443,7 @@ class RowContainsTestCase(common.TempFileMixin, common.PyTablesTestCase):
         t = f.create_table(f.root, "test", a)
         row = [r for r in t.iterrows()][0]
         if common.verbose:
-            print "row -->", row[:]
+            print("row -->", row[:])
         for item in (1, 2, 3):
             self.assertTrue(item in row)
         self.assertTrue(4 not in row)
@@ -6634,6 +6740,19 @@ def suite():
         theSuite.addTest(unittest.makeSuite(CompressBloscTablesTestCase))
         theSuite.addTest(unittest.makeSuite(
             CompressBloscShuffleTablesTestCase))
+        theSuite.addTest(unittest.makeSuite(
+            CompressBloscBloscLZTablesTestCase))
+        if 'lz4' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(
+                CompressBloscLZ4TablesTestCase))
+            theSuite.addTest(unittest.makeSuite(
+                CompressBloscLZ4HCTablesTestCase))
+        if 'snappy' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(
+                CompressBloscSnappyTablesTestCase))
+        if 'zlib' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(
+                CompressBloscZlibTablesTestCase))
         theSuite.addTest(unittest.makeSuite(CompressLZOTablesTestCase))
         theSuite.addTest(unittest.makeSuite(CompressLZOShuffleTablesTestCase))
         theSuite.addTest(unittest.makeSuite(CompressZLIBTablesTestCase))
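
The suite() hunk just above registers the per-codec Blosc test cases only when the bundled Blosc was compiled with that codec, using the tables.blosc_compressor_list() introspection call exercised in this diff. A self-contained sketch of the same guard, with a hypothetical placeholder case (DummyLZ4TestCase is not part of PyTables):

import unittest

import tables


class DummyLZ4TestCase(unittest.TestCase):
    # Hypothetical stand-in for CompressBloscLZ4TablesTestCase.
    def test_lz4_available(self):
        self.assertIn('lz4', tables.blosc_compressor_list())


def suite():
    the_suite = unittest.TestSuite()
    # Register LZ4 cases only if Blosc was built with LZ4 support,
    # mirroring the conditional makeSuite() calls above.
    if 'lz4' in tables.blosc_compressor_list():
        the_suite.addTest(unittest.makeSuite(DummyLZ4TestCase))
    return the_suite
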
diff --git a/tables/tests/test_tablesMD.py b/tables/tests/test_tablesMD.py
index 68a0082..b31aace 100644
--- a/tables/tests/test_tablesMD.py
+++ b/tables/tests/test_tablesMD.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -35,10 +36,11 @@ class Record(IsDescription):
 
 #  Dictionary definition
 RecordDescriptionDict = {
-    'var0': StringCol(itemsize=4, dflt=b"", shape=2),  # 4-character string array
+    'var0': StringCol(itemsize=4, dflt=b"", shape=2),  # 4-character string
+                                                       # array
     'var1': StringCol(itemsize=4, dflt=[b"abcd", b"efgh"], shape=(2, 2)),
-#     'var0': StringCol(itemsize=4, shape=2),       # 4-character String
-#     'var1': StringCol(itemsize=4, shape=(2,2)),   # 4-character String
+    #'var0': StringCol(itemsize=4, shape=2),       # 4-character String
+    #'var1': StringCol(itemsize=4, shape=(2,2)),   # 4-character String
     'var1_': IntCol(shape=2),                      # integer array
     'var2': IntCol(shape=(2, 2)),                  # integer array
     'var3': Int16Col(),                           # short integer
@@ -182,7 +184,7 @@ class BasicTestCase(common.PyTablesTestCase):
     #----------------------------------------
 
     def test00_description(self):
-        """Checking table description and descriptive fields"""
+        """Checking table description and descriptive fields."""
 
         self.fileh = open_file(self.file)
 
@@ -229,9 +231,9 @@ class BasicTestCase(common.PyTablesTestCase):
         # Column defaults.
         for v in expectedNames:
             if common.verbose:
-                print "dflt-->", columns[v].dflt
-                print "coldflts-->", tbl.coldflts[v]
-                print "desc.dflts-->", desc._v_dflts[v]
+                print("dflt-->", columns[v].dflt)
+                print("coldflts-->", tbl.coldflts[v])
+                print("desc.dflts-->", desc._v_dflts[v])
             self.assertTrue(common.areArraysEqual(tbl.coldflts[v],
                                                   columns[v].dflt))
             self.assertTrue(common.areArraysEqual(desc._v_dflts[v],
@@ -248,11 +250,11 @@ class BasicTestCase(common.PyTablesTestCase):
             self.assertEqual(expectedCol.type, col.type)
 
     def test01_readTable(self):
-        """Checking table read and cuts"""
+        """Checking table read and cuts."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -265,10 +267,10 @@ class BasicTestCase(common.PyTablesTestCase):
                   if r['var2'][0][0] < 20]
 
         if common.verbose:
-            print "Table:", repr(table)
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==> ", len(result)
+            print("Table:", repr(table))
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==> ", len(result))
         nrows = self.expectedrows - 1
         r = [r for r in table.iterrows() if r['var2'][0][0] < 20][-1]
         self.assertEqual((
@@ -289,8 +291,8 @@ class BasicTestCase(common.PyTablesTestCase):
         """Checking table read and cuts (multidimensional columns case)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -301,9 +303,9 @@ class BasicTestCase(common.PyTablesTestCase):
         # Read the records and select those with "var2" file less than 20
         result = [r['var5'] for r in table.iterrows() if r['var2'][0][0] < 20]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==> ", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==> ", len(result))
         nrows = table.nrows
         r = [r for r in table.iterrows() if r['var2'][0][0] < 20][-1]
         if isinstance(r['var5'], np.ndarray):
@@ -346,39 +348,39 @@ class BasicTestCase(common.PyTablesTestCase):
         self.assertEqual(len(result), 20)
 
     def test01c_readTable(self):
-        """Checking shape of multidimensional columns"""
+        """Checking shape of multidimensional columns."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01c_readTable..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01c_readTable..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
         table = self.fileh.get_node("/table0")
 
         if common.verbose:
-            print "var2 col shape:", table.cols.var2.shape
-            print "Should be:", table.cols.var2[:].shape
+            print("var2 col shape:", table.cols.var2.shape)
+            print("Should be:", table.cols.var2[:].shape)
         self.assertEqual(table.cols.var2.shape, table.cols.var2[:].shape)
 
     def test02_AppendRows(self):
-        """Checking whether appending record rows works or not"""
+        """Checking whether appending record rows works or not."""
 
         # Now, open it, but in "append" mode
         self.fileh = open_file(self.file, mode="a")
         self.rootgroup = self.fileh.root
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_AppendRows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_AppendRows..." % self.__class__.__name__)
 
         # Get a table
         table = self.fileh.get_node("/group0/table1")
         # Get their row object
         row = table.row
         if common.verbose:
-            print "Nrows in old", table._v_pathname, ":", table.nrows
-            print "Record Format ==>", table.description._v_nested_formats
-            print "Record Size ==>", table.rowsize
+            print("Nrows in old", table._v_pathname, ":", table.nrows)
+            print("Record Format ==>", table.description._v_nested_formats)
+            print("Record Size ==>", table.rowsize)
         # Append some rows
         for i in xrange(self.appendrows):
             s = '%04d' % (self.appendrows - i)
@@ -426,11 +428,11 @@ class BasicTestCase(common.PyTablesTestCase):
 
     # CAVEAT: The next test only works for tables with rows < 2**15
     def test03_endianess(self):
-        """Checking if table is endianess aware"""
+        """Checking if table is endianness aware."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_endianess..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_endianess..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -439,10 +441,10 @@ class BasicTestCase(common.PyTablesTestCase):
         # Read the records and select the ones with "var3" column less than 20
         result = [r['var2'] for r in table.iterrows() if r['var3'] < 20]
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
-            print "On-disk byteorder ==>", table.byteorder
-            print "Last record in table ==>", rec
-            print "Total selected records in table ==>", len(result)
+            print("Nrows in", table._v_pathname, ":", table.nrows)
+            print("On-disk byteorder ==>", table.byteorder)
+            print("Last record in table ==>", rec)
+            print("Total selected records in table ==>", len(result))
         nrows = self.expectedrows - 1
         r = list(table.iterrows())[-1]
         self.assertEqual((r['var1'][0][0], r['var3']), (b"0001", nrows))
@@ -636,7 +638,8 @@ class BasicRangeTestCase(unittest.TestCase):
             for nrec in range(len(recarray)):
                 if recarray['var2'][nrec][0][0] < self.nrows and 0 < self.step:
                     result.append(recarray['var2'][nrec][0][0])
-                elif recarray['var2'][nrec][0][0] > self.nrows and 0 > self.step:
+                elif (recarray['var2'][nrec][0][0] > self.nrows and
+                        0 > self.step):
                     result.append(recarray['var2'][nrec][0][0])
         elif self.checkgetCol:
             column = table.read(self.start, self.stop, self.step, 'var2')
@@ -648,11 +651,19 @@ class BasicRangeTestCase(unittest.TestCase):
                     result.append(column[nrec][0][0])  # *-*
         else:
             if 0 < self.step:
-                result = [r['var2'][0][0] for r in table.iterrows(self.start, 
-                          self.stop, self.step) if r['var2'][0][0] < self.nrows]
+                result = [
+                    r['var2'][0][0] for r in table.iterrows(self.start,
+                                                            self.stop,
+                                                            self.step)
+                    if r['var2'][0][0] < self.nrows
+                ]
             elif 0 > self.step:
-                result = [r['var2'][0][0] for r in table.iterrows(self.start, 
-                          self.stop, self.step) if r['var2'][0][0] > self.nrows]
+                result = [
+                    r['var2'][0][0] for r in table.iterrows(self.start,
+                                                            self.stop,
+                                                            self.step)
+                    if r['var2'][0][0] > self.nrows
+                ]
 
         if self.start < 0:
             startr = self.expectedrows + self.start
@@ -675,40 +686,45 @@ class BasicRangeTestCase(unittest.TestCase):
             stopr = self.nrows
 
         if common.verbose:
-            print "Nrows in", table._v_pathname, ":", table.nrows
+            print("Nrows in", table._v_pathname, ":", table.nrows)
             if reslength:
                 if self.checkrecarray:
-                    print "Last record *read* in recarray ==>", recarray[-1]
+                    print("Last record *read* in recarray ==>", recarray[-1])
                 elif self.checkgetCol:
-                    print "Last value *read* in getCol ==>", column[-1]
+                    print("Last value *read* in getCol ==>", column[-1])
                 else:
-                    print "Last record *read* in table range ==>", rec
-            print "Total number of selected records ==>", len(result)
-            print "Selected records:\n", result
-            print "Selected records should look like:\n", \
-                  range(startr, stopr, self.step)
-            print "start, stop, step ==>", startr, stopr, self.step
+                    print("Last record *read* in table range ==>", rec)
+            print("Total number of selected records ==>", len(result))
+            print("Selected records:\n", result)
+            print("Selected records should look like:\n",
+                  range(startr, stopr, self.step))
+            print("start, stop, step ==>", startr, stopr, self.step)
 
         self.assertEqual(result, range(startr, stopr, self.step))
         if not (self.checkrecarray or self.checkgetCol):
             if startr < stopr and 0 < self.step:
-                r = [r['var2'] for r in table.iterrows(self.start, self.stop, self.step)
+                r = [r['var2'] for r in table.iterrows(self.start, self.stop,
+                                                       self.step)
                      if r['var2'][0][0] < self.nrows][-1]
                 if self.nrows > self.expectedrows:
-                    self.assertEqual(r[0][0],
-                                     range(self.start, self.stop, self.step)[-1])
+                    self.assertEqual(
+                        r[0][0],
+                        range(self.start, self.stop, self.step)[-1])
                 else:
                     self.assertEqual(r[0][0],
                                      range(startr, stopr, self.step)[-1])
             elif startr > stopr and 0 > self.step:
-                r = [r['var2'] for r in table.iterrows(self.start, self.stop, self.step)
+                r = [r['var2'] for r in table.iterrows(self.start, self.stop,
+                                                       self.step)
                      if r['var2'][0][0] > self.nrows][0]
                 if self.nrows < self.expectedrows:
-                    self.assertEqual(r[0][0],
-                                     range(self.start, self.stop or -1, self.step)[0])
+                    self.assertEqual(
+                        r[0][0],
+                        range(self.start, self.stop or -1, self.step)[0])
                 else:
-                    self.assertEqual(r[0][0],
-                                     range(startr, stopr or -1 , self.step)[0])
+                    self.assertEqual(
+                        r[0][0],
+                        range(startr, stopr or -1, self.step)[0])
 
         # Close the file
         self.fileh.close()
@@ -717,8 +733,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case1)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 2 * step
         self.nrows = 21
@@ -733,8 +749,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case1)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 2 * step
         self.nrows = 21
@@ -749,8 +765,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case2)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 10 * step
         self.nrows = 21
@@ -765,8 +781,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case3)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_range..." % self.__class__.__name__)
 
         # Case where step < nrowsinbuf < 1.1 * step
         self.nrows = self.expectedrows
@@ -781,8 +797,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case4)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_range..." % self.__class__.__name__)
 
         # Case where step == nrowsinbuf
         self.nrows = self.expectedrows
@@ -797,8 +813,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case5)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_range..." % self.__class__.__name__)
 
         # Case where step > 1.1 * nrowsinbuf
         self.nrows = 21
@@ -813,8 +829,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case6)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06_range..." % self.__class__.__name__)
 
         # Case where step > 3 * nrowsinbuf
         self.nrows = 3
@@ -829,8 +845,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case7)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_range..." % self.__class__.__name__)
 
         # Case where start == stop
         self.nrows = 2
@@ -845,8 +861,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case8)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08_range..." % self.__class__.__name__)
 
         # Case where start > stop
         self.nrows = 2
@@ -861,8 +877,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case9)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test09_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test09_range..." % self.__class__.__name__)
 
         # Case where stop = None
         self.nrows = 100
@@ -877,8 +893,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case10)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test10_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test10_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop = 0
         self.nrows = self.expectedrows
@@ -895,8 +911,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case11)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test11_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test11_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop < 0
         self.nrows = self.expectedrows
@@ -913,8 +929,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case12)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test12_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test12_range..." % self.__class__.__name__)
 
         # Case where start < 0 and stop < 0 and start > stop
         self.nrows = self.expectedrows
@@ -931,8 +947,8 @@ class BasicRangeTestCase(unittest.TestCase):
         """Checking ranges in table iterators (case13)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test13_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test13_range..." % self.__class__.__name__)
 
         # Case where step < 0
         self.step = -11
@@ -941,7 +957,7 @@ class BasicRangeTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
+                print("\nGreat! The expected ValueError was caught!")
             self.fileh.close()
         #else:
         #    self.fail("expected a ValueError")
@@ -953,7 +969,7 @@ class BasicRangeTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
+                print("\nGreat! The expected ValueError was caught!")
             self.fileh.close()
         #else:
         #    self.fail("expected a ValueError")
@@ -974,8 +990,9 @@ class getColRangeTestCase(BasicRangeTestCase):
         """Checking non-existing Field in getCol method """
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_nonexistentField..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_nonexistentField..." %
+                  self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -987,7 +1004,7 @@ class getColRangeTestCase(BasicRangeTestCase):
         except KeyError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next KeyError was catched!"
+                print("\nGreat! The expected KeyError was caught!")
         else:
             self.fail("expected a KeyError")
 
@@ -1121,8 +1138,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one column (single column version, list)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08a..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08a..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -1148,8 +1165,8 @@ class RecArrayIO(unittest.TestCase):
         # Read the modified table
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1160,8 +1177,8 @@ class RecArrayIO(unittest.TestCase):
         "Checking modifying one column (single column version, recarray)"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08b..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08b..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -1189,8 +1206,8 @@ class RecArrayIO(unittest.TestCase):
         # Read the modified table
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1198,12 +1215,12 @@ class RecArrayIO(unittest.TestCase):
         os.remove(file)
 
     def test08b2(self):
-        """Checking modifying one column (single column version,
-        recarray, modify_column)"""
+        """Checking modifying one column (single column version, recarray,
+        modify_column)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test08b2..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test08b2..." % self.__class__.__name__)
 
         file = tempfile.mktemp(".h5")
         fileh = open_file(file, "w")
@@ -1231,8 +1248,8 @@ class RecArrayIO(unittest.TestCase):
         # Read the modified table
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1288,10 +1305,10 @@ class DefaultValues(unittest.TestCase):
         # This generates too much output. Activate only when
         # self.nrowsinbuf is very small (<10)
         if common.verbose and 1:
-            print "Table values:"
-            print r2
-            print "Record values:"
-            print r
+            print("Table values:")
+            print(r2)
+            print("Record values:")
+            print(r)
 
         # Both checks do work, however, tostring() seems more stringent.
         self.assertEqual(r.tostring(), r2.tostring())
@@ -1344,8 +1361,8 @@ class ShapeTestCase(unittest.TestCase):
         table = self.fileh.root.table
 
         if common.verbose:
-            print "The values look like:", table.cols.var0[:]
-            print "They should look like:", [1]
+            print("The values look like:", table.cols.var0[:])
+            print("They should look like:", [1])
 
         # The real check
         self.assertEqual(table.cols.var0[:].tolist(), [1])
@@ -1359,8 +1376,8 @@ class ShapeTestCase(unittest.TestCase):
         table = self.fileh.root.table
 
         if common.verbose:
-            print "The values look like:", table.cols.var1[:]
-            print "They should look like:", [[1]]
+            print("The values look like:", table.cols.var1[:])
+            print("They should look like:", [[1]])
 
         # The real check
         self.assertEqual(table.cols.var1[:].tolist(), [[1]])
@@ -1374,8 +1391,8 @@ class ShapeTestCase(unittest.TestCase):
         table = self.fileh.root.table
 
         if common.verbose:
-            print "The values look like:", table.cols.var2[:]
-            print "They should look like:", [[1, 1]]
+            print("The values look like:", table.cols.var2[:])
+            print("They should look like:", [[1, 1]])
 
         # The real check
         self.assertEqual(table.cols.var2[:].tolist(), [[1, 1]])
@@ -1390,8 +1407,8 @@ class ShapeTestCase(unittest.TestCase):
         table = self.fileh.root.table
 
         if common.verbose:
-            print "The values look like:", table.cols.var3[:]
-            print "They should look like:", [[[0, 0], [1, 1]]]
+            print("The values look like:", table.cols.var3[:])
+            print("They should look like:", [[[0, 0], [1, 1]]])
 
         # The real check
         self.assertEqual(table.cols.var3[:].tolist(), [[[0, 0], [1, 1]]])
@@ -1447,8 +1464,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1479,8 +1496,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1513,8 +1530,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1548,8 +1565,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1583,8 +1600,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1615,8 +1632,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1647,8 +1664,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1673,8 +1690,8 @@ class setItem(common.PyTablesTestCase):
         except NotImplementedError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next NotImplementedError was catched!"
-                print value
+                print("\nGreat! The expected NotImplementedError was caught!")
+                print(value)
         else:
             self.fail("expected a NotImplementedError")
 
@@ -1704,8 +1721,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1736,8 +1753,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1771,8 +1788,8 @@ class setItem(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1840,8 +1857,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1877,8 +1894,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1914,8 +1931,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -1951,8 +1968,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertTrue(table.nrows, 4)
 
@@ -1985,8 +2002,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2019,8 +2036,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2054,8 +2071,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, 4)
 
@@ -2098,8 +2115,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -2142,8 +2159,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -2192,8 +2209,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
@@ -2242,8 +2259,8 @@ class updateRow(common.PyTablesTestCase):
             table.nrowsinbuf = self.buffersize  # set buffer value
         r2 = table.read()
         if common.verbose:
-            print "Original table-->", repr(r2)
-            print "Should look like-->", repr(r1)
+            print("Original table-->", repr(r2))
+            print("Should look like-->", repr(r1))
         self.assertEqual(r1.tostring(), r2.tostring())
         self.assertEqual(table.nrows, nrows)
 
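test_tablesMD.py above and test_timetype.py below both get the same opening change as the rest of this patch: a from __future__ import print_function line, after which every converted print(...) call parses the same way on Python 2 and Python 3. A minimal sketch of why the import matters (verbose is a stand-in for tables.tests.common.verbose):

from __future__ import print_function  # future imports must precede other code

verbose = True  # stand-in for tables.tests.common.verbose

if verbose:
    # With the future import this prints two space-separated fields on both
    # Python 2 and 3; without it, Python 2 parses the parentheses as a tuple
    # and prints ('\n', '-=-=-=...') instead.
    print('\n', '-=' * 30)
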
diff --git a/tables/tests/test_timetype.py b/tables/tests/test_timetype.py
index eb78216..3fdeb33 100644
--- a/tables/tests/test_timetype.py
+++ b/tables/tests/test_timetype.py
@@ -12,6 +12,7 @@
 
 """Unit test for the Time datatypes."""
 
+from __future__ import print_function
 import unittest
 import tempfile
 import os
@@ -296,8 +297,8 @@ class CompareTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(0, nrows * 2, dtype=numpy.int32) + 0.012
         orig_val.shape = (nrows, 1, 2)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", arr
+            print("Original values:", orig_val)
+            print("Retrieved values:", arr)
         self.assertTrue(allequal(arr, orig_val),
                         "Stored and retrieved values do not match.")
 
@@ -358,8 +359,8 @@ class CompareTestCase(common.PyTablesTestCase):
         # Time32 column.
         orig_val = numpy.arange(nrows, dtype=numpy.int32)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", recarr['t32col'][:]
+            print("Original values:", orig_val)
+            print("Retrieved values:", recarr['t32col'][:])
         self.assertTrue(numpy.alltrue(recarr['t32col'][:] == orig_val),
                         "Stored and retrieved values do not match.")
 
@@ -367,8 +368,8 @@ class CompareTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(0, nrows * 2, dtype=numpy.int32) + 0.012
         orig_val.shape = (nrows, 2)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", recarr['t64col'][:]
+            print("Original values:", orig_val)
+            print("Retrieved values:", recarr['t64col'][:])
         self.assertTrue(allequal(recarr['t64col'][:], orig_val, numpy.float64),
                         "Stored and retrieved values do not match.")
 
@@ -420,8 +421,8 @@ class CompareTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(0, nrows * 2, dtype=numpy.int32) + 0.012
         orig_val.shape = (nrows, 2)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", arr
+            print("Original values:", orig_val)
+            print("Retrieved values:", arr)
         self.assertTrue(allequal(arr, orig_val),
                         "Stored and retrieved values do not match.")
 
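The Time32/Time64 comparisons above amount to round-tripping POSIX timestamps through a table; a hedged sketch using the tests' own column names (the file name is invented):

    import numpy
    import tables

    class Times(tables.IsDescription):
        t32col = tables.Time32Col()      # 4-byte integer seconds
        t64col = tables.Time64Col()      # 8-byte seconds with a fraction

    with tables.open_file("times.h5", "w") as h5file:
        table = h5file.create_table("/", "times", Times)
        row = table.row
        for i in range(5):
            row["t32col"] = i
            row["t64col"] = i + 0.012    # the fraction must survive
            row.append()
        table.flush()
        recarr = table.read()
        assert numpy.allclose(recarr["t64col"], numpy.arange(5) + 0.012)
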
@@ -484,16 +485,16 @@ class UnalignedTestCase(common.PyTablesTestCase):
         # Int8 column.
         orig_val = numpy.arange(nrows, dtype=numpy.int8)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", recarr['i8col'][:]
+            print("Original values:", orig_val)
+            print("Retrieved values:", recarr['i8col'][:])
         self.assertTrue(numpy.alltrue(recarr['i8col'][:] == orig_val),
                         "Stored and retrieved values do not match.")
 
         # Time32 column.
         orig_val = numpy.arange(nrows, dtype=numpy.int32)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", recarr['t32col'][:]
+            print("Original values:", orig_val)
+            print("Retrieved values:", recarr['t32col'][:])
         self.assertTrue(numpy.alltrue(recarr['t32col'][:] == orig_val),
                         "Stored and retrieved values do not match.")
 
@@ -501,8 +502,8 @@ class UnalignedTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(0, nrows * 2, dtype=numpy.int32) + 0.012
         orig_val.shape = (nrows, 2)
         if common.verbose:
-            print "Original values:", orig_val
-            print "Retrieved values:", recarr['t64col'][:]
+            print("Original values:", orig_val)
+            print("Retrieved values:", recarr['t64col'][:])
         self.assertTrue(allequal(recarr['t64col'][:], orig_val, numpy.float64),
                         "Stored and retrieved values do not match.")
 
@@ -529,8 +530,8 @@ class BigEndianTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(start, start + nrows, dtype=numpy.int32)
 
         if common.verbose:
-            print "Retrieved values:", earr
-            print "Should look like:", orig_val
+            print("Retrieved values:", earr)
+            print("Should look like:", orig_val)
         self.assertTrue(numpy.alltrue(earr == orig_val),
                         "Retrieved values do not match the expected values.")
 
@@ -546,8 +547,8 @@ class BigEndianTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(start, start + nrows, dtype=numpy.float64)
 
         if common.verbose:
-            print "Retrieved values:", earr
-            print "Should look like:", orig_val
+            print("Retrieved values:", earr)
+            print("Should look like:", orig_val)
         self.assertTrue(numpy.allclose(earr, orig_val, rtol=1.e-15),
                         "Retrieved values do not match the expected values.")
 
@@ -564,8 +565,8 @@ class BigEndianTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(start, start + nrows, dtype=numpy.int32)
 
         if common.verbose:
-            print "Retrieved values:", t32
-            print "Should look like:", orig_val
+            print("Retrieved values:", t32)
+            print("Should look like:", orig_val)
         self.assertTrue(numpy.alltrue(t32 == orig_val),
                         "Retrieved values do not match the expected values.")
 
@@ -582,8 +583,8 @@ class BigEndianTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(start, start + nrows, dtype=numpy.float64)
 
         if common.verbose:
-            print "Retrieved values:", t64
-            print "Should look like:", orig_val
+            print("Retrieved values:", t64)
+            print("Should look like:", orig_val)
         self.assertTrue(numpy.allclose(t64, orig_val, rtol=1.e-15),
                         "Retrieved values do not match the expected values.")
 
@@ -602,8 +603,8 @@ class BigEndianTestCase(common.PyTablesTestCase):
         orig_val = numpy.arange(start, start + nrows, dtype=numpy.float64)
 
         if common.verbose:
-            print "Retrieved values:", t64
-            print "Should look like:", orig_val
+            print("Retrieved values:", t64)
+            print("Should look like:", orig_val)
         self.assertTrue(numpy.allclose(t64, orig_val, rtol=1.e-15),
                         "Retrieved values do not match the expected values.")
 
diff --git a/tables/tests/test_tree.py b/tables/tests/test_tree.py
index 1d8265b..e0b88ba 100644
--- a/tables/tests/test_tree.py
+++ b/tables/tests/test_tree.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import warnings
 import unittest
@@ -89,8 +90,8 @@ class TreeTestCase(unittest.TestCase):
         "Checking the File.get_node() with string node names"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_getNode..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_getNode..." % self.__class__.__name__)
 
         self.h5file = open_file(self.file, "r")
         nodelist = ['/', '/table0', '/group0/var1', '/group0/group1/var4']
@@ -101,7 +102,7 @@ class TreeTestCase(unittest.TestCase):
 
         self.assertEqual(nodenames, nodelist)
         if common.verbose:
-            print "get_node(pathname) test passed"
+            print("get_node(pathname) test passed")
         nodegroups = [
             '/', '/group0', '/group0/group1', '/group0/group1/group2']
         nodenames = ['var1', 'var4']
@@ -121,7 +122,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "get_node(groupname, name) test passed"
+            print("get_node(groupname, name) test passed")
         nodelist = ['/', '/group0', '/group0/group1', '/group0/group1/group2',
                     '/table0']
         nodenames = []
@@ -133,8 +134,8 @@ class TreeTestCase(unittest.TestCase):
             except LookupError:
                 if common.verbose:
                     (type, value, traceback) = sys.exc_info()
-                    print "\nGreat!, the next LookupError was catched!"
-                    print value
+                    print("\nGreat!, the next LookupError was caught!")
+                    print(value)
             else:
                 nodenames.append(object._v_pathname)
                 groupobjects.append(object)
@@ -143,7 +144,7 @@ class TreeTestCase(unittest.TestCase):
                          ['/', '/group0', '/group0/group1',
                           '/group0/group1/group2'])
         if common.verbose:
-            print "get_node(groupname, classname='Group') test passed"
+            print("get_node(groupname, classname='Group') test passed")
 
         # Reset the warning
         # warnings.filterwarnings("default", category=UserWarning)
@@ -164,14 +165,15 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/var1', '/group0/var4',
                           '/group0/group1/var1', '/group0/group1/var4'])
         if common.verbose:
-            print "get_node(groupobject, name, classname='Array') test passed"
+            print("get_node(groupobject, name, classname='Array') test passed")
 
     def test01_getNodeClass(self):
         "Checking the File.get_node() with instances"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_getNodeClass..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_getNodeClass..." %
+                  self.__class__.__name__)
 
         self.h5file = open_file(self.file, "r")
         # These three ways of using get_node should return a table instance
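In sketch form, the three get_node() spellings the comment refers to look like this against a throwaway file (node names mirror the test fixture):

    import tables

    with tables.open_file("tree.h5", "w") as h5file:
        group0 = h5file.create_group("/", "group0")
        h5file.create_array(group0, "var1", [1, 2, 3])

    with tables.open_file("tree.h5", "r") as h5file:
        a = h5file.get_node("/group0/var1")            # full pathname
        b = h5file.get_node("/group0", "var1")         # where + name
        c = h5file.get_node("/group0", "var1",
                            classname="Array")         # class-checked
        assert a is b is c    # the node cache hands back one instance
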
@@ -195,8 +197,8 @@ class TreeTestCase(unittest.TestCase):
         "Checking the File.list_nodes() method"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_listNodes..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_listNodes..." % self.__class__.__name__)
 
         # Make the warnings raise an error
         # warnings.filterwarnings("error", category=UserWarning)
@@ -224,7 +226,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1', '/group0/table1',
                           '/group0/var1', '/group0/var4'])
         if common.verbose:
-            print "list_nodes(pathname) test passed"
+            print("list_nodes(pathname) test passed")
 
         nodenames = []
         for node in objects:
@@ -243,7 +245,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "list_nodes(groupobject) test passed"
+            print("list_nodes(groupobject) test passed")
 
         nodenames = []
         for node in objects:
@@ -252,8 +254,8 @@ class TreeTestCase(unittest.TestCase):
             except TypeError:
                 if common.verbose:
                     (type, value, traceback) = sys.exc_info()
-                    print "\nGreat!, the next TypeError was catched!"
-                    print value
+                    print("\nGreat!, the next TypeError was caught!")
+                    print(value)
             else:
                 for object in objectlist:
                     nodenames.append(object._v_pathname)
@@ -265,7 +267,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "list_nodes(groupobject, classname = 'Leaf') test passed"
+            print("list_nodes(groupobject, classname = 'Leaf') test passed")
 
         nodenames = []
         for node in objects:
@@ -274,8 +276,8 @@ class TreeTestCase(unittest.TestCase):
             except TypeError:
                 if common.verbose:
                     (type, value, traceback) = sys.exc_info()
-                    print "\nGreat!, the next TypeError was catched!"
-                    print value
+                    print("\nGreat!, the next TypeError was caught!")
+                    print(value)
             else:
                 for object in objectlist:
                     nodenames.append(object._v_pathname)
@@ -284,7 +286,7 @@ class TreeTestCase(unittest.TestCase):
                          ['/group0/table1', '/group0/group1/table2'])
 
         if common.verbose:
-            print "list_nodes(groupobject, classname = 'Table') test passed"
+            print("list_nodes(groupobject, classname = 'Table') test passed")
 
         # Reset the warning
         # warnings.filterwarnings("default", category=UserWarning)
@@ -293,8 +295,8 @@ class TreeTestCase(unittest.TestCase):
         "Checking the File.iter_nodes() method"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_iterNodes..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_iterNodes..." % self.__class__.__name__)
 
         self.h5file = open_file(self.file, "r")
 
@@ -320,7 +322,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1', '/group0/table1',
                           '/group0/var1', '/group0/var4'])
         if common.verbose:
-            print "iter_nodes(pathname) test passed"
+            print("iter_nodes(pathname) test passed")
 
         nodenames = []
         for node in objects:
@@ -339,7 +341,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "iter_nodes(groupobject) test passed"
+            print("iter_nodes(groupobject) test passed")
 
         nodenames = []
         for node in objects:
@@ -348,8 +350,8 @@ class TreeTestCase(unittest.TestCase):
             except TypeError:
                 if common.verbose:
                     (type, value, traceback) = sys.exc_info()
-                    print "\nGreat!, the next TypeError was catched!"
-                    print value
+                    print("\nGreat!, the next TypeError was caught!")
+                    print(value)
             else:
                 for object in objectlist:
                     nodenames.append(object._v_pathname)
@@ -361,7 +363,7 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "iter_nodes(groupobject, classname = 'Leaf') test passed"
+            print("iter_nodes(groupobject, classname = 'Leaf') test passed")
 
         nodenames = []
         for node in objects:
@@ -370,8 +372,8 @@ class TreeTestCase(unittest.TestCase):
             except TypeError:
                 if common.verbose:
                     (type, value, traceback) = sys.exc_info()
-                    print "\nGreat!, the next TypeError was catched!"
-                    print value
+                    print("\nGreat!, the next TypeError was caught!")
+                    print(value)
             else:
                 for object in objectlist:
                     nodenames.append(object._v_pathname)
@@ -380,7 +382,7 @@ class TreeTestCase(unittest.TestCase):
                          ['/group0/table1', '/group0/group1/table2'])
 
         if common.verbose:
-            print "iter_nodes(groupobject, classname = 'Table') test passed"
+            print("iter_nodes(groupobject, classname = 'Table') test passed")
 
         # Reset the warning
         # warnings.filterwarnings("default", category=UserWarning)
@@ -389,8 +391,9 @@ class TreeTestCase(unittest.TestCase):
         "Checking the File.walk_groups() method"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_TraverseTree..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_TraverseTree..." %
+                  self.__class__.__name__)
 
         self.h5file = open_file(self.file, "r")
         groups = []
@@ -407,15 +410,16 @@ class TreeTestCase(unittest.TestCase):
                          ["/", "/group0", "/group0/group1",
                           "/group0/group1/group2"])
 
-        self.assertEqual(tables,
-                         ["/table0", "/group0/table1", "/group0/group1/table2"])
+        self.assertEqual(
+            tables,
+            ["/table0", "/group0/table1", "/group0/group1/table2"])
 
         self.assertEqual(arrays,
                          ['/var1', '/var4',
                           '/group0/var1', '/group0/var4',
                           '/group0/group1/var1', '/group0/group1/var4'])
         if common.verbose:
-            print "walk_groups() test passed"
+            print("walk_groups() test passed")
 
         groups = []
         tables = []
@@ -436,14 +440,14 @@ class TreeTestCase(unittest.TestCase):
                          '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "walk_groups(pathname) test passed"
+            print("walk_groups(pathname) test passed")
 
     def test04_walkNodes(self):
         "Checking File.walk_nodes"
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_walkNodes..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_walkNodes..." % self.__class__.__name__)
 
         self.h5file = open_file(self.file, "r")
 
@@ -480,12 +484,13 @@ class TreeTestCase(unittest.TestCase):
                           '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "File.__iter__() and Group.__iter__ test passed"
+            print("File.__iter__() and Group.__iter__ test passed")
 
         groups = []
         tables = []
         arrays = []
-        for group in self.h5file.walk_nodes("/group0/group1", classname="Group"):
+        for group in self.h5file.walk_nodes("/group0/group1",
+                                            classname="Group"):
             groups.append(group._v_pathname)
             for table in group._f_walknodes('Table'):
                 tables.append(table._v_pathname)
@@ -501,12 +506,11 @@ class TreeTestCase(unittest.TestCase):
                          '/group0/group1/var1', '/group0/group1/var4'])
 
         if common.verbose:
-            print "walk_nodes(pathname, classname) test passed"
+            print("walk_nodes(pathname, classname) test passed")
 
 
 class DeepTreeTestCase(unittest.TestCase):
-    """Checks for deep hierarchy levels in PyTables trees.
-    """
+    """Checks for deep hierarchy levels in PyTables trees."""
 
     def setUp(self):
         # Here we put a more conservative limit to deal with more platforms
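Taken together, the tests above traverse the tree with walk_groups() and walk_nodes(); a minimal sketch over a toy hierarchy (names are illustrative):

    import tables

    with tables.open_file("walk.h5", "w") as h5file:
        g = h5file.create_group("/", "group0")
        h5file.create_array(g, "var1", [1, 1])

    with tables.open_file("walk.h5", "r") as h5file:
        for group in h5file.walk_groups("/"):          # groups only
            print(group._v_pathname)                   # /, then /group0
        for leaf in h5file.walk_nodes("/", classname="Array"):
            print(leaf._v_pathname)                    # /group0/var1
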
@@ -518,19 +522,19 @@ class DeepTreeTestCase(unittest.TestCase):
         else:
             self.maxdepth = 64  # This should be safe for most machines
         if common.verbose:
-            print "Maximum depth tested :", self.maxdepth
+            print("Maximum depth tested :", self.maxdepth)
 
         # Open a new empty HDF5 file
         self.file = tempfile.mktemp(".h5")
         fileh = open_file(self.file, mode="w")
         group = fileh.root
         if common.verbose:
-            print "Depth writing progress: ",
+            print("Depth writing progress: ", end=' ')
         # Iterate until maxdepth
         for depth in range(self.maxdepth):
             # Save it on the HDF5 file
             if common.verbose:
-                print "%3d," % (depth),
+                print("%3d," % (depth), end=' ')
             # Create a couple of arrays here
             fileh.create_array(group, 'array', [1, 1], "depth: %d" % depth)
             fileh.create_array(group, 'array2', [1, 1], "depth: %d" % depth)
@@ -550,11 +554,11 @@ class DeepTreeTestCase(unittest.TestCase):
         fileh = open_file(file, mode="r")
         group = fileh.root
         if common.verbose:
-            print "\nDepth reading progress: ",
+            print("\nDepth reading progress: ", end=' ')
         # Get the metadata on the previously saved arrays
         for depth in range(self.maxdepth):
             if common.verbose:
-                print "%3d," % (depth),
+                print("%3d," % (depth), end=' ')
             # Check the contents
             self.assertEqual(group.array[:], [1, 1])
             self.assertTrue("array2" in group)
@@ -562,7 +566,7 @@ class DeepTreeTestCase(unittest.TestCase):
             # Iterate over the next group
             group = fileh.get_node(group, 'group' + str(depth))
         if common.verbose:
-            print  # This flush the stdout buffer
+            print()  # This flushes the stdout buffer
         fileh.close()
 
     def test00_deepTree(self):
@@ -575,7 +579,7 @@ class DeepTreeTestCase(unittest.TestCase):
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(file2, mode="w")
         if common.verbose:
-            print "\nCopying deep tree..."
+            print("\nCopying deep tree...")
         fileh.copy_node(fileh.root, fileh2.root, recursive=True)
         fileh.close()
         fileh2.close()
@@ -588,7 +592,7 @@ class DeepTreeTestCase(unittest.TestCase):
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(file2, mode="w", node_cache_slots=10)
         if common.verbose:
-            print "\nCopying deep tree..."
+            print("\nCopying deep tree...")
         fileh.copy_node(fileh.root, fileh2.root, recursive=True)
         fileh.close()
         fileh2.close()
@@ -601,7 +605,7 @@ class DeepTreeTestCase(unittest.TestCase):
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(file2, mode="w", node_cache_slots=0)
         if common.verbose:
-            print "\nCopying deep tree..."
+            print("\nCopying deep tree...")
         fileh.copy_node(fileh.root, fileh2.root, recursive=True)
         fileh.close()
         fileh2.close()
@@ -617,7 +621,7 @@ class DeepTreeTestCase(unittest.TestCase):
         file2 = tempfile.mktemp(".h5")
         fileh2 = open_file(file2, mode="w", node_cache_slots=-256)
         if common.verbose:
-            print "\nCopying deep tree..."
+            print("\nCopying deep tree...")
         fileh.copy_node(fileh.root, fileh2.root, recursive=True)
         fileh.close()
         fileh2.close()
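The node_cache_slots values swept by these copies select how alive nodes are cached; a hedged summary of the settings (the precise negative-value policy is the one described in the open_file() docs):

    import tables

    # positive: an LRU cache with that many slots (the default is 64)
    tables.open_file("cache.h5", "w", node_cache_slots=10).close()
    # zero: node caching disabled altogether
    tables.open_file("cache.h5", "w", node_cache_slots=0).close()
    # negative: abs(N) slots under the alternative policy in the docs
    tables.open_file("cache.h5", "w", node_cache_slots=-256).close()
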
@@ -626,17 +630,17 @@ class DeepTreeTestCase(unittest.TestCase):
 
 
 class WideTreeTestCase(unittest.TestCase):
-    """Checks for maximum number of children for a Group.
-    """
+    """Checks for maximum number of children for a Group."""
 
     def test00_Leafs(self):
-        """Checking creation of large number of leafs (1024) per group
+        """Checking creation of large number of leafs (1024) per group.
+
+        Variable 'maxchildren' controls this check. PyTables supports
+        up to 4096 children per group, but this would take too much
+        memory (up to 64 MB) for testing purposes (maybe we can add a
+        test for big platforms). A 1024 children run takes up to 30 MB.
+        A 512 children test takes around 25 MB.
 
-        Variable 'maxchildren' controls this check. PyTables support
-        up to 4096 children per group, but this would take too much
-        memory (up to 64 MB) for testing purposes (may be we can add a
-        test for big platforms). A 1024 children run takes up to 30 MB.
-        A 512 children test takes around 25 MB.
         """
 
         import time
@@ -645,10 +649,10 @@ class WideTreeTestCase(unittest.TestCase):
         else:
             maxchildren = 256
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_wideTree..." % \
-                  self.__class__.__name__
-            print "Maximum number of children tested :", maxchildren
+            print('\n', '-=' * 30)
+            print("Running %s.test00_wideTree..." %
+                  self.__class__.__name__)
+            print("Maximum number of children tested :", maxchildren)
         # Open a new empty HDF5 file
         file = tempfile.mktemp(".h5")
         # file = "test_widetree.h5"
@@ -656,14 +660,14 @@ class WideTreeTestCase(unittest.TestCase):
         a = [1, 1]
         fileh = open_file(file, mode="w")
         if common.verbose:
-            print "Children writing progress: ",
+            print("Children writing progress: ", end=' ')
         for child in range(maxchildren):
             if common.verbose:
-                print "%3d," % (child),
+                print("%3d," % (child), end=' ')
             fileh.create_array(fileh.root, 'array' + str(child),
                                a, "child: %d" % child)
         if common.verbose:
-            print
+            print()
         # Close the file
         fileh.close()
 
@@ -672,13 +676,13 @@ class WideTreeTestCase(unittest.TestCase):
         # Open the previous HDF5 file in read-only mode
         fileh = open_file(file, mode="r")
         if common.verbose:
-            print "\nTime spent opening a file with %d arrays: %s s" % \
-                  (maxchildren, time.time()-t1)
-            print "\nChildren reading progress: ",
+            print("\nTime spent opening a file with %d arrays: %s s" %
+                  (maxchildren, time.time()-t1))
+            print("\nChildren reading progress: ", end=' ')
         # Get the metadata on the previously saved arrays
         for child in range(maxchildren):
             if common.verbose:
-                print "%3d," % (child),
+                print("%3d," % (child), end=' ')
             # Create an array for later comparison
             # Get the actual array
             array_ = getattr(fileh.root, 'array' + str(child))
@@ -686,20 +690,21 @@ class WideTreeTestCase(unittest.TestCase):
             # Arrays a and b must be equal
             self.assertEqual(a, b)
         if common.verbose:
-            print  # This flush the stdout buffer
+            print()  # This flushes the stdout buffer
         # Close the file
         fileh.close()
         # Then, delete the file
         os.remove(file)
 
     def test01_wideTree(self):
-        """Checking creation of large number of groups (1024) per group
+        """Checking creation of large number of groups (1024) per group.
+
+        Variable 'maxchildren' controls this check. PyTables supports
+        up to 4096 children per group, but this would take too much
+        memory (up to 64 MB) for testing purposes (maybe we can add a
+        test for big platforms). A 1024 children run takes up to 30 MB.
+        A 512 children test takes around 25 MB.
 
-        Variable 'maxchildren' controls this check. PyTables support
-        up to 4096 children per group, but this would take too much
-        memory (up to 64 MB) for testing purposes (may be we can add a
-        test for big platforms). A 1024 children run takes up to 30 MB.
-        A 512 children test takes around 25 MB.
         """
 
         import time
@@ -710,24 +715,24 @@ class WideTreeTestCase(unittest.TestCase):
             # for standard platforms
             maxchildren = 256
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_wideTree..." % \
-                  self.__class__.__name__
-            print "Maximum number of children tested :", maxchildren
+            print('\n', '-=' * 30)
+            print("Running %s.test00_wideTree..." %
+                  self.__class__.__name__)
+            print("Maximum number of children tested :", maxchildren)
         # Open a new empty HDF5 file
         file = tempfile.mktemp(".h5")
         # file = "test_widetree.h5"
 
         fileh = open_file(file, mode="w")
         if common.verbose:
-            print "Children writing progress: ",
+            print("Children writing progress: ", end=' ')
         for child in range(maxchildren):
             if common.verbose:
-                print "%3d," % (child),
+                print("%3d," % (child), end=' ')
             fileh.create_group(fileh.root, 'group' + str(child),
                                "child: %d" % child)
         if common.verbose:
-            print
+            print()
         # Close the file
         fileh.close()
 
@@ -735,19 +740,19 @@ class WideTreeTestCase(unittest.TestCase):
         # Open the previous HDF5 file in read-only mode
         fileh = open_file(file, mode="r")
         if common.verbose:
-            print "\nTime spent opening a file with %d groups: %s s" % \
-                  (maxchildren, time.time()-t1)
-            print "\nChildren reading progress: ",
+            print("\nTime spent opening a file with %d groups: %s s" %
+                  (maxchildren, time.time()-t1))
+            print("\nChildren reading progress: ", end=' ')
         # Get the metadata on the previously saved arrays
         for child in range(maxchildren):
             if common.verbose:
-                print "%3d," % (child),
+                print("%3d," % (child), end=' ')
             # Get the actual group
             group = getattr(fileh.root, 'group' + str(child))
             # Arrays a and b must be equal
             self.assertEqual(group._v_title, "child: %d" % child)
         if common.verbose:
-            print  # This flush the stdout buffer
+            print()  # This flushes the stdout buffer
         # Close the file
         fileh.close()
         # Then, delete the file
@@ -802,11 +807,13 @@ class HiddenTreeTestCase(unittest.TestCase):
         warnings.filterwarnings('ignore', category=DeprecationWarning)
 
         for vpath in self.visible:
-            self.assertTrue(vpath in objects,
-                            "Missing visible node ``%s`` from ``File.objects``." % vpath)
+            self.assertTrue(
+                vpath in objects,
+                "Missing visible node ``%s`` from ``File.objects``." % vpath)
         for hpath in self.hidden:
-            self.assertTrue(hpath not in objects,
-                            "Found hidden node ``%s`` in ``File.objects``." % hpath)
+            self.assertTrue(
+                hpath not in objects,
+                "Found hidden node ``%s`` in ``File.objects``." % hpath)
 
         warnings.filterwarnings('default', category=DeprecationWarning)
 
@@ -955,12 +962,11 @@ class HiddenTreeTestCase(unittest.TestCase):
 
 class CreateParentsTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
-    """
-    Test the ``createparents`` flag.
+    """Test the ``createparents`` flag.
+
+    These are mainly for the user interface.  More thorough tests on the
+    workings of the flag can be found in the ``test_do_undo.py`` module.
 
-    These are mainly for the user interface.  More thorough tests on
-    the workings of the flag can be found in the ``test_do_undo.py``
-    module.
     """
 
     filters = Filters(complevel=4)  # simply non-default
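For reference, the createparents flag this test case targets lets any node-creating call build missing intermediate groups on the fly; a minimal sketch with invented names:

    import tables

    with tables.open_file("parents.h5", "w") as h5file:
        h5file.create_array("/group0/group1", "arr", [1, 2],
                            createparents=True)   # makes both groups
        assert "/group0/group1/arr" in h5file
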
diff --git a/tables/tests/test_types.py b/tables/tests/test_types.py
index 366ee83..0341d47 100644
--- a/tables/tests/test_types.py
+++ b/tables/tests/test_types.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -23,15 +24,15 @@ class Record(IsDescription):
     var5 = Col.from_kind('float', itemsize=4)  # float  (single-precision)
     var6 = Col.from_kind('complex')  # double-precision
     var7 = Col.from_kind('complex', itemsize=8)  # single-precision
-    if hasattr(numpy, "float16"):
+    if "Float16Atom" in globals():
         var8 = Col.from_kind('float', itemsize=2)  # half-precision
-    if hasattr(numpy, "float96"):
+    if "Float96Atom" in globals():
         var9 = Col.from_kind('float', itemsize=12)  # extended-precision
-    if hasattr(numpy, "float128"):
+    if "Float128Atom" in globals():
         var10 = Col.from_kind('float', itemsize=16)  # extended-precision
-    if hasattr(numpy, "complex192"):
+    if "Complex192Atom" in globals():
         var11 = Col.from_kind('complex', itemsize=24)  # extended-precision
-    if hasattr(numpy, "complex256"):
+    if "Complex256Atom" in globals():
         var12 = Col.from_kind('complex', itemsize=32)  # extended-precision
 
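The switch from probing numpy to checking the star-imported globals() is a stricter feature test: the extended-precision Atom classes are only exported when PyTables can handle the type end-to-end, not merely when NumPy defines it. Outside a star-import module, the equivalent probe would be:

    import tables

    if hasattr(tables, "Float96Atom"):    # only exported when usable
        atom = tables.Float96Atom()
    else:
        atom = tables.Float64Atom()       # portable fallback
    print(atom.dtype)
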
 
@@ -60,7 +61,7 @@ class RangeTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test00_range(self):
-        """Testing the range check"""
+        """Testing the range check."""
         rec = self.table.row
         # Save a record
         i = self.maxshort
@@ -71,26 +72,27 @@ class RangeTestCase(unittest.TestCase):
         rec['var5'] = float(i)
         rec['var6'] = float(i)
         rec['var7'] = complex(i, i)
-        if hasattr(numpy, "float16"):
+        if "Float16Atom" in globals():
             rec['var8'] = float(i)
-        if hasattr(numpy, "float96"):
+        if "Float96Atom" in globals():
             rec['var9'] = float(i)
-        if hasattr(numpy, "float128"):
+        if "Float128Atom" in globals():
             rec['var10'] = float(i)
         try:
             rec.append()
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat!, the next ValueError was caught!")
+                print(value)
             pass
         else:
             if common.verbose:
-                print "\nNow, the range overflow no longer issues a ValueError"
+                print(
+                    "\nNow, the range overflow no longer issues a ValueError")
 
     def test01_type(self):
-        """Testing the type check"""
+        """Testing the type check."""
         rec = self.table.row
         # Save a record
         i = self.maxshort
@@ -103,19 +105,19 @@ class RangeTestCase(unittest.TestCase):
         except TypeError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next TypeError was catched!"
-                print value
+                print("\nGreat!, the next TypeError was caught!")
+                print(value)
             pass
         else:
-            print rec
+            print(rec)
             self.fail("expected a TypeError")
         rec['var6'] = float(i)
         rec['var7'] = complex(i, i)
-        if hasattr(numpy, "float16"):
+        if "Float16Atom" in globals():
             rec['var8'] = float(i)
-        if hasattr(numpy, "float96"):
+        if "Float96Atom" in globals():
             rec['var9'] = float(i)
-        if hasattr(numpy, "float128"):
+        if "Float128Atom" in globals():
             rec['var10'] = float(i)
 
 
@@ -123,35 +125,35 @@ class RangeTestCase(unittest.TestCase):
 class DtypeTestCase(common.TempFileMixin, common.PyTablesTestCase):
 
     def test00a_table(self):
-        """Check dtype accessor for Table objects"""
+        """Check dtype accessor for Table objects."""
         a = self.h5file.create_table('/', 'table', Record)
         self.assertEqual(a.dtype, a.description._v_dtype)
 
     def test00b_column(self):
-        """Check dtype accessor for Column objects"""
+        """Check dtype accessor for Column objects."""
         a = self.h5file.create_table('/', 'table', Record)
         c = a.cols.var3
         self.assertEqual(c.dtype, a.description._v_dtype['var3'])
 
     def test01_array(self):
-        """Check dtype accessor for Array objects"""
+        """Check dtype accessor for Array objects."""
         a = self.h5file.create_array('/', 'array', [1, 2])
         self.assertEqual(a.dtype, a.atom.dtype)
 
     def test02_carray(self):
-        """Check dtype accessor for CArray objects"""
+        """Check dtype accessor for CArray objects."""
         a = self.h5file.create_carray(
             '/', 'array', atom=FloatAtom(), shape=[1, 2])
         self.assertEqual(a.dtype, a.atom.dtype)
 
     def test03_carray(self):
-        """Check dtype accessor for EArray objects"""
+        """Check dtype accessor for EArray objects."""
         a = self.h5file.create_earray(
             '/', 'array', atom=FloatAtom(), shape=[0, 2])
         self.assertEqual(a.dtype, a.atom.dtype)
 
     def test04_vlarray(self):
-        """Check dtype accessor for VLArray objects"""
+        """Check dtype accessor for VLArray objects."""
         a = self.h5file.create_vlarray('/', 'array', FloatAtom())
         self.assertEqual(a.dtype, a.atom.dtype)
 
@@ -204,7 +206,7 @@ class ReadFloatTestCase(common.PyTablesTestCase):
 
     def test04_read_longdouble(self):
         dtype = "longdouble"
-        if hasattr(numpy, "float96") or hasattr(numpy, "float128"):
+        if "Float96Atom" in globals() or "Float128Atom" in globals():
             ds = getattr(self.fileh.root, dtype)
             self.assertFalse(isinstance(ds, UnImplemented))
             self.assertEqual(ds.shape, (self.nrows, self.ncols))
@@ -212,30 +214,36 @@ class ReadFloatTestCase(common.PyTablesTestCase):
             self.assertTrue(common.allequal(
                 ds.read(), self.values.astype(dtype)))
 
-            if hasattr(numpy, "float96"):
+            if "Float96Atom" in globals():
                 self.assertEqual(ds.dtype, "float96")
-            elif hasattr(numpy, "float128"):
+            elif "Float128Atom" in globals():
                 self.assertEqual(ds.dtype, "float128")
         else:
             # XXX: check
-            # ds = self.assertWarns(UserWarning,
-            #                       getattr, self.fileh.root, dtype)
-            # self.assertTrue(isinstance(ds, UnImplemented))
-
-            ds = getattr(self.fileh.root, dtype)
-            self.assertEqual(ds.dtype, "float64")
+            # the behavior depends on the HDF5 lib configuration
+            try:
+                ds = self.assertWarns(UserWarning,
+                                      getattr, self.fileh.root, dtype)
+                self.assertTrue(isinstance(ds, UnImplemented))
+            except AssertionError:
+                from tables.utilsextension import _broken_hdf5_long_double
+                if not _broken_hdf5_long_double():
+                    ds = getattr(self.fileh.root, dtype)
+                    self.assertEqual(ds.dtype, "float64")
 
     def test05_read_quadprecision_float(self):
-        # ds = self.assertWarns(UserWarning, getattr, self.fileh.root,
-        #                     "quadprecision")
-        # self.assertTrue(isinstance(ds, UnImplemented))
-
-        # NOTE: it would be nice to have some sort of message that warns
-        #       against the potential precision loss: the quad-precision
-        #       dataset actually uses 128 bits for each element, not just
-        #       80 bits (longdouble)
-        ds = self.fileh.root.quadprecision
-        self.assertEqual(ds.dtype, "longdouble")
+        # XXX: check
+        try:
+            ds = self.assertWarns(UserWarning, getattr, self.fileh.root,
+                                  "quadprecision")
+            self.assertTrue(isinstance(ds, UnImplemented))
+        except AssertionError:
+            # NOTE: it would be nice to have some sort of message that warns
+            #       against the potential precision loss: the quad-precision
+            #       dataset actually uses 128 bits for each element, not just
+            #       80 bits (longdouble)
+            ds = self.fileh.root.quadprecision
+            self.assertEqual(ds.dtype, "longdouble")
 
 
 class AtomTestCase(common.PyTablesTestCase):
diff --git a/tables/tests/test_vlarray.py b/tables/tests/test_vlarray.py
index 4810470..fd46bb0 100644
--- a/tables/tests/test_vlarray.py
+++ b/tables/tests/test_vlarray.py
@@ -1,5 +1,6 @@
 # -*- coding: latin-1 -*-
 
+from __future__ import print_function
 import sys
 import unittest
 import os
@@ -9,6 +10,7 @@ import tempfile
 import numpy
 import numpy.testing as npt
 
+import tables
 from tables import *
 from tables.tests import common
 from tables.tests.common import allequal
@@ -79,11 +81,11 @@ class BasicTestCase(unittest.TestCase):
         self.assertEqual(obj.atom.type, 'int32')
 
     def test01_read(self):
-        """Checking vlarray read"""
+        """Checking vlarray read."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_read..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_read..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -95,9 +97,9 @@ class BasicTestCase(unittest.TestCase):
         row = vlarray.read(0)[0]
         row2 = vlarray.read(2)[0]
         if common.verbose:
-            print "Flavor:", vlarray.flavor
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row
+            print("Flavor:", vlarray.flavor)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row)
 
         nrows = 5
         self.assertEqual(nrows, vlarray.nrows)
@@ -114,26 +116,27 @@ class BasicTestCase(unittest.TestCase):
 
         # Check filters:
         if self.compress != vlarray.filters.complevel and common.verbose:
-            print "Error in compress. Class:", self.__class__.__name__
-            print "self, vlarray:", self.compress, vlarray.filters.complevel
+            print("Error in compress. Class:", self.__class__.__name__)
+            print("self, vlarray:", self.compress, vlarray.filters.complevel)
         self.assertEqual(vlarray.filters.complevel, self.compress)
         if self.compress > 0 and which_lib_version(self.complib):
             self.assertEqual(vlarray.filters.complib, self.complib)
         if self.shuffle != vlarray.filters.shuffle and common.verbose:
-            print "Error in shuffle. Class:", self.__class__.__name__
-            print "self, vlarray:", self.shuffle, vlarray.filters.shuffle
+            print("Error in shuffle. Class:", self.__class__.__name__)
+            print("self, vlarray:", self.shuffle, vlarray.filters.shuffle)
         self.assertEqual(self.shuffle, vlarray.filters.shuffle)
         if self.fletcher32 != vlarray.filters.fletcher32 and common.verbose:
-            print "Error in fletcher32. Class:", self.__class__.__name__
-            print "self, vlarray:", self.fletcher32, vlarray.filters.fletcher32
+            print("Error in fletcher32. Class:", self.__class__.__name__)
+            print("self, vlarray:", self.fletcher32,
+                  vlarray.filters.fletcher32)
         self.assertEqual(self.fletcher32, vlarray.filters.fletcher32)
 
     def test02a_getitem(self):
         """Checking vlarray __getitem__ (slices)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02a_getitem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02a_getitem..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "r")
@@ -153,10 +156,10 @@ class BasicTestCase(unittest.TestCase):
             rows1 = rows[slc]
             rows1f = []
             if common.verbose:
-                print "Flavor:", vlarray.flavor
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "Original rows ==>", rows1
-                print "Rows read in vlarray ==>", rows2
+                print("Flavor:", vlarray.flavor)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("Original rows ==>", rows1)
+                print("Rows read in vlarray ==>", rows2)
 
             if self.flavor == "numpy":
                 for val in rows1:
@@ -170,8 +173,8 @@ class BasicTestCase(unittest.TestCase):
         """Checking vlarray __getitem__ (scalars)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_getitem..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_getitem..." % self.__class__.__name__)
 
         if self.flavor != "numpy":
             # This test is only valid for NumPy
@@ -189,20 +192,20 @@ class BasicTestCase(unittest.TestCase):
             rows2 = vlarray[slc]
             rows1 = rows[slc]
             if common.verbose:
-                print "Flavor:", vlarray.flavor
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "Original rows ==>", rows1
-                print "Rows read in vlarray ==>", rows2
+                print("Flavor:", vlarray.flavor)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("Original rows ==>", rows1)
+                print("Rows read in vlarray ==>", rows2)
 
             for i in range(len(rows1)):
                 self.assertTrue(allequal(rows2[i], rows1[i], self.flavor))
 
     def test03_append(self):
-        """Checking vlarray append"""
+        """Checking vlarray append."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_append..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_append..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         self.fileh = open_file(self.file, "a")
@@ -217,9 +220,9 @@ class BasicTestCase(unittest.TestCase):
         row2 = vlarray[2]
         row3 = vlarray[-1]
         if common.verbose:
-            print "Flavor:", vlarray.flavor
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row1
+            print("Flavor:", vlarray.flavor)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row1)
 
         nrows = 6
         self.assertEqual(nrows, vlarray.nrows)
@@ -239,6 +242,18 @@ class BasicTestCase(unittest.TestCase):
             self.assertEqual(row3, [7, 8, 9, 10])
         self.assertEqual(len(row3), 4)
 
+    def test04_get_row_size(self):
+        """Checking get_row_size method."""
+
+        self.fileh = open_file(self.file, "a")
+        vlarray = self.fileh.get_node("/vlarray1")
+
+        self.assertEqual(vlarray.get_row_size(0), 2 * vlarray.atom.size)
+        self.assertEqual(vlarray.get_row_size(1), 3 * vlarray.atom.size)
+        self.assertEqual(vlarray.get_row_size(2), 0 * vlarray.atom.size)
+        self.assertEqual(vlarray.get_row_size(3), 4 * vlarray.atom.size)
+        self.assertEqual(vlarray.get_row_size(4), 5 * vlarray.atom.size)
+
 
 class BasicNumPyTestCase(BasicTestCase):
     flavor = "numpy"
@@ -265,6 +280,36 @@ class BloscShuffleComprTestCase(BasicTestCase):
     complib = "blosc"
 
 
+class BloscBloscLZComprTestCase(BasicTestCase):
+    compress = 9
+    shuffle = 1
+    complib = "blosc:blosclz"
+
+
+class BloscLZ4ComprTestCase(BasicTestCase):
+    compress = 9
+    shuffle = 1
+    complib = "blosc:lz4"
+
+
+class BloscLZ4HCComprTestCase(BasicTestCase):
+    compress = 9
+    shuffle = 1
+    complib = "blosc:lz4hc"
+
+
+class BloscSnappyComprTestCase(BasicTestCase):
+    compress = 9
+    shuffle = 1
+    complib = "blosc:snappy"
+
+
+class BloscZlibComprTestCase(BasicTestCase):
+    compress = 9
+    shuffle = 1
+    complib = "blosc:zlib"
+
+
 class LZOComprTestCase(BasicTestCase):
     compress = 1
     complib = "lzo"
@@ -311,8 +356,8 @@ class TypesTestCase(unittest.TestCase):
         """Checking vlarray with NumPy string atoms ('numpy' flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_StringAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'stringAtom',
                                             atom=StringAtom(itemsize=3),
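The string-atom tests that follow all depend on appended values being clipped to the atom's itemsize; sketched (file name invented):

    import tables

    with tables.open_file("strings.h5", "w") as h5file:
        vlarray = h5file.create_vlarray("/", "stringAtom",
                                        atom=tables.StringAtom(itemsize=3))
        vlarray.append(["1", "12", "123", "1234"])
        print(vlarray.read()[0])   # [b'1' b'12' b'123' b'123'] -- clipped
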
@@ -330,9 +375,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         npt.assert_array_equal(
@@ -346,8 +391,8 @@ class TypesTestCase(unittest.TestCase):
         strided)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_StringAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'stringAtom',
                                             atom=StringAtom(itemsize=3),
@@ -365,9 +410,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         npt.assert_array_equal(row[0], numpy.array(["1", "123", "123"], 'S'))
@@ -379,8 +424,9 @@ class TypesTestCase(unittest.TestCase):
         """Checking vlarray with NumPy string atoms (NumPy flavor, no conv)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_2_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_2_StringAtom..." %
+                  self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'stringAtom',
                                             atom=StringAtom(itemsize=3),
@@ -398,9 +444,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         npt.assert_array_equal(
@@ -413,8 +459,8 @@ class TypesTestCase(unittest.TestCase):
         """Checking vlarray with NumPy string atoms (python flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_StringAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'stringAtom2',
                                             atom=StringAtom(itemsize=3),
@@ -432,10 +478,10 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Testing String flavor"
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Testing String flavor")
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], [b"1", b"12", b"123", b"123", b"123"])
@@ -444,12 +490,15 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test01c_StringAtom(self):
-        """Checking updating vlarray with NumPy string atoms
-        ('numpy' flavor)"""
+        """Checking updating vlarray with NumPy string atoms.
+
+        ('numpy' flavor)
+
+        """
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01c_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01c_StringAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'stringAtom',
                                             atom=StringAtom(itemsize=3),
@@ -471,9 +520,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertTrue(
@@ -486,8 +535,8 @@ class TypesTestCase(unittest.TestCase):
         """Checking updating vlarray with string atoms (String flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01d_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01d_StringAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'stringAtom2',
                                             atom=StringAtom(itemsize=3),
@@ -509,10 +558,10 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Testing String flavor"
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Testing String flavor")
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], [b"1", b"123", b"12", b"", b"123"])
@@ -521,11 +570,11 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test02_BoolAtom(self):
-        """Checking vlarray with boolean atoms"""
+        """Checking vlarray with boolean atoms."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_BoolAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_BoolAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'BoolAtom',
                                             atom=BoolAtom(),
@@ -542,9 +591,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertTrue(allequal(row[0], numpy.array([1, 0, 1], dtype='bool')))
@@ -553,11 +602,11 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test02b_BoolAtom(self):
-        """Checking setting vlarray with boolean atoms"""
+        """Checking setting vlarray with boolean atoms."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_BoolAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_BoolAtom..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', 'BoolAtom',
                                             atom=BoolAtom(),
@@ -578,9 +627,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertTrue(allequal(row[0], numpy.array([0, 1, 1], dtype='bool')))
@@ -589,7 +638,7 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test03_IntAtom(self):
-        """Checking vlarray with integer atoms"""
+        """Checking vlarray with integer atoms."""
 
         ttypes = [
             "Int8",
@@ -602,8 +651,8 @@ class TypesTestCase(unittest.TestCase):
             #"UInt64",  # Unavailable in some platforms
         ]
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_IntAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray('/', atype,
@@ -620,10 +669,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(allequal(row[
@@ -647,8 +696,8 @@ class TypesTestCase(unittest.TestCase):
             #"UInt64": numpy.int64,  # Unavailable in some platforms
         }
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03a_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03a_IntAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -671,10 +720,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(
@@ -685,7 +734,7 @@ class TypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 2)
 
     def test03b_IntAtom(self):
-        """Checking updating vlarray with integer atoms"""
+        """Checking updating vlarray with integer atoms."""
 
         ttypes = [
             "Int8",
@@ -698,8 +747,8 @@ class TypesTestCase(unittest.TestCase):
             #"UInt64",  # Unavailable in some platforms
         ]
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_IntAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -720,10 +769,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(allequal(row[
@@ -747,8 +796,8 @@ class TypesTestCase(unittest.TestCase):
             #"UInt64": numpy.int64,  # Unavailable in some platforms
         }
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03c_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03c_IntAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -777,10 +826,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(
@@ -804,8 +853,8 @@ class TypesTestCase(unittest.TestCase):
             #"UInt64": numpy.int64,  # Unavailable in some platforms
         }
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03d_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03d_IntAtom..." % self.__class__.__name__)
 
         byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
         for atype in ttypes:
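
test03d and test04d deliberately create the array with the non-native byte order so
the byte-swapping path is exercised on read; the dictionary one-liner above simply
maps the platform's order to its opposite:

    import sys

    # Pick the byte order opposite to the platform's native one.
    byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
    print(sys.byteorder, '->', byteorder)   # e.g. little -> big
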
@@ -836,10 +885,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             byteorder2 = byteorders[row[0].dtype.byteorder]
             if byteorder2 != "irrelevant":
@@ -855,18 +904,19 @@ class TypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 2)
 
     def test04_FloatAtom(self):
-        """Checking vlarray with floating point atoms"""
+        """Checking vlarray with floating point atoms."""
 
         ttypes = ["Float32",
                   "Float64",
                   ]
         for name in ("float16", "float96", "float128"):
-            if hasattr(numpy, name):
+            atomname = name.capitalize() + 'Atom'
+            if atomname in globals():
                 ttypes.append(name)
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_FloatAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -883,10 +933,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(allequal(row[
@@ -903,16 +953,16 @@ class TypesTestCase(unittest.TestCase):
             "Float32": numpy.float32,
             "Float64": numpy.float64,
         }
-        if hasattr(numpy, "float16"):
+        if "Float16Atom" in globals():
             ttypes["float16"] = numpy.float16
-        if hasattr(numpy, "float96"):
+        if "Float96Atom" in globals():
             ttypes["float96"] = numpy.float96
-        if hasattr(numpy, "float128"):
+        if "Float128Atom" in globals():
             ttypes["float128"] = numpy.float128
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04a_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04a_FloatAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -935,10 +985,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(allequal(row[0], numpy.array([1.3, 2.2, 3.3],
@@ -949,19 +999,20 @@ class TypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 2)
 
     def test04b_FloatAtom(self):
-        """Checking updating vlarray with floating point atoms"""
+        """Checking updating vlarray with floating point atoms."""
 
         ttypes = [
             "Float32",
             "Float64",
         ]
         for name in ("float16", "float96", "float128"):
-            if hasattr(numpy, name):
+            atomname = name.capitalize() + 'Atom'
+            if atomname in globals():
                 ttypes.append(name)
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04b_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04b_FloatAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -982,10 +1033,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(allequal(row[
@@ -1002,16 +1053,16 @@ class TypesTestCase(unittest.TestCase):
             "Float32": numpy.float32,
             "Float64": numpy.float64,
         }
-        if hasattr(numpy, "float16"):
+        if "Float16Atom" in globals():
             ttypes["float16"] = numpy.float16
-        if hasattr(numpy, "float96"):
+        if "Float96Atom" in globals():
             ttypes["float96"] = numpy.float96
-        if hasattr(numpy, "float128"):
+        if "Float128Atom" in globals():
             ttypes["float128"] = numpy.float128
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04c_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04c_FloatAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -1040,10 +1091,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(allequal(row[0], numpy.array([4.3, 2.2, 4.3],
@@ -1060,16 +1111,16 @@ class TypesTestCase(unittest.TestCase):
             "Float32": numpy.float32,
             "Float64": numpy.float64,
         }
-        if hasattr(numpy, "float16"):
+        if "Float16Atom" in globals():
             ttypes["float16"] = numpy.float16
-        if hasattr(numpy, "float96"):
+        if "Float96Atom" in globals():
             ttypes["float96"] = numpy.float96
-        if hasattr(numpy, "float128"):
+        if "Float128Atom" in globals():
             ttypes["float128"] = numpy.float128
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04d_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04d_FloatAtom..." % self.__class__.__name__)
 
         byteorder = {'little': 'big', 'big': 'little'}[sys.byteorder]
         for atype in ttypes:
@@ -1100,10 +1151,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.byteorder, byteorder)
             self.assertTrue(byteorders[row[0].dtype.byteorder], sys.byteorder)
@@ -1116,21 +1167,21 @@ class TypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 2)
 
     def test04_ComplexAtom(self):
-        """Checking vlarray with numerical complex atoms"""
+        """Checking vlarray with numerical complex atoms."""
 
         ttypes = [
             "Complex32",
             "Complex64",
         ]
 
-        if hasattr(numpy, "complex192"):
+        if "Complex192Atom" in globals():
             ttypes.append("Complex96")
-        if hasattr(numpy, "complex256"):
+        if "Complex256Atom" in globals():
             ttypes.append("Complex128")
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_ComplexAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_ComplexAtom..." % self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -1147,10 +1198,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(
@@ -1164,21 +1215,22 @@ class TypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 2)
 
     def test04b_ComplexAtom(self):
-        """Checking modifying vlarray with numerical complex atoms"""
+        """Checking modifying vlarray with numerical complex atoms."""
 
         ttypes = [
             "Complex32",
             "Complex64",
         ]
 
-        if hasattr(numpy, "complex192"):
+        if "Complex192Atom" in globals():
             ttypes.append("Complex96")
-        if hasattr(numpy, "complex256"):
+        if "Complex256Atom" in globals():
             ttypes.append("Complex128")
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04b_ComplexAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04b_ComplexAtom..." %
+                  self.__class__.__name__)
 
         for atype in ttypes:
             vlarray = self.fileh.create_vlarray(
@@ -1199,10 +1251,10 @@ class TypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing type:", atype)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(
@@ -1216,15 +1268,16 @@ class TypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 2)
 
     def test05_VLStringAtom(self):
-        """Checking vlarray with variable length strings"""
+        """Checking vlarray with variable length strings."""
 
         # Skip the test if the default encoding has been mangled.
         if sys.getdefaultencoding() != 'ascii':
             return
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_VLStringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_VLStringAtom..." %
+                  self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray(
             '/', "VLStringAtom", atom=VLStringAtom())
@@ -1246,9 +1299,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 4)
         self.assertEqual(row[0], "asd")
@@ -1261,11 +1314,12 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[3]), 0)
 
     def test05b_VLStringAtom(self):
-        """Checking updating vlarray with variable length strings"""
+        """Checking updating vlarray with variable length strings."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_VLStringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_VLStringAtom..." %
+                  self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray(
             '/', "VLStringAtom", atom=VLStringAtom())
@@ -1287,10 +1341,10 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", repr(row[0])
-            print "Second row in vlarray ==>", repr(row[1])
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", repr(row[0]))
+            print("Second row in vlarray ==>", repr(row[1]))
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], b"as4")
@@ -1299,11 +1353,11 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 5)
 
     def test06a_Object(self):
-        """Checking vlarray with object atoms """
+        """Checking vlarray with object atoms."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06a_Object..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06a_Object..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray(
             '/', "Object", atom=ObjectAtom())
@@ -1320,9 +1374,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 3)
         self.assertEqual(row[0], [[1, 2, 3], "aaa", u"aaa���"])
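
test06* use the ObjectAtom pseudo-atom, which pickles each appended Python object
into a variable-length row. A short sketch (file name hypothetical):

    import tables

    # ObjectAtom pickles arbitrary Python objects, one object per row.
    with tables.open_file('object-demo.h5', 'w') as fileh:
        vla = fileh.create_vlarray('/', 'Object', atom=tables.ObjectAtom())
        vla.append([[1, 2, 3], 'aaa'])
        vla.append(42)
        rows = vla.read()
        assert rows[0] == [[1, 2, 3], 'aaa']
        assert rows[1] == 42
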
@@ -1336,11 +1390,11 @@ class TypesTestCase(unittest.TestCase):
         self.assertRaises(TypeError, len, row[2])
 
     def test06b_Object(self):
-        """Checking updating vlarray with object atoms """
+        """Checking updating vlarray with object atoms."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06b_Object..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06b_Object..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', "Object", atom=ObjectAtom())
         # When updating an object, this seems to change the number
@@ -1365,9 +1419,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], ([1, 2, 4], "aa4", u"��5"))
@@ -1383,8 +1437,8 @@ class TypesTestCase(unittest.TestCase):
         """Checking vlarray with object atoms (numpy arrays as values)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06c_Object..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06c_Object..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', "Object", atom=ObjectAtom())
         vlarray.append(numpy.array([[1, 2], [0, 4]], 'i4'))
@@ -1400,9 +1454,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 3)
         self.assertTrue(allequal(row[0], numpy.array([[1, 2], [0, 4]], 'i4')))
@@ -1413,8 +1467,8 @@ class TypesTestCase(unittest.TestCase):
         """Checking updating vlarray with object atoms (numpy arrays)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test06d_Object..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test06d_Object..." % self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray('/', "Object", atom=ObjectAtom())
         vlarray.append(numpy.array([[1, 2], [0, 4]], 'i4'))
@@ -1437,9 +1491,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 3)
         self.assertTrue(allequal(row[0], numpy.array([[1, 0], [0, 4]], 'i4')))
@@ -1447,15 +1501,16 @@ class TypesTestCase(unittest.TestCase):
         self.assertTrue(allequal(row[2], numpy.array(22, 'i1')))
 
     def test07_VLUnicodeAtom(self):
-        """Checking vlarray with variable length Unicode strings"""
+        """Checking vlarray with variable length Unicode strings."""
 
         # Skip the test if the default encoding has been mangled.
         if sys.getdefaultencoding() != 'ascii':
             return
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07_VLUnicodeAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07_VLUnicodeAtom..." %
+                  self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray(
             '/', "VLUnicodeAtom", atom=VLUnicodeAtom())
@@ -1477,9 +1532,9 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 4)
         self.assertEqual(row[0], u"asd")
@@ -1492,11 +1547,12 @@ class TypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[3]), 0)
 
     def test07b_VLUnicodeAtom(self):
-        """Checking updating vlarray with variable length Unicode strings"""
+        """Checking updating vlarray with variable length Unicode strings."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test07b_VLUnicodeAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test07b_VLUnicodeAtom..." %
+                  self.__class__.__name__)
 
         vlarray = self.fileh.create_vlarray(
             '/', "VLUnicodeAtom", atom=VLUnicodeAtom())
@@ -1518,10 +1574,10 @@ class TypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", repr(row[0])
-            print "Second row in vlarray ==>", repr(row[1])
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", repr(row[0]))
+            print("Second row in vlarray ==>", repr(row[1]))
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], u"as\xe4")
@@ -1560,12 +1616,12 @@ class MDTypesTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test01_StringAtom(self):
-        """Checking vlarray with MD NumPy string atoms"""
+        """Checking vlarray with MD NumPy string atoms."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_StringAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'stringAtom',
@@ -1578,9 +1634,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         npt.assert_array_equal(
@@ -1596,8 +1652,8 @@ class MDTypesTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_StringAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'stringAtom',
@@ -1611,9 +1667,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], [[b"123", b"45"], [b"45", b"123"]])
@@ -1627,8 +1683,8 @@ class MDTypesTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01c_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01c_StringAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'stringAtom',
@@ -1645,9 +1701,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], [[b"123", b"45"], [b"45", b"123"]])
@@ -1661,8 +1717,8 @@ class MDTypesTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01d_StringAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01d_StringAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'stringAtom',
@@ -1679,9 +1735,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertEqual(row[0], [[b"123", b"45"]])
@@ -1690,12 +1746,12 @@ class MDTypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test02_BoolAtom(self):
-        """Checking vlarray with MD boolean atoms"""
+        """Checking vlarray with MD boolean atoms."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_BoolAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_BoolAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'BoolAtom',
@@ -1707,9 +1763,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertTrue(
@@ -1725,8 +1781,8 @@ class MDTypesTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_BoolAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_BoolAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'BoolAtom',
@@ -1741,9 +1797,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertTrue(
@@ -1759,8 +1815,8 @@ class MDTypesTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02c_BoolAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02c_BoolAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'BoolAtom',
@@ -1775,9 +1831,9 @@ class MDTypesTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, 2)
         self.assertTrue(
@@ -1790,7 +1846,7 @@ class MDTypesTestCase(unittest.TestCase):
         self.assertEqual(len(row[1]), 2)
 
     def test03_IntAtom(self):
-        """Checking vlarray with MD integer atoms"""
+        """Checking vlarray with MD integer atoms."""
 
         ttypes = ["Int8",
                   "UInt8",
@@ -1803,8 +1859,8 @@ class MDTypesTestCase(unittest.TestCase):
                   ]
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_IntAtom..." % self.__class__.__name__)
 
         # Create an string atom
         for atype in ttypes:
@@ -1817,9 +1873,9 @@ class MDTypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "Second row in vlarray ==>", repr(row[1])
+                print("Testing type:", atype)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("Second row in vlarray ==>", repr(row[1]))
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(
@@ -1832,7 +1888,7 @@ class MDTypesTestCase(unittest.TestCase):
             self.assertEqual(len(row[1]), 1)
 
     def test04_FloatAtom(self):
-        """Checking vlarray with MD floating point atoms"""
+        """Checking vlarray with MD floating point atoms."""
 
         ttypes = [
             "Float32",
@@ -1840,15 +1896,20 @@ class MDTypesTestCase(unittest.TestCase):
             "Complex32",
             "Complex64",
         ]
-        for name in ("float16", "float96", "float128",
-                     "Complex192", "Complex256"):
-            if hasattr(numpy, name):
+
+        for name in ("float16", "float96", "float128"):
+            atomname = name.capitalize() + "Atom"
+            if atomname in globals():
                 ttypes.append(name.capitalize())
+        for itemsize in (192, 256):
+            atomname = "Complex%dAtom" % itemsize
+            if atomname in globals():
+                ttypes.append("Complex%d" % (itemsize // 2))
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_FloatAtom..." % self.__class__.__name__)
 
         # Create an string atom
         for atype in ttypes:
@@ -1861,9 +1922,9 @@ class MDTypesTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing type:", atype
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "Second row in vlarray ==>", row[1]
+                print("Testing type:", atype)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("Second row in vlarray ==>", row[1])
 
             self.assertEqual(vlarray.nrows, 2)
             self.assertTrue(
@@ -1899,12 +1960,12 @@ class AppendShapeTestCase(unittest.TestCase):
     #----------------------------------------
 
     def test00_difinputs(self):
-        """Checking vlarray.append() with different inputs"""
+        """Checking vlarray.append() with different inputs."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test00_difinputs..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test00_difinputs..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'vlarray',
@@ -1920,7 +1981,7 @@ class AppendShapeTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             vlarray = self.fileh.root.vlarray
@@ -1928,9 +1989,9 @@ class AppendShapeTestCase(unittest.TestCase):
         # Read all the vlarray
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 3)
         self.assertEqual(row[0], [1, 2, 3])
@@ -1938,12 +1999,12 @@ class AppendShapeTestCase(unittest.TestCase):
         self.assertEqual(row[2], [1, 2, 3])
 
     def test01_toomanydims(self):
-        """Checking vlarray.append() with too many dimensions"""
+        """Checking vlarray.append() with too many dimensions."""
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_toomanydims..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_toomanydims..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'vlarray',
@@ -1955,14 +2016,14 @@ class AppendShapeTestCase(unittest.TestCase):
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next RuntimeError was catched!"
-                print value
+                print("\nGreat!, the next RuntimeError was catched!")
+                print(value)
         else:
             self.fail("expected a ValueError")
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             vlarray = self.fileh.root.vlarray
@@ -1970,8 +2031,8 @@ class AppendShapeTestCase(unittest.TestCase):
         # Read all the rows (there should be none)
         row = vlarray.read()
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
 
         self.assertEqual(vlarray.nrows, 0)
 
@@ -1980,8 +2041,8 @@ class AppendShapeTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_zerodims..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_zerodims..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'vlarray',
@@ -1991,7 +2052,7 @@ class AppendShapeTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             vlarray = self.fileh.root.vlarray
@@ -1999,9 +2060,9 @@ class AppendShapeTestCase(unittest.TestCase):
         # Read the only row in vlarray
         row = vlarray.read(0)[0]
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", repr(row)
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", repr(row))
 
         self.assertEqual(vlarray.nrows, 1)
         self.assertTrue(allequal(row, numpy.zeros(dtype='int32', shape=(0,))))
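
As the read(0)[0] idiom here shows, VLArray.read() returns a list of per-row arrays
even when start selects a single row. A short sketch (file name hypothetical):

    import numpy
    import tables

    # read() returns a list of per-row arrays even for a single row;
    # [0] extracts the row itself.
    with tables.open_file('read-demo.h5', 'w') as fileh:
        vla = fileh.create_vlarray('/', 'vlarray', atom=tables.Int32Atom())
        vla.append(numpy.array([1, 2], dtype='int32'))
        rows = vla.read(0)           # one-element list
        assert len(rows) == 1
        assert list(rows[0]) == [1, 2]
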
@@ -2012,8 +2073,8 @@ class AppendShapeTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03a_cast..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03a_cast..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'vlarray',
@@ -2024,7 +2085,7 @@ class AppendShapeTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             vlarray = self.fileh.root.vlarray
@@ -2032,9 +2093,9 @@ class AppendShapeTestCase(unittest.TestCase):
         # Read the only row in vlarray
         row = vlarray.read(0)[0]
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", repr(row)
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", repr(row))
 
         self.assertEqual(vlarray.nrows, 1)
         self.assertTrue(allequal(row, numpy.array([1, 2], dtype='int32')))
@@ -2045,8 +2106,8 @@ class AppendShapeTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b_cast..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b_cast..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, 'vlarray',
@@ -2057,7 +2118,7 @@ class AppendShapeTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             vlarray = self.fileh.root.vlarray
@@ -2065,9 +2126,9 @@ class AppendShapeTestCase(unittest.TestCase):
         # Read the only row in vlarray
         row = vlarray.read(0)[0]
         if common.verbose:
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", repr(row)
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", repr(row))
 
         self.assertEqual(vlarray.nrows, 1)
         self.assertTrue(allequal(row, numpy.array([1, 2], dtype='int32')))
@@ -2106,8 +2167,9 @@ class FlavorTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_EmptyVLArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_EmptyVLArray..." %
+                  self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, "vlarray",
@@ -2119,9 +2181,9 @@ class FlavorTestCase(unittest.TestCase):
         vlarray = self.fileh.root.vlarray
         row = vlarray.read()
         if common.verbose:
-            print "Testing flavor:", self.flavor
-            print "Object read:", row, repr(row)
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Testing flavor:", self.flavor)
+            print("Object read:", row, repr(row))
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
         # Check that the object read is effectively empty
         self.assertEqual(vlarray.nrows, 0)
         self.assertEqual(row, [])
@@ -2131,8 +2193,9 @@ class FlavorTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_EmptyVLArray..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_EmptyVLArray..." %
+                  self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, "vlarray",
@@ -2141,9 +2204,9 @@ class FlavorTestCase(unittest.TestCase):
         # Read all the rows (it should be empty):
         row = vlarray.read()
         if common.verbose:
-            print "Testing flavor:", self.flavor
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Testing flavor:", self.flavor)
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
         # Check that the object read is effectively empty
         self.assertEqual(vlarray.nrows, 0)
         self.assertEqual(row, [])
@@ -2153,8 +2216,8 @@ class FlavorTestCase(unittest.TestCase):
 
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_BoolAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_BoolAtom..." % self.__class__.__name__)
 
         # Create an string atom
         vlarray = self.fileh.create_vlarray(root, "Bool", BoolAtom())
@@ -2166,10 +2229,10 @@ class FlavorTestCase(unittest.TestCase):
         # Read all the rows:
         row = vlarray.read()
         if common.verbose:
-            print "Testing flavor:", self.flavor
-            print "Object read:", row
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
+            print("Testing flavor:", self.flavor)
+            print("Object read:", row)
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
 
         self.assertEqual(vlarray.nrows, 3)
         self.assertEqual(len(row[0]), 3)
@@ -2209,8 +2272,8 @@ class FlavorTestCase(unittest.TestCase):
                   ]
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_IntAtom..." % self.__class__.__name__)
 
         # Create an string atom
         for atype in ttypes:
@@ -2224,10 +2287,10 @@ class FlavorTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing flavor:", self.flavor
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing flavor:", self.flavor)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 3)
             self.assertEqual(len(row[0]), 3)
@@ -2267,8 +2330,8 @@ class FlavorTestCase(unittest.TestCase):
                   ]
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_IntAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_IntAtom..." % self.__class__.__name__)
 
         # Create an string atom
         for atype in ttypes:
@@ -2285,10 +2348,10 @@ class FlavorTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing flavor:", self.flavor
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing flavor:", self.flavor)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 3)
             self.assertEqual(len(row[0]), 3)
@@ -2322,15 +2385,21 @@ class FlavorTestCase(unittest.TestCase):
             "Complex32",
             "Complex64",
         ]
-        for name in ("float16", "float96", "float128",
-                     "Complex192", "Complex256"):
-            if hasattr(numpy, name):
+
+        for name in ("float16", "float96", "float128"):
+            atomname = name.capitalize() + "Atom"
+            if atomname in globals():
                 ttypes.append(name.capitalize())
 
+        for itemsize in (192, 256):
+            atomname = "Complex%dAtom" % itemsize
+            if atomname in globals():
+                ttypes.append("Complex%d" % (itemsize // 2))
+
         root = self.rootgroup
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_FloatAtom..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_FloatAtom..." % self.__class__.__name__)
 
         # Create an string atom
         for atype in ttypes:
@@ -2344,10 +2413,10 @@ class FlavorTestCase(unittest.TestCase):
             # Read all the rows:
             row = vlarray.read()
             if common.verbose:
-                print "Testing flavor:", self.flavor
-                print "Object read:", row
-                print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-                print "First row in vlarray ==>", row[0]
+                print("Testing flavor:", self.flavor)
+                print("Object read:", row)
+                print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+                print("First row in vlarray ==>", row[0])
 
             self.assertEqual(vlarray.nrows, 3)
             self.assertEqual(len(row[0]), 3)
@@ -2418,8 +2487,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test01_start(self):
         "Checking reads with only a start value"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2430,8 +2499,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(10)[0])
         row.append(vlarray.read(99)[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -2444,8 +2513,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test01b_start(self):
         "Checking reads with only a start value in a slice"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2456,8 +2525,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray[10])
         row.append(vlarray[99])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -2470,8 +2539,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test01np_start(self):
         "Checking reads with only a start value in a slice (numpy indexes)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01np_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01np_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2482,8 +2551,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray[numpy.int32(10)])
         row.append(vlarray[numpy.int64(99)])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -2496,8 +2565,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test02_stop(self):
         "Checking reads with only a stop value"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_stop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_stop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2510,9 +2579,9 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(stop=10))
         row.append(vlarray.read(stop=99))
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 1)
@@ -2529,8 +2598,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test02b_stop(self):
         "Checking reads with only a stop value in a slice"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_stop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_stop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2543,8 +2612,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray[:10])
         row.append(vlarray[:99])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 1)
@@ -2563,8 +2632,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test03_startstop(self):
         "Checking reads with a start and stop values"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_startstop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_startstop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2577,8 +2646,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(5, 15))
         row.append(vlarray.read(0, 100))  # read all the array
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 10)
@@ -2597,8 +2666,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test03b_startstop(self):
         "Checking reads with a start and stop values in slices"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b_startstop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b_startstop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2611,8 +2680,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray[5:15])
         row.append(vlarray[:])  # read all the array
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 10)
@@ -2631,8 +2700,9 @@ class ReadRangeTestCase(unittest.TestCase):
     def test04_startstopstep(self):
         "Checking reads with a start, stop & step values"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_startstopstep..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_startstopstep..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2645,8 +2715,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(5, 15, 3))
         row.append(vlarray.read(0, 100, 20))
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 5)
@@ -2665,8 +2735,9 @@ class ReadRangeTestCase(unittest.TestCase):
     def test04np_startstopstep(self):
         "Checking reads with a start, stop & step values (numpy indices)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04np_startstopstep..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04np_startstopstep..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2680,8 +2751,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(numpy.int8(
             0), numpy.int8(100), numpy.int8(20)))
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 5)
@@ -2700,8 +2771,8 @@ class ReadRangeTestCase(unittest.TestCase):
     def test04b_slices(self):
         "Checking reads with start, stop & step values in slices"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04b_slices..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04b_slices..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2714,8 +2785,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray[5:15:3])
         row.append(vlarray[0:100:20])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 5)
@@ -2732,12 +2803,15 @@ class ReadRangeTestCase(unittest.TestCase):
                 allequal(row[2][x//20], numpy.arange(x, dtype='int32')))
 
     def test04bnp_slices(self):
-        """Checking reads with start, stop & step values in slices
-        (numpy indices)"""
+        """Checking reads with start, stop & step values in slices.
+
+        (numpy indices)
+
+        """
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04bnp_slices..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04bnp_slices..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2750,8 +2824,8 @@ class ReadRangeTestCase(unittest.TestCase):
         row.append(vlarray[numpy.int16(5):numpy.int16(15):numpy.int64(3)])
         row.append(vlarray[numpy.uint16(0):numpy.int32(100):numpy.int8(20)])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 5)
@@ -2770,23 +2844,24 @@ class ReadRangeTestCase(unittest.TestCase):
     def test05_out_of_range(self):
         "Checking out of range reads"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_out_of_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_out_of_range..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
 
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
 
         try:
             row = vlarray.read(1000)[0]
-            print "row-->", row
+            print("row-->", row)
         except IndexError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next IndexError was catched!"
-                print value
+                print("\nGreat! The next IndexError was caught!")
+                print(value)
             self.fileh.close()
         else:
             (type, value, traceback) = sys.exc_info()
@@ -2830,8 +2905,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test01_start(self):
         "Checking reads with only a start value"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2843,8 +2918,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[numpy.array(10)])
         row.append(vlarray[99])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -2860,8 +2935,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test01b_start(self):
         "Checking reads with only a start value in a slice"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2872,8 +2947,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[10])
         row.append(vlarray[99])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -2886,8 +2961,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test02_stop(self):
         "Checking reads with only a stop value"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_stop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_stop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2900,9 +2975,9 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[:10])
         row.append(vlarray[:99])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "First row in vlarray ==>", row[0]
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("First row in vlarray ==>", row[0])
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 1)
@@ -2919,8 +2994,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test02b_stop(self):
         "Checking reads with only a stop value in a slice"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02b_stop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02b_stop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2933,8 +3008,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[:10])
         row.append(vlarray[:99])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 1)
@@ -2953,8 +3028,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test03_startstop(self):
         "Checking reads with a start and stop values"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_startstop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_startstop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -2967,8 +3042,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[5:15])
         row.append(vlarray[0:100])  # read all the array
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 10)
@@ -2987,8 +3062,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test03b_startstop(self):
         "Checking reads with a start and stop values in slices"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b_startstop..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b_startstop..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -3001,8 +3076,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[5:15])
         row.append(vlarray[:])  # read all the array
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 10)
@@ -3021,8 +3096,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test04_slices(self):
         "Checking reads with a start, stop & step values"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_slices..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_slices..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -3035,8 +3110,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[5:15:3])
         row.append(vlarray[0:100:20])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 5)
@@ -3055,8 +3130,8 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test04bnp_slices(self):
         "Checking reads with start, stop & step values (numpy indices)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04np_slices..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04np_slices..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
@@ -3069,8 +3144,8 @@ class GetItemRangeTestCase(unittest.TestCase):
         row.append(vlarray[numpy.int8(5):numpy.int8(15):numpy.int8(3)])
         row.append(vlarray[numpy.int8(0):numpy.int8(100):numpy.int8(20)])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 5)
@@ -3089,23 +3164,24 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test05_out_of_range(self):
         "Checking out of range reads"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_out_of_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_out_of_range..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
 
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
 
         try:
             row = vlarray[1000]
-            print "row-->", row
+            print("row-->", row)
         except IndexError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next IndexError was catched!"
-                print value
+                print("\nGreat! The next IndexError was caught!")
+                print(value)
             self.fileh.close()
         else:
             (type, value, traceback) = sys.exc_info()
@@ -3114,23 +3190,24 @@ class GetItemRangeTestCase(unittest.TestCase):
     def test05np_out_of_range(self):
         "Checking out of range reads (numpy indexes)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05np_out_of_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05np_out_of_range..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "r")
         vlarray = self.fileh.root.vlarray
 
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
 
         try:
             row = vlarray[numpy.int32(1000)]
-            print "row-->", row
+            print("row-->", row)
         except IndexError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next IndexError was catched!"
-                print value
+                print("\nGreat! The next IndexError was caught!")
+                print(value)
             self.fileh.close()
         else:
             (type, value, traceback) = sys.exc_info()
@@ -3174,8 +3251,8 @@ class SetRangeTestCase(unittest.TestCase):
     def test01_start(self):
         "Checking updates that modifies a complete row"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
@@ -3191,8 +3268,8 @@ class SetRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(10)[0])
         row.append(vlarray.read(99)[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -3208,8 +3285,8 @@ class SetRangeTestCase(unittest.TestCase):
     def test01np_start(self):
         "Checking updates that modifies a complete row"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01np_start..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01np_start..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
@@ -3225,8 +3302,8 @@ class SetRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(numpy.int8(10))[0])
         row.append(vlarray.read(numpy.int8(99))[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -3242,8 +3319,8 @@ class SetRangeTestCase(unittest.TestCase):
     def test02_partial(self):
         "Checking updates with only a part of a row"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_partial..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_partial..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
@@ -3259,8 +3336,8 @@ class SetRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(10)[0])
         row.append(vlarray.read(96)[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -3277,8 +3354,9 @@ class SetRangeTestCase(unittest.TestCase):
     def test03a_several_rows(self):
         "Checking updating several rows at once (slice style)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03a_several_rows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03a_several_rows..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
@@ -3294,8 +3372,8 @@ class SetRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(4)[0])
         row.append(vlarray.read(5)[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 3)
@@ -3308,8 +3386,9 @@ class SetRangeTestCase(unittest.TestCase):
     def test03b_several_rows(self):
         "Checking updating several rows at once (list style)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03b_several_rows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03b_several_rows..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
@@ -3325,8 +3404,8 @@ class SetRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(10)[0])
         row.append(vlarray.read(96)[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -3342,8 +3421,9 @@ class SetRangeTestCase(unittest.TestCase):
     def test03c_several_rows(self):
         "Checking updating several rows at once (NumPy's where style)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03c_several_rows..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03c_several_rows..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
@@ -3359,8 +3439,8 @@ class SetRangeTestCase(unittest.TestCase):
         row.append(vlarray.read(10)[0])
         row.append(vlarray.read(96)[0])
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
-            print "Second row in vlarray ==>", row[1]
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
+            print("Second row in vlarray ==>", row[1])
 
         self.assertEqual(vlarray.nrows, self.nrows)
         self.assertEqual(len(row[0]), 0)
@@ -3376,22 +3456,23 @@ class SetRangeTestCase(unittest.TestCase):
     def test04_out_of_range(self):
         "Checking out of range updates (first index)"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_out_of_range..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_out_of_range..." %
+                  self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
 
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
 
         try:
             vlarray[1000] = [1]
         except IndexError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next IndexError was catched!"
-                print value
+                print("\nGreat! The next IndexError was caught!")
+                print(value)
             self.fileh.close()
         else:
             (type, value, traceback) = sys.exc_info()
@@ -3400,23 +3481,23 @@ class SetRangeTestCase(unittest.TestCase):
     def test05_value_error(self):
         "Checking out value errors"
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_value_error..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_value_error..." % self.__class__.__name__)
 
         self.fileh = open_file(self.file, "a")
         vlarray = self.fileh.root.vlarray
 
         if common.verbose:
-            print "Nrows in", vlarray._v_pathname, ":", vlarray.nrows
+            print("Nrows in", vlarray._v_pathname, ":", vlarray.nrows)
 
         try:
             vlarray[10] = [1]*100
-            print "row-->", row
+            print("row-->", row)
         except ValueError:
             if common.verbose:
                 (type, value, traceback) = sys.exc_info()
-                print "\nGreat!, the next ValueError was catched!"
-                print value
+                print("\nGreat! The next ValueError was caught!")
+                print(value)
             self.fileh.close()
         else:
             (type, value, traceback) = sys.exc_info()
@@ -3427,11 +3508,11 @@ class CopyTestCase(unittest.TestCase):
     close = True
 
     def test01a_copy(self):
-        """Checking VLArray.copy() method """
+        """Checking VLArray.copy() method."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01a_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01a_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3448,7 +3529,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3458,19 +3539,19 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", repr(array1)
-            print "array2-->", repr(array2)
-            print "array1[:]-->", repr(array1.read())
-            print "array2[:]-->", repr(array2.read())
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", repr(array1))
+            print("array2-->", repr(array2))
+            print("array1[:]-->", repr(array1.read()))
+            print("array2[:]-->", repr(array2.read()))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -3489,11 +3570,15 @@ class CopyTestCase(unittest.TestCase):
         os.remove(file)
 
     def test01b_copy(self):
-        """Checking VLArray.copy() method. Pseudo-atom case."""
+        """Checking VLArray.copy() method.
+
+        Pseudo-atom case.
+
+        """
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3510,7 +3595,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3520,19 +3605,19 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "array1-->", repr(array1)
-            print "array2-->", repr(array2)
-            print "array1[:]-->", repr(array1.read())
-            print "array2[:]-->", repr(array2.read())
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", repr(array1))
+            print("array2-->", repr(array2))
+            print("array1[:]-->", repr(array1.read()))
+            print("array2[:]-->", repr(array2.read()))
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -3554,8 +3639,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking VLArray.copy() method (where specified)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test02_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test02_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3572,7 +3657,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3583,19 +3668,19 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.group1.array2
 
         if common.verbose:
-            print "array1-->", repr(array1)
-            print "array2-->", repr(array2)
-            print "array1-->", array1.read()
-            print "array2-->", array2.read()
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("array1-->", repr(array1))
+            print("array2-->", repr(array2))
+            print("array1-->", array1.read())
+            print("array2-->", array2.read())
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Check that all the elements are equal
         self.assertEqual(array1.read(), array2.read())
@@ -3616,8 +3701,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking VLArray.copy() method ('python' flavor)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test03_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test03_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3634,7 +3719,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3644,15 +3729,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert other properties in array
         self.assertEqual(array1.nrows, array2.nrows)
@@ -3670,8 +3755,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking VLArray.copy() method (checking title copying)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test04_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test04_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3690,7 +3775,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3700,7 +3785,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
@@ -3708,7 +3793,7 @@ class CopyTestCase(unittest.TestCase):
 
         # Assert user attributes
         if common.verbose:
-            print "title of destination array-->", array2.title
+            print("title of destination array-->", array2.title)
         self.assertEqual(array2.title, "title array2")
 
         # Close the file
@@ -3719,8 +3804,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking VLArray.copy() method (user attributes copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3739,7 +3824,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3749,15 +3834,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(array2.attrs.attr1, "attr1")
@@ -3771,8 +3856,8 @@ class CopyTestCase(unittest.TestCase):
         """Checking VLArray.copy() method (user attributes not copied)"""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test05b_copy..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test05b_copy..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Table
         file = tempfile.mktemp(".h5")
@@ -3791,7 +3876,7 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3801,15 +3886,15 @@ class CopyTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="r")
             array1 = fileh.root.array1
             array2 = fileh.root.array2
 
         if common.verbose:
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
 
         # Assert user attributes
         self.assertEqual(array2.attrs.attr1, None)
@@ -3831,11 +3916,11 @@ class OpenCopyTestCase(CopyTestCase):
 class CopyIndexTestCase(unittest.TestCase):
 
     def test01_index(self):
-        """Checking VLArray.copy() method with indexes"""
+        """Checking VLArray.copy() method with indexes."""
 
         if common.verbose:
-            print '\n', '-=' * 30
-            print "Running %s.test01_index..." % self.__class__.__name__
+            print('\n', '-=' * 30)
+            print("Running %s.test01_index..." % self.__class__.__name__)
 
         # Create an instance of an HDF5 Array
         file = tempfile.mktemp(".h5")
@@ -3853,7 +3938,7 @@ class CopyIndexTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             fileh.close()
             fileh = open_file(file, mode="a")
             array1 = fileh.root.array1
@@ -3866,12 +3951,12 @@ class CopyIndexTestCase(unittest.TestCase):
 
         r2 = r[self.start:self.stop:self.step]
         if common.verbose:
-            print "r2-->", r2
-            print "array2-->", array2[:]
-            print "attrs array1-->", repr(array1.attrs)
-            print "attrs array2-->", repr(array2.attrs)
-            print "nrows in array2-->", array2.nrows
-            print "and it should be-->", len(r2)
+            print("r2-->", r2)
+            print("array2-->", array2[:])
+            print("attrs array1-->", repr(array1.attrs))
+            print("attrs array2-->", repr(array2.attrs))
+            print("nrows in array2-->", array2.nrows)
+            print("and it should be-->", len(r2))
         # Check that all the elements are equal
         self.assertEqual(r2, array2[:])
         # Assert the number of rows in array
@@ -3985,7 +4070,7 @@ class ChunkshapeTestCase(unittest.TestCase):
 
         vla = self.fileh.root.vlarray
         if common.verbose:
-            print "chunkshape-->", vla.chunkshape
+            print("chunkshape-->", vla.chunkshape)
         self.assertEqual(vla.chunkshape, (13,))
 
     def test01(self):
@@ -3995,7 +4080,7 @@ class ChunkshapeTestCase(unittest.TestCase):
         self.fileh = open_file(self.file, 'r')
         vla = self.fileh.root.vlarray
         if common.verbose:
-            print "chunkshape-->", vla.chunkshape
+            print("chunkshape-->", vla.chunkshape)
         self.assertEqual(vla.chunkshape, (13,))
 
 
@@ -4043,13 +4128,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertEqual(array1.nrows, 0)
         self.assertEqual(array1[:], [])
@@ -4063,13 +4148,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertEqual(array1.nrows, 1)
         self.assertTrue(
@@ -4084,13 +4169,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertEqual(array1.nrows, 2)
         self.assertTrue(
@@ -4106,13 +4191,13 @@ class TruncateTestCase(unittest.TestCase):
 
         if self.close:
             if common.verbose:
-                print "(closing file version)"
+                print("(closing file version)")
             self.fileh.close()
             self.fileh = open_file(self.file, mode="r")
             array1 = self.fileh.root.array1
 
         if common.verbose:
-            print "array1-->", array1.read()
+            print("array1-->", array1.read())
 
         self.assertEqual(array1.nrows, 4)
         # Check the original values
@@ -4180,7 +4265,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         vlarr = self.vlarr
         for key in self.working_keyset:
             if common.verbose:
-                print "Selection to test:", repr(key)
+                print("Selection to test:", repr(key))
             a = nparr[key].tolist()
             b = vlarr[key]
             # if common.verbose:
@@ -4195,7 +4280,7 @@ class PointSelectionTestCase(common.PyTablesTestCase):
         vlarr = self.vlarr
         for key in self.not_working_keyset:
             if common.verbose:
-                print "Selection to test:", key
+                print("Selection to test:", key)
             self.assertRaises(IndexError, vlarr.__getitem__, key)
 
 
@@ -4485,6 +4570,14 @@ def suite():
         theSuite.addTest(unittest.makeSuite(ZlibComprTestCase))
         theSuite.addTest(unittest.makeSuite(BloscComprTestCase))
         theSuite.addTest(unittest.makeSuite(BloscShuffleComprTestCase))
+        theSuite.addTest(unittest.makeSuite(BloscBloscLZComprTestCase))
+        if 'lz4' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(BloscLZ4ComprTestCase))
+            theSuite.addTest(unittest.makeSuite(BloscLZ4HCComprTestCase))
+        if 'snappy' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(BloscSnappyComprTestCase))
+        if 'zlib' in tables.blosc_compressor_list():
+            theSuite.addTest(unittest.makeSuite(BloscZlibComprTestCase))
         theSuite.addTest(unittest.makeSuite(LZOComprTestCase))
         theSuite.addTest(unittest.makeSuite(Bzip2ComprTestCase))
         theSuite.addTest(unittest.makeSuite(TypesReopenTestCase))
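
The suite() additions above register the new Blosc sub-compressor test cases
only when the local Blosc build actually provides them, probing with
tables.blosc_compressor_list() (added in tables/utilsextension.pyx further
down in this commit). A minimal sketch of the same probe, assuming a
PyTables 3.1 installation:

    import tables

    # The bundled Blosc reports what it was built with, e.g.
    # ['blosclz', 'lz4', 'lz4hc', 'snappy', 'zlib'] on a full build.
    available = tables.blosc_compressor_list()
    if 'lz4' in available:
        print('LZ4-backed Blosc test cases would be registered')
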
diff --git a/tables/unimplemented.py b/tables/unimplemented.py
index ed76b39..fdab512 100644
--- a/tables/unimplemented.py
+++ b/tables/unimplemented.py
@@ -22,8 +22,8 @@ from tables._past import previous_api_property
 
 
 class UnImplemented(hdf5extension.UnImplemented, Leaf):
-    """This class represents datasets not supported by PyTables in an
-    HDF5 file.
+    """This class represents datasets not supported by PyTables in an HDF5
+    file.
 
     When reading a generic HDF5 file (i.e. one that has not been created with
     PyTables, but with some other HDF5 library based tool), chances are that
diff --git a/tables/utils.py b/tables/utils.py
index 8b5af07..67ccc70 100644
--- a/tables/utils.py
+++ b/tables/utils.py
@@ -10,10 +10,12 @@
 #
 ########################################################################
 
-"""Utility functions"""
+"""Utility functions."""
 
+from __future__ import print_function
 import os
 import sys
+import warnings
 import subprocess
 from time import time
 
@@ -55,6 +57,10 @@ def is_idx(index):
             return False
         try:
             index.__index__()
+            if isinstance(index, bool):
+                warnings.warn(
+                    'using a boolean instead of an integer will result in an '
+                    'error in the future', DeprecationWarning, stacklevel=2)
             return True
         except TypeError:
             return False
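
The new isinstance(index, bool) branch is needed because Python booleans
implement __index__() and would otherwise pass silently as integer indexes;
is_idx() still returns True for them but now emits a DeprecationWarning
first. A small sketch of that behaviour, assuming tables.utils is importable:

    import warnings
    from tables.utils import is_idx

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        assert is_idx(True)  # bools still qualify as indexes...
    # ...but the call now records a DeprecationWarning for them
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)
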
@@ -69,7 +75,7 @@ def is_idx(index):
 
 
 def idx2long(index):
-    """Convert a possible index into a long int"""
+    """Convert a possible index into a long int."""
 
     try:
         return long(index)
@@ -265,15 +271,40 @@ def show_stats(explain, tref, encoding=None):
         elif line.startswith("VmLib:"):
             vmlib = int(line.split()[1])
     sout.close()
-    print "Memory usage: ******* %s *******" % explain
-    print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
-    print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
-    print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
+    print("Memory usage: ******* %s *******" % explain)
+    print("VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss))
+    print("VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk))
+    print("VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib))
     tnow = time()
-    print "WallClock time:", round(tnow - tref, 3)
+    print("WallClock time:", round(tnow - tref, 3))
     return tnow
 
 
+# truncate data before calling __setitem__, to improve compression ratio
+# this function is taken verbatim from netcdf4-python
+def quantize(data, least_significant_digit):
+    """quantize data to improve compression.
+
+    Data is quantized using around(scale*data)/scale, where scale is
+    2**bits, and bits is determined from the least_significant_digit.
+
+    For example, if least_significant_digit=1, bits will be 4.
+
+    """
+
+    precision = pow(10., -least_significant_digit)
+    exp = numpy.log10(precision)
+    if exp < 0:
+        exp = int(numpy.floor(exp))
+    else:
+        exp = int(numpy.ceil(exp))
+    bits = numpy.ceil(numpy.log2(pow(10., -exp)))
+    scale = pow(2., bits)
+    datout = numpy.around(scale * data) / scale
+
+    return datout
+
+
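
To make the arithmetic in quantize() concrete: for least_significant_digit=1
the precision is 10**-1, so exp = -1, bits = ceil(log2(10)) = 4 and
scale = 2**4 = 16, i.e. data is snapped to multiples of 1/16. A quick check
of that worked example:

    import numpy
    from tables.utils import quantize  # defined in the hunk above

    data = numpy.array([1.234567, 9.87654])
    # Equivalent to numpy.around(16.0 * data) / 16.0
    print(quantize(data, 1))  # -> [ 1.25   9.875]
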
 # Utilities to detect leaked instances.  See recipe 14.10 of the Python
 # Cookbook by Martelli & Ascher.
 tracked_classes = {}
@@ -411,7 +442,11 @@ class NailedDict(object):
 
 
 def detect_number_of_cores():
-    """Detects the number of cores on a system. Cribbed from pp."""
+    """Detects the number of cores on a system.
+
+    Cribbed from pp.
+
+    """
 
     # Linux, Unix and MacOS:
     if hasattr(os, "sysconf"):
diff --git a/tables/utilsExtension.py b/tables/utilsExtension.py
index fceef23..a368e67 100644
--- a/tables/utilsExtension.py
+++ b/tables/utilsExtension.py
@@ -3,4 +3,4 @@ from tables.utilsextension import *
 
 _warnmsg = ("utilsExtension is pending deprecation, import utilsextension instead. "
             "You may use the pt2to3 tool to update your source code.")
-warn(_warnmsg, PendingDeprecationWarning, stacklevel=2)
+warn(_warnmsg, DeprecationWarning, stacklevel=2)
diff --git a/tables/utilsextension.pyx b/tables/utilsextension.pyx
index 33e49f5..ebc0446 100644
--- a/tables/utilsextension.pyx
+++ b/tables/utilsextension.pyx
@@ -34,7 +34,7 @@ from tables._past import previous_api
 from cpython cimport PY_MAJOR_VERSION
 from libc.stdio cimport stderr
 from libc.stdlib cimport malloc, free
-from libc.string cimport strchr, strcmp, strlen
+from libc.string cimport strchr, strcmp, strncmp, strlen
 from cpython.bytes cimport PyBytes_Check
 from cpython.unicode cimport PyUnicode_DecodeUTF8, PyUnicode_Check
 
@@ -52,7 +52,8 @@ from definitions cimport (H5ARRAYget_info, H5ARRAYget_ndims,
   H5Fopen, H5Gclose, H5Gopen, H5P_DEFAULT, H5T_ARRAY, H5T_BITFIELD,
   H5T_COMPOUND, H5T_CSET_ASCII, H5T_CSET_UTF8, H5T_C_S1, H5T_DIR_DEFAULT,
   H5T_ENUM, H5T_FLOAT, H5T_IEEE_F32BE, H5T_IEEE_F32LE, H5T_IEEE_F64BE,
-  H5T_IEEE_F64LE, H5T_INTEGER, H5T_NATIVE_LDOUBLE, H5T_NO_CLASS, H5T_OPAQUE,
+  H5T_IEEE_F64LE, H5T_INTEGER, H5T_NATIVE_DOUBLE, H5T_NATIVE_LDOUBLE,
+  H5T_NO_CLASS, H5T_OPAQUE,
   H5T_ORDER_BE, H5T_ORDER_LE, H5T_REFERENCE, H5T_STD_B8BE, H5T_STD_B8LE,
   H5T_STD_I16BE, H5T_STD_I16LE, H5T_STD_I32BE, H5T_STD_I32LE, H5T_STD_I64BE,
   H5T_STD_I64LE, H5T_STD_I8BE, H5T_STD_I8LE, H5T_STD_U16BE, H5T_STD_U16LE,
@@ -189,7 +190,10 @@ cdef extern from "utils.h":
 
 # Functions from Blosc
 cdef extern from "blosc.h" nogil:
+  void blosc_init()
   int blosc_set_nthreads(int nthreads)
+  char* blosc_list_compressors()
+  int blosc_compcode_to_compname(int compcode, char **compname)
 
 
 # @TODO: use the c_string_type and c_string_encoding global directives
@@ -211,7 +215,8 @@ cdef str cstr_to_pystr(const_char* cstring):
 import_array()
 
 cdef register_blosc_():
-  cdef char *version, *date
+  cdef char *version
+  cdef char *date
 
   register_blosc(&version, &date)
   compinfo = (version, date)
@@ -234,13 +239,14 @@ def _arch_without_blosc():
     for a in ["arm", "sparc", "mips"]:
         if a in arch:
             return True
-        return False
+    return False
 
 # Only register the Blosc compressor on platforms that actually support it.
 if _arch_without_blosc():
     blosc_version = None
 else:
     blosc_version = register_blosc_()
+    blosc_init()  # from 1.2 on, Blosc library must be initialized
 
 
 # Important: Blosc calls that modify global variables in Blosc must be
@@ -382,6 +388,14 @@ silenceHDF5Messages = previous_api(silence_hdf5_messages)
 silence_hdf5_messages()
 
 
+def _broken_hdf5_long_double():
+    # HDF5 < 1.8.12 has a bug that prevents correct identification of the
+    # long double data type when the code is built with gcc 4.8.
+    # See also: http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
+
+    return H5Tget_order(H5T_NATIVE_DOUBLE) != H5Tget_order(H5T_NATIVE_LDOUBLE)
+
+
 # Helper functions
 cdef hsize_t *malloc_dims(object pdims):
   """Return a malloced hsize_t dims from a python pdims."""
@@ -569,12 +583,12 @@ def is_hdf5_file(object filename):
 
   """
 
+  # Check that the file exists and is readable.
+  check_file_access(filename)
+
   # Encode the filename in case it is unicode
   encname = encode_filename(filename)
 
-  # Check that the file exists and is readable.
-  check_file_access(encname)
-
   ret = H5Fis_hdf5(encname)
   if ret < 0:
     raise HDF5ExtError("problems identifying file ``%s``" % (filename,))
@@ -675,7 +689,7 @@ def which_lib_version(str name):
     if bzip2_version:
       (bzip2_version_string, bzip2_version_date) = bzip2_version
       return (bzip2_version, bzip2_version_string, bzip2_version_date)
-  elif strcmp(cname, "blosc") == 0:
+  elif strncmp(cname, "blosc", 5) == 0:
     if blosc_version:
       (blosc_version_string, blosc_version_date) = blosc_version
       return (blosc_version, blosc_version_string, blosc_version_date)
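
Switching strcmp() to strncmp(cname, "blosc", 5) turns the lookup into a
prefix match, so compressor-qualified names also resolve to the Blosc version
info; the "blosc:lz4" spelling below is an illustration based on the new
multi-compressor support, not something this hunk defines. A Python-level
equivalent of the new test:

    name = 'blosc:lz4'  # hypothetical qualified complib name
    if name.startswith('blosc'):  # what strncmp(cname, "blosc", 5) == 0 checks
        print('resolves to the Blosc version info')
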
@@ -690,18 +704,66 @@ def which_lib_version(str name):
 whichLibVersion = previous_api(which_lib_version)
 
 
+# A function returning all the compressors supported by local Blosc
+def blosc_compressor_list():
+  """
+  blosc_compressor_list()
+
+  Returns a list of compressors available in the Blosc build.
+
+  Parameters
+  ----------
+  None
+
+  Returns
+  -------
+  out : list
+      The list of names.
+  """
+  list_compr = blosc_list_compressors().decode()
+  clist = [str(cname) for cname in list_compr.split(',')]
+  return clist
+
+
+# Convert compressor code to compressor name
+def blosc_compcode_to_compname_(compcode):
+  """
+  blosc_compcode_to_compname_(compcode)
+
+  Returns the compressor name associated with a compressor code.
+
+  Parameters
+  ----------
+  compcode : int
+      The numeric code of the compressor whose name is wanted.
+
+  Returns
+  -------
+  out : string
+      The name of the compressor.
+  """
+  cdef char *cname
+  cdef object compname
+
+  compname = b"unknown (report this to developers)"
+  if blosc_compcode_to_compname(compcode, &cname) >= 0:
+    compname = cname
+  return compname.decode()
+
+
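
A hedged usage sketch for blosc_compcode_to_compname_() above; the numeric
codes come from the Blosc headers (0 is blosclz in the Blosc builds I know
of), so treat the concrete value as an assumption:

    from tables.utilsextension import blosc_compcode_to_compname_

    print(blosc_compcode_to_compname_(0))  # typically 'blosclz'
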
 def which_class(hid_t loc_id, object name):
   """Detects a class ID using heuristics."""
 
   cdef H5T_class_t  class_id
   cdef H5D_layout_t layout
   cdef hsize_t      nfields
-  cdef char         *field_name1, *field_name2
+  cdef char         *field_name1
+  cdef char         *field_name2
   cdef int          i
   cdef hid_t        type_id, dataset_id
   cdef object       classId
   cdef int          rank
-  cdef hsize_t      *dims, *maxdims
+  cdef hsize_t      *dims
+  cdef hsize_t      *maxdims
   cdef char         byteorder[11]  # "irrelevant" fits easily here
   cdef bytes        encoded_name
 
@@ -985,10 +1047,8 @@ def enum_from_hdf5(hid_t enumId, str byteorder):
 enumFromHDF5 = previous_api(enum_from_hdf5)
 
 
-def enum_to_hdf5(object enumAtom, str byteorder):
-  """enum_to_hdf5(enumAtom, byteorder) -> hid_t
-
-  Convert a PyTables enumerated type to an HDF5 one.
+def enum_to_hdf5(object enum_atom, str byteorder):
+  """Convert a PyTables enumerated type to an HDF5 one.
 
   This function creates an HDF5 enumerated type from the information
   contained in `enum_atom` (an ``Atom`` object), with the specified
@@ -997,41 +1057,48 @@ def enum_to_hdf5(object enumAtom, str byteorder):
 
   """
 
-  cdef bytes  name
-  cdef hid_t  baseId, enumId
-  cdef long   bytestride, i
-  cdef void  *rbuffer, *rbuf
-  cdef ndarray npValues
-  cdef object baseAtom
+  cdef bytes   name
+  cdef hid_t   base_id, enum_id
+  cdef long    bytestride, i
+  cdef void    *rbuffer
+  cdef void    *rbuf
+  cdef ndarray values
+  cdef object  base_atom
 
   # Get the base HDF5 type and create the enumerated type.
-  baseAtom = Atom.from_dtype(enumAtom.dtype.base)
-  baseId = atom_to_hdf5_type(baseAtom, byteorder)
+  base_atom = Atom.from_dtype(enum_atom.dtype.base)
+  base_id = atom_to_hdf5_type(base_atom, byteorder)
 
   try:
-    enumId = H5Tenum_create(baseId)
-    if enumId < 0:
+    enum_id = H5Tenum_create(base_id)
+    if enum_id < 0:
       raise HDF5ExtError("failed to create HDF5 enumerated type")
   finally:
-    if H5Tclose(baseId) < 0:
+    if H5Tclose(base_id) < 0:
       raise HDF5ExtError("failed to close HDF5 base type")
 
   # Set the name and value of each of the members.
-  npNames = enumAtom._names
-  npValues = enumAtom._values
-  bytestride = npValues.strides[0]
-  rbuffer = npValues.data
-  for i from 0 <= i < len(npNames):
-    name = npNames[i].encode('utf-8')
+  names = enum_atom._names
+  values = enum_atom._values
+  bytestride = values.strides[0]
+  rbuffer = values.data
+
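+  # Reorder the members so that the default value is inserted first:
+  # atom_from_hdf5_type() below takes the first member of an HDF5 enum
+  # as the default, so this keeps the default stable across round-trips.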
+  i = names.index(enum_atom._defname)
+  idx = list(range(len(names)))
+  idx.pop(i)
+  idx.insert(0, i)
+
+  for i in idx:
+    name = names[i].encode('utf-8')
     rbuf = <void *>(<char *>rbuffer + bytestride * i)
-    if H5Tenum_insert(enumId, name, rbuf) < 0:
+    if H5Tenum_insert(enum_id, name, rbuf) < 0:
       e = HDF5ExtError("failed to insert value into HDF5 enumerated type")
-      if H5Tclose(enumId) < 0:
+      if H5Tclose(enum_id) < 0:
         raise HDF5ExtError("failed to close HDF5 enumerated type")
       raise e
 
   # Return the new, open HDF5 enumerated type.
-  return enumId
+  return enum_id
 
 
 enumToHDF5 = previous_api(enum_to_hdf5)
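
A hedged sketch of why the insertion order matters (names are
illustrative; ``EnumAtom`` used as in the public API):

    import tables

    # 'red' is declared as the default member; with the change above it
    # is inserted first into the HDF5 enum, so it is recovered as the
    # default again when the type is read back from a file.
    atom = tables.EnumAtom(['red', 'green', 'blue'], 'red', base='uint8')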
@@ -1276,16 +1343,16 @@ def atom_from_hdf5_type(hid_t type_id, pure_numpy_types=False):
   """
 
   cdef object stype, shape, atom_, sctype, tsize, kind
-  cdef object dflt, base, enum, nptype
+  cdef object dflt, base, enum_, nptype
 
   stype, shape = hdf5_to_np_ext_type(type_id, pure_numpy_types, atom=True)
   # Create the Atom
   if stype == 'e':
-    (enum, nptype) = load_enum(type_id)
+    (enum_, nptype) = load_enum(type_id)
     # Take one of the names as the default in the enumeration.
-    dflt = next(iter(enum))[0]
+    dflt = next(iter(enum_))[0]
     base = Atom.from_dtype(nptype)
-    atom_ = EnumAtom(enum, dflt, base, shape=shape)
+    atom_ = EnumAtom(enum_, dflt, base, shape=shape)
   else:
     kind = npext_prefixes_to_ptkinds[stype[0]]
     tsize = int(stype[1:])
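
For the non-enum branch, the user-level counterpart is
``Atom.from_dtype``; a minimal sketch:

    import numpy as np
    import tables

    # Kind and item size are derived from the dtype, mirroring the
    # prefix/size split performed in the branch above.
    print(tables.Atom.from_dtype(np.dtype('float32')))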
diff --git a/tables/vlarray.py b/tables/vlarray.py
index f0c2a79..97189d7 100644
--- a/tables/vlarray.py
+++ b/tables/vlarray.py
@@ -10,7 +10,7 @@
 #
 ########################################################################
 
-"""Here is defined the VLArray class"""
+"""Here is defined the VLArray class."""
 
 import sys
 
@@ -43,7 +43,7 @@ class VLArray(hdf5extension.VLArray, Leaf):
     homogeneous elements, called *atoms*. Like Table datasets (see
     :ref:`TableClassDescr`), variable length arrays can have only one
     dimension, and the elements (atoms) of their rows can be fully
-    multidimensional.  VLArray objects do also support compression.
+    multidimensional.
 
     When reading a range of rows from a VLArray, you will *always* get
     a Python list of objects of the current flavor (each of them for a
@@ -54,6 +54,18 @@ class VLArray(hdf5extension.VLArray, Leaf):
     inherits all the public attributes and methods that Leaf (see
     :ref:`LeafClassDescr`) already provides.
 
+    .. note::
+
+          VLArray objects also support compression, although compression
+          is only applied to the data structures used internally by
+          HDF5 to store references to the locations of the variable
+          length data. The data itself (the raw data) is not compressed
+          or filtered.
+
+          Please refer to the `VLTypes Technical Note
+          <http://www.hdfgroup.org/HDF5/doc/TechNotes/VLTypes.html>`_
+          for more details on this topic.
+
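
A usage sketch of the note just above (file and node names are
illustrative): compression can be requested as usual, but it only
affects HDF5's internal reference structures, not the raw rows.

    import tables

    f = tables.open_file('vl.h5', mode='w')
    # complevel applies to the internal structures that reference the
    # variable-length data; the appended rows are stored unfiltered.
    vla = f.create_vlarray(f.root, 'vla', tables.Int32Atom(),
                           filters=tables.Filters(complevel=5))
    vla.append([1, 2, 3])
    f.close()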
     Parameters
     ----------
     parentnode
@@ -117,9 +129,9 @@ class VLArray(hdf5extension.VLArray, Leaf):
         vlarray.append([5, 6, 9, 8])
 
         # Now, read it through an iterator:
-        print '-->', vlarray.title
+        print('-->', vlarray.title)
         for x in vlarray:
-            print '%s[%d]--> %s' % (vlarray.name, vlarray.nrow, x)
+            print('%s[%d]--> %s' % (vlarray.name, vlarray.nrow, x))
 
         # Now, do the same with native Python strings.
         vlarray2 = fileh.create_vlarray(fileh.root, 'vlarray2',
@@ -129,14 +141,14 @@ class VLArray(hdf5extension.VLArray, Leaf):
         vlarray2.flavor = 'python'
 
         # Append some (variable length) rows:
-        print '-->', vlarray2.title
+        print('-->', vlarray2.title)
         vlarray2.append(['5', '66'])
         vlarray2.append(['5', '6', '77'])
         vlarray2.append(['5', '6', '9', '88'])
 
         # Now, read it through an iterator:
         for x in vlarray2:
-            print '%s[%d]--> %s' % (vlarray2.name, vlarray2.nrow, x)
+            print('%s[%d]--> %s' % (vlarray2.name, vlarray2.nrow, x))
 
         # Close the file.
         fileh.close()
@@ -541,7 +553,7 @@ class VLArray(hdf5extension.VLArray, Leaf):
         ::
 
             for row in vlarray.iterrows(step=4):
-                print '%s[%d]--> %s' % (vlarray.name, vlarray.nrow, row)
+                print('%s[%d]--> %s' % (vlarray.name, vlarray.nrow, row))
 
         .. versionchanged:: 3.0
            If the *start* parameter is provided and *stop* is None then the
@@ -585,7 +597,7 @@ class VLArray(hdf5extension.VLArray, Leaf):
         return self
 
     def _init_loop(self):
-        """Initialization for the __iter__ iterator"""
+        """Initialization for the __iter__ iterator."""
 
         self._nrowsread = self._start
         self._startb = self._start
@@ -598,7 +610,8 @@ class VLArray(hdf5extension.VLArray, Leaf):
     def next(self):
         """Get the next element of the array during an iteration.
 
-        The element is returned as a list of objects of the current flavor.
+        The element is returned as a list of objects of the current
+        flavor.
 
         """
 
@@ -697,7 +710,7 @@ class VLArray(hdf5extension.VLArray, Leaf):
                                                               nobjects))
             try:
                 nparr[:] = value
-            except Exception, exc:  # XXX
+            except Exception as exc:  # XXX
                 raise ValueError("Value parameter:\n'%r'\n"
                                  "cannot be converted into an array object "
                                  "compliant vlarray[%s] row: \n'%r'\n"
@@ -814,7 +827,7 @@ class VLArray(hdf5extension.VLArray, Leaf):
 
     def _g_copy_with_stats(self, group, name, start, stop, step,
                            title, filters, chunkshape, _log, **kwargs):
-        """Private part of Leaf.copy() for each kind of leaf"""
+        """Private part of Leaf.copy() for each kind of leaf."""
 
         # Build the new VLArray object
         object = VLArray(
