[pytables] 01/04: Imported Upstream version 3.1.1

Antonio Valentino a_valentino-guest at moszumanska.debian.org
Thu Mar 27 21:17:05 UTC 2014


This is an automated email from the git hooks/post-receive script.

a_valentino-guest pushed a commit to branch master
in repository pytables.

commit db864cf7081e0dcf182058db2322f8788dc8dd36
Author: Antonio Valentino <antonio.valentino at tiscali.it>
Date:   Thu Mar 27 22:05:52 2014 +0100

    Imported Upstream version 3.1.1
---
 ANNOUNCE.txt.in                                    |  41 +--
 RELEASE_NOTES.txt                                  |  31 ++
 VERSION                                            |   2 +-
 c-blosc/ANNOUNCE.rst                               |  22 +-
 c-blosc/README.rst                                 |  10 +-
 c-blosc/README_HEADER.rst                          |   8 +-
 c-blosc/RELEASE_NOTES.rst                          |  31 +-
 c-blosc/RELEASING.rst                              |   2 +-
 c-blosc/bench/Makefile                             |   2 +-
 c-blosc/bench/Makefile.mingw                       |   2 +-
 c-blosc/blosc/CMakeLists.txt                       |   2 +-
 c-blosc/blosc/blosc.c                              |  65 ++--
 c-blosc/blosc/blosc.h                              |  30 +-
 .../internal-complibs/lz4-r110/add-version.patch   |  14 -
 .../internal-complibs/{lz4-r110 => lz4-r113}/lz4.c | 374 +++++++++++----------
 .../internal-complibs/{lz4-r110 => lz4-r113}/lz4.h |  59 ++--
 .../{lz4-r110 => lz4-r113}/lz4hc.c                 | 306 +++++++++--------
 .../{lz4-r110 => lz4-r113}/lz4hc.h                 |  29 +-
 c-blosc/tests/Makefile                             |   2 +-
 c-blosc/tests/test_api.c                           |  11 +
 doc/source/release_notes.rst                       |   1 +
 doc/source/usersguide/libref/filenode_classes.rst  |   4 +
 examples/nested-iter.py                            |  19 --
 setup.py                                           |  19 +-
 tables/__init__.py                                 |  11 +-
 tables/file.py                                     |   6 +-
 tables/flavor.py                                   |  10 +-
 tables/hdf5extension.pyx                           |   8 +-
 tables/nodes/filenode.py                           | 162 +++++++++
 tables/nodes/tests/test_filenode.py                | 105 ++++++
 tables/tests/test_tablesMD.py                      |  16 +-
 31 files changed, 907 insertions(+), 497 deletions(-)

diff --git a/ANNOUNCE.txt.in b/ANNOUNCE.txt.in
index fbee1ca..f51dc0d 100644
--- a/ANNOUNCE.txt.in
+++ b/ANNOUNCE.txt.in
@@ -4,43 +4,20 @@
 
 We are happy to announce PyTables @VERSION at .
 
-This is a feature release.  The upgrading is recommended for users that
-are running PyTables in production environments.
+This is a bug-fix release that addresses a critical bug that makes PyTables
+unusable on some platforms.
 
 
 What's new
 ==========
 
-Probably the most relevant changes in this release are internal improvements
-like the new node cache that is now compatible with the upcoming Python 3.4
-and the registry for open files has been deeply reworked. The caching feature
-of file handlers has been completely dropped so now PyTables is a little bit
-more "thread friendly".
-
-New, user visible, features include:
-
-- a new lossy filter for HDF5 datasets (EArray, CArray, VLArray and Table
-  objects). The *quantization* filter truncates floating point data to a
-  specified precision before writing to disk.
-  This can significantly improve the performance of compressors
-  (many thanks to Andreas Hilboll).
-- support for the H5FD_SPLIT HDF5 driver (thanks to simleo)
-- all new features introduced in the Blosc_ 1.3.x series, and in particular
-  the ability to leverage different compressors within Blosc_ are now available
-  in PyTables via the blosc filter (a big thank you to Francesc)
-- the ability to save/restore the default value of :class:`EnumAtom` types
-
-Also, installations of the HDF5 library that have a broken support for the
-*long double* data type (see the `Issues with H5T_NATIVE_LDOUBLE`_ thread on
-the HFG5 forum) are detected by PyTables @VERSION@ and the corresponding
-features are automatically disabled.
-
-Users that need support for the *long double* data type should ensure to build
-PyTables against an installation of the HDF5 library that is not affected by the
-bug.
-
-.. _`Issues with H5T_NATIVE_LDOUBLE`:
-    http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
+- Fixed a critical bug that caused an exception at import time.
+  The error was triggered when a long-double detection bug was found
+  in the HDF5 library (see :issue:`275`) and numpy_ did not expose
+  `float96` or `float128`. Closes :issue:`344`.
+- The internal Blosc_ library has been updated to version 1.3.5.
+  This fixes a false buffer overrun condition that caused c-blosc to
+  fail even though no actual overrun had occurred.
 
 As always, a large number of bugs have been addressed and squashed as well.
 
diff --git a/RELEASE_NOTES.txt b/RELEASE_NOTES.txt
index 0498084..5f3045a 100644
--- a/RELEASE_NOTES.txt
+++ b/RELEASE_NOTES.txt
@@ -8,6 +8,37 @@
 .. py:currentmodule:: tables
 
 
+Changes from 3.1.0 to 3.1.1
+===========================
+
+Bugs fixed
+----------
+
+- Fixed a critical bug that caused an exception at import time.
+  The error was triggered when a long-double detection bug was found
+  in the HDF5 library (see :issue:`275`) and numpy_ did not expose
+  `float96` or `float128`. Closes :issue:`344`.
+- The internal Blosc_ library has been updated to version 1.3.5.
+  This fixes a false buffer overrun condition that caused c-blosc to
+  fail even though no actual overrun had occurred.
+
+
+Improvements
+------------
+
+- Do not create a temporary array when the *obj* parameter is not specified
+  in :meth:`File.create_array` (thanks to Francesc).
+  Closes :issue:`337` and :issue:`339`.
+- Added two new utility functions
+  (:func:`tables.nodes.filenode.read_from_filenode` and
+  :func:`tables.nodes.filenode.save_to_filenode`) for the direct copy from
+  filesystem to filenode and vice versa (closes :issue:`342`).
+  Thanks to Andreas Hilboll.
+- Removed the no longer useful :file:`examples/nested-iter.py` example.
+  Closes :issue:`343`.
+- Better detection of the `-msse2` compiler flag.
+
+
 Changes from 3.0 to 3.1.0
 =========================
 
diff --git a/VERSION b/VERSION
index fd2a018..94ff29c 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-3.1.0
+3.1.1
diff --git a/c-blosc/ANNOUNCE.rst b/c-blosc/ANNOUNCE.rst
index 507305d..9a662b5 100644
--- a/c-blosc/ANNOUNCE.rst
+++ b/c-blosc/ANNOUNCE.rst
@@ -1,18 +1,18 @@
 ===============================================================
- Announcing Blosc 1.3.2
+ Announcing c-blosc 1.3.5
  A blocking, shuffling and lossless compression library
 ===============================================================
 
 What is new?
 ============
 
-This is a maintenance release, where basically support for MSVC 2008
-has been added for Snappy internal sources and versioning symbols have
-been included in internal sources.
+This is just a maintenance release that removes a 'pointer from
+integer without a cast' compiler warning caused by a bad macro
+definition.
 
 For more info, please see the release notes in:
 
-https://github.com/FrancescAlted/blosc/wiki/Release-notes
+https://github.com/Blosc/c-blosc/wiki/Release-notes
 
 
 What is it?
@@ -27,11 +27,11 @@ Blosc is the first compressor (that I'm aware of) that is meant not
 only to reduce the size of large datasets on-disk or in-memory, but
 also to accelerate object manipulations that are memory-bound.
 
-There is also a handy command line for Blosc called Bloscpack
-(https://github.com/esc/bloscpack) that allows you to compress large
-binary datafiles on-disk.  Although the format for Bloscpack has not
-stabilized yet, it allows you to effectively use Blosc from you
-favorite shell.
+Blosc has a Python wrapper called python-blosc
+(https://github.com/Blosc/python-blosc) with a high-performance
+interface to NumPy too.  There is also a handy command-line tool for
+Blosc called Bloscpack (https://github.com/Blosc/bloscpack) that allows
+you to compress large binary datafiles on-disk.
 
 
 Download sources
@@ -43,7 +43,7 @@ http://www.blosc.org/
 
 and proceed from there.  The github repository is over here:
 
-https://github.com/FrancescAlted/blosc
+https://github.com/Blosc/c-blosc
 
 Blosc is distributed using the MIT license, see LICENSES/BLOSC.txt for
 details.
diff --git a/c-blosc/README.rst b/c-blosc/README.rst
index f7f3a96..d7bd53b 100644
--- a/c-blosc/README.rst
+++ b/c-blosc/README.rst
@@ -242,8 +242,14 @@ Wrapper for Python
 
 Blosc has an official wrapper for Python.  See:
 
-http://blosc.pydata.org
-https://github.com/FrancescAlted/python-blosc
+https://github.com/Blosc/python-blosc
+
+Command line interface and serialization format for Blosc
+=========================================================
+
+Blosc can be used from the command line via Bloscpack.  See:
+
+https://github.com/Blosc/bloscpack
 
 Filter for HDF5
 ===============
diff --git a/c-blosc/README_HEADER.rst b/c-blosc/README_HEADER.rst
index 96172d5..d2428f1 100644
--- a/c-blosc/README_HEADER.rst
+++ b/c-blosc/README_HEADER.rst
@@ -20,7 +20,7 @@ All entries are little endian.
 :version:
     (``uint8``) Blosc format version.
 :versionlz:
-    (``uint8``) Blosclz format  version (internal Lempel-Ziv algorithm).
+    (``uint8``) Version of the internal compressor used.
 :flags and compressor enumeration:
     (``bitfield``) The flags of the buffer 
 
@@ -47,12 +47,10 @@ All entries are little endian.
     :``0``:
         ``blosclz``
     :``1``:
-        ``lz4``
+        ``lz4`` or ``lz4hc``
     :``2``:
-        ``lz4hc``
-    :``3``:
         ``snappy``
-    :``4``:
+    :``3``:
         ``zlib``
 
 :typesize:
diff --git a/c-blosc/RELEASE_NOTES.rst b/c-blosc/RELEASE_NOTES.rst
index baf3901..373d406 100644
--- a/c-blosc/RELEASE_NOTES.rst
+++ b/c-blosc/RELEASE_NOTES.rst
@@ -1,12 +1,37 @@
-===============================
- Release notes for Blosc 1.3.2
-===============================
+================================
+ Release notes for c-blosc 1.3.5
+================================
 
 :Author: Francesc Alted
 :Contact: faltet at gmail.com
 :URL: http://www.blosc.org
 
 
+Changes from 1.3.4 to 1.3.5
+===========================
+
+* Removed a 'pointer from integer without a cast' compiler warning
+  due to a bad macro definition.
+
+
+Changes from 1.3.3 to 1.3.4
+===========================
+
+* Fixed a false buffer overrun condition.  This bug caused c-blosc to
+  fail even though no actual overrun had occurred.
+
+* Fixed the type of a buffer string.
+
+
+Changes from 1.3.2 to 1.3.3
+===========================
+
+* Updated to LZ4 1.1.3 (improved speed for 32-bit platforms).
+
+* Added a new `blosc_cbuffer_complib()` for getting the compression
+  library for a compressed buffer.
+
+
 Changes from 1.3.1 to 1.3.2
 ===========================
 
diff --git a/c-blosc/RELEASING.rst b/c-blosc/RELEASING.rst
index 514fb6d..8908f29 100644
--- a/c-blosc/RELEASING.rst
+++ b/c-blosc/RELEASING.rst
@@ -73,7 +73,7 @@ Announcing
 
 - Update the release notes in the github wiki:
 
-https://github.com/FrancescAlted/blosc/wiki/Release-notes
+https://github.com/Blosc/c-blosc/wiki/Release-notes
 
 - Send an announcement to the blosc, pytables, carray and
   comp.compression lists.  Use the ``ANNOUNCE.rst`` file as skeleton
diff --git a/c-blosc/bench/Makefile b/c-blosc/bench/Makefile
index 54cb820..267c54a 100644
--- a/c-blosc/bench/Makefile
+++ b/c-blosc/bench/Makefile
@@ -6,7 +6,7 @@ SOURCES = $(wildcard ../blosc/*.c)
 EXECUTABLE = bench
 
 # Support for internal LZ4 and LZ4HC
-LZ4_DIR = ../internal-complibs/lz4-r110
+LZ4_DIR = ../internal-complibs/lz4-r113
 CFLAGS += -DHAVE_LZ4 -I$(LZ4_DIR)
 SOURCES += $(wildcard $(LZ4_DIR)/*.c)
 
diff --git a/c-blosc/bench/Makefile.mingw b/c-blosc/bench/Makefile.mingw
index 3a0f858..3665486 100644
--- a/c-blosc/bench/Makefile.mingw
+++ b/c-blosc/bench/Makefile.mingw
@@ -7,7 +7,7 @@ SOURCES = $(wildcard ../blosc/*.c)
 EXECUTABLE = bench
 
 # Support for internal LZ4
-LZ4_DIR = ../internal-complibs/lz4-r110
+LZ4_DIR = ../internal-complibs/lz4-r113
 CFLAGS += -DHAVE_LZ4 -I$(LZ4_DIR)
 SOURCES += $(wildcard $(LZ4_DIR)/*.c)
 
diff --git a/c-blosc/blosc/CMakeLists.txt b/c-blosc/blosc/CMakeLists.txt
index 2ce9cd5..305eb84 100644
--- a/c-blosc/blosc/CMakeLists.txt
+++ b/c-blosc/blosc/CMakeLists.txt
@@ -8,7 +8,7 @@ if(NOT DEACTIVATE_LZ4)
     if (LZ4_FOUND)
         include_directories( ${LZ4_INCLUDE_DIR} )
     else(LZ4_FOUND)
-        set(LZ4_LOCAL_DIR ${INTERNAL_LIBS}/lz4-r110)
+        set(LZ4_LOCAL_DIR ${INTERNAL_LIBS}/lz4-r113)
         include_directories( ${LZ4_LOCAL_DIR} )
     endif(LZ4_FOUND)
 endif(NOT DEACTIVATE_LZ4)
diff --git a/c-blosc/blosc/blosc.c b/c-blosc/blosc/blosc.c
index 36f9243..de96ca2 100644
--- a/c-blosc/blosc/blosc.c
+++ b/c-blosc/blosc/blosc.c
@@ -139,14 +139,14 @@ static struct temp_data {
 /* Wait until all threads are initialized */
 #ifdef _POSIX_BARRIERS_MINE
 static int rc;
-#define WAIT_INIT \
+#define WAIT_INIT(RET_VAL)  \
   rc = pthread_barrier_wait(&barr_init); \
   if (rc != 0 && rc != PTHREAD_BARRIER_SERIAL_THREAD) { \
     printf("Could not wait on barrier (init)\n"); \
-    return(-1); \
+    return((RET_VAL));				  \
   }
 #else
-#define WAIT_INIT \
+#define WAIT_INIT(RET_VAL)   \
   pthread_mutex_lock(&count_threads_mutex); \
   if (count_threads < nthreads) { \
     count_threads++; \
@@ -160,14 +160,14 @@ static int rc;
 
 /* Wait for all threads to finish */
 #ifdef _POSIX_BARRIERS_MINE
-#define WAIT_FINISH \
+#define WAIT_FINISH(RET_VAL)   \
   rc = pthread_barrier_wait(&barr_finish); \
   if (rc != 0 && rc != PTHREAD_BARRIER_SERIAL_THREAD) { \
     printf("Could not wait on barrier (finish)\n"); \
-    return(-1);                                       \
+    return((RET_VAL));				    \
   }
 #else
-#define WAIT_FINISH \
+#define WAIT_FINISH(RET_VAL)			    \
   pthread_mutex_lock(&count_threads_mutex); \
   if (count_threads > 0) { \
     count_threads--; \
@@ -362,13 +362,14 @@ static int lz4_wrap_compress(const char* input, size_t input_length,
 }
 
 static int lz4hc_wrap_compress(const char* input, size_t input_length,
-                               char* output, size_t maxout)
+                               char* output, size_t maxout, int clevel)
 {
   int cbytes;
   if (input_length > (size_t)(2<<30))
     return -1;   /* input larger than 1 GB is not supported */
-  cbytes = LZ4_compressHC_limitedOutput(input, output, (int)input_length,
-					(int)maxout);
+  /* clevel for lz4hc goes up to 16, at least in LZ4 1.1.3 */
+  cbytes = LZ4_compressHC2_limitedOutput(input, output, (int)input_length,
+					 (int)maxout, clevel*2-1);
   return cbytes;
 }
 
@@ -503,7 +504,7 @@ static int blosc_c(int32_t blocksize, int32_t leftoverblock,
     }
     else if (compressor == BLOSC_LZ4HC) {
       cbytes = lz4hc_wrap_compress((char *)_tmp+j*neblock, (size_t)neblock,
-                                   (char *)dest, (size_t)maxout);
+                                   (char *)dest, (size_t)maxout, params.clevel);
     }
     #endif /*  HAVE_LZ4 */
     #if defined(HAVE_SNAPPY)
@@ -526,7 +527,7 @@ static int blosc_c(int32_t blocksize, int32_t leftoverblock,
       return -5;    /* signals no compression support */
     }
 
-    if (cbytes >= maxout) {
+    if (cbytes > maxout) {
       /* Buffer overrun caused by compression (should never happen) */
       return -1;
     }
@@ -535,7 +536,7 @@ static int blosc_c(int32_t blocksize, int32_t leftoverblock,
       return -2;
     }
     else if (cbytes == 0) {
-      /* The compressor has been unable to compress data significantly. */
+      /* The compressor has been unable to compress data at all. */
       /* Before doing the copy, check that we are not running into a
          buffer overflow. */
       if ((ntbytes+neblock) > maxbytes) {
@@ -734,9 +735,9 @@ static int parallel_blosc(void)
   }
 
   /* Synchronization point for all threads (wait for initialization) */
-  WAIT_INIT;
+  WAIT_INIT(-1);
   /* Synchronization point for all threads (wait for finalization) */
-  WAIT_FINISH;
+  WAIT_FINISH(-1);
 
   if (giveup_code > 0) {
     /* Return the total bytes (de-)compressed in threads */
@@ -1360,7 +1361,7 @@ static void *t_blosc(void *tids)
     init_sentinels_done = 0;     /* sentinels have to be initialised yet */
 
     /* Synchronization point for all threads (wait for initialization) */
-    WAIT_INIT;
+    WAIT_INIT(NULL);
 
     /* Check if thread has been asked to return */
     if (end_threads) {
@@ -1501,7 +1502,7 @@ static void *t_blosc(void *tids)
     }
 
     /* Meeting point for all threads (wait for finalization) */
-    WAIT_FINISH;
+    WAIT_FINISH(NULL);
 
   }  /* closes while(1) */
 
@@ -1604,7 +1605,7 @@ int blosc_set_nthreads_(int nthreads_new)
       /* Tell all existing threads to finish */
       end_threads = 1;
       /* Synchronization point for all threads (wait for initialization) */
-      WAIT_INIT;
+      WAIT_INIT(-1);
       /* Join exiting threads */
       for (t=0; t<nthreads; t++) {
         rc2 = pthread_join(threads[t], &status);
@@ -1674,7 +1675,7 @@ int blosc_get_complib_info(char *compname, char **complib, char **version)
   int clibcode;
   char *clibname;
   char *clibversion = "unknown";
-  char *sbuffer[256];
+  char sbuffer[256];
 
   clibcode = compname_to_clibcode(compname);
   clibname = clibcode_to_clibname(clibcode);
@@ -1685,9 +1686,11 @@ int blosc_get_complib_info(char *compname, char **complib, char **version)
   }
 #if defined(HAVE_LZ4)
   else if (clibcode == BLOSC_LZ4_LIB) {
-#if defined(LZ4_VERSION_STRING)
-    clibversion = LZ4_VERSION_STRING;
-#endif /* LZ4_VERSION_STRING */
+#if defined(LZ4_VERSION_MAJOR)
+    sprintf(sbuffer, "%d.%d.%d",
+            LZ4_VERSION_MAJOR, LZ4_VERSION_MINOR, LZ4_VERSION_RELEASE);
+    clibversion = sbuffer;
+#endif /*  LZ4_VERSION_MAJOR */
   }
 #endif /*  HAVE_LZ4 */
 #if defined(HAVE_SNAPPY)
@@ -1729,7 +1732,7 @@ int blosc_free_resources(void)
     /* Tell all existing threads to finish */
     end_threads = 1;
     /* Synchronization point for all threads (wait for initialization) */
-    WAIT_INIT;
+    WAIT_INIT(-1);
     /* Join exiting threads */
     for (t=0; t<nthreads; t++) {
       rc2 = pthread_join(threads[t], &status);
@@ -1822,8 +1825,22 @@ void blosc_cbuffer_versions(const void *cbuffer, int *version,
   uint8_t *_src = (uint8_t *)(cbuffer);  /* current pos for source buffer */
 
   /* Read the version info */
-  *version = (int)_src[0];             /* blosc format version */
-  *versionlz = (int)_src[1];           /* blosclz format version */
+  *version = (int)_src[0];         /* blosc format version */
+  *versionlz = (int)_src[1];       /* Lempel-Ziv compressor format version */
+}
+
+
+/* Return the compressor library/format used in a compressed buffer. */
+char *blosc_cbuffer_complib(const void *cbuffer)
+{
+  uint8_t *_src = (uint8_t *)(cbuffer);  /* current pos for source buffer */
+  int clibcode;
+  char *complib;
+
+  /* Read the compressor format/library info */
+  clibcode = (_src[2] & 0xe0) >> 5;
+  complib = clibcode_to_clibname(clibcode);
+  return complib;
 }
 
 
diff --git a/c-blosc/blosc/blosc.h b/c-blosc/blosc/blosc.h
index 36fc819..d1cd859 100644
--- a/c-blosc/blosc/blosc.h
+++ b/c-blosc/blosc/blosc.h
@@ -14,11 +14,11 @@
 /* Version numbers */
 #define BLOSC_VERSION_MAJOR    1    /* for major interface/format changes  */
 #define BLOSC_VERSION_MINOR    3    /* for minor interface/format changes  */
-#define BLOSC_VERSION_RELEASE  2    /* for tweaks, bug-fixes, or development */
+#define BLOSC_VERSION_RELEASE  5    /* for tweaks, bug-fixes, or development */
 
-#define BLOSC_VERSION_STRING   "1.3.2"  /* string version.  Sync with above! */
+#define BLOSC_VERSION_STRING   "1.3.5"  /* string version.  Sync with above! */
 #define BLOSC_VERSION_REVISION "$Rev$"   /* revision version */
-#define BLOSC_VERSION_DATE     "$Date:: 2014-01-17 #$"    /* date version */
+#define BLOSC_VERSION_DATE     "$Date:: 2014-03-22 #$"    /* date version */
 
 #define BLOSCLZ_VERSION_STRING "1.0.1"   /* the internal compressor version */
 
@@ -236,9 +236,10 @@ char* blosc_list_compressors(void);
   In `complib` and `version` you get the compression library name and
   version (if available) as output.
 
-  In `complib` and `version` you get a pointer to the compressor name
-  and the version in string format respectively.  After using the name
-  and version, you should free() them so as to avoid leaks.
+  In `complib` and `version` you get a pointer to the compressor
+  library name and the version in string format respectively.  After
+  using the name and version, you should free() them so as to avoid
+  leaks.
 
   If the compressor is supported, it returns the code for the library
   (>=0).  If it is not supported, this function returns -1.
@@ -247,9 +248,10 @@ int blosc_get_complib_info(char *compname, char **complib, char **version);
 
 
 /**
-  Free possible memory temporaries and thread resources.  Use this when you
-  are not going to use Blosc for a long while.  In case of problems releasing
-  the resources, it returns a negative number, else it returns 0.
+  Free possible memory temporaries and thread resources.  Use this
+  when you are not going to use Blosc for a long while.  In case of
+  problems releasing the resources, it returns a negative number, else
+  it returns 0.
   */
 int blosc_free_resources(void);
 
@@ -290,7 +292,7 @@ void blosc_cbuffer_metainfo(const void *cbuffer, size_t *typesize,
 /**
   Return information about a compressed buffer, namely the internal
   Blosc format version (`version`) and the format for the internal
-  Lempel-Ziv algorithm (`versionlz`).
+  Lempel-Ziv compressor used (`versionlz`).
 
   This function should always succeed.
   */
@@ -298,6 +300,14 @@ void blosc_cbuffer_versions(const void *cbuffer, int *version,
                             int *versionlz);
 
 
+/**
+  Return the compressor library/format used in a compressed buffer.
+
+  This function should always succeed.
+  */
+char *blosc_cbuffer_complib(const void *cbuffer);
+
+
 
 /*********************************************************************
 
diff --git a/c-blosc/internal-complibs/lz4-r110/add-version.patch b/c-blosc/internal-complibs/lz4-r110/add-version.patch
deleted file mode 100644
index fef243d..0000000
--- a/c-blosc/internal-complibs/lz4-r110/add-version.patch
+++ /dev/null
@@ -1,14 +0,0 @@
-diff --git a/internal-complibs/lz4-r110/lz4.h b/internal-complibs/lz4-r110/lz4.h
-index af05dbc..33fcbe4 100644
---- a/internal-complibs/lz4-r110/lz4.h
-+++ b/internal-complibs/lz4-r110/lz4.h
-@@ -37,6 +37,9 @@
- extern "C" {
- #endif
- 
-+// The next is for getting the LZ4 version.
-+// Please note that this is only defined in the Blosc sources of LZ4.
-+#define LZ4_VERSION_STRING "r110"
- 
- //**************************************
- // Compiler Options
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4.c b/c-blosc/internal-complibs/lz4-r113/lz4.c
similarity index 71%
rename from c-blosc/internal-complibs/lz4-r110/lz4.c
rename to c-blosc/internal-complibs/lz4-r113/lz4.c
index f521b0f..ee37895 100644
--- a/c-blosc/internal-complibs/lz4-r110/lz4.c
+++ b/c-blosc/internal-complibs/lz4-r113/lz4.c
@@ -1,6 +1,6 @@
 /*
    LZ4 - Fast LZ compression algorithm
-   Copyright (C) 2011-2013, Yann Collet.
+   Copyright (C) 2011-2014, Yann Collet.
    BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 
    Redistribution and use in source and binary forms, with or without
@@ -31,37 +31,43 @@
    - LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c
 */
 
-//**************************************
-// Tuning parameters
-//**************************************
-// MEMORY_USAGE :
-// Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
-// Increasing memory usage improves compression ratio
-// Reduced memory usage can improve speed, due to cache effect
-// Default value is 14, for 16KB, which nicely fits into Intel x86 L1 cache
+/**************************************
+   Tuning parameters
+**************************************/
+/*
+ * MEMORY_USAGE :
+ * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
+ * Increasing memory usage improves compression ratio
+ * Reduced memory usage can improve speed, due to cache effect
+ * Default value is 14, for 16KB, which nicely fits into Intel x86 L1 cache
+ */
 #define MEMORY_USAGE 14
 
-// HEAPMODE :
-// Select how default compression functions will allocate memory for their hash table,
-// in memory stack (0:default, fastest), or in memory heap (1:requires memory allocation (malloc)).
+/*
+ * HEAPMODE :
+ * Select how default compression functions will allocate memory for their hash table,
+ * in memory stack (0:default, fastest), or in memory heap (1:requires memory allocation (malloc)).
+ */
 #define HEAPMODE 0
 
 
-//**************************************
-// CPU Feature Detection
-//**************************************
-// 32 or 64 bits ?
+/**************************************
+   CPU Feature Detection
+**************************************/
+/* 32 or 64 bits ? */
 #if (defined(__x86_64__) || defined(_M_X64) || defined(_WIN64) \
   || defined(__powerpc64__) || defined(__ppc64__) || defined(__PPC64__) \
   || defined(__64BIT__) || defined(_LP64) || defined(__LP64__) \
-  || defined(__ia64) || defined(__itanium__) || defined(_M_IA64) )   // Detects 64 bits mode
+  || defined(__ia64) || defined(__itanium__) || defined(_M_IA64) )   /* Detects 64 bits mode */
 #  define LZ4_ARCH64 1
 #else
 #  define LZ4_ARCH64 0
 #endif
 
-// Little Endian or Big Endian ?
-// Overwrite the #define below if you know your architecture endianess
+/*
+ * Little Endian or Big Endian ?
+ * Overwrite the #define below if you know your architecture endianess
+ */
 #if defined (__GLIBC__)
 #  include <endian.h>
 #  if (__BYTE_ORDER == __BIG_ENDIAN)
@@ -75,48 +81,53 @@
    || defined(_MIPSEB) || defined(__s390__)
 #  define LZ4_BIG_ENDIAN 1
 #else
-// Little Endian assumed. PDP Endian and other very rare endian format are unsupported.
+/* Little Endian assumed. PDP Endian and other very rare endian format are unsupported. */
 #endif
 
-// Unaligned memory access is automatically enabled for "common" CPU, such as x86.
-// For others CPU, such as ARM, the compiler may be more cautious, inserting unnecessary extra code to ensure aligned access property
-// If you know your target CPU supports unaligned memory access, you want to force this option manually to improve performance
+/*
+ * Unaligned memory access is automatically enabled for "common" CPU, such as x86.
+ * For others CPU, such as ARM, the compiler may be more cautious, inserting unnecessary extra code to ensure aligned access property
+ * If you know your target CPU supports unaligned memory access, you want to force this option manually to improve performance
+ */
 #if defined(__ARM_FEATURE_UNALIGNED)
 #  define LZ4_FORCE_UNALIGNED_ACCESS 1
 #endif
 
-// Define this parameter if your target system or compiler does not support hardware bit count
-#if defined(_MSC_VER) && defined(_WIN32_WCE)            // Visual Studio for Windows CE does not support Hardware bit count
+/* Define this parameter if your target system or compiler does not support hardware bit count */
+#if defined(_MSC_VER) && defined(_WIN32_WCE)   /* Visual Studio for Windows CE does not support Hardware bit count */
 #  define LZ4_FORCE_SW_BITCOUNT
 #endif
 
-// BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE :
-// This option may provide a small boost to performance for some big endian cpu, although probably modest.
-// You may set this option to 1 if data will remain within closed environment.
-// This option is useless on Little_Endian CPU (such as x86)
-//#define BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE 1
+/*
+ * BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE :
+ * This option may provide a small boost to performance for some big endian cpu, although probably modest.
+ * You may set this option to 1 if data will remain within closed environment.
+ * This option is useless on Little_Endian CPU (such as x86)
+ */
+
+/* #define BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE 1 */
 
 
-//**************************************
-// Compiler Options
-//**************************************
-#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)   // C99
+/**************************************
+ Compiler Options
+**************************************/
+#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)   /* C99 */
 /* "restrict" is a known keyword */
 #else
-#  define restrict // Disable restrict
+#  define restrict /* Disable restrict */
 #endif
 
-#ifdef _MSC_VER    // Visual Studio
+#ifdef _MSC_VER    /* Visual Studio */
 #  define FORCE_INLINE static __forceinline
-#  include <intrin.h>                    // For Visual 2005
-#  if LZ4_ARCH64   // 64-bits
-#    pragma intrinsic(_BitScanForward64) // For Visual 2005
-#    pragma intrinsic(_BitScanReverse64) // For Visual 2005
-#  else            // 32-bits
-#    pragma intrinsic(_BitScanForward)   // For Visual 2005
-#    pragma intrinsic(_BitScanReverse)   // For Visual 2005
+#  include <intrin.h>                    /* For Visual 2005 */
+#  if LZ4_ARCH64   /* 64-bits */
+#    pragma intrinsic(_BitScanForward64) /* For Visual 2005 */
+#    pragma intrinsic(_BitScanReverse64) /* For Visual 2005 */
+#  else            /* 32-bits */
+#    pragma intrinsic(_BitScanForward)   /* For Visual 2005 */
+#    pragma intrinsic(_BitScanReverse)   /* For Visual 2005 */
 #  endif
-#  pragma warning(disable : 4127)        // disable: C4127: conditional expression is constant
+#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
 #else
 #  ifdef __GNUC__
 #    define FORCE_INLINE static inline __attribute__((always_inline))
@@ -125,7 +136,7 @@
 #  endif
 #endif
 
-#ifdef _MSC_VER
+#ifdef _MSC_VER  /* Visual Studio */
 #  define lz4_bswap16(x) _byteswap_ushort(x)
 #else
 #  define lz4_bswap16(x) ((unsigned short int) ((((x) >> 8) & 0xffu) | (((x) & 0xffu) << 8)))
@@ -143,26 +154,26 @@
 #define unlikely(expr)   expect((expr) != 0, 0)
 
 
-//**************************************
-// Memory routines
-//**************************************
-#include <stdlib.h>   // malloc, calloc, free
+/**************************************
+   Memory routines
+**************************************/
+#include <stdlib.h>   /* malloc, calloc, free */
 #define ALLOCATOR(n,s) calloc(n,s)
 #define FREEMEM        free
-#include <string.h>   // memset, memcpy
+#include <string.h>   /* memset, memcpy */
 #define MEM_INIT       memset
 
 
-//**************************************
-// Includes
-//**************************************
+/**************************************
+   Includes
+**************************************/
 #include "lz4.h"
 
 
-//**************************************
-// Basic Types
-//**************************************
-#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   // C99
+/**************************************
+   Basic Types
+**************************************/
+#if defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)   /* C99 */
 # include <stdint.h>
   typedef  uint8_t BYTE;
   typedef uint16_t U16;
@@ -210,9 +221,9 @@ typedef struct {size_t v;} _PACKED size_t_S;
 #define AARCH(x) (((size_t_S *)(x))->v)
 
 
-//**************************************
-// Constants
-//**************************************
+/**************************************
+   Constants
+**************************************/
 #define LZ4_HASHLOG   (MEMORY_USAGE-2)
 #define HASHTABLESIZE (1 << MEMORY_USAGE)
 #define HASHNBCELLS4  (1 << LZ4_HASHLOG)
@@ -222,10 +233,14 @@ typedef struct {size_t v;} _PACKED size_t_S;
 #define COPYLENGTH 8
 #define LASTLITERALS 5
 #define MFLIMIT (COPYLENGTH+MINMATCH)
-const int LZ4_minLength = (MFLIMIT+1);
+static const int LZ4_minLength = (MFLIMIT+1);
+
+#define KB *(1U<<10)
+#define MB *(1U<<20)
+#define GB *(1U<<30)
 
-#define LZ4_64KLIMIT ((1<<16) + (MFLIMIT-1))
-#define SKIPSTRENGTH 6     // Increasing this value will make the compression run slower on incompressible data
+#define LZ4_64KLIMIT ((64 KB) + (MFLIMIT-1))
+#define SKIPSTRENGTH 6   /* Increasing this value will make the compression run slower on incompressible data */
 
 #define MAXD_LOG 16
 #define MAX_DISTANCE ((1 << MAXD_LOG) - 1)
@@ -235,15 +250,10 @@ const int LZ4_minLength = (MFLIMIT+1);
 #define RUN_BITS (8-ML_BITS)
 #define RUN_MASK ((1U<<RUN_BITS)-1)
 
-#define KB *(1U<<10)
-#define MB *(1U<<20)
-#define GB *(1U<<30)
-
-
-//**************************************
-// Structures and local types
-//**************************************
 
+/**************************************
+   Structures and local types
+**************************************/
 typedef struct {
     U32 hashTable[HASHNBCELLS4];
     const BYTE* bufferStart;
@@ -260,40 +270,36 @@ typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive;
 typedef enum { full = 0, partial = 1 } earlyEnd_directive;
 
 
-//**************************************
-// Architecture-specific macros
-//**************************************
+/**************************************
+   Architecture-specific macros
+**************************************/
 #define STEPSIZE                  sizeof(size_t)
 #define LZ4_COPYSTEP(d,s)         { AARCH(d) = AARCH(s); d+=STEPSIZE; s+=STEPSIZE; }
 #define LZ4_COPY8(d,s)            { LZ4_COPYSTEP(d,s); if (STEPSIZE<8) LZ4_COPYSTEP(d,s); }
-#define LZ4_SECURECOPY(d,s,e)     { if ((STEPSIZE==4)||(d<e)) LZ4_WILDCOPY(d,s,e); }
-
-#if LZ4_ARCH64   // 64-bit
-#  define HTYPE                   U32
-#  define INITBASE(base)          const BYTE* const base = ip
-#else            // 32-bit
-#  define HTYPE                   const BYTE*
-#  define INITBASE(base)          const int base = 0
-#endif
 
 #if (defined(LZ4_BIG_ENDIAN) && !defined(BIG_ENDIAN_NATIVE_BUT_INCOMPATIBLE))
 #  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { U16 v = A16(p); v = lz4_bswap16(v); d = (s) - v; }
 #  define LZ4_WRITE_LITTLEENDIAN_16(p,i)  { U16 v = (U16)(i); v = lz4_bswap16(v); A16(p) = v; p+=2; }
-#else      // Little Endian
+#else      /* Little Endian */
 #  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { d = (s) - A16(p); }
 #  define LZ4_WRITE_LITTLEENDIAN_16(p,v)  { A16(p) = v; p+=2; }
 #endif
 
 
-//**************************************
-// Macros
-//**************************************
-#define LZ4_WILDCOPY(d,s,e)     { do { LZ4_COPY8(d,s) } while (d<e); }           // at the end, d>=e;
+/**************************************
+   Macros
+**************************************/
+#if LZ4_ARCH64 || !defined(__GNUC__)
+#  define LZ4_WILDCOPY(d,s,e)     { do { LZ4_COPY8(d,s) } while (d<e); }           /* at the end, d>=e; */
+#else
+#  define LZ4_WILDCOPY(d,s,e)     { if (likely(e-d <= 8)) LZ4_COPY8(d,s) else do { LZ4_COPY8(d,s) } while (d<e); }
+#endif
+#define LZ4_SECURECOPY(d,s,e)     { if (d<e) LZ4_WILDCOPY(d,s,e); }
 
 
-//****************************
-// Private functions
-//****************************
+/****************************
+   Private local functions
+****************************/
 #if LZ4_ARCH64
 
 FORCE_INLINE int LZ4_NbCommonBytes (register U64 val)
@@ -360,9 +366,9 @@ FORCE_INLINE int LZ4_NbCommonBytes (register U32 val)
 #endif
 
 
-//****************************
-// Compression functions
-//****************************
+/****************************
+   Compression functions
+****************************/
 FORCE_INLINE int LZ4_hashSequence(U32 sequence, tableType_t tableType)
 {
     if (tableType == byU16)
@@ -393,7 +399,7 @@ FORCE_INLINE const BYTE* LZ4_getPositionOnHash(U32 h, void* tableBase, tableType
 {
     if (tableType == byPtr) { const BYTE** hashTable = (const BYTE**) tableBase; return hashTable[h]; }
     if (tableType == byU32) { U32* hashTable = (U32*) tableBase; return hashTable[h] + srcBase; }
-    { U16* hashTable = (U16*) tableBase; return hashTable[h] + srcBase; }   // default, to ensure a return
+    { U16* hashTable = (U16*) tableBase; return hashTable[h] + srcBase; }   /* default, to ensure a return */
 }
 
 FORCE_INLINE const BYTE* LZ4_getPosition(const BYTE* p, void* tableBase, tableType_t tableType, const BYTE* srcBase)
@@ -429,18 +435,18 @@ FORCE_INLINE int LZ4_compress_generic(
     const int skipStrength = SKIPSTRENGTH;
     U32 forwardH;
 
-    // Init conditions
-    if ((U32)inputSize > (U32)LZ4_MAX_INPUT_SIZE) return 0;                                // Unsupported input size, too large (or negative)
-    if ((prefix==withPrefix) && (ip != ((LZ4_Data_Structure*)ctx)->nextBlock)) return 0;   // must continue from end of previous block
-    if (prefix==withPrefix) ((LZ4_Data_Structure*)ctx)->nextBlock=iend;                    // do it now, due to potential early exit
-    if ((tableType == byU16) && (inputSize>=LZ4_64KLIMIT)) return 0;                       // Size too large (not within 64K limit)
-    if (inputSize<LZ4_minLength) goto _last_literals;                                      // Input too small, no compression (all literals)
+    /* Init conditions */
+    if ((U32)inputSize > (U32)LZ4_MAX_INPUT_SIZE) return 0;                                /* Unsupported input size, too large (or negative) */
+    if ((prefix==withPrefix) && (ip != ((LZ4_Data_Structure*)ctx)->nextBlock)) return 0;   /* must continue from end of previous block */
+    if (prefix==withPrefix) ((LZ4_Data_Structure*)ctx)->nextBlock=iend;                    /* do it now, due to potential early exit */
+    if ((tableType == byU16) && (inputSize>=(int)LZ4_64KLIMIT)) return 0;                  /* Size too large (not within 64K limit) */
+    if (inputSize<LZ4_minLength) goto _last_literals;                                      /* Input too small, no compression (all literals) */
 
-    // First Byte
+    /* First Byte */
     LZ4_putPosition(ip, ctx, tableType, base);
     ip++; forwardH = LZ4_hashPosition(ip, tableType);
 
-    // Main Loop
+    /* Main Loop */
     for ( ; ; )
     {
         int findMatchAttempts = (1U << skipStrength) + 3;
@@ -448,14 +454,14 @@ FORCE_INLINE int LZ4_compress_generic(
         const BYTE* ref;
         BYTE* token;
 
-        // Find a match
+        /* Find a match */
         do {
             U32 h = forwardH;
             int step = findMatchAttempts++ >> skipStrength;
             ip = forwardIp;
             forwardIp = ip + step;
 
-            if unlikely(forwardIp > mflimit) { goto _last_literals; }
+            if (unlikely(forwardIp > mflimit)) { goto _last_literals; }
 
             forwardH = LZ4_hashPosition(forwardIp, tableType);
             ref = LZ4_getPositionOnHash(h, ctx, tableType, base);
@@ -463,13 +469,13 @@ FORCE_INLINE int LZ4_compress_generic(
 
         } while ((ref + MAX_DISTANCE < ip) || (A32(ref) != A32(ip)));
 
-        // Catch up
-        while ((ip>anchor) && (ref > lowLimit) && unlikely(ip[-1]==ref[-1])) { ip--; ref--; }
+        /* Catch up */
+        while ((ip>anchor) && (ref > lowLimit) && (unlikely(ip[-1]==ref[-1]))) { ip--; ref--; }
 
-        // Encode Literal length
+        /* Encode Literal length */
         length = (int)(ip - anchor);
         token = op++;
-        if ((limitedOutput) && unlikely(op + length + (2 + 1 + LASTLITERALS) + (length/255) > oend)) return 0;   // Check output limit
+        if ((limitedOutput) && (unlikely(op + length + (2 + 1 + LASTLITERALS) + (length/255) > oend))) return 0;   /* Check output limit */
         if (length>=(int)RUN_MASK)
         {
             int len = length-RUN_MASK;
@@ -479,17 +485,17 @@ FORCE_INLINE int LZ4_compress_generic(
         }
         else *token = (BYTE)(length<<ML_BITS);
 
-        // Copy Literals
+        /* Copy Literals */
         { BYTE* end=(op)+(length); LZ4_WILDCOPY(op,anchor,end); op=end; }
 
 _next_match:
-        // Encode Offset
+        /* Encode Offset */
         LZ4_WRITE_LITTLEENDIAN_16(op,(U16)(ip-ref));
 
-        // Start Counting
-        ip+=MINMATCH; ref+=MINMATCH;    // MinMatch already verified
+        /* Start Counting */
+        ip+=MINMATCH; ref+=MINMATCH;    /* MinMatch already verified */
         anchor = ip;
-        while likely(ip<matchlimit-(STEPSIZE-1))
+        while (likely(ip<matchlimit-(STEPSIZE-1)))
         {
             size_t diff = AARCH(ref) ^ AARCH(ip);
             if (!diff) { ip+=STEPSIZE; ref+=STEPSIZE; continue; }
@@ -501,9 +507,9 @@ _next_match:
         if ((ip<matchlimit) && (*ref == *ip)) ip++;
 _endCount:
 
-        // Encode MatchLength
+        /* Encode MatchLength */
         length = (int)(ip - anchor);
-        if ((limitedOutput) && unlikely(op + (1 + LASTLITERALS) + (length>>8) > oend)) return 0;    // Check output limit
+        if ((limitedOutput) && (unlikely(op + (1 + LASTLITERALS) + (length>>8) > oend))) return 0;    /* Check output limit */
         if (length>=(int)ML_MASK)
         {
             *token += ML_MASK;
@@ -514,34 +520,34 @@ _endCount:
         }
         else *token += (BYTE)(length);
 
-        // Test end of chunk
+        /* Test end of chunk */
         if (ip > mflimit) { anchor = ip;  break; }
 
-        // Fill table
+        /* Fill table */
         LZ4_putPosition(ip-2, ctx, tableType, base);
 
-        // Test next position
+        /* Test next position */
         ref = LZ4_getPosition(ip, ctx, tableType, base);
         LZ4_putPosition(ip, ctx, tableType, base);
         if ((ref + MAX_DISTANCE >= ip) && (A32(ref) == A32(ip))) { token = op++; *token=0; goto _next_match; }
 
-        // Prepare next loop
+        /* Prepare next loop */
         anchor = ip++;
         forwardH = LZ4_hashPosition(ip, tableType);
     }
 
 _last_literals:
-    // Encode Last Literals
+    /* Encode Last Literals */
     {
         int lastRun = (int)(iend - anchor);
-        if ((limitedOutput) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0;   // Check output limit
+        if ((limitedOutput) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0;   /* Check output limit */
         if (lastRun>=(int)RUN_MASK) { *op++=(RUN_MASK<<ML_BITS); lastRun-=RUN_MASK; for(; lastRun >= 255 ; lastRun-=255) *op++ = 255; *op++ = (BYTE) lastRun; }
         else *op++ = (BYTE)(lastRun<<ML_BITS);
         memcpy(op, anchor, iend - anchor);
         op += iend-anchor;
     }
 
-    // End
+    /* End */
     return (int) (((char*)op)-dest);
 }
 
@@ -549,9 +555,9 @@ _last_literals:
 int LZ4_compress(const char* source, char* dest, int inputSize)
 {
 #if (HEAPMODE)
-    void* ctx = ALLOCATOR(HASHNBCELLS4, 4);   // Aligned on 4-bytes boundaries
+    void* ctx = ALLOCATOR(HASHNBCELLS4, 4);   /* Aligned on 4-bytes boundaries */
 #else
-    U32 ctx[1U<<(MEMORY_USAGE-2)] = {0};      // Ensure data is aligned on 4-bytes boundaries
+    U32 ctx[1U<<(MEMORY_USAGE-2)] = {0};      /* Ensure data is aligned on 4-bytes boundaries */
 #endif
     int result;
 
@@ -569,9 +575,9 @@ int LZ4_compress(const char* source, char* dest, int inputSize)
 int LZ4_compress_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize)
 {
 #if (HEAPMODE)
-    void* ctx = ALLOCATOR(HASHNBCELLS4, 4);   // Aligned on 4-bytes boundaries
+    void* ctx = ALLOCATOR(HASHNBCELLS4, 4);   /* Aligned on 4-bytes boundaries */
 #else
-    U32 ctx[1U<<(MEMORY_USAGE-2)] = {0};      // Ensure data is aligned on 4-bytes boundaries
+    U32 ctx[1U<<(MEMORY_USAGE-2)] = {0};      /* Ensure data is aligned on 4-bytes boundaries */
 #endif
     int result;
 
@@ -587,16 +593,16 @@ int LZ4_compress_limitedOutput(const char* source, char* dest, int inputSize, in
 }
 
 
-//*****************************
-// Using an external allocation
-//*****************************
+/*****************************
+   Using external allocation
+*****************************/
 
 int LZ4_sizeofState() { return 1 << MEMORY_USAGE; }
 
 
 int LZ4_compress_withState (void* state, const char* source, char* dest, int inputSize)
 {
-    if (((size_t)(state)&3) != 0) return 0;   // Error : state is not aligned on 4-bytes boundary
+    if (((size_t)(state)&3) != 0) return 0;   /* Error : state is not aligned on 4-bytes boundary */
     MEM_INIT(state, 0, LZ4_sizeofState());
 
     if (inputSize < (int)LZ4_64KLIMIT)
@@ -608,7 +614,7 @@ int LZ4_compress_withState (void* state, const char* source, char* dest, int inp
 
 int LZ4_compress_limitedOutput_withState (void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
 {
-    if (((size_t)(state)&3) != 0) return 0;   // Error : state is not aligned on 4-bytes boundary
+    if (((size_t)(state)&3) != 0) return 0;   /* Error : state is not aligned on 4-bytes boundary */
     MEM_INIT(state, 0, LZ4_sizeofState());
 
     if (inputSize < (int)LZ4_64KLIMIT)
@@ -618,9 +624,9 @@ int LZ4_compress_limitedOutput_withState (void* state, const char* source, char*
 }
 
 
-//****************************
-// Stream functions
-//****************************
+/****************************
+   Stream functions
+****************************/
 
 int LZ4_sizeofStreamState()
 {
@@ -637,7 +643,7 @@ FORCE_INLINE void LZ4_init(LZ4_Data_Structure* lz4ds, const BYTE* base)
 
 int LZ4_resetStreamState(void* state, const char* inputBuffer)
 {
-    if ((((size_t)state) & 3) != 0) return 1;   // Error : pointer is not aligned on 4-bytes boundary
+    if ((((size_t)state) & 3) != 0) return 1;   /* Error : pointer is not aligned on 4-bytes boundary */
     LZ4_init((LZ4_Data_Structure*)state, (const BYTE*)inputBuffer);
     return 0;
 }
@@ -662,8 +668,8 @@ char* LZ4_slideInputBuffer (void* LZ4_Data)
     LZ4_Data_Structure* lz4ds = (LZ4_Data_Structure*)LZ4_Data;
     size_t delta = lz4ds->nextBlock - (lz4ds->bufferStart + 64 KB);
 
-    if ( (lz4ds->base - delta > lz4ds->base)                          // underflow control
-       || ((size_t)(lz4ds->nextBlock - lz4ds->base) > 0xE0000000) )   // close to 32-bits limit
+    if ( (lz4ds->base - delta > lz4ds->base)                          /* underflow control */
+       || ((size_t)(lz4ds->nextBlock - lz4ds->base) > 0xE0000000) )   /* close to 32-bits limit */
     {
         size_t deltaLimit = (lz4ds->nextBlock - 64 KB) - lz4ds->base;
         int nH;
@@ -700,27 +706,29 @@ int LZ4_compress_limitedOutput_continue (void* LZ4_Data, const char* source, cha
 }
 
 
-//****************************
-// Decompression functions
-//****************************
+/****************************
+   Decompression functions
+****************************/
 
-// This generic decompression function cover all use cases.
-// It shall be instanciated several times, using different sets of directives
-// Note that it is essential this generic function is really inlined,
-// in order to remove useless branches during compilation optimisation.
+/*
+ * This generic decompression function covers all use cases.
+ * It shall be instantiated several times, using different sets of directives.
+ * Note that it is essential this generic function is really inlined,
+ * in order to remove useless branches during compiler optimisation.
+ */
 FORCE_INLINE int LZ4_decompress_generic(
                  const char* source,
                  char* dest,
-                 int inputSize,          //
-                 int outputSize,         // If endOnInput==endOnInputSize, this value is the max size of Output Buffer.
+                 int inputSize,
+                 int outputSize,         /* If endOnInput==endOnInputSize, this value is the max size of Output Buffer. */
 
-                 int endOnInput,         // endOnOutputSize, endOnInputSize
-                 int prefix64k,          // noPrefix, withPrefix
-                 int partialDecoding,    // full, partial
-                 int targetOutputSize    // only used if partialDecoding==partial
+                 int endOnInput,         /* endOnOutputSize, endOnInputSize */
+                 int prefix64k,          /* noPrefix, withPrefix */
+                 int partialDecoding,    /* full, partial */
+                 int targetOutputSize    /* only used if partialDecoding==partial */
                  )
 {
-    // Local Variables
+    /* Local Variables */
     const BYTE* restrict ip = (const BYTE*) source;
     const BYTE* ref;
     const BYTE* const iend = ip + inputSize;
@@ -730,23 +738,24 @@ FORCE_INLINE int LZ4_decompress_generic(
     BYTE* cpy;
     BYTE* oexit = op + targetOutputSize;
 
-    const size_t dec32table[] = {0, 3, 2, 3, 0, 0, 0, 0};   // static reduces speed for LZ4_decompress_safe() on GCC64
+    /*const size_t dec32table[] = {0, 3, 2, 3, 0, 0, 0, 0};   / static reduces speed for LZ4_decompress_safe() on GCC64 */
+    const size_t dec32table[] = {4-0, 4-3, 4-2, 4-3, 4-0, 4-0, 4-0, 4-0};   /* static reduces speed for LZ4_decompress_safe() on GCC64 */
     static const size_t dec64table[] = {0, 0, 0, (size_t)-1, 0, 1, 2, 3};
 
 
-    // Special cases
-    if ((partialDecoding) && (oexit> oend-MFLIMIT)) oexit = oend-MFLIMIT;                        // targetOutputSize too high => decode everything
-    if ((endOnInput) && unlikely(outputSize==0)) return ((inputSize==1) && (*ip==0)) ? 0 : -1;   // Empty output buffer
-    if ((!endOnInput) && unlikely(outputSize==0)) return (*ip==0?1:-1);
+    /* Special cases */
+    if ((partialDecoding) && (oexit> oend-MFLIMIT)) oexit = oend-MFLIMIT;                        /* targetOutputSize too high => decode everything */
+    if ((endOnInput) && (unlikely(outputSize==0))) return ((inputSize==1) && (*ip==0)) ? 0 : -1;   /* Empty output buffer */
+    if ((!endOnInput) && (unlikely(outputSize==0))) return (*ip==0?1:-1);
 
 
-    // Main Loop
+    /* Main Loop */
     while (1)
     {
         unsigned token;
         size_t length;
 
-        // get runlength
+        /* get runlength */
         token = *ip++;
         if ((length=(token>>ML_BITS)) == RUN_MASK)
         {
@@ -758,36 +767,36 @@ FORCE_INLINE int LZ4_decompress_generic(
             }
         }
 
-        // copy literals
+        /* copy literals */
         cpy = op+length;
         if (((endOnInput) && ((cpy>(partialDecoding?oexit:oend-MFLIMIT)) || (ip+length>iend-(2+1+LASTLITERALS))) )
             || ((!endOnInput) && (cpy>oend-COPYLENGTH)))
         {
             if (partialDecoding)
             {
-                if (cpy > oend) goto _output_error;                           // Error : write attempt beyond end of output buffer
-                if ((endOnInput) && (ip+length > iend)) goto _output_error;   // Error : read attempt beyond end of input buffer
+                if (cpy > oend) goto _output_error;                           /* Error : write attempt beyond end of output buffer */
+                if ((endOnInput) && (ip+length > iend)) goto _output_error;   /* Error : read attempt beyond end of input buffer */
             }
             else
             {
-                if ((!endOnInput) && (cpy != oend)) goto _output_error;       // Error : block decoding must stop exactly there
-                if ((endOnInput) && ((ip+length != iend) || (cpy > oend))) goto _output_error;   // Error : input must be consumed
+                if ((!endOnInput) && (cpy != oend)) goto _output_error;       /* Error : block decoding must stop exactly there */
+                if ((endOnInput) && ((ip+length != iend) || (cpy > oend))) goto _output_error;   /* Error : input must be consumed */
             }
             memcpy(op, ip, length);
             ip += length;
             op += length;
-            break;                                       // Necessarily EOF, due to parsing restrictions
+            break;                                       /* Necessarily EOF, due to parsing restrictions */
         }
         LZ4_WILDCOPY(op, ip, cpy); ip -= (op-cpy); op = cpy;
 
-        // get offset
+        /* get offset */
         LZ4_READ_LITTLEENDIAN_16(ref,cpy,ip); ip+=2;
-        if ((prefix64k==noPrefix) && unlikely(ref < (BYTE* const)dest)) goto _output_error;   // Error : offset outside destination buffer
+        if ((prefix64k==noPrefix) && (unlikely(ref < (BYTE* const)dest))) goto _output_error;   /* Error : offset outside destination buffer */
 
-        // get matchlength
+        /* get matchlength */
         if ((length=(token&ML_MASK)) == ML_MASK)
         {
-            while ((!endOnInput) || (ip<iend-(LASTLITERALS+1)))   // Ensure enough bytes remain for LASTLITERALS + token
+            while ((!endOnInput) || (ip<iend-(LASTLITERALS+1)))   /* Ensure enough bytes remain for LASTLITERALS + token */
             {
                 unsigned s = *ip++;
                 length += s;
@@ -796,39 +805,42 @@ FORCE_INLINE int LZ4_decompress_generic(
             }
         }
 
-        // copy repeated sequence
-        if unlikely((op-ref)<(int)STEPSIZE)
+        /* copy repeated sequence */
+        if (unlikely((op-ref)<(int)STEPSIZE))
         {
             const size_t dec64 = dec64table[(sizeof(void*)==4) ? 0 : op-ref];
             op[0] = ref[0];
             op[1] = ref[1];
             op[2] = ref[2];
             op[3] = ref[3];
-            op += 4, ref += 4; ref -= dec32table[op-ref];
+            /*op += 4, ref += 4; ref -= dec32table[op-ref];
             A32(op) = A32(ref);
-            op += STEPSIZE-4; ref -= dec64;
+            op += STEPSIZE-4; ref -= dec64;*/
+            ref += dec32table[op-ref];
+            A32(op+4) = A32(ref);
+            op += STEPSIZE; ref -= dec64;
         } else { LZ4_COPYSTEP(op,ref); }
         cpy = op + length - (STEPSIZE-4);
 
-        if unlikely(cpy>oend-COPYLENGTH-(STEPSIZE-4))
+        if (unlikely(cpy>oend-COPYLENGTH-(STEPSIZE-4)))
         {
-            if (cpy > oend-LASTLITERALS) goto _output_error;    // Error : last 5 bytes must be literals
+            if (cpy > oend-LASTLITERALS) goto _output_error;    /* Error : last 5 bytes must be literals */
             LZ4_SECURECOPY(op, ref, (oend-COPYLENGTH));
             while(op<cpy) *op++=*ref++;
             op=cpy;
             continue;
         }
         LZ4_WILDCOPY(op, ref, cpy);
-        op=cpy;   // correction
+        op=cpy;   /* correction */
     }
 
-    // end of decoding
+    /* end of decoding */
     if (endOnInput)
-       return (int) (((char*)op)-dest);     // Nb of output bytes decoded
+       return (int) (((char*)op)-dest);     /* Nb of output bytes decoded */
     else
-       return (int) (((char*)ip)-source);   // Nb of input bytes read
+       return (int) (((char*)ip)-source);   /* Nb of input bytes read */
 
-    // Overflow error detected
+    /* Overflow error detected */
 _output_error:
     return (int) (-(((char*)ip)-source))-1;
 }
@@ -856,7 +868,7 @@ int LZ4_decompress_fast_withPrefix64k(const char* source, char* dest, int output
 
 int LZ4_decompress_fast(const char* source, char* dest, int outputSize)
 {
-#ifdef _MSC_VER   // This version is faster with Visual
+#ifdef _MSC_VER   /* This version is faster with Visual */
     return LZ4_decompress_generic(source, dest, 0, outputSize, endOnOutputSize, noPrefix, full, 0);
 #else
     return LZ4_decompress_generic(source, dest, 0, outputSize, endOnOutputSize, withPrefix, full, 0);
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4.h b/c-blosc/internal-complibs/lz4-r113/lz4.h
similarity index 91%
rename from c-blosc/internal-complibs/lz4-r110/lz4.h
rename to c-blosc/internal-complibs/lz4-r113/lz4.h
index 33fcbe4..fd35db5 100644
--- a/c-blosc/internal-complibs/lz4-r110/lz4.h
+++ b/c-blosc/internal-complibs/lz4-r113/lz4.h
@@ -37,21 +37,26 @@
 extern "C" {
 #endif
 
-// The next is for getting the LZ4 version.
-// Please note that this is only defined in the Blosc sources of LZ4.
-#define LZ4_VERSION_STRING "r110"
-
-//**************************************
-// Compiler Options
-//**************************************
-#if defined(_MSC_VER) && !defined(__cplusplus)   // Visual Studio
-#  define inline __inline           // Visual C is not C99, but supports some kind of inline
+
+/**************************************
+   Version
+**************************************/
+#define LZ4_VERSION_MAJOR    1    /* for major interface/format changes  */
+#define LZ4_VERSION_MINOR    1    /* for minor interface/format changes  */
+#define LZ4_VERSION_RELEASE  3    /* for tweaks, bug-fixes, or development */
+
+
+/**************************************
+   Compiler Options
+**************************************/
+#if (defined(__GNUC__) && defined(__STRICT_ANSI__)) || (defined(_MSC_VER) && !defined(__cplusplus))   /* Visual Studio */
+#  define inline __inline           /* Visual C is not C99, but supports some kind of inline */
 #endif
 
 
-//****************************
-// Simple Functions
-//****************************
+/**************************************
+   Simple Functions
+**************************************/
 
 int LZ4_compress        (const char* source, char* dest, int inputSize);
 int LZ4_decompress_safe (const char* source, char* dest, int inputSize, int maxOutputSize);
@@ -74,10 +79,10 @@ LZ4_decompress_safe() :
 */
 
 
-//****************************
-// Advanced Functions
-//****************************
-#define LZ4_MAX_INPUT_SIZE        0x7E000000   // 2 113 929 216 bytes
+/**************************************
+   Advanced Functions
+**************************************/
+#define LZ4_MAX_INPUT_SIZE        0x7E000000   /* 2 113 929 216 bytes */
 #define LZ4_COMPRESSBOUND(isize)  ((unsigned int)(isize) > (unsigned int)LZ4_MAX_INPUT_SIZE ? 0 : (isize) + ((isize)/255) + 16)
 static inline int LZ4_compressBound(int isize)  { return LZ4_COMPRESSBOUND(isize); }
 
@@ -138,9 +143,6 @@ LZ4_decompress_safe_partial() :
 */
 
 
-//*****************************
-// Using an external allocation
-//*****************************
 int LZ4_sizeofState();
 int LZ4_compress_withState               (void* state, const char* source, char* dest, int inputSize);
 int LZ4_compress_limitedOutput_withState (void* state, const char* source, char* dest, int inputSize, int maxOutputSize);
@@ -158,10 +160,9 @@ They just use the externally allocated memory area instead of allocating their o
 */
 
 
-//****************************
-// Streaming Functions
-//****************************
-
+/**************************************
+   Streaming Functions
+**************************************/
 void* LZ4_create (const char* inputBuffer);
 int   LZ4_compress_continue (void* LZ4_Data, const char* source, char* dest, int inputSize);
 int   LZ4_compress_limitedOutput_continue (void* LZ4_Data, const char* source, char* dest, int inputSize, int maxOutputSize);
@@ -233,17 +234,15 @@ int LZ4_decompress_fast_withPrefix64k (const char* source, char* dest, int outpu
 */
 
 
-//****************************
-// Obsolete Functions
-//****************************
-
-static inline int LZ4_uncompress (const char* source, char* dest, int outputSize) { return LZ4_decompress_fast(source, dest, outputSize); }
-static inline int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize) { return LZ4_decompress_safe(source, dest, isize, maxOutputSize); }
-
+/**************************************
+   Obsolete Functions
+**************************************/
 /*
 These functions are deprecated and should no longer be used.
 They are provided here for compatibility with existing user programs.
 */
+static inline int LZ4_uncompress (const char* source, char* dest, int outputSize) { return LZ4_decompress_fast(source, dest, outputSize); }
+static inline int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize) { return LZ4_decompress_safe(source, dest, isize, maxOutputSize); }
 
 
 
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4hc.c b/c-blosc/internal-complibs/lz4-r113/lz4hc.c
similarity index 74%
rename from c-blosc/internal-complibs/lz4-r110/lz4hc.c
rename to c-blosc/internal-complibs/lz4-r113/lz4hc.c
index f28283f..e84de2b 100644
--- a/c-blosc/internal-complibs/lz4-r110/lz4hc.c
+++ b/c-blosc/internal-complibs/lz4-r113/lz4hc.c
@@ -1,6 +1,6 @@
 /*
    LZ4 HC - High Compression Mode of LZ4
-   Copyright (C) 2011-2013, Yann Collet.
+   Copyright (C) 2011-2014, Yann Collet.
    BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 
    Redistribution and use in source and binary forms, with or without
@@ -31,31 +31,41 @@
    - LZ4 source repository : http://code.google.com/p/lz4/
 */
 
-//**************************************
-// Memory routines
-//**************************************
-#include <stdlib.h>   // calloc, free
+
+
+/**************************************
+   Tuning Parameter
+**************************************/
+#define LZ4HC_DEFAULT_COMPRESSIONLEVEL 8
+
+
+/**************************************
+   Memory routines
+**************************************/
+#include <stdlib.h>   /* calloc, free */
 #define ALLOCATOR(s)  calloc(1,s)
 #define FREEMEM       free
-#include <string.h>   // memset, memcpy
+#include <string.h>   /* memset, memcpy */
 #define MEM_INIT      memset
 
 
-//**************************************
-// CPU Feature Detection
-//**************************************
-// 32 or 64 bits ?
+/**************************************
+   CPU Feature Detection
+**************************************/
+/* 32 or 64 bits ? */
 #if (defined(__x86_64__) || defined(_M_X64) || defined(_WIN64) \
   || defined(__powerpc64__) || defined(__ppc64__) || defined(__PPC64__) \
   || defined(__64BIT__) || defined(_LP64) || defined(__LP64__) \
-  || defined(__ia64) || defined(__itanium__) || defined(_M_IA64) )   // Detects 64 bits mode
+  || defined(__ia64) || defined(__itanium__) || defined(_M_IA64) )   /* Detects 64 bits mode */
 #  define LZ4_ARCH64 1
 #else
 #  define LZ4_ARCH64 0
 #endif
 
-// Little Endian or Big Endian ?
-// Overwrite the #define below if you know your architecture endianess
+/*
+ * Little Endian or Big Endian ?
+ * Overwrite the #define below if you know your architecture endianess
+ */
 #if defined (__GLIBC__)
 #  include <endian.h>
 #  if (__BYTE_ORDER == __BIG_ENDIAN)
@@ -69,43 +79,45 @@
    || defined(_MIPSEB) || defined(__s390__)
 #  define LZ4_BIG_ENDIAN 1
 #else
-// Little Endian assumed. PDP Endian and other very rare endian format are unsupported.
+/* Little Endian assumed. PDP Endian and other very rare endian format are unsupported. */
 #endif
 
-// Unaligned memory access is automatically enabled for "common" CPU, such as x86.
-// For others CPU, the compiler will be more cautious, and insert extra code to ensure aligned access is respected
-// If you know your target CPU supports unaligned memory access, you want to force this option manually to improve performance
+/*
+ * Unaligned memory access is automatically enabled for "common" CPU, such as x86.
+ * For others CPU, the compiler will be more cautious, and insert extra code to ensure aligned access is respected
+ * If you know your target CPU supports unaligned memory access, you want to force this option manually to improve performance
+ */
 #if defined(__ARM_FEATURE_UNALIGNED)
 #  define LZ4_FORCE_UNALIGNED_ACCESS 1
 #endif
 
-// Define this parameter if your target system or compiler does not support hardware bit count
-#if defined(_MSC_VER) && defined(_WIN32_WCE)            // Visual Studio for Windows CE does not support Hardware bit count
+/* Define this parameter if your target system or compiler does not support hardware bit count */
+#if defined(_MSC_VER) && defined(_WIN32_WCE)            /* Visual Studio for Windows CE does not support Hardware bit count */
 #  define LZ4_FORCE_SW_BITCOUNT
 #endif
 
 
-//**************************************
-// Compiler Options
-//**************************************
-#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   // C99
-  /* "restrict" is a known keyword */
+/**************************************
+ Compiler Options
+**************************************/
+#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)   /* C99 */
+/* "restrict" is a known keyword */
 #else
-#  define restrict  // Disable restrict
+#  define restrict /* Disable restrict */
 #endif
 
-#ifdef _MSC_VER    // Visual Studio
+#ifdef _MSC_VER    /* Visual Studio */
 #  define FORCE_INLINE static __forceinline
-#  include <intrin.h>                    // For Visual 2005
-#  if LZ4_ARCH64   // 64-bits
-#    pragma intrinsic(_BitScanForward64) // For Visual 2005
-#    pragma intrinsic(_BitScanReverse64) // For Visual 2005
-#  else            // 32-bits
-#    pragma intrinsic(_BitScanForward)   // For Visual 2005
-#    pragma intrinsic(_BitScanReverse)   // For Visual 2005
+#  include <intrin.h>                    /* For Visual 2005 */
+#  if LZ4_ARCH64   /* 64-bits */
+#    pragma intrinsic(_BitScanForward64) /* For Visual 2005 */
+#    pragma intrinsic(_BitScanReverse64) /* For Visual 2005 */
+#  else            /* 32-bits */
+#    pragma intrinsic(_BitScanForward)   /* For Visual 2005 */
+#    pragma intrinsic(_BitScanReverse)   /* For Visual 2005 */
 #  endif
-#  pragma warning(disable : 4127)        // disable: C4127: conditional expression is constant
-#  pragma warning(disable : 4701)        // disable: C4701: potentially uninitialized local variable used
+#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
+#  pragma warning(disable : 4701)        /* disable: C4701: potentially uninitialized local variable used */
 #else
 #  ifdef __GNUC__
 #    define FORCE_INLINE static inline __attribute__((always_inline))
@@ -114,24 +126,24 @@
 #  endif
 #endif
 
-#ifdef _MSC_VER  // Visual Studio
+#ifdef _MSC_VER  /* Visual Studio */
 #  define lz4_bswap16(x) _byteswap_ushort(x)
 #else
 #  define lz4_bswap16(x)  ((unsigned short int) ((((x) >> 8) & 0xffu) | (((x) & 0xffu) << 8)))
 #endif
 
 
-//**************************************
-// Includes
-//**************************************
+/**************************************
+   Includes
+**************************************/
 #include "lz4hc.h"
 #include "lz4.h"
 
 
-//**************************************
-// Basic Types
-//**************************************
-#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   // C99
+/**************************************
+   Basic Types
+**************************************/
+#if defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)   /* C99 */
 # include <stdint.h>
   typedef uint8_t  BYTE;
   typedef uint16_t U16;
@@ -173,9 +185,9 @@ typedef struct _U64_S { U64 v; } _PACKED U64_S;
 #define A16(x) (((U16_S *)(x))->v)
 
 
-//**************************************
-// Constants
-//**************************************
+/**************************************
+   Constants
+**************************************/
 #define MINMATCH 4
 
 #define DICTIONARY_LOGSIZE 16
@@ -187,8 +199,6 @@ typedef struct _U64_S { U64 v; } _PACKED U64_S;
 #define HASHTABLESIZE (1 << HASH_LOG)
 #define HASH_MASK (HASHTABLESIZE - 1)
 
-#define MAX_NB_ATTEMPTS 256
-
 #define ML_BITS  4
 #define ML_MASK  (size_t)((1U<<ML_BITS)-1)
 #define RUN_BITS (8-ML_BITS)
@@ -205,25 +215,21 @@ typedef struct _U64_S { U64 v; } _PACKED U64_S;
 #define GB *(1U<<30)
 
 
-//**************************************
-// Architecture-specific macros
-//**************************************
-#if LZ4_ARCH64   // 64-bit
+/**************************************
+   Architecture-specific macros
+**************************************/
+#if LZ4_ARCH64   /* 64-bit */
 #  define STEPSIZE 8
 #  define LZ4_COPYSTEP(s,d)     A64(d) = A64(s); d+=8; s+=8;
 #  define LZ4_COPYPACKET(s,d)   LZ4_COPYSTEP(s,d)
-#  define UARCH U64
 #  define AARCH A64
 #  define HTYPE                 U32
 #  define INITBASE(b,s)         const BYTE* const b = s
-#else   // 32-bit
+#else            /* 32-bit */
 #  define STEPSIZE 4
 #  define LZ4_COPYSTEP(s,d)     A32(d) = A32(s); d+=4; s+=4;
 #  define LZ4_COPYPACKET(s,d)   LZ4_COPYSTEP(s,d); LZ4_COPYSTEP(s,d);
-#  define UARCH U32
 #  define AARCH A32
-//#  define HTYPE                 const BYTE*
-//#  define INITBASE(b,s)         const int b = 0
 #  define HTYPE                 U32
 #  define INITBASE(b,s)         const BYTE* const b = s
 #endif
@@ -231,15 +237,15 @@ typedef struct _U64_S { U64 v; } _PACKED U64_S;
 #if defined(LZ4_BIG_ENDIAN)
 #  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { U16 v = A16(p); v = lz4_bswap16(v); d = (s) - v; }
 #  define LZ4_WRITE_LITTLEENDIAN_16(p,i)  { U16 v = (U16)(i); v = lz4_bswap16(v); A16(p) = v; p+=2; }
-#else   // Little Endian
+#else      /* Little Endian */
 #  define LZ4_READ_LITTLEENDIAN_16(d,s,p) { d = (s) - A16(p); }
 #  define LZ4_WRITE_LITTLEENDIAN_16(p,v)  { A16(p) = v; p+=2; }
 #endif
 
 
-//************************************************************
-// Local Types
-//************************************************************
+/**************************************
+   Local Types
+**************************************/
 typedef struct
 {
     const BYTE* inputBuffer;
@@ -251,9 +257,9 @@ typedef struct
 } LZ4HC_Data_Structure;
 
 
-//**************************************
-// Macros
-//**************************************
+/**************************************
+   Macros
+**************************************/
 #define LZ4_WILDCOPY(s,d,e)    do { LZ4_COPYPACKET(s,d) } while (d<e);
 #define LZ4_BLINDCOPY(s,d,l)   { BYTE* e=d+l; LZ4_WILDCOPY(s,d,e); d=e; }
 #define HASH_FUNCTION(i)       (((i) * 2654435761U) >> ((MINMATCH*8)-HASH_LOG))
@@ -263,9 +269,9 @@ typedef struct
 #define GETNEXT(p)             ((p) - (size_t)DELTANEXT(p))
 
 
-//**************************************
-// Private functions
-//**************************************
+/**************************************
+ Private functions
+**************************************/
 #if LZ4_ARCH64
 
 FORCE_INLINE int LZ4_NbCommonBytes (register U64 val)
@@ -349,7 +355,7 @@ FORCE_INLINE void LZ4_initHC (LZ4HC_Data_Structure* hc4, const BYTE* base)
 
 int LZ4_resetStreamStateHC(void* state, const char* inputBuffer)
 {
-    if ((((size_t)state) & (sizeof(void*)-1)) != 0) return 1;   // Error : pointer is not aligned for pointer (32 or 64 bits)
+    if ((((size_t)state) & (sizeof(void*)-1)) != 0) return 1;   /* Error : pointer is not aligned for pointer (32 or 64 bits) */
     LZ4_initHC((LZ4HC_Data_Structure*)state, (const BYTE*)inputBuffer);
     return 0;
 }
@@ -370,7 +376,7 @@ int LZ4_freeHC (void* LZ4HC_Data)
 }
 
 
-// Update chains up to ip (excluded)
+/* Update chains up to ip (excluded) */
 FORCE_INLINE void LZ4HC_Insert (LZ4HC_Data_Structure* hc4, const BYTE* ip)
 {
     U16*   chainTable = hc4->chainTable;
@@ -393,12 +399,12 @@ char* LZ4_slideInputBufferHC(void* LZ4HC_Data)
 {
     LZ4HC_Data_Structure* hc4 = (LZ4HC_Data_Structure*)LZ4HC_Data;
     U32 distance = (U32)(hc4->end - hc4->inputBuffer) - 64 KB;
-    distance = (distance >> 16) << 16;   // Must be a multiple of 64 KB
+    distance = (distance >> 16) << 16;   /* Must be a multiple of 64 KB */
     LZ4HC_Insert(hc4, hc4->end - MINMATCH);
     memcpy((void*)(hc4->end - 64 KB - distance), (const void*)(hc4->end - 64 KB), 64 KB);
     hc4->nextToUpdate -= distance;
     hc4->base -= distance;
-    if ((U32)(hc4->inputBuffer - hc4->base) > 1 GB + 64 KB)   // Avoid overflow
+    if ((U32)(hc4->inputBuffer - hc4->base) > 1 GB + 64 KB)   /* Avoid overflow */
     {
         int i;
         hc4->base += 1 GB;
@@ -415,7 +421,7 @@ FORCE_INLINE size_t LZ4HC_CommonLength (const BYTE* p1, const BYTE* p2, const BY
 
     while (p1t<matchlimit-(STEPSIZE-1))
     {
-        UARCH diff = AARCH(p2) ^ AARCH(p1t);
+        size_t diff = AARCH(p2) ^ AARCH(p1t);
         if (!diff) { p1t+=STEPSIZE; p2+=STEPSIZE; continue; }
         p1t += LZ4_NbCommonBytes(diff);
         return (p1t - p1);
@@ -427,26 +433,26 @@ FORCE_INLINE size_t LZ4HC_CommonLength (const BYTE* p1, const BYTE* p2, const BY
 }
 
 
-FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_Data_Structure* hc4, const BYTE* ip, const BYTE* const matchlimit, const BYTE** matchpos)
+FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_Data_Structure* hc4, const BYTE* ip, const BYTE* const matchlimit, const BYTE** matchpos, const int maxNbAttempts)
 {
     U16* const chainTable = hc4->chainTable;
     HTYPE* const HashTable = hc4->hashTable;
     const BYTE* ref;
     INITBASE(base,hc4->base);
-    int nbAttempts=MAX_NB_ATTEMPTS;
+    int nbAttempts=maxNbAttempts;
     size_t repl=0, ml=0;
-    U16 delta=0;  // useless assignment, to remove an uninitialization warning
+    U16 delta=0;  /* useless assignment, to remove an uninitialization warning */
 
-    // HC4 match finder
+    /* HC4 match finder */
     LZ4HC_Insert(hc4, ip);
     ref = HASH_POINTER(ip);
 
 #define REPEAT_OPTIMIZATION
 #ifdef REPEAT_OPTIMIZATION
-    // Detect repetitive sequences of length <= 4
-    if ((U32)(ip-ref) <= 4)        // potential repetition
+    /* Detect repetitive sequences of length <= 4 */
+    if ((U32)(ip-ref) <= 4)        /* potential repetition */
     {
-        if (A32(ref) == A32(ip))   // confirmed
+        if (A32(ref) == A32(ip))   /* confirmed */
         {
             delta = (U16)(ip-ref);
             repl = ml  = LZ4HC_CommonLength(ip+MINMATCH, ref+MINMATCH, matchlimit) + MINMATCH;
@@ -469,7 +475,7 @@ FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_Data_Structure* hc4, const
     }
 
 #ifdef REPEAT_OPTIMIZATION
-    // Complete table
+    /* Complete table */
     if (repl)
     {
         const BYTE* ptr = ip;
@@ -478,13 +484,13 @@ FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_Data_Structure* hc4, const
         end = ip + repl - (MINMATCH-1);
         while(ptr < end-delta)
         {
-            DELTANEXT(ptr) = delta;    // Pre-Load
+            DELTANEXT(ptr) = delta;    /* Pre-Load */
             ptr++;
         }
         do
         {
             DELTANEXT(ptr) = delta;
-            HashTable[HASH_VALUE(ptr)] = (HTYPE)((ptr) - base);     // Head of chain
+            HashTable[HASH_VALUE(ptr)] = (HTYPE)((ptr) - base);     /* Head of chain */
             ptr++;
         } while(ptr < end);
         hc4->nextToUpdate = end;
@@ -495,16 +501,16 @@ FORCE_INLINE int LZ4HC_InsertAndFindBestMatch (LZ4HC_Data_Structure* hc4, const
 }
 
 
-FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch (LZ4HC_Data_Structure* hc4, const BYTE* ip, const BYTE* startLimit, const BYTE* matchlimit, int longest, const BYTE** matchpos, const BYTE** startpos)
+FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch (LZ4HC_Data_Structure* hc4, const BYTE* ip, const BYTE* startLimit, const BYTE* matchlimit, int longest, const BYTE** matchpos, const BYTE** startpos, const int maxNbAttempts)
 {
     U16* const  chainTable = hc4->chainTable;
     HTYPE* const HashTable = hc4->hashTable;
     INITBASE(base,hc4->base);
     const BYTE*  ref;
-    int nbAttempts = MAX_NB_ATTEMPTS;
+    int nbAttempts = maxNbAttempts;
     int delta = (int)(ip-startLimit);
 
-    // First Match
+    /* First Match */
     LZ4HC_Insert(hc4, ip);
     ref = HASH_POINTER(ip);
 
@@ -521,7 +527,7 @@ FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch (LZ4HC_Data_Structure* hc4, const
 
             while (ipt<matchlimit-(STEPSIZE-1))
             {
-                UARCH diff = AARCH(reft) ^ AARCH(ipt);
+                size_t diff = AARCH(reft) ^ AARCH(ipt);
                 if (!diff) { ipt+=STEPSIZE; reft+=STEPSIZE; continue; }
                 ipt += LZ4_NbCommonBytes(diff);
                 goto _endCount;
@@ -532,7 +538,7 @@ FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch (LZ4HC_Data_Structure* hc4, const
 _endCount:
             reft = ref;
 #else
-            // Easier for code maintenance, but unfortunately slower too
+            /* Easier for code maintenance, but unfortunately slower too */
             const BYTE* startt = ip;
             const BYTE* reft = ref;
             const BYTE* ipt = ip + MINMATCH + LZ4HC_CommonLength(ip+MINMATCH, ref+MINMATCH, matchlimit);
@@ -568,26 +574,26 @@ FORCE_INLINE int LZ4HC_encodeSequence (
     int length;
     BYTE* token;
 
-    // Encode Literal length
+    /* Encode Literal length */
     length = (int)(*ip - *anchor);
     token = (*op)++;
-    if ((limitedOutputBuffer) && ((*op + length + (2 + 1 + LASTLITERALS) + (length>>8)) > oend)) return 1;   // Check output limit
+    if ((limitedOutputBuffer) && ((*op + length + (2 + 1 + LASTLITERALS) + (length>>8)) > oend)) return 1;   /* Check output limit */
     if (length>=(int)RUN_MASK) { int len; *token=(RUN_MASK<<ML_BITS); len = length-RUN_MASK; for(; len > 254 ; len-=255) *(*op)++ = 255;  *(*op)++ = (BYTE)len; }
     else *token = (BYTE)(length<<ML_BITS);
 
-    // Copy Literals
+    /* Copy Literals */
     LZ4_BLINDCOPY(*anchor, *op, length);
 
-    // Encode Offset
+    /* Encode Offset */
     LZ4_WRITE_LITTLEENDIAN_16(*op,(U16)(*ip-ref));
 
-    // Encode MatchLength
+    /* Encode MatchLength */
     length = (int)(matchLength-MINMATCH);
-    if ((limitedOutputBuffer) && (*op + (1 + LASTLITERALS) + (length>>8) > oend)) return 1;   // Check output limit
+    if ((limitedOutputBuffer) && (*op + (1 + LASTLITERALS) + (length>>8) > oend)) return 1;   /* Check output limit */
     if (length>=(int)ML_MASK) { *token+=ML_MASK; length-=ML_MASK; for(; length > 509 ; length-=510) { *(*op)++ = 255; *(*op)++ = 255; } if (length > 254) { length-=255; *(*op)++ = 255; } *(*op)++ = (BYTE)length; }
     else *token += (BYTE)(length);
 
-    // Prepare next loop
+    /* Prepare next loop */
     *ip += matchLength;
     *anchor = *ip;
 
@@ -595,12 +601,14 @@ FORCE_INLINE int LZ4HC_encodeSequence (
 }
 
 
+#define MAX_COMPRESSION_LEVEL 16
 static int LZ4HC_compress_generic (
                  void* ctxvoid,
                  const char* source,
                  char* dest,
                  int inputSize,
                  int maxOutputSize,
+                 int compressionLevel,
                  limitedOutput_directive limit
                 )
 {
@@ -614,6 +622,7 @@ static int LZ4HC_compress_generic (
     BYTE* op = (BYTE*) dest;
     BYTE* const oend = op + maxOutputSize;
 
+    const int maxNbAttempts = compressionLevel > MAX_COMPRESSION_LEVEL ? 1 << MAX_COMPRESSION_LEVEL : compressionLevel ? 1<<(compressionLevel-1) : 1<<LZ4HC_DEFAULT_COMPRESSIONLEVEL;
     int   ml, ml2, ml3, ml0;
     const BYTE* ref=NULL;
     const BYTE* start2=NULL;
@@ -624,29 +633,29 @@ static int LZ4HC_compress_generic (
     const BYTE* ref0;
 
 
-    // Ensure blocks follow each other
+    /* Ensure blocks follow each other */
     if (ip != ctx->end) return 0;
     ctx->end += inputSize;
 
     ip++;
 
-    // Main Loop
+    /* Main Loop */
     while (ip < mflimit)
     {
-        ml = LZ4HC_InsertAndFindBestMatch (ctx, ip, matchlimit, (&ref));
+        ml = LZ4HC_InsertAndFindBestMatch (ctx, ip, matchlimit, (&ref), maxNbAttempts);
         if (!ml) { ip++; continue; }
 
-        // saved, in case we would skip too much
+        /* saved, in case we would skip too much */
         start0 = ip;
         ref0 = ref;
         ml0 = ml;
 
 _Search2:
         if (ip+ml < mflimit)
-            ml2 = LZ4HC_InsertAndGetWiderMatch(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2);
+            ml2 = LZ4HC_InsertAndGetWiderMatch(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2, maxNbAttempts);
         else ml2 = ml;
 
-        if (ml2 == ml)  // No better match
+        if (ml2 == ml)  /* No better match */
         {
             if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) return 0;
             continue;
@@ -654,7 +663,7 @@ _Search2:
 
         if (start0 < ip)
         {
-            if (start2 < ip + ml0)   // empirical
+            if (start2 < ip + ml0)   /* empirical */
             {
                 ip = start0;
                 ref = ref0;
@@ -662,8 +671,8 @@ _Search2:
             }
         }
 
-        // Here, start0==ip
-        if ((start2 - ip) < 3)   // First Match too small : removed
+        /* Here, start0==ip */
+        if ((start2 - ip) < 3)   /* First Match too small : removed */
         {
             ml = ml2;
             ip = start2;
@@ -672,9 +681,11 @@ _Search2:
         }
 
 _Search3:
-        // Currently we have :
-        // ml2 > ml1, and
-        // ip1+3 <= ip2 (usually < ip1+ml1)
+        /*
+         * Currently we have :
+         * ml2 > ml1, and
+         * ip1+3 <= ip2 (usually < ip1+ml1)
+         */
         if ((start2 - ip) < OPTIMAL_ML)
         {
             int correction;
@@ -689,26 +700,26 @@ _Search3:
                 ml2 -= correction;
             }
         }
-        // Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18)
+        /* Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18) */
 
         if (start2 + ml2 < mflimit)
-            ml3 = LZ4HC_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3);
+            ml3 = LZ4HC_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3, maxNbAttempts);
         else ml3 = ml2;
 
-        if (ml3 == ml2) // No better match : 2 sequences to encode
+        if (ml3 == ml2) /* No better match : 2 sequences to encode */
         {
-            // ip & ref are known; Now for ml
+            /* ip & ref are known; Now for ml */
             if (start2 < ip+ml)  ml = (int)(start2 - ip);
-            // Now, encode 2 sequences
+            /* Now, encode 2 sequences */
             if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml, ref, limit, oend)) return 0;
             ip = start2;
             if (LZ4HC_encodeSequence(&ip, &op, &anchor, ml2, ref2, limit, oend)) return 0;
             continue;
         }
 
-        if (start3 < ip+ml+3) // Not enough space for match 2 : remove it
+        if (start3 < ip+ml+3) /* Not enough space for match 2 : remove it */
         {
-            if (start3 >= (ip+ml)) // can write Seq1 immediately ==> Seq2 is removed, so Seq3 becomes Seq1
+            if (start3 >= (ip+ml)) /* can write Seq1 immediately ==> Seq2 is removed, so Seq3 becomes Seq1 */
             {
                 if (start2 < ip+ml)
                 {
@@ -741,8 +752,10 @@ _Search3:
             goto _Search3;
         }
 
-        // OK, now we have 3 ascending matches; let's write at least the first one
-        // ip & ref are known; Now for ml
+        /*
+         * OK, now we have 3 ascending matches; let's write at least the first one
+         * ip & ref are known; Now for ml
+         */
         if (start2 < ip+ml)
         {
             if ((start2 - ip) < (int)ML_MASK)
@@ -777,80 +790,101 @@ _Search3:
 
     }
 
-    // Encode Last Literals
+    /* Encode Last Literals */
     {
         int lastRun = (int)(iend - anchor);
-        if ((limit) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0;  // Check output limit
+        if ((limit) && (((char*)op - dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize)) return 0;  /* Check output limit */
         if (lastRun>=(int)RUN_MASK) { *op++=(RUN_MASK<<ML_BITS); lastRun-=RUN_MASK; for(; lastRun > 254 ; lastRun-=255) *op++ = 255; *op++ = (BYTE) lastRun; }
         else *op++ = (BYTE)(lastRun<<ML_BITS);
         memcpy(op, anchor, iend - anchor);
         op += iend-anchor;
     }
 
-    // End
+    /* End */
     return (int) (((char*)op)-dest);
 }
 
 
-int LZ4_compressHC(const char* source, char* dest, int inputSize)
+int LZ4_compressHC2(const char* source, char* dest, int inputSize, int compressionLevel)
 {
     void* ctx = LZ4_createHC(source);
     int result;
     if (ctx==NULL) return 0;
 
-    result = LZ4HC_compress_generic (ctx, source, dest, inputSize, 0, noLimit);
+    result = LZ4HC_compress_generic (ctx, source, dest, inputSize, 0, compressionLevel, noLimit);
 
     LZ4_freeHC(ctx);
     return result;
 }
 
-int LZ4_compressHC_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize)
+int LZ4_compressHC(const char* source, char* dest, int inputSize) { return LZ4_compressHC2(source, dest, inputSize, 0); }
+
+int LZ4_compressHC2_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel)
 {
     void* ctx = LZ4_createHC(source);
     int result;
     if (ctx==NULL) return 0;
 
-    result = LZ4HC_compress_generic (ctx, source, dest, inputSize, maxOutputSize, limitedOutput);
+    result = LZ4HC_compress_generic (ctx, source, dest, inputSize, maxOutputSize, compressionLevel, limitedOutput);
 
     LZ4_freeHC(ctx);
     return result;
 }
 
+int LZ4_compressHC_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize)
+{
+    return LZ4_compressHC2_limitedOutput(source, dest, inputSize, maxOutputSize, 0);
+}
 
-//*****************************
-// Using an external allocation
-//*****************************
 
+/*****************************
+   Using external allocation
+*****************************/
 int LZ4_sizeofStateHC() { return sizeof(LZ4HC_Data_Structure); }
 
 
-int LZ4_compressHC_withStateHC (void* state, const char* source, char* dest, int inputSize)
+int LZ4_compressHC2_withStateHC (void* state, const char* source, char* dest, int inputSize, int compressionLevel)
 {
-    if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0;   // Error : state is not aligned for pointers (32 or 64 bits)
+    if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0;   /* Error : state is not aligned for pointers (32 or 64 bits) */
     LZ4_initHC ((LZ4HC_Data_Structure*)state, (const BYTE*)source);
-    return LZ4HC_compress_generic (state, source, dest, inputSize, 0, noLimit);
+    return LZ4HC_compress_generic (state, source, dest, inputSize, 0, compressionLevel, noLimit);
 }
 
+int LZ4_compressHC_withStateHC (void* state, const char* source, char* dest, int inputSize)
+{ return LZ4_compressHC2_withStateHC (state, source, dest, inputSize, 0); }
 
-int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
+
+int LZ4_compressHC2_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel)
 {
-    if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0;   // Error : state is not aligned for pointers (32 or 64 bits)
+    if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0;   /* Error : state is not aligned for pointers (32 or 64 bits) */
     LZ4_initHC ((LZ4HC_Data_Structure*)state, (const BYTE*)source);
-    return LZ4HC_compress_generic (state, source, dest, inputSize, maxOutputSize, limitedOutput);
+    return LZ4HC_compress_generic (state, source, dest, inputSize, maxOutputSize, compressionLevel, limitedOutput);
 }
 
+int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
+{ return LZ4_compressHC2_limitedOutput_withStateHC (state, source, dest, inputSize, maxOutputSize, 0); }
+
 
-//****************************
-// Stream functions
-//****************************
+/****************************
+   Stream functions
+****************************/
 
 int LZ4_compressHC_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize)
 {
-    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, 0, noLimit);
+    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, 0, 0, noLimit);
+}
+
+int LZ4_compressHC2_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int compressionLevel)
+{
+    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, 0, compressionLevel, noLimit);
 }
 
 int LZ4_compressHC_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize)
 {
-    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, maxOutputSize, limitedOutput);
+    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, maxOutputSize, 0, limitedOutput);
 }
 
+int LZ4_compressHC2_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel)
+{
+    return LZ4HC_compress_generic (LZ4HC_Data, source, dest, inputSize, maxOutputSize, compressionLevel, limitedOutput);
+}
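
[Editor's note, not part of the patch] The new `compressionLevel` parameter above is turned into a search budget by the expression in `LZ4HC_compress_generic`: `level > 16 ? 1<<16 : level ? 1<<(level-1) : 1<<LZ4HC_DEFAULT_COMPRESSIONLEVEL`. The numeric value of `LZ4HC_DEFAULT_COMPRESSIONLEVEL` is not shown in this hunk, so the sketch below takes it as an explicit argument rather than guessing it:

```python
MAX_COMPRESSION_LEVEL = 16  # per the patch's new #define

def max_nb_attempts(compression_level, default_level):
    """Python mirror of the C expression:
    level > 16 ? 1<<16 : level ? 1<<(level-1) : 1<<DEFAULT."""
    if compression_level > MAX_COMPRESSION_LEVEL:
        return 1 << MAX_COMPRESSION_LEVEL       # levels above 16 behave as 16
    if compression_level:
        return 1 << (compression_level - 1)     # search effort doubles per level
    return 1 << default_level                   # 0 selects the default level
```

Note that the removed fixed `MAX_NB_ATTEMPTS` of 256 corresponds to level 9 under this mapping, since `1 << (9 - 1) == 256`.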
diff --git a/c-blosc/internal-complibs/lz4-r110/lz4hc.h b/c-blosc/internal-complibs/lz4-r113/lz4hc.h
similarity index 84%
rename from c-blosc/internal-complibs/lz4-r110/lz4hc.h
rename to c-blosc/internal-complibs/lz4-r113/lz4hc.h
index 4fb1916..9e6929c 100644
--- a/c-blosc/internal-complibs/lz4-r110/lz4hc.h
+++ b/c-blosc/internal-complibs/lz4-r113/lz4hc.h
@@ -63,18 +63,31 @@ LZ4_compress_limitedOutput() :
 */
 
 
+int LZ4_compressHC2 (const char* source, char* dest, int inputSize, int compressionLevel);
+int LZ4_compressHC2_limitedOutput (const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel);
+/*
+    Same functions as above, but with programmable 'compressionLevel'.
+    Recommended values are between 4 and 9, although any value between 0 and 16 will work.
+    'compressionLevel'==0 means use default 'compressionLevel' value.
+    Values above 16 behave the same as 16.
+    Equivalent variants exist for all other compression functions below.
+*/
+
 /* Note :
 Decompression functions are provided within LZ4 source code (see "lz4.h") (BSD license)
 */
 
 
-//*****************************
-// Using an external allocation
-//*****************************
+/**************************************
+   Using an external allocation
+**************************************/
 int LZ4_sizeofStateHC();
 int LZ4_compressHC_withStateHC               (void* state, const char* source, char* dest, int inputSize);
 int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize);
 
+int LZ4_compressHC2_withStateHC              (void* state, const char* source, char* dest, int inputSize, int compressionLevel);
+int LZ4_compressHC2_limitedOutput_withStateHC(void* state, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel);
+
 /*
 These functions are provided should you prefer to allocate memory for compression tables with your own allocation methods.
 To know how much memory must be allocated for the compression tables, use :
@@ -88,16 +101,18 @@ They just use the externally allocated memory area instead of allocating their o
 */
 
 
-//****************************
-// Streaming Functions
-//****************************
-
+/**************************************
+   Streaming Functions
+**************************************/
 void* LZ4_createHC (const char* inputBuffer);
 int   LZ4_compressHC_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize);
 int   LZ4_compressHC_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize);
 char* LZ4_slideInputBufferHC (void* LZ4HC_Data);
 int   LZ4_freeHC (void* LZ4HC_Data);
 
+int   LZ4_compressHC2_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int compressionLevel);
+int   LZ4_compressHC2_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel);
+
 /*
 These functions allow the compression of dependent blocks, where each block benefits from prior 64 KB within preceding blocks.
 In order to achieve this, it is necessary to start creating the LZ4HC Data Structure, thanks to the function :
diff --git a/c-blosc/tests/Makefile b/c-blosc/tests/Makefile
index fddba4b..9929ed0 100644
--- a/c-blosc/tests/Makefile
+++ b/c-blosc/tests/Makefile
@@ -9,7 +9,7 @@ SOURCES := $(wildcard *.c)
 EXECUTABLES := $(patsubst %.c, %.exe, $(SOURCES))
 
 # Support for internal LZ4 and LZ4HC
-LZ4_DIR = ../internal-complibs/lz4-r110
+LZ4_DIR = ../internal-complibs/lz4-r113
 CFLAGS += -DHAVE_LZ4 -I$(LZ4_DIR)
 BLOSC_LIB += $(wildcard $(LZ4_DIR)/*.c)
 
diff --git a/c-blosc/tests/test_api.c b/c-blosc/tests/test_api.c
index 76d7226..8233cd8 100644
--- a/c-blosc/tests/test_api.c
+++ b/c-blosc/tests/test_api.c
@@ -56,13 +56,24 @@ static char *test_cbuffer_versions() {
 }
 
 
+static char *test_cbuffer_complib() {
+  char *complib;
+
+  complib = blosc_cbuffer_complib(dest);
+  mu_assert("ERROR: complib incorrect", strcmp(complib, "BloscLZ") == 0);
+  return 0;
+}
+
+
 static char *all_tests() {
   mu_run_test(test_cbuffer_sizes);
   mu_run_test(test_cbuffer_metainfo);
   mu_run_test(test_cbuffer_versions);
+  mu_run_test(test_cbuffer_complib);
   return 0;
 }
 
+
 int main(int argc, char **argv) {
   char *result;
 
diff --git a/doc/source/release_notes.rst b/doc/source/release_notes.rst
index 709d0d7..a56ad0e 100644
--- a/doc/source/release_notes.rst
+++ b/doc/source/release_notes.rst
@@ -58,6 +58,7 @@ Release timeline
 ----------------
 
 =============== =========== ==========
+PyTables        3.1.1       2014-03-25
 PyTables        3.1.0       2014-02-05
 PyTables        3.1.0rc2    2014-01-22
 PyTables        3.1.0rc1    2014-01-17
diff --git a/doc/source/usersguide/libref/filenode_classes.rst b/doc/source/usersguide/libref/filenode_classes.rst
index 391298b..ee0cd7d 100644
--- a/doc/source/usersguide/libref/filenode_classes.rst
+++ b/doc/source/usersguide/libref/filenode_classes.rst
@@ -23,6 +23,10 @@ Module functions
 
 .. autofunction:: open_node
 
+.. autofunction:: read_from_filenode
+
+.. autofunction:: save_to_filenode
+
 
 The RawPyTablesIO base class
 ----------------------------
diff --git a/examples/nested-iter.py b/examples/nested-iter.py
deleted file mode 100644
index 44c3bbb..0000000
--- a/examples/nested-iter.py
+++ /dev/null
@@ -1,19 +0,0 @@
-"""A small example showing the use of nested iterators within PyTables.
-
-This program needs the output file, 'tutorial1.h5', generated by
-'tutorial1-1.py' in order to work.
-
-"""
-
-from __future__ import print_function
-import tables
-f = tables.open_file("tutorial1.h5")
-rout = f.root.detector.readout
-
-print("*** Result of a three-folded nested iterator ***")
-for p in rout.where('pressure < 16'):
-    for q in rout.where('pressure < 9'):
-        for n in rout.where('energy < 10'):
-            print("pressure, energy-->", p['pressure'], n['energy'])
-print("*** End of selected data ***")
-f.close()
diff --git a/setup.py b/setup.py
index dcc3b6b..0f2f3f7 100755
--- a/setup.py
+++ b/setup.py
@@ -7,6 +7,7 @@ import os
 import re
 import sys
 import ctypes
+import tempfile
 import textwrap
 import subprocess
 from os.path import exists, expanduser
@@ -734,8 +735,24 @@ if 'BLOSC' not in optional_libs:
     inc_dirs += glob.glob('c-blosc/internal-complibs/*')
     # ...and the macros for all the compressors supported
     def_macros += [('HAVE_LZ4', 1), ('HAVE_SNAPPY', 1), ('HAVE_ZLIB', 1)]
+
     # Add -msse2 flag for optimizing shuffle in include Blosc
-    if os.name == 'posix':
+    def compiler_has_flags(compiler, flags):
+        with tempfile.NamedTemporaryFile(mode='w', suffix='.c',
+                                         delete=False) as fd:
+            fd.write('int main() {return 0;}')
+
+        try:
+            compiler.compile([fd.name], extra_preargs=flags)
+        except Exception:
+            return False
+        else:
+            return True
+        finally:
+            os.remove(fd.name)
+
+    if compiler_has_flags(compiler, ["-msse2"]):
+        print("Setting compiler flag '-msse2'")
         CFLAGS.append("-msse2")
 else:
     ADDLIBS += ['blosc']
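
[Editor's note, not part of the patch] The `compiler_has_flags` probe added to setup.py above compiles a trivial program to test a flag. A hedged standalone variant of the same idea is sketched below; it shells out to a compiler command directly via `subprocess` instead of using the distutils `compiler` object, and the compiler command name is an assumption supplied by the caller:

```python
import os
import subprocess
import tempfile

def compiler_has_flags(cc, flags):
    """Return True if compiler command `cc` (caller-supplied assumption)
    accepts `flags` when compiling an empty C program."""
    fd, path = tempfile.mkstemp(suffix='.c')
    try:
        with os.fdopen(fd, 'w') as f:
            f.write('int main(void) { return 0; }\n')
        # Discard the object file; a nonzero exit means the flag was rejected.
        result = subprocess.call([cc, '-c', path, '-o', os.devnull] + list(flags))
        return result == 0
    except OSError:
        return False  # compiler command not found
    finally:
        os.remove(path)
```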
diff --git a/tables/__init__.py b/tables/__init__.py
index bab517c..85ab98a 100644
--- a/tables/__init__.py
+++ b/tables/__init__.py
@@ -210,9 +210,12 @@ else:
         _atom.all_types.discard('complex192')
         _atom.ComplexAtom._isizes.remove(24)
     except AttributeError:
-        del _atom.Float128Atom, _atom.Complex256Atom
-        del _description.Float128Col, _description.Complex256Col
-        _atom.all_types.discard('complex256')
-        _atom.ComplexAtom._isizes.remove(32)
+        try:
+            del _atom.Float128Atom, _atom.Complex256Atom
+            del _description.Float128Col, _description.Complex256Col
+            _atom.all_types.discard('complex256')
+            _atom.ComplexAtom._isizes.remove(32)
+        except AttributeError:
+            pass
     del _atom, _description
 del _broken_hdf5_long_double
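The nested ``try``/``except AttributeError`` introduced in tables/__init__.py above is a cascade of optional-feature cleanups: if the first set of names is absent, fall back to the second, and if that is absent too, do nothing. A minimal sketch of the pattern, with `SimpleNamespace` and made-up attribute names standing in for the real `_atom` module:

```python
from types import SimpleNamespace

# Stand-in for the _atom module: only the 'complex256' flavour exists here.
mod = SimpleNamespace(Complex256Atom=object)

try:
    del mod.Complex192Atom      # first candidate: absent -> AttributeError
except AttributeError:
    try:
        del mod.Complex256Atom  # fallback candidate: present -> removed
    except AttributeError:
        pass                    # neither present: nothing to clean up

print(hasattr(mod, 'Complex256Atom'))  # False
```

The inner `except ... pass` is the actual fix here: before this patch, a platform lacking *both* long-double flavours raised an unhandled `AttributeError` at import time.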
diff --git a/tables/file.py b/tables/file.py
index ab236f6..ebb63f0 100644
--- a/tables/file.py
+++ b/tables/file.py
@@ -1135,7 +1135,11 @@ class File(hdf5extension.File, object):
                                 '(or None) then both the atom and shape '
                                 'parametes should be provided.')
             else:
-                obj = numpy.zeros(shape, atom.dtype)
+                # Making strides=(0,...) below is a trick to create the
+                # array fast and without memory consumption
+                dflt = numpy.zeros((), dtype=atom.dtype)
+                obj = numpy.ndarray(shape, dtype=atom.dtype, buffer=dflt,
+                                    strides=(0,)*len(shape))
         else:
             flavor = flavor_of(obj)
             # use a temporary object because converting obj at this stage
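The zero-stride trick used in tables/file.py above can be demonstrated in isolation: a large logical shape is backed by a single scalar buffer, and strides of 0 in every dimension make all elements alias that one scalar, so no O(N) memory is allocated:

```python
import numpy as np

# A single zero-filled scalar acts as the entire backing buffer (8 bytes)...
dflt = np.zeros((), dtype=np.float64)

# ...while zero strides make every element of the large logical array
# read the same 8 bytes.
shape = (1000, 1000)
arr = np.ndarray(shape, dtype=dflt.dtype, buffer=dflt,
                 strides=(0,) * len(shape))

print(arr.shape)      # (1000, 1000)
print(arr.strides)    # (0, 0)
print(arr[999, 999])  # 0.0 -- a million logical zeros, one stored value
```

This gives `create_array` a cheap default object that still carries the dtype and shape information the rest of the code needs.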
diff --git a/tables/flavor.py b/tables/flavor.py
index 9bbf277..87a0b54 100644
--- a/tables/flavor.py
+++ b/tables/flavor.py
@@ -357,11 +357,17 @@ def _is_numpy(array):
 
 
 def _numpy_contiguous(convfunc):
-    """Decorate `convfunc` to return a *contiguous* NumPy array."""
+    """Decorate `convfunc` to return a *contiguous* NumPy array.
+
+    Note: when arrays are 0-strided, the copy is avoided.  This lets
+    `array` still carry information about the dtype and shape.
+    """
 
     def conv_to_numpy(array):
         nparr = convfunc(array)
-        if hasattr(nparr, 'flags') and not nparr.flags.contiguous:
+        if (hasattr(nparr, 'flags') and
+            not nparr.flags.contiguous and
+            sum(nparr.strides) != 0):
             nparr = nparr.copy()  # copying the array makes it contiguous
         return nparr
     conv_to_numpy.__name__ = convfunc.__name__
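The guard added to the flavor.py decorator above can be exercised on its own. The sketch below mirrors the decorator's condition in a plain function (`ensure_contiguous` is an illustrative name, not part of the PyTables API): zero-strided placeholder arrays must pass through uncopied, while genuinely non-contiguous data still gets copied.

```python
import numpy as np

def ensure_contiguous(nparr):
    """Mirror of the decorator's guard: copy only when the array is
    non-contiguous AND actually carries data (i.e. non-zero strides)."""
    if (hasattr(nparr, 'flags') and
            not nparr.flags.contiguous and
            sum(nparr.strides) != 0):
        nparr = nparr.copy()  # copying the array makes it contiguous
    return nparr

# A 0-strided placeholder (a dtype/shape carrier with no real data):
# it comes back as the *same* object, keeping its zero strides intact.
base = np.zeros((), dtype='i4')
fake = np.ndarray((3, 4), dtype='i4', buffer=base, strides=(0, 0))
print(ensure_contiguous(fake) is fake)  # True

# A genuinely non-contiguous array (a transposed view) is still copied.
t = np.arange(6).reshape(2, 3).T
print(ensure_contiguous(t) is t)        # False
```

Without the `sum(strides) != 0` clause, copying would materialize the placeholder into a full-size contiguous array and defeat the zero-stride trick used in file.py.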
diff --git a/tables/hdf5extension.pyx b/tables/hdf5extension.pyx
index 001a99b..207180d 100644
--- a/tables/hdf5extension.pyx
+++ b/tables/hdf5extension.pyx
@@ -1206,7 +1206,13 @@ cdef class Array(Leaf):
     self.rank = len(shape)
     self.dims = npy_malloc_dims(self.rank, <npy_intp *>(dims.data))
     # Get the pointer to the buffer data area
-    rbuf = nparr.data
+    strides = (<object>nparr).strides
+    # When the object is not a 0-d ndarray and all of its strides are
+    # zero, the array carries no actual data
+    if strides != () and sum(strides) == 0:
+      rbuf = NULL
+    else:
+      rbuf = nparr.data
     # Save the array
     complib = (self.filters.complib or '').encode('utf-8')
     version = self._v_version.encode('utf-8')
diff --git a/tables/nodes/filenode.py b/tables/nodes/filenode.py
index 1d3422b..87b84ad 100644
--- a/tables/nodes/filenode.py
+++ b/tables/nodes/filenode.py
@@ -31,6 +31,7 @@ See :ref:`filenode_usersguide` for instructions on use.
 """
 
 import io
+import os
 import warnings
 
 import numpy as np
@@ -46,6 +47,13 @@ NodeTypeVersions = [1, 2]
 """Supported values for NODE_TYPE_VERSION node system attribute."""
 
 
+# have a Python2/3 compatible way to check for string
+try:
+    string_types = basestring
+except NameError:
+    string_types = str
+
+
 class RawPyTablesIO(io.RawIOBase):
     """Base class for raw binary I/O on HDF5 files using PyTables."""
 
@@ -699,6 +707,160 @@ def open_node(node, mode='r'):
 openNode = previous_api(open_node)
 
 
+def save_to_filenode(h5file, filename, where, name=None, overwrite=False,
+                     title="", filters=None):
+    """Save a file's contents to a filenode inside a PyTables file.
+
+    .. versionadded:: 3.2
+
+    Parameters
+    ----------
+    h5file
+      The PyTables file to be written to; can be either a string
+      giving the file's location or a :class:`File` object.  If a file
+      with name *h5file* already exists, it will be opened in
+      mode ``a``.
+
+    filename
+      Path of the file which shall be stored within the PyTables file.
+
+    where, name
+      Location of the filenode where the data shall be stored.  If
+      *name* is not given, and *where* is either a :class:`Group`
+      object or a string ending on ``/``, the leaf name will be set to
+      the file name of *filename*.
+
+    overwrite
+      Whether or not a possibly existing filenode of the specified
+      name shall be overwritten.
+
+    title
+       A description for this node (it sets the ``TITLE`` HDF5
+       attribute on disk).
+
+    filters
+       An instance of the :class:`Filters` class that provides
+       information about the desired I/O filters to be applied
+       during the life of this object.
+
+    """
+    # sanity checks
+    if not os.access(filename, os.R_OK):
+        raise IOError("The file '%s' could not be read" % filename)
+    if isinstance(h5file, tables.file.File) and h5file.mode == "r":
+        raise IOError("The file '%s' is opened read-only" % h5file.filename)
+
+    # guess filenode's name if necessary
+    if (name is None and (isinstance(where, tables.group.Group) or
+                          (isinstance(where, string_types) and
+                           where.endswith("/")))):
+        name = os.path.split(filename)[1]
+
+    new_h5file = not isinstance(h5file, tables.file.File)
+    f = tables.File(h5file, "a") if new_h5file else h5file
+
+    # check for already existing filenode
+    try:
+        n = f.get_node(where=where, name=name)
+        if not overwrite:
+            if new_h5file:
+                f.close()
+            raise IOError("Specified node already exists in file '%s'" %
+                          f.filename)
+    except tables.NoSuchNodeError:
+        pass
+
+    # read data from disk
+    with open(filename, "rb") as fd:
+        data = fd.read()
+
+    if isinstance(where, string_types) and name is None:
+        nodepath = where.split("/")
+        where = "/" + "/".join(nodepath[:-1])
+        name = nodepath[-1]
+
+    # remove existing filenode if present
+    try:
+        f.remove_node(where=where, name=name)
+    except tables.NoSuchNodeError:
+        pass
+
+    # write file's contents to filenode
+    fnode = new_node(f, where=where, name=name, title=title, filters=filters)
+    fnode.write(data)
+    fnode.close()
+
+    # cleanup
+    if new_h5file:
+        f.close()
+
+
+def read_from_filenode(h5file, filename, where, name=None, overwrite=False,
+                       create_target=False):
+    """Read a filenode from a PyTables file and write its contents to a file.
+
+    .. versionadded:: 3.2
+
+    Parameters
+    ----------
+    h5file
+      The PyTables file to be read from; can be either a string
+      giving the file's location or a :class:`File` object.
+
+    filename
+      Path of the file where the contents of the filenode shall be
+      written to.  If *filename* points to a directory or ends with
+      ``/`` (``\`` on Windows), the filename will be set to the *name*
+      attribute of the read filenode.
+
+    where, name
+      Location of the filenode where the data shall be read from.
+
+    overwrite
+      Whether or not a possibly existing file of the specified
+      *filename* shall be overwritten.
+
+    create_target
+      Whether or not the folder hierarchy needed to accommodate the
+      given target ``filename`` will be created.
+
+    """
+    new_h5file = not isinstance(h5file, tables.file.File)
+    f = tables.File(h5file, "r") if new_h5file else h5file
+    fnode = open_node(f.get_node(where=where, name=name))
+
+    # guess output filename if necessary
+    if os.path.isdir(filename) or filename.endswith(os.path.sep):
+        filename = os.path.join(filename, fnode.node.name)
+
+    if os.access(filename, os.R_OK) and not overwrite:
+        if new_h5file:
+            f.close()
+        raise IOError("The file '%s' already exists" % filename)
+
+    # create folder hierarchy if necessary
+    if create_target and not os.path.isdir(os.path.split(filename)[0]):
+        os.makedirs(os.path.split(filename)[0])
+
+    if not os.access(os.path.split(filename)[0], os.W_OK):
+        if new_h5file:
+            f.close()
+        raise IOError("The file '%s' cannot be written to" % filename)
+
+    # read data from filenode
+    data = fnode.read()
+    fnode.close()
+
+    # store data to file
+    with open(filename, "wb") as fd:
+        fd.write(data)
+
+    # cleanup
+    del data
+    if new_h5file:
+        f.close()
+
+
 ## Local Variables:
 ## mode: python
 ## py-indent-offset: 4
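The two helpers added to filenode.py combine into a simple round trip. A minimal usage sketch, assuming an installed PyTables build that includes these helpers (the paths below are illustrative temporaries):

```python
import os
import tempfile

from tables.nodes import filenode

tmpdir = tempfile.mkdtemp()
datafile = os.path.join(tmpdir, 'payload.txt')
h5fname = os.path.join(tmpdir, 'store.h5')
outfile = os.path.join(tmpdir, 'payload_out.txt')

with open(datafile, 'wb') as f:
    f.write(b'hello filenode\n')

# Store the external file inside the HDF5 container as /payload ...
filenode.save_to_filenode(h5fname, datafile, '/', name='payload')

# ... and extract it again to a different path.
filenode.read_from_filenode(h5fname, outfile, '/', name='payload')

with open(outfile, 'rb') as f:
    assert f.read() == b'hello filenode\n'
```

Note that both helpers accept either a filename or an open `File` object for `h5file`; when given a filename they open and close the HDF5 file themselves, as above.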
diff --git a/tables/nodes/tests/test_filenode.py b/tables/nodes/tests/test_filenode.py
index 1986974..90e3fc9 100644
--- a/tables/nodes/tests/test_filenode.py
+++ b/tables/nodes/tests/test_filenode.py
@@ -15,6 +15,7 @@
 import unittest
 import tempfile
 import os
+import shutil
 import warnings
 
 import tables
@@ -864,6 +865,109 @@ class Version1TestCase(OldVersionTestCase):
     oldh5fname = 'test_filenode_v1.h5'
 
 
+class DirectReadWriteTestCase(common.TempFileMixin, common.PyTablesTestCase):
+
+    datafname = 'test_filenode.dat'
+
+    def setUp(self):
+        """
+        This method sets the following instance attributes:
+
+        * ``h5fname``: the name of the temporary HDF5 file.
+        * ``h5file``: the writable, temporary HDF5 file with a '/test' node.
+        * ``datafname``: the name of the data file to be stored in the
+          temporary HDF5 file.
+        * ``data``: the contents of the file ``datafname``
+        * ``testfname``: the name of a temporary file to be written to.
+        """
+
+        super(DirectReadWriteTestCase, self).setUp()
+        self.datafname = self._testFilename(self.datafname)
+        self.testfname = tempfile.mktemp()
+        self.testh5fname = tempfile.mktemp(suffix=".h5")
+        with open(self.datafname, "rb") as fd:
+            self.data = fd.read()
+        self.testdir = tempfile.mkdtemp()
+
+    def tearDown(self):
+        """tearDown() -> None
+
+        Removes the temporary files and the temporary directory
+        created by setUp().
+        """
+
+        super(DirectReadWriteTestCase, self).tearDown()
+        if os.access(self.testfname, os.R_OK):
+            os.remove(self.testfname)
+        if os.access(self.testh5fname, os.R_OK):
+            os.remove(self.testh5fname)
+        shutil.rmtree(self.testdir)
+
+    def test01_WriteToFilename(self):
+        # write contents of datafname to h5 testfile
+        filenode.save_to_filenode(self.testh5fname, self.datafname, "/test1")
+        # make sure writing to an existing node doesn't work ...
+        self.assertRaises(IOError, filenode.save_to_filenode, self.testh5fname,
+                          self.datafname, "/test1")
+        # ... except if overwrite is True
+        filenode.save_to_filenode(self.testh5fname, self.datafname, "/test1",
+                                  overwrite=True)
+        # write again, this time specifying a name
+        filenode.save_to_filenode(self.testh5fname, self.datafname, "/",
+                                  name="test2")
+        # read from test h5file
+        filenode.read_from_filenode(self.testh5fname, self.testfname, "/test1")
+        # and compare result to what it should be
+        with open(self.testfname, "rb") as fd:
+            self.assertEqual(fd.read(), self.data)
+        # make sure extracting to an existing file doesn't work ...
+        self.assertRaises(IOError, filenode.read_from_filenode,
+                          self.testh5fname, self.testfname, "/test1")
+        # ... except if overwrite is True.  Also try reading with a name
+        filenode.read_from_filenode(self.testh5fname, self.testfname, "/",
+                                    name="test2", overwrite=True)
+        # and compare to what it should be
+        with open(self.testfname, "rb") as fd:
+            self.assertEqual(fd.read(), self.data)
+        # cleanup
+        os.remove(self.testfname)
+        os.remove(self.testh5fname)
+
+    def test02_WriteToHDF5File(self):
+        # write contents of datafname to h5 testfile
+        filenode.save_to_filenode(self.h5file, self.datafname, "/test1")
+        # make sure writing to an existing node doesn't work ...
+        self.assertRaises(IOError, filenode.save_to_filenode, self.h5file,
+                          self.datafname, "/test1")
+        # ... except if overwrite is True
+        filenode.save_to_filenode(self.h5file, self.datafname, "/test1",
+                                  overwrite=True)
+        # read from test h5file
+        filenode.read_from_filenode(self.h5file, self.testfname, "/test1")
+        # and compare result to what it should be
+        with open(self.testfname, "rb") as fd:
+            self.assertEqual(fd.read(), self.data)
+        # make sure extracting to an existing file doesn't work ...
+        self.assertRaises(IOError, filenode.read_from_filenode, self.h5file,
+                          self.testfname, "/test1")
+        # make sure the original h5file is still alive and kicking
+        self.assertEqual(isinstance(self.h5file, tables.file.File), True)
+        self.assertEqual(self.h5file.mode, "w")
+
+    def test03_AutomaticNameGuessing(self):
+        # write using the filename as node name
+        filenode.save_to_filenode(self.testh5fname, self.datafname, "/")
+        # and read again
+        datafname = os.path.split(self.datafname)[1]
+        filenode.read_from_filenode(self.testh5fname, self.testdir, "/",
+                                    name=datafname)
+        # test if the output file really has the expected name
+        self.assertEqual(os.access(os.path.join(self.testdir, datafname),
+                                   os.R_OK), True)
+        # and compare result to what it should be
+        with open(os.path.join(self.testdir, datafname), "rb") as fd:
+            self.assertEqual(fd.read(), self.data)
+
+
 #----------------------------------------------------------------------
 def suite():
     """suite() -> test suite
@@ -883,6 +987,7 @@ def suite():
     #theSuite.addTest(unittest.makeSuite(LineSeparatorTestCase))
     theSuite.addTest(unittest.makeSuite(AttrsTestCase))
     theSuite.addTest(unittest.makeSuite(ClosedH5FileTestCase))
+    theSuite.addTest(unittest.makeSuite(DirectReadWriteTestCase))
 
     return theSuite
 
diff --git a/tables/tests/test_tablesMD.py b/tables/tests/test_tablesMD.py
index b31aace..f3e186f 100644
--- a/tables/tests/test_tablesMD.py
+++ b/tables/tests/test_tablesMD.py
@@ -165,8 +165,8 @@ class BasicTestCase(common.PyTablesTestCase):
                     else:
                         row['var5'] = float(i)
                     # var6 will be like var3 but byteswaped
-                    row['var6'] = ((row['var3'] >> 8) & 0xff) + \
-                                  ((row['var3'] << 8) & 0xff00)
+                    row['var6'] = (((row['var3'] >> 8) & 0xff) +
+                                   ((row['var3'] << 8) & 0xff00))
                     row.append()
 
             # Flush the buffer for this table
@@ -269,7 +269,7 @@ class BasicTestCase(common.PyTablesTestCase):
         if common.verbose:
             print("Table:", repr(table))
             print("Nrows in", table._v_pathname, ":", table.nrows)
-            print("Last record in table ==>", rec)
+            print("Last record in table ==>", r)
             print("Total selected records in table ==> ", len(result))
         nrows = self.expectedrows - 1
         r = [r for r in table.iterrows() if r['var2'][0][0] < 20][-1]
@@ -304,7 +304,7 @@ class BasicTestCase(common.PyTablesTestCase):
         result = [r['var5'] for r in table.iterrows() if r['var2'][0][0] < 20]
         if common.verbose:
             print("Nrows in", table._v_pathname, ":", table.nrows)
-            print("Last record in table ==>", rec)
+            print("Last record in table ==>", r)
             print("Total selected records in table ==> ", len(result))
         nrows = table.nrows
         r = [r for r in table.iterrows() if r['var2'][0][0] < 20][-1]
@@ -443,7 +443,7 @@ class BasicTestCase(common.PyTablesTestCase):
         if common.verbose:
             print("Nrows in", table._v_pathname, ":", table.nrows)
             print("On-disk byteorder ==>", table.byteorder)
-            print("Last record in table ==>", rec)
+            print("Last record in table ==>", r)
             print("Total selected records in table ==>", len(result))
         nrows = self.expectedrows - 1
         r = list(table.iterrows())[-1]
@@ -603,8 +603,8 @@ class BasicRangeTestCase(unittest.TestCase):
                 else:
                     row['var5'] = float(i)
                 # var6 will be like var3 but byteswaped
-                row['var6'] = ((row['var3'] >> 8) & 0xff) + \
-                              ((row['var3'] << 8) & 0xff00)
+                row['var6'] = (((row['var3'] >> 8) & 0xff) +
+                               ((row['var3'] << 8) & 0xff00))
                 row.append()
 
             # Flush the buffer for this table
@@ -693,7 +693,7 @@ class BasicRangeTestCase(unittest.TestCase):
                 elif self.checkgetCol:
                     print("Last value *read* in getCol ==>", column[-1])
                 else:
-                    print("Last record *read* in table range ==>", rec)
+                    print("Last record *read* in table range ==>", r)
             print("Total number of selected records ==>", len(result))
             print("Selected records:\n", result)
             print("Selected records should look like:\n",

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/pytables.git


