[skimage] 01/06: New upstream version 0.13.1

Yaroslav Halchenko debian@onerussian.com
Tue Oct 17 17:55:30 UTC 2017


This is an automated email from the git hooks/post-receive script.

yoh pushed a commit to branch master
in repository skimage.

commit 99efed28ea57f33dd2617a7b3dadd1db84745a6b
Author: Yaroslav Halchenko <debian@onerussian.com>
Date:   Tue Oct 17 07:39:58 2017 -0400

    New upstream version 0.13.1
---
 .github/PULL_REQUEST_TEMPLATE.md                   |   5 +
 Makefile                                           |   2 +-
 RELEASE.txt                                        | 115 +++-
 appveyor.yml                                       |   2 +-
 bento.info                                         |   2 +-
 doc/examples/edges/plot_active_contours.py         |  10 +-
 doc/examples/edges/plot_line_hough_transform.py    |  14 +-
 doc/examples/edges/plot_skeleton.py                |  10 +
 doc/examples/features_detection/plot_corner.py     |   4 +-
 doc/examples/filters/plot_deconvolution.py         |  10 +-
 doc/examples/filters/plot_inpaint.py               |   6 +-
 doc/examples/filters/plot_restoration.py           |   2 +-
 doc/examples/segmentation/plot_ncut.py             |   2 +-
 doc/examples/segmentation/plot_niblack_sauvola.py  |   8 +-
 doc/examples/xx_applications/plot_rank_filters.py  |  10 +-
 doc/ext/sphinx_gallery/LICENSE                     |  27 -
 doc/ext/sphinx_gallery/README.txt                  |   6 -
 doc/ext/sphinx_gallery/__init__.py                 |  13 -
 doc/ext/sphinx_gallery/_static/broken_example.png  | Bin 21404 -> 0 bytes
 doc/ext/sphinx_gallery/_static/broken_stamp.svg    |  90 ---
 doc/ext/sphinx_gallery/_static/gallery.css         | 192 ------
 doc/ext/sphinx_gallery/_static/no_image.png        | Bin 4315 -> 0 bytes
 doc/ext/sphinx_gallery/backreferences.py           | 193 -------
 doc/ext/sphinx_gallery/docs_resolv.py              | 462 ---------------
 doc/ext/sphinx_gallery/downloads.py                | 117 ----
 doc/ext/sphinx_gallery/gen_gallery.py              | 252 --------
 doc/ext/sphinx_gallery/gen_rst.py                  | 643 ---------------------
 doc/ext/sphinx_gallery/notebook.py                 | 194 -------
 doc/ext/sphinx_gallery/py_source_parser.py         |  99 ----
 doc/release/contribs.py                            |   2 +-
 doc/release/release_0.13.rst                       |  18 +
 doc/source/_templates/localtoc.html                |   3 +-
 doc/source/_templates/navbar.html                  |   1 -
 doc/source/_templates/navigation.html              |  23 +-
 doc/source/_templates/versions.html                |  18 +-
 doc/source/conf.py                                 |   2 +-
 doc/source/themes/scikit-image/layout.html         |   6 +-
 .../themes/scikit-image/static/css/custom.css      |  57 +-
 doc/tools/apigen.py                                |  51 +-
 skimage/__init__.py                                |   2 +-
 skimage/draw/draw.py                               |  38 +-
 skimage/feature/util.py                            |   2 +-
 skimage/future/graph/graph_cut.py                  |   8 +-
 skimage/future/graph/graph_merge.py                |   6 +-
 skimage/future/graph/rag.py                        |  58 +-
 skimage/future/graph/tests/test_rag.py             |   8 +-
 skimage/io/_io.py                                  |  10 +-
 skimage/measure/_ccomp.pxd                         |   1 -
 skimage/restoration/__init__.py                    |  15 -
 skimage/restoration/_denoise.py                    |  16 +-
 skimage/restoration/inpaint.py                     |   3 +-
 skimage/restoration/tests/test_denoise.py          |  26 +-
 skimage/transform/_geometric.py                    |   2 +-
 tools/{osx_wheel_upload.sh => upload_wheels.sh}    |   1 +
 54 files changed, 393 insertions(+), 2474 deletions(-)

diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 3d884e8..1af7067 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -16,3 +16,8 @@
 [If this is a bug-fix or enhancement, it closes issue # ]
 [If this is a new feature, it implements the following paper: ]
 
+## For reviewers
+
+- [ ] Check that the PR title is short, concise, and will make sense 1 year
+  later.
+- [ ] Check that new features are mentioned in `doc/release/release_dev.rst`.
diff --git a/Makefile b/Makefile
index bd4e7a2..558f5f0 100644
--- a/Makefile
+++ b/Makefile
@@ -19,5 +19,5 @@ coverage:
 	$(NOSETESTS) skimage --with-coverage --cover-package=skimage
 
 html:
-	pip install -q sphinx
+	pip install -q sphinx pytest-runner sphinx-gallery
 	export SPHINXOPTS=-W; make -C doc html
diff --git a/RELEASE.txt b/RELEASE.txt
index cd09679..f690608 100644
--- a/RELEASE.txt
+++ b/RELEASE.txt
@@ -1,21 +1,37 @@
 How to make a new release of ``skimage``
 ========================================
 
+While following this guide, note down all the times that you need to consult a
+previous release manager, or that you find an instruction unclear. You will
+make a PR to update these notes after you are done with the release. ;-)
+
 - Check ``TODO.txt`` for any outstanding tasks.
 
-- Update release notes.
+- Branch v<major>.<minor>.x from master. This is the "release branch", where
+  you will make your changes gearing up for release, and cherry-pick them as
+  appropriate to master.
+
+- In the master branch, update the version number in ``skimage/__init__.py``
+  and ``bento.info`` to the next ``-dev`` version, commit, and push.
+
+- Back on the release branch, update the release notes:
+
+  1. Review and cleanup ``doc/release/release_dev.txt``.
 
-  1. Review and cleanup ``doc/release/release_dev.txt``
+  2. Make a list of merges and contributors by running
+     ``doc/release/contribs.py <tag of previous release>``.
 
-     - To show a list of merges and contributors, run
-       ``doc/release/contribs.py <tag of prev release>``.
+  3. Paste this list at the end of the ``release_dev.txt``. Scan the PR titles
+     for highlights, deprecations, and API changes, and mention these in the
+     relevant sections of the notes.
 
-  2. Rename to ``doc/release/release_X.txt``
+  4. Rename to ``doc/release/release_<major>.<minor>.txt``
 
-  3. Copy ``doc/release/release_template.txt`` to
+  5. Copy ``doc/release/release_template.txt`` to
      ``doc/release/release_dev.txt`` for the next release.
 
-- Update the version number in ``skimage/__init__.py`` and ``bento.info`` and commit
+- Update the version number in ``skimage/__init__.py`` and ``bento.info`` and
+  commit.
 
 - Update the docs:
 
@@ -28,33 +44,41 @@ How to make a new release of ``skimage``
 
 - Add the version number as a tag in git::
 
-   git tag -s v0.X.0
+   git tag -s [-u <key-id>] v<major>.<minor>.0
 
   (If you do not have a gpg key, use -m instead; it is important for
   Debian packaging that the tags are annotated)
 
 - Push the new meta-data to github::
 
-   git push --tags origin master
+   git push --tags upstream master
 
-- Publish on PyPi::
+  (where ``upstream`` is the name of the
+   ``github.com:scikit-image/scikit-image`` repository.)
 
-   python setup.py register
-   python setup.py sdist upload
+- Build the package wheels (pre-compiled binaries) for various platforms:
 
-  Go to https://travis-ci.org/scikit-image/scikit-image-wheels, select the
-  "Current" tab, and click (on the right) on the "Restart Build" icon.  After
-  the wheels become available at http://wheels.scikit-image.org/ (approx 15
-  mins), execute ``tools/osx_wheel_upload.sh``.  Note that, if you rebuild the
-  same wheels, it can take up to 15 minutes for the the files in the http
-  directory to update to the versions that Travis-CI uploaded. You may want to
-  check the timestamps in the http directory listing to check that you will get
-  the latest version.
+  - Clone https://github.com/scikit-image/scikit-image-wheels.
+  - Update its ``.travis.yml`` file so that ``BUILD_COMMIT`` points to this
+    release tag (e.g. ``v0.13.0``).
+  - Commit and push.
+  - Wait until the corresponding wheels appear at
+    http://wheels.scikit-image.org (about 15 minutes).
 
-- Increase the version number
+- Download the wheels and upload them to PyPI (you may need to ask Stéfan or
+  Juan to give you push access to the scikit-image package on PyPI).
 
-  - In ``setup.py``, set to ``0.Xdev``.
-  - In ``bento.info``, set to ``0.X.dev0``.
+  - Make sure ``twine`` is available. You can install it with
+    ``pip install twine``.
+  - Download ``wheel-uploader`` [1]_ and place it on your PATH.
+  - Run ``tools/upload_wheels.sh``.
+
+.. [1] https://github.com/MacPython/terryfy/master/wheel-uploader
+
+- Publish the source distribution on PyPI::
+
+   python setup.py sdist
+   twine upload dist/scikit-image-<major>.<minor>.0.tar.gz
 
 - Update the web frontpage:
   The webpage is kept in a separate repo: scikit-image-web
@@ -70,19 +94,56 @@ How to make a new release of ``skimage``
 
 - Update the development docs for the new version ``0.Xdev`` just like above
 
-- Post release notes on mailing lists, blog, G+, etc.
+- Post release notes on mailing lists, blog, Twitter, etc.
 
  - scikit-image@python.org
-  - scipy-user@scipy.org
-  - scikit-learn-general@lists.sourceforge.net
-  - pythonvision@googlegroups.com
+  - scipy-user@python.org
+  - scikit-learn@python.org
 
 - Update the version and the release date on wikipedia
   https://en.wikipedia.org/wiki/Scikit-image
 
+Conda-forge
+-----------
+
+A scikit-image build recipe resides at
+http://github.com/conda-forge/scikit-image-feedstock. You should update it to
+point to the most recent release. You can do this by following these steps:
+
+- Fork the repository at http://github.com/conda-forge/scikit-image-feedstock,
+  and clone it to your machine.
+- Sprout a new branch, e.g. ``v<major>.<minor>``.
+- Find out the SHA256 hash of the source distribution. You can find this at
+  https://pypi.org/project/scikit-image/, or use the following commands:
+
+  - ``sha256sum path/to/scikit-image-*.tar.gz`` (Linux)
+  - ``shasum -a 256 dist/scikit-image-*.tar.gz`` (macOS)
+  - ``CertUtil -hashfile dist\scikit-image-*.tar.gz SHA256`` (Windows)
+
+- Edit the file ``recipe/meta.yaml``:
+
+  - Update the version number on the first line.
+  - Update the SHA256 value on line 10.
+  - If necessary, reset the build number to 0. (line 13)
+  - Update any requirements in the appropriate sections (build or run).
+    Note: don't remove ``numpy x.x``. This tells conda-smithy, conda-forge's
+    build system, that the library must be linked against NumPy at build time.
+
+- Commit the changes, push to your fork, and submit a pull request to the
+  upstream repo.
+
 Debian
 ------
 
+The below instructions remain here for completeness. However, the Debian
+scientific team has kindly taken over package maintenance. Simply follow the
+procedure described at https://www.debian.org/Bugs/Reporting to report a "bug"
+that there is a new version of scikit-image out (specifying the version
+number), with severity set to "Wishlist".
+
+If you want to take matters into your own hands for some reason, follow the
+instructions detailed below to cut a Debian release yourself.
+
 - Tag the release as per instructions above.
 - git checkout debian
 - git merge v0.x.x
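
For illustration, the SHA256 step described above can also be done from Python; a minimal sketch, where the sdist path is hypothetical::

    import hashlib

    # Hypothetical path to the sdist produced by ``python setup.py sdist``.
    sdist = "dist/scikit-image-0.13.1.tar.gz"

    digest = hashlib.sha256()
    with open(sdist, "rb") as f:
        # Read in chunks so large archives need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)

    print(digest.hexdigest())  # value to paste into recipe/meta.yaml
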
diff --git a/appveyor.yml b/appveyor.yml
index 0b9d4ba..a1dc9f8 100644
--- a/appveyor.yml
+++ b/appveyor.yml
@@ -59,7 +59,7 @@ build: false
 
 test_script:
   # Build the docs
-  - pip install sphinx
+  - pip install sphinx pytest-runner sphinx-gallery
   - SET PYTHON=%PYTHON%\\python.exe && cd doc && make html
 
   # Change to a non-source folder to make sure we run the tests on the
diff --git a/bento.info b/bento.info
index 48d8897..68ff2b3 100644
--- a/bento.info
+++ b/bento.info
@@ -1,5 +1,5 @@
 Name: scikit-image
-Version: 0.13.0
+Version: 0.13.1
 Summary: Image processing routines for SciPy
 Url: http://scikit-image.org
 DownloadUrl: http://github.com/scikit-image/scikit-image
diff --git a/doc/examples/edges/plot_active_contours.py b/doc/examples/edges/plot_active_contours.py
index 9f20680..2a3b8c9 100644
--- a/doc/examples/edges/plot_active_contours.py
+++ b/doc/examples/edges/plot_active_contours.py
@@ -4,8 +4,8 @@ Active Contour Model
 ====================
 
 The active contour model is a method to fit open or closed splines to lines or
-edges in an image. It works by minimising an energy that is in part defined by
-the image and part by the spline's shape: length and smoothness. The
+edges in an image [1]_. It works by minimising an energy that is in part
+defined by the image and part by the spline's shape: length and smoothness. The
 minimization is done implicitly in the shape energy and explicitly in the
 image energy.
 
@@ -15,13 +15,13 @@ to the edges of the face and (2) to find the darkest curve between two fixed
 points while obeying smoothness considerations. Typically it is a good idea to
 smooth images a bit before analyzing, as done in the following examples.
 
-.. [1] *Snakes: Active contour models*. Kass, M.; Witkin, A.; Terzopoulos, D.
-       International Journal of Computer Vision 1 (4): 321 (1988).
-
 We initialize a circle around the astronaut's face and use the default boundary
 condition ``bc='periodic'`` to fit a closed curve. The default parameters
 ``w_line=0, w_edge=1`` will make the curve search towards edges, such as the
 boundaries of the face.
+
+.. [1] *Snakes: Active contour models*. Kass, M.; Witkin, A.; Terzopoulos, D.
+       International Journal of Computer Vision 1 (4): 321 (1988).
 """
 
 import numpy as np
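
For context, a minimal sketch of the snake fit described in that docstring, using the astronaut image and representative (not authoritative) parameter values::

    import numpy as np
    from skimage import data
    from skimage.color import rgb2gray
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    # Smooth the grayscale image a little before fitting, as advised above.
    img = rgb2gray(data.astronaut())

    # Initial snake: a circle placed roughly around the astronaut's face.
    s = np.linspace(0, 2 * np.pi, 400)
    init = np.array([220 + 100 * np.cos(s), 100 + 100 * np.sin(s)]).T

    # Default bc='periodic' keeps the curve closed; w_edge=1 attracts it to edges.
    snake = active_contour(gaussian(img, 3), init,
                           alpha=0.015, beta=10, gamma=0.001)
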
diff --git a/doc/examples/edges/plot_line_hough_transform.py b/doc/examples/edges/plot_line_hough_transform.py
index e516c10..1ae74bb 100644
--- a/doc/examples/edges/plot_line_hough_transform.py
+++ b/doc/examples/edges/plot_line_hough_transform.py
@@ -3,7 +3,8 @@
 Straight line Hough transform
 =============================
 
-The Hough transform in its simplest form is a method to detect straight lines.
+The Hough transform in its simplest form is a method to detect straight lines
+[1]_.
 
 In the following example, we construct an image with a line intersection. We
 then use the `Hough transform  <http://en.wikipedia.org/wiki/Hough_transform>`__.
@@ -31,7 +32,7 @@ local maxima in the resulting histogram indicates the parameters of the most
 probably lines. In our example, the maxima occur at 45 and 135 degrees,
 corresponding to the normal vector angles of each line.
 
-Another approach is the Progressive Probabilistic Hough Transform [1]_. It is
+Another approach is the Progressive Probabilistic Hough Transform [2]_. It is
 based on the assumption that using a random subset of voting points give a good
 approximation to the actual result, and that lines can be extracted during the
 voting process by walking along connected components. This returns the
@@ -45,13 +46,14 @@ than 10 with a gap less than 3 pixels.
 References
 ----------
 
-.. [1] C. Galamhos, J. Matas and J. Kittler,"Progressive probabilistic
+.. [1] Duda, R. O. and P. E. Hart, "Use of the Hough Transformation to
+       Detect Lines and Curves in Pictures," Comm. ACM, Vol. 15,
+       pp. 11-15 (January, 1972)
+
+.. [2] C. Galamhos, J. Matas and J. Kittler,"Progressive probabilistic
        Hough transform for line detection", in IEEE Computer Society
        Conference on Computer Vision and Pattern Recognition, 1999.
 
-.. [2] Duda, R. O. and P. E. Hart, "Use of the Hough Transformation to
-       Detect Lines and Curves in Pictures," Comm. ACM, Vol. 15,
-       pp. 11-15 (January, 1972)
 """
 import numpy as np
 
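
A minimal sketch of both transforms discussed in that example, on a toy image with a single line intersection (threshold and segment parameters are illustrative)::

    import numpy as np
    from skimage.transform import (hough_line, hough_line_peaks,
                                   probabilistic_hough_line)

    # Toy image containing two straight lines that intersect.
    image = np.zeros((100, 100))
    image[25, :] = 1
    image[:, 60] = 1

    # Classical Hough transform: accumulator plus sampled angles and distances.
    h, theta, d = hough_line(image)
    accum, angles, dists = hough_line_peaks(h, theta, d)

    # Progressive probabilistic variant: returns line segments directly,
    # keeping segments longer than 10 px with gaps smaller than 3 px.
    segments = probabilistic_hough_line(image, threshold=10,
                                        line_length=10, line_gap=3)
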
diff --git a/doc/examples/edges/plot_skeleton.py b/doc/examples/edges/plot_skeleton.py
index ca365db..29a9d3a 100644
--- a/doc/examples/edges/plot_skeleton.py
+++ b/doc/examples/edges/plot_skeleton.py
@@ -60,6 +60,16 @@ plt.show()
 #
 # Note that ``skeletonize_3d`` is designed to be used mostly on 3-D images.
 # However, for illustrative purposes, we apply this algorithm on a 2-D image.
+#
+# .. [Zha84] A fast parallel algorithm for thinning digital patterns,
+#            T. Y. Zhang and C. Y. Suen, Communications of the ACM,
+#            March 1984, Volume 27, Number 3.
+#
+# .. [Lee94] T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models
+#            via 3-D medial surface/axis thinning algorithms.
+#            Computer Vision, Graphics, and Image Processing, 56(6):462-478,
+#            1994.
+#
 
 import matplotlib.pyplot as plt
 from skimage.morphology import skeletonize, skeletonize_3d
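
For illustration, a small sketch applying both thinning variants to a synthetic 2-D binary shape (the shape is made up for the example)::

    import numpy as np
    from skimage.morphology import skeletonize, skeletonize_3d

    # A thick binary cross; thinning reduces it to one-pixel-wide lines.
    image = np.zeros((60, 60), dtype=bool)
    image[25:35, 5:55] = True
    image[5:55, 25:35] = True

    skeleton = skeletonize(image)        # 2-D thinning [Zha84]
    skeleton_3d = skeletonize_3d(image)  # 3-D algorithm applied to a 2-D image [Lee94]
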
diff --git a/doc/examples/features_detection/plot_corner.py b/doc/examples/features_detection/plot_corner.py
index 4e962e1..8be3cec 100644
--- a/doc/examples/features_detection/plot_corner.py
+++ b/doc/examples/features_detection/plot_corner.py
@@ -3,8 +3,8 @@
 Corner detection
 ================
 
-Detect corner points using the Harris corner detector and determine subpixel
-position of corners.
+Detect corner points using the Harris corner detector and determine the
+subpixel position of corners ([1]_, [2]_).
 
 .. [1] http://en.wikipedia.org/wiki/Corner_detection
 .. [2] http://en.wikipedia.org/wiki/Interest_point_detection
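
A minimal sketch of that detection pipeline, using a checkerboard test image (parameter values are illustrative)::

    from skimage import data
    from skimage.feature import corner_harris, corner_peaks, corner_subpix

    image = data.checkerboard()

    # Harris response -> pixel-accurate peaks -> subpixel refinement.
    coords = corner_peaks(corner_harris(image), min_distance=5)
    coords_subpix = corner_subpix(image, coords, window_size=13)
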
diff --git a/doc/examples/filters/plot_deconvolution.py b/doc/examples/filters/plot_deconvolution.py
index cc377b8..9a7a22f 100644
--- a/doc/examples/filters/plot_deconvolution.py
+++ b/doc/examples/filters/plot_deconvolution.py
@@ -3,14 +3,14 @@
 Image Deconvolution
 =====================
 In this example, we deconvolve an image using Richardson-Lucy
-deconvolution algorithm.
+deconvolution algorithm ([1]_, [2]_).
 
 The algorithm is based on a PSF (Point Spread Function),
-where PSF is described as the impulse response of the 
-optical system. The blurred image is sharpened through a number of 
+where PSF is described as the impulse response of the
+optical system. The blurred image is sharpened through a number of
 iterations, which needs to be hand-tuned.
 
-.. [1] William Hadley Richardson, "Bayesian-Based Iterative 
+.. [1] William Hadley Richardson, "Bayesian-Based Iterative
        Method of Image Restoration",
        J. Opt. Soc. Am. A 27, 1593-1607 (1972), DOI:10.1364/JOSA.62.000055
 
@@ -27,7 +27,7 @@ astro = color.rgb2gray(data.astronaut())
 
 psf = np.ones((5, 5)) / 25
 astro = conv2(astro, psf, 'same')
-# Add Noise to Image 
+# Add Noise to Image
 astro_noisy = astro.copy()
 astro_noisy += (np.random.poisson(lam=25, size=astro.shape) - 10) / 255.
 
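
A compact sketch of the deconvolution set up in that example (the iteration count is a representative, hand-tuned value)::

    import numpy as np
    from scipy.signal import convolve2d
    from skimage import color, data, restoration

    # Blur with a flat 5x5 PSF and add Poisson-like noise, as in the example.
    astro = color.rgb2gray(data.astronaut())
    psf = np.ones((5, 5)) / 25
    blurred = convolve2d(astro, psf, 'same')
    noisy = blurred + (np.random.poisson(lam=25, size=blurred.shape) - 10) / 255.

    # Richardson-Lucy deconvolution; iterations need hand-tuning.
    deconvolved = restoration.richardson_lucy(noisy, psf, iterations=30)
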
diff --git a/doc/examples/filters/plot_inpaint.py b/doc/examples/filters/plot_inpaint.py
index 17faba2..7d715b0 100644
--- a/doc/examples/filters/plot_inpaint.py
+++ b/doc/examples/filters/plot_inpaint.py
@@ -16,8 +16,7 @@ inpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_.
 .. [2]  Wikipedia. Biharmonic equation
         https://en.wikipedia.org/wiki/Biharmonic_equation
 .. [3]  N.S.Hoang, S.B.Damelin, "On surface completion and image
-        inpainting by biharmonic functions: numerical aspects",
-        http://www.ima.umn.edu/~damelin/biharmonic
+        inpainting by biharmonic functions: numerical aspects"
 """
 
 import numpy as np
@@ -39,7 +38,8 @@ image_defect = image_orig.copy()
 for layer in range(image_defect.shape[-1]):
     image_defect[np.where(mask)] = 0
 
-image_result = inpaint.inpaint_biharmonic(image_defect, mask, multichannel=True)
+image_result = inpaint.inpaint_biharmonic(image_defect, mask,
+                                          multichannel=True)
 
 fig, axes = plt.subplots(ncols=2, nrows=2)
 ax = axes.ravel()
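
A minimal sketch of the biharmonic inpainting call shown above (the defect region is chosen arbitrarily)::

    import numpy as np
    from skimage import data
    from skimage.restoration import inpaint

    image = data.astronaut()

    # Arbitrary rectangular defect to be filled in.
    mask = np.zeros(image.shape[:-1], dtype=bool)
    mask[20:60, 100:140] = True

    defect = image.copy()
    defect[mask] = 0

    restored = inpaint.inpaint_biharmonic(defect, mask, multichannel=True)
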
diff --git a/doc/examples/filters/plot_restoration.py b/doc/examples/filters/plot_restoration.py
index f00b628..e9a16d0 100644
--- a/doc/examples/filters/plot_restoration.py
+++ b/doc/examples/filters/plot_restoration.py
@@ -20,7 +20,7 @@ Unsupervised Wiener
 -------------------
 This algorithm has a self-tuned regularisation parameters based on
 data learning. This is not common and based on the following
-publication. The algorithm is based on a iterative Gibbs sampler that
+publication [1]_. The algorithm is based on an iterative Gibbs sampler that
 draw alternatively samples of posterior conditional law of the image,
 the noise power and the image frequency power.
 
diff --git a/doc/examples/segmentation/plot_ncut.py b/doc/examples/segmentation/plot_ncut.py
index fa8133b..e875831 100644
--- a/doc/examples/segmentation/plot_ncut.py
+++ b/doc/examples/segmentation/plot_ncut.py
@@ -4,7 +4,7 @@ Normalized Cut
 ==============
 
 This example constructs a Region Adjacency Graph (RAG) and recursively performs
-a Normalized Cut on it.
+a Normalized Cut on it [1]_.
 
 References
 ----------
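
A minimal sketch of the RAG plus normalized-cut pipeline described in that example (the SLIC parameters are illustrative)::

    from skimage import data, segmentation
    from skimage.future import graph

    img = data.coffee()

    # Over-segment first, then build a RAG and cut it recursively.
    labels = segmentation.slic(img, compactness=30, n_segments=400)
    rag = graph.rag_mean_color(img, labels, mode='similarity')
    labels_ncut = graph.cut_normalized(labels, rag)
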
diff --git a/doc/examples/segmentation/plot_niblack_sauvola.py b/doc/examples/segmentation/plot_niblack_sauvola.py
index 78b0bad..b03314d 100644
--- a/doc/examples/segmentation/plot_niblack_sauvola.py
+++ b/doc/examples/segmentation/plot_niblack_sauvola.py
@@ -5,10 +5,10 @@ Niblack and Sauvola Thresholding
 
 Niblack and Sauvola thresholds are local thresholding techniques that are
 useful for images where the background is not uniform, especially for text
-recognition. Instead of calculating a single global threshold for the entire
-image, several thresholds are calculated for every pixel by using specific
-formulae that take into account the mean and standard deviation of the local
-neighborhood (defined by a window centered around the pixel).
+recognition [1]_, [2]_. Instead of calculating a single global threshold for
+the entire image, several thresholds are calculated for every pixel by using
+specific formulae that take into account the mean and standard deviation of the
+local neighborhood (defined by a window centered around the pixel).
 
 Here, we binarize an image using these algorithms compare it to a common global
 thresholding technique. Parameter `window_size` determines the size of the
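
A short sketch comparing the two local thresholds with a global one, using the text-like ``page`` image (window size and ``k`` are representative values)::

    from skimage import data
    from skimage.filters import (threshold_niblack, threshold_otsu,
                                 threshold_sauvola)

    image = data.page()

    # One threshold per pixel, computed from a window centered on it.
    binary_niblack = image > threshold_niblack(image, window_size=25, k=0.8)
    binary_sauvola = image > threshold_sauvola(image, window_size=25)

    # Single global threshold for comparison.
    binary_global = image > threshold_otsu(image)
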
diff --git a/doc/examples/xx_applications/plot_rank_filters.py b/doc/examples/xx_applications/plot_rank_filters.py
index a600569..4fd519e 100644
--- a/doc/examples/xx_applications/plot_rank_filters.py
+++ b/doc/examples/xx_applications/plot_rank_filters.py
@@ -4,10 +4,10 @@ Rank filters
 ============
 
 Rank filters are non-linear filters using the local gray-level ordering to
-compute the filtered value. This ensemble of filters share a common base: the
-local gray-level histogram is computed on the neighborhood of a pixel (defined
-by a 2-D structuring element). If the filtered value is taken as the middle
-value of the histogram, we get the classical median filter.
+compute the filtered value [1]_. This ensemble of filters share a common base:
+the local gray-level histogram is computed on the neighborhood of a pixel
+(defined by a 2-D structuring element). If the filtered value is taken as the
+middle value of the histogram, we get the classical median filter.
 
 Rank filters can be used for several purposes such as:
 
@@ -354,7 +354,7 @@ plt.tight_layout()
 # Image threshold
 # ===============
 #
-# The Otsu threshold [1]_ method can be applied locally using the local gray-
+# The Otsu threshold [4]_ method can be applied locally using the local gray-
 # level distribution. In the example below, for each pixel, an "optimal"
 # threshold is determined by maximizing the variance between two classes of
 # pixels of the local neighborhood defined by a structuring element.
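
A minimal sketch of the local Otsu threshold computed as a rank filter over a disk-shaped neighbourhood (the radius is illustrative)::

    from skimage import data
    from skimage.filters import rank
    from skimage.morphology import disk
    from skimage.util import img_as_ubyte

    # Rank filters expect integer (8- or 16-bit) images.
    img = img_as_ubyte(data.page())

    # Local Otsu threshold over a disk of radius 15, then binarize.
    local_otsu = rank.otsu(img, disk(15))
    binary = img >= local_otsu
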
diff --git a/doc/ext/sphinx_gallery/LICENSE b/doc/ext/sphinx_gallery/LICENSE
deleted file mode 100644
index 1ad8acd..0000000
--- a/doc/ext/sphinx_gallery/LICENSE
+++ /dev/null
@@ -1,27 +0,0 @@
-Copyright (c) 2015, Óscar Nájera
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this
-  list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright notice,
-  this list of conditions and the following disclaimer in the documentation
-  and/or other materials provided with the distribution.
-
-* Neither the name of sphinx-gallery nor the names of its
-  contributors may be used to endorse or promote products derived from
-  this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/doc/ext/sphinx_gallery/README.txt b/doc/ext/sphinx_gallery/README.txt
deleted file mode 100644
index a9cd77a..0000000
--- a/doc/ext/sphinx_gallery/README.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-This directory was taken from
-https://github.com/sphinx-gallery/sphinx-gallery
-
-Files should not diverge from the original sphinx-gallery project, and
-any modifications should be submitted as pull requests to sphinx-gallery
-(with some exceptions such as CSS tweaking).
diff --git a/doc/ext/sphinx_gallery/__init__.py b/doc/ext/sphinx_gallery/__init__.py
deleted file mode 100644
index 22464dd..0000000
--- a/doc/ext/sphinx_gallery/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-"""
-==============
-Sphinx Gallery
-==============
-
-"""
-import os
-__version__ = '0.1.8'
-
-
-def glr_path_static():
-    """Returns path to packaged static files"""
-    return os.path.abspath(os.path.join(os.path.dirname(__file__), '_static'))
diff --git a/doc/ext/sphinx_gallery/_static/broken_example.png b/doc/ext/sphinx_gallery/_static/broken_example.png
deleted file mode 100644
index 4fea24e..0000000
Binary files a/doc/ext/sphinx_gallery/_static/broken_example.png and /dev/null differ
diff --git a/doc/ext/sphinx_gallery/_static/broken_stamp.svg b/doc/ext/sphinx_gallery/_static/broken_stamp.svg
deleted file mode 100644
index 3aa3671..0000000
--- a/doc/ext/sphinx_gallery/_static/broken_stamp.svg
+++ /dev/null
@@ -1,90 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   xmlns:xlink="http://www.w3.org/1999/xlink"
-   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
-   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
-   width="400"
-   height="280"
-   viewBox="0 0 400 280"
-   id="svg2"
-   version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="broken_stamp.svg"
-   inkscape:export-filename="/home/oscar/dev/sphinx-gallery/sphinx_gallery/_static/broken_example.png"
-   inkscape:export-xdpi="72"
-   inkscape:export-ydpi="72">
-  <defs
-     id="defs4">
-    <linearGradient
-       inkscape:collect="always"
-       id="linearGradient4205">
-      <stop
-         style="stop-color:#ff0000;stop-opacity:1;"
-         offset="0"
-         id="stop4207" />
-      <stop
-         style="stop-color:#ff0000;stop-opacity:0;"
-         offset="1"
-         id="stop4209" />
-    </linearGradient>
-    <linearGradient
-       inkscape:collect="always"
-       xlink:href="#linearGradient4205"
-       id="linearGradient4211"
-       x1="6.2696795"
-       y1="116.88912"
-       x2="578.68317"
-       y2="216.17484"
-       gradientUnits="userSpaceOnUse"
-       gradientTransform="matrix(0.7714414,-0.4453919,0.4453919,0.7714414,-16.317178,892.32964)" />
-  </defs>
-  <sodipodi:namedview
-     id="base"
-     pagecolor="#ffffff"
-     bordercolor="#666666"
-     borderopacity="1.0"
-     inkscape:pageopacity="0.0"
-     inkscape:pageshadow="2"
-     inkscape:zoom="2.0446043"
-     inkscape:cx="165.48402"
-     inkscape:cy="111.71285"
-     inkscape:document-units="px"
-     inkscape:current-layer="layer1"
-     showgrid="false"
-     units="px"
-     inkscape:window-width="1920"
-     inkscape:window-height="995"
-     inkscape:window-x="0"
-     inkscape:window-y="56"
-     inkscape:window-maximized="1" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     inkscape:label="Layer 1"
-     inkscape:groupmode="layer"
-     id="layer1"
-     transform="translate(0,-772.36216)">
-    <path
-       style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonze [...]
-       d="m 22.292172,952.38989 0.222696,0.38572 6.119789,10.59978 c 0.04122,-0.52683 0.07809,-1.05427 0.115533,-1.58145 l -5.241184,-9.07801 5.342834,-3.08468 c -0.03197,-0.32506 -0.06216,-0.65069 -0.09131,-0.97586 l -6.468355,3.7345 z m 10.15229,-5.86143 c 0.007,0.33992 0.01351,0.67933 0.01597,1.01937 l 16.314781,-9.41934 c 0.217182,-0.46922 0.435355,-0.93818 0.654515,-1.40648 l -16.985272,9.80645 z m 18.538701,-10.70332 c -0.214245,0.46595 -0.428695,0.93178 -0.640954,1.39864 l 0.93567 [...]
-       id="rect4138"
-       inkscape:connector-curvature="0" />
-  </g>
-</svg>
diff --git a/doc/ext/sphinx_gallery/_static/gallery.css b/doc/ext/sphinx_gallery/_static/gallery.css
deleted file mode 100644
index 37047a9..0000000
--- a/doc/ext/sphinx_gallery/_static/gallery.css
+++ /dev/null
@@ -1,192 +0,0 @@
-/*
-Sphinx-Gallery has compatible CSS to fix default sphinx themes
-Tested for Sphinx 1.3.1 for all themes: default, alabaster, sphinxdoc,
-scrolls, agogo, traditional, nature, haiku, pyramid
-Tested for Read the Docs theme 0.1.7 */
-.sphx-glr-thumbcontainer {
-  background: #fff;
-  border: solid #fff 1px;
-  -moz-border-radius: 5px;
-  -webkit-border-radius: 5px;
-  border-radius: 5px;
-  box-shadow: none;
-  float: left;
-  margin: 5px;
-  min-height: 230px;
-  padding-top: 5px;
-  position: relative;
-}
-.sphx-glr-thumbcontainer:hover {
-  border: solid #b4ddfc 1px;
-  box-shadow: 0 0 15px rgba(142, 176, 202, 0.5);
-}
-.sphx-glr-thumbcontainer a.internal {
-  bottom: 0;
-  display: block;
-  left: 0;
-  padding: 150px 10px 0;
-  position: absolute;
-  right: 0;
-  top: 0;
-}
-/* Next one is to avoid Sphinx traditional theme to cover all the
-thumbnail with its default link Background color */
-.sphx-glr-thumbcontainer a.internal:hover {
-  background-color: transparent;
-}
-
-.sphx-glr-thumbcontainer p {
-  margin: 0 0 .1em 0;
-}
-.sphx-glr-thumbcontainer .figure {
-  margin: 10px;
-  width: 160px;
-}
-.sphx-glr-thumbcontainer img {
-  display: inline;
-  max-height: 160px;
-  width: 160px;
-}
-.sphx-glr-thumbcontainer[tooltip]:hover:after {
-  background: rgba(0, 0, 0, 0.8);
-  -webkit-border-radius: 5px;
-  -moz-border-radius: 5px;
-  border-radius: 5px;
-  color: #fff;
-  content: attr(tooltip);
-  left: 95%;
-  padding: 5px 15px;
-  position: absolute;
-  z-index: 98;
-  width: 220px;
-  bottom: 52%;
-}
-.sphx-glr-thumbcontainer[tooltip]:hover:before {
-  border: solid;
-  border-color: #333 transparent;
-  border-width: 18px 0 0 20px;
-  bottom: 58%;
-  content: '';
-  left: 85%;
-  position: absolute;
-  z-index: 99;
-}
-
-.highlight-pytb pre {
-  background-color: #ffe4e4;
-  border: 1px solid #f66;
-  margin-top: 10px;
-  padding: 7px;
-}
-
-.sphx-glr-script-out {
-  color: #888;
-  margin: 0;
-}
-.sphx-glr-script-out .highlight {
-  background-color: transparent;
-  margin-left: 2.5em;
-  margin-top: -1.4em;
-}
-.sphx-glr-script-out .highlight pre {
-  background-color: #fafae2;
-  border: 0;
-  max-height: 30em;
-  overflow: auto;
-  padding-left: 1ex;
-  margin: 0px;
-  word-break: break-word;
-}
-.sphx-glr-script-out + p {
-  margin-top: 1.8em;
-}
-blockquote.sphx-glr-script-out {
-  margin-left: 0pt;
-}
-
-div.sphx-glr-footer {
-    text-align: center;
-}
-
-div.sphx-glr-download {
-  display: inline-block;
-  margin: 1em auto 1ex 2ex;
-  vertical-align: middle;
-}
-
-div.sphx-glr-download a {
-  background-color: #ffc;
-  background-image: linear-gradient(to bottom, #FFC, #d5d57e);
-  border-radius: 4px;
-  border: 1px solid #c2c22d;
-  color: #000;
-  display: inline-block;
-  /* Not valid in old browser, hence we keep the line above to override */
-  display: table-caption;
-  font-weight: bold;
-  padding: 1ex;
-  text-align: center;
-}
-
-/* The last child of a download button is the file name */
-div.sphx-glr-download a span:last-child {
-    font-size: smaller;
-}
-
-@media (min-width: 20em) {
-    div.sphx-glr-download a {
-	min-width: 10em;
-    }
-}
-
-@media (min-width: 30em) {
-    div.sphx-glr-download a {
-	min-width: 13em;
-    }
-}
-
-@media (min-width: 40em) {
-    div.sphx-glr-download a {
-	min-width: 16em;
-    }
-}
-
-
-div.sphx-glr-download code.download {
-  display: inline-block;
-  white-space: normal;
-  word-break: normal;
-  overflow-wrap: break-word;
-  /* border and background are given by the enclosing 'a' */
-  border: none;
-  background: none;
-}
-
-div.sphx-glr-download a:hover {
-  box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25);
-  text-decoration: none;
-  background-image: none;
-  background-color: #d5d57e;
-}
-
-ul.sphx-glr-horizontal {
-  list-style: none;
-  padding: 0;
-}
-ul.sphx-glr-horizontal li {
-  display: inline;
-}
-ul.sphx-glr-horizontal img {
-  height: auto !important;
-}
-
-p.sphx-glr-signature a.reference.external {
-  -moz-border-radius: 5px;
-  -webkit-border-radius: 5px;
-  border-radius: 5px;
-  padding: 3px;
-  font-size: 75%;
-  text-align: right;
-  margin-left: auto;
-  display: table;
-}
diff --git a/doc/ext/sphinx_gallery/_static/no_image.png b/doc/ext/sphinx_gallery/_static/no_image.png
deleted file mode 100644
index 8c2d48d..0000000
Binary files a/doc/ext/sphinx_gallery/_static/no_image.png and /dev/null differ
diff --git a/doc/ext/sphinx_gallery/backreferences.py b/doc/ext/sphinx_gallery/backreferences.py
deleted file mode 100644
index 2a4a8e0..0000000
--- a/doc/ext/sphinx_gallery/backreferences.py
+++ /dev/null
@@ -1,193 +0,0 @@
-# -*- coding: utf-8 -*-
-# Author: Óscar Nájera
-# License: 3-clause BSD
-"""
-========================
-Backreferences Generator
-========================
-
-Reviews generated example files in order to keep track of used modules
-"""
-
-from __future__ import print_function
-import ast
-import os
-
-
-# Try Python 2 first, otherwise load from Python 3
-try:
-    import cPickle as pickle
-except ImportError:
-    import pickle
-
-
-class NameFinder(ast.NodeVisitor):
-    """Finds the longest form of variable names and their imports in code
-
-    Only retains names from imported modules.
-    """
-
-    def __init__(self):
-        super(NameFinder, self).__init__()
-        self.imported_names = {}
-        self.accessed_names = set()
-
-    def visit_Import(self, node, prefix=''):
-        for alias in node.names:
-            local_name = alias.asname or alias.name
-            self.imported_names[local_name] = prefix + alias.name
-
-    def visit_ImportFrom(self, node):
-        self.visit_Import(node, node.module + '.')
-
-    def visit_Name(self, node):
-        self.accessed_names.add(node.id)
-
-    def visit_Attribute(self, node):
-        attrs = []
-        while isinstance(node, ast.Attribute):
-            attrs.append(node.attr)
-            node = node.value
-
-        if isinstance(node, ast.Name):
-            # This is a.b, not e.g. a().b
-            attrs.append(node.id)
-            self.accessed_names.add('.'.join(reversed(attrs)))
-        else:
-            # need to get a in a().b
-            self.visit(node)
-
-    def get_mapping(self):
-        for name in self.accessed_names:
-            local_name = name.split('.', 1)[0]
-            remainder = name[len(local_name):]
-            if local_name in self.imported_names:
-                # Join import path to relative path
-                full_name = self.imported_names[local_name] + remainder
-                yield name, full_name
-
-
-def get_short_module_name(module_name, obj_name):
-    """ Get the shortest possible module name """
-    parts = module_name.split('.')
-    short_name = module_name
-    for i in range(len(parts) - 1, 0, -1):
-        short_name = '.'.join(parts[:i])
-        try:
-            exec('from %s import %s' % (short_name, obj_name))
-        except Exception:  # libraries can throw all sorts of exceptions...
-            # get the last working module name
-            short_name = '.'.join(parts[:(i + 1)])
-            break
-    return short_name
-
-
-def identify_names(code):
-    """Builds a codeobj summary by identifying and resolving used names
-
-    >>> code = '''
-    ... from a.b import c
-    ... import d as e
-    ... print(c)
-    ... e.HelloWorld().f.g
-    ... '''
-    >>> for name, o in sorted(identify_names(code).items()):
-    ...     print(name, o['name'], o['module'], o['module_short'])
-    c c a.b a.b
-    e.HelloWorld HelloWorld d d
-    """
-    finder = NameFinder()
-    try:
-        finder.visit(ast.parse(code))
-    except SyntaxError:
-        return {}
-
-    example_code_obj = {}
-    for name, full_name in finder.get_mapping():
-        # name is as written in file (e.g. np.asarray)
-        # full_name includes resolved import path (e.g. numpy.asarray)
-        splitted = full_name.rsplit('.', 1)
-        if len(splitted) == 1:
-            # module without attribute. This is not useful for
-            # backreferences
-            continue
-
-        module, attribute = splitted
-        # get shortened module name
-        module_short = get_short_module_name(module, attribute)
-        cobj = {'name': attribute, 'module': module,
-                'module_short': module_short}
-        example_code_obj[name] = cobj
-    return example_code_obj
-
-
-def scan_used_functions(example_file, gallery_conf):
-    """save variables so we can later add links to the documentation"""
-    example_code_obj = identify_names(open(example_file).read())
-    if example_code_obj:
-        codeobj_fname = example_file[:-3] + '_codeobj.pickle'
-        with open(codeobj_fname, 'wb') as fid:
-            pickle.dump(example_code_obj, fid, pickle.HIGHEST_PROTOCOL)
-
-    backrefs = set('{module_short}.{name}'.format(**entry)
-                   for entry in example_code_obj.values()
-                   if entry['module'].startswith(gallery_conf['doc_module']))
-
-    return backrefs
-
-
-# XXX This figure:: uses a forward slash even on Windows, but the op.join's
-# elsewhere will use backslashes...
-THUMBNAIL_TEMPLATE = """
-.. raw:: html
-
-    <div class="sphx-glr-thumbcontainer" tooltip="{snippet}">
-
-.. only:: html
-
-    .. figure:: /{thumbnail}
-
-        :ref:`sphx_glr_{ref_name}`
-
-.. raw:: html
-
-    </div>
-"""
-
-BACKREF_THUMBNAIL_TEMPLATE = THUMBNAIL_TEMPLATE + """
-.. only:: not html
-
-    * :ref:`sphx_glr_{ref_name}`
-"""
-
-
-def _thumbnail_div(full_dir, fname, snippet, is_backref=False):
-    """Generates RST to place a thumbnail in a gallery"""
-    thumb = os.path.join(full_dir, 'images', 'thumb',
-                         'sphx_glr_%s_thumb.png' % fname[:-3])
-    ref_name = os.path.join(full_dir, fname).replace(os.path.sep, '_')
-
-    template = BACKREF_THUMBNAIL_TEMPLATE if is_backref else THUMBNAIL_TEMPLATE
-    return template.format(snippet=snippet, thumbnail=thumb, ref_name=ref_name)
-
-
-def write_backreferences(seen_backrefs, gallery_conf,
-                         target_dir, fname, snippet):
-    """Writes down back reference files, which include a thumbnail list
-    of examples using a certain module"""
-    example_file = os.path.join(target_dir, fname)
-    build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])
-    backrefs = scan_used_functions(example_file, gallery_conf)
-    for backref in backrefs:
-        include_path = os.path.join(gallery_conf['src_dir'],
-                                    gallery_conf['mod_example_dir'],
-                                    '%s.examples' % backref)
-        seen = backref in seen_backrefs
-        with open(include_path, 'a' if seen else 'w') as ex_file:
-            if not seen:
-                heading = '\n\nExamples using ``%s``' % backref
-                ex_file.write(heading + '\n')
-                ex_file.write('^' * len(heading) + '\n')
-            ex_file.write(_thumbnail_div(build_target_dir, fname, snippet,
-                                         is_backref=True))
-            seen_backrefs.add(backref)
diff --git a/doc/ext/sphinx_gallery/docs_resolv.py b/doc/ext/sphinx_gallery/docs_resolv.py
deleted file mode 100644
index 2f12eb9..0000000
--- a/doc/ext/sphinx_gallery/docs_resolv.py
+++ /dev/null
@@ -1,462 +0,0 @@
-# -*- coding: utf-8 -*-
-# Author: Óscar Nájera
-# License: 3-clause BSD
-###############################################################################
-# Documentation link resolver objects
-from __future__ import print_function
-import gzip
-import os
-import posixpath
-import re
-import shelve
-import sys
-
-from sphinx.util.console import fuchsia
-
-# Try Python 2 first, otherwise load from Python 3
-try:
-    import cPickle as pickle
-    import urllib2 as urllib
-    from urllib2 import HTTPError, URLError
-except ImportError:
-    import pickle
-    import urllib.request
-    import urllib.error
-    import urllib.parse
-    from urllib.error import HTTPError, URLError
-
-from io import StringIO
-
-
-def _get_data(url):
-    """Helper function to get data over http or from a local file"""
-    if url.startswith('http://'):
-        # Try Python 2, use Python 3 on exception
-        try:
-            resp = urllib.urlopen(url)
-            encoding = resp.headers.dict.get('content-encoding', 'plain')
-        except AttributeError:
-            resp = urllib.request.urlopen(url)
-            encoding = resp.headers.get('content-encoding', 'plain')
-        data = resp.read()
-        if encoding == 'plain':
-            pass
-        elif encoding == 'gzip':
-            data = StringIO(data)
-            data = gzip.GzipFile(fileobj=data).read()
-        else:
-            raise RuntimeError('unknown encoding')
-    else:
-        with open(url, 'r') as fid:
-            data = fid.read()
-
-    return data
-
-
-def get_data(url, gallery_dir):
-    """Persistent dictionary usage to retrieve the search indexes"""
-
-    # shelve keys need to be str in python 2
-    if sys.version_info[0] == 2 and isinstance(url, unicode):
-        url = url.encode('utf-8')
-
-    cached_file = os.path.join(gallery_dir, 'searchindex')
-    search_index = shelve.open(cached_file)
-    if url in search_index:
-        data = search_index[url]
-    else:
-        data = _get_data(url)
-        search_index[url] = data
-    search_index.close()
-
-    return data
-
-
-def _select_block(str_in, start_tag, end_tag):
-    """Select first block delimited by start_tag and end_tag"""
-    start_pos = str_in.find(start_tag)
-    if start_pos < 0:
-        raise ValueError('start_tag not found')
-    depth = 0
-    for pos in range(start_pos, len(str_in)):
-        if str_in[pos] == start_tag:
-            depth += 1
-        elif str_in[pos] == end_tag:
-            depth -= 1
-
-        if depth == 0:
-            break
-    sel = str_in[start_pos + 1:pos]
-    return sel
-
-
-def _parse_dict_recursive(dict_str):
-    """Parse a dictionary from the search index"""
-    dict_out = dict()
-    pos_last = 0
-    pos = dict_str.find(':')
-    while pos >= 0:
-        key = dict_str[pos_last:pos]
-        if dict_str[pos + 1] == '[':
-            # value is a list
-            pos_tmp = dict_str.find(']', pos + 1)
-            if pos_tmp < 0:
-                raise RuntimeError('error when parsing dict')
-            value = dict_str[pos + 2: pos_tmp].split(',')
-            # try to convert elements to int
-            for i in range(len(value)):
-                try:
-                    value[i] = int(value[i])
-                except ValueError:
-                    pass
-        elif dict_str[pos + 1] == '{':
-            # value is another dictionary
-            subdict_str = _select_block(dict_str[pos:], '{', '}')
-            value = _parse_dict_recursive(subdict_str)
-            pos_tmp = pos + len(subdict_str)
-        else:
-            raise ValueError('error when parsing dict: unknown elem')
-
-        key = key.strip('"')
-        if len(key) > 0:
-            dict_out[key] = value
-
-        pos_last = dict_str.find(',', pos_tmp)
-        if pos_last < 0:
-            break
-        pos_last += 1
-        pos = dict_str.find(':', pos_last)
-
-    return dict_out
-
-
-def parse_sphinx_searchindex(searchindex):
-    """Parse a Sphinx search index
-
-    Parameters
-    ----------
-    searchindex : str
-        The Sphinx search index (contents of searchindex.js)
-
-    Returns
-    -------
-    filenames : list of str
-        The file names parsed from the search index.
-    objects : dict
-        The objects parsed from the search index.
-    """
-    # Make sure searchindex uses UTF-8 encoding
-    if hasattr(searchindex, 'decode'):
-        searchindex = searchindex.decode('UTF-8')
-
-    # parse objects
-    query = 'objects:'
-    pos = searchindex.find(query)
-    if pos < 0:
-        raise ValueError('"objects:" not found in search index')
-
-    sel = _select_block(searchindex[pos:], '{', '}')
-    objects = _parse_dict_recursive(sel)
-
-    # parse filenames
-    query = 'filenames:'
-    pos = searchindex.find(query)
-    if pos < 0:
-        raise ValueError('"filenames:" not found in search index')
-    filenames = searchindex[pos + len(query) + 1:]
-    filenames = filenames[:filenames.find(']')]
-    filenames = [f.strip('"') for f in filenames.split(',')]
-
-    return filenames, objects
-
-
-class SphinxDocLinkResolver(object):
-    """ Resolve documentation links using searchindex.js generated by Sphinx
-
-    Parameters
-    ----------
-    doc_url : str
-        The base URL of the project website.
-    searchindex : str
-        Filename of searchindex, relative to doc_url.
-    extra_modules_test : list of str
-        List of extra module names to test.
-    relative : bool
-        Return relative links (only useful for links to documentation of this
-        package).
-    """
-
-    def __init__(self, doc_url, gallery_dir, searchindex='searchindex.js',
-                 extra_modules_test=None, relative=False):
-        self.doc_url = doc_url
-        self.gallery_dir = gallery_dir
-        self.relative = relative
-        self._link_cache = {}
-
-        self.extra_modules_test = extra_modules_test
-        self._page_cache = {}
-        if doc_url.startswith('http://'):
-            if relative:
-                raise ValueError('Relative links are only supported for local '
-                                 'URLs (doc_url cannot start with "http://)"')
-            searchindex_url = doc_url + '/' + searchindex
-        else:
-            searchindex_url = os.path.join(doc_url, searchindex)
-
-        # detect if we are using relative links on a Windows system
-        if os.name.lower() == 'nt' and not doc_url.startswith('http://'):
-            if not relative:
-                raise ValueError('You have to use relative=True for the local'
-                                 ' package on a Windows system.')
-            self._is_windows = True
-        else:
-            self._is_windows = False
-
-        # download and initialize the search index
-        sindex = get_data(searchindex_url, gallery_dir)
-        filenames, objects = parse_sphinx_searchindex(sindex)
-
-        self._searchindex = dict(filenames=filenames, objects=objects)
-
-    def _get_link(self, cobj):
-        """Get a valid link, False if not found"""
-
-        fname_idx = None
-        full_name = cobj['module_short'] + '.' + cobj['name']
-        if full_name in self._searchindex['objects']:
-            value = self._searchindex['objects'][full_name]
-            if isinstance(value, dict):
-                value = value[next(iter(value.keys()))]
-            fname_idx = value[0]
-        elif cobj['module_short'] in self._searchindex['objects']:
-            value = self._searchindex['objects'][cobj['module_short']]
-            if cobj['name'] in value.keys():
-                fname_idx = value[cobj['name']][0]
-
-        if fname_idx is not None:
-            fname = self._searchindex['filenames'][fname_idx]
-            # In 1.5+ Sphinx seems to have changed from .rst.html to only
-            # .html extension in converted files. But URLs could be
-            # built with < 1.5 or >= 1.5 regardless of what we're currently
-            # building with, so let's just check both :(
-            fnames = [fname + '.html', os.path.splitext(fname)[0] + '.html']
-            for fname in fnames:
-                try:
-                    if self._is_windows:
-                        fname = fname.replace('/', '\\')
-                        link = os.path.join(self.doc_url, fname)
-                    else:
-                        link = posixpath.join(self.doc_url, fname)
-
-                    if hasattr(link, 'decode'):
-                        link = link.decode('utf-8', 'replace')
-
-                    if link in self._page_cache:
-                        html = self._page_cache[link]
-                    else:
-                        html = get_data(link, self.gallery_dir)
-                        self._page_cache[link] = html
-                except (HTTPError, URLError, IOError):
-                    pass
-                else:
-                    break
-            else:
-                raise
-
-            # test if cobj appears in page
-            comb_names = [cobj['module_short'] + '.' + cobj['name']]
-            if self.extra_modules_test is not None:
-                for mod in self.extra_modules_test:
-                    comb_names.append(mod + '.' + cobj['name'])
-            url = False
-            if hasattr(html, 'decode'):
-                # Decode bytes under Python 3
-                html = html.decode('utf-8', 'replace')
-
-            for comb_name in comb_names:
-                if hasattr(comb_name, 'decode'):
-                    # Decode bytes under Python 3
-                    comb_name = comb_name.decode('utf-8', 'replace')
-                if comb_name in html:
-                    url = link + u'#' + comb_name
-            link = url
-        else:
-            link = False
-
-        return link
-
-    def resolve(self, cobj, this_url):
-        """Resolve the link to the documentation, returns None if not found
-
-        Parameters
-        ----------
-        cobj : dict
-            Dict with information about the "code object" for which we are
-            resolving a link.
-            cobi['name'] : function or class name (str)
-            cobj['module_short'] : shortened module name (str)
-            cobj['module'] : module name (str)
-        this_url: str
-            URL of the current page. Needed to construct relative URLs
-            (only used if relative=True in constructor).
-
-        Returns
-        -------
-        link : str | None
-            The link (URL) to the documentation.
-        """
-        full_name = cobj['module_short'] + '.' + cobj['name']
-        link = self._link_cache.get(full_name, None)
-        if link is None:
-            # we don't have it cached
-            link = self._get_link(cobj)
-            # cache it for the future
-            self._link_cache[full_name] = link
-
-        if link is False or link is None:
-            # failed to resolve
-            return None
-
-        if self.relative:
-            link = os.path.relpath(link, start=this_url)
-            if self._is_windows:
-                # replace '\' with '/' so it on the web
-                link = link.replace('\\', '/')
-
-            # for some reason, the relative link goes one directory too high up
-            link = link[3:]
-
-        return link
-
-
-def _embed_code_links(app, gallery_conf, gallery_dir):
-    # Add resolvers for the packages for which we want to show links
-    doc_resolvers = {}
-
-    src_gallery_dir = os.path.join(app.builder.srcdir, gallery_dir)
-    for this_module, url in gallery_conf['reference_url'].items():
-        try:
-            if url is None:
-                doc_resolvers[this_module] = SphinxDocLinkResolver(
-                    app.builder.outdir,
-                    src_gallery_dir,
-                    relative=True)
-            else:
-                doc_resolvers[this_module] = SphinxDocLinkResolver(url,
-                                                                   src_gallery_dir)
-
-        except HTTPError as e:
-            print("The following HTTP Error has occurred:\n")
-            print(e.code)
-        except URLError as e:
-            print("\n...\n"
-                  "Warning: Embedding the documentation hyperlinks requires "
-                  "Internet access.\nPlease check your network connection.\n"
-                  "Unable to continue embedding `{0}` links due to a URL "
-                  "Error:\n".format(this_module))
-            print(e.args)
-
-    html_gallery_dir = os.path.abspath(os.path.join(app.builder.outdir,
-                                                    gallery_dir))
-
-    # patterns for replacement
-    link_pattern = ('<a href="%s" title="View documentation for %s">%s</a>')
-    orig_pattern = '<span class="n">%s</span>'
-    period = '<span class="o">.</span>'
-
-    # This could be turned into a generator if necessary, but should be okay
-    flat = [[dirpath, filename]
-            for dirpath, _, filenames in os.walk(html_gallery_dir)
-            for filename in filenames]
-    iterator = app.status_iterator(
-        flat, os.path.basename(html_gallery_dir), colorfunc=fuchsia,
-        length=len(flat), stringify_func=lambda x: os.path.basename(x[1]))
-    for dirpath, fname in iterator:
-        full_fname = os.path.join(html_gallery_dir, dirpath, fname)
-        subpath = dirpath[len(html_gallery_dir) + 1:]
-        pickle_fname = os.path.join(src_gallery_dir, subpath,
-                                    fname[:-5] + '_codeobj.pickle')
-
-        if os.path.exists(pickle_fname):
-            # we have a pickle file with the objects to embed links for
-            with open(pickle_fname, 'rb') as fid:
-                example_code_obj = pickle.load(fid)
-            fid.close()
-            str_repl = {}
-            # generate replacement strings with the links
-            for name, cobj in example_code_obj.items():
-                this_module = cobj['module'].split('.')[0]
-
-                if this_module not in doc_resolvers:
-                    continue
-
-                try:
-                    link = doc_resolvers[this_module].resolve(cobj,
-                                                              full_fname)
-                except (HTTPError, URLError) as e:
-                    if isinstance(e, HTTPError):
-                        extra = e.code
-                    else:
-                        extra = e.reason
-                    print("\n\t\tError resolving %s.%s: %r (%s)"
-                          % (cobj['module'], cobj['name'], e, extra))
-                    continue
-
-                if link is not None:
-                    parts = name.split('.')
-                    name_html = period.join(orig_pattern % part
-                                            for part in parts)
-                    full_function_name = '%s.%s' % (
-                        cobj['module'], cobj['name'])
-                    str_repl[name_html] = link_pattern % (
-                        link, full_function_name, name_html)
-            # do the replacement in the html file
-
-            # ensure greediness
-            names = sorted(str_repl, key=len, reverse=True)
-            expr = re.compile(r'(?<!\.)\b' +  # don't follow . or word
-                              '|'.join(re.escape(name)
-                                       for name in names))
-
-            def substitute_link(match):
-                return str_repl[match.group()]
-
-            if len(str_repl) > 0:
-                with open(full_fname, 'rb') as fid:
-                    lines_in = fid.readlines()
-                with open(full_fname, 'wb') as fid:
-                    for line in lines_in:
-                        line = line.decode('utf-8')
-                        line = expr.sub(substitute_link, line)
-                        fid.write(line.encode('utf-8'))
-
-
-def embed_code_links(app, exception):
-    """Embed hyperlinks to documentation into example code"""
-    if exception is not None:
-        return
-
-    # No need to waste time embedding hyperlinks when not running the examples
-    # XXX: also at the time of writing this fixes make html-noplot
-    # for some reason I don't fully understand
-    if not app.builder.config.plot_gallery:
-        return
-
-    # XXX: Whitelist of builders for which it makes sense to embed
-    # hyperlinks inside the example html. Note that the link embedding
-    # requires searchindex.js to exist for the links to the local doc,
-    # and there does not seem to be a good way of knowing which
-    # builders create a searchindex.js.
-    if app.builder.name not in ['html', 'readthedocs']:
-        return
-
-    print('Embedding documentation hyperlinks in examples..')
-
-    gallery_conf = app.config.sphinx_gallery_conf
-
-    gallery_dirs = gallery_conf['gallery_dirs']
-    if not isinstance(gallery_dirs, list):
-        gallery_dirs = [gallery_dirs]
-
-    for gallery_dir in gallery_dirs:
-        _embed_code_links(app, gallery_conf, gallery_dir)
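
The `_embed_code_links` pass above hinges on one small trick: the replacement strings are sorted longest-first and joined into a single alternation, so a fully qualified name such as `filters.sobel` wins over its bare `filters` prefix. A minimal standalone sketch of that idea, using made-up links instead of the real pickle data:

import re

# Hypothetical mapping from names found in the highlighted HTML to doc links.
str_repl = {
    'filters.sobel': '<a href="filters.html#sobel">filters.sobel</a>',
    'filters': '<a href="filters.html">filters</a>',
}

# Sort longest-first so 'filters.sobel' is tried before the bare 'filters'.
names = sorted(str_repl, key=len, reverse=True)
expr = re.compile(r'(?<!\.)\b' + '|'.join(re.escape(name) for name in names))

html = 'edges = filters.sobel(image)  # uses the filters module'
print(expr.sub(lambda match: str_repl[match.group()], html))
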
diff --git a/doc/ext/sphinx_gallery/downloads.py b/doc/ext/sphinx_gallery/downloads.py
deleted file mode 100644
index aa5ff72..0000000
--- a/doc/ext/sphinx_gallery/downloads.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# -*- coding: utf-8 -*-
-r"""
-Utilities for downloadable items
-================================
-
-"""
-# Author: Óscar Nájera
-# License: 3-clause BSD
-
-from __future__ import absolute_import, division, print_function
-
-import os
-import zipfile
-
-CODE_DOWNLOAD = """
-\n.. container:: sphx-glr-footer
-
-\n  .. container:: sphx-glr-download
-
-     :download:`Download Python source code: {0} <{0}>`\n
-
-\n  .. container:: sphx-glr-download
-
-     :download:`Download Jupyter notebook: {1} <{1}>`\n"""
-
-CODE_ZIP_DOWNLOAD = """
-\n.. container:: sphx-glr-footer
-
-\n  .. container:: sphx-glr-download
-
-    :download:`Download all examples in Python source code: {0} </{1}>`\n
-
-\n  .. container:: sphx-glr-download
-
-    :download:`Download all examples in Jupyter notebooks: {2} </{3}>`\n"""
-
-
-def python_zip(file_list, gallery_path, extension='.py'):
-    """Stores all files in file_list into an zip file
-
-    Parameters
-    ----------
-    file_list : list of strings
-        Holds all the file names to be included in zip file
-    gallery_path : string
-        path to where the zipfile is stored
-    extension : str
-        '.py' or '.ipynb'. In order to handle downloads of both Python
-        sources and Jupyter notebooks, the file extension of each file in
-        file_list is removed and replaced with the value of this
-        variable while generating the zip file
-    Returns
-    -------
-    zipname : string
-        zip file name, written as `target_dir_{python,jupyter}.zip`
-        depending on the extension
-    """
-    zipname = os.path.basename(gallery_path)
-    zipname += '_python' if extension == '.py' else '_jupyter'
-    zipname = os.path.join(gallery_path, zipname + '.zip')
-
-    zipf = zipfile.ZipFile(zipname, mode='w')
-    for fname in file_list:
-        file_src = os.path.splitext(fname)[0] + extension
-        zipf.write(file_src, os.path.relpath(file_src, gallery_path))
-    zipf.close()
-
-    return zipname
-
-
-def list_downloadable_sources(target_dir):
-    """Returns a list of python source files is target_dir
-
-    Parameters
-    ----------
-    target_dir : string
-        path to the directory where python source file are
-    Returns
-    -------
-    list
-        list of paths to all Python source files in `target_dir`
-    """
-    return [os.path.join(target_dir, fname)
-            for fname in os.listdir(target_dir)
-            if fname.endswith('.py')]
-
-
-def generate_zipfiles(gallery_dir):
-    """
-    Collects all Python source files and Jupyter notebooks in
-    gallery_dir and makes zipfiles of them
-
-    Parameters
-    ----------
-    gallery_dir : string
-        path of the gallery to collect downloadable sources
-
-    Returns
-    -------
-    download_rst: string
-        RestructuredText to include download buttons to the generated files
-    """
-
-    listdir = list_downloadable_sources(gallery_dir)
-    for directory in sorted(os.listdir(gallery_dir)):
-        if os.path.isdir(os.path.join(gallery_dir, directory)):
-            target_dir = os.path.join(gallery_dir, directory)
-            listdir.extend(list_downloadable_sources(target_dir))
-
-    py_zipfile = python_zip(listdir, gallery_dir)
-    jy_zipfile = python_zip(listdir, gallery_dir, ".ipynb")
-
-    dw_rst = CODE_ZIP_DOWNLOAD.format(os.path.basename(py_zipfile),
-                                      py_zipfile,
-                                      os.path.basename(jy_zipfile),
-                                      jy_zipfile)
-    return dw_rst
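
The zip bundling removed here boils down to `zipfile.ZipFile` plus a walk over the gallery directory. A self-contained sketch under the assumption that the gallery directory simply contains the downloadable scripts (the function name and example path are hypothetical):

import os
import zipfile

def bundle_sources(gallery_dir, extension='.py'):
    """Collect every file ending in `extension` under gallery_dir into a zip."""
    suffix = '_python' if extension == '.py' else '_jupyter'
    zipname = os.path.join(gallery_dir,
                           os.path.basename(gallery_dir) + suffix + '.zip')
    with zipfile.ZipFile(zipname, mode='w') as zipf:
        for dirpath, _, filenames in os.walk(gallery_dir):
            for fname in filenames:
                if fname.endswith(extension):
                    full = os.path.join(dirpath, fname)
                    # store paths relative to the gallery so the archive unpacks cleanly
                    zipf.write(full, os.path.relpath(full, gallery_dir))
    return zipname

# e.g. bundle_sources('doc/source/auto_examples')    # hypothetical path
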
diff --git a/doc/ext/sphinx_gallery/gen_gallery.py b/doc/ext/sphinx_gallery/gen_gallery.py
deleted file mode 100644
index 38bc1a1..0000000
--- a/doc/ext/sphinx_gallery/gen_gallery.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# -*- coding: utf-8 -*-
-# Author: Óscar Nájera
-# License: 3-clause BSD
-"""
-========================
-Sphinx-Gallery Generator
-========================
-
-Attaches Sphinx-Gallery to Sphinx in order to generate the galleries
-when building the documentation.
-"""
-
-
-from __future__ import division, print_function, absolute_import
-import copy
-import re
-import os
-from . import glr_path_static
-from .gen_rst import generate_dir_rst, SPHX_GLR_SIG
-from .docs_resolv import embed_code_links
-from .downloads import generate_zipfiles
-
-try:
-    FileNotFoundError
-except NameError:
-    # Python2
-    FileNotFoundError = IOError
-
-DEFAULT_GALLERY_CONF = {
-    'filename_pattern': re.escape(os.sep) + 'plot',
-    'examples_dirs': os.path.join('..', 'examples'),
-    'gallery_dirs': 'auto_examples',
-    'mod_example_dir': os.path.join('modules', 'generated'),
-    'doc_module': (),
-    'reference_url': {},
-    # build options
-    'plot_gallery': True,
-    'download_all_examples': True,
-    'abort_on_example_error': False,
-    'failing_examples': {},
-    'expected_failing_examples': set(),
-}
-
-
-def clean_gallery_out(build_dir):
-    """Deletes images under the sphx_glr namespace in the build directory"""
-    # Sphinx hack: sphinx copies generated images to the build directory
-    #  each time the docs are made.  If the desired image name already
-    #  exists, it appends a digit to prevent overwrites.  The problem is,
-    #  the directory is never cleared.  This means that each time you build
-    #  the docs, the number of images in the directory grows.
-    #
-    # This question has been asked on the sphinx development list, but there
-    #  was no response: http://osdir.com/ml/sphinx-dev/2011-02/msg00123.html
-    #
-    # The following is a hack that prevents this behavior by clearing the
-    #  image build directory from gallery images each time the docs are built.
-    #  If sphinx changes their layout between versions, this will not
-    #  work (though it should probably not cause a crash).
-    # Tested successfully on Sphinx 1.0.7
-
-    build_image_dir = os.path.join(build_dir, '_images')
-    if os.path.exists(build_image_dir):
-        filelist = os.listdir(build_image_dir)
-        for filename in filelist:
-            if filename.startswith('sphx_glr') and filename.endswith('png'):
-                os.remove(os.path.join(build_image_dir, filename))
-
-
-def generate_gallery_rst(app):
-    """Generate the Main examples gallery reStructuredText
-
-    Start the sphinx-gallery configuration and recursively scan the examples
-    directories in order to populate the examples gallery
-    """
-    print('Generating gallery')
-    try:
-        plot_gallery = eval(app.builder.config.plot_gallery)
-    except TypeError:
-        plot_gallery = bool(app.builder.config.plot_gallery)
-
-    gallery_conf = copy.deepcopy(DEFAULT_GALLERY_CONF)
-    gallery_conf.update(app.config.sphinx_gallery_conf)
-    gallery_conf.update(plot_gallery=plot_gallery)
-    gallery_conf.update(
-        abort_on_example_error=app.builder.config.abort_on_example_error)
-    gallery_conf['src_dir'] = app.builder.srcdir
-
-    # this ensures the config can be accessed in other places
-    app.config.sphinx_gallery_conf = gallery_conf
-    app.config.html_static_path.append(glr_path_static())
-
-    clean_gallery_out(app.builder.outdir)
-
-    examples_dirs = gallery_conf['examples_dirs']
-    gallery_dirs = gallery_conf['gallery_dirs']
-
-    if not isinstance(examples_dirs, list):
-        examples_dirs = [examples_dirs]
-    if not isinstance(gallery_dirs, list):
-        gallery_dirs = [gallery_dirs]
-
-    mod_examples_dir = os.path.join(
-        app.builder.srcdir, gallery_conf['mod_example_dir'])
-    seen_backrefs = set()
-
-    computation_times = []
-
-    for examples_dir, gallery_dir in zip(examples_dirs, gallery_dirs):
-        examples_dir = os.path.join(app.builder.srcdir, examples_dir)
-        gallery_dir = os.path.join(app.builder.srcdir, gallery_dir)
-
-        for workdir in [examples_dir, gallery_dir, mod_examples_dir]:
-            if not os.path.exists(workdir):
-                os.makedirs(workdir)
-        # Here we don't use an os.walk, but we recurse only twice: flat is
-        # better than nested.
-        this_fhindex, this_computation_times = \
-            generate_dir_rst(examples_dir, gallery_dir, gallery_conf,
-                             seen_backrefs)
-        if this_fhindex == "":
-            raise FileNotFoundError("Main example directory {0} does not "
-                                    "have a README.txt file. Please write "
-                                    "one to introduce your gallery."
-                                    .format(examples_dir))
-
-        computation_times += this_computation_times
-
-        # we create an index.rst with all examples
-        fhindex = open(os.path.join(gallery_dir, 'index.rst'), 'w')
-        # :orphan: to suppress "not included in TOCTREE" sphinx warnings
-        fhindex.write(":orphan:\n\n" + this_fhindex)
-        for directory in sorted(os.listdir(examples_dir)):
-            if os.path.isdir(os.path.join(examples_dir, directory)):
-                src_dir = os.path.join(examples_dir, directory)
-                target_dir = os.path.join(gallery_dir, directory)
-                this_fhindex, this_computation_times = \
-                    generate_dir_rst(src_dir, target_dir, gallery_conf,
-                                     seen_backrefs)
-                fhindex.write(this_fhindex)
-                computation_times += this_computation_times
-
-        if gallery_conf['download_all_examples']:
-            download_fhindex = generate_zipfiles(gallery_dir)
-            fhindex.write(download_fhindex)
-
-        fhindex.write(SPHX_GLR_SIG)
-        fhindex.flush()
-
-    if gallery_conf['plot_gallery']:
-        print("Computation time summary:")
-        for time_elapsed, fname in sorted(computation_times)[::-1]:
-            if time_elapsed is not None:
-                print("\t- %s : %.2g sec" % (fname, time_elapsed))
-            else:
-                print("\t- %s : not run" % fname)
-
-
-def touch_empty_backreferences(app, what, name, obj, options, lines):
-    """Generate empty back-reference example files
-
-    This avoids inclusion errors/warnings if there are no gallery
-    examples for a class / module that is being parsed by autodoc"""
-
-    examples_path = os.path.join(app.srcdir,
-                                 app.config.sphinx_gallery_conf[
-                                     "mod_example_dir"],
-                                 "%s.examples" % name)
-
-    if not os.path.exists(examples_path):
-        # touch file
-        open(examples_path, 'w').close()
-
-
-def sumarize_failing_examples(app, exception):
-    """Collects the list of falling examples during build and prints them with the traceback
-
-    Raises ValueError if there were failing examples
-    """
-    if exception is not None:
-        return
-
-    # Under no-plot Examples are not run so nothing to summarize
-    if not app.config.sphinx_gallery_conf['plot_gallery']:
-        return
-
-    gallery_conf = app.config.sphinx_gallery_conf
-    failing_examples = set(gallery_conf['failing_examples'].keys())
-    expected_failing_examples = set([os.path.normpath(os.path.join(app.srcdir, path))
-                                     for path in
-                                     gallery_conf['expected_failing_examples']])
-
-    examples_expected_to_fail = failing_examples.intersection(
-        expected_failing_examples)
-    expected_fail_msg = []
-    if examples_expected_to_fail:
-        expected_fail_msg.append("\n\nExamples failing as expected:")
-        for fail_example in examples_expected_to_fail:
-            expected_fail_msg.append(fail_example + ' failed leaving traceback:\n' +
-                                     gallery_conf['failing_examples'][fail_example] + '\n')
-        print("\n".join(expected_fail_msg))
-
-    examples_not_expected_to_fail = failing_examples.difference(
-        expected_failing_examples)
-    fail_msgs = []
-    if examples_not_expected_to_fail:
-        fail_msgs.append("Unexpected failing examples:")
-        for fail_example in examples_not_expected_to_fail:
-            fail_msgs.append(fail_example + ' failed leaving traceback:\n' +
-                             gallery_conf['failing_examples'][fail_example] + '\n')
-
-    examples_not_expected_to_pass = expected_failing_examples.difference(
-        failing_examples)
-    if examples_not_expected_to_pass:
-        fail_msgs.append("Examples expected to fail, but not failling:\n" +
-                         "Please remove these examples from\n" +
-                         "sphinx_gallery_conf['expected_failing_examples']\n" +
-                         "in your conf.py file"
-                         "\n".join(examples_not_expected_to_pass))
-
-    if fail_msgs:
-        raise ValueError("Here is a summary of the problems encountered when "
-                         "running the examples\n\n" + "\n".join(fail_msgs) +
-                         "\n" + "-" * 79)
-
-
-def get_default_config_value(key):
-    def default_getter(conf):
-        return conf['sphinx_gallery_conf'].get(key, DEFAULT_GALLERY_CONF[key])
-    return default_getter
-
-
-def setup(app):
-    """Setup sphinx-gallery sphinx extension"""
-    app.add_config_value('sphinx_gallery_conf', DEFAULT_GALLERY_CONF, 'html')
-    for key in ['plot_gallery', 'abort_on_example_error']:
-        app.add_config_value(key, get_default_config_value(key), 'html')
-
-    app.add_stylesheet('gallery.css')
-
-    if 'sphinx.ext.autodoc' in app._extensions:
-        app.connect('autodoc-process-docstring', touch_empty_backreferences)
-
-    app.connect('builder-inited', generate_gallery_rst)
-
-    app.connect('build-finished', sumarize_failing_examples)
-    app.connect('build-finished', embed_code_links)
-
-
-def setup_module():
-    # HACK: Stop nosetests running setup() above
-    pass
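
The `setup()` function above is the standard Sphinx extension entry point: register config values, then connect callbacks to builder events. A stripped-down sketch of the same wiring (the callback bodies here are placeholders, not the deleted implementation):

DEFAULT_CONF = {'plot_gallery': True}

def generate_gallery_rst(app):
    # runs once the builder is initialised, before sources are read
    print('would generate gallery rst under', app.builder.srcdir)

def report_build(app, exception):
    # runs after the build finishes; `exception` is None on success
    if exception is None:
        print('build finished cleanly')

def setup(app):
    app.add_config_value('my_gallery_conf', DEFAULT_CONF, 'html')
    app.connect('builder-inited', generate_gallery_rst)
    app.connect('build-finished', report_build)
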
diff --git a/doc/ext/sphinx_gallery/gen_rst.py b/doc/ext/sphinx_gallery/gen_rst.py
deleted file mode 100644
index 91176d8..0000000
--- a/doc/ext/sphinx_gallery/gen_rst.py
+++ /dev/null
@@ -1,643 +0,0 @@
-# -*- coding: utf-8 -*-
-# Author: Óscar Nájera
-# License: 3-clause BSD
-"""
-==================
-RST file generator
-==================
-
-Generate the rst files for the examples by iterating over the python
-example files.
-
-Files that generate images should start with 'plot'
-
-"""
-# Don't use unicode_literals here (be explicit with u"..." instead) otherwise
-# tricky errors come up with exec(code_blocks, ...) calls
-from __future__ import division, print_function, absolute_import
-from time import time
-import codecs
-import hashlib
-import os
-import re
-import shutil
-import subprocess
-import sys
-import traceback
-import warnings
-
-
-# Try Python 2 first, otherwise load from Python 3
-try:
-    # textwrap indent only exists in python 3
-    from textwrap import indent
-except ImportError:
-    def indent(text, prefix, predicate=None):
-        """Adds 'prefix' to the beginning of selected lines in 'text'.
-
-        If 'predicate' is provided, 'prefix' will only be added to the lines
-        where 'predicate(line)' is True. If 'predicate' is not provided,
-        it will default to adding 'prefix' to all non-empty lines that do not
-        consist solely of whitespace characters.
-        """
-        if predicate is None:
-            def predicate(line):
-                return line.strip()
-
-        def prefixed_lines():
-            for line in text.splitlines(True):
-                yield (prefix + line if predicate(line) else line)
-        return ''.join(prefixed_lines())
-
-from io import StringIO
-
-try:
-    # make sure that the Agg backend is set before importing any
-    # matplotlib
-    import matplotlib
-    matplotlib.use('agg')
-    matplotlib_backend = matplotlib.get_backend()
-
-    if matplotlib_backend != 'agg':
-        mpl_backend_msg = (
-            "Sphinx-Gallery relies on the matplotlib 'agg' backend to "
-            "render figures and write them to files. You are "
-            "currently using the {} backend. Sphinx-Gallery will "
-            "terminate the build now, because changing backends is "
-            "not well supported by matplotlib. We advise you to move "
-            "sphinx_gallery imports before any matplotlib-dependent "
-            "import. Moving sphinx_gallery imports at the top of "
-            "your conf.py file should fix this issue")
-
-        raise ValueError(mpl_backend_msg.format(matplotlib_backend))
-
-    import matplotlib.pyplot as plt
-except ImportError:
-    # this script can be imported by nosetest to find tests to run: we should
-    # not impose the matplotlib requirement in that case.
-    pass
-
-from . import glr_path_static
-from .backreferences import write_backreferences, _thumbnail_div
-from .downloads import CODE_DOWNLOAD
-from .py_source_parser import (get_docstring_and_rest,
-                               split_code_and_text_blocks)
-
-from .notebook import jupyter_notebook, save_notebook
-
-try:
-    basestring
-except NameError:
-    basestring = str
-    unicode = str
-
-
-###############################################################################
-
-
-class Tee(object):
-    """A tee object to redirect streams to multiple outputs"""
-
-    def __init__(self, file1, file2):
-        self.file1 = file1
-        self.file2 = file2
-
-    def write(self, data):
-        self.file1.write(data)
-        self.file2.write(data)
-
-    def flush(self):
-        self.file1.flush()
-        self.file2.flush()
-
-    # When called from a local terminal seaborn needs it in Python3
-    def isatty(self):
-        return self.file1.isatty()
-
-
-class MixedEncodingStringIO(StringIO):
-    """Helper when both ASCII and unicode strings will be written"""
-
-    def write(self, data):
-        if not isinstance(data, unicode):
-            data = data.decode('utf-8')
-        StringIO.write(self, data)
-
-
-###############################################################################
-# The following strings are used when we have several pictures: we use
-# an html div tag that our CSS uses to turn the lists into horizontal
-# lists.
-HLIST_HEADER = """
-.. rst-class:: sphx-glr-horizontal
-
-"""
-
-HLIST_IMAGE_TEMPLATE = """
-    *
-
-      .. image:: /%s
-            :scale: 47
-"""
-
-SINGLE_IMAGE = """
-.. image:: /%s
-    :align: center
-"""
-
-
-# This one could contain unicode
-CODE_OUTPUT = u""".. rst-class:: sphx-glr-script-out
-
- Out::
-
-{0}\n"""
-
-
-SPHX_GLR_SIG = """\n.. rst-class:: sphx-glr-signature
-
-    `Generated by Sphinx-Gallery <http://sphinx-gallery.readthedocs.io>`_\n"""
-
-
-def codestr2rst(codestr, lang='python'):
-    """Return reStructuredText code block from code string"""
-    code_directive = "\n.. code-block:: {0}\n\n".format(lang)
-    indented_block = indent(codestr, ' ' * 4)
-    return code_directive + indented_block
-
-
-def extract_thumbnail_number(text):
-    """ Pull out the thumbnail image number specified in the docstring. """
-
-    # check whether the user has specified a specific thumbnail image
-    pattr = re.compile(
-        r"^\s*#\s*sphinx_gallery_thumbnail_number\s*=\s*([0-9]+)\s*$",
-        flags=re.MULTILINE)
-    match = pattr.search(text)
-
-    if match is None:
-        # by default, use the first figure created
-        thumbnail_number = 1
-    else:
-        thumbnail_number = int(match.groups()[0])
-
-    return thumbnail_number
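
For reference, the thumbnail selector parsed above is an ordinary comment line inside the example script; the same regex can be checked in isolation on a made-up snippet:

import re

example_source = """\
import matplotlib.pyplot as plt
# sphinx_gallery_thumbnail_number = 2
plt.plot([0, 1])
"""

pattern = re.compile(
    r"^\s*#\s*sphinx_gallery_thumbnail_number\s*=\s*([0-9]+)\s*$",
    flags=re.MULTILINE)
match = pattern.search(example_source)
print(int(match.group(1)) if match else 1)   # -> 2
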
-
-
-def extract_intro(filename):
-    """ Extract the first paragraph of module-level docstring. max:95 char"""
-
-    docstring, _ = get_docstring_and_rest(filename)
-
-    # lstrip is just in case docstring has a '\n\n' at the beginning
-    paragraphs = docstring.lstrip().split('\n\n')
-    if len(paragraphs) > 1:
-        first_paragraph = re.sub('\n', ' ', paragraphs[1])
-        first_paragraph = (first_paragraph[:95] + '...'
-                           if len(first_paragraph) > 95 else first_paragraph)
-    else:
-        raise ValueError(
-            "Example docstring should have a header for the example title "
-            "and at least a paragraph explaining what the example is about. "
-            "Please check the example file:\n {}\n".format(filename))
-
-    return first_paragraph
-
-
-def get_md5sum(src_file):
-    """Returns md5sum of file"""
-
-    with open(src_file, 'rb') as src_data:
-        src_content = src_data.read()
-
-        src_md5 = hashlib.md5(src_content).hexdigest()
-    return src_md5
-
-
-def md5sum_is_current(src_file):
-    """Checks whether src_file has the same md5 hash as the one on disk"""
-
-    src_md5 = get_md5sum(src_file)
-
-    src_md5_file = src_file + '.md5'
-    if os.path.exists(src_md5_file):
-        with open(src_md5_file, 'r') as file_checksum:
-            ref_md5 = file_checksum.read()
-
-        return src_md5 == ref_md5
-
-    return False
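
The md5 side-car files used above are a cheap build cache: hash the copied source, compare it with the stored `<file>.md5`, and skip re-running the example when they match. A self-contained sketch of that pattern (file names are hypothetical):

import hashlib
import os

def file_md5(path):
    with open(path, 'rb') as fh:
        return hashlib.md5(fh.read()).hexdigest()

def needs_rebuild(src_file):
    """True unless a side-car `<src_file>.md5` matches the current content."""
    md5_file = src_file + '.md5'
    if os.path.exists(md5_file):
        with open(md5_file) as fh:
            return fh.read().strip() != file_md5(src_file)
    return True

def record_build(src_file):
    with open(src_file + '.md5', 'w') as fh:
        fh.write(file_md5(src_file))
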
-
-
-def save_figures(image_path, fig_count, gallery_conf):
-    """Save all open matplotlib figures of the example code-block
-
-    Parameters
-    ----------
-    image_path : str
-        Path where plots are saved (format string which accepts figure number)
-    fig_count : int
-        Previous figure number count. Figure numbers are offset from this count
-    gallery_conf : dict
-        Contains the configuration of Sphinx-Gallery
-
-    Returns
-    -------
-    images_rst : str
-        rst code to embed the images in the document
-    fig_num : int
-        number of figures saved
-    """
-    figure_list = []
-
-    for fig_num in plt.get_fignums():
-        # Set the fig_num figure as the current figure as we can't
-        # save a figure that's not the current figure.
-        fig = plt.figure(fig_num)
-        kwargs = {}
-        to_rgba = matplotlib.colors.colorConverter.to_rgba
-        for attr in ['facecolor', 'edgecolor']:
-            fig_attr = getattr(fig, 'get_' + attr)()
-            default_attr = matplotlib.rcParams['figure.' + attr]
-            if to_rgba(fig_attr) != to_rgba(default_attr):
-                kwargs[attr] = fig_attr
-
-        current_fig = image_path.format(fig_count + fig_num)
-        fig.savefig(current_fig, **kwargs)
-        figure_list.append(current_fig)
-
-    if gallery_conf.get('find_mayavi_figures', False):
-        from mayavi import mlab
-        e = mlab.get_engine()
-        last_matplotlib_fig_num = fig_count + len(figure_list)
-        total_fig_num = last_matplotlib_fig_num + len(e.scenes)
-        mayavi_fig_nums = range(last_matplotlib_fig_num + 1, total_fig_num + 1)
-
-        for scene, mayavi_fig_num in zip(e.scenes, mayavi_fig_nums):
-            current_fig = image_path.format(mayavi_fig_num)
-            mlab.savefig(current_fig, figure=scene)
-            # make sure the image is not too large
-            scale_image(current_fig, current_fig, 850, 999)
-            figure_list.append(current_fig)
-        mlab.close(all=True)
-
-    return figure_rst(figure_list, gallery_conf['src_dir'])
-
-
-def figure_rst(figure_list, sources_dir):
-    """Given a list of paths to figures generate the corresponding rst
-
-    Depending on whether we have one or more figures, we use a
-    single rst call to 'image' or a horizontal list.
-
-    Parameters
-    ----------
-    figure_list : list of str
-        Strings are the figures' absolute paths
-    sources_dir : str
-        absolute path of Sphinx documentation sources
-
-    Returns
-    -------
-    images_rst : str
-        rst code to embed the images in the document
-    fig_num : int
-        number of figures saved
-    """
-
-    figure_list = [os.path.relpath(figure_path, sources_dir)
-                   for figure_path in figure_list]
-    images_rst = ""
-    if len(figure_list) == 1:
-        figure_name = figure_list[0]
-        images_rst = SINGLE_IMAGE % figure_name.lstrip('/')
-    elif len(figure_list) > 1:
-        images_rst = HLIST_HEADER
-        for figure_name in figure_list:
-            images_rst += HLIST_IMAGE_TEMPLATE % figure_name.lstrip('/')
-
-    return images_rst, len(figure_list)
-
-
-def scale_image(in_fname, out_fname, max_width, max_height):
-    """Scales an image with the same aspect ratio centered in an
-       image with a given max_width and max_height
-       if in_fname == out_fname the image can only be scaled down
-    """
-    # local import to avoid testing dependency on PIL:
-    try:
-        from PIL import Image
-    except ImportError:
-        import Image
-    img = Image.open(in_fname)
-    width_in, height_in = img.size
-    scale_w = max_width / float(width_in)
-    scale_h = max_height / float(height_in)
-
-    if height_in * scale_w <= max_height:
-        scale = scale_w
-    else:
-        scale = scale_h
-
-    if scale >= 1.0 and in_fname == out_fname:
-        return
-
-    width_sc = int(round(scale * width_in))
-    height_sc = int(round(scale * height_in))
-
-    # resize the image
-    img.thumbnail((width_sc, height_sc), Image.ANTIALIAS)
-
-    # insert centered
-    thumb = Image.new('RGB', (max_width, max_height), (255, 255, 255))
-    pos_insert = ((max_width - width_sc) // 2, (max_height - height_sc) // 2)
-    thumb.paste(img, pos_insert)
-
-    thumb.save(out_fname)
-    # Use optipng to perform lossless compression on the resized image if
-    # software is installed
-    if os.environ.get('SKLEARN_DOC_OPTIPNG', False):
-        try:
-            subprocess.call(["optipng", "-quiet", "-o", "9", out_fname])
-        except Exception:
-            warnings.warn('Install optipng to reduce the size of the \
-                          generated images')
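
The scaling above is essentially Pillow's in-place `thumbnail` followed by pasting onto a fixed-size white canvas. A minimal sketch, assuming Pillow is installed and with hypothetical file names:

from PIL import Image

def centered_thumbnail(in_fname, out_fname, max_width=400, max_height=280):
    img = Image.open(in_fname)
    # shrink in place, preserving the aspect ratio
    img.thumbnail((max_width, max_height))
    canvas = Image.new('RGB', (max_width, max_height), (255, 255, 255))
    offset = ((max_width - img.width) // 2, (max_height - img.height) // 2)
    canvas.paste(img, offset)
    canvas.save(out_fname)

# centered_thumbnail('sphx_glr_plot_edges_001.png',
#                    'thumb/sphx_glr_plot_edges_thumb.png')   # hypothetical paths
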
-
-
-def save_thumbnail(image_path_template, src_file, gallery_conf):
-    """Save the thumbnail image"""
-    # read specification of the figure to display as thumbnail from main text
-    _, content = get_docstring_and_rest(src_file)
-    thumbnail_number = extract_thumbnail_number(content)
-    thumbnail_image_path = image_path_template.format(thumbnail_number)
-
-    thumb_dir = os.path.join(os.path.dirname(thumbnail_image_path), 'thumb')
-    if not os.path.exists(thumb_dir):
-        os.makedirs(thumb_dir)
-
-    base_image_name = os.path.splitext(os.path.basename(src_file))[0]
-    thumb_file = os.path.join(thumb_dir,
-                              'sphx_glr_%s_thumb.png' % base_image_name)
-
-    if src_file in gallery_conf['failing_examples']:
-        broken_img = os.path.join(glr_path_static(), 'broken_example.png')
-        scale_image(broken_img, thumb_file, 200, 140)
-
-    elif os.path.exists(thumbnail_image_path):
-        scale_image(thumbnail_image_path, thumb_file, 400, 280)
-
-    elif not os.path.exists(thumb_file):
-        # create something to replace the thumbnail
-        default_thumb_file = os.path.join(glr_path_static(), 'no_image.png')
-        default_thumb_file = gallery_conf.get("default_thumb_file",
-                                              default_thumb_file)
-        scale_image(default_thumb_file, thumb_file, 200, 140)
-
-
-def generate_dir_rst(src_dir, target_dir, gallery_conf, seen_backrefs):
-    """Generate the gallery reStructuredText for an example directory"""
-    if not os.path.exists(os.path.join(src_dir, 'README.txt')):
-        print(80 * '_')
-        print('Example directory %s does not have a README.txt file' %
-              src_dir)
-        print('Skipping this directory')
-        print(80 * '_')
-        return "", []  # because string is an expected return type
-
-    with open(os.path.join(src_dir, 'README.txt')) as fid:
-        fhindex = fid.read()
-    # Add empty lines to avoid bug in issue #165
-    fhindex += "\n\n"
-
-    if not os.path.exists(target_dir):
-        os.makedirs(target_dir)
-    sorted_listdir = [fname for fname in sorted(os.listdir(src_dir))
-                      if fname.endswith('.py')]
-    entries_text = []
-    computation_times = []
-    build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])
-    for fname in sorted_listdir:
-        amount_of_code, time_elapsed = \
-            generate_file_rst(fname, target_dir, src_dir, gallery_conf)
-        computation_times.append((time_elapsed, fname))
-        new_fname = os.path.join(src_dir, fname)
-        intro = extract_intro(new_fname)
-        write_backreferences(seen_backrefs, gallery_conf,
-                             target_dir, fname, intro)
-        this_entry = _thumbnail_div(build_target_dir, fname, intro) + """
-
-.. toctree::
-   :hidden:
-
-   /%s/%s\n""" % (build_target_dir, fname[:-3])
-        entries_text.append((amount_of_code, this_entry))
-
-    # sort to have the smallest entries in the beginning
-    entries_text.sort()
-
-    for _, entry_text in entries_text:
-        fhindex += entry_text
-
-    # clear at the end of the section
-    fhindex += """.. raw:: html\n
-    <div style='clear:both'></div>\n\n"""
-
-    return fhindex, computation_times
-
-
-def execute_code_block(code_block, example_globals,
-                       block_vars, gallery_conf):
-    """Executes the code block of the example file"""
-    time_elapsed = 0
-    stdout = ''
-
-    # If example is not suitable to run, skip executing its blocks
-    if not block_vars['execute_script']:
-        return stdout, time_elapsed
-
-    plt.close('all')
-    cwd = os.getcwd()
-    # Redirect output to stdout and
-    orig_stdout = sys.stdout
-    src_file = block_vars['src_file']
-
-    try:
-        # First cd in the original example dir, so that any file
-        # created by the example get created in this directory
-        os.chdir(os.path.dirname(src_file))
-        my_buffer = MixedEncodingStringIO()
-        my_stdout = Tee(sys.stdout, my_buffer)
-        sys.stdout = my_stdout
-
-        t_start = time()
-        # don't use unicode_literals at the top of this file or you get
-        # nasty errors here on Py2.7
-        exec(code_block, example_globals)
-        time_elapsed = time() - t_start
-
-        sys.stdout = orig_stdout
-
-        my_stdout = my_buffer.getvalue().strip().expandtabs()
-        # raise RuntimeError
-        if my_stdout:
-            stdout = CODE_OUTPUT.format(indent(my_stdout, u' ' * 4))
-        os.chdir(cwd)
-        images_rst, fig_num = save_figures(block_vars['image_path'],
-                                           block_vars['fig_count'], gallery_conf)
-
-    except Exception:
-        formatted_exception = traceback.format_exc()
-
-        fail_example_warning = 80 * '_' + '\n' + \
-            '%s failed to execute correctly:' % src_file + \
-            formatted_exception + 80 * '_' + '\n'
-        warnings.warn(fail_example_warning)
-
-        fig_num = 0
-        images_rst = codestr2rst(formatted_exception, lang='pytb')
-
-        # Breaks build on first example error
-        # XXX This check can break during testing e.g. if you uncomment the
-        # `raise RuntimeError` by the `my_stdout` call, maybe use `.get()`?
-        if gallery_conf['abort_on_example_error']:
-            raise
-        # Stores failing file
-        gallery_conf['failing_examples'][src_file] = formatted_exception
-        block_vars['execute_script'] = False
-
-    finally:
-        os.chdir(cwd)
-        sys.stdout = orig_stdout
-
-    code_output = u"\n{0}\n\n{1}\n\n".format(images_rst, stdout)
-    block_vars['fig_count'] += fig_num
-
-    return code_output, time_elapsed
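
Stripped of the figure handling and error bookkeeping, the execution machinery above reduces to swapping `sys.stdout` for a tee while `exec` runs, so the example's output can be shown in the generated page. A rough sketch of just that core:

import sys
from io import StringIO

class Tee(object):
    """Write to two streams at once (the real stdout and a capture buffer)."""
    def __init__(self, first, second):
        self.first, self.second = first, second

    def write(self, data):
        self.first.write(data)
        self.second.write(data)

    def flush(self):
        self.first.flush()
        self.second.flush()

def run_block(code_block, namespace):
    buffer = StringIO()
    orig_stdout = sys.stdout
    sys.stdout = Tee(orig_stdout, buffer)
    try:
        exec(code_block, namespace)
    finally:
        sys.stdout = orig_stdout
    return buffer.getvalue()

print(run_block("print('hello from the example')", {'__name__': '__main__'}))
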
-
-
-def clean_modules():
-    """Remove "unload" seaborn from the name space
-
-    After a script is executed it can load a variety of setting that one
-    does not want to influence in other examples in the gallery."""
-
-    # Horrible code to 'unload' seaborn, so that it resets
-    # its default when is load
-    # Python does not support unloading of modules
-    # https://bugs.python.org/issue9072
-    for module in list(sys.modules.keys()):
-        if 'seaborn' in module:
-            del sys.modules[module]
-
-    # Reset Matplotlib to default
-    plt.rcdefaults()
-
-
-def generate_file_rst(fname, target_dir, src_dir, gallery_conf):
-    """Generate the rst file for a given example.
-
-    Returns
-    -------
-    amount_of_code : int
-        character count of the corresponding python script in file
-    time_elapsed : float
-        seconds required to run the script
-    """
-
-    src_file = os.path.normpath(os.path.join(src_dir, fname))
-    example_file = os.path.join(target_dir, fname)
-    shutil.copyfile(src_file, example_file)
-    script_blocks = split_code_and_text_blocks(src_file)
-    amount_of_code = sum([len(bcontent)
-                          for blabel, bcontent in script_blocks
-                          if blabel == 'code'])
-
-    if md5sum_is_current(example_file):
-        return amount_of_code, 0
-
-    image_dir = os.path.join(target_dir, 'images')
-    if not os.path.exists(image_dir):
-        os.makedirs(image_dir)
-
-    base_image_name = os.path.splitext(fname)[0]
-    image_fname = 'sphx_glr_' + base_image_name + '_{0:03}.png'
-    build_image_dir = os.path.relpath(image_dir, gallery_conf['src_dir'])
-    image_path_template = os.path.join(image_dir, image_fname)
-
-    ref_fname = os.path.relpath(example_file, gallery_conf['src_dir'])
-    ref_fname = ref_fname.replace(os.path.sep, '_')
-    example_rst = """\n\n.. _sphx_glr_{0}:\n\n""".format(ref_fname)
-
-    filename_pattern = gallery_conf.get('filename_pattern')
-    execute_script = re.search(filename_pattern, src_file) and gallery_conf[
-        'plot_gallery']
-    example_globals = {
-        # A lot of examples contain 'print(__doc__)', for example in
-        # scikit-learn so that running the example prints some useful
-        # information. Because the docstring has been separated from
-        # the code blocks in sphinx-gallery, __doc__ is actually
-        # __builtin__.__doc__ in the execution context and we do not
-        # want to print it
-        '__doc__': '',
-        # Examples may contain if __name__ == '__main__' guards
-        # for example in scikit-learn if the example uses multiprocessing
-        '__name__': '__main__',
-    }
-
-    # A simple example has two blocks: one for the
-    # example introduction/explanation and one for the code
-    is_example_notebook_like = len(script_blocks) > 2
-    time_elapsed = 0
-    block_vars = {'execute_script': execute_script, 'fig_count': 0,
-                  'image_path': image_path_template, 'src_file': src_file}
-    if block_vars['execute_script']:
-        print('Executing file %s' % src_file)
-    for blabel, bcontent in script_blocks:
-        if blabel == 'code':
-            code_output, rtime = execute_code_block(bcontent,
-                                                    example_globals,
-                                                    block_vars,
-                                                    gallery_conf)
-
-            time_elapsed += rtime
-
-            if is_example_notebook_like:
-                example_rst += codestr2rst(bcontent) + '\n'
-                example_rst += code_output
-            else:
-                example_rst += code_output
-                if 'sphx-glr-script-out' in code_output:
-                    # Add some vertical space after output
-                    example_rst += "\n\n|\n\n"
-                example_rst += codestr2rst(bcontent) + '\n'
-
-        else:
-            example_rst += bcontent + '\n\n'
-
-    clean_modules()
-
-    # Write the md5 checksum if the example built correctly, i.e. it did
-    # not fail and was initially meant to run (no-plot shall not cache the md5sum)
-    if block_vars['execute_script']:
-        with open(example_file + '.md5', 'w') as file_checksum:
-            file_checksum.write(get_md5sum(example_file))
-
-    save_thumbnail(image_path_template, src_file, gallery_conf)
-
-    time_m, time_s = divmod(time_elapsed, 60)
-    example_nb = jupyter_notebook(script_blocks)
-    save_notebook(example_nb, example_file.replace('.py', '.ipynb'))
-    with codecs.open(os.path.join(target_dir, base_image_name + '.rst'),
-                     mode='w', encoding='utf-8') as f:
-        example_rst += "**Total running time of the script:**" \
-                       " ({0: .0f} minutes {1: .3f} seconds)\n\n".format(
-                           time_m, time_s)
-        example_rst += CODE_DOWNLOAD.format(fname,
-                                            fname.replace('.py', '.ipynb'))
-        example_rst += SPHX_GLR_SIG
-        f.write(example_rst)
-
-    if block_vars['execute_script']:
-        print("{0} ran in : {1:.2g} seconds\n".format(src_file, time_elapsed))
-
-    return amount_of_code, time_elapsed
diff --git a/doc/ext/sphinx_gallery/notebook.py b/doc/ext/sphinx_gallery/notebook.py
deleted file mode 100644
index fe26dd6..0000000
--- a/doc/ext/sphinx_gallery/notebook.py
+++ /dev/null
@@ -1,194 +0,0 @@
-# -*- coding: utf-8 -*-
-r"""
-============================
-Parser for Jupyter notebooks
-============================
-
-Class that holds the Jupyter notebook information
-
-"""
-# Author: Óscar Nájera
-# License: 3-clause BSD
-
-from __future__ import division, absolute_import, print_function
-from functools import partial
-import argparse
-import json
-import re
-import sys
-from .py_source_parser import split_code_and_text_blocks
-
-
-def jupyter_notebook_skeleton():
-    """Returns a dictionary with the elements of a Jupyter notebook"""
-    py_version = sys.version_info
-    notebook_skeleton = {
-        "cells": [],
-        "metadata": {
-            "kernelspec": {
-                "display_name": "Python " + str(py_version[0]),
-                "language": "python",
-                "name": "python" + str(py_version[0])
-            },
-            "language_info": {
-                "codemirror_mode": {
-                    "name": "ipython",
-                    "version": py_version[0]
-                },
-                "file_extension": ".py",
-                "mimetype": "text/x-python",
-                "name": "python",
-                "nbconvert_exporter": "python",
-                "pygments_lexer": "ipython" + str(py_version[0]),
-                "version": '{0}.{1}.{2}'.format(*sys.version_info[:3])
-            }
-        },
-        "nbformat": 4,
-        "nbformat_minor": 0
-    }
-    return notebook_skeleton
-
-
-def directive_fun(match, directive):
-    """Helper to fill in directives"""
-    directive_to_alert = dict(note="info", warning="danger")
-    return ('<div class="alert alert-{0}"><h4>{1}</h4><p>{2}</p></div>'
-            .format(directive_to_alert[directive], directive.capitalize(),
-                    match.group(1).strip()))
-
-
-def rst2md(text):
-    """Converts the RST text from the examples docstrigs and comments
-    into markdown text for the Jupyter notebooks"""
-
-    top_heading = re.compile(r'^=+$\s^([\w\s-]+)^=+$', flags=re.M)
-    text = re.sub(top_heading, r'# \1', text)
-
-    math_eq = re.compile(r'^\.\. math::((?:.+)?(?:\n+^  .+)*)', flags=re.M)
-    text = re.sub(math_eq,
-                  lambda match: r'\begin{{align}}{0}\end{{align}}'.format(
-                      match.group(1).strip()),
-                  text)
-    inline_math = re.compile(r':math:`(.+?)`', re.DOTALL)
-    text = re.sub(inline_math, r'$\1$', text)
-
-    directives = ('warning', 'note')
-    for directive in directives:
-        directive_re = re.compile(r'^\.\. %s::((?:.+)?(?:\n+^  .+)*)'
-                                  % directive, flags=re.M)
-        text = re.sub(directive_re,
-                      partial(directive_fun, directive=directive), text)
-
-    links = re.compile(r'^ *\.\. _.*:.*$\n', flags=re.M)
-    text = re.sub(links, '', text)
-
-    refs = re.compile(r':ref:`')
-    text = re.sub(refs, '`', text)
-
-    contents = re.compile(r'^\s*\.\. contents::.*$(\n +:\S+: *$)*\n',
-                          flags=re.M)
-    text = re.sub(contents, '', text)
-
-    images = re.compile(
-        r'^\.\. image::(.*$)(?:\n *:alt:(.*$)\n)?(?: +:\S+:.*$\n)*',
-        flags=re.M)
-    text = re.sub(
-        images, lambda match: '![{1}]({0})\n'.format(
-            match.group(1).strip(), (match.group(2) or '').strip()), text)
-
-    return text
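
The conversion above is a chain of regular expressions rather than a real RST parser. A tiny round-trip on made-up text shows the flavour of what it handles (only headings and inline math in this sketch):

import re

def mini_rst2md(text):
    # '====' over and under a title  ->  '# title'
    text = re.sub(r'^=+$\s^([\w\s-]+)^=+$', r'# \1', text, flags=re.M)
    # :math:`x^2`  ->  $x^2$
    text = re.sub(r':math:`(.+?)`', r'$\1$', text, flags=re.DOTALL)
    return text

sample = """\
==============
Edge detection
==============
Compute :math:`\\sqrt{g_x^2 + g_y^2}` for each pixel.
"""
print(mini_rst2md(sample))
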
-
-
-def jupyter_notebook(script_blocks):
-    """Generate a Jupyter notebook file cell-by-cell
-
-    Parameters
-    ----------
-    script_blocks: list
-        script execution cells
-    """
-
-    work_notebook = jupyter_notebook_skeleton()
-    add_code_cell(work_notebook, "%matplotlib inline")
-    fill_notebook(work_notebook, script_blocks)
-
-    return work_notebook
-
-
-def add_code_cell(work_notebook, code):
-    """Add a code cell to the notebook
-
-    Parameters
-    ----------
-    code : str
-        Cell content
-    """
-
-    code_cell = {
-        "cell_type": "code",
-        "execution_count": None,
-        "metadata": {"collapsed": False},
-        "outputs": [],
-        "source": [code.strip()]
-    }
-    work_notebook["cells"].append(code_cell)
-
-
-def add_markdown_cell(work_notebook, text):
-    """Add a markdown cell to the notebook
-
-    Parameters
-    ----------
-    text : str
-        Cell content
-    """
-    markdown_cell = {
-        "cell_type": "markdown",
-        "metadata": {},
-        "source": [rst2md(text)]
-    }
-    work_notebook["cells"].append(markdown_cell)
-
-
-def fill_notebook(work_notebook, script_blocks):
-    """Writes the Jupyter notebook cells
-
-    Parameters
-    ----------
-    script_blocks : list of tuples
-    """
-
-    for blabel, bcontent in script_blocks:
-        if blabel == 'code':
-            add_code_cell(work_notebook, bcontent)
-        else:
-            add_markdown_cell(work_notebook, bcontent + '\n')
-
-
-def save_notebook(work_notebook, write_file):
-    """Saves the Jupyter work_notebook to write_file"""
-    with open(write_file, 'w') as out_nb:
-        json.dump(work_notebook, out_nb, indent=2)
-
-
-###############################################################################
-# Notebook shell utility
-
-def python_to_jupyter_cli(args=None, namespace=None):
-    """Exposes the jupyter notebook renderer to the command line
-
-    Takes the same arguments as ArgumentParser.parse_args
-    """
-    parser = argparse.ArgumentParser(
-        description='Sphinx-Gallery Notebook converter')
-    parser.add_argument('python_src_file', nargs='+',
-                        help='Input Python file script to convert. '
-                        'Supports multiple files and shell wildcards'
-                        ' (e.g. *.py)')
-    args = parser.parse_args(args, namespace)
-
-    for src_file in args.python_src_file:
-        blocks = split_code_and_text_blocks(src_file)
-        print('Converting {0}'.format(src_file))
-        example_nb = jupyter_notebook(blocks)
-        save_notebook(example_nb, src_file.replace('.py', '.ipynb'))
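
A Jupyter notebook is ultimately just JSON with a list of cells, which is why the module above can build one by hand. The smallest file that nbformat 4 tooling will open looks roughly like this (a sketch, not the exact skeleton that was removed):

import json

notebook = {
    "nbformat": 4,
    "nbformat_minor": 0,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# A tiny example"]},
        {"cell_type": "code", "execution_count": None, "metadata": {},
         "outputs": [], "source": ["print('hello')"]},
    ],
}

with open("tiny_example.ipynb", "w") as out_nb:   # hypothetical output path
    json.dump(notebook, out_nb, indent=2)
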
diff --git a/doc/ext/sphinx_gallery/py_source_parser.py b/doc/ext/sphinx_gallery/py_source_parser.py
deleted file mode 100644
index d397087..0000000
--- a/doc/ext/sphinx_gallery/py_source_parser.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# -*- coding: utf-8 -*-
-r"""
-Parser for python source files
-==============================
-"""
-# Created Sun Nov 27 14:03:07 2016
-# Author: Óscar Nájera
-
-from __future__ import division, absolute_import, print_function
-import ast
-import re
-from textwrap import dedent
-
-SYNTAX_ERROR_DOCSTRING = """
-SyntaxError
-===========
-
-Example script with invalid Python syntax
-"""
-
-
-def get_docstring_and_rest(filename):
-    """Separate `filename` content between docstring and the rest
-
-    Strongly inspired from ast.get_docstring.
-
-    Returns
-    -------
-    docstring: str
-        docstring of `filename`
-    rest: str
-        `filename` content without the docstring
-    """
-    # can't use codecs.open(filename, 'r', 'utf-8') here b/c ast doesn't
-    # seem to work with unicode strings in Python2.7
-    # "SyntaxError: encoding declaration in Unicode string"
-    with open(filename, 'rb') as fid:
-        content = fid.read()
-    # change from Windows format to UNIX for uniformity
-    content = content.replace(b'\r\n', b'\n')
-
-    try:
-        node = ast.parse(content)
-    except SyntaxError:
-        return SYNTAX_ERROR_DOCSTRING, content.decode('utf-8')
-
-    if not isinstance(node, ast.Module):
-        raise TypeError("This function only supports modules. "
-                        "You provided {0}".format(node.__class__.__name__))
-    if node.body and isinstance(node.body[0], ast.Expr) and \
-       isinstance(node.body[0].value, ast.Str):
-        docstring_node = node.body[0]
-        docstring = docstring_node.value.s
-        if hasattr(docstring, 'decode'):  # python2.7
-            docstring = docstring.decode('utf-8')
-        # This gets the content of the file after the docstring's last line
-        # Note: 'maxsplit' argument is not a keyword argument in python2
-        rest = content.decode('utf-8').split('\n', docstring_node.lineno)[-1]
-        return docstring, rest
-    else:
-        raise ValueError(('Could not find docstring in file "{0}". '
-                          'A docstring is required by sphinx-gallery')
-                         .format(filename))
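
The split performed above relies on the `ast` module so the example never has to be imported. On Python 3.8+ the same idea fits in a few lines (using `end_lineno`, which the deleted code could not assume):

import ast

source = '''"""A module docstring spanning
two lines."""
x = 1
print(x)
'''

node = ast.parse(source)
docstring = ast.get_docstring(node)
# everything after the last line of the docstring expression
rest = source.split('\n', node.body[0].end_lineno)[-1]
print(repr(docstring))
print(rest)
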
-
-
-def split_code_and_text_blocks(source_file):
-    """Return list with source file separated into code and text blocks.
-
-    Returns
-    -------
-    blocks : list of (label, content)
-        List where each element is a tuple with the label ('text' or 'code'),
-        and content string of block.
-    """
-    docstring, rest_of_content = get_docstring_and_rest(source_file)
-    blocks = [('text', docstring)]
-
-    pattern = re.compile(
-        r'(?P<header_line>^#{20,}.*)\s(?P<text_content>(?:^#.*\s)*)',
-        flags=re.M)
-
-    pos_so_far = 0
-    for match in re.finditer(pattern, rest_of_content):
-        match_start_pos, match_end_pos = match.span()
-        code_block_content = rest_of_content[pos_so_far:match_start_pos]
-        text_content = match.group('text_content')
-        sub_pat = re.compile('^#', flags=re.M)
-        text_block_content = dedent(re.sub(sub_pat, '', text_content)).lstrip()
-        if code_block_content.strip():
-            blocks.append(('code', code_block_content))
-        if text_block_content.strip():
-            blocks.append(('text', text_block_content))
-        pos_so_far = match_end_pos
-
-    remaining_content = rest_of_content[pos_so_far:]
-    if remaining_content.strip():
-        blocks.append(('code', remaining_content))
-
-    return blocks
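
The splitter above is what gives sphinx-gallery examples their notebook-like structure: a line of 20 or more `#` characters followed by `#`-prefixed comment lines opens a text block, and everything else stays code. A tiny illustration of the delimiter regex on a made-up example:

import re

example = """\
a = 1
##############################################################################
# This comment block becomes a text cell in the generated rst and notebook.
b = a + 1
"""

pattern = re.compile(
    r'(?P<header_line>^#{20,}.*)\s(?P<text_content>(?:^#.*\s)*)', flags=re.M)
for match in pattern.finditer(example):
    print('text block:', match.group('text_content').lstrip('# ').rstrip())
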
diff --git a/doc/release/contribs.py b/doc/release/contribs.py
index d6dcc49..5b2d973 100755
--- a/doc/release/contribs.py
+++ b/doc/release/contribs.py
@@ -13,7 +13,7 @@ tag = sys.argv[1]
 def call(cmd):
     return subprocess.check_output(shlex.split(cmd), universal_newlines=True).split('\n')
 
-tag_date = call("git show --format='%%ci' %s" % tag)[0]
+tag_date = call("git log -n1 --format='%%ci' %s" % tag)[0]
 print("Release %s was on %s\n" % (tag, tag_date))
 
 merges = call("git log --since='%s' --merges --format='>>>%%B' --reverse" % tag_date)
diff --git a/doc/release/release_0.13.rst b/doc/release/release_0.13.rst
index c54aaed..a0726db 100644
--- a/doc/release/release_0.13.rst
+++ b/doc/release/release_0.13.rst
@@ -1,3 +1,21 @@
+Announcement: scikit-image 0.13.1
+=================================
+
+scikit-image 0.13.1 is a bug-fix and compatibility update. See below for
+the many new features in 0.13.0.
+
+The main contribution in 0.13.1 is Jarrod Millman's valiant work to ensure
+scikit-image works with both NetworkX 1.11 and 2.0 (#2766). Additional updates
+include:
+
+- Bug fix in similarity transform estimation, by GitHub user @zhongzyd (#2690)
+- Bug fixes in ``skimage.util.plot_matches`` and ``denoise_wavelet``,
+  by Gregory Lee (#2650, #2640)
+- Documentation updates by Egor Panfilov (#2716) and Jirka Borovec (#2524)
+- Documentation build fixes by Gregory Lee (#2666, #2731), Nelle
+  Varoquaux (#2722), and Stéfan van der Walt (#2723, #2810)
+
+
 Announcement: scikit-image 0.13.0
 =================================
 
diff --git a/doc/source/_templates/localtoc.html b/doc/source/_templates/localtoc.html
index 6866649..b789650 100644
--- a/doc/source/_templates/localtoc.html
+++ b/doc/source/_templates/localtoc.html
@@ -1,4 +1,4 @@
-{% if pagename != 'index' %}
+<!-- {% if pagename != 'index' %}
 
     {%- if display_toc %}
         <h4 class="sidebar-box-heading">Contents</h4>
@@ -8,3 +8,4 @@
     {%- endif %}
 
 {% endif %}
+ -->
\ No newline at end of file
diff --git a/doc/source/_templates/navbar.html b/doc/source/_templates/navbar.html
index 683744c..772ae0d 100644
--- a/doc/source/_templates/navbar.html
+++ b/doc/source/_templates/navbar.html
@@ -1,4 +1,3 @@
-<li><a href="/">Home</a></li>
 <li><a href="/download.html">Download</a></li>
 <li><a href="/docs/dev/auto_examples">Gallery</a></li>
 <li><a href="/docs/dev">Documentation</a></li>
diff --git a/doc/source/_templates/navigation.html b/doc/source/_templates/navigation.html
index 6624469..8b13789 100644
--- a/doc/source/_templates/navigation.html
+++ b/doc/source/_templates/navigation.html
@@ -1,22 +1 @@
-<h4 class="sidebar-box-heading">{{ _('Navigation') }}</h4>
-<div class="well sidebar-box">
-    <ul class="nav nav-list">
-        <li><a href="{{ pathto(master_doc) }}">Documentation Home</a></li>
-    </ul>
-</div>
-{%- if prev %}
-    <h4 class="sidebar-box-heading">{{ _('Previous topic') }}</h4>
-    <div class="well sidebar-box">
-        <ul class="nav nav-list">
-            <li><a href="{{ prev.link|e }}" title="{{ _('previous chapter') }}">{{ prev.title }}</a></li>
-        </ul>
-    </div>
-{%- endif %}
-{%- if next %}
-    <h4 class="sidebar-box-heading">{{ _('Next topic') }}</h4>
-    <div class="well sidebar-box">
-        <ul class="nav nav-list">
-            <li><a href="{{ next.link|e }}" title="{{ _('next chapter') }}">{{ next.title }}</a></li>
-        </ul>
-    </div>
-{%- endif %}
+
diff --git a/doc/source/_templates/versions.html b/doc/source/_templates/versions.html
index 6dbb196..e19f1ef 100644
--- a/doc/source/_templates/versions.html
+++ b/doc/source/_templates/versions.html
@@ -1,9 +1,19 @@
-<h4 class="sidebar-box-heading">{{ _('Versions') }}</h4>
-<div class="well sidebar-box">
-    <ul class="nav nav-list">
+<div class="well">
+    <strong>Docs for {{ release }}<br></strong>
+
+    <a id="other">All versions</a>
+
+    <ul id="versionList" style="display: none;">
         <script src="{{ pathto('_static/', 1) }}docversions.js"></script>
         <script type="text/javascript">
             insert_version_links();
         </script>
     </ul>
-</div>
+
+ </div>
+
+<script type="text/javascript">
+	$("#other").click(function() {
+		$("#versionList").toggle();
+	});
+</script>
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 3a20239..c68fc54 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -48,7 +48,7 @@ sphinx_gallery_conf = {
     'examples_dirs' : '../examples',
     # path where to save gallery generated examples
     'gallery_dirs'  : 'auto_examples',
-    'mod_example_dir': 'api',
+    'backreferences_dir': 'api',
     'reference_url'     : {
             'skimage': None,
             'matplotlib': 'http://matplotlib.org',
diff --git a/doc/source/themes/scikit-image/layout.html b/doc/source/themes/scikit-image/layout.html
index 5ac07fa..1c09bff 100644
--- a/doc/source/themes/scikit-image/layout.html
+++ b/doc/source/themes/scikit-image/layout.html
@@ -89,14 +89,14 @@
         </div>
     </div>
     <div class="row">
-        <div class="span9">
-            {% block body %}{% endblock %}
-        </div>
         <div class="span3">
             {%- for sidebartemplate in sidebars %}
                 {%- include sidebartemplate %}
             {%- endfor %}
         </div>
+        <div class="span9">
+            {% block body %}{% endblock %}
+        </div>
     </div>
     <div class="well footer">
         <small>
diff --git a/doc/source/themes/scikit-image/static/css/custom.css b/doc/source/themes/scikit-image/static/css/custom.css
index bfd84dd..5ed2196 100644
--- a/doc/source/themes/scikit-image/static/css/custom.css
+++ b/doc/source/themes/scikit-image/static/css/custom.css
@@ -183,7 +183,7 @@ dl.class dd, dl.function dd {
     padding: 10px;
 }
 .docutils.field-list th {
-    background-color: #eee;
+    background-color: #f3f3f3;
     padding: 10px;
     text-align: left;
     vertical-align: top;
@@ -243,3 +243,58 @@ p.admonition-title {
     border-color: #bce8f1;
 /*    padding: 1em;*/
 }
+
+.viewcode-link {
+    float: right;
+    font-family: monospace;
+}
+
+code.docutils.literal {
+    background-color: transparent;
+    border: none;
+}
+
+td.field-body blockquote {
+    border-left: none;
+    margin-bottom: 0.5em;
+}
+
+td.field-body p {
+    margin-bottom: 0.3em;
+}
+
+/*
+Override some default settings in sphinx-gallery's gallery.css.
+
+The !important property is necessary to ensure these settings take precedence
+over the ones bundled with sphinx-gallery.
+*/
+.sphx-glr-thumbcontainer {
+  border: solid #d6d6d6 1px !important;
+}
+
+div.sphx-glr-download {
+  width: auto !important;
+}
+
+div.sphx-glr-footer {
+    text-align: left  !important;
+}
+
+div.sphx-glr-download a {
+  background-color: #d9edf7 !important;
+  border: 1px solid #bce8f1 !important;
+  background-image: none !important;
+}
+
+div.sphx-glr-download code {
+  color: #3a87ad !important;
+}
+
+div.sphx-glr-download a:hover {
+  background-color: #d9edf7 !important;
+}
+
+p.sphx-glr-signature a.reference.external {
+  display: none !important;
+}
diff --git a/doc/tools/apigen.py b/doc/tools/apigen.py
index 61da3fe..98ccc9b 100644
--- a/doc/tools/apigen.py
+++ b/doc/tools/apigen.py
@@ -21,7 +21,7 @@ is an MIT-licensed project.
 import os
 import re
 
-from types import BuiltinFunctionType, FunctionType
+from types import BuiltinFunctionType, FunctionType, ModuleType
 
 # suppress print statements (warnings for empty files)
 DEBUG = True
@@ -199,12 +199,15 @@ class ApiDocWriter(object):
             A list of (public) function names in the module.
         classes : list of str
             A list of (public) class names in the module.
+        submodules : list of str
+            A list of (public) submodule names in the module.
         """
         mod = __import__(uri, fromlist=[uri.split('.')[-1]])
         # find all public objects in the module.
         obj_strs = [obj for obj in dir(mod) if not obj.startswith('_')]
         functions = []
         classes = []
+        submodules = []
         for obj_str in obj_strs:
             # find the actual object from its string representation
             if obj_str not in mod.__dict__:
@@ -214,6 +217,8 @@ class ApiDocWriter(object):
             # figure out if obj is a function or class
             if isinstance(obj, (FunctionType, BuiltinFunctionType)):
                 functions.append(obj_str)
+            elif isinstance(obj, ModuleType):
+                submodules.append(obj_str)
             else:
                 try:
                     issubclass(obj, object)
@@ -221,7 +226,7 @@ class ApiDocWriter(object):
                 except TypeError:
                     # not a function or class
                     pass
-        return functions, classes
+        return functions, classes, submodules
 
     def _parse_lines(self, linesource):
         ''' Parse lines of text for functions and classes '''
@@ -258,9 +263,9 @@ class ApiDocWriter(object):
             Contents of API doc
         '''
         # get the names of all classes and functions
-        functions, classes = self._parse_module_with_import(uri)
-        if not len(functions) and not len(classes) and DEBUG:
-            print('WARNING: Empty -', uri)  # dbg
+        functions, classes, submodules = self._parse_module_with_import(uri)
+        if not (len(functions) or len(classes) or len(submodules)) and DEBUG:
+            print('WARNING: Empty -', uri)
             return ''
 
         # Make a shorter version of the uri that omits the package name for
@@ -286,6 +291,9 @@ class ApiDocWriter(object):
         for c in classes:
             ad += '   ' + uri + '.' + c + '\n'
         ad += '\n'
+        for m in submodules:
+            ad += '    ' + uri + '.' + m + '\n'
+        ad += '\n'
 
         for f in functions:
             # must NOT exclude from index to keep cross-refs working
@@ -389,7 +397,9 @@ class ApiDocWriter(object):
     def write_modules_api(self, modules, outdir):
         # write the list
         written_modules = []
-        for m in modules:
+        public_modules = [m for m in modules
+                          if not m.split('.')[-1].startswith('_')]
+        for m in public_modules:
             api_str = self.generate_api_doc(m)
             if not api_str:
                 continue
@@ -423,7 +433,7 @@ class ApiDocWriter(object):
             os.mkdir(outdir)
         # compose list of modules
         modules = self.discover_modules()
-        self.write_modules_api(modules,outdir)
+        self.write_modules_api(modules, outdir)
 
     def write_index(self, outdir, froot='gen', relative_to=None):
         """Make a reST API index file from written files
@@ -457,9 +467,34 @@ class ApiDocWriter(object):
         w = idx.write
         w('.. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n')
 
-        title = "API Reference"
+        # Module-name depth decides the display: show `skimage` itself,
+        # show only `submodule` for `skimage.submodule`, skip anything deeper.
+
+        title = "API Reference for skimage |version|"
         w(title + "\n")
         w("=" * len(title) + "\n\n")
+
+        subtitle = "Submodules"
+        w(subtitle + "\n")
+        w("-" * len(subtitle) + "\n\n")
+
+        for f in self.written_modules:
+            module_name = f.split('.')
+            if len(module_name) > 2:
+                continue
+            elif len(module_name) == 1:
+                module_name = module_name[0]
+                prefix = "-"
+            elif len(module_name) == 2:
+                module_name = module_name[1]
+                prefix = "\n  -"
+            w('{0} `{1} <{2}.html>`__\n'.format(prefix, module_name, f))
+        w('\n')
+
+        subtitle = "Submodule Contents"
+        w(subtitle + "\n")
+        w("-" * len(subtitle) + "\n\n")
+
         w('.. toctree::\n\n')
         for f in self.written_modules:
             w('   %s\n' % os.path.join(relpath,f))
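
The `_parse_module_with_import` change above sorts a module's public names into
functions, classes, and submodules so that submodules can be listed on the
generated API page. A minimal sketch of that classification, using
`skimage.morphology` purely as an example module and `isinstance(obj, type)`
in place of the patch's try/except `issubclass` check:

    from types import BuiltinFunctionType, FunctionType, ModuleType

    import skimage.morphology as mod  # any package module works as an example

    functions, classes, submodules = [], [], []
    for name in dir(mod):
        if name.startswith('_'):
            continue                      # skip private names
        obj = getattr(mod, name)
        if isinstance(obj, (FunctionType, BuiltinFunctionType)):
            functions.append(name)
        elif isinstance(obj, ModuleType):
            submodules.append(name)       # the new case added by this patch
        elif isinstance(obj, type):
            classes.append(name)

    print(sorted(functions), sorted(classes), sorted(submodules))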
diff --git a/skimage/__init__.py b/skimage/__init__.py
index 4b3afa8..2b67800 100644
--- a/skimage/__init__.py
+++ b/skimage/__init__.py
@@ -65,7 +65,7 @@ import sys
 pkg_dir = osp.abspath(osp.dirname(__file__))
 data_dir = osp.join(pkg_dir, 'data')
 
-__version__ = '0.13.0'
+__version__ = '0.13.1'
 
 try:
     imp.find_module('nose')
diff --git a/skimage/draw/draw.py b/skimage/draw/draw.py
index c6d6d99..3ad6e5a 100644
--- a/skimage/draw/draw.py
+++ b/skimage/draw/draw.py
@@ -9,7 +9,7 @@ from ._draw import (_coords_inside_image, _line, _line_aa,
 
 
 def _ellipse_in_shape(shape, center, radii, rotation=0.):
-    """ Generate coordinates of points within ellipse bounded by shape.
+    """Generate coordinates of points within ellipse bounded by shape.
 
     Parameters
     ----------
@@ -91,6 +91,22 @@ def ellipse(r, c, r_radius, c_radius, shape=None, rotation=0.):
         ((x * cos(alpha) + y * sin(alpha)) / x_radius) ** 2 +
         ((x * sin(alpha) - y * cos(alpha)) / y_radius) ** 2 = 1
 
+
+    Note that when `shape` is not given, the returned coordinates may also
+    include negative values, which is valid on the plane. Using such
+    coordinates to index an image, however, wraps around to the opposite
+    side, because ``image[-1, -1] == image[end-1, end-1]``.
+
+    >>> rr, cc = ellipse(1, 2, 3, 6)
+    >>> img = np.zeros((6, 12), dtype=np.uint8)
+    >>> img[rr, cc] = 1
+    >>> img
+    array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
+           [1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
+           [1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
+           [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1],
+           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+           [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1]], dtype=uint8)
     """
 
     center = np.array([r, c])
@@ -598,6 +614,26 @@ def ellipse_perimeter(r, c, r_radius, c_radius, orientation=0, shape=None):
            [0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
            [0, 0, 0, 1, 1, 1, 1, 1, 0, 0],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
+
+
+    Note that when `shape` is not given, the returned coordinates may also
+    include negative values, which is valid on the plane. Using such
+    coordinates to index an image, however, wraps around to the opposite
+    side, because ``image[-1, -1] == image[end-1, end-1]``.
+
+    >>> rr, cc = ellipse_perimeter(2, 3, 4, 5)
+    >>> img = np.zeros((9, 12), dtype=np.uint8)
+    >>> img[rr, cc] = 1
+    >>> img
+    array([[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
+           [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
+           [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
+           [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
+           [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
+           [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
+           [0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
+           [0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
+           [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]], dtype=uint8)
     """
     return _ellipse_perimeter(r, c, r_radius, c_radius, orientation, shape)
 
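The new docstring notes above show the wrap-around that negative coordinates
cause when `shape` is omitted. Passing `shape` instead discards coordinates
outside the image, so nothing appears on the opposite side; a small sketch
(the array size is arbitrary):

    import numpy as np
    from skimage.draw import ellipse

    img = np.zeros((6, 12), dtype=np.uint8)
    # With `shape` given, out-of-image coordinates are dropped rather than
    # wrapping around through negative indexing.
    rr, cc = ellipse(1, 2, 3, 6, shape=img.shape)
    img[rr, cc] = 1
    assert rr.min() >= 0 and cc.min() >= 0
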
diff --git a/skimage/feature/util.py b/skimage/feature/util.py
index b0c6698..fdb1b67 100644
--- a/skimage/feature/util.py
+++ b/skimage/feature/util.py
@@ -114,7 +114,7 @@ def plot_matches(ax, image1, image2, keypoints1, keypoints2, matches,
         idx2 = matches[i, 1]
 
         if matches_color is None:
-            color = np.random.rand(3, 1)
+            color = np.random.rand(3)
         else:
             color = matches_color
 
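The switch from `np.random.rand(3, 1)` to `np.random.rand(3)` presumably
reflects that matplotlib expects a single RGB color as a flat sequence of
three floats; a `(3, 1)` column vector is rejected by newer matplotlib
releases. For example:

    import numpy as np
    import matplotlib.pyplot as plt

    color = np.random.rand(3)              # flat RGB triple, e.g. [0.42, 0.17, 0.93]
    plt.plot([0, 1], [0, 1], color=color)  # accepted as one color specification
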
diff --git a/skimage/future/graph/graph_cut.py b/skimage/future/graph/graph_cut.py
index e431e10..cf6f6d2 100644
--- a/skimage/future/graph/graph_cut.py
+++ b/skimage/future/graph/graph_cut.py
@@ -55,7 +55,7 @@ def cut_threshold(labels, rag, thresh, in_place=True):
         rag = rag.copy()
 
     # Because deleting edges while iterating through them produces an error.
-    to_remove = [(x, y) for x, y, d in rag.edges_iter(data=True)
+    to_remove = [(x, y) for x, y, d in rag.edges(data=True)
                  if d['weight'] >= thresh]
     rag.remove_edges_from(to_remove)
 
@@ -125,14 +125,14 @@ def cut_normalized(labels, rag, thresh=0.001, num_cuts=10, in_place=True,
     if not in_place:
         rag = rag.copy()
 
-    for node in rag.nodes_iter():
+    for node in rag.nodes():
         rag.add_edge(node, node, weight=max_edge)
 
     _ncut_relabel(rag, thresh, num_cuts)
 
     map_array = np.zeros(labels.max() + 1, dtype=labels.dtype)
     # Mapping from old labels to new
-    for n, d in rag.nodes_iter(data=True):
+    for n, d in rag.nodes(data=True):
         map_array[d['labels']] = d['ncut label']
 
     return map_array[labels]
@@ -231,7 +231,7 @@ def _label_all(rag, attr_name):
     """
     node = min(rag.nodes())
     new_label = rag.node[node]['labels'][0]
-    for n, d in rag.nodes_iter(data=True):
+    for n, d in rag.nodes(data=True):
         d[attr_name] = new_label
 
 
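The `edges_iter`/`nodes_iter` calls become plain `edges`/`nodes`, which exist
in both networkx 1.x (returning lists) and 2.x (returning views), so the same
iteration code runs on either version. A small illustration, independent of
the RAG class:

    import networkx as nx

    g = nx.Graph()
    g.add_edge(1, 2, weight=10)
    g.add_edge(2, 3, weight=30)

    # Works on networkx 1.x (list of tuples) and 2.x (edge view) alike.
    heavy = [(u, v) for u, v, d in g.edges(data=True) if d['weight'] >= 20]
    print(heavy)  # [(2, 3)]
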
diff --git a/skimage/future/graph/graph_merge.py b/skimage/future/graph/graph_merge.py
index 308f7f3..022ac31 100644
--- a/skimage/future/graph/graph_merge.py
+++ b/skimage/future/graph/graph_merge.py
@@ -42,7 +42,7 @@ def _rename_node(graph, node_id, copy_id):
     """ Rename `node_id` in `graph` to `copy_id`. """
 
     graph._add_node_silent(copy_id)
-    graph.node[copy_id] = graph.node[node_id]
+    graph.node[copy_id].update(graph.node[node_id])
 
     for nbr in graph.neighbors(node_id):
         wt = graph[node_id][nbr]['weight']
@@ -96,7 +96,7 @@ def merge_hierarchical(labels, rag, thresh, rag_copy, in_place_merge,
         rag = rag.copy()
 
     edge_heap = []
-    for n1, n2, data in rag.edges_iter(data=True):
+    for n1, n2, data in rag.edges(data=True):
         # Push a valid edge in the heap
         wt = data['weight']
         heap_item = [wt, n1, n2, True]
@@ -130,7 +130,7 @@ def merge_hierarchical(labels, rag, thresh, rag_copy, in_place_merge,
             _revalidate_node_edges(rag, new_id, edge_heap)
 
     label_map = np.arange(labels.max() + 1)
-    for ix, (n, d) in enumerate(rag.nodes_iter(data=True)):
+    for ix, (n, d) in enumerate(rag.nodes(data=True)):
         for label in d['labels']:
             label_map[label] = ix
 
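Using `.update()` instead of rebinding `graph.node[copy_id]` mutates the
existing attribute dictionary, which matters because networkx 2.x exposes node
attributes through a view that does not allow assigning a whole new
dictionary. A sketch, assuming a networkx version contemporary with this
release (where `Graph.node` is still available):

    import networkx as nx

    g = nx.Graph()
    g.add_node('a', labels=[1, 2])
    g.add_node('b')

    # In-place update works under networkx 1.x and 2.x;
    # `g.node['b'] = g.node['a']` raises TypeError under 2.x node views.
    g.node['b'].update(g.node['a'])
    print(g.node['b'])  # {'labels': [1, 2]}
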
diff --git a/skimage/future/graph/rag.py b/skimage/future/graph/rag.py
index 6f9a992..31619bb 100644
--- a/skimage/future/graph/rag.py
+++ b/skimage/future/graph/rag.py
@@ -139,7 +139,7 @@ class RAG(nx.Graph):
         if self.number_of_nodes() == 0:
             self.max_id = 0
         else:
-            self.max_id = max(self.nodes_iter())
+            self.max_id = max(self.nodes())
 
         if label_image is not None:
             fp = ndi.generate_binary_structure(label_image.ndim, connectivity)
@@ -252,6 +252,28 @@ class RAG(nx.Graph):
         g.max_id = self.max_id
         return g
 
+    def fresh_copy(self):
+        """Return a fresh copy graph with the same data structure.
+
+        A fresh copy has no nodes, edges or graph attributes. It is
+        the same data structure as the current graph. This method is
+        typically used to create an empty version of the graph.
+
+        This is required when subclassing Graph with networkx v2 and
+        causes no problems for v1. The networkx 1.x-to-2.x migration
+        guide gives more detail::
+
+            With the new GraphViews (SubGraph, ReversedGraph, etc)
+            you can't assume that ``G.__class__()`` will create a new
+            instance of the same graph type as ``G``. In fact, the
+            call signature for ``__class__`` differs depending on
+            whether ``G`` is a view or a base class. For v2.x you
+            should use ``G.fresh_copy()`` to create a null graph of
+            the correct type---ready to fill with nodes and edges.
+
+        """
+        return RAG()
+
     def next_id(self):
         """Returns the `id` for the new node to be inserted.
 
@@ -272,30 +294,6 @@ class RAG(nx.Graph):
         .. seealso:: :func:`networkx.Graph.add_node`."""
         super(RAG, self).add_node(n)
 
-    def nodes_iter(self, *args, **kwargs):
-        """ Iterate over nodes
-
-        For compatibility with older versions of networkx.  Versions <= 1.11
-        have an ``nodes_iter`` method, but later versions return an iterator from
-        the nodes method, and lack ``nodes_iter``.
-        """
-        try:
-            return super(RAG, self).nodes_iter(*args, **kwargs)
-        except AttributeError:
-            return super(RAG, self).nodes(*args, **kwargs)
-
-    def edges_iter(self, *args, **kwargs):
-        """ Iterate over edges
-
-        For compatibility with older versions of networkx.  Versions <= 1.11
-        have an ``edges_iter`` method, but later versions return an iterator from
-        the edges method, and lack ``edges_iter``.
-        """
-        try:
-            return super(RAG, self).edges_iter(*args, **kwargs)
-        except AttributeError:
-            return super(RAG, self).edges(*args, **kwargs)
-
 
 def rag_mean_color(image, labels, connectivity=2, mode='distance',
                    sigma=255.0):
@@ -375,7 +373,7 @@ def rag_mean_color(image, labels, connectivity=2, mode='distance',
         graph.node[n]['mean color'] = (graph.node[n]['total color'] /
                                        graph.node[n]['pixel count'])
 
-    for x, y, d in graph.edges_iter(data=True):
+    for x, y, d in graph.edges(data=True):
         diff = graph.node[x]['mean color'] - graph.node[y]['mean color']
         diff = np.linalg.norm(diff)
         if mode == 'similarity':
@@ -527,7 +525,7 @@ def show_rag(labels, rag, img, border_color='black', edge_width=1.5,
     # offset is 1 so that regionprops does not ignore 0
     offset = 1
     map_array = np.arange(labels.max() + 1)
-    for n, d in rag.nodes_iter(data=True):
+    for n, d in rag.nodes(data=True):
         for label in d['labels']:
             map_array[label] = offset
         offset += 1
@@ -535,7 +533,7 @@ def show_rag(labels, rag, img, border_color='black', edge_width=1.5,
     rag_labels = map_array[labels]
     regions = measure.regionprops(rag_labels)
 
-    for (n, data), region in zip(rag.nodes_iter(data=True), regions):
+    for (n, data), region in zip(rag.nodes(data=True), regions):
         data['centroid'] = tuple(map(int, region['centroid']))
 
     cc = colors.ColorConverter()
@@ -549,10 +547,10 @@ def show_rag(labels, rag, img, border_color='black', edge_width=1.5,
     # The tuple[::-1] syntax reverses a tuple as matplotlib uses (x,y)
     # convention while skimage uses (row, column)
     lines = [[rag.node[n1]['centroid'][::-1], rag.node[n2]['centroid'][::-1]]
-             for (n1, n2) in rag.edges_iter()]
+             for (n1, n2) in rag.edges()]
 
     lc = LineCollection(lines, linewidths=edge_width, cmap=edge_cmap)
-    edge_weights = [d['weight'] for x, y, d in rag.edges_iter(data=True)]
+    edge_weights = [d['weight'] for x, y, d in rag.edges(data=True)]
     lc.set_array(np.array(edge_weights))
     ax.add_collection(lc)
 
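As the new `fresh_copy` docstring explains, networkx 2.x calls this method
whenever it needs an empty graph of the same subclass. A minimal check of the
behaviour the override guarantees (requires this patched version of skimage):

    from skimage.future.graph import RAG

    rag = RAG()
    rag.add_edge(1, 2, weight=5)

    empty = rag.fresh_copy()
    assert isinstance(empty, RAG)
    assert empty.number_of_nodes() == 0 and empty.number_of_edges() == 0
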
diff --git a/skimage/future/graph/tests/test_rag.py b/skimage/future/graph/tests/test_rag.py
index 55a63c2..09fa5b0 100644
--- a/skimage/future/graph/tests/test_rag.py
+++ b/skimage/future/graph/tests/test_rag.py
@@ -32,14 +32,14 @@ def test_rag_merge():
     # We merge nodes and ensure that the minimum weight is chosen
     # when there is a conflict.
     g.merge_nodes(0, 2)
-    assert g.edge[1][2]['weight'] == 10
-    assert g.edge[2][3]['weight'] == 30
+    assert g.adj[1][2]['weight'] == 10
+    assert g.adj[2][3]['weight'] == 30
 
     # We specify `max_edge` as `weight_func` as ensure that maximum
     # weight is chosen in case on conflict
     gc.merge_nodes(0, 2, weight_func=max_edge)
-    assert gc.edge[1][2]['weight'] == 20
-    assert gc.edge[2][3]['weight'] == 40
+    assert gc.adj[1][2]['weight'] == 20
+    assert gc.adj[2][3]['weight'] == 40
 
     g.merge_nodes(1, 4)
     g.merge_nodes(2, 3)
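
The assertions switch from `g.edge` to `g.adj` because `Graph.edge` was
removed in networkx 2.x, while `Graph.adj` exposes the same nested adjacency
mapping on both major versions:

    import networkx as nx

    g = nx.Graph()
    g.add_edge(1, 2, weight=10)

    # g.adj[1][2] is the edge-attribute dict on networkx 1.x and 2.x;
    # g.edge[1][2] exists only on 1.x.
    assert g.adj[1][2]['weight'] == 10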
diff --git a/skimage/io/_io.py b/skimage/io/_io.py
index 35a85f6..ac5b21a 100644
--- a/skimage/io/_io.py
+++ b/skimage/io/_io.py
@@ -33,9 +33,11 @@ def imread(fname, as_grey=False, plugin=None, flatten=None,
 
     Other Parameters
     ----------------
+    plugin_args : keywords
+        Passed to the given plugin.
     flatten : bool
         Backward compatible keyword, superseded by `as_grey`.
 
     Returns
     -------
     img_array : ndarray
@@ -43,11 +48,6 @@ def imread(fname, as_grey=False, plugin=None, flatten=None,
         third dimension, such that a grey-image is MxN, an
         RGB-image MxNx3 and an RGBA-image MxNx4.
 
-    Other parameters
-    ----------------
-    plugin_args : keywords
-        Passed to the given plugin.
-
     """
     # Backward compatibility
     if flatten is not None:
diff --git a/skimage/measure/_ccomp.pxd b/skimage/measure/_ccomp.pxd
index b514e55..e772057 100644
--- a/skimage/measure/_ccomp.pxd
+++ b/skimage/measure/_ccomp.pxd
@@ -1,7 +1,6 @@
 """Export fast union find in Cython"""
 cimport numpy as cnp
 
-DTYPE = cnp.intp
 ctypedef cnp.intp_t DTYPE_t
 
 cdef DTYPE_t find_root(DTYPE_t *forest, DTYPE_t n) nogil
diff --git a/skimage/restoration/__init__.py b/skimage/restoration/__init__.py
index 9c55615..60b61e1 100644
--- a/skimage/restoration/__init__.py
+++ b/skimage/restoration/__init__.py
@@ -1,21 +1,6 @@
 # -*- coding: utf-8 -*-
 """Image restoration module.
 
-References
-----------
-.. [1] François Orieux, Jean-François Giovannelli, and Thomas
-       Rodet, "Bayesian estimation of regularization and point
-       spread function parameters for Wiener-Hunt deconvolution",
-       J. Opt. Soc. Am. A 27, 1593-1607 (2010)
-
-       http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-7-1593
-
-.. [2] Richardson, William Hadley, "Bayesian-Based Iterative Method of
-       Image Restoration". JOSA 62 (1): 55–59. doi:10.1364/JOSA.62.000055, 1972
-
-.. [3] B. R. Hunt "A matrix theory proof of the discrete
-       convolution theorem", IEEE Trans. on Audio and
-       Electroacoustics, vol. au-19, no. 4, pp. 285-288, dec. 1971
 """
 
 from .deconvolution import wiener, unsupervised_wiener, richardson_lucy
diff --git a/skimage/restoration/_denoise.py b/skimage/restoration/_denoise.py
index 4ddacd7..e73498f 100644
--- a/skimage/restoration/_denoise.py
+++ b/skimage/restoration/_denoise.py
@@ -16,15 +16,15 @@ def denoise_bilateral(image, win_size=None, sigma_color=None, sigma_spatial=1,
     """Denoise image using bilateral filter.
 
     This is an edge-preserving, denoising filter. It averages pixels based on
-    their spatial closeness and radiometric similarity.
+    their spatial closeness and radiometric similarity [1]_.
 
     Spatial closeness is measured by the Gaussian function of the Euclidean
     distance between two pixels and a certain standard deviation
     (`sigma_spatial`).
 
-    Radiometric similarity is measured by the Gaussian function of the Euclidean
-    distance between two color values and a certain standard deviation
-    (`sigma_color`).
+    Radiometric similarity is measured by the Gaussian function of the
+    Euclidean distance between two color values and a certain standard
+    deviation (`sigma_color`).
 
     Parameters
     ----------
@@ -132,7 +132,7 @@ def denoise_tv_bregman(image, weight, max_iter=100, eps=1e-3, isotropic=True):
     Total-variation denoising (also know as total-variation regularization)
     tries to find an image with less total-variation under the constraint
     of being similar to the input image, which is controlled by the
-    regularization parameter.
+    regularization parameter ([1]_, [2]_, [3]_, [4]_).
 
     Parameters
     ----------
@@ -569,7 +569,8 @@ def denoise_wavelet(img, sigma=None, wavelet='db1', mode='soft',
                 channel = out[..., i] - min
                 channel /= max - min
                 out[..., i] = denoise_wavelet(channel, sigma=sigma[i],
-                                              wavelet=wavelet, mode=mode)
+                                              wavelet=wavelet, mode=mode,
+                                              wavelet_levels=wavelet_levels)
 
                 out[..., i] = out[..., i] * (max - min)
                 out[..., i] += min
@@ -582,8 +583,7 @@ def denoise_wavelet(img, sigma=None, wavelet='db1', mode='soft',
                                                  wavelet_levels=wavelet_levels)
 
     else:
-        out = _wavelet_threshold(img, wavelet=wavelet, mode=mode,
-                                 sigma=sigma,
+        out = _wavelet_threshold(img, wavelet=wavelet, mode=mode, sigma=sigma,
                                  wavelet_levels=wavelet_levels)
 
     clip_range = (-1, 1) if img.min() < 0 else (0, 1)
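
The fix above threads `wavelet_levels` through the per-channel recursive call,
so the requested decomposition depth is now honoured for multichannel input as
well. A usage sketch with arbitrary parameter values:

    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_wavelet

    img = img_as_float(data.astronaut())
    noisy = np.clip(img + 0.1 * np.random.standard_normal(img.shape), 0, 1)

    # After this fix the requested number of levels is used for every channel;
    # previously the per-channel call silently fell back to the default depth.
    denoised = denoise_wavelet(noisy, multichannel=True, wavelet_levels=3)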
diff --git a/skimage/restoration/inpaint.py b/skimage/restoration/inpaint.py
index 90a1f81..b2d66a9 100644
--- a/skimage/restoration/inpaint.py
+++ b/skimage/restoration/inpaint.py
@@ -97,8 +97,7 @@ def inpaint_biharmonic(img, mask, multichannel=False):
     References
     ----------
     .. [1]  N.S.Hoang, S.B.Damelin, "On surface completion and image inpainting
-            by biharmonic functions: numerical aspects",
-            http://www.ima.umn.edu/~damelin/biharmonic
+            by biharmonic functions: numerical aspects"
 
     Examples
     --------
diff --git a/skimage/restoration/tests/test_denoise.py b/skimage/restoration/tests/test_denoise.py
index 9ec5710..4d5883f 100644
--- a/skimage/restoration/tests/test_denoise.py
+++ b/skimage/restoration/tests/test_denoise.py
@@ -157,7 +157,7 @@ def test_denoise_tv_bregman_3d():
 
 
 def test_denoise_bilateral_2d():
-    img = checkerboard_gray.copy()[:50,:50]
+    img = checkerboard_gray.copy()[:50, :50]
     # add some random noise
     img += 0.5 * img.std() * np.random.rand(*img.shape)
     img = np.clip(img, 0, 1)
@@ -221,7 +221,7 @@ def test_denoise_bilateral_multidimensional():
 
 
 def test_denoise_bilateral_nan():
-    img = np.NaN + np.empty((50, 50))
+    img = np.full((50, 50), np.NaN)
     out = restoration.denoise_bilateral(img, multichannel=False)
     assert_equal(img, out)
 
@@ -343,26 +343,39 @@ def test_wavelet_denoising():
     astro_gray_odd = astro_gray[:, :-1]
     astro_odd = astro[:, :-1]
 
-    for img, multichannel in [(astro_gray, False), (astro_gray_odd, False),
-                              (astro_odd, True)]:
+    for img, multichannel, convert2ycbcr in [(astro_gray, False, False),
+                                             (astro_gray_odd, False, False),
+                                             (astro_odd, True, False),
+                                             (astro_odd, True, True)]:
         sigma = 0.1
         noisy = img + sigma * rstate.randn(*(img.shape))
         noisy = np.clip(noisy, 0, 1)
 
         # Verify that SNR is improved when true sigma is used
         denoised = restoration.denoise_wavelet(noisy, sigma=sigma,
-                                               multichannel=multichannel)
+                                               multichannel=multichannel,
+                                               convert2ycbcr=convert2ycbcr)
         psnr_noisy = compare_psnr(img, noisy)
         psnr_denoised = compare_psnr(img, denoised)
         assert_(psnr_denoised > psnr_noisy)
 
         # Verify that SNR is improved with internally estimated sigma
         denoised = restoration.denoise_wavelet(noisy,
-                                               multichannel=multichannel)
+                                               multichannel=multichannel,
+                                               convert2ycbcr=convert2ycbcr)
         psnr_noisy = compare_psnr(img, noisy)
         psnr_denoised = compare_psnr(img, denoised)
         assert_(psnr_denoised > psnr_noisy)
 
+        # SNR is improved less with 1 wavelet level than with the default.
+        denoised_1 = restoration.denoise_wavelet(noisy,
+                                                 multichannel=multichannel,
+                                                 wavelet_levels=1,
+                                                 convert2ycbcr=convert2ycbcr)
+        psnr_denoised_1 = compare_psnr(img, denoised_1)
+        assert_(psnr_denoised > psnr_denoised_1)
+        assert_(psnr_denoised_1 > psnr_noisy)
+
         # Test changing noise_std (higher threshold, so less energy in signal)
         res1 = restoration.denoise_wavelet(noisy, sigma=2*sigma,
                                            multichannel=multichannel)
@@ -485,6 +498,7 @@ def test_estimate_sigma_color():
     # default multichannel=False should raise a warning about last axis size
     assert_warns(UserWarning, restoration.estimate_sigma, img)
 
+
 def test_wavelet_denoising_args():
     """
     Some of the functions inside wavelet denoising throw an error the wrong
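
Besides the new `wavelet_levels` assertions (fewer decomposition levels should
improve PSNR less than the default), the NaN test now builds its input with
`np.full`, which states the intent directly instead of relying on
`np.NaN + np.empty(...)` propagating NaN through uninitialized values:

    import numpy as np

    nan_img = np.full((50, 50), np.nan)  # explicit constant-filled array
    assert np.isnan(nan_img).all()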
diff --git a/skimage/transform/_geometric.py b/skimage/transform/_geometric.py
index 51dc7b1..e1dc173 100644
--- a/skimage/transform/_geometric.py
+++ b/skimage/transform/_geometric.py
@@ -130,7 +130,7 @@ def _umeyama(src, dst, estimate_scale):
             T[:dim, :dim] = np.dot(U, np.dot(np.diag(d), V))
             d[dim - 1] = s
     else:
-        T[:dim, :dim] = np.dot(U, np.dot(np.diag(d), V.T))
+        T[:dim, :dim] = np.dot(U, np.dot(np.diag(d), V))
 
     if estimate_scale:
         # Eq. (41) and (42).
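
The `_umeyama` correction drops an extra transpose: `np.linalg.svd` already
returns the right singular vectors transposed (the `V` in this code is what is
usually written V^T), so multiplying by `V` as returned is what the Umeyama
formula requires. A quick numerical check of the SVD convention:

    import numpy as np

    A = np.random.rand(3, 3)
    U, S, V = np.linalg.svd(A)  # V is returned already transposed (V^T)

    # Reconstruction uses V as returned, with no additional .T:
    assert np.allclose(A, np.dot(U, np.dot(np.diag(S), V)))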
diff --git a/tools/osx_wheel_upload.sh b/tools/upload_wheels.sh
similarity index 90%
rename from tools/osx_wheel_upload.sh
rename to tools/upload_wheels.sh
index fff6153..6a87b43 100755
--- a/tools/osx_wheel_upload.sh
+++ b/tools/upload_wheels.sh
@@ -15,3 +15,4 @@ if [ "${SK_VERSION:0:1}" != 'v' ]; then
 fi
 echo "Trying download / upload of version ${SK_VERSION:1}"
 wheel-uploader -v scikit_image "${SK_VERSION:1}"
+wheel-uploader -v scikit_image -t manylinux1 "${SK_VERSION:1}"

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/skimage.git
