[Debian-l10n-commits] [translate-toolkit] branch upstream updated (a1aaa88 -> 29c1836)

Stuart Prescott stuart at moszumanska.debian.org
Mon Aug 25 14:37:45 UTC 2014


This is an automated email from the git hooks/post-receive script.

stuart pushed a change to branch upstream
in repository translate-toolkit.

      from  a1aaa88   Imported Upstream version 1.11.0+dfsg
       new  29c1836   Imported Upstream version 1.12.0+dfsg1

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "adds" were already present in the repository and have only
been added to this reference.


Summary of changes:
 MANIFEST.in                                        |   81 +
 PKG-INFO                                           |   12 +-
 README.rst                                         |   20 +-
 docs/api/misc.rst                                  |  103 --
 docs/changelog.rst                                 |   40 +-
 docs/commands/csv2po.rst                           |    8 +
 docs/commands/moz2po.rst                           |    6 +-
 docs/commands/oo2po.rst                            |    8 +-
 docs/commands/pocount.rst                          |    3 +
 docs/commands/podebug.rst                          |    2 +-
 docs/commands/pofilter.rst                         |    1 +
 docs/commands/posplit.rst                          |    2 +-
 docs/commands/poterminology.rst                    |    6 +-
 docs/conf.py                                       |   30 +-
 docs/contents.rst.inc                              |    1 +
 docs/developers/contributing.rst                   |    9 +-
 docs/developers/deprecation.rst                    |   72 +
 docs/developers/developers.rst                     |   12 +-
 docs/developers/releasing.rst                      |  227 ++-
 docs/developers/styleguide.rst                     |  133 +-
 docs/developers/testing.rst                        |  134 +-
 docs/formats/dtd.rst                               |    7 +-
 docs/formats/index.rst                             |    6 +-
 docs/formats/mo.rst                                |    2 +-
 docs/formats/php.rst                               |   13 +
 docs/formats/properties.rst                        |    7 +-
 docs/formats/rc.rst                                |    2 +-
 docs/formats/wordfast.rst                          |    2 +-
 docs/releases/1.10.0.rst                           |   11 +-
 docs/releases/1.11.0-rc1.rst                       |   13 +-
 docs/releases/1.11.0.rst                           |   19 +-
 docs/releases/1.12.0-rc1.rst                       |  164 ++
 docs/releases/1.12.0.rst                           |  182 ++
 docs/releases/1.8.1.rst                            |    2 +-
 docs/releases/1.9.0.rst                            |    4 +-
 docs/releases/dev.rst                              |   71 +
 docs/releases/index.rst                            |    9 +-
 requirements/dev.txt                               |    6 +-
 requirements/optional.txt                          |   17 +-
 requirements/recommended.txt                       |    7 +-
 requirements/required.txt                          |    7 +
 setup.cfg                                          |    5 +
 setup.py                                           |  553 +++---
 tests/cli/data/test_pocount/stderr.txt             |    4 +
 tests/cli/data/test_pocount_help/stdout.txt        |   17 +
 .../test_pocount_mutually_exclusive/stderr.txt     |    4 +
 tests/cli/data/test_pocount_nonexistant/stderr.txt |    1 +
 tests/cli/data/test_pocount_po_file/stdout.txt     |    9 +
 .../cli/data/test_pofilter_listfilters/stdout.txt  |   73 +
 tests/cli/data/test_pofilter_manpage/stdout.txt    |  102 ++
 tests/cli/data/test_prop2po/stderr.txt             |    3 +
 tests/cli/data/test_prop2po_dirs/stderr.txt        |    3 +
 tools/mozilla/buildxpi.py                          |   89 +-
 tools/mozilla/get_moz_enUS.py                      |   42 +-
 translate/__version__.py                           |    8 +-
 translate/convert/accesskey.py                     |   55 +-
 translate/convert/convert.py                       |   20 +-
 translate/convert/csv2po.py                        |   14 +-
 translate/convert/csv2tbx.py                       |    9 +-
 translate/convert/dtd2po.py                        |   25 +-
 translate/convert/factory.py                       |    1 +
 translate/convert/html2po.py                       |   14 +-
 translate/convert/ical2po.py                       |    6 +-
 translate/convert/ini2po                           |    3 +-
 translate/convert/ini2po.py                        |   62 +-
 translate/convert/json2po.py                       |    3 +-
 translate/convert/moz2po.py                        |   31 +-
 translate/convert/mozlang2po.py                    |    3 +-
 translate/convert/odf2xliff.py                     |   29 +-
 translate/convert/oo2po.py                         |   20 +-
 translate/convert/oo2xliff.py                      |   21 +-
 translate/convert/php2po.py                        |    1 -
 translate/convert/po2csv.py                        |    3 +-
 translate/convert/po2dtd.py                        |   15 +-
 translate/convert/po2html.py                       |   21 +-
 translate/convert/po2ical.py                       |    3 +-
 translate/convert/po2moz.py                        |    7 +-
 translate/convert/po2mozlang.py                    |    3 +-
 translate/convert/po2oo.py                         |   12 +-
 translate/convert/po2php.py                        |    4 +-
 translate/convert/po2prop.py                       |   82 +-
 translate/convert/po2rc.py                         |    3 +-
 translate/convert/po2tiki.py                       |    3 +-
 translate/convert/po2tmx.py                        |   15 +-
 translate/convert/po2ts.py                         |    3 +-
 translate/convert/po2txt.py                        |   12 +-
 translate/convert/po2web2py.py                     |    4 +-
 translate/convert/po2wordfast.py                   |    3 +-
 translate/convert/po2xliff.py                      |    3 +-
 translate/convert/pot2po.py                        |   10 +-
 translate/convert/prop2mozfunny.py                 |    5 +-
 translate/convert/prop2po.py                       |  141 +-
 translate/convert/rc2po.py                         |    6 +-
 translate/convert/sub2po.py                        |    5 +-
 translate/convert/symb2po.py                       |    2 +-
 translate/convert/test_accesskey.py                |   25 +
 translate/convert/test_convert.py                  |    2 +-
 translate/convert/test_csv2po.py                   |   26 +-
 translate/convert/test_dtd2po.py                   |   32 +-
 translate/convert/test_html2po.py                  |   10 +-
 translate/convert/test_json2po.py                  |    7 +-
 translate/convert/test_moz2po.py                   |    3 +-
 translate/convert/test_mozfunny2prop.py            |    4 +-
 translate/convert/test_mozlang2po.py               |   10 +-
 translate/convert/test_oo2po.py                    |   37 +-
 translate/convert/test_oo2xliff.py                 |    9 +-
 translate/convert/test_php2po.py                   |   16 +-
 translate/convert/test_po2csv.py                   |   17 +-
 translate/convert/test_po2dtd.py                   |  123 +-
 translate/convert/test_po2html.py                  |   51 +-
 translate/convert/test_po2ical.py                  |    8 +-
 translate/convert/test_po2ini.py                   |   19 +-
 translate/convert/test_po2moz.py                   |    3 +-
 translate/convert/test_po2mozlang.py               |   11 +-
 translate/convert/test_po2oo.py                    |    6 +-
 translate/convert/test_po2php.py                   |   27 +-
 translate/convert/test_po2prop.py                  |  126 +-
 translate/convert/test_po2sub.py                   |    7 +-
 translate/convert/test_po2tiki.py                  |    4 +-
 translate/convert/test_po2tmx.py                   |   52 +-
 translate/convert/test_po2ts.py                    |   13 +-
 translate/convert/test_po2txt.py                   |    7 +-
 translate/convert/test_po2xliff.py                 |   61 +-
 translate/convert/test_pot2po.py                   |   53 +-
 translate/convert/test_prop2mozfunny.py            |    8 +-
 translate/convert/test_prop2po.py                  |   87 +-
 translate/convert/test_tiki2po.py                  |    4 +-
 translate/convert/test_ts2po.py                    |    7 +-
 translate/convert/test_txt2po.py                   |    7 +-
 translate/convert/test_xliff2po.py                 |   20 +-
 translate/convert/tiki2po.py                       |    3 +-
 translate/convert/ts2po.py                         |    3 +-
 translate/convert/txt2po.py                        |    3 +-
 translate/convert/web2py2po.py                     |    2 -
 translate/convert/xliff2odf.py                     |   14 +-
 translate/convert/xliff2oo.py                      |   12 +-
 translate/convert/xliff2po.py                      |    8 +-
 translate/filters/autocorrect.py                   |    4 -
 translate/filters/checks.py                        |  130 +-
 translate/filters/decoration.py                    |    7 +-
 translate/filters/decorators.py                    |   10 +-
 translate/filters/pofilter.py                      |   10 +-
 translate/filters/prefilters.py                    |   11 +-
 translate/filters/spelling.py                      |    5 +-
 translate/filters/test_autocorrect.py              |   10 +-
 translate/filters/test_checks.py                   |  173 +-
 translate/filters/test_decoration.py               |   14 +-
 translate/filters/test_pofilter.py                 |   16 +-
 translate/i18n.py                                  |   27 -
 translate/lang/af.py                               |    6 +-
 translate/lang/ar.py                               |    2 +-
 translate/lang/common.py                           |   33 +-
 translate/lang/data.py                             |  296 ++--
 translate/lang/el.py                               |    3 +-
 translate/lang/es.py                               |    4 +-
 translate/lang/fa.py                               |    6 +-
 translate/lang/factory.py                          |    6 +-
 translate/lang/fr.py                               |    2 +-
 translate/lang/hy.py                               |    3 +-
 translate/lang/identify.py                         |    4 +-
 translate/lang/ngram.py                            |   10 +-
 translate/lang/nqo.py                              |    2 +-
 translate/lang/team.py                             |    6 +-
 translate/lang/test_af.py                          |    7 +-
 translate/lang/test_am.py                          |    2 +-
 translate/lang/test_ar.py                          |    6 +-
 translate/lang/test_common.py                      |   14 +-
 translate/lang/test_es.py                          |    2 +-
 translate/lang/test_identify.py                    |    2 +-
 translate/lang/test_km.py                          |    6 +-
 translate/lang/test_ko.py                          |    2 +-
 translate/lang/test_nqo.py                         |    6 +-
 translate/lang/test_team.py                        |    6 +-
 translate/lang/test_tr.py                          |    1 +
 translate/lang/tr.py                               |    1 +
 translate/lang/vi.py                               |    5 +-
 translate/lang/zh.py                               |    2 +-
 translate/lang/zh_cn.py                            |    2 +-
 translate/lang/zh_hk.py                            |    2 +-
 translate/lang/zh_tw.py                            |    2 +-
 translate/misc/autoencode.py                       |    9 +-
 translate/misc/context.py                          |   48 -
 translate/misc/contextlib.py                       |  199 ---
 translate/misc/decorators.py                       |   45 -
 translate/misc/deprecation.py                      |   45 +
 translate/misc/dictutils.py                        |   15 +-
 translate/misc/diff_match_patch.py                 | 1820 +-------------------
 translate/misc/file_discovery.py                   |   33 +-
 translate/misc/hash.py                             |   30 -
 translate/misc/ini.py                              |  576 -------
 translate/misc/lru.py                              |    4 +-
 translate/misc/optrecurse.py                       |   24 +-
 translate/misc/ourdom.py                           |    4 +-
 translate/misc/profiling.py                        |  122 --
 translate/misc/progressbar.py                      |    2 +-
 translate/misc/quote.py                            |  142 +-
 translate/misc/sparse.py                           |    4 +-
 translate/misc/test_multistring.py                 |    3 +-
 translate/misc/test_optrecurse.py                  |    2 +-
 translate/misc/test_quote.py                       |   37 +-
 translate/misc/textwrap.py                         |  203 ---
 translate/misc/typecheck/__init__.py               | 1559 -----------------
 translate/misc/typecheck/doctest_support.py        |   36 -
 translate/misc/typecheck/mixins.py                 |   84 -
 translate/misc/typecheck/sets.py                   |   62 -
 translate/misc/typecheck/typeclasses.py            |   35 -
 translate/misc/wStringIO.py                        |    2 +-
 translate/misc/wsgiserver/LICENSE.txt              |   25 +
 translate/misc/wsgiserver/__init__.py              |    7 +-
 .../wsgiserver/{wsgiserver.py => wsgiserver2.py}   |   62 +-
 .../wsgiserver/{wsgiserver.py => wsgiserver3.py}   |  595 ++-----
 translate/misc/xml_helpers.py                      |    1 +
 translate/misc/xmlwrapper.py                       |  159 --
 translate/search/indexing/CommonIndexer.py         |   94 +-
 translate/search/indexing/PyLuceneIndexer.py       |   37 +-
 translate/search/indexing/XapianIndexer.py         |   41 +-
 translate/search/indexing/__init__.py              |   14 +-
 translate/search/indexing/test_indexers.py         |   30 +-
 translate/search/lshtein.py                        |    2 +-
 translate/search/match.py                          |    8 +-
 translate/services/tmserver.py                     |   97 +-
 translate/storage/_factory_classes.py              |   11 +-
 translate/storage/aresource.py                     |  165 +-
 translate/storage/base.py                          |   25 +-
 translate/storage/benchmark.py                     |   17 +-
 translate/storage/bundleprojstore.py               |    8 +-
 translate/storage/catkeys.py                       |   12 +-
 translate/storage/cpo.py                           |   29 +-
 translate/storage/csvl10n.py                       |   15 +-
 translate/storage/directory.py                     |   13 +-
 translate/storage/dtd.py                           |   59 +-
 translate/storage/fpo.py                           |   17 +-
 translate/storage/html.py                          |   53 +-
 translate/storage/ical.py                          |   12 +-
 translate/storage/ini.py                           |   32 +-
 translate/storage/jsonl10n.py                      |   87 +-
 translate/storage/lisa.py                          |   21 +-
 translate/storage/mo.py                            |    9 +-
 translate/storage/mozilla_lang.py                  |    9 +-
 translate/storage/odf_shared.py                    |    5 +-
 translate/storage/omegat.py                        |    6 +-
 translate/storage/oo.py                            |    8 +-
 translate/storage/php.py                           |  115 +-
 translate/storage/placeables/__init__.py           |    1 +
 translate/storage/placeables/general.py            |   32 +-
 translate/storage/placeables/lisa.py               |    5 +-
 translate/storage/placeables/parse.py              |    2 +-
 translate/storage/placeables/strelem.py            |   68 +-
 translate/storage/placeables/terminology.py        |    3 +-
 translate/storage/placeables/test_base.py          |   11 +-
 translate/storage/placeables/test_general.py       |    6 +-
 translate/storage/placeables/test_lisa.py          |    4 +-
 translate/storage/placeables/test_terminology.py   |    7 +-
 translate/storage/placeables/xliff.py              |    7 +-
 translate/storage/po.py                            |   15 +-
 translate/storage/pocommon.py                      |   13 +-
 translate/storage/poheader.py                      |   21 +-
 translate/storage/poparser.py                      |    1 +
 translate/storage/poxliff.py                       |    9 +-
 translate/storage/project.py                       |    1 +
 translate/storage/projstore.py                     |   11 +-
 translate/storage/properties.py                    |  164 +-
 translate/storage/pypo.py                          |   44 +-
 translate/storage/qm.py                            |    8 +-
 translate/storage/qph.py                           |   17 +-
 translate/storage/rc.py                            |   20 +-
 translate/storage/statistics.py                    |    1 +
 translate/storage/statsdb.py                       |   45 +-
 translate/storage/subtitles.py                     |   32 +-
 translate/storage/symbian.py                       |    1 +
 translate/storage/tbx.py                           |    2 +-
 translate/storage/test_aresource.py                |   84 +-
 translate/storage/test_base.py                     |   38 +-
 translate/storage/test_catkeys.py                  |    7 +-
 translate/storage/test_cpo.py                      |   27 +-
 translate/storage/test_csvl10n.py                  |    3 +-
 translate/storage/test_directory.py                |    4 +-
 translate/storage/test_dtd.py                      |   54 +-
 translate/storage/test_factory.py                  |    8 +-
 translate/storage/test_html.py                     |   99 +-
 translate/storage/test_mo.py                       |   24 +-
 translate/storage/test_monolingual.py              |   10 +-
 translate/storage/test_mozilla_lang.py             |    5 +-
 translate/storage/test_omegat.py                   |    7 +-
 translate/storage/test_php.py                      |   62 +-
 translate/storage/test_po.py                       |  100 +-
 translate/storage/test_poheader.py                 |   30 +-
 translate/storage/test_poxliff.py                  |    7 +-
 translate/storage/test_properties.py               |   61 +-
 translate/storage/test_pypo.py                     |   43 +-
 translate/storage/test_qm.py                       |    3 +-
 translate/storage/test_qph.py                      |   12 +-
 translate/storage/test_rc.py                       |  226 ++-
 translate/storage/test_statsdb.py                  |   14 +-
 translate/storage/test_tbx.py                      |    9 +-
 translate/storage/test_tmx.py                      |   17 +-
 translate/storage/test_trados.py                   |   13 +-
 translate/storage/test_ts2.py                      |   89 +-
 translate/storage/test_txt.py                      |    9 +-
 translate/storage/test_utx.py                      |    3 +-
 translate/storage/test_wordfast.py                 |   13 +-
 translate/storage/test_xliff.py                    |   31 +-
 translate/storage/test_zip.py                      |    7 +-
 translate/storage/tiki.py                          |    4 +-
 translate/storage/tmdb.py                          |   26 +-
 translate/storage/tmx.py                           |   11 +-
 translate/storage/trados.py                        |   51 +-
 translate/storage/ts.py                            |    3 +-
 translate/storage/ts2.py                           |   76 +-
 translate/storage/txt.py                           |    9 +-
 translate/storage/utx.py                           |   38 +-
 translate/storage/versioncontrol/__init__.py       |   50 +-
 translate/storage/versioncontrol/bzr.py            |   30 +-
 translate/storage/versioncontrol/cvs.py            |   17 +-
 translate/storage/versioncontrol/darcs.py          |   25 +-
 translate/storage/versioncontrol/git.py            |   24 +-
 translate/storage/versioncontrol/hg.py             |   19 +-
 translate/storage/versioncontrol/svn.py            |   11 +-
 translate/storage/versioncontrol/test_helper.py    |    5 +-
 translate/storage/versioncontrol/test_svn.py       |    8 +-
 translate/storage/wordfast.py                      |   45 +-
 translate/storage/workflow.py                      |    3 +-
 translate/storage/xliff.py                         |   57 +-
 translate/storage/xml_extract/extract.py           |   33 +-
 translate/storage/xml_extract/generate.py          |   20 +-
 translate/storage/xml_extract/misc.py              |   19 +-
 translate/storage/xml_extract/test_misc.py         |    1 +
 translate/storage/xml_extract/test_unit_tree.py    |    3 +-
 translate/storage/xml_extract/unit_tree.py         |    7 -
 translate/storage/xml_extract/xpath_breadcrumb.py  |    3 -
 translate/storage/zip.py                           |    7 +-
 translate/tools/build_tmdb.py                      |   40 +-
 translate/tools/phppo2pypo.py                      |    2 +-
 translate/tools/poclean.py                         |    3 +-
 translate/tools/pocompile.py                       |    3 +-
 translate/tools/poconflicts.py                     |    9 +-
 translate/tools/pocount.py                         |  212 ++-
 translate/tools/podebug.py                         |   35 +-
 translate/tools/pogrep.py                          |    8 +-
 translate/tools/pomerge.py                         |   13 +-
 translate/tools/porestructure.py                   |   15 +-
 translate/tools/posegment.py                       |    2 +-
 translate/tools/poswap.py                          |    6 +-
 translate/tools/poterminology.py                   |   30 +-
 translate/tools/pretranslate.py                    |   32 +-
 translate/tools/pydiff.py                          |   86 +-
 translate/tools/pypo2phppo.py                      |    2 +-
 translate/tools/test_phppo2pypo.py                 |    3 +-
 translate/tools/test_pocount.py                    |   29 +-
 translate/tools/test_podebug.py                    |   31 +-
 translate/tools/test_pogrep.py                     |   23 +-
 translate/tools/test_pomerge.py                    |   57 +-
 translate/tools/test_pretranslate.py               |   41 +-
 translate/tools/test_pypo2phppo.py                 |    3 +-
 PKG-INFO => translate_toolkit.egg-info/PKG-INFO    |   12 +-
 translate_toolkit.egg-info/SOURCES.txt             | 1150 +++++++++++++
 translate_toolkit.egg-info/dependency_links.txt    |    1 +
 translate_toolkit.egg-info/requires.txt            |    3 +
 translate_toolkit.egg-info/top_level.txt           |    1 +
 359 files changed, 7153 insertions(+), 8890 deletions(-)
 create mode 100644 MANIFEST.in
 create mode 100644 docs/developers/deprecation.rst
 create mode 100644 docs/releases/1.12.0-rc1.rst
 create mode 100644 docs/releases/1.12.0.rst
 create mode 100644 docs/releases/dev.rst
 create mode 100644 requirements/required.txt
 create mode 100644 setup.cfg
 create mode 100644 tests/cli/data/test_pocount/stderr.txt
 create mode 100644 tests/cli/data/test_pocount_help/stdout.txt
 create mode 100644 tests/cli/data/test_pocount_mutually_exclusive/stderr.txt
 create mode 100644 tests/cli/data/test_pocount_nonexistant/stderr.txt
 create mode 100644 tests/cli/data/test_pocount_po_file/stdout.txt
 create mode 100644 tests/cli/data/test_pofilter_listfilters/stdout.txt
 create mode 100644 tests/cli/data/test_pofilter_manpage/stdout.txt
 create mode 100644 tests/cli/data/test_prop2po/stderr.txt
 create mode 100644 tests/cli/data/test_prop2po_dirs/stderr.txt
 delete mode 100644 translate/i18n.py
 delete mode 100644 translate/misc/context.py
 delete mode 100644 translate/misc/contextlib.py
 delete mode 100644 translate/misc/decorators.py
 create mode 100644 translate/misc/deprecation.py
 delete mode 100644 translate/misc/hash.py
 delete mode 100644 translate/misc/ini.py
 delete mode 100644 translate/misc/profiling.py
 delete mode 100644 translate/misc/textwrap.py
 delete mode 100644 translate/misc/typecheck/__init__.py
 delete mode 100644 translate/misc/typecheck/doctest_support.py
 delete mode 100644 translate/misc/typecheck/mixins.py
 delete mode 100644 translate/misc/typecheck/sets.py
 delete mode 100644 translate/misc/typecheck/typeclasses.py
 create mode 100644 translate/misc/wsgiserver/LICENSE.txt
 copy translate/misc/wsgiserver/{wsgiserver.py => wsgiserver2.py} (98%)
 rename translate/misc/wsgiserver/{wsgiserver.py => wsgiserver3.py} (79%)
 delete mode 100644 translate/misc/xmlwrapper.py
 mode change 100644 => 100755 translate/storage/aresource.py
 copy PKG-INFO => translate_toolkit.egg-info/PKG-INFO (93%)
 create mode 100644 translate_toolkit.egg-info/SOURCES.txt
 create mode 100644 translate_toolkit.egg-info/dependency_links.txt
 create mode 100644 translate_toolkit.egg-info/requires.txt
 create mode 100644 translate_toolkit.egg-info/top_level.txt

diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 0000000..8552266
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1,81 @@
+# MANIFEST.in: the below autogenerated by setup.py from translate 1.12.0
+# things needed by translate setup.py to rebuild
+# informational fs
+global-include README.rst
+global-include COPYING
+global-include *.txt
+# C programs
+global-include *.c
+# scripts which don't get included by default in sdist
+include translate/convert/pot2po
+include translate/convert/moz2po
+include translate/convert/po2moz
+include translate/convert/oo2po
+include translate/convert/po2oo
+include translate/convert/oo2xliff
+include translate/convert/xliff2oo
+include translate/convert/prop2po
+include translate/convert/po2prop
+include translate/convert/csv2po
+include translate/convert/po2csv
+include translate/convert/txt2po
+include translate/convert/po2txt
+include translate/convert/ts2po
+include translate/convert/po2ts
+include translate/convert/html2po
+include translate/convert/po2html
+include translate/convert/ical2po
+include translate/convert/po2ical
+include translate/convert/ini2po
+include translate/convert/po2ini
+include translate/convert/json2po
+include translate/convert/po2json
+include translate/convert/tiki2po
+include translate/convert/po2tiki
+include translate/convert/php2po
+include translate/convert/po2php
+include translate/convert/rc2po
+include translate/convert/po2rc
+include translate/convert/xliff2po
+include translate/convert/po2xliff
+include translate/convert/sub2po
+include translate/convert/po2sub
+include translate/convert/symb2po
+include translate/convert/po2symb
+include translate/convert/po2tmx
+include translate/convert/po2wordfast
+include translate/convert/csv2tbx
+include translate/convert/odf2xliff
+include translate/convert/xliff2odf
+include translate/convert/web2py2po
+include translate/convert/po2web2py
+include translate/filters/pofilter
+include translate/tools/pocompile
+include translate/tools/poconflicts
+include translate/tools/pocount
+include translate/tools/podebug
+include translate/tools/pogrep
+include translate/tools/pomerge
+include translate/tools/porestructure
+include translate/tools/posegment
+include translate/tools/poswap
+include translate/tools/poclean
+include translate/tools/poterminology
+include translate/tools/pretranslate
+include translate/services/tmserver
+include translate/tools/build_tmdb
+include tools/junitmsgfmt
+include tools/mozilla/build_firefox.sh
+include tools/mozilla/buildxpi.py
+include tools/mozilla/get_moz_enUS.py
+include tools/pocommentclean
+include tools/pocompendium
+include tools/pomigrate2
+include tools/popuretext
+include tools/poreencode
+include tools/posplit
+# include our documentation
+graft docs
+prune docs/doctrees
+graft share
+# MANIFEST.in: end of autogenerated block
\ No newline at end of file
diff --git a/PKG-INFO b/PKG-INFO
index 647b960..22d7624 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,12 +1,12 @@
 Metadata-Version: 1.0
 Name: translate-toolkit
-Version: 1.11.0
+Version: 1.12.0
 Summary: Tools and API for translation and localization engineering.
 Home-page: http://toolkit.translatehouse.org/
 Author: Translate
 Author-email: translate-devel at lists.sourceforge.net
 License: GNU General Public License (GPL)
-Download-URL: http://sourceforge.net/projects/translate/files/Translate Toolkit/1.11.0
+Download-URL: http://sourceforge.net/projects/translate/files/Translate Toolkit/1.12.0
 Description: 
         The `Translate Toolkit <http://toolkit.translatehouse.org/>`_ is created by
         localizers for localizers. It contains several utilities, as well as an API for
@@ -36,9 +36,11 @@ Classifier: Development Status :: 5 - Production/Stable
 Classifier: Environment :: Console
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: GNU General Public License (GPL)
-Classifier: Programming Language :: Python
-Classifier: Topic :: Software Development :: Localization
-Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: Operating System :: OS Independent
 Classifier: Operating System :: Microsoft :: Windows
 Classifier: Operating System :: Unix
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Software Development :: Localization
diff --git a/README.rst b/README.rst
index ae384e6..da3d5fa 100644
--- a/README.rst
+++ b/README.rst
@@ -4,6 +4,9 @@ Translate Toolkit
 .. image:: https://travis-ci.org/translate/translate.png
     :alt: Build Status
     :target: https://travis-ci.org/translate/translate
+.. image:: https://coveralls.io/repos/translate/translate/badge.png?branch=master
+    :alt: Coverage Status
+    :target: https://coveralls.io/r/translate/translate?branch=master
 
 The Translate Toolkit is a set of software and documentation designed to help
 make the lives of localizers both more productive and less frustrating.
@@ -82,15 +85,15 @@ Requirements
    Will install all recommended requirements, while ``optional.txt`` will also
    install support for all other formats.
 
-Python 2.4 or later is recommended.
+Python 2.6 or later is required.
 
-The Toolkit should still work with Python 2.4 but is now most extensively
-tested using Python 2.7.
+Python 2.5 is no longer supported by the Python Software Foundation, while the
+Toolkit may work in versions before Python 2.6 this is not supported.
 
-The package lxml is needed for XML file processing. Version 1.3.4 and upwards
-should work, but lxml 2.1.0 or later is strongly recommended. <http://lxml.de/>
-Depending on your platform, the easiest way to install might be through your
-system's package management. Alternatively you can try ::
+The package lxml is needed for XML file processing. You should install version
+2.2.0 or later. <http://lxml.de/> Depending on your platform, the easiest way
+to install might be through your system's package management. Alternatively you
+can try ::
 
     easy_install lxml
 
@@ -188,8 +191,7 @@ We think there might be some :)
 
 Please send your bug reports to:
 translate-devel at lists.sourceforge.net
-or report them at our bugzilla server at
-<http://bugs.locamotion.org/>
+or report them at <https://github.com/translate/translate/issues>
 
 Some help in writing useful bug reports are mentioned here:
 <http://translate.sourceforge.net/wiki/developers/reporting_bugs>
diff --git a/docs/api/misc.rst b/docs/api/misc.rst
index 406e1d5..82c3997 100644
--- a/docs/api/misc.rst
+++ b/docs/api/misc.rst
@@ -13,22 +13,6 @@ autoencode
    :inherited-members:
 
 
-contextlib
-----------
-
-.. automodule:: translate.misc.contextlib
-   :members:
-   :inherited-members:
-
-
-context
--------
-
-.. automodule:: translate.misc.context
-   :members:
-   :inherited-members:
-
-
 dictutils
 ---------
 
@@ -37,14 +21,6 @@ dictutils
    :inherited-members:
 
 
-diff_match_patch
-----------------
-
-.. automodule:: translate.misc.diff_match_patch
-   :members:
-   :inherited-members:
-
-
 file_discovery
 --------------
 
@@ -53,22 +29,6 @@ file_discovery
    :inherited-members:
 
 
-hash
-----
-
-.. automodule:: translate.misc.hash
-   :members:
-   :inherited-members:
-
-
-ini
----
-
-.. automodule:: translate.misc.ini
-   :members:
-   :inherited-members:
-
-
 lru
 ---
 
@@ -101,14 +61,6 @@ ourdom
    :inherited-members:
 
 
-profiling
----------
-
-.. automodule:: translate.misc.profiling
-   :members:
-   :inherited-members:
-
-
 progressbar
 -----------
 
@@ -141,53 +93,6 @@ stdiotell
    :inherited-members:
 
 
-textwrap
---------
-
-.. automodule:: translate.misc.textwrap
-   :members:
-   :inherited-members:
-
-
-typecheck
----------
-
-.. automodule:: translate.misc.typecheck
-   :show-inheritance:
-
-
-doctest_support
-~~~~~~~~~~~~~~~
-
-.. automodule:: translate.misc.typecheck.doctest_support
-   :members:
-   :inherited-members:
-
-
-mixins
-~~~~~~
-
-.. automodule:: translate.misc.typecheck.mixins
-   :members:
-   :inherited-members:
-
-
-sets
-~~~~
-
-.. automodule:: translate.misc.typecheck.sets
-   :members:
-   :inherited-members:
-
-
-typeclasses
-~~~~~~~~~~~
-
-.. automodule:: translate.misc.typecheck.typeclasses
-   :members:
-   :inherited-members:
-
-
 wsgi
 ----
 
@@ -210,11 +115,3 @@ xml_helpers
 .. automodule:: translate.misc.xml_helpers
    :members:
    :inherited-members:
-
-
-xmlwrapper
-----------
-
-.. automodule:: translate.misc.xmlwrapper
-   :members:
-   :inherited-members:
diff --git a/docs/changelog.rst b/docs/changelog.rst
index 2199162..b7d42c0 100644
--- a/docs/changelog.rst
+++ b/docs/changelog.rst
@@ -9,42 +9,10 @@ This page lists what has changed, how it might affect you and how to work
 around the change either to bring your files in line or to use the old
 behaviour if required.
 
-.. _changelog#1.11:
 
-1.11
-====
-
-- Dropped support for Python 2.5 since it is no longer supported by the Python
-  Foundation. Also sticking to it was preventing us from using features that
-  are not supported on Python 2.5 but they are on later versions.
-- Properties will no longer drop entries where source and translation are
-  identical.
-
-.. _changelog#1.10:
-
-1.10
-====
-
-- The matching criterion when merging units can now be specified with the
-  ``X-Merge-On`` header. Available values for this header are `location` and
-  `id`. By default merges will be done by matching IDs. This supersedes the
-  effects of the ``X-Accelerator`` header when merging and establishes an
-  explicit way to set the desired matching criterion.
-
-
-.. _changelog#mozilla_dtd_files_change:
-
-Mozilla DTD files change
-------------------------
-
-We now preserve spaces in DTD files i.e.::
-
-  <!ENTITY          some.label          "definition">
-
-Will preserve the spaces around the entity name ``some.lable``
+.. note:: For changes in newer Translate Toolkit versions, please check the
+   :doc:`Release notes <releases/index>`.
 
-You probably want to run po2moz once to isolate the space changes from real
-translations.
 
 .. _changelog#1.6.0:
 
@@ -188,7 +156,7 @@ Premature termination of DTD entities
 -------------------------------------
 
 Although this does not occur frequently, a case emerged where some DTD entities
-where not fully extracted from the DTD source.  This was fixed in :bug:`331`.
+were not fully extracted from the DTD source.  This was fixed in :issue:`331`.
 
 We expect this change to create a few new fuzzy entries.  There is no action
 required from the user as the next update of your PO files will bring the
@@ -660,6 +628,6 @@ although .properties files are actually a Java standard.  The old Mozilla way,
 and still the Java way, of working with .properties files is to escape any
 Unicode characters using the ``\uNNNN`` convention.  Mozilla now allows you to
 use Unicode in UTF-8 encoding for these files.  Thus in 0.9 of the Toolkit we
-now output UTF-8 encoded properties files. :bug:`Bug 114 <114>` tracks the
+now output UTF-8 encoded properties files. :issue:`Issue 193 <193>` tracks the
 status of this and we hope to add a feature to prop2po to restore the correct
 Java convention as an option.
diff --git a/docs/commands/csv2po.rst b/docs/commands/csv2po.rst
index eaeb05f..b697ad7 100644
--- a/docs/commands/csv2po.rst
+++ b/docs/commands/csv2po.rst
@@ -112,6 +112,14 @@ to UTF-8 and place the correctly encoded files in *po*.  We use the templates
 found in *pot* to ensure that we preserve formatting and other data.  Note that
 UTF-8 is the only available destination encoding.
 
+::
+
+  csv2po --columnorder=location,target,source fr.csv fr.po
+
+If the CSV file has its columns in a different order, you may use
+:option:`--columnorder`.
+
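The column mapping behind ``--columnorder`` can be illustrated in Python (a sketch only; the column names follow the option value shown above, and the sample row is invented):

```python
import csv
import io

# A row whose columns are location, target, source -- the order passed
# to --columnorder above (the data itself is invented).
data = "ui/button.label,Envoyer,Send\n"
columnorder = ["location", "target", "source"]

reader = csv.reader(io.StringIO(data))
# Each unit maps column names to cell values regardless of CSV order.
units = [dict(zip(columnorder, row)) for row in reader]
```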
+
 .. _csv2po#bugs:
 
 Bugs
diff --git a/docs/commands/moz2po.rst b/docs/commands/moz2po.rst
index 48bd220..6047229 100644
--- a/docs/commands/moz2po.rst
+++ b/docs/commands/moz2po.rst
@@ -154,9 +154,9 @@ You can perform the bulk of your work (99%) with moz2po.
 Localisation of XHTML is not yet perfect, you might want to work with the files
 directly.
 
-:bug:`Bug 129 <129>` tracks the outstanding features which would allow complete
-localisation of Mozilla including; all help, start pages, rdf files, etc. It
-also tracks some bugs.
+:issue:`Issue 203 <203>` tracks the outstanding features which would allow
+complete localisation of Mozilla including; all help, start pages, rdf files,
+etc. It also tracks some bugs.
 
 Accesskeys don't yet work in .properties files and in several cases where the
 Mozilla .dtd files don't follow the normal conventions, for example in
diff --git a/docs/commands/oo2po.rst b/docs/commands/oo2po.rst
index 1002625..2fa3d2b 100644
--- a/docs/commands/oo2po.rst
+++ b/docs/commands/oo2po.rst
@@ -175,7 +175,7 @@ variables failures and translated XML will be excluded from the final SDF.
 helpcontent2
 ============
 
-The escaping of ``helpcontent2`` from SDF files was very confusing, :bug:`295`
-implemented a fix that appeared in version 1.1.0 (All known issues were fixed
-in 1.1.1).  Translators are now able to translate helpcontent2 with clean
-escaping.
+The escaping of ``helpcontent2`` from SDF files was very confusing,
+:issue:`295` implemented a fix that appeared in version 1.1.0 (All known issues
+were fixed in 1.1.1).  Translators are now able to translate helpcontent2 with
+clean escaping.
diff --git a/docs/commands/pocount.rst b/docs/commands/pocount.rst
index 05950c9..01324f5 100644
--- a/docs/commands/pocount.rst
+++ b/docs/commands/pocount.rst
@@ -37,6 +37,9 @@ Options:
 
 -h, --help       show this help message and exit
 --incomplete     skip 100% translated files
+
+Output format:
+
 --full           (default) statistics in full, verbose format
 --csv            statistics in CSV format
 --short          same as --short-strings
diff --git a/docs/commands/podebug.rst b/docs/commands/podebug.rst
index 3cc6a6a..344a87c 100644
--- a/docs/commands/podebug.rst
+++ b/docs/commands/podebug.rst
@@ -60,7 +60,7 @@ Options:
 --rewrite=STYLE        the translation rewrite style: :doc:`xxx, en, blank,
                        chef  (v1.2), unicode (v1.2) <option_rewrite>`
 --ignore=APPLICATION   apply tagging ignore rules for the given application:
-                       kde, gtk, openoffice, mozilla
+                       kde, gtk, openoffice, libreoffice, mozilla
 --hash=LENGTH          add an md5 hash to translations (only until version
                        1.3.0 -- see %h below)
 
diff --git a/docs/commands/pofilter.rst b/docs/commands/pofilter.rst
index a6cfd5f..063d7b9 100644
--- a/docs/commands/pofilter.rst
+++ b/docs/commands/pofilter.rst
@@ -54,6 +54,7 @@ Options:
 --autocorrect        output automatic corrections where possible rather than describing issues
 --language=LANG      set target language code (e.g. af-ZA) [required for spell check]. This will help to make pofilter aware of the conventions of your language
 --openoffice         use the standard checks for OpenOffice translations
+--libreoffice        use the standard checks for LibreOffice translations
 --mozilla            use the standard checks for Mozilla translations
 --drupal             use the standard checks for Drupal translations
 --gnome              use the standard checks for Gnome translations
diff --git a/docs/commands/posplit.rst b/docs/commands/posplit.rst
index d86b5ab..f1d4cdf 100644
--- a/docs/commands/posplit.rst
+++ b/docs/commands/posplit.rst
@@ -39,5 +39,5 @@ Bugs
 ====
 
 * Some relative path bugs, thus the need for ./ before file.po.
-* Until version 1.9.1, the original input file was removed, :bug:`2006`.
+* Until version 1.9.1, the original input file was removed, :issue:`2006`.
 
diff --git a/docs/commands/poterminology.rst b/docs/commands/poterminology.rst
index 9c17256..0410d17 100644
--- a/docs/commands/poterminology.rst
+++ b/docs/commands/poterminology.rst
@@ -290,18 +290,18 @@ truncated at a few dozen characters).
 
 Default threshold settings may eliminate all output terms; in this case,
 poterminology should suggest threshold option settings that would allow output
-to be generated (this enhancement is tracked as :bug:`582`).
+to be generated (this enhancement is tracked as :issue:`582`).
 
 While poterminology ignores XML/HTML entities and elements and %-style format
 strings (for C and Python), it does not ignore all types of "variables" that
 may occur, particularly in OpenOffice.org, Mozilla, or Gnome localization
 files.  These other types should be ignored as well (this enhancement is
-tracked as :bug:`598`).
+tracked as :issue:`598`).
 
 Terms containing only words that are ignored individually, but not excluded
 from phrases (e.g. "you are you") may be generated by poterminology, but aren't
 generally useful.  Adding a new threshold option :opt:`--nonstop-needed` could
-allow these to be suppressed (this enhancement is tracked as :bug:`1102`).
+allow these to be suppressed (this enhancement is tracked as :issue:`1102`).
 
 Pootle ignores parenthetical comments in source text when performing
 terminology matching; this allows for terms like "scan (verb)" and "scan
diff --git a/docs/conf.py b/docs/conf.py
index 12d597e..fc4f999 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -38,8 +38,12 @@ extensions = [
     'sphinx.ext.coverage',
     'sphinx.ext.extlinks',
     'sphinx.ext.intersphinx',
+    'sphinx.ext.todo',
 ]
 
+# Display todo notes. See http://sphinx-doc.org/ext/todo.html#directive-todo
+todo_include_todos=True
+
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
 
@@ -54,16 +58,16 @@ master_doc = 'index'
 
 # General information about the project.
 project = u'Translate Toolkit'
-copyright = u'2012, Translate.org.za'
+copyright = u'2002-2014, Translate'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
 # The short X.Y version.
-version = '1.11.0'
+version = '1.12.0'
 # The full version, including alpha/beta/rc tags.
-release = '1.11.0'
+release = '1.12.0'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
@@ -103,8 +107,10 @@ pygments_style = 'sphinx'
 # -- Missing modules --------------------------------------------------
 import sys
 
+
 class Mock(object):
     VERSION = None
+
     def __init__(self, *args, **kwargs):
         pass
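The ``Mock`` class above lets Sphinx build the docs on hosts where optional modules are missing. A fuller sketch of the usual pattern (the module list here is illustrative, not necessarily what conf.py registers):

```python
import sys


class Mock(object):
    """Stand-in object returned for any attribute access or call."""
    VERSION = None

    def __init__(self, *args, **kwargs):
        pass

    def __call__(self, *args, **kwargs):
        return Mock()

    def __getattr__(self, name):
        return Mock()


# Register mocks so `import lxml.etree` succeeds during the docs build
# even when lxml is unavailable (illustrative module list).
MOCK_MODULES = ["lxml", "lxml.etree"]
for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = Mock()
```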
 
@@ -233,14 +239,14 @@ htmlhelp_basename = 'TranslateToolkitdoc'
 # -- Options for LaTeX output -------------------------------------------------
 
 latex_elements = {
-# The paper size ('letterpaper' or 'a4paper').
-#'papersize': 'letterpaper',
+    # The paper size ('letterpaper' or 'a4paper').
+    #'papersize': 'letterpaper',
 
-# The font size ('10pt', '11pt' or '12pt').
-#'pointsize': '10pt',
+    # The font size ('10pt', '11pt' or '12pt').
+    #'pointsize': '10pt',
 
-# Additional stuff for the LaTeX preamble.
-#'preamble': '',
+    # Additional stuff for the LaTeX preamble.
+    #'preamble': '',
 }
 
 # Grouping the document tree into LaTeX files. List of tuples
@@ -318,7 +324,7 @@ coverage_write_headline = False
 # -- Options for Intersphinx -------------------------------------------------
 
 intersphinx_mapping = {
-    'python': ('http://docs.python.org/2.7', None),
+    'python': ('https://docs.python.org/2.7', None),
     'pytest': ('http://pytest.org/latest/', None),
     'django': ('http://django.readthedocs.org/en/latest/', None),
     'pootle': ('http://docs.translatehouse.org/projects/pootle/en/latest/', None),
@@ -331,8 +337,8 @@ intersphinx_mapping = {
 
 extlinks = {
     # :role: (URL, prefix)
-    'bug': ('http://bugs.locamotion.org/show_bug.cgi?id=%s',
-            'bug '),
+    'issue': ('https://github.com/translate/translate/issues/%s',
+              'issue '),
     'man': ('http://linux.die.net/man/1/%s', ''),
     'wiki': ('http://translate.sourceforge.net/wiki/%s', ''),
     'wp': ('http://en.wikipedia.org/wiki/%s', ''),
diff --git a/docs/contents.rst.inc b/docs/contents.rst.inc
index 99c4cf7..ad31e6f 100644
--- a/docs/contents.rst.inc
+++ b/docs/contents.rst.inc
@@ -29,6 +29,7 @@ building new tools, make sure to read through this part.
    developers/contributing
    developers/developers
    developers/releasing
+   developers/deprecation
 
 
 Additional Notes
diff --git a/docs/developers/contributing.rst b/docs/developers/contributing.rst
index d99766b..e76e628 100644
--- a/docs/developers/contributing.rst
+++ b/docs/developers/contributing.rst
@@ -5,8 +5,9 @@ Contributing
 ************
 
 We could use your help.  If you are interested in contributing then please
-join us on IRC on `#pootle <irc://irc.freenode.net/#pootle>`_ and on the
-`translate-devel <mailto:translate-devel at lists.sourceforge.net>`_ mailing list.
+join us on IRC on `#pootle-dev <irc://irc.freenode.net/#pootle-dev>`_ and on
+the `translate-devel <mailto:translate-devel at lists.sourceforge.net>`_ mailing
+list.
 
 Here are some ideas of how you can contribute
 
@@ -50,8 +51,8 @@ Debugging
 
- Make sure you're familiar with the :wiki:`bug reporting guidelines
   <developers/reporting_bugs>`.
-- Create a login for yourself at http://bugs.locamotion.org
-- Then choose a bug
+- Create a login for yourself at https://github.com
+- Then choose an `issue <https://github.com/translate/translate/issues>`_
 
 Now you need to try and validate the bug.  Your aim is to confirm that the bug
 is either fixed, is invalid or still exists.
diff --git a/docs/developers/deprecation.rst b/docs/developers/deprecation.rst
new file mode 100644
index 0000000..ea9df1a
--- /dev/null
+++ b/docs/developers/deprecation.rst
@@ -0,0 +1,72 @@
+Deprecation of Features
+=======================
+
+From time to time we need to deprecate functionality. This is a guide to how
+we implement deprecation.
+
+
+Types of deprecation
+--------------------
+
+1. Misspelled function
+2. Renamed function
+3. Deprecated feature
+
+
+Period of maintenance
+---------------------
+The Toolkit retains deprecated features for a period of two releases.  Thus
+features deprecated in 1.7.0 are removed in 1.9.0.
+
+
+Documentation
+-------------
+Use the ``@deprecated`` decorator with a comment and change the docstring to
+use the Sphinx `deprecation syntax
+<http://sphinx-doc.org/markup/para.html#directive-deprecated>`_.
+
+.. code-block:: python
+
+   @deprecated("Use util.run_fast() instead.")
+   def run_slow():
+       """Run slowly
+
+       .. deprecated:: 1.9.0
+          Use :func:`run_fast` instead.
+       """
+       run_fast()  # Call new function if possible
+
+
+Implementation
+--------------
+Deprecated features should call the new functionality if possible.  This may
+not always be possible, such as the cases of drastic changes.  But it is the
+preferred approach to reduce maintenance of the old code.
+
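A minimal ``deprecated`` decorator in this spirit could look as follows (a sketch only; the Toolkit's real ``@deprecated`` helper may differ in signature and behaviour):

```python
import functools
import warnings


def deprecated(message=""):
    """Decorator that emits a DeprecationWarning each time the wrapped
    function is called."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated. %s" % (func.__name__, message),
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator
```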
+
+Announcements
+-------------
+.. note:: This applies only to feature deprecation and renamed functions.
+   Announcements for corrections are at the coder's discretion.
+
+1. On **first release with deprecation** highlight that the feature is
+   deprecated in this release and explain reasons and alternate approaches.
+2. On **second release** warn that the feature will be removed in the next
+   release.
+3. On **third release** remove the feature and announce removal in the release
+   announcements.
+
+Thus, by example:
+
+Translate Toolkit 1.9.0:
+  The ``run_slow`` function has been deprecated and replaced by the faster and
+  more correct ``run_fast``.  Users of ``run_slow`` are advised to migrate
+  their code.
+
+Translate Toolkit 1.10.0:
+  The ``run_slow`` function has been deprecated and replaced by ``run_fast``
+  and will be removed in the next version.  Users of ``run_slow`` are advised
+  to migrate their code.
+
+Translate Toolkit 1.11.0:
+  The ``run_slow`` function has been removed, use ``run_fast`` instead.
diff --git a/docs/developers/developers.rst b/docs/developers/developers.rst
index 1da6917..62d2415 100644
--- a/docs/developers/developers.rst
+++ b/docs/developers/developers.rst
@@ -46,17 +46,21 @@ the translate repository or fork it at GitHub.
 
 .. _developers#bugzilla:
 
-Bugzilla
---------
+Issues
+------
 
-* http://bugs.locamotion.org/
+* https://github.com/translate/translate/issues
 
 .. _developers#communication:
 
 Communication
 -------------
 
-* `IRC channel <irc://irc.freenode.net/#pootle>`_
+* IRC channels:
+
+  * `Development <irc://irc.freenode.net/#pootle-dev>`_ - no support related questions.
+  * `Help <irc://irc.freenode.net/#pootle>`_
+
 * `Developers mailing list <https://lists.sourceforge.net/lists/listinfo/translate-devel>`_
 * `Commits to version control <https://lists.sourceforge.net/lists/listinfo/translate-cvs>`_
 
diff --git a/docs/developers/releasing.rst b/docs/developers/releasing.rst
index f8060b4..10122e7 100644
--- a/docs/developers/releasing.rst
+++ b/docs/developers/releasing.rst
@@ -11,22 +11,14 @@ Summary
 #. Test install and other tests
 #. Tag the release
 #. Publish on PyPI
+#. Upload to GitHub
 #. Upload to Sourceforge
 #. Release documentation
 #. Update translate website
 #. Unstage sourceforge
 #. Announce to the world
 #. Cleanup
-
-Other possible steps
---------------------
-We need to check and document these if needed:
-
-- Build docs: we need to check if e need to build the docs for the release
-- Change URLs to point to the correct docs: do we want to change URLs to point
-  to the $version docs rather then 'latest'
-- Building on Windows, building for other Linux distros. We have produced 
-- Communicating to upstream packagers
+#. Other possible steps
 
 
 Detailed instructions
@@ -34,11 +26,16 @@ Detailed instructions
 
 Get a clean checkout
 --------------------
-We work from a clean checkout to esnure that everything you are adding to the
+We work from a clean checkout to ensure that everything you are adding to the
 build is what is in VC and doesn't contain any of your uncommitted changes.  It
-also ensure that someone else could relicate your process. ::
+also ensures that someone else could replicate your process.
+
+.. code-block:: bash
+
+  $ git clone git at github.com:translate/translate.git translate-release
+  $ cd translate-release
+  $ git submodule update --init
 
-    git clone git at github.com:translate/translate.git translate-release
 
 Create release notes
 --------------------
@@ -53,13 +50,19 @@ We create our release notes in reStructured Text, since we use that elsewhere
 and since it can be rendered well in some of our key sites.
 
 First we need to create a log of changes in the Translate Toolkit, which is
-done generically like this::
+done generically like this:
+
+.. code-block:: bash
+
+  $ git log $previous_version..HEAD > docs/release/$version.rst
+
 
-    git log $version-1..HEAD > docs/release/$version.rst
+Or a more specific example:
 
-Or a more specific example::
+.. code-block:: bash
+
+  $ git log 1.10.0..HEAD > docs/releases/1.11.0-rc1.rst
 
-    git log 1.10.0..HEAD > docs/releases/1.10.0.rst
 
 Edit this file.  You can use the commits as a guide to build up the release
 notes.  You should remove all log messages before the release.
@@ -80,9 +83,18 @@ Read for grammar and spelling errors.
    #. We speak in familiar terms e.g. "I know you've been waiting for this
       release" instead of formal.
 
-We create a list of contributors using this command::
+We create a list of contributors using this command:
+
+.. code-block:: bash
 
-   git log 1.10.0..HEAD --format='%aN, ' | awk '{arr[$0]++} END{for (i in arr){print arr[i], i;}}' | sort -rn | cut -d\  -f2-
+  $ git log 1.10.0..HEAD --format='%aN, ' | awk '{arr[$0]++} END{for (i in arr){print arr[i], i;}}' | sort -rn | cut -d\  -f2-
+
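The same contributor ranking can be computed without the awk pipeline, for example in Python (a sketch; it assumes the output of ``git log 1.10.0..HEAD --format='%aN, '`` has been captured as a list of lines):

```python
from collections import Counter


def top_contributors(author_lines):
    """Rank authors by commit count, most active first."""
    counts = Counter(
        line.strip().rstrip(",") for line in author_lines if line.strip()
    )
    return [name for name, _ in counts.most_common()]
```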
+
+Add release notes for dev
+-------------------------
+
+After updating the release notes for the about-to-be-released version, it is
+necessary to add new release notes for the next release, tagged as ``dev``.
 
 
 Up version numbers
@@ -90,7 +102,7 @@ Up version numbers
 Update the version number in:
 
 - ``translate/__version__.py``
-- ``docs/conf.py```
+- ``docs/conf.py``
 
 In ``__version__.py``, bump the build number if anybody used the toolkit with
 the previous number, and there have been any changes to code touching stats or
@@ -99,7 +111,7 @@ Pootle, to regenerate the stats and checks.
 
 For ``conf.py`` change ``version`` and ``release``
 
-.. note:: FIXME - We might want to consolidate the version and release info so
+.. todo:: FIXME - We might want to consolidate the version and release info so
    that we can update it in one place.
 
 The version string should follow the pattern::
@@ -119,9 +131,16 @@ release of a ``$MINOR`` version will always have a ``$MICRO`` of ``.0``. So
 Build the package
 -----------------
 Building is the first step to testing that things work.  From your clean
-checkout run::
+checkout run:
+
+.. code-block:: bash
+
+  $ mkvirtualenv build-ttk-release
+  (build-ttk-release)$ pip install -r requirements/dev.txt
+  (build-ttk-release)$ make build
+  (build-ttk-release)$ deactivate
+  $ rmvirtualenv build-ttk-release
 
-    make build
 
 This will create a tarball in ``dist/`` which you can use for further testing.
 
@@ -131,62 +150,119 @@ This will create a tarball in ``dist/`` which you can use for further testing.
 
 Test install and other tests
 ----------------------------
-The easiest way to test is in a virtualenv.  You can install the new toolkit
-using::
+The easiest way to test is in a virtualenv. You can test the installation of
+the new toolkit using:
 
-    pip install path/to/dist/translate-toolkit-$version.tar.bz2
+.. code-block:: bash
 
-This will allow you test installation of the software.
+  $ mkvirtualenv test-ttk-release
+  (test-ttk-release)$ pip install path/to/dist/translate-toolkit-$version.tar.bz2
 
-You can then proceed with other tests such as checking
 
-#. Documentation is available
-#. Converters and scripts are installed and run correctly
-#. Meta information about the package is correct. See pypy section of reviewing
-   meta data.
+You can then proceed with other tests such as checking:
 
+#. Documentation is available in the package
+#. Converters and scripts are installed and run correctly:
 
-Tag the release
----------------
+   .. code-block:: bash
+
+     (test-ttk-release)$ moz2po --help
+     (test-ttk-release)$ php2po --version
+     (test-ttk-release)$ deactivate
+     $ rmvirtualenv test-ttk-release
+
+#. Meta information about the package is correct. This is stored in
+   :file:`setup.py`, to see some options to display meta-data use:
+
+   .. code-block:: bash
+
+     $ ./setup.py --help
+
+   Now you can try some options like:
+
+   .. code-block:: bash
+
+     $ ./setup.py --name
+     $ ./setup.py --version
+     $ ./setup.py --author
+     $ ./setup.py --author-email
+     $ ./setup.py --url
+     $ ./setup.py --license
+     $ ./setup.py --description
+     $ ./setup.py --long-description
+     $ ./setup.py --classifiers
+
+   The actual descriptions are taken from :file:`translate/__init__.py`.
+
+
+Tag and branch the release
+--------------------------
 You should only tag once you are happy with your release as there are some
-things that we can't undo. ::
+things that we can't undo. You can safely create a ``stable/`` branch
+before you tag.
 
-    git tag -a 1.10.0 -m "Tag version 1.10.0"
-    git push --tags
+.. code-block:: bash
+
+  $ git checkout -b stable/1.10.0
+  $ git push origin stable/1.10.0
+  $ git tag -a 1.10.0 -m "Tag version 1.10.0"
+  $ git push --tags
 
 
 Publish on PyPI
 ---------------
-Publish the package on the `Python Package Index
-<https://pypi.python.org/pypi>`_ (PyPI)
 
-- `Submitting Packages to the Package Index
+.. - `Submitting Packages to the Package Index
   <http://wiki.python.org/moin/CheeseShopTutorial#Submitting_Packages_to_the_Package_Index>`_
 
-.. note:: You need a username and password on https://pypi.python.org and have
-   rights to the project before you can proceed with this step.
 
-   These can be stored in ``$HOME/.pypirc`` and will contain your username and
-   password. A first run of ``./setup.py register`` will create such a file.
-   It will also actually publish the meta-data so only do it when you are
-   actually ready.
+.. note:: You need a username and password on `Python Package Index (PyPI)
+   <https://pypi.python.org>`_ and have rights to the project before you can
+   proceed with this step.
+
+   These can be stored in :file:`$HOME/.pypirc` and will contain your username
+   and password. A first run of:
+
+   .. code-block:: bash
+
+     $ ./setup.py register
+
+   will create such a file. It will also publish the meta-data, so only do
+   it when you are actually ready.
+
+To test before publishing run:
+
+.. code-block:: bash
+
+  $ make test-publish-pypi
 
-Review the meta data. This is stored in ``setup.py``, use ``./setup.py --help``
-to se some options to display meta-data. The actual descriptions are taken from
-``translate/__init__.py``.
 
-To test before publishing run::
+Then to actually publish:
 
-    make test-publish-pypi
+.. code-block:: bash
 
-Then to actually publish::
+  $ make publish-pypi
 
-    make publish-pypi
+
+Create a release on GitHub
+--------------------------
+
+- https://github.com/translate/translate/releases/new
+
+You will need:
+
+- Tarball of the release
+- Release notes in Markdown
+
+#. Draft a new release with the corresponding tag version
+#. Convert the release notes to Markdown with `Pandoc
+   <http://johnmacfarlane.net/pandoc/>`_ and add those to the release
+#. Attach the tarball to the release
+#. Mark it as pre-release if it's a release candidate.
 
 
 Copy files to sourceforge
 -------------------------
-Publishing files to the Translate Sourceforge project.
 
 .. note:: You need to have release permissions on sourceforge to perform this
    step.
@@ -198,6 +274,9 @@ You will need:
 - Tarball of the release
 - Release notes in reStructured Text
 
+
+These are the steps to perform:
+
 #. Create a new folder in the `Translate Toolkit
    <https://sourceforge.net/projects/translate/files/Translate%20Toolkit/>`_
    release folder using the 'Add Folder' button.  The folder must have the same
@@ -209,7 +288,7 @@ You will need:
    #. Upload tarball for release.
    #. Upload release notes as ``README.rst``.
    #. Click on the info icon for ``README.rst`` and tick "Exclude Stats" to
-      exlude the README from stats counting.
+      exclude the README from stats counting.
 
 #. Check that the README.rst for the parent ``Translate Toolkit`` folder is
    still appropriate, this is the text from ``translate/__info__.py``.
@@ -225,23 +304,26 @@ The Docs.
 
 Use the admin pages to flag a version that should be published
 
-.. note:: FIXME we might need to do this before publishing so that we can
+.. todo:: FIXME we might need to do this before publishing so that we can
    update doc references to point to the tagged version as opposed to the
    latest version.
 
 
 Update translate website
 ------------------------
-We use github pages for the website. First we need to checkout the pages::
+We use github pages for the website. First we need to checkout the pages:
+
+.. code-block:: bash
+
+  $ git checkout gh-pages
 
-    git checkout gh-pages
 
 #. In ``_posts/`` add a new release posting.  This is in Markdown format (for
    now), so we need to change the release notes .rst to .md, which mostly means
    changing URL links from ```xxx <link>`_`` to ``[xxx](link)``.
 #. Change $version as needed. See ``download.html``, ``_config.yml`` and
    ``egrep -r $old_release *``
-#. :command:`git commit` and :command:`git push` - changes are quite quick so
+#. :command:`git commit` and :command:`git push` -- changes are quite quick, so
    easy to review.
 
 
@@ -260,13 +342,38 @@ Let people know that there is a new version:
 #. Adjust the #pootle channel notice. Use ``/topic`` to change the topic.
 #. Email important users
 #. Tweet about it
+#. Update `Toolkit's Wikipedia page
+   <http://en.wikipedia.org/wiki/Translate_Toolkit>`_
 
 
 Cleanup
--------
+=======
+These are tasks not directly related to releasing, but that are
+nevertheless necessary.
+
+Bump version to N+1-alpha1
+--------------------------
+
+Now that we've released, let's make sure that master reflects the current
+state, which would be ``{N+1}-alpha1``. This prevents anyone using master
+from confusing it with a stable release, and makes it easy to check whether
+they are using master or stable.
+
+
+Other possible steps
+====================
 Some possible cleanup tasks:
 
 - Remove any RC builds from the sourceforge download pages (maybe?).
 - Commit any release notes and such (or maybe do that before tagging).
 - Remove your translate-release checkout.
 - Update and fix these release notes.
+
+
+We also need to check and document these if needed:
+
+- Change URLs to point to the correct docs: do we want to change URLs to point
+  to the $version docs rather than 'latest'?
+- Building on Windows, building for other Linux distros. We have produced
+  Windows builds in the past.
+- Communicating to upstream packagers
diff --git a/docs/developers/styleguide.rst b/docs/developers/styleguide.rst
index 3fb20b3..5bdd772 100644
--- a/docs/developers/styleguide.rst
+++ b/docs/developers/styleguide.rst
@@ -11,20 +11,27 @@ This Styleguide follows :pep:`8` with some clarifications. It is based almost
 verbatim on the `Flask Styleguide`_.
 
 
+.. _styleguide-python:
+
+Python
+------
+
+These are the Translate conventions for Python coding style.
+
 .. _styleguide-general:
 
 General
--------
+^^^^^^^
 
 Indentation
-^^^^^^^^^^^
+~~~~~~~~~~~
 
 4 real spaces, no tabs. Exceptions: modules that have been copied into the
 source that don't follow this guideline.
 
 
 Maximum line length
-^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~
 
 79 characters with a soft limit for 84 if absolutely necessary. Try to avoid
 too nested code by cleverly placing `break`, `continue` and `return`
@@ -32,7 +39,7 @@ statements.
 
 
 Continuing long statements
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To continue a statement you can use backslashes (preceded by a space) in which
 case you should align the next line with the last dot or equal sign, or indent
@@ -85,7 +92,7 @@ For lists or tuples with many items, break immediately after the opening brace:
 
 
 Blank lines
-^^^^^^^^^^^
+~~~~~~~~~~~
 
 Top level functions and classes are separated by two lines, everything else
 by one. Do not use too many blank lines to separate logical segments in code.
@@ -115,7 +122,7 @@ Example:
 .. _styleguide-imports:
 
 Imports
-^^^^^^^
+~~~~~~~
 
 Like in :pep:`8`, but:
 
@@ -170,7 +177,7 @@ Like in :pep:`8`, but:
 
 
 Properties
-^^^^^^^^^^
+~~~~~~~~~~
 
 - Never use ``lambda`` functions:
 
@@ -256,11 +263,48 @@ Properties
     x = property(getx, setx, delx, "I'm the 'x' property.")
 
 
+Single vs double quoted strings
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There is no preference on using single or double quotes for strings, except in
+some specific cases:
+
+- Always use single quotes for string dictionary keys:
+
+  .. code-block:: python
+
+    # Good.
+    demo = {
+        'language': language,
+    }
+
+
+    # Bad.
+    demo = {
+        "language": language,
+    }
+
+
+- When a single or double quote character needs to be escaped, it is
+  recommended to instead enclose the string using the other quoting style:
+
+  .. code-block:: python
+
+    # Good.
+    str1 = "Sauron's eye"
+    str2 = 'Its name is "Virtaal".'
+
+
+    # Bad.
+    str3 = 'Sauron\'s eye'
+    str4 = "Its name is \"Virtaal\"."
+
+
 Expressions and Statements
---------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 General whitespace rules
-^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~
 
 - No whitespace for unary operators that are not words (e.g.: ``-``, ``~``
   etc.) as well on the inner side of parentheses.
@@ -285,7 +329,7 @@ General whitespace rules
 
 
 Slice notation
-^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~
 
 While :pep:`8` calls for spaces around operators ``a = b + c`` this results in
 flags when you use ``a[b+1:c-1]`` but would allow the rather unreadable
@@ -315,7 +359,7 @@ flags when you use ``a[b+1:c-1]`` but would allow the rather unreadable
    String slice formatting is still under discussion.
 
 Comparisons
-^^^^^^^^^^^
+~~~~~~~~~~~
 
 - Against arbitrary types: ``==`` and ``!=``
 - Against singletons with ``is`` and ``is not`` (e.g.: ``foo is not None``)
@@ -324,20 +368,20 @@ Comparisons
 
 
 Negated containment checks
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 - Use ``foo not in bar`` instead of ``not foo in bar``
 
 
 Instance checks
-^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~
 
 - ``isinstance(a, C)`` instead of ``type(a) is C``, but try to avoid instance
   checks in general.  Check for features.
 
 
 If statements
-^^^^^^^^^^^^^
+~~~~~~~~~~~~~
 
 - Use ``()`` brackets around complex if statements to allow easy wrapping,
   don't use backslash to wrap an if statement.
@@ -368,11 +412,11 @@ If statements
 
 
 Naming Conventions
-------------------
+^^^^^^^^^^^^^^^^^^
 
 .. note::
 
-   This has not been implemented or discussed.  The Translate code 
+   This has not been implemented or discussed.  The Translate code
    is not at all consistent with these conventions.
 
 
@@ -395,7 +439,7 @@ with prefixes or suffixes.
 
 
 Function and method arguments
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 - Class methods: ``cls`` as first parameter
 - Instance methods: ``self`` as first parameter
@@ -415,10 +459,10 @@ Special roles
 
 We introduce a number of special roles for documentation:
 
-* ``:bug:`` -- links to a bug in Translate's Bugzilla.
+* ``:issue:`` -- links to a toolkit issue on GitHub.
 
-  * ``:bug:`123``` gives: :bug:`123`
-  * ``:bug:`broken <123>``` gives: :bug:`broken <123>`
+  * ``:issue:`234``` gives: :issue:`234`
+  * ``:issue:`broken <234>``` gives: :issue:`broken <234>`
 
 * ``:opt:`` -- mark command options and command values.
 
@@ -528,8 +572,7 @@ the important parts though are:
         This method should always be used rather than trying to modify the
         list manually.
 
-        :type unit: TranslationUnit
-        :param unit: Any object that inherits from :class:`TranslationUnit`.
+        :param Unit unit: Any object that inherits from :class:`Unit`.
         """
         self.units.append(unit)
 
@@ -542,10 +585,9 @@ Parameter documentation:
     def foo(bar):
         """Simple docstring.
 
-        :param bar: Something
-        :type bar: Some type
+        :param SomeType bar: Something
         :return: Returns something
-        :rtype: Return type 
+        :rtype: Return type
         """
 
 
@@ -586,6 +628,24 @@ Module header:
         :license: LICENSE_NAME, see LICENSE_FILE for more details.
     """
 
+Deprecation:
+  Document the deprecation and version when deprecating features:
+
+  .. code-block:: python
+
+     from translate.misc.deprecation import deprecated
+
+
+     @deprecated("Use util.run_fast() instead.")
+     def run_slow():
+         """Run fast
+
+         .. deprecated:: 1.5
+            Use :func:`run_fast` instead.
+         """
+         run_fast()
+
+
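For context, the behaviour of such a decorator can be sketched in pure Python. This is a hypothetical stand-in, not the actual ``translate.misc.deprecation`` implementation:

```python
import functools
import warnings


def deprecated(message=""):
    """Hypothetical sketch of a ``deprecated`` decorator.

    Emits a DeprecationWarning naming the wrapped callable whenever it
    is invoked; the real translate.misc.deprecation code may differ.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated. %s" % (func.__name__, message),
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated("Use run_fast() instead.")
def run_slow():
    # Stand-in body; a real deprecated wrapper would delegate to run_fast().
    return "slow result"
```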
 
 Comments
 --------
@@ -628,3 +688,26 @@ Docstring comments:
 .. _reStructuredText primer: http://sphinx-doc.org/rest.html
 .. _Sphinx documentation: http://sphinx-doc.org/contents.html
 .. _paragraph-level markup: http://sphinx-doc.org/markup/para.html#paragraph-level-markup
+
+
+String formatting
+-----------------
+
+While str.format() is more powerful than %-formatting, the latter has been the
+canonical way of formatting strings in Python for a long time and the Python
+core team has shown no desire to settle on one syntax over the other.
+For simple, serial positional cases (non-translatable strings), the old "%s"
+way of formatting is preferred.
+For anything more complex, including translatable strings, str.format is
+preferred as it is significantly more powerful and often cleaner.
+
+.. code-block:: python
+
+    # Good
+    print("Hello, {thing}".format(thing="world"))
+    print("%s=%r" % ("hello", "world"))  # non-translatable strings
+
+    # Bad
+    print("%s, %s" % ("Hello", "world"))  # Translatable string.
+    print("Hello, %(thing)s" % {"thing": "world"})  # Use {thing}.
+    print("Hello, {}".format("world"))  # Incompatible with Python 2.6. Use %s.
diff --git a/docs/developers/testing.rst b/docs/developers/testing.rst
index ce7a5d8..2fc8f08 100644
--- a/docs/developers/testing.rst
+++ b/docs/developers/testing.rst
@@ -7,8 +7,7 @@ Our aim is that all new functionality is adequately tested. Adding tests for
 existing functionality is highly recommended before any major reimplementation
 (refactoring, etcetera).
 
-We use `py.test`_ for (unit) testing. You need at least pytest >= 1.0.0, but
-pytest >= 2.1 is strongly recommended.
+We use `py.test`_ for (unit) testing. You need pytest >= 2.2.
 
 To run tests in the current directory and its subdirectories:
 
@@ -140,3 +139,134 @@ function parameters:
 .. _recwarn plugin: http://pytest.org/latest/recwarn.html
 .. |recwarn plugin| replace:: *recwarn plugin*
 .. we use |recwarn plugin| here and in ref above for italics like :ref:
+
+
+Command Line Functional Testing
+================================
+
+Functional tests allow us to validate the operation of the tools on the command
+line.  The execution by a user is simulated using reference data files and the
+results are captured for comparison.
+
+The tests are simple to craft and use some naming magic to make it easy to
+refer to test files, stdout and stderr.
+
+File name magic
+---------------
+
+We use a special naming convention to make writing tests quick and easy.  Thus
+in the case of testing the following command:
+
+.. code-block:: bash
+
+   $ moz2po -t template.dtd translations.po translated.dtd
+
+Our test would be written like this:
+
+.. code-block:: bash
+
+   $ moz2po -t $one $two $out
+
+Where ``$one`` and ``$two`` are the input files and ``$out`` is the result file
+that the test framework will validate.
+
+The files would be called:
+
+===========================    ============   =========   ===================
+File                            Function       Variable    File naming conventions
+===========================    ============   =========   ===================
+test_moz2po_help.sh             Test script    -           test_${command}_${description}.sh
+test_moz2po_help/one.dtd        Input          $one        ${testname}/${variable}.${extension}
+test_moz2po_help/two.po         Input          $two        ${testname}/${variable}.${extension}
+test_moz2po_help/out.dtd        Output         $out        ${testname}/${variable}.${extension}
+test_moz2po_help/stdout.txt     Output         $stdout     ${testname}/${variable}.${extension}
+test_moz2po_help/stderr.txt     Output         $stderr     ${testname}/${variable}.${extension}
+===========================    ============   =========   ===================
+
+.. note:: A test filename must start with ``test_`` and end in ``.sh``.  The
+   rest of the name may only use ASCII alphanumeric characters and underscore
+   ``_``.
+
+The test file is placed in the ``tests/`` directory while data files are placed
+in the ``tests/data/${testname}`` directory.
+
+There are three standard output files:
+
+1. ``$out`` - the output from the command
+2. ``$stdout`` - any output given to the user
+3. ``$stderr`` - any error output
+
+The output files are available for checking at the end of the test execution
+and a test will fail if there are differences between the reference output and
+that achieved in the test run.
+
+You do not need to define reference output for all three, if one is missing
+then checks will be against ``/dev/null``.
+
+There can be any number of input files.  They need to be named using only ASCII
+characters without any punctuation.  While you can give them any name, we
+recommend using numbered positions such as one, two, three.  These are
+converted into variables in the test framework so ensure that none of your
+choices clash with existing bash commands and variables.
+
+Your test script can access variables for all of your files so e.g.
+``moz2po_conversion/one.dtd`` will be referenced as ``$one`` and output
+``moz2po_conversion/out.dtd`` as ``$out``.
+
+
+Writing
+-------
+
+The tests are normal bash scripts so they can be executed on their own.  A
+template for a test is as follows:
+
+.. literalinclude:: ../../tests/cli/example_test.sh
+   :language: bash
+
+For simple tests, where we just diff the output files, simply use
+``check_results``.  More complex tests need to wrap their checks in
+``start_checks`` and ``end_checks``.
+
+.. code-block:: bash
+
+   start_checks
+   has $out
+   containsi_stdout "Parsed:"
+   end_checks
+
+You can make use of the following commands in the ``start_checks`` scenario:
+
+=========================== ===========================================
+Command                      Description
+=========================== ===========================================
+has $file                    $file was output and is not empty
+has_stdout                   stdout is not empty
+has_stderr                   stderr is not empty
+startswith $file "String"    $file starts with "String"
+startswithi $file "String"   $file starts with "String" ignoring case
+startswith_stdout "String"   stdout starts with "String"
+startswithi_stdout "String"  stdout starts with "String" ignoring case
+startswith_stderr "String"   stderr starts with "String"
+startswithi_stderr "String"  stderr starts with "String" ignoring case
+contains $file "String"      $file contains "String"
+containsi $file "String"     $file contains "String" ignoring case
+contains_stdout "String"     stdout contains "String"
+containsi_stdout "String"    stdout contains "String" ignoring case
+contains_stderr "String"     stderr contains "String"
+containsi_stderr "String"    stderr contains "String" ignoring case
+endswith $file "String"      $file ends with "String"
+endswithi $file "String"     $file ends with "String" ignoring case
+endswith_stdout "String"     stdout ends with "String"
+endswithi_stdout "String"    stdout ends with "String" ignoring case
+endswith_stderr "String"     stderr ends with "String"
+endswithi_stderr "String"    stderr ends with "String" ignoring case
+=========================== ===========================================
+
+
+--prep
+^^^^^^
+
+If you use the ``--prep`` option on any test then the test will change behavior.
+It won't validate the results against your reference data but will instead
+create your reference data.  This makes it easy to generate your expected
+result files when you are setting up your test.
diff --git a/docs/formats/dtd.rst b/docs/formats/dtd.rst
index a84e4d0..aaf61fa 100644
--- a/docs/formats/dtd.rst
+++ b/docs/formats/dtd.rst
@@ -24,11 +24,14 @@ Features
   combined into a single unit
 * Translator directive -- all LOCALIZATION NOTE items such as DONT_TRANSLATE
   are handled and such items are discarded
+* Entities -- some entities such as ``&amp;`` or ``&quot;`` are expanded when
+  reading DTD files and escaped when writing them, so that translators see and
+  type ``&`` and ``"`` directly
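To illustrate (the entity name here is invented), an entry stored on disk as:

```xml
<!-- Stored form in the .dtd file -->
<!ENTITY example.title "Find &amp; Replace">
```

would be presented to the translator as ``Find & Replace``, with the ``&`` escaped back to ``&amp;`` when the file is written.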
 
 .. _dtd#issues:
 
 Issues
 ======
 
-* We don't escape character entities like ``&lt;``, ``&amp;`` -- this doesn't
-  break anything but it would be nicer to see © rather than &copy;
+* We don't expand some character entities like ``&lt;``, ``&copy;`` -- this
+  doesn't break anything but it would be nicer to see © rather than ``&copy;``
diff --git a/docs/formats/index.rst b/docs/formats/index.rst
index ac9d566..2354b34 100644
--- a/docs/formats/index.rst
+++ b/docs/formats/index.rst
@@ -163,8 +163,8 @@ Formats that we would like to support but don't currently support:
 * Apple:
 
   * `AppleGlot <ftp://ftp.apple.com/developer/tool_chest/localization_tools/appleglot/appleglot_3.2_usersguide.pdf>`_
-  * .plist -- see :bug:`633` and `plistlib
-    <http://docs.python.org/2/library/plistlib.html>`_ for Python
+  * .plist -- see :issue:`633` and `plistlib
+    <https://docs.python.org/2/library/plistlib.html>`_ for Python
 
 * Adobe:
 
@@ -185,7 +185,7 @@ Formats that we would like to support but don't currently support:
   * :wp:`Rich Text Format <Rich_Text_Format>` (RTF) see also `pyrtf-ng
     <http://code.google.com/p/pyrtf-ng/>`_
   * :wp:`Open XML Paper Specification <Open_XML_Paper_Specification>`
-  * .NET Resource files (.resx) -- :bug:`Bug 396 <396>`
+  * .NET Resource files (.resx) -- :issue:`Issue 396 <396>`
 
 * XML related
 
diff --git a/docs/formats/mo.rst b/docs/formats/mo.rst
index 65a0dc3..83108ad 100644
--- a/docs/formats/mo.rst
+++ b/docs/formats/mo.rst
@@ -23,5 +23,5 @@ the .mo files to act as a translation memory.
    The hash table is also generated (the Gettext .mo files work fine without
    it). Due to slight differences in the construction of the hashing, the
    generated files are not identical to those generated by msgfmt, but they
-   should be functionally equivalent and 100% usable. :bug:`Bug 326 <326>`
+   should be functionally equivalent and 100% usable. :issue:`Issue 326 <326>`
    tracked the implementation of the hashing. The hash is platform dependent.
diff --git a/docs/formats/php.rst b/docs/formats/php.rst
index 5ead7c8..1d048ea 100644
--- a/docs/formats/php.rst
+++ b/docs/formats/php.rst
@@ -189,6 +189,19 @@ Our format support allows:
          "two" => "that",
       );
 
+* Blank array declaration, then square bracket syntax to fill that array
+
+  .. versionadded:: 1.12.0
+
+  .. code-block:: php
+
+      <?php
+      global $messages;
+      $messages = array();
+
+      $messages['language'] = 'Language';
+      $messages['file'] = 'File';
+
 
 .. _php#non-conformance:
 
diff --git a/docs/formats/properties.rst b/docs/formats/properties.rst
index b4d2506..6f87858 100644
--- a/docs/formats/properties.rst
+++ b/docs/formats/properties.rst
@@ -21,13 +21,16 @@ Features
 * Fully manage Java escaping (Mozilla non-escaped form is also handled)
 * Preserves the layout of the original source file in the translated version
 
+.. versionadded:: 1.12.0
+
+* Mozilla accelerators -- if a unit has an associated access key entry then
+  these are combined into a single unit
+
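To illustrate, a label/accesskey pair such as the following (keys invented for this example) is now folded into a single unit rather than presented as two:

```text
saveCommand.label=Save
saveCommand.accesskey=S
```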
 .. _properties#not_implemented:
 
 Not implemented
 ===============
 
-* Does not fold access keys together as done in the :doc:`Mozilla DTD <dtd>`
-  format.
 * We don't allow filtering of unchanged values.  In Java you can inherit
   translations, if the key is missing from a file then Java will look to other
   files in the hierarchy to determine the translation.
diff --git a/docs/formats/rc.rst b/docs/formats/rc.rst
index e45e372..d50f2a8 100644
--- a/docs/formats/rc.rst
+++ b/docs/formats/rc.rst
@@ -48,5 +48,5 @@ Bugs
 
 * There may be problems with very deeply nested MENUs
 * LANGUAGE elements cannot yet be updated in :doc:`po2rc </commands/rc2po>`
-  (:bug:`Bug 360 <360>`)
+  (:issue:`Issue 360 <360>`)
 
diff --git a/docs/formats/wordfast.rst b/docs/formats/wordfast.rst
index a4e9cee..a14e439 100644
--- a/docs/formats/wordfast.rst
+++ b/docs/formats/wordfast.rst
@@ -21,4 +21,4 @@ Conformance
 * Header -- Only basic updating and reading of the header is implemented
 * Tab-separated value (TSV) -- the format correctly handles the TSV format used
   by Wordfast.  There is no quoting, Windows newlines are used and the ``\t``
-  is used as a delimiter (see :bug:`472`)
+  is used as a delimiter (see :issue:`472`)
diff --git a/docs/releases/1.10.0.rst b/docs/releases/1.10.0.rst
index 0713a1a..a9742a2 100644
--- a/docs/releases/1.10.0.rst
+++ b/docs/releases/1.10.0.rst
@@ -52,6 +52,14 @@ Formats and Converters
   guessing
 * .properties: BOMs in messages and C style comments [Roman Imankulov]
 * Mac OS String formatting improved [Roman Imankulov]
+* The spaces in DTD files are now preserved. For example the spaces in
+  ``<!ENTITY          some.label          "definition">`` around the entity
+  name ``some.label`` are now kept.
+* The matching criterion when merging units can now be specified with the
+  ``X-Merge-On`` header. Available values for this header are `location` and
+  `id`. By default merges will be done by matching IDs. This supersedes the
+  effects of the ``X-Accelerator`` header when merging and establishes an
+  explicit way to set the desired matching criterion.
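For example, a PO file opting into location-based matching would carry the header (values shown here are illustrative):

```text
msgid ""
msgstr ""
"Project-Id-Version: example 1.0\n"
"X-Merge-On: location\n"
```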
 
 
 Version Control improvements
@@ -114,5 +122,4 @@ Michal Čihař, Roman Imankulov, Alexander Dupuy, Frank Tetzel,
 Luiz Fernando Ranghetti, Laurette Pretorius, Jiro Matsuzawa, Henrik Saari,
 Luca De Petrillo, Khaled Hosny, Dave Dash & Chris Oelmueller.
 
-And to all our bug finder, testers and `localisers
-<http://pootle.locamotion.org/projects/pootle/>`_, a Very BIG Thank You.
+And to all our bug finders and testers, a Very BIG Thank You.
diff --git a/docs/releases/1.11.0-rc1.rst b/docs/releases/1.11.0-rc1.rst
index 87378da..ee351a3 100644
--- a/docs/releases/1.11.0-rc1.rst
+++ b/docs/releases/1.11.0-rc1.rst
@@ -49,8 +49,8 @@ Formats and Converters
 - PHP:
 
   - Warn about duplicate entries
-  - Allow blank spaces in array declaration (:bug:`2646`)
-  - Support nested arrays (:bug:`2240`)
+  - Allow blank spaces in array declaration (:issue:`2646`)
+  - Support nested arrays (:issue:`2240`)
 
 - XLIFF:
 
@@ -77,9 +77,9 @@ Formats and Converters
     timestamp (Makefile-alike)
   - :option:`--threshold` -- in po2* converters this allows you to specify a
     percentage complete threshold.  If the PO file passes this threshold then
-    the file is output (:bug:`2998`)
+    the file is output (:issue:`2998`)
   - :option:`--removeuntranslated` -- Extend this option to po2dtd and thus
-    po2moz -- don't output untranslated text (:bug:`1718`)
+    po2moz -- don't output untranslated text (:issue:`1718`)
 
 Language specific fixes
 -----------------------
@@ -102,7 +102,7 @@ Mozilla tooling fixes
 - .lang -- Improved support for untranslated entries
 - ``buildxpi``:
 
-  - Can now build multiple languages at once (:bug:`2999`)
+  - Can now build multiple languages at once (:issue:`2999`)
   - Set a max product version to allow the language pack to continue to work
     once the browser version has moved out of Aurora channel
 
@@ -136,5 +136,4 @@ Michal Čihař, Jordi Mas, Stuart Prescott, Trung Ngo, Ronald Sterckx, Rail
 Aliev, Michael Schlenker, Martin-Zack Mekkaoui, Iskren Chernev, Luiz Fernando
 Ranghetti & Christian Hitz
 
-And to all our bug finders, testers and `localisers
-<http://pootle.locamotion.org/projects/pootle/>`_, a Very BIG Thank You.
+And to all our bug finders and testers, a Very BIG Thank You.
diff --git a/docs/releases/1.11.0.rst b/docs/releases/1.11.0.rst
index 480c5b1..b5747f5 100644
--- a/docs/releases/1.11.0.rst
+++ b/docs/releases/1.11.0.rst
@@ -38,7 +38,7 @@ Changes since 1.11.0 RC1:
 
 - Improve handling of escapes in wrapping
 - Handle a broken version of python-Levenshtein 
-- Output HTML source in po2html when a unit is fuzzy (:bug:`3145`)
+- Output HTML source in po2html when a unit is fuzzy (:issue:`3145`)
 
 Major changes
 -------------
@@ -57,8 +57,8 @@ Formats and Converters
 - PHP:
 
   - Warn about duplicate entries
-  - Allow blank spaces in array declaration (:bug:`2646`)
-  - Support nested arrays (:bug:`2240`)
+  - Allow blank spaces in array declaration (:issue:`2646`)
+  - Support nested arrays (:issue:`2240`)
 
 - XLIFF:
 
@@ -85,9 +85,9 @@ Formats and Converters
     timestamp (Makefile-alike)
   - :option:`--threshold` -- in po2* converters this allows you to specify a
     percentage complete threshold.  If the PO file passes this threshold then
-    the file is output (:bug:`2998`)
+    the file is output (:issue:`2998`)
   - :option:`--removeuntranslated` -- Extend this option to po2dtd and thus
-    po2moz -- don't output untranslated text (:bug:`1718`)
+    po2moz -- don't output untranslated text (:issue:`1718`)
 
 Language specific fixes
 -----------------------
@@ -110,7 +110,7 @@ Mozilla tooling fixes
 - .lang -- Improved support for untranslated entries
 - ``buildxpi``:
 
-  - Can now build multiple languages at once (:bug:`2999`)
+  - Can now build multiple languages at once (:issue:`2999`)
   - Set a max product version to allow the language pack to continue to work
     once the browser version has moved out of Aurora channel
 
@@ -119,7 +119,9 @@ Mozilla tooling fixes
 
 General
 -------
-- Dropped support for Python 2.5 -- 2.5 has reached end-of-life
+- Dropped support for Python 2.5 since it is no longer supported by the Python
+  Software Foundation. Sticking to it was also preventing us from using
+  features that are only available in later versions.
 - Dropped psyco support -- it is no longer maintained
 - Use logging throughout instead of ``sys.stderr``
 - Lots of cleanups on docs: TBX, PHP, added Android and JSON docs
@@ -144,5 +146,4 @@ Michal Čihař, Jordi Mas, Stuart Prescott, Trung Ngo, Ronald Sterckx, Rail
 Aliev, Michael Schlenker, Martin-Zack Mekkaoui, Iskren Chernev, Luiz Fernando
 Ranghetti & Christian Hitz
 
-And to all our bug finders, testers and `localisers
-<http://pootle.locamotion.org/projects/pootle/>`_, a Very BIG Thank You.
+And to all our bug finders and testers, a Very BIG Thank You.
diff --git a/docs/releases/1.12.0-rc1.rst b/docs/releases/1.12.0-rc1.rst
new file mode 100644
index 0000000..e1eb2b8
--- /dev/null
+++ b/docs/releases/1.12.0-rc1.rst
@@ -0,0 +1,164 @@
+.. These notes are used in:
+   1. Our email announcements
+   2. The Translate Tools download page at toolkit.translatehouse.org
+   3. Sourceforge download page in
+      http://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.12.0-rc1/README.rst/download
+
+Translate Toolkit 1.12.0-rc1
+****************************
+
+*Released on 11 July 2014*
+
+This release contains many improvements and bug fixes. While it contains many
+general improvements, it also specifically contains needed changes and
+optimizations for the upcoming `Pootle <http://pootle.translatehouse.org/>`_
+2.6.0 and `Virtaal <http://virtaal.translatehouse.org>`_ releases.
+
+It is just over 6 months since the last release and there are many improvements
+across the board.  A number of people contributed to this release and we've
+tried to credit them wherever possible (sorry if somehow we missed you).
+
+..
+  This is used for the email and other release notifications
+  Getting it and sharing it
+  =========================
+  * pip install translate-toolkit
+  * `Sourceforge download
+    <https://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.12.0-rc1/>`_
+  * Please share this URL http://toolkit.translatehouse.org/download.html if
+    you'd like to tweet or post about the release.
+
+Highlighted improvements
+========================
+
+Major changes
+-------------
+
+- Properties and DTD formats fix a number of issues
+- Massive code cleanup looking forward to Python 3 compatibility
+- Important changes in development process to ease testing
+
+
+Formats and Converters
+----------------------
+
+- Mozilla properties
+
+  - If a unit has an associated access key entry then these are combined into a
+    single unit
+  - Encoding errors are now reported early to prevent them being masked by
+    subsequent errors
+  - Leading and trailing spaces are escaped in order to avoid losing them when
+    using the converters
+  - The ``\uNN`` characters are now properly handled
+  - po2prop now uses the source language accesskey if translation is missing
+  - Fixed conversion of successive Gaia plural units in prop2po
+
+- DTD
+
+  - The ``&amp;`` entity is automatically expanded when reading DTD files, and
+    escaped back when writing them
+  - Underscore character is now a valid character in entity names
+  - Nonentities at end of string are now correctly handled
+  - po2dtd:
+
+    - Now uses the source language accesskey if target accesskey is missing
+    - Doesn't remove stray ``&`` as they are probably intended to be ``&amp;``
+
+- HTML
+
+  - The HTML5 ``figcaption`` tag is now localizable
+  - The ``title`` attribute is now localizable
+  - po2html now retains the untranslated attributes
+
+- Accesskeys
+
+  - Now accesskeys are combined using the correct case
+  - Added support for accesskey after ampersand and space
+
+- PHP
+
+  - Fall back to default dialect after adding every new unit
+  - Added support for empty array declaration when it is filled later
+
+- Android
+
+  - Added support for plurals
+  - Text is now properly escaped when using markup
+
+- TS
+
+  - The message id attribute is added to contextname
+
+
+Version Control improvements
+----------------------------
+
+- Added support for Subversion ``.svn`` directories
+
+
+Checks
+------
+
+- Added specific checks for LibreOffice
+
+
+Tools
+-----
+
+- The ``pocount`` tool now has a better counting algorithm for things that look
+  like XML
+
+
+Mozilla tooling fixes
+---------------------
+
+- Added support to check for bad accesskeys in .properties files
+- The Mozilla roundtrip script can now be run silently
+- Added a new Gaia roundtrip script
+- The ``buildxpi`` ``--disable-compile-environment`` option has been restored,
+  resulting in huge speed improvements
+
+
+General
+-------
+
+- Extensive cleanup of setup script
+- Some bugfixes for placeables
+- Misc docs cleanups
+- Code cleanups:
+
+  - Applied tons of PEP8 and style guide cleanups
+  - Python 2.6 is our new minimum:
+
+    - Removed lots of code used to support old Python versions
+    - Dropped custom code in favor of Python standard libraries
+    - Updated codebase to use newer libraries
+    - Changed code to use newer syntax seeking Python 3 compatibility
+
+  - Updated some third party bundled software: CherryPy, BeautifulSoup4
+  - Added document to track licenses used by third party bundled code
+  - Removed TODO items. Some of them were moved to the bug tracker
+
+- Development process:
+
+  - Added a functional test framework
+  - Added dozens of new unit and functional tests
+  - Expanded the tasks performed in Travis: pep8, pytest-xdist, compile all
+    files, coveralls.io, ...
+
+
+...and loads of general code cleanups and of course many many bugfixes.
+
+
+Contributors
+------------
+
+This release was made possible by the following people:
+
+Dwayne Bailey, Jerome Leclanche, Leandro Regueiro, Khaled Hosny, Friedel Wolff,
+Heiki Ojasild, Julen Ruiz Aizpuru, damian.golda, Zolnai Tamás,
+Vladimir Rusinov, Stuart Prescott, Michal Čihař, Luca De Petrillo,
+Kevin KIN-FOO, Henrik Saari, Dominic König.
+
+And to all our bug finders and testers, a Very BIG Thank You.
diff --git a/docs/releases/1.12.0.rst b/docs/releases/1.12.0.rst
new file mode 100644
index 0000000..430f398
--- /dev/null
+++ b/docs/releases/1.12.0.rst
@@ -0,0 +1,182 @@
+.. These notes are used in:
+   1. Our email announcements
+   2. The Translate Tools download page at toolkit.translatehouse.org
+   3. Sourceforge download page in
+      http://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.12.0/README.rst/download
+
+Translate Toolkit 1.12.0
+************************
+
+*Released on 12 August 2014*
+
+This release contains many improvements and bug fixes. While it contains many
+general improvements, it also specifically contains needed changes and
+optimizations for the upcoming `Pootle <http://pootle.translatehouse.org/>`_
+2.6.0 and `Virtaal <http://virtaal.translatehouse.org>`_ releases.
+
+It is just over 6 months since the last release and there are many improvements
+across the board.  A number of people contributed to this release and we've
+tried to credit them wherever possible (sorry if somehow we missed you).
+
+..
+  This is used for the email and other release notifications
+  Getting it and sharing it
+  =========================
+  * pip install translate-toolkit
+  * `Sourceforge download
+    <https://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.12.0/>`_
+  * Please share this URL http://toolkit.translatehouse.org/download.html if
+    you'd like to tweet or post about the release.
+
+
+Highlighted improvements
+========================
+
+1.12.0 vs 1.12.0-rc1
+--------------------
+
+Changes since 1.12.0 RC1:
+
+- Added support for UTF-8 encoded OS X strings
+- RC format received some bugfixes and now ignores ``TEXTINCLUDE`` sections and
+  one line comments (``//``)
+- Qt Linguist files now output the XML declaration (:issue:`3198`)
+- ``xliff2po`` now supports files with ``.xliff`` extension
+- Minor change in placeables to correctly insert at an existing parent if
+  appropriate
+- Recovered diff-match-patch to provide support for old third party consumers
+- Added new tests for the UTF-8 encoded OS X strings, Qt Linguist and RC
+  formats and the ``rc2po`` converter
+
+
+Major changes
+-------------
+
+- Properties and DTD formats fix a number of issues
+- Massive code cleanup looking forward to Python 3 compatibility
+- Important changes in development process to ease testing
+
+
+Formats and Converters
+----------------------
+
+- Mozilla properties
+
+  - If a unit has an associated access key entry then these are combined into a
+    single unit
+  - Encoding errors are now reported early to prevent them being masked by
+    subsequent errors
+  - Leading and trailing spaces are escaped in order to avoid losing them when
+    using the converters
+  - The ``\uNN`` characters are now properly handled
+  - po2prop now uses the source language accesskey if translation is missing
+  - Fixed conversion of successive Gaia plural units in prop2po
+
+- DTD
+
+  - The ``&amp;`` entity is automatically expanded when reading DTD files, and
+    escaped back when writing them
+  - Underscore character is now a valid character in entity names
+  - Nonentities at end of string are now correctly handled
+  - po2dtd:
+
+    - Now uses the source language accesskey if target accesskey is missing
+    - Doesn't remove stray ``&`` as they are probably intended to be ``&amp;``
+
+- HTML
+
+  - The HTML5 ``figcaption`` tag is now localizable
+  - The ``title`` attribute is now localizable
+  - po2html now retains the untranslated attributes
+
+- Accesskeys
+
+  - Now accesskeys are combined using the correct case
+  - Added support for accesskey after ampersand and space
+
+- PHP
+
+  - Fall back to default dialect after adding every new unit
+  - Added support for empty array declaration when it is filled later
+
+- Android
+
+  - Added support for plurals
+  - Text is now properly escaped when using markup
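The plurals support can be pictured with a minimal serialiser (a hypothetical ``make_android_plurals`` helper, not the toolkit's actual storage API, which does real XML handling):

```python
from xml.sax.saxutils import escape

def make_android_plurals(name, forms):
    """Render an Android <plurals> resource from a quantity->text mapping.

    Simplified sketch: it just serialises whatever quantity categories
    (one, other, ...) are supplied, escaping XML special characters.
    """
    items = ''.join(
        '  <item quantity="%s">%s</item>\n' % (quantity, escape(text))
        for quantity, text in sorted(forms.items()))
    return '<plurals name="%s">\n%s</plurals>' % (escape(name), items)
```

Calling it with ``{'one': '%d file', 'other': '%d files'}`` produces the familiar ``<plurals>`` block from Android ``strings.xml``.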
+
+- TS
+
+  - The message ``id`` attribute is now added to the context name
+
+
+Version Control improvements
+----------------------------
+
+- Added support for Subversion ``.svn`` directories
+
+
+Checks
+------
+
+- Added specific checks for LibreOffice
+
+
+Tools
+-----
+
+- The ``pocount`` tool now has a better counting algorithm for things that
+  look like XML
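The idea of markup-aware counting can be sketched like this (a hypothetical helper; ``pocount``'s real algorithm is more involved):

```python
import re

def count_words_ignoring_markup(text):
    """Count words in a string, treating XML-like tags as non-words.

    Sketch of the idea only: drop anything that looks like a tag,
    then count the remaining whitespace-separated tokens.
    """
    stripped = re.sub(r'<[^>]+>', ' ', text)
    return len(stripped.split())
```

So ``Click <b>here</b> now`` counts as three words rather than five tokens.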
+
+
+Mozilla tooling fixes
+---------------------
+
+- Added support to check for bad accesskeys in .properties files
+- The Mozilla roundtrip script can now be run silently
+- Added a new Gaia roundtrip script
+- The ``buildxpi`` ``--disable-compile-environment`` option has been restored,
+  resulting in huge speed improvements
+
+
+General
+-------
+
+- Extensive cleanup of setup script
+- Some bugfixes for placeables
+- Misc docs cleanups
+- Code cleanups:
+
+  - Applied tons of PEP8 and style guide cleanups
+  - Python 2.6 is our new minimum:
+
+    - Removed lots of code used to support old Python versions
+    - Dropped custom code in favor of Python standard libraries
+    - Updated codebase to use newer libraries
+    - Changed code to use newer syntax seeking Python 3 compatibility
+
+  - Updated some third party bundled software: CherryPy, BeautifulSoup4
+  - Added document to track licenses used by third party bundled code
+  - Removed TODO items. Some of them were moved to the bug tracker
+
+- Development process:
+
+  - Added a functional test framework
+  - Added dozens of new unit and functional tests
+  - Expanded the tasks performed in Travis: pep8, pytest-xdist, compile all
+    files, coveralls.io, ...
+
+
+...and loads of general code cleanups and, of course, many, many bugfixes.
+
+
+Contributors
+------------
+
+This release was made possible by the following people:
+
+Dwayne Bailey, Jerome Leclanche, Leandro Regueiro, Khaled Hosny,
+Javier Alfonso, Friedel Wolff, Michal Čihař, Heiki Ojasild, Julen Ruiz Aizpuru,
+Florian Preinstorfer, damian.golda, Zolnai Tamás, Vladimir Rusinov,
+Stuart Prescott, Luca De Petrillo, Kevin KIN-FOO, Henrik Saari, Dominic König.
+
+And to all our bug finders and testers, a Very BIG Thank You.
diff --git a/docs/releases/1.8.1.rst b/docs/releases/1.8.1.rst
index d987e31..c08cdc6 100644
--- a/docs/releases/1.8.1.rst
+++ b/docs/releases/1.8.1.rst
@@ -71,4 +71,4 @@ The Translate team
 .. _pot2po: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/pot2po.html
 .. _Full feature list: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/features.html
 .. _Download: http://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.8.1/
-.. _Bugs: http://bugs.locamotion.org/
+.. _Bugs: https://github.com/translate/translate/issues
diff --git a/docs/releases/1.9.0.rst b/docs/releases/1.9.0.rst
index 39add52..59938c9 100644
--- a/docs/releases/1.9.0.rst
+++ b/docs/releases/1.9.0.rst
@@ -39,7 +39,7 @@ options made this `possible for GNOME
 
 Most relevant for Pootle
 ------------------------
-- Support for Xapian 1.2 (:bug:`1766`) [Rimas Kudelis]
+- Support for Xapian 1.2 (:issue:`1766`) [Rimas Kudelis]
 - Work around some changes introduced in Django 1.2.5/1.3
 
 
@@ -47,7 +47,7 @@ Format support
 --------------
 - Always use UNIX line endings for PO (even on Windows)
 - XLIFF and .ts files now show "fuzzy" only if the target is present
-- Improved support for .ts comment as context (:bug:`1739`)
+- Improved support for .ts comment as context (:issue:`1739`)
 - Support for Java properties in UTF-8 encoding
 - More natural string ordering in json converter
 - Improved handling of trailing spaces in Mozilla DTD files
diff --git a/docs/releases/dev.rst b/docs/releases/dev.rst
new file mode 100644
index 0000000..2a8e65f
--- /dev/null
+++ b/docs/releases/dev.rst
@@ -0,0 +1,71 @@
+.. These notes are used in:
+   1. Our email announcements
+   2. The Translate Tools download page at toolkit.translatehouse.org
+   3. Sourceforge download page in
+      http://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.12.0-rc1/README.rst/download
+
+Translate Toolkit 1.12.0-rc1
+****************************
+
+*Not yet released*
+
+This release contains many improvements and bug fixes. While it contains many
+general improvements, it also specifically contains needed changes and
+optimizations for the upcoming `Pootle <http://pootle.translatehouse.org/>`_
+2.6.0 and `Virtaal <http://virtaal.translatehouse.org>`_ releases.
+
+It is just over X months since the last release and there are many improvements
+across the board.  A number of people contributed to this release and we've
+tried to credit them wherever possible (sorry if somehow we missed you).
+
+..
+  This is used for the email and other release notifications
+  Getting it and sharing it
+  =========================
+  * pip install translate-toolkit
+  * `Sourceforge download
+    <https://sourceforge.net/projects/translate/files/Translate%20Toolkit/1.12.0/>`_
+  * Please share this URL http://toolkit.translatehouse.org/download.html if
+    you'd like to tweet or post about the release.
+
+Highlighted improvements
+========================
+
+Major changes
+-------------
+
+- Properties and DTD formats fix a number of issues
+- Massive code cleanup in preparation for Python 3 compatibility
+- Important changes in development process to ease testing
+
+
+Formats and Converters
+----------------------
+
+- Mozilla properties
+
+  - The ``\uNN`` characters are now properly handled
+  - Fixed conversion of successive Gaia plural units in prop2po
+
+- DTD
+
+  - Underscore character is now a valid character in entity names
+
+
+General
+-------
+
+- Misc docs cleanups
+
+
+...and loads of general code cleanups and, of course, many, many bugfixes.
+
+
+Contributors
+------------
+
+This release was made possible by the following people:
+
+%CONTRIBUTORS%
+
+And to all our bug finders and testers, a Very BIG Thank You.
diff --git a/docs/releases/index.rst b/docs/releases/index.rst
index 3be0bb4..e6fa131 100644
--- a/docs/releases/index.rst
+++ b/docs/releases/index.rst
@@ -3,12 +3,17 @@
 Release Notes
 *************
 
-The following are release notes used on PyPI, Sourceforge and mailing lists for
-Pootle releases.
+The following are release notes used on `PyPI
+<https://pypi.python.org/pypi/translate-toolkit>`_, `Sourceforge
+<http://sourceforge.net/projects/translate/files/Translate%20Toolkit/>`_ and
+mailing lists for Translate Toolkit releases.
 
 .. toctree::
    :maxdepth: 1
 
+   dev <dev>
+   1.12.0 <1.12.0>
+   1.12.0-rc1 <1.12.0-rc1>
    1.11.0 <1.11.0>
    1.11.0-rc1 <1.11.0-rc1>
    1.10.0 <1.10.0>
diff --git a/requirements/dev.txt b/requirements/dev.txt
index d688ba8..95be1e1 100644
--- a/requirements/dev.txt
+++ b/requirements/dev.txt
@@ -1,4 +1,8 @@
 -r optional.txt
 
+isort>=3.5.0
+pep8
 pytest>=2.2
-Sphinx>=1.1
+pytest-cov
+pytest-xdist
+Sphinx>=1.2.0
diff --git a/requirements/optional.txt b/requirements/optional.txt
index e61773e..e1c7406 100644
--- a/requirements/optional.txt
+++ b/requirements/optional.txt
@@ -3,17 +3,12 @@
 ##################
 # Format support #
 ##################
-# ini2po
-iniparse>=0.3.1
-
-# ical2po
-vobject>=0.6.6
-
-# Trados TM
-BeautifulSoup>=3.2
-
-# sub2po
-#aeidon>=0.14
+BeautifulSoup4>=4.3  # Trados TM
+iniparse>=0.3.1      # INI (ini2po)
+vobject>=0.6.6       # iCal (ical2po)
 
+#aeidon>=0.14        # Subtitles (sub2po)
 # aeidon not available through pip/PyPI,
 # so recording this dependency here is pointless except as a comment
+
+Babel==1.3           # Android plurals
diff --git a/requirements/recommended.txt b/requirements/recommended.txt
index db52402..a78adcd 100644
--- a/requirements/recommended.txt
+++ b/requirements/recommended.txt
@@ -2,8 +2,9 @@
 # Recommended #
 ###############
 
-# lxml - for XML processing (XLIFF, TMX, TBX)
-lxml>=2.1.0  # >=1.3.4 should work
+# lxml - for XML processing (XLIFF, TMX, TBX, Android)
+lxml>=2.2.0
 
 # Faster matching in e.g. pot2po
-python-Levenshtein>=0.11.1
+# 0.11.0 is broken using pip, later versions are fixed
+python-Levenshtein>=0.10.2,!=0.11.0
diff --git a/requirements/required.txt b/requirements/required.txt
new file mode 100644
index 0000000..9487485
--- /dev/null
+++ b/requirements/required.txt
@@ -0,0 +1,7 @@
+# argparse is required for Python 2.6, it is a
+# standard library in Python >= 2.7
+argparse
+six
+
+# Required to provide compatibility for old Virtaal releases:
+diff-match-patch
diff --git a/setup.cfg b/setup.cfg
new file mode 100644
index 0000000..861a9f5
--- /dev/null
+++ b/setup.cfg
@@ -0,0 +1,5 @@
+[egg_info]
+tag_build = 
+tag_date = 0
+tag_svn_revision = 0
+
diff --git a/setup.py b/setup.py
index 4956d96..d837162 100755
--- a/setup.py
+++ b/setup.py
@@ -17,20 +17,12 @@
 # You should have received a copy of the GNU General Public License along with
 # Translate; if not, see <http://www.gnu.org/licenses/>.
 
-import distutils.sysconfig
 import os
-import os.path
+import re
 import site
 import sys
-from distutils.core import setup, Extension, Distribution, Command
-
-try:
-    import py2exe
-    build_exe = py2exe.build_exe.py2exe
-    Distribution = py2exe.Distribution
-except ImportError:
-    py2exe = None
-    build_exe = Command
+from distutils.sysconfig import get_python_lib
+from os.path import dirname, isfile, join
 
 try:
     from sphinx.setup_command import BuildDoc
@@ -41,18 +33,11 @@ except ImportError:
 from translate import __doc__, __version__
 
 
-# TODO: check out installing into a different path with --prefix/--home
-
-join = os.path.join
-
 PRETTY_NAME = 'Translate Toolkit'
 translateversion = __version__.sver
 
-if sys.version_info >= (2, 6, 0) and site.ENABLE_USER_SITE:
-    sitepackages = site.USER_SITE
-else:
-    packagesdir = distutils.sysconfig.get_python_lib()
-    sitepackages = packagesdir.replace(sys.prefix + os.sep, '')
+packagesdir = get_python_lib()
+sitepackages = packagesdir.replace(sys.prefix + os.sep, '')
 
 infofiles = [(join(sitepackages, 'translate'),
              [filename for filename in ('COPYING', 'README.rst')])]
@@ -63,7 +48,6 @@ subpackages = [
         "filters",
         "lang",
         "misc",
-        join("misc", "typecheck"),
         join("misc", "wsgiserver"),
         "storage",
         join("storage", "placeables"),
@@ -77,7 +61,7 @@ subpackages = [
 # TODO: elementtree doesn't work in sdist, fix this
 packages = ["translate"]
 
-translatescripts = [apply(join, ('translate', ) + script) for script in
+translatescripts = [join(*('translate', ) + script) for script in [
                   ('convert', 'pot2po'),
                   ('convert', 'moz2po'), ('convert', 'po2moz'),
                   ('convert', 'oo2po'), ('convert', 'po2oo'),
@@ -116,9 +100,9 @@ translatescripts = [apply(join, ('translate', ) + script) for script in
                   ('tools', 'pretranslate'),
                   ('services', 'tmserver'),
                   ('tools', 'build_tmdb')]
+]
 
-translatebashscripts = [
-    apply(join, ('tools', ) + script) for script in [
+translatebashscripts = [join(*('tools', ) + script) for script in [
                   ('junitmsgfmt', ),
                   ('mozilla', 'build_firefox.sh'),
                   ('mozilla', 'buildxpi.py'),
@@ -132,189 +116,264 @@ translatebashscripts = [
     ]
 ]
 
+classifiers = [
+    "Development Status :: 5 - Production/Stable",
+    "Environment :: Console",
+    "Intended Audience :: Developers",
+    "License :: OSI Approved :: GNU General Public License (GPL)",
+    "Operating System :: OS Independent",
+    "Operating System :: Microsoft :: Windows",
+    "Operating System :: Unix",
+    "Programming Language :: Python",
+    "Programming Language :: Python :: 2.6",
+    "Programming Language :: Python :: 2.7",
+    "Topic :: Software Development :: Libraries :: Python Modules",
+    "Topic :: Software Development :: Localization",
+]
 
-def addsubpackages(subpackages):
-    for subpackage in subpackages:
-        initfiles.append((join(sitepackages, 'translate', subpackage),
-                          [join('translate', subpackage, '__init__.py')]))
-        packages.append("translate.%s" % subpackage)
 
+# py2exe-specific stuff
+try:
+    import py2exe
+except ImportError:
+    py2exe = None
+else:
+    BuildCommand = py2exe.build_exe.py2exe
+    Distribution = py2exe.Distribution
 
-class build_exe_map(build_exe):
-    """distutils py2exe-based class that builds the exe file(s) but allows
-    mapping data files"""
-
-    def reinitialize_command(self, command, reinit_subcommands=0):
-        if command == "install_data":
-            install_data = build_exe.reinitialize_command(self, command,
-                                                          reinit_subcommands)
-            install_data.data_files = self.remap_data_files(install_data.data_files)
-            return install_data
-        return build_exe.reinitialize_command(self, command, reinit_subcommands)
-
-    def remap_data_files(self, data_files):
-        """maps the given data files to different locations using external
-        map_data_file function"""
-        new_data_files = []
-        for f in data_files:
-            if type(f) in (str, unicode):
-                f = map_data_file(f)
+    class InnoScript(object):
+        """class that builds an InnoSetup script"""
+        def __init__(self, name, lib_dir, dist_dir, exe_files=[], other_files=[],
+                    install_scripts=[], version="1.0"):
+            self.lib_dir = lib_dir
+            self.dist_dir = dist_dir
+            if not self.dist_dir.endswith(os.sep):
+                self.dist_dir += os.sep
+            self.name = name
+            self.version = version
+            self.exe_files = [self.chop(p) for p in exe_files]
+            self.other_files = [self.chop(p) for p in other_files]
+            self.install_scripts = install_scripts
+
+        def getcompilecommand(self):
+            try:
+                import _winreg
+                compile_key = _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT,
+                                            "innosetupscriptfile\\shell\\compile\\command")
+                compilecommand = _winreg.QueryValue(compile_key, "")
+                compile_key.Close()
+            except:
+                compilecommand = 'compil32.exe "%1"'
+            return compilecommand
+
+        def chop(self, pathname):
+            """returns the path relative to self.dist_dir"""
+            assert pathname.startswith(self.dist_dir)
+            return pathname[len(self.dist_dir):]
+
+        def create(self, pathname=None):
+            """creates the InnoSetup script"""
+            if pathname is None:
+                _name = self.name + os.extsep + "iss"
+                self.pathname = join(self.dist_dir, _name).replace(" ", "_")
             else:
-                datadir, files = f
-                datadir = map_data_file(datadir)
-                if datadir is None:
-                    f = None
+                self.pathname = pathname
+
+            # See http://www.jrsoftware.org/isfaq.php for more InnoSetup config options.
+            ofi = self.file = open(self.pathname, "w")
+            ofi.write("; WARNING: This script has been created by py2exe. Changes to this script\n")
+            ofi.write("; will be overwritten the next time py2exe is run!\n")
+            ofi.write("[Setup]\n")
+            ofi.write("AppName=%s\n" % self.name)
+            ofi.write("AppVerName=%s %s\n" % (self.name, self.version))
+            ofi.write("DefaultDirName={pf}\\%s\n" % self.name)
+            ofi.write("DefaultGroupName=%s\n" % self.name)
+            ofi.write("OutputBaseFilename=%s-%s-setup\n" % (self.name, self.version))
+            ofi.write("ChangesEnvironment=yes\n")
+            ofi.write("\n")
+            ofi.write("[Files]\n")
+            for path in self.exe_files + self.other_files:
+                ofi.write('Source: "%s"; DestDir: "{app}\\%s"; Flags: ignoreversion\n' % (path, dirname(path)))
+            ofi.write("\n")
+            ofi.write("[Icons]\n")
+            ofi.write('Name: "{group}\\Documentation"; Filename: "{app}\\docs\\index.html";\n')
+            ofi.write('Name: "{group}\\Translate Toolkit Command Prompt"; Filename: "cmd.exe"\n')
+            ofi.write('Name: "{group}\\Uninstall %s"; Filename: "{uninstallexe}"\n' % self.name)
+            ofi.write("\n")
+            ofi.write("[Registry]\n")
+            # TODO: Move the code to update the Path environment variable to a
+            # Python script which will be invoked by the [Run] section (below)
+            ofi.write('Root: HKCU; Subkey: "Environment"; ValueType: expandsz; '
+                    'ValueName: "Path"; ValueData: "{reg:HKCU\\Environment,Path|};{app};"\n')
+            ofi.write("\n")
+            if self.install_scripts:
+                ofi.write("[Run]\n")
+                for path in self.install_scripts:
+                    ofi.write('Filename: "{app}\\%s"; WorkingDir: "{app}"; Parameters: "-install"\n' % path)
+                ofi.write("\n")
+                ofi.write("[UninstallRun]\n")
+                for path in self.install_scripts:
+                    ofi.write('Filename: "{app}\\%s"; WorkingDir: "{app}"; Parameters: "-remove"\n' % path)
+            ofi.write("\n")
+            ofi.close()
+
+        def compile(self):
+            """compiles the script using InnoSetup"""
+            shellcompilecommand = self.getcompilecommand()
+            compilecommand = shellcompilecommand.replace('"%1"', self.pathname)
+            result = os.system(compilecommand)
+            if result:
+                print("Error compiling iss file")
+                print("Opening iss file, use InnoSetup GUI to compile manually")
+                os.startfile(self.pathname)
+
+
+    class build_exe_map(BuildCommand):
+        """distutils py2exe-based class that builds the exe file(s) but allows
+        mapping data files"""
+
+        def reinitialize_command(self, command, reinit_subcommands=0):
+            if command == "install_data":
+                install_data = BuildCommand.reinitialize_command(self, command,
+                                                            reinit_subcommands)
+                install_data.data_files = self.remap_data_files(install_data.data_files)
+                return install_data
+            return BuildCommand.reinitialize_command(self, command, reinit_subcommands)
+
+        def remap_data_files(self, data_files):
+            """maps the given data files to different locations using external
+            map_data_file function"""
+            new_data_files = []
+            for f in data_files:
+                if type(f) in (str, unicode):
+                    f = map_data_file(f)
                 else:
-                    f = datadir, files
-            if f is not None:
-                new_data_files.append(f)
-        return new_data_files
-
-
-class InnoScript:
-    """class that builds an InnoSetup script"""
-    def __init__(self, name, lib_dir, dist_dir, exe_files=[], other_files=[],
-                 install_scripts=[], version="1.0"):
-        self.lib_dir = lib_dir
-        self.dist_dir = dist_dir
-        if not self.dist_dir.endswith(os.sep):
-            self.dist_dir += os.sep
-        self.name = name
-        self.version = version
-        self.exe_files = [self.chop(p) for p in exe_files]
-        self.other_files = [self.chop(p) for p in other_files]
-        self.install_scripts = install_scripts
-
-    def getcompilecommand(self):
-        try:
-            import _winreg
-            compile_key = _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT,
-                                          "innosetupscriptfile\\shell\\compile\\command")
-            compilecommand = _winreg.QueryValue(compile_key, "")
-            compile_key.Close()
-        except:
-            compilecommand = 'compil32.exe "%1"'
-        return compilecommand
-
-    def chop(self, pathname):
-        """returns the path relative to self.dist_dir"""
-        assert pathname.startswith(self.dist_dir)
-        return pathname[len(self.dist_dir):]
-
-    def create(self, pathname=None):
-        """creates the InnoSetup script"""
-        if pathname is None:
-            self.pathname = os.path.join(self.dist_dir,
-                                         self.name + os.extsep + "iss").replace(' ', '_')
-        else:
-            self.pathname = pathname
-# See http://www.jrsoftware.org/isfaq.php for more InnoSetup config options.
-        ofi = self.file = open(self.pathname, "w")
-        print >> ofi, "; WARNING: This script has been created by py2exe. Changes to this script"
-        print >> ofi, "; will be overwritten the next time py2exe is run!"
-        print >> ofi, r"[Setup]"
-        print >> ofi, r"AppName=%s" % self.name
-        print >> ofi, r"AppVerName=%s %s" % (self.name, self.version)
-        print >> ofi, r"DefaultDirName={pf}\%s" % self.name
-        print >> ofi, r"DefaultGroupName=%s" % self.name
-        print >> ofi, r"OutputBaseFilename=%s-%s-setup" % (self.name, self.version)
-        print >> ofi, r"ChangesEnvironment=yes"
-        print >> ofi
-        print >> ofi, r"[Files]"
-        for path in self.exe_files + self.other_files:
-            print >> ofi, r'Source: "%s"; DestDir: "{app}\%s"; Flags: ignoreversion' % \
-                            (path, os.path.dirname(path))
-        print >> ofi
-        print >> ofi, r"[Icons]"
-        print >> ofi, r'Name: "{group}\Documentation"; Filename: "{app}\docs\index.html";'
-        print >> ofi, r'Name: "{group}\Translate Toolkit Command Prompt"; Filename: "cmd.exe"'
-        print >> ofi, r'Name: "{group}\Uninstall %s"; Filename: "{uninstallexe}"' % self.name
-        print >> ofi
-        print >> ofi, r"[Registry]"
-        # TODO: Move the code to update the Path environment variable to a
-        # Python script which will be invoked by the [Run] section (below)
-        print >> ofi, r'Root: HKCU; Subkey: "Environment"; ValueType: expandsz; ValueName: "Path"; ValueData: "{reg:HKCU\Environment,Path|};{app};"'
-        print >> ofi
-        if self.install_scripts:
-            print >> ofi, r"[Run]"
-            for path in self.install_scripts:
-                print >> ofi, r'Filename: "{app}\%s"; WorkingDir: "{app}"; Parameters: "-install"' % path
-            print >> ofi
-            print >> ofi, r"[UninstallRun]"
-            for path in self.install_scripts:
-                print >> ofi, r'Filename: "{app}\%s"; WorkingDir: "{app}"; Parameters: "-remove"' % path
-        print >> ofi
-        ofi.close()
-
-    def compile(self):
-        """compiles the script using InnoSetup"""
-        shellcompilecommand = self.getcompilecommand()
-        compilecommand = shellcompilecommand.replace('"%1"', self.pathname)
-        result = os.system(compilecommand)
-        if result:
-            print "Error compiling iss file"
-            print "Opening iss file, use InnoSetup GUI to compile manually"
-            os.startfile(self.pathname)
-
-
-class build_installer(build_exe_map):
-    """distutils class that first builds the exe file(s), then creates a
-     Windows installer using InnoSetup"""
-    description = "create an executable installer for MS Windows using InnoSetup and py2exe"
-    user_options = getattr(build_exe, 'user_options', []) + \
-        [('install-script=', None,
-          "basename of installation script to be run after installation or before deinstallation")]
-
-    def initialize_options(self):
-        build_exe.initialize_options(self)
-        self.install_script = None
-
-    def run(self):
-        # First, let py2exe do it's work.
-        build_exe.run(self)
-        lib_dir = self.lib_dir
-        dist_dir = self.dist_dir
-        # create the Installer, using the files py2exe has created.
-        exe_files = self.windows_exe_files + self.console_exe_files
-        install_scripts = self.install_script
-        if isinstance(install_scripts, (str, unicode)):
-            install_scripts = [install_scripts]
-        script = InnoScript(PRETTY_NAME, lib_dir, dist_dir, exe_files,
-                            self.lib_files,
-                            version=self.distribution.metadata.version,
-                            install_scripts=install_scripts)
-        print "*** creating the inno setup script***"
-        script.create()
-        print "*** compiling the inno setup script***"
-        script.compile()
-        # Note: By default the final setup.exe will be in an Output
-        # subdirectory.
-
-
-def map_data_file(data_file):
-    """remaps a data_file (could be a directory) to a different location
-    This version gets rid of Lib\\site-packages, etc"""
-    data_parts = data_file.split(os.sep)
-    if data_parts[:2] == ["Lib", "site-packages"]:
-        data_parts = data_parts[2:]
-        if data_parts:
-            data_file = os.path.join(*data_parts)
-        else:
-            data_file = ""
-    if data_parts[:1] == ["translate"]:
-        data_parts = data_parts[1:]
-        if data_parts:
-            data_file = os.path.join(*data_parts)
+                    datadir, files = f
+                    datadir = map_data_file(datadir)
+                    if datadir is None:
+                        f = None
+                    else:
+                        f = datadir, files
+                if f is not None:
+                    new_data_files.append(f)
+            return new_data_files
+
+
+    class BuildInstaller(build_exe_map):
+        """distutils class that first builds the exe file(s), then creates a
+        Windows installer using InnoSetup"""
+        description = "create an executable installer for MS Windows using InnoSetup and py2exe"
+        user_options = getattr(BuildCommand, 'user_options', []) + \
+            [('install-script=', None,
+            "basename of installation script to be run after installation or before deinstallation")]
+
+        def initialize_options(self):
+            BuildCommand.initialize_options(self)
+            self.install_script = None
+
+        def run(self):
+            # First, let py2exe do it's work.
+            BuildCommand.run(self)
+            lib_dir = self.lib_dir
+            dist_dir = self.dist_dir
+            # create the Installer, using the files py2exe has created.
+            exe_files = self.windows_exe_files + self.console_exe_files
+            install_scripts = self.install_script
+            if isinstance(install_scripts, (str, unicode)):
+                install_scripts = [install_scripts]
+            script = InnoScript(PRETTY_NAME, lib_dir, dist_dir, exe_files,
+                                self.lib_files,
+                                version=self.distribution.metadata.version,
+                                install_scripts=install_scripts)
+            print("*** creating the inno setup script***")
+            script.create()
+            print("*** compiling the inno setup script***")
+            script.compile()
+            # Note: By default the final setup.exe will be in an Output
+            # subdirectory.
+
+
+    class TranslateDistribution(Distribution):
+        """a modified distribution class for translate"""
+        def __init__(self, attrs):
+            baseattrs = {}
+            py2exeoptions = {}
+            py2exeoptions["packages"] = ["translate", "encodings"]
+            py2exeoptions["compressed"] = True
+            py2exeoptions["excludes"] = [
+                "PyLucene", "Tkconstants", "Tkinter", "tcl",
+                "enchant",  # Need to do more to support spell checking on Windows
+                # strange things unnecessarily included with some versions of pyenchant:
+                "win32ui", "_win32sysloader", "win32pipe", "py2exe", "win32com",
+                "pywin", "isapi", "_tkinter", "win32api",
+            ]
+            version = attrs.get("version", translateversion)
+            py2exeoptions["dist_dir"] = "translate-toolkit-%s" % version
+            py2exeoptions["includes"] = ["lxml", "lxml._elementpath"]
+            options = {"py2exe": py2exeoptions}
+            baseattrs['options'] = options
+            if py2exe:
+                baseattrs['console'] = translatescripts
+                baseattrs['zipfile'] = "translate.zip"
+                baseattrs['cmdclass'] = cmdclass.update({
+                    "py2exe": build_exe_map,
+                    "innosetup": BuildInstaller,
+                })
+                options["innosetup"] = py2exeoptions.copy()
+                options["innosetup"]["install_script"] = []
+            baseattrs.update(attrs)
+            Distribution.__init__(self, baseattrs)
+
+
+    def map_data_file(data_file):
+        """remaps a data_file (could be a directory) to a different location
+        This version gets rid of Lib\\site-packages, etc"""
+        data_parts = data_file.split(os.sep)
+        if data_parts[:2] == ["Lib", "site-packages"]:
+            data_parts = data_parts[2:]
+            if data_parts:
+                data_file = join(*data_parts)
+            else:
+                data_file = ""
+        if data_parts[:1] == ["translate"]:
+            data_parts = data_parts[1:]
+            if data_parts:
+                data_file = join(*data_parts)
+            else:
+                data_file = ""
+        return data_file
+
+
+def parse_requirements(file_name):
+    """Parses a pip requirements file and returns a list of packages.
+
+    Use the result of this function in the ``install_requires`` field.
+    Copied from cburgmer/pdfserver.
+    """
+    requirements = []
+    for line in open(file_name, 'r').read().split('\n'):
+        # Ignore comments, blank lines and included requirements files
+        if re.match(r'(\s*#)|(\s*$)|(-r .*$)', line):
+            continue
+
+        if re.match(r'\s*-e\s+', line):
+            requirements.append(re.sub(r'\s*-e\s+.*#egg=(.*)$', r'\1', line))
+        elif re.match(r'\s*-f\s+', line):
+            pass
         else:
-            data_file = ""
-    return data_file
+            requirements.append(line)
+
+    return requirements
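The filtering done by ``parse_requirements`` can be exercised standalone on a list of lines (same regular expressions, file I/O removed; the sample requirement names are made up):

```python
import re

def filter_requirement_lines(lines):
    """Apply parse_requirements-style filtering to a list of lines.

    Comments, blank lines and '-r' includes are skipped, '-f' index
    lines are dropped, and '-e' editable installs are reduced to their
    #egg= name; everything else is kept as a requirement.
    """
    requirements = []
    for line in lines:
        if re.match(r'(\s*#)|(\s*$)|(-r .*$)', line):
            continue
        if re.match(r'\s*-e\s+', line):
            requirements.append(re.sub(r'\s*-e\s+.*#egg=(.*)$', r'\1', line))
        elif re.match(r'\s*-f\s+', line):
            pass
        else:
            requirements.append(line)
    return requirements
```

For example, a file containing a comment, an ``-r`` include, ``six`` and an editable ``#egg=demo`` checkout yields ``['six', 'demo']``.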
 
 
 def getdatafiles():
     datafiles = initfiles + infofiles
 
     def listfiles(srcdir):
-        return join(sitepackages, srcdir), [join(srcdir, f) for f in os.listdir(srcdir) if os.path.isfile(join(srcdir, f))]
+        return (
+            join(sitepackages, 'translate', srcdir),
+            [join(srcdir, f)
+             for f in os.listdir(srcdir) if isfile(join(srcdir, f))])
 
     docfiles = []
     for subdir in ['docs', 'share']:
@@ -325,55 +384,23 @@ def getdatafiles():
     return datafiles
 
 
-def buildmanifest_in(file, scripts):
+def buildmanifest_in(f, scripts):
     """This writes the required files to a MANIFEST.in file"""
-    print >>file, "# MANIFEST.in: the below autogenerated by setup.py from translate %s" % translateversion
-    print >>file, "# things needed by translate setup.py to rebuild"
-    print >>file, "# informational files"
-    for infofile in ("README.rst", "COPYING", "*.txt"):
-        print >>file, "global-include %s" % infofile
-    print >>file, "# C programs"
-    print >>file, "global-include *.c"
-    print >> file, "# scripts which don't get included by default in sdist"
+    f.write("# MANIFEST.in: the below autogenerated by setup.py from translate %s\n" % translateversion)
+    f.write("# things needed by translate setup.py to rebuild\n")
+    f.write("# informational files\n")
+    for infofile in ("README.rst", "COPYING", "*.txt"):
+        f.write("global-include %s\n" % infofile)
+    f.write("# C programs\n")
+    f.write("global-include *.c\n")
+    f.write("# scripts which don't get included by default in sdist\n")
     for scriptname in scripts:
-        print >>file, "include %s" % scriptname
-    print >> file, "# include our documentation"
-    print >> file, "graft docs"
-    print >> file, "prune docs/doctrees"
-    print >> file, "graft share"
-    print >>file, "# MANIFEST.in: the above autogenerated by setup.py from translate %s" % translateversion
-
-
-class TranslateDistribution(Distribution):
-    """a modified distribution class for translate"""
-    def __init__(self, attrs):
-        baseattrs = {}
-        py2exeoptions = {}
-        py2exeoptions["packages"] = ["translate", "encodings"]
-        py2exeoptions["compressed"] = True
-        py2exeoptions["excludes"] = [
-            "PyLucene", "Tkconstants", "Tkinter", "tcl",
-            "enchant",  # Need to do more to support spell checking on Windows
-            # strange things unnecessarily included with some versions of pyenchant:
-            "win32ui", "_win32sysloader", "win32pipe", "py2exe", "win32com",
-            "pywin", "isapi", "_tkinter", "win32api",
-        ]
-        version = attrs.get("version", translateversion)
-        py2exeoptions["dist_dir"] = "translate-toolkit-%s" % version
-        py2exeoptions["includes"] = ["lxml", "lxml._elementpath"]
-        options = {"py2exe": py2exeoptions}
-        baseattrs['options'] = options
-        if py2exe:
-            baseattrs['console'] = translatescripts
-            baseattrs['zipfile'] = "translate.zip"
-            baseattrs['cmdclass'] = cmdclass.update({
-                "py2exe": build_exe_map,
-                "innosetup": build_installer,
-            })
-            options["innosetup"] = py2exeoptions.copy()
-            options["innosetup"]["install_script"] = []
-        baseattrs.update(attrs)
-        Distribution.__init__(self, baseattrs)
+        f.write("include %s\n" % scriptname)
+    f.write("# include our documentation\n")
+    f.write("graft docs\n")
+    f.write("prune docs/doctrees\n")
+    f.write("graft share\n")
+    f.write("# MANIFEST.in: end of autogenerated block\n")
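The hunk above replaces the Python 2-only `print >>file, …` statements with `f.write()` calls, which behave identically on Python 2 and 3 provided the newline is written explicitly. A minimal illustration of the equivalence, using an in-memory file:

```python
from io import StringIO

# Python 2-only statement (removed above):  print >>f, "graft docs"
# Portable replacement used instead:        f.write("graft docs\n")
f = StringIO()
f.write("# include our documentation\n")
f.write("graft docs\n")
f.write("prune docs/doctrees\n")
print(f.getvalue())
```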
 
 
 def standardsetup(name, version, custompackages=[], customdatafiles=[]):
@@ -382,31 +409,26 @@ def standardsetup(name, version, custompackages=[], customdatafiles=[]):
         manifest_in = open("MANIFEST.in", "w")
         buildmanifest_in(manifest_in, translatescripts + translatebashscripts)
         manifest_in.close()
-    except IOError, e:
-        print >> sys.stderr, "warning: could not recreate MANIFEST.in, continuing anyway. Error was %s" % e
-    addsubpackages(subpackages)
-    datafiles = getdatafiles()
-    ext_modules = []
-    dosetup(name, version, packages + custompackages,
-            datafiles + customdatafiles,
-            translatescripts + translatebashscripts, ext_modules)
+    except IOError as e:
+        sys.stderr.write("warning: could not recreate MANIFEST.in, continuing anyway. (%s)\n" % e)
 
-classifiers = [
-  "Development Status :: 5 - Production/Stable",
-  "Environment :: Console",
-  "Intended Audience :: Developers",
-  "License :: OSI Approved :: GNU General Public License (GPL)",
-  "Programming Language :: Python",
-  "Topic :: Software Development :: Localization",
-  "Topic :: Software Development :: Libraries :: Python Modules",
-  "Operating System :: OS Independent",
-  "Operating System :: Microsoft :: Windows",
-  "Operating System :: Unix"
-]
+    for subpackage in subpackages:
+        initfiles.append((join(sitepackages, "translate", subpackage),
+                          [join("translate", subpackage, "__init__.py")]))
+        packages.append("translate.%s" % subpackage)
+
+    datafiles = getdatafiles()
+    dosetup(name, version, packages + custompackages, datafiles + customdatafiles,
+            translatescripts + translatebashscripts)
 
 
 def dosetup(name, version, packages, datafiles, scripts, ext_modules=[]):
+    from setuptools import setup
     description, long_description = __doc__.split("\n", 1)
+    kwargs = {}
+    if py2exe:
+        kwargs["distclass"] = TranslateDistribution
+
     setup(name=name,
           version=version,
           license="GNU General Public License (GPL)",
@@ -422,9 +444,10 @@ def dosetup(name, version, packages, datafiles, scripts, ext_modules=[]):
           data_files=datafiles,
           scripts=scripts,
           ext_modules=ext_modules,
-          distclass=TranslateDistribution,
-          cmdclass = cmdclass
-         )
+          cmdclass=cmdclass,
+          install_requires=parse_requirements('requirements/required.txt'),
+          **kwargs
+    )
 
 if __name__ == "__main__":
     standardsetup("translate-toolkit", translateversion)
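Another recurring Python 3 preparation in this file is the exception syntax: the Python 2-only `except IOError, e:` becomes `except IOError as e:`, valid on Python 2.6+ and Python 3. Sketched in isolation (the path below is a stand-in, not from the patch):

```python
import sys

try:
    # Deliberately unwritable path, to trigger the handler
    manifest_in = open("/nonexistent-dir/MANIFEST.in", "w")
except IOError as e:  # Python 2 wrote: except IOError, e:
    sys.stderr.write("warning: could not recreate MANIFEST.in, "
                     "continuing anyway. (%s)\n" % e)
```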
diff --git a/tests/cli/data/test_pocount/stderr.txt b/tests/cli/data/test_pocount/stderr.txt
new file mode 100644
index 0000000..35e7966
--- /dev/null
+++ b/tests/cli/data/test_pocount/stderr.txt
@@ -0,0 +1,4 @@
+usage: pocount [-h] [--incomplete]
+               [--full | --csv | --short | --short-strings | --short-words]
+               files [files ...]
+pocount: error: too few arguments
diff --git a/tests/cli/data/test_pocount_help/stdout.txt b/tests/cli/data/test_pocount_help/stdout.txt
new file mode 100644
index 0000000..d29c0cd
--- /dev/null
+++ b/tests/cli/data/test_pocount_help/stdout.txt
@@ -0,0 +1,17 @@
+usage: pocount [-h] [--incomplete]
+               [--full | --csv | --short | --short-strings | --short-words]
+               files [files ...]
+
+positional arguments:
+  files
+
+optional arguments:
+  -h, --help       show this help message and exit
+  --incomplete     skip 100% translated files.
+
+Output format:
+  --full           (default) statistics in full, verbose format
+  --csv            statistics in CSV format
+  --short          same as --short-strings
+  --short-strings  statistics of strings in short format - one line per file
+  --short-words    statistics of words in short format - one line per file
diff --git a/tests/cli/data/test_pocount_mutually_exclusive/stderr.txt b/tests/cli/data/test_pocount_mutually_exclusive/stderr.txt
new file mode 100644
index 0000000..0d2a60a
--- /dev/null
+++ b/tests/cli/data/test_pocount_mutually_exclusive/stderr.txt
@@ -0,0 +1,4 @@
+usage: pocount [-h] [--incomplete]
+               [--full | --csv | --short | --short-strings | --short-words]
+               files [files ...]
+pocount: error: argument --csv: not allowed with argument --short
diff --git a/tests/cli/data/test_pocount_nonexistant/stderr.txt b/tests/cli/data/test_pocount_nonexistant/stderr.txt
new file mode 100644
index 0000000..e96fd2d
--- /dev/null
+++ b/tests/cli/data/test_pocount_nonexistant/stderr.txt
@@ -0,0 +1 @@
+translate.tools.pocount: ERROR: cannot process missing.po: does not exist
diff --git a/tests/cli/data/test_pocount_po_file/stdout.txt b/tests/cli/data/test_pocount_po_file/stdout.txt
new file mode 100644
index 0000000..d4c32dc
--- /dev/null
+++ b/tests/cli/data/test_pocount_po_file/stdout.txt
@@ -0,0 +1,9 @@
+./data/test_pocount_po_file/one.po
+type              strings      words (source)    words (translation)
+translated:       1 (100%)          3 (100%)               3
+fuzzy:            0 (  0%)          0 (  0%)             n/a
+untranslated:     0 (  0%)          0 (  0%)             n/a
+Total:            1                 3                      3
+
+unreviewed:        1 (100%)          3 (100%)               3
+
diff --git a/tests/cli/data/test_pofilter_listfilters/stdout.txt b/tests/cli/data/test_pofilter_listfilters/stdout.txt
new file mode 100644
index 0000000..03f5418
--- /dev/null
+++ b/tests/cli/data/test_pofilter_listfilters/stdout.txt
@@ -0,0 +1,73 @@
+accelerators	Checks whether accelerators are consistent between the
+        two strings.
+        
+acronyms	Checks that acronyms that appear are unchanged.
+blank	Checks whether a translation only contains spaces.
+brackets	Checks that the number of brackets in both strings match.
+compendiumconflicts	Checks for Gettext compendium conflicts (#-#-#-#-#).
+credits	Checks for messages containing translation credits instead of
+        normal translations.
+        
+doublequoting	Checks whether doublequoting is consistent between the
+        two strings.
+        
+doublespacing	Checks for bad double-spaces by comparing to original.
+doublewords	Checks for repeated words in the translation.
+emails	Checks that emails are not translated.
+endpunc	Checks whether punctuation at the end of the strings match.
+endwhitespace	Checks whether whitespace at the end of the strings matches.
+escapes	Checks whether escaping is consistent between the two strings.
+filepaths	Checks that file paths have not been translated.
+functions	Checks that function names are not translated.
+hassuggestion	Checks if there is at least one suggested translation for this
+        unit.
+        
+isfuzzy	Check if the unit has been marked fuzzy.
+isreview	Check if the unit has been marked review.
+kdecomments	Checks to ensure that no KDE style comments appear in the
+        translation.
+        
+long	Checks whether a translation is much longer than the original
+        string.
+        
+musttranslatewords	Checks that words configured as definitely translatable don't appear
+        in the translation.
+newlines	Checks whether newlines are consistent between the two strings.
+notranslatewords	Checks that words configured as untranslatable appear in the
+        translation too.
+nplurals	Checks for the correct number of noun forms for plural
+        translations.
+        
+numbers	Checks whether numbers of various forms are consistent between the
+        two strings.
+        
+options	Checks that options are not translated.
+printf	Checks whether printf format strings match.
+puncspacing	Checks for bad spacing after punctuation.
+purepunc	Checks that strings that are purely punctuation are not changed.
+sentencecount	Checks that the number of sentences in both strings match.
+short	Checks whether a translation is much shorter than the original
+        string.
+        
+simplecaps	Checks the capitalisation of two strings isn't wildly different.
+simpleplurals	Checks for English style plural(s) for you to review.
+singlequoting	Checks whether singlequoting is consistent between the two strings.
+spellcheck	Checks words that don't pass a spell check.
+startcaps	Checks that the message starts with the correct capitalisation.
+startpunc	Checks whether punctuation at the beginning of the strings match.
+startwhitespace	Checks whether whitespace at the beginning of the strings
+        matches.
+        
+tabs	Checks whether tabs are consistent between the two strings.
+unchanged	Checks whether a translation is basically identical to the original
+        string.
+        
+untranslated	Checks whether a string has been translated at all.
+urls	Checks that URLs are not translated.
+validchars	Checks that only characters specified as valid appear in the
+        translation.
+        
+variables	Checks whether variables of various forms are consistent between the
+        two strings.
+        
+xmltags	Checks that XML/HTML tags have not been translated.
diff --git a/tests/cli/data/test_pofilter_manpage/stdout.txt b/tests/cli/data/test_pofilter_manpage/stdout.txt
new file mode 100644
index 0000000..34a06b4
--- /dev/null
+++ b/tests/cli/data/test_pofilter_manpage/stdout.txt
@@ -0,0 +1,102 @@
+.\" Autogenerated manpage
+.TH pofilter 1 "Translate Toolkit 1.12.0" "" "Translate Toolkit 1.12.0"
+.SH NAME
+pofilter \- Perform quality checks on Gettext PO, XLIFF and TMX localization files.
+.SH SYNOPSIS
+.PP
+\fBpofilter \fR[\fP--version\fR]\fP \fR[\fP-h\fR|\fP--help\fR]\fP \fR[\fP--manpage\fR]\fP \fR[\fP--progress \fIPROGRESS\fP\fR]\fP \fR[\fP--errorlevel \fIERRORLEVEL\fP\fR]\fP \fR[\fP-i\fR|\fP--input\fR]\fP \fIINPUT\fP \fR[\fP-x\fR|\fP--exclude \fIEXCLUDE\fP\fR]\fP \fR[\fP-o\fR|\fP--output\fR]\fP \fIOUTPUT\fP \fR[\fP-l\fR|\fP--listfilters\fR]\fP \fR[\fP--review\fR]\fP \fR[\fP--noreview\fR]\fP \fR[\fP--fuzzy\fR]\fP \fR[\fP--nofuzzy\fR]\fP \fR[\fP--nonotes\fR]\fP \fR[\fP--autocorrect\fR]\fP  [...]
+.SH DESCRIPTION
+Snippet files are created whenever a test fails.  These can be examined,
+corrected and merged back into the originals using pomerge.
+
+See:
+http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/pofilter.html
+for examples and usage instructions and
+http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/pofilter_tests.html
+for full descriptions of all tests.
+.SH OPTIONS
+.PP
+.TP
+\-\-version
+show program's version number and exit
+.TP
+\-h/\-\-help
+show this help message and exit
+.TP
+\-\-manpage
+output a manpage based on the help
+.TP
+\-\-progress
+show progress as: dots, none, bar, names, verbose
+.TP
+\-\-errorlevel
+show errorlevel as: none, message, exception, traceback
+.TP
+\-i/\-\-input
+read from INPUT in po, pot, tmx, xlf, xliff formats
+.TP
+\-x/\-\-exclude
+exclude names matching EXCLUDE from input paths
+.TP
+\-o/\-\-output
+write to OUTPUT in po, pot, tmx, xlf, xliff formats
+.TP
+\-l/\-\-listfilters
+list filters available
+.TP
+\-\-review
+include units marked for review (default)
+.TP
+\-\-noreview
+exclude units marked for review
+.TP
+\-\-fuzzy
+include units marked fuzzy (default)
+.TP
+\-\-nofuzzy
+exclude units marked fuzzy
+.TP
+\-\-nonotes
+don't add notes about the errors
+.TP
+\-\-autocorrect
+output automatic corrections where possible rather than describing issues
+.TP
+\-\-language
+set target language code (e.g. af\-ZA) [required for spell check and recommended in general]
+.TP
+\-\-openoffice
+use the standard checks for OpenOffice translations
+.TP
+\-\-libreoffice
+use the standard checks for LibreOffice translations
+.TP
+\-\-mozilla
+use the standard checks for Mozilla translations
+.TP
+\-\-drupal
+use the standard checks for Drupal translations
+.TP
+\-\-gnome
+use the standard checks for Gnome translations
+.TP
+\-\-kde
+use the standard checks for KDE translations
+.TP
+\-\-wx
+use the standard checks for wxWidgets translations
+.TP
+\-\-excludefilter
+don't use FILTER when filtering
+.TP
+\-t/\-\-test
+only use test FILTERs specified with this option when filtering
+.TP
+\-\-notranslatefile
+read list of untranslatable words from FILE (must not be translated)
+.TP
+\-\-musttranslatefile
+read list of translatable words from FILE (must be translated)
+.TP
+\-\-validcharsfile
+read list of all valid characters from FILE (must be in UTF\-8)
diff --git a/tests/cli/data/test_prop2po/stderr.txt b/tests/cli/data/test_prop2po/stderr.txt
new file mode 100644
index 0000000..11118a0
--- /dev/null
+++ b/tests/cli/data/test_prop2po/stderr.txt
@@ -0,0 +1,3 @@
+Usage: prop2po [--version] [-h|--help] [--manpage] [--progress PROGRESS] [--errorlevel ERRORLEVEL] [-i|--input] INPUT [-x|--exclude EXCLUDE] [-o|--output] OUTPUT [-t|--template TEMPLATE] [-S|--timestamp] [-P|--pot]
+
+prop2po: error: You need to give an inputfile or use - for stdin ; use --help for full usage instructions
diff --git a/tests/cli/data/test_prop2po_dirs/stderr.txt b/tests/cli/data/test_prop2po_dirs/stderr.txt
new file mode 100644
index 0000000..0d3c24f
--- /dev/null
+++ b/tests/cli/data/test_prop2po_dirs/stderr.txt
@@ -0,0 +1,3 @@
+WARNING:prop2po:Output directory does not exist. Attempting to create
+processing 1 files...
+[###########################################] 100%
diff --git a/tools/mozilla/buildxpi.py b/tools/mozilla/buildxpi.py
index 6d7e3ab..0b5150f 100755
--- a/tools/mozilla/buildxpi.py
+++ b/tools/mozilla/buildxpi.py
@@ -46,10 +46,11 @@ overwritten and replaced.
 import logging
 import os
 import re
-from glob       import glob
-from shutil     import move, rmtree
-from subprocess import Popen, PIPE, CalledProcessError
-from tempfile   import mkdtemp
+from glob import glob
+from shutil import move, rmtree
+from subprocess import PIPE, CalledProcessError, Popen
+from tempfile import mkdtemp
+
 
 logger = logging.getLogger(__name__)
 
@@ -71,10 +72,11 @@ class RunProcessError(CalledProcessError):
         if message.count('%') != 2:
             output += message + '\n'
             message = self._default_message
-            
+
         output += message % (self.cmd, self.returncode)
         return output
 
+
 def run(cmd, expected_status=0, fail_msg=None, stdout=-1, stderr=-1):
     """Run a command
     """
@@ -119,22 +121,18 @@ def build_xpi(l10nbase, srcdir, outputdir, langs, product, delete_dest=False,
     # Create a temporary directory for building
     builddir = mkdtemp('', 'buildxpi')
 
-    # Per the original instructions, it should be possible to configure the
-    # Mozilla build so that it doesn't require compiler toolchains or
-    # development include/library files - however it is currently broken for
-    # Aurora 22-23; # see https://bugzilla.mozilla.org/show_bug.cgi?id=862770
-    # in case it has been fixed and you can put back:
-    #ac_add_options --disable-compile-environment
-
     try:
         # Create new .mozconfig
         content = """
+ac_add_options --disable-compile-environment
 ac_add_options --disable-gstreamer
 ac_add_options --disable-ogg
 ac_add_options --disable-opus
 ac_add_options --disable-webrtc
 ac_add_options --disable-wave
 ac_add_options --disable-webm
+ac_add_options --disable-alsa
+ac_add_options --disable-pulseaudio
 ac_add_options --disable-libjpeg-turbo
 mk_add_options MOZ_OBJDIR=%(builddir)s
 ac_add_options --with-l10n-base=%(l10nbase)s
@@ -148,7 +146,7 @@ ac_add_options --enable-application=%(product)s
 
         mozconf = open(MOZCONFIG, 'w').write(content)
 
-	# Try to make sure that "environment shell" is defined
+        # Try to make sure that "environment shell" is defined
         # (python/mach/mach/mixin/process.py)
         if not any (var in os.environ
                     for var in ('SHELL', 'MOZILLABUILD', 'COMSPEC')):
@@ -167,11 +165,11 @@ ac_add_options --enable-application=%(product)s
         run(['make', '-C', 'config'],
             fail_msg="Unable to successfully configure build for XPI!")
 
-	moz_app_version=[]
-	if soft_max_version:
-	    version = open(os.path.join(srcdir, product, 'config', 'version.txt')).read().strip()
-	    version = re.sub(r'(^[0-9]*\.[0-9]*).*', r'\1.*', version)
-	    moz_app_version = ['MOZ_APP_MAXVERSION=%s' % version]
+        moz_app_version = []
+        if soft_max_version:
+            version = open(os.path.join(srcdir, product, 'config', 'version.txt')).read().strip()
+            version = re.sub(r'(^[0-9]*\.[0-9]*).*', r'\1.*', version)
+            moz_app_version = ['MOZ_APP_MAXVERSION=%s' % version]
         run(['make', '-C', os.path.join(product, 'locales')] +
             ['langpack-%s' % lang for lang in langs] + moz_app_version,
             fail_msg="Unable to successfully build XPI!")
@@ -208,35 +206,39 @@ ac_add_options --enable-application=%(product)s
 
 
 def create_option_parser():
-    from optparse import OptionParser
+    from argparse import ArgumentParser
     usage = 'Usage: buildxpi.py [<options>] <lang> [<lang2> ...]'
-    p = OptionParser(usage=usage)
+    p = ArgumentParser(usage=usage)
 
-    p.add_option(
+    p.add_argument(
         '-L', '--l10n-base',
+        type=str,
         dest='l10nbase',
         default='l10n',
         help='The directory containing the <lang> subdirectory.'
     )
-    p.add_option(
+    p.add_argument(
         '-o', '--output-dir',
+        type=str,
         dest='outputdir',
         default='.',
         help='The directory to copy the built XPI to (default: current directory).'
     )
-    p.add_option(
+    p.add_argument(
         '-p', '--mozproduct',
+        type=str,
         dest='mozproduct',
         default='browser',
         help='The Mozilla product name (default: "browser").'
     )
-    p.add_option(
+    p.add_argument(
         '-s', '--src',
+        type=str,
         dest='srcdir',
         default='mozilla',
         help='The directory containing the Mozilla l10n sources.'
     )
-    p.add_option(
+    p.add_argument(
         '-d', '--delete-dest',
         dest='delete_dest',
         action='store_true',
@@ -244,7 +246,7 @@ def create_option_parser():
         help='Delete output XPI if it already exists.'
     )
 
-    p.add_option(
+    p.add_argument(
         '-v', '--verbose',
         dest='verbose',
         action='store_true',
@@ -252,33 +254,34 @@ def create_option_parser():
         help='Be more noisy'
     )
 
-    p.add_option(
-        '', '--soft-max-version',
+    p.add_argument(
+        '--soft-max-version',
         dest='soft_max_version',
         action='store_true',
         default=False,
-	help='Override a fixed max version with one to cover the whole cycle '
-	     'e.g. 24.0a1 becomes 24.0.*'
+        help='Override a fixed max version with one to cover the whole cycle '
+             'e.g. 24.0a1 becomes 24.0.*'
+    )
+
+    p.add_argument(
+        "langs",
+        nargs="+"
     )
 
     return p
 
 if __name__ == '__main__':
-    options, args = create_option_parser().parse_args()
-
-    if len(args) < 1:
-        from argparse import ArgumentError
-        raise ArgumentError(None, 'You need to specify at least a language!')
+    args = create_option_parser().parse_args()
 
-    if options.verbose:
+    if args.verbose:
         logging.basicConfig(level=logging.DEBUG)
 
     build_xpi(
-        l10nbase=os.path.abspath(options.l10nbase),
-        srcdir=os.path.abspath(options.srcdir),
-        outputdir=os.path.abspath(options.outputdir),
-        langs=args,
-        product=options.mozproduct,
-        delete_dest=options.delete_dest,
-        soft_max_version=options.soft_max_version
+        l10nbase=os.path.abspath(args.l10nbase),
+        srcdir=os.path.abspath(args.srcdir),
+        outputdir=os.path.abspath(args.outputdir),
+        langs=args.langs,
+        product=args.mozproduct,
+        delete_dest=args.delete_dest,
+        soft_max_version=args.soft_max_version
     )
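buildxpi.py's option parsing moves from optparse to argparse. Because the languages become a positional argument with `nargs='+'`, argparse itself rejects an empty language list, so the manual `len(args) < 1` check is dropped. A condensed sketch of the pattern (only two of the script's options shown):

```python
from argparse import ArgumentParser

p = ArgumentParser(usage='Usage: buildxpi.py [<options>] <lang> [<lang2> ...]')
p.add_argument('-v', '--verbose',
               dest='verbose',
               action='store_true',
               default=False,
               help='Be more noisy')
# nargs='+' demands at least one <lang>; argparse errors out otherwise
p.add_argument('langs', nargs='+')

# optparse returned (options, args); argparse folds everything into
# a single namespace, so positionals are reached as args.langs
args = p.parse_args(['-v', 'af', 'fr'])
print(args.verbose, args.langs)  # → True ['af', 'fr']
```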
diff --git a/tools/mozilla/get_moz_enUS.py b/tools/mozilla/get_moz_enUS.py
index ec30cb3..7e086f6 100755
--- a/tools/mozilla/get_moz_enUS.py
+++ b/tools/mozilla/get_moz_enUS.py
@@ -54,18 +54,18 @@ def process_l10n_ini(inifile):
         topath = os.path.join(l10ncheckout, 'en-US', dir)
         if not os.path.exists(frompath):
             if verbose:
-                print "[Missing source]: %s" % frompath
+                print("[Missing source]: %s" % frompath)
             continue
         if os.path.exists(topath):
             if verbose:
-                print "[Existing target]: %s" % topath
+                print("[Existing target]: %s" % topath)
             continue
         if verbose:
-            print '%s -> %s' % (frompath, topath)
+            print('%s -> %s' % (frompath, topath))
         try:
             shutil.copytree(frompath, topath)
         except OSError as e:
-            print e
+            print(e)
 
     try:
         for include in l10n.options('includes'):
@@ -82,28 +82,31 @@ def process_l10n_ini(inifile):
 
 
 def create_option_parser():
-    from optparse import OptionParser
-    p = OptionParser()
+    from argparse import ArgumentParser
+    p = ArgumentParser()
 
-    p.add_option(
+    p.add_argument(
         '-s', '--src',
+        type=str,
         dest='srcdir',
         default='mozilla',
         help='The directory containing the Mozilla l10n sources.'
     )
-    p.add_option(
+    p.add_argument(
         '-d', '--dest',
+        type=str,
         dest='destdir',
         default='l10n',
         help='The destination directory to copy the en-US locale files to.'
     )
-    p.add_option(
+    p.add_argument(
         '-p', '--mozproduct',
+        type=str,
         dest='mozproduct',
         default='browser',
         help='The Mozilla product name.'
     )
-    p.add_option(
+    p.add_argument(
         '--delete-dest',
         dest='deletedest',
         default=False,
@@ -111,7 +114,7 @@ def create_option_parser():
         help='Delete the destination directory (if it exists).'
     )
 
-    p.add_option(
+    p.add_argument(
         '-v', '--verbose',
         dest='verbose',
         action='store_true',
@@ -122,22 +125,21 @@ def create_option_parser():
     return p
 
 if __name__ == '__main__':
-    options, args = create_option_parser().parse_args()
-    srccheckout = options.srcdir
-    l10ncheckout = options.destdir
-    product = options.mozproduct
-    verbose = options.verbose
+    args = create_option_parser().parse_args()
+    srccheckout = args.srcdir
+    l10ncheckout = args.destdir
+    product = args.mozproduct
 
     enUS_dir = os.path.join(l10ncheckout, 'en-US')
-    if options.deletedest and os.path.exists(enUS_dir):
+    if args.deletedest and os.path.exists(enUS_dir):
         shutil.rmtree(enUS_dir)
     if not os.path.exists(enUS_dir):
         os.makedirs(enUS_dir)
 
-    if verbose:
-        print "%s -s %s -d %s -p %s -v %s" % \
+    if args.verbose:
+        print("%s -s %s -d %s -p %s -v %s" %
               (__file__, srccheckout, l10ncheckout, product,
-               options.deletedest and '--delete-dest' or '')
+               args.deletedest and '--delete-dest' or ''))
     product_ini = os.path.join(srccheckout, product, 'locales', 'l10n.ini')
     if not os.path.isfile(product_ini):
         # Done for Fennec
diff --git a/translate/__version__.py b/translate/__version__.py
index 4583916..7edd807 100644
--- a/translate/__version__.py
+++ b/translate/__version__.py
@@ -20,16 +20,16 @@
 
 """This file contains the version of the Translate Toolkit."""
 
-build = 12017
-"""The build number is used by external used of the Translate Toolkit to
+build = 12021
+"""The build number is used by external users of the Translate Toolkit to
 trigger refreshes.  Thus increase the build number whenever changes are made to
 code touching stats or quality checks.  An increased build number will force a
toolkit user, like Pootle, to regenerate its stored stats and check
 results."""
 
-sver = "1.11.0"
+sver = "1.12.0"
 """Human readable version number. Used for version number display."""
 
-ver = (1, 11, 0)
+ver = (1, 12, 0)
 """Machine readable version number. Used by tools that need to adjust code
 paths based on a Translate Toolkit release number."""
diff --git a/translate/convert/accesskey.py b/translate/convert/accesskey.py
index c3ef11f..82abfa9 100644
--- a/translate/convert/accesskey.py
+++ b/translate/convert/accesskey.py
@@ -21,6 +21,7 @@
 
 from translate.storage.placeables.general import XMLEntityPlaceable
 
+
 DEFAULT_ACCESSKEY_MARKER = u"&"
 
 
@@ -50,7 +51,6 @@ class UnitMixer(object):
                     # ".accesskey")
         return mixedentities
 
-
     def mix_units(self, label_unit, accesskey_unit, target_unit):
         """Mix the given units into the given target_unit if possible.
 
@@ -87,7 +87,7 @@ class UnitMixer(object):
             if entity.endswith(labelsuffix):
                 entitybase = entity[:entity.rfind(labelsuffix)]
                 for akeytype in self.accesskeysuffixes:
-                    if (entitybase + akeytype) in store.index:
+                    if (entitybase + akeytype) in store.id_index:
                         labelentity = entity
                         accesskeyentity = labelentity[:labelentity.rfind(labelsuffix)] + akeytype
                         break
@@ -97,7 +97,7 @@ class UnitMixer(object):
                     accesskeyentity = entity
                     for labelsuffix in self.labelsuffixes:
                         labelentity = accesskeyentity[:accesskeyentity.rfind(akeytype)] + labelsuffix
-                        if labelentity in store.index:
+                        if labelentity in store.id_index:
                             break
                     else:
                         labelentity = None
@@ -134,8 +134,8 @@ def extract(string, accesskey_marker=DEFAULT_ACCESSKEY_MARKER):
                 XMLEntityPlaceable.regex.match(string[marker_pos-1:])):
                 continue
             label = string[:marker_pos-1] + string[marker_pos:]
-            accesskey = string[marker_pos]
-            break
+            if string[marker_pos] != " ":  # FIXME weak filtering
+                accesskey = string[marker_pos]
     return label, accesskey
 
 
@@ -146,6 +146,9 @@ def combine(label, accesskey,
     We place an accesskey marker before the accesskey in the label and this
     creates a string with the two combined e.g. "File" + "F" = "&File"
 
+    The case of the accesskey is preferred unless no match is found, in which
+    case the alternate case is used.
+
     :type label: unicode
     :param label: a label
     :type accesskey: unicode char
@@ -155,39 +158,43 @@ def combine(label, accesskey,
     """
     assert isinstance(label, unicode)
     assert isinstance(accesskey, unicode)
+
     if len(accesskey) == 0:
         return None
+
     searchpos = 0
     accesskeypos = -1
     in_entity = False
     accesskeyaltcasepos = -1
+
+    if accesskey.isupper():
+        accesskey_alt_case = accesskey.lower()
+    else:
+        accesskey_alt_case = accesskey.upper()
+
     while (accesskeypos < 0) and searchpos < len(label):
         searchchar = label[searchpos]
         if searchchar == '&':
             in_entity = True
-        elif searchchar == ';':
+        elif searchchar == ';' or searchchar == " ":
             in_entity = False
-        else:
-            if not in_entity:
-                if searchchar == accesskey.upper():
-                    # always prefer uppercase
-                    accesskeypos = searchpos
-                if searchchar == accesskey.lower():
-                    # take lower case otherwise...
-                    if accesskeyaltcasepos == -1:
-                        # only want to remember first altcasepos
-                        accesskeyaltcasepos = searchpos
-                        # note: we keep on looping through in hope
-                        # of exact match
+        if not in_entity:
+            if searchchar == accesskey:  # Prefer supplied case
+                accesskeypos = searchpos
+            elif searchchar == accesskey_alt_case:  # Other case otherwise
+                if accesskeyaltcasepos == -1:
+                    # only want to remember first altcasepos
+                    accesskeyaltcasepos = searchpos
+                    # note: we keep on looping through in hope
+                    # of exact match
         searchpos += 1
+
     # if we didn't find an exact case match, use an alternate one if available
     if accesskeypos == -1:
         accesskeypos = accesskeyaltcasepos
+
     # now we want to handle whatever we found...
     if accesskeypos >= 0:
-        string = label[:accesskeypos] + accesskey_marker + label[accesskeypos:]
-        string = string.encode("UTF-8", "replace")
-        return string
-    else:
-        # can't currently mix accesskey if it's not in label
-        return None
+        return label[:accesskeypos] + accesskey_marker + label[accesskeypos:]
+    # can't currently mix accesskey if it's not in label
+    return None
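The rewritten `combine()` now prefers the accesskey's supplied case, remembers the first alternate-case position as a fallback, and treats a space (like `;`) as ending an entity. A condensed sketch of that matching logic (not a drop-in replacement: the real function also asserts unicode inputs and no longer UTF-8-encodes the result):

```python
def combine(label, accesskey, marker=u"&"):
    """Insert marker before accesskey in label, preferring exact case."""
    if not accesskey:
        return None
    alt = accesskey.lower() if accesskey.isupper() else accesskey.upper()
    pos = altpos = -1
    in_entity = False
    for i, ch in enumerate(label):
        if ch == '&':
            in_entity = True
        elif ch == ';' or ch == ' ':
            in_entity = False
        if not in_entity:
            if ch == accesskey:      # exact case wins immediately
                pos = i
                break
            if ch == alt and altpos == -1:
                altpos = i           # remember first alternate-case hit
    if pos == -1:
        pos = altpos
    if pos >= 0:
        return label[:pos] + marker + label[pos:]
    return None  # accesskey not present in label


print(combine(u"File", u"F"))  # exact-case match
print(combine(u"file", u"F"))  # falls back to alternate case
```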
diff --git a/translate/convert/convert.py b/translate/convert/convert.py
index fd87c46..b9e6f45 100644
--- a/translate/convert/convert.py
+++ b/translate/convert/convert.py
@@ -22,13 +22,12 @@
 :mod:`translate.convert` tools)."""
 
 import os.path
-try:
-    from cStringIO import StringIO
-except ImportError:
-    from StringIO import StringIO
+from cStringIO import StringIO
 
 from translate.misc import optrecurse
-# don't import optparse ourselves, get the version from optrecurse
+
+
+# Don't import optparse ourselves, get the version from optrecurse.
 optparse = optrecurse.optparse
 
 
@@ -74,7 +73,7 @@ class ConvertOptionParser(optrecurse.RecursiveOptionParser, object):
         self.add_option(
             "", "--duplicates", dest="duplicatestyle", default=default,
             type="choice", choices=["msgctxt", "merge"],
-            help="what to do with duplicate strings (identical source text): merge, msgctxt (default: '%s')" % \
+            help="what to do with duplicate strings (identical source text): merge, msgctxt (default: '%s')" %
                  default,
             metavar="DUPLICATESTYLE"
         )
@@ -165,7 +164,7 @@ class ConvertOptionParser(optrecurse.RecursiveOptionParser, object):
         options.outputoptions = self.filteroutputoptions(options)
         try:
             self.verifyoptions(options)
-        except Exception, e:
+        except Exception as e:
             self.error(str(e))
         self.recursiveprocess(options)
 
@@ -179,6 +178,7 @@ class ConvertOptionParser(optrecurse.RecursiveOptionParser, object):
                                       fullinputpath, fulloutputpath,
                                       fulltemplatepath)
 
+
 def copyinput(inputfile, outputfile, templatefile, **kwargs):
     """Copies the input file to the output file."""
     outputfile.write(inputfile.read())
@@ -389,7 +389,8 @@ class ArchiveConvertOptionParser(ConvertOptionParser):
         if self.isarchive(options.output, 'output'):
             outputstream = options.outputarchive.openoutputfile(fulloutputpath)
             if outputstream is None:
-                self.warning("Could not find where to put %s in output archive; writing to tmp" % fulloutputpath)
+                self.warning("Could not find where to put %s in output "
+                             "archive; writing to tmp" % fulloutputpath)
                 return StringIO()
             return outputstream
         else:
@@ -452,6 +453,7 @@ class ArchiveConvertOptionParser(ConvertOptionParser):
                                           fullinputpath, fulloutputpath,
                                           fulltemplatepath)
 
+
 def _output_is_newer(input_path, output_path):
     """Check if input_path was not modified since output_path was generated,
     used to avoid needless regeneration of output.
@@ -467,6 +469,7 @@ def _output_is_newer(input_path, output_path):
 
     return output_mtime > input_mtime
 
+
 def should_output_store(store, threshold):
     """Check if the percent of translated source words more than or equal to
     the given threshold.
@@ -487,6 +490,7 @@ def should_output_store(store, threshold):
 
     return percent >= threshold
 
+
 def main(argv=None):
     parser = ArchiveConvertOptionParser({}, description=__doc__)
     parser.run(argv)
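The `except Exception, e` to `except Exception as e` change in `convert.py` above swaps the Python-2-only comma syntax for the `as` form, which Python 2.6+ accepts and Python 3 requires. A minimal illustration of the modern form (the function is a toy, not from the toolkit):

```python
def parse_int(text):
    # `except ... as e` binds the exception on Python 2.6+ and Python 3;
    # the older `except ValueError, e` form is a SyntaxError on Python 3.
    try:
        return int(text)
    except ValueError as e:
        return "error: %s" % e
```

This kind of mechanical syntax update is a common first step when preparing a Python 2 codebase for a Python 3 port.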
diff --git a/translate/convert/csv2po.py b/translate/convert/csv2po.py
index 0a0e15a..6ca2598 100644
--- a/translate/convert/csv2po.py
+++ b/translate/convert/csv2po.py
@@ -24,14 +24,14 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
-from translate.storage import po
-from translate.storage import csvl10n
+from translate.storage import csvl10n, po
+
 
 logger = logging.getLogger(__name__)
 
+
 def replacestrings(source, *pairs):
     """Use ``pairs`` of ``(original, replacement)`` to replace text found in
     ``source``.
@@ -159,7 +159,7 @@ class csv2po:
             elif simplify(csvunit.source) == simplify(pluralid):
                 pounit.msgstr[1] = csvunit.target
             else:
-                logger.warning("couldn't work out singular/plural: %r, %r, %r", 
+                logger.warning("couldn't work out singular/plural: %r, %r, %r",
                                csvunit.source, singularid, pluralid)
                 self.unmatched += 1
                 return
@@ -235,11 +235,9 @@ def main(argv=None):
                                          usepots=True,
                                          description=__doc__)
     parser.add_option("", "--charset", dest="charset", default=None,
-        help="set charset to decode from csv files", metavar="CHARSET"
-    )
+        help="set charset to decode from csv files", metavar="CHARSET")
     parser.add_option("", "--columnorder", dest="columnorder", default=None,
-        help="specify the order and position of columns (location,source,target)"
-    )
+        help="specify the order and position of columns (location,source,target)")
     parser.add_duplicates_option()
     parser.passthrough.append("charset")
     parser.passthrough.append("columnorder")
diff --git a/translate/convert/csv2tbx.py b/translate/convert/csv2tbx.py
index cfee018..7e11f7b 100644
--- a/translate/convert/csv2tbx.py
+++ b/translate/convert/csv2tbx.py
@@ -25,8 +25,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions
 """
 
-from translate.storage import tbx
-from translate.storage import csvl10n
+from translate.storage import csvl10n, tbx
 
 
 class csv2tbx:
@@ -80,11 +79,9 @@ def main():
     parser = convert.ConvertOptionParser(formats, usetemplates=False,
                                          description=__doc__)
     parser.add_option("", "--charset", dest="charset", default=None,
-        help="set charset to decode from csv files", metavar="CHARSET"
-    )
+        help="set charset to decode from csv files", metavar="CHARSET")
     parser.add_option("", "--columnorder", dest="columnorder", default=None,
-        help="specify the order and position of columns (comment,source,target)"
-    )
+        help="specify the order and position of columns (comment,source,target)")
     parser.passthrough.append("charset")
     parser.passthrough.append("columnorder")
     parser.run()
diff --git a/translate/convert/dtd2po.py b/translate/convert/dtd2po.py
index 47cfb10..0fbac88 100644
--- a/translate/convert/dtd2po.py
+++ b/translate/convert/dtd2po.py
@@ -26,10 +26,9 @@ dtd2po convertor class which is in this module
 You can convert back to .dtd using po2dtd.py.
 """
 
-from translate.storage import po
-from translate.storage import dtd
-from translate.misc import quote
 from translate.convert.accesskey import UnitMixer
+from translate.misc import quote
+from translate.storage import dtd, po
 
 
 def is_css_entity(entity):
@@ -92,8 +91,8 @@ class dtd2po:
         # quotes have been escaped already by escapeforpo, so just add the
         # start and end quotes
         if len(lines) > 1:
-            po_unit.source = "\n".join([lines[0].rstrip() + ' '] + \
-                    [line.strip() + ' ' for line in lines[1:-1]] + \
+            po_unit.source = "\n".join([lines[0].rstrip() + ' '] +
+                    [line.strip() + ' ' for line in lines[1:-1]] +
                     [lines[-1].lstrip()])
         elif lines:
             po_unit.source = lines[0]
@@ -184,8 +183,8 @@ class dtd2po:
 
         #assert alreadymixed is None
         labelentity, accesskeyentity = self.mixer.find_mixed_pair(self.mixedentities, store, unit)
-        labeldtd = store.index.get(labelentity, None)
-        accesskeydtd = store.index.get(accesskeyentity, None)
+        labeldtd = store.id_index.get(labelentity, None)
+        accesskeydtd = store.id_index.get(accesskeyentity, None)
         po_unit = self.convertmixedunit(labeldtd, accesskeydtd)
         if po_unit is not None:
             if accesskeyentity is not None:
@@ -213,7 +212,7 @@ class dtd2po:
                              "developer")
 
         dtd_store.makeindex()
-        self.mixedentities = self.mixer.match_entities(dtd_store.index)
+        self.mixedentities = self.mixer.match_entities(dtd_store.id_index)
         # go through the dtd and convert each unit
         for dtd_unit in dtd_store.units:
             if not dtd_unit.istranslatable():
@@ -230,16 +229,16 @@ class dtd2po:
                 x_accelerator_marker="&",
                 x_merge_on="location",
         )
-        targetheader.addnote("extracted from %s, %s" % \
+        targetheader.addnote("extracted from %s, %s" %
                              (origdtdfile.filename,
                               translateddtdfile.filename),
                              "developer")
 
         origdtdfile.makeindex()
         #TODO: self.mixedentities is overwritten below, so this is useless:
-        self.mixedentities = self.mixer.match_entities(origdtdfile.index)
+        self.mixedentities = self.mixer.match_entities(origdtdfile.id_index)
         translateddtdfile.makeindex()
-        self.mixedentities = self.mixer.match_entities(translateddtdfile.index)
+        self.mixedentities = self.mixer.match_entities(translateddtdfile.id_index)
         # go through the dtd files and convert each unit
         for origdtd in origdtdfile.units:
             if not origdtd.istranslatable():
@@ -266,8 +265,8 @@ class dtd2po:
                 # this means its a mixed entity (with accesskey) that's
                 # already been dealt with)
                 continue
-            if orig_entity in translateddtdfile.index:
-                translateddtd = translateddtdfile.index[orig_entity]
+            if orig_entity in translateddtdfile.id_index:
+                translateddtd = translateddtdfile.id_index[orig_entity]
                 translatedpo = self.convertdtdunit(translateddtdfile,
                                                    translateddtd,
                                                    mixbucket=mixbucket)
diff --git a/translate/convert/factory.py b/translate/convert/factory.py
index 4749ee0..f1ae7d4 100644
--- a/translate/convert/factory.py
+++ b/translate/convert/factory.py
@@ -22,6 +22,7 @@
 
 import os
 
+
 #from translate.convert import prop2po, po2prop, odf2xliff, xliff2odf
 
 
diff --git a/translate/convert/html2po.py b/translate/convert/html2po.py
index 2b4f69a..ed704de 100644
--- a/translate/convert/html2po.py
+++ b/translate/convert/html2po.py
@@ -25,8 +25,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import po
-from translate.storage import html
+from translate.storage import html, po
 
 
 class html2po:
@@ -66,11 +65,12 @@ def main(argv=None):
     from translate.misc import stdiotell
     import sys
     sys.stdout = stdiotell.StdIOWrapper(sys.stdout)
-    formats = {"html": ("po", converthtml),
-               "htm": ("po", converthtml),
-               "xhtml": ("po", converthtml),
-               None: ("po", converthtml),
-              }
+    formats = {
+        "html": ("po", converthtml),
+        "htm": ("po", converthtml),
+        "xhtml": ("po", converthtml),
+        None: ("po", converthtml),
+    }
     parser = convert.ConvertOptionParser(formats, usepots=True,
                                          description=__doc__)
     parser.add_option("-u", "--untagged", dest="includeuntagged",
diff --git a/translate/convert/ical2po.py b/translate/convert/ical2po.py
index 7a674f4..0c2e648 100644
--- a/translate/convert/ical2po.py
+++ b/translate/convert/ical2po.py
@@ -24,14 +24,14 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
-from translate.storage import po
-from translate.storage import ical
+from translate.storage import ical, po
+
 
 logger = logging.getLogger(__name__)
 
+
 class ical2po:
     """convert a iCal file to a .po file for handling the translation..."""
 
diff --git a/translate/convert/ini2po b/translate/convert/ini2po
index 0580f27..008983f 100755
--- a/translate/convert/ini2po
+++ b/translate/convert/ini2po
@@ -17,10 +17,11 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-"""simple script to convert a .ini file to a gettext .po localization file"""
+"""Simple script to convert a .ini file to a gettext .po localization file."""
 
 from translate.convert import ini2po
 
+
 if __name__ == '__main__':
     ini2po.main()
 
diff --git a/translate/convert/ini2po.py b/translate/convert/ini2po.py
index aa039c2..df1407e 100644
--- a/translate/convert/ini2po.py
+++ b/translate/convert/ini2po.py
@@ -24,21 +24,23 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
 from translate.storage import po
 
+
 logger = logging.getLogger(__name__)
 
+
 class ini2po:
-    """convert a .ini file to a .po file for handling the translation..."""
+    """Convert a .ini file to a .po file for handling the translation..."""
 
     def convert_store(self, input_store, duplicatestyle="msgctxt"):
-        """converts a .ini file to a .po file..."""
+        """Convert a .ini file to a .po file..."""
         output_store = po.pofile()
         output_header = output_store.header()
-        output_header.addnote("extracted from %s" % input_store.filename, "developer")
+        output_header.addnote("extracted from %s" % input_store.filename,
+                              "developer")
 
         for input_unit in input_store.units:
             output_unit = self.convert_unit(input_unit, "developer")
@@ -47,23 +49,26 @@ class ini2po:
         output_store.removeduplicates(duplicatestyle)
         return output_store
 
-    def merge_store(self, template_store, input_store, blankmsgstr=False, duplicatestyle="msgctxt"):
-        """converts two .ini files to a .po file..."""
+    def merge_store(self, template_store, input_store, blankmsgstr=False,
+                    duplicatestyle="msgctxt"):
+        """Convert two .ini files to a .po file..."""
         output_store = po.pofile()
         output_header = output_store.header()
-        output_header.addnote("extracted from %s, %s" % (template_store.filename, input_store.filename), "developer")
+        note = "extracted from %s, %s" % (template_store.filename,
+                                          input_store.filename)
+        output_header.addnote(note, "developer")
 
         input_store.makeindex()
         for template_unit in template_store.units:
             origpo = self.convert_unit(template_unit, "developer")
-            # try and find a translation of the same name...
+            # Try and find a translation of the same name...
             template_unit_name = "".join(template_unit.getlocations())
             if template_unit_name in input_store.locationindex:
                 translatedini = input_store.locationindex[template_unit_name]
                 translatedpo = self.convert_unit(translatedini, "translator")
             else:
                 translatedpo = None
-            # if we have a valid po unit, get the translation and add it...
+            # If we have a valid po unit, get the translation and add it...
             if origpo is not None:
                 if translatedpo is not None and not blankmsgstr:
                     origpo.target = translatedpo.source
@@ -75,11 +80,12 @@ class ini2po:
         return output_store
 
     def convert_unit(self, input_unit, commenttype):
-        """Converts a .ini unit to a .po unit. Returns None if empty
-        or not for translation."""
+        """Convert a .ini unit to a .po unit. Returns None if empty or not for
+        translation.
+        """
         if input_unit is None:
             return None
-        # escape unicode
+        # Escape unicode.
         output_unit = po.pounit(encoding="UTF-8")
         output_unit.addlocation("".join(input_unit.getlocations()))
         output_unit.source = input_unit.source
@@ -87,35 +93,43 @@ class ini2po:
         return output_unit
 
 
-def convertini(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt", dialect="default"):
-    """Reads in *input_file* using ini, converts using :class:`ini2po`,
-    writes to *output_file*."""
+def convertini(input_file, output_file, template_file, pot=False,
+               duplicatestyle="msgctxt", dialect="default"):
+    """Read in *input_file* using ini, converts using :class:`ini2po`, writes
+    to *output_file*.
+    """
     from translate.storage import ini
     input_store = ini.inifile(input_file, dialect=dialect)
     convertor = ini2po()
     if template_file is None:
-        output_store = convertor.convert_store(input_store, duplicatestyle=duplicatestyle)
+        output_store = convertor.convert_store(input_store,
+                                               duplicatestyle=duplicatestyle)
     else:
         template_store = ini.inifile(template_file, dialect=dialect)
-        output_store = convertor.merge_store(template_store, input_store, blankmsgstr=pot, duplicatestyle=duplicatestyle)
+        output_store = convertor.merge_store(template_store, input_store,
+                                             blankmsgstr=pot,
+                                             duplicatestyle=duplicatestyle)
     if output_store.isempty():
         return 0
     output_file.write(str(output_store))
     return 1
 
 
-def convertisl(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt", dialect="inno"):
-    return convertini(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt", dialect=dialect)
+def convertisl(input_file, output_file, template_file, pot=False,
+               duplicatestyle="msgctxt", dialect="inno"):
+    return convertini(input_file, output_file, template_file, pot=False,
+                      duplicatestyle="msgctxt", dialect=dialect)
 
 
 def main(argv=None):
     from translate.convert import convert
     formats = {
-               "ini": ("po", convertini), ("ini", "ini"): ("po", convertini),
-               "isl": ("po", convertisl), ("isl", "isl"): ("po", convertisl),
-               "iss": ("po", convertisl), ("iss", "iss"): ("po", convertisl),
-              }
-    parser = convert.ConvertOptionParser(formats, usetemplates=True, usepots=True, description=__doc__)
+        "ini": ("po", convertini), ("ini", "ini"): ("po", convertini),
+        "isl": ("po", convertisl), ("isl", "isl"): ("po", convertisl),
+        "iss": ("po", convertisl), ("iss", "iss"): ("po", convertisl),
+    }
+    parser = convert.ConvertOptionParser(formats, usetemplates=True,
+                                         usepots=True, description=__doc__)
     parser.add_duplicates_option()
     parser.passthrough.append("pot")
     parser.run(argv)
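The reformatted `formats` mapping in `ini2po.main()` above routes each input extension (optionally paired with a template extension) to an output extension and a converter callable. A toy version of that dispatch pattern, with hypothetical converter names standing in for `convertini`/`convertisl`:

```python
def convert_ini(path):
    return "po from %s" % path

def convert_isl(path):
    return "po from %s" % path

# extension -> (output extension, converter), mirroring the shape of
# the formats dict that ConvertOptionParser consumes
formats = {
    "ini": ("po", convert_ini),
    "isl": ("po", convert_isl),
}

def dispatch(path):
    """Look up the converter for a file's extension and apply it."""
    stem, ext = path.rsplit(".", 1)
    out_ext, converter = formats[ext]
    return "%s.%s" % (stem, out_ext), converter(path)
```

Keeping the routing in a plain dict lets `moz2po`-style wrappers extend it in a loop, as the `converters` list later in this diff does.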
diff --git a/translate/convert/json2po.py b/translate/convert/json2po.py
index d300d2c..0a91ab9 100644
--- a/translate/convert/json2po.py
+++ b/translate/convert/json2po.py
@@ -24,13 +24,14 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
 from translate.storage import po
 
+
 logger = logging.getLogger(__name__)
 
+
 class json2po:
     """Convert a JSON file to a PO file"""
 
diff --git a/translate/convert/moz2po.py b/translate/convert/moz2po.py
index 456b9d4..fda5b1c 100644
--- a/translate/convert/moz2po.py
+++ b/translate/convert/moz2po.py
@@ -24,26 +24,25 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.convert import dtd2po
-from translate.convert import prop2po
-from translate.convert import mozfunny2prop
-from translate.convert import mozlang2po
-from translate.convert import convert
+from translate.convert import (convert, dtd2po, mozfunny2prop, mozlang2po,
+                               prop2po)
 
 
 def main(argv=None):
-    formats = {(None, "*"): ("*", convert.copytemplate),
-               ("*", "*"): ("*", convert.copyinput),
-               "*": ("*", convert.copyinput),
-              }
+    formats = {
+        (None, "*"): ("*", convert.copytemplate),
+        ("*", "*"): ("*", convert.copyinput),
+        "*": ("*", convert.copyinput),
+    }
     # handle formats that convert to .po files
-    converters = [("dtd", dtd2po.convertdtd),
-                  ("properties", prop2po.convertmozillaprop),
-                  ("it", mozfunny2prop.it2po),
-                  ("ini", mozfunny2prop.ini2po),
-                  ("inc", mozfunny2prop.inc2po),
-                  ("lang", mozlang2po.convertlang),
-                 ]
+    converters = [
+        ("dtd", dtd2po.convertdtd),
+        ("properties", prop2po.convertmozillaprop),
+        ("it", mozfunny2prop.it2po),
+        ("ini", mozfunny2prop.ini2po),
+        ("inc", mozfunny2prop.inc2po),
+        ("lang", mozlang2po.convertlang),
+    ]
     for format, converter in converters:
         formats[(format, format)] = (format + ".po", converter)
         formats[format] = (format + ".po", converter)
diff --git a/translate/convert/mozlang2po.py b/translate/convert/mozlang2po.py
index 273864e..aa3af46 100644
--- a/translate/convert/mozlang2po.py
+++ b/translate/convert/mozlang2po.py
@@ -24,8 +24,7 @@
 """Convert Mozilla .lang files to Gettext PO localization files.
 """
 
-from translate.storage import mozilla_lang as lang
-from translate.storage import po
+from translate.storage import mozilla_lang as lang, po
 
 
 class lang2po:
diff --git a/translate/convert/odf2xliff.py b/translate/convert/odf2xliff.py
index d3426c2..08ac475 100644
--- a/translate/convert/odf2xliff.py
+++ b/translate/convert/odf2xliff.py
@@ -25,10 +25,10 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import factory
-from translate.misc.contextlib import contextmanager
-from translate.misc.context import with_
-from translate.storage import odf_io
+from contextlib import contextmanager
+from cStringIO import StringIO
+
+from translate.storage import factory, odf_io
 
 
 def convertodf(inputfile, outputfile, templates, engine='toolkit'):
@@ -37,9 +37,6 @@ def convertodf(inputfile, outputfile, templates, engine='toolkit'):
     """
 
     def translate_toolkit_implementation(store):
-        import cStringIO
-        import zipfile
-
         from translate.storage.xml_extract import extract
         from translate.storage import odf_shared
 
@@ -47,7 +44,7 @@ def convertodf(inputfile, outputfile, templates, engine='toolkit'):
         for data in contents.values():
             parse_state = extract.ParseState(odf_shared.no_translate_content_elements,
                                              odf_shared.inline_elements)
-            extract.build_store(cStringIO.StringIO(data), store, parse_state)
+            extract.build_store(StringIO(data), store, parse_state)
 
     def itools_implementation(store):
         from itools.handlers import get_handler
@@ -74,22 +71,22 @@ def convertodf(inputfile, outputfile, templates, engine='toolkit'):
         try:
             store.setfilename(store.getfilenode('NoName'), inputfile.name)
         except:
-            print "couldn't set origin filename"
+            print("couldn't set origin filename")
         yield store
         store.save()
 
-    def with_block(store):
-        if engine == "toolkit":
-            translate_toolkit_implementation(store)
-        else:
-            itools_implementation(store)
-
     # Since the convertoptionsparser will give us an open file, we risk that
     # it could have been opened in non-binary mode on Windows, and then we'll
     # have problems, so let's make sure we have what we want.
     inputfile.close()
     inputfile = file(inputfile.name, mode='rb')
-    with_(store_context(), with_block)
+
+    with store_context() as store:
+        if engine == "toolkit":
+            translate_toolkit_implementation(store)
+        else:
+            itools_implementation(store)
+
     return True
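The `odf2xliff.py` change above retires the toolkit's backported `with_(context, callback)` helper in favour of the standard library's `contextlib.contextmanager` and a native `with` block. The pattern in miniature, with a dict standing in for the real store object and a list standing in for its side effects:

```python
from contextlib import contextmanager

@contextmanager
def store_context(log):
    store = {"saved": False}
    log.append("opened")
    yield store                 # the body of the `with` block runs here
    store["saved"] = True       # runs on exit, like store.save() above
    log.append("saved")

log = []
with store_context(log) as store:
    store["units"] = 3          # work done while the store is "open"
```

Everything after the `yield` executes when the `with` block exits, which is exactly the save-on-exit behaviour the old callback-style `with_block` had to simulate.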
 
 
diff --git a/translate/convert/oo2po.py b/translate/convert/oo2po.py
index 0c87889..4eb7297 100644
--- a/translate/convert/oo2po.py
+++ b/translate/convert/oo2po.py
@@ -28,13 +28,14 @@ for examples and usage instructions.
 import logging
 from urllib import urlencode
 
-from translate.storage import po
-from translate.storage import oo
+from translate.storage import oo, po
+
 
 # TODO: support using one GSI file as template, another as input (for when English is in one and translation in another)
 
 logger = logging.getLogger(__name__)
 
+
 class oo2po:
 
     def __init__(self, sourcelanguage, targetlanguage, blankmsgstr=False, long_keys=False):
@@ -94,13 +95,14 @@ class oo2po:
         thetargetfile = po.pofile()
         # create a header for the file
         bug_url = 'http://qa.openoffice.org/issues/enter_bug.cgi?%s' % \
-                  urlencode({"subcomponent": "ui",
-                             "comment": "",
-                             "short_desc": "Localization issue in file: %s" % \
-                                           theoofile.filename,
-                             "component": "l10n",
-                             "form_name": "enter_issue",
-                            })
+                  urlencode({
+                      "subcomponent": "ui",
+                      "comment": "",
+                      "short_desc": "Localization issue in file: %s" %
+                                    theoofile.filename,
+                      "component": "l10n",
+                      "form_name": "enter_issue",
+                  })
         targetheader = thetargetfile.init_headers(
                               x_accelerator_marker="~",
                               x_merge_on="location",
diff --git a/translate/convert/oo2xliff.py b/translate/convert/oo2xliff.py
index 4846273..9260533 100644
--- a/translate/convert/oo2xliff.py
+++ b/translate/convert/oo2xliff.py
@@ -25,17 +25,17 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 from urllib import urlencode
 
-from translate.storage import xliff
-from translate.storage import oo
+from translate.storage import oo, xliff
+
 
 # TODO: support using one GSI file as template, another as input (for when English is in one and translation in another)
 
 logger = logging.getLogger(__name__)
 
+
 class oo2xliff:
 
     def __init__(self, sourcelanguage, targetlanguage, blankmsgstr=False, long_keys=False):
@@ -100,13 +100,14 @@ class oo2xliff:
         thetargetfile.settargetlanguage(self.targetlanguage)
         # create a header for the file
         bug_url = 'http://qa.openoffice.org/issues/enter_bug.cgi?%s' % \
-                  urlencode({"subcomponent": "ui",
-                             "comment": "",
-                             "short_desc": "Localization issue in file: %s" % \
-                                           theoofile.filename,
-                             "component": "l10n",
-                             "form_name": "enter_issue",
-                            })
+                  urlencode({
+                      "subcomponent": "ui",
+                      "comment": "",
+                      "short_desc": "Localization issue in file: %s" %
+                                    theoofile.filename,
+                      "component": "l10n",
+                      "form_name": "enter_issue",
+                  })
         # go through the oo and convert each element
         for theoo in theoofile.units:
             unitlist = self.convertelement(theoo)
diff --git a/translate/convert/php2po.py b/translate/convert/php2po.py
index 9c056c7..636f743 100644
--- a/translate/convert/php2po.py
+++ b/translate/convert/php2po.py
@@ -24,7 +24,6 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
 from translate.convert import convert
diff --git a/translate/convert/po2csv.py b/translate/convert/po2csv.py
index 57ca47a..2dd5621 100644
--- a/translate/convert/po2csv.py
+++ b/translate/convert/po2csv.py
@@ -24,8 +24,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import po
-from translate.storage import csvl10n
+from translate.storage import csvl10n, po
 
 
 class po2csv:
diff --git a/translate/convert/po2dtd.py b/translate/convert/po2dtd.py
index 04d0409..cc93cf5 100644
--- a/translate/convert/po2dtd.py
+++ b/translate/convert/po2dtd.py
@@ -26,11 +26,9 @@
 
 import warnings
 
-from translate.storage import dtd
-from translate.storage import po
+from translate.convert import accesskey, convert
 from translate.misc import quote
-from translate.convert import accesskey
-from translate.convert import convert
+from translate.storage import dtd, po
 
 
 def dtdwarning(message, category, filename, lineno, line=None):
@@ -58,6 +56,8 @@ def applytranslation(entity, dtdunit, inputunit, mixedentities):
                     label, unquotedstr = accesskey.extract(unquotedstr)
                     if not unquotedstr:
                         warnings.warn("Could not find accesskey for %s" % entity)
+                        # Use the source language accesskey
+                        label, unquotedstr = accesskey.extract(inputunit.source)
                     else:
                         original = dtdunit.source
                         # For the sake of diffs we keep the case of the
@@ -68,8 +68,7 @@ def applytranslation(entity, dtdunit, inputunit, mixedentities):
                                 unquotedstr = unquotedstr.upper()
                             elif original.islower():
                                 unquotedstr = unquotedstr.lower()
-    if len(unquotedstr) > 0:
-        dtdunit.source = dtd.removeinvalidamps(entity, unquotedstr)
+    dtdunit.source = unquotedstr
 
 
 class redtd:
@@ -90,9 +89,9 @@ class redtd:
         entities = inunit.getlocations()
         mixedentities = self.mixer.match_entities(entities)
         for entity in entities:
-            if entity in self.dtdfile.index:
+            if entity in self.dtdfile.id_index:
                 # now we need to replace the definition of entity with msgstr
-                dtdunit = self.dtdfile.index[entity]  # find the dtd
+                dtdunit = self.dtdfile.id_index[entity]  # find the dtd
                 if inunit.istranslated() or not bool(inunit.source):
                     applytranslation(entity, dtdunit, inunit, mixedentities)
                 elif self.remove_untranslated and not (includefuzzy and inunit.isfuzzy()):
diff --git a/translate/convert/po2html.py b/translate/convert/po2html.py
index ed42cc1..b1bde3e 100644
--- a/translate/convert/po2html.py
+++ b/translate/convert/po2html.py
@@ -25,8 +25,7 @@ for examples and usage instructions.
 """
 
 from translate.convert import convert
-from translate.storage import html
-from translate.storage import po
+from translate.storage import html, po
 
 
 class po2html:
@@ -38,10 +37,11 @@ class po2html:
         if unit is None:
             return string
         unit = unit[0]
-        if self.includefuzzy or not unit.isfuzzy():
+        if unit.istranslated():
             return unit.target
-        else:
-            return unit.source
+        if self.includefuzzy and unit.isfuzzy():
+            return unit.target
+        return unit.source
 
     def mergestore(self, inputstore, templatetext, includefuzzy):
         """converts a file to .po format"""
@@ -76,11 +76,12 @@ def main(argv=None):
     from translate.misc import stdiotell
     import sys
     sys.stdout = stdiotell.StdIOWrapper(sys.stdout)
-    formats = {("po", "htm"): ("htm", converthtml),
-               ("po", "html"): ("html", converthtml),
-               ("po", "xhtml"): ("xhtml", converthtml),
-               ("po"): ("html", converthtml),
-              }
+    formats = {
+        ("po", "htm"): ("htm", converthtml),
+        ("po", "html"): ("html", converthtml),
+        ("po", "xhtml"): ("xhtml", converthtml),
+        ("po"): ("html", converthtml),
+    }
     parser = convert.ConvertOptionParser(formats, usetemplates=True,
                                          description=__doc__)
     parser.add_threshold_option()
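The reworked lookup in `po2html` above prefers a fully translated target, then a fuzzy target only when fuzzy output was requested, and otherwise falls back to the source text. That decision order can be sketched with a simplified stand-in for a PO unit (the class and its fields are illustrative, not the toolkit's unit API):

```python
class Unit(object):
    def __init__(self, source, target="", fuzzy=False):
        self.source, self.target, self.fuzzy = source, target, fuzzy

    def istranslated(self):
        # a unit counts as translated only with a non-fuzzy, non-empty target
        return bool(self.target) and not self.fuzzy

def lookup(unit, includefuzzy=False):
    if unit.istranslated():
        return unit.target          # clean translation wins
    if includefuzzy and unit.fuzzy:
        return unit.target          # fuzzy match, only when asked for
    return unit.source              # otherwise keep the source text
```

Compared with the old `includefuzzy or not fuzzy` test, this version no longer returns an empty target for untranslated, non-fuzzy units.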
diff --git a/translate/convert/po2ical.py b/translate/convert/po2ical.py
index e3edd3b..4b59a12 100644
--- a/translate/convert/po2ical.py
+++ b/translate/convert/po2ical.py
@@ -25,8 +25,7 @@ for examples and usage instructions.
 """
 
 from translate.convert import convert
-from translate.storage import factory
-from translate.storage import ical
+from translate.storage import factory, ical
 
 
 class reical:
diff --git a/translate/convert/po2moz.py b/translate/convert/po2moz.py
index 3e2ea43..dcb27dd 100644
--- a/translate/convert/po2moz.py
+++ b/translate/convert/po2moz.py
@@ -26,11 +26,8 @@ for examples and usage instructions.
 
 import os.path
 
-from translate.convert import po2dtd
-from translate.convert import po2prop
-from translate.convert import po2mozlang
-from translate.convert import prop2mozfunny
-from translate.convert import convert
+from translate.convert import (convert, po2dtd, po2mozlang, po2prop,
+                               prop2mozfunny)
 
 
 class MozConvertOptionParser(convert.ConvertOptionParser):
diff --git a/translate/convert/po2mozlang.py b/translate/convert/po2mozlang.py
index 3d355e7..427b4b0 100644
--- a/translate/convert/po2mozlang.py
+++ b/translate/convert/po2mozlang.py
@@ -25,8 +25,7 @@
 """
 
 from translate.convert import convert
-from translate.storage import mozilla_lang as lang
-from translate.storage import po
+from translate.storage import mozilla_lang as lang, po
 
 
 class po2lang:
diff --git a/translate/convert/po2oo.py b/translate/convert/po2oo.py
index 8fc1dd0..23e186d 100644
--- a/translate/convert/po2oo.py
+++ b/translate/convert/po2oo.py
@@ -24,20 +24,18 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
+import logging
 import os
-import sys
 import time
-import logging
 
 from translate.convert import convert
-from translate.storage import oo
-from translate.storage import factory
-from translate.filters import pofilter
-from translate.filters import checks
-from translate.filters import autocorrect
+from translate.filters import autocorrect, checks, pofilter
+from translate.storage import factory, oo
+
 
 logger = logging.getLogger(__name__)
 
+
 class reoo:
 
     def __init__(self, templatefile, languages=None, timestamp=None, includefuzzy=False, long_keys=False, filteraction="exclude"):
diff --git a/translate/convert/po2php.py b/translate/convert/po2php.py
index 56898be..52a6d12 100644
--- a/translate/convert/po2php.py
+++ b/translate/convert/po2php.py
@@ -26,8 +26,8 @@ for examples and usage instructions.
 
 from translate.convert import convert
 from translate.misc import quote
-from translate.storage import po
-from translate.storage import php
+from translate.storage import php, po
+
 
 eol = "\n"
 
diff --git a/translate/convert/po2prop.py b/translate/convert/po2prop.py
index 55b1e80..869ed97 100644
--- a/translate/convert/po2prop.py
+++ b/translate/convert/po2prop.py
@@ -24,14 +24,48 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.convert import convert
+import warnings
+
+from translate.convert import accesskey, convert
 from translate.misc import quote
-from translate.storage import po
-from translate.storage import properties
+from translate.storage import po, properties
+
 
 eol = u"\n"
 
 
+def applytranslation(key, propunit, inunit, mixedkeys):
+    """applies the translation for key in the po unit to the prop unit"""
+    # this converts the po-style string to a prop-style string
+    value = inunit.target
+    # handle mixed keys
+    for labelsuffix in properties.labelsuffixes:
+        if key.endswith(labelsuffix):
+            if key in mixedkeys:
+                value, akey = accesskey.extract(value)
+                break
+    else:
+        for akeysuffix in properties.accesskeysuffixes:
+            if key.endswith(akeysuffix):
+                if key in mixedkeys:
+                    label, value = accesskey.extract(value)
+                    if not value:
+                        warnings.warn("Could not find accesskey for %s" % key)
+                        # Use the source language accesskey
+                        label, value = accesskey.extract(inunit.source)
+                    else:
+                        original = propunit.source
+                        # For the sake of diffs we keep the case of the
+                        # accesskey the same if we know the translation didn't
+                        # change. Casing matters in XUL.
+                        if value == propunit.source and original.lower() == value.lower():
+                            if original.isupper():
+                                value = value.upper()
+                            elif original.islower():
+                                value = value.lower()
+    return value
+
+
 class reprop:
 
     def __init__(self, templatefile, inputstore, personality, encoding=None,
@@ -43,6 +77,8 @@ class reprop:
         if self.encoding is None:
             self.encoding = self.personality.default_encoding
         self.remove_untranslated = remove_untranslated
+        self.mixer = accesskey.UnitMixer(properties.labelsuffixes,
+                                         properties.accesskeysuffixes)
 
     def convertstore(self, includefuzzy=False):
         self.includefuzzy = includefuzzy
@@ -59,6 +95,20 @@ class reprop:
             outputlines.append(outputstr)
         return u"".join(outputlines).encode(self.encoding)
 
+
+    def _handle_accesskeys(self, inunit, currkey):
+        value = inunit.target
+        if self.personality.name == "mozilla":
+            keys = inunit.getlocations()
+            mixedkeys = self.mixer.match_entities(keys)
+            for key in keys:
+                if key == currkey and key in self.inputstore.locationindex:
+                    propunit = self.inputstore.locationindex[key]  # find the prop
+                    value = applytranslation(key, propunit, inunit, mixedkeys)
+                    break
+
+        return value
+
     def _explode_gaia_plurals(self):
         """Explode the gaia plurals."""
         from translate.lang import data
@@ -76,7 +126,7 @@ class reprop:
                 if category == 'zero':
                     # [zero] cases are translated as separate units
                     continue
-                new_unit = self.inputstore.addsourceunit(u"fish") # not used
+                new_unit = self.inputstore.addsourceunit(u"fish")  # not used
                 new_location = '%s[%s]' % (location, category)
                 new_unit.addlocation(new_location)
                 new_unit.target = text
@@ -122,20 +172,20 @@ class reprop:
                     if unit.isfuzzy() and not self.includefuzzy or len(unit.target) == 0:
                         value = unit.source
                     else:
-                        value = unit.target
+                        value = self._handle_accesskeys(unit, key)
                     self.inecho = False
                     assert isinstance(value, unicode)
-                    returnline = "%(key)s%(del)s%(value)s%(term)s%(eol)s" % \
-                         {"key": "%s%s%s" % (self.personality.key_wrap_char,
-                                             key,
-                                             self.personality.key_wrap_char),
-                          "del": delimiter,
-                          "value": "%s%s%s" % (self.personality.value_wrap_char,
-                                               self.personality.encode(value),
-                                               self.personality.value_wrap_char),
-                          "term": self.personality.pair_terminator,
-                          "eol": eol,
-                         }
+                    returnline = "%(key)s%(del)s%(value)s%(term)s%(eol)s" % {
+                        "key": "%s%s%s" % (self.personality.key_wrap_char,
+                                           key,
+                                           self.personality.key_wrap_char),
+                        "del": delimiter,
+                        "value": "%s%s%s" % (self.personality.value_wrap_char,
+                                             self.personality.encode(value),
+                                             self.personality.value_wrap_char),
+                        "term": self.personality.pair_terminator,
+                        "eol": eol,
+                    }
             else:
                 self.inecho = True
                 returnline = line + eol
diff --git a/translate/convert/po2rc.py b/translate/convert/po2rc.py
index e66256b..5b4eee4 100644
--- a/translate/convert/po2rc.py
+++ b/translate/convert/po2rc.py
@@ -25,8 +25,7 @@ for examples and usage instructions.
 """
 
 from translate.convert import convert
-from translate.storage import po
-from translate.storage import rc
+from translate.storage import po, rc
 
 
 class rerc:
diff --git a/translate/convert/po2tiki.py b/translate/convert/po2tiki.py
index 674ebd5..0c81a34 100644
--- a/translate/convert/po2tiki.py
+++ b/translate/convert/po2tiki.py
@@ -26,8 +26,7 @@ for examples and usage instructions.
 
 import sys
 
-from translate.storage import tiki
-from translate.storage import po
+from translate.storage import po, tiki
 
 
 class po2tiki:
diff --git a/translate/convert/po2tmx.py b/translate/convert/po2tmx.py
index d621164..b69a8f8 100644
--- a/translate/convert/po2tmx.py
+++ b/translate/convert/po2tmx.py
@@ -26,19 +26,22 @@ for examples and usage instructions.
 
 import os
 
-from translate.storage import po
-from translate.storage import tmx
 from translate.convert import convert
 from translate.misc import wStringIO
+from translate.storage import po, tmx
 
 
 class po2tmx:
 
-    def cleancomments(self, comments):
+    def cleancomments(self, comments, comment_type=None):
         """Removes the comment marks from the PO strings."""
+        # FIXME this is a bit hacky, needs some fixes in the PO classes
         for index, comment in enumerate(comments):
             if comment.startswith("#"):
-                comments[index] = comment[1:].rstrip()
+                if comment_type is None:
+                    comments[index] = comment[1:].rstrip()
+                else:
+                    comments[index] = comment[2:].strip()
 
         return ''.join(comments)
 
@@ -53,8 +56,8 @@ class po2tmx:
             translation = inunit.target
 
             commenttext = {
-                'source': self.cleancomments(inunit.sourcecomments),
-                'type': self.cleancomments(inunit.typecomments),
+                'source': self.cleancomments(inunit.sourcecomments, "source"),
+                'type': self.cleancomments(inunit.typecomments, "type"),
                 'others': self.cleancomments(inunit.othercomments),
             }.get(comment, None)
 
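The po2tmx change above makes `cleancomments` strip two characters from typed PO comments (`#:` source-reference and `#,` type comments) instead of one, since those markers are two characters wide. A simplified stand-in showing the two stripping modes:

```python
def cleancomments(comments, typed=False):
    # Simplified sketch of po2tmx.cleancomments after the patch: plain
    # comments ("# note") lose one leading character, typed PO comments
    # ("#: loc" or "#, fuzzy") lose their two-character marker.
    cleaned = []
    for comment in comments:
        if comment.startswith("#"):
            comment = comment[2:].strip() if typed else comment[1:].rstrip()
        cleaned.append(comment)
    return "".join(cleaned)
```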
diff --git a/translate/convert/po2ts.py b/translate/convert/po2ts.py
index dd40719..8a00a64 100644
--- a/translate/convert/po2ts.py
+++ b/translate/convert/po2ts.py
@@ -24,8 +24,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import po
-from translate.storage import ts
+from translate.storage import po, ts
 
 
 class po2ts:
diff --git a/translate/convert/po2txt.py b/translate/convert/po2txt.py
index 0a18887..c479a72 100644
--- a/translate/convert/po2txt.py
+++ b/translate/convert/po2txt.py
@@ -24,10 +24,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-try:
-    import textwrap
-except ImportError:
-    textwrap = None
+import textwrap
 
 from translate.convert import convert
 from translate.storage import factory
@@ -100,10 +97,9 @@ def main(argv=None):
     parser.add_option("", "--encoding", dest="encoding", default='utf-8', type="string",
             help="The encoding of the template file (default: UTF-8)")
     parser.passthrough.append("encoding")
-    if textwrap is not None:
-        parser.add_option("-w", "--wrap", dest="wrap", default=None, type="int",
-                help="set number of columns to wrap text at", metavar="WRAP")
-        parser.passthrough.append("wrap")
+    parser.add_option("-w", "--wrap", dest="wrap", default=None, type="int",
+            help="set number of columns to wrap text at", metavar="WRAP")
+    parser.passthrough.append("wrap")
     parser.add_threshold_option()
     parser.add_fuzzy_option()
     parser.run(argv)
diff --git a/translate/convert/po2web2py.py b/translate/convert/po2web2py.py
index fb7fd68..2fd44fa 100644
--- a/translate/convert/po2web2py.py
+++ b/translate/convert/po2web2py.py
@@ -18,7 +18,6 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 #
-# (c) 2009 Dominic König (dominic at nursix.org)
 
 """Convert GNU/gettext PO files to web2py translation dictionaries (.py).
 
@@ -26,6 +25,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
+from translate.convert import convert
 from translate.storage import factory
 
 
@@ -35,7 +35,7 @@ class po2pydict:
         return
 
     def convertstore(self, inputstore, includefuzzy):
-        from StringIO import StringIO
+        from cStringIO import StringIO
         str_obj = StringIO()
 
         mydict = dict()
diff --git a/translate/convert/po2wordfast.py b/translate/convert/po2wordfast.py
index 2b71a2d..b83a40f 100644
--- a/translate/convert/po2wordfast.py
+++ b/translate/convert/po2wordfast.py
@@ -26,10 +26,9 @@ for examples and usage instructions.
 
 import os
 
-from translate.storage import po
-from translate.storage import wordfast
 from translate.convert import convert
 from translate.misc import wStringIO
+from translate.storage import po, wordfast
 
 
 class po2wordfast:
diff --git a/translate/convert/po2xliff.py b/translate/convert/po2xliff.py
index 3b80f69..7820d88 100644
--- a/translate/convert/po2xliff.py
+++ b/translate/convert/po2xliff.py
@@ -24,8 +24,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import po
-from translate.storage import poxliff
+from translate.storage import po, poxliff
 
 
 class po2xliff:
diff --git a/translate/convert/pot2po.py b/translate/convert/pot2po.py
index 1722a30..13b9e5a 100644
--- a/translate/convert/pot2po.py
+++ b/translate/convert/pot2po.py
@@ -25,12 +25,10 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import factory
-from translate.search import match
 from translate.misc.multistring import multistring
+from translate.search import match
+from translate.storage import catkeys, factory, poheader
 from translate.tools import pretranslate
-from translate.storage import poheader, po
-from translate.storage import catkeys
 
 
 def convertpot(input_file, output_file, template_file, tm=None,
@@ -226,8 +224,8 @@ def _do_poheaders(input_store, output_store, template_store):
 
     inputheadervalues = input_store.parseheader()
     for key, value in inputheadervalues.iteritems():
-        if key in ("Project-Id-Version", "Last-Translator", "Language-Team", \
-                   "PO-Revision-Date", "Content-Type", \
+        if key in ("Project-Id-Version", "Last-Translator", "Language-Team",
+                   "PO-Revision-Date", "Content-Type",
                    "Content-Transfer-Encoding", "Plural-Forms"):
             # want to carry these from the template so we ignore them
             pass
diff --git a/translate/convert/prop2mozfunny.py b/translate/convert/prop2mozfunny.py
index f6f2a74..bdb351d 100644
--- a/translate/convert/prop2mozfunny.py
+++ b/translate/convert/prop2mozfunny.py
@@ -21,10 +21,9 @@
 """Converts properties files to additional Mozilla format files.
 """
 
-from translate.storage import properties
-from translate.convert import po2prop
-from translate.convert import mozfunny2prop
+from translate.convert import mozfunny2prop, po2prop
 from translate.misc.wStringIO import StringIO
+from translate.storage import properties
 
 
 def prop2inc(pf):
diff --git a/translate/convert/prop2po.py b/translate/convert/prop2po.py
index ed6028b..7404297 100644
--- a/translate/convert/prop2po.py
+++ b/translate/convert/prop2po.py
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 #
-# Copyright 2002-2010,2012 Zuza Software Foundation
+# Copyright 2002-2014 Zuza Software Foundation
 #
 # This file is part of translate.
 #
@@ -25,31 +25,29 @@ for examples and usage instructions.
 """
 
 import logging
-import sys
 
+from translate.convert.accesskey import UnitMixer
 from translate.storage import po, properties
 
 
 logger = logging.getLogger(__name__)
 
 
-def _collapse(store, units):
-    sources = [u.source for u in units]
-    targets = [u.target for u in units]
-    # TODO: only consider the right ones for sources and targets
-    plural_unit = store.addsourceunit(sources)
-    plural_unit.target = targets
-    return plural_unit
-
-
 class prop2po:
     """convert a .properties file to a .po file for handling the
     translation."""
 
-    def convertstore(self, thepropfile, personality="java",
-                     duplicatestyle="msgctxt"):
-        """converts a .properties file to a .po file..."""
+    def __init__(self, personality="java", blankmsgstr=False,
+                 duplicatestyle="msgctxt"):
         self.personality = personality
+        self.blankmsgstr = blankmsgstr
+        self.duplicatestyle = duplicatestyle
+        self.mixedkeys = {}
+        self.mixer = UnitMixer(properties.labelsuffixes,
+                               properties.accesskeysuffixes)
+
+    def convertstore(self, thepropfile):
+        """converts a .properties file to a .po file..."""
         thetargetfile = po.pofile()
         if self.personality in ("mozilla", "skype"):
             targetheader = thetargetfile.init_headers(
@@ -60,12 +58,15 @@ class prop2po:
             targetheader = thetargetfile.header()
         targetheader.addnote("extracted from %s" % thepropfile.filename,
                              "developer")
+
+        thepropfile.makeindex()
+        self.mixedkeys = self.mixer.match_entities(thepropfile.id_index)
         # we try and merge the header po with any comments at the start of the
         # properties file
         appendedheader = False
         waitingcomments = []
         for propunit in thepropfile.units:
-            pounit = self.convertunit(propunit, "developer")
+            pounit = self.convertpropunit(thepropfile, propunit, "developer")
             if pounit is None:
                 waitingcomments.extend(propunit.comments)
             # FIXME the storage class should not be creating blank units
@@ -85,13 +86,11 @@ class prop2po:
                 thetargetfile.addunit(pounit)
         if self.personality == "gaia":
             thetargetfile = self.fold_gaia_plurals(thetargetfile)
-        thetargetfile.removeduplicates(duplicatestyle)
+        thetargetfile.removeduplicates(self.duplicatestyle)
         return thetargetfile
 
-    def mergestore(self, origpropfile, translatedpropfile, personality="java",
-                   blankmsgstr=False, duplicatestyle="msgctxt"):
+    def mergestore(self, origpropfile, translatedpropfile):
         """converts two .properties files to a .po file..."""
-        self.personality = personality
         thetargetfile = po.pofile()
         if self.personality in ("mozilla", "skype"):
             targetheader = thetargetfile.init_headers(
@@ -102,14 +101,18 @@ class prop2po:
             targetheader = thetargetfile.header()
         targetheader.addnote("extracted from %s, %s" % (origpropfile.filename, translatedpropfile.filename),
                              "developer")
+        origpropfile.makeindex()
+        #TODO: self.mixedkeys is overwritten below, so this is useless:
+        self.mixedkeys = self.mixer.match_entities(origpropfile.id_index)
         translatedpropfile.makeindex()
+        self.mixedkeys = self.mixer.match_entities(translatedpropfile.id_index)
         # we try and merge the header po with any comments at the start of
         # the properties file
         appendedheader = False
         waitingcomments = []
         # loop through the original file, looking at units one by one
         for origprop in origpropfile.units:
-            origpo = self.convertunit(origprop, "developer")
+            origpo = self.convertpropunit(origpropfile, origprop, "developer")
             if origpo is None:
                 waitingcomments.extend(origprop.comments)
             # FIXME the storage class should not be creating blank units
@@ -128,14 +131,16 @@ class prop2po:
                 translatedprop = translatedpropfile.locationindex[origprop.name]
                 # Need to check that this comment is not a copy of the
                 # developer comments
-                translatedpo = self.convertunit(translatedprop, "translator")
+                translatedpo = self.convertpropunit(translatedpropfile,
+                                                    translatedprop,
+                                                    "translator")
                 if translatedpo is "discard":
                     continue
             else:
                 translatedpo = None
             # if we have a valid po unit, get the translation and add it...
             if origpo is not None:
-                if translatedpo is not None and not blankmsgstr:
+                if translatedpo is not None and not self.blankmsgstr:
                     origpo.target = translatedpo.source
                 origpo.addnote(u"".join(waitingcomments).rstrip(),
                                "developer", position="prepend")
@@ -146,11 +151,22 @@ class prop2po:
                              origprop.name)
         if self.personality == "gaia":
             thetargetfile = self.fold_gaia_plurals(thetargetfile)
-        thetargetfile.removeduplicates(duplicatestyle)
+        thetargetfile.removeduplicates(self.duplicatestyle)
         return thetargetfile
 
     def fold_gaia_plurals(self, postore):
         """Fold the multiple plural units of a gaia file into a gettext plural."""
+
+        def _append_plural_unit(store, plurals, plural):
+            units = plurals[plural]
+            sources = [u.source for u in units]
+            targets = [u.target for u in units]
+            # TODO: only consider the right ones for sources and targets
+            plural_unit = store.addsourceunit(sources)
+            plural_unit.target = targets
+            plural_unit.addlocation(plural)
+            del plurals[plural]
+
         new_store = type(postore)()
         plurals = {}
         current_plural = u""
@@ -159,6 +175,10 @@ class prop2po:
                 #TODO: reconsider: we could lose header comments here
                 continue
             if u"plural(n)" in unit.source:
+                if current_plural:
+                    # End of a set of plural units
+                    _append_plural_unit(new_store, plurals, current_plural)
+                    current_plural = u""
                 # start of a set of plural units
                 location = unit.getlocations()[0]
                 current_plural = location
@@ -174,18 +194,14 @@ class prop2po:
                         continue
                 elif current_plural:
                     # End of a set of plural units
-                    new_unit = _collapse(new_store, plurals[current_plural])
-                    new_unit.addlocation(current_plural)
-                    del plurals[current_plural]
+                    _append_plural_unit(new_store, plurals, current_plural)
                     current_plural = u""
 
                 new_store.addunit(unit)
 
         if current_plural:
             # The file ended with a set of plural units
-            new_unit = _collapse(new_store, plurals[current_plural])
-            new_unit.addlocation(current_plural)
-            del plurals[current_plural]
+            _append_plural_unit(new_store, plurals, current_plural)
             current_plural = u""
 
         # if everything went well, there should be nothing left in plurals
@@ -214,6 +230,63 @@ class prop2po:
         pounit.target = u""
         return pounit
 
+    def convertmixedunit(self, labelprop, accesskeyprop, commenttype):
+        label_unit = self.convertunit(labelprop, commenttype)
+        accesskey_unit = self.convertunit(accesskeyprop, commenttype)
+        if label_unit is None:
+            return accesskey_unit
+        if accesskey_unit is None:
+            return label_unit
+        target_unit = po.pounit(encoding="UTF-8")
+        return self.mixer.mix_units(label_unit, accesskey_unit, target_unit)
+
+    def convertpropunit(self, store, unit, commenttype, mixbucket="dtd"):
+        """Converts a unit from store to a po unit, keeping track of mixed
+        names along the way.
+
+        ``mixbucket`` can be specified to indicate if the given unit is part of
+        the template or the translated file.
+        """
+        if self.personality != "mozilla":
+            # XXX should we enable unit mixing for other personalities?
+            return self.convertunit(unit, commenttype)
+
+        # keep track of whether accesskey and label were combined
+        key = unit.getid()
+        if key not in self.mixedkeys:
+            return self.convertunit(unit, commenttype)
+
+        # use special convertmixed unit which produces one pounit with
+        # both combined for the label and None for the accesskey
+        alreadymixed = self.mixedkeys[key].get(mixbucket, None)
+        if alreadymixed:
+            # we are successfully throwing this away...
+            return None
+        elif alreadymixed is False:
+            # The mix failed before
+            return self.convertunit(unit, commenttype)
+
+        #assert alreadymixed is None
+        labelkey, accesskeykey = self.mixer.find_mixed_pair(self.mixedkeys, store, unit)
+        labelprop = store.id_index.get(labelkey, None)
+        accesskeyprop = store.id_index.get(accesskeykey, None)
+        po_unit = self.convertmixedunit(labelprop, accesskeyprop, commenttype)
+        if po_unit is not None:
+            if accesskeykey is not None:
+                self.mixedkeys[accesskeykey][mixbucket] = True
+            if labelkey is not None:
+                self.mixedkeys[labelkey][mixbucket] = True
+            return po_unit
+        else:
+            # otherwise the mix failed. add each one separately and
+            # remember they weren't mixed
+            if accesskeykey is not None:
+                self.mixedkeys[accesskeykey][mixbucket] = False
+            if labelkey is not None:
+                self.mixedkeys[labelkey][mixbucket] = False
+
+        return self.convertunit(unit, commenttype)
+
 
 def convertstrings(inputfile, outputfile, templatefile, personality="strings",
                    pot=False, duplicatestyle="msgctxt", encoding=None):
@@ -236,15 +309,13 @@ def convertprop(inputfile, outputfile, templatefile, personality="java",
     """reads in inputfile using properties, converts using prop2po, writes
     to outputfile"""
     inputstore = properties.propfile(inputfile, personality, encoding)
-    convertor = prop2po()
+    convertor = prop2po(personality=personality, blankmsgstr=pot,
+                        duplicatestyle=duplicatestyle)
     if templatefile is None:
-        outputstore = convertor.convertstore(inputstore, personality,
-                                             duplicatestyle=duplicatestyle)
+        outputstore = convertor.convertstore(inputstore)
     else:
         templatestore = properties.propfile(templatefile, personality, encoding)
-        outputstore = convertor.mergestore(templatestore, inputstore,
-                                           personality, blankmsgstr=pot,
-                                           duplicatestyle=duplicatestyle)
+        outputstore = convertor.mergestore(templatestore, inputstore)
     if outputstore.isempty():
         return 0
     outputfile.write(str(outputstore))
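The `fold_gaia_plurals` fix in the prop2po diff above flushes any pending plural set as soon as a new `plural(n)` marker is seen, so two back-to-back plural groups no longer bleed into one unit. A toy model of that flush-on-new-marker behavior, using plain `(location, source)` pairs instead of real storage units:

```python
def fold_plurals(units):
    # Toy model of the fold_gaia_plurals fix: units are (location, source)
    # pairs; a source containing "plural(n)" opens a plural set, and the
    # fix flushes any previous set at that point. Not the real
    # translate-toolkit code, just the control flow it adds.
    folded = []
    current, bucket = None, []

    def flush():
        if current is not None and bucket:
            folded.append((current, bucket[:]))

    for location, source in units:
        if "plural(n)" in source:
            flush()  # end of any previous plural set (the fix)
            current, bucket = location, []
        elif current and location.startswith(current + "["):
            bucket.append(source)  # e.g. msg[one], msg[other]
        else:
            flush()  # a plain unit also ends a pending set
            current, bucket = None, []
            folded.append((location, source))
    flush()  # the file may end inside a plural set
    return folded
```

Without the first `flush()`, a plural marker immediately following another plural set would leave the earlier set stranded in the pending bucket, which is exactly the case the upstream patch handles.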
diff --git a/translate/convert/rc2po.py b/translate/convert/rc2po.py
index a468739..d34a26b 100644
--- a/translate/convert/rc2po.py
+++ b/translate/convert/rc2po.py
@@ -24,14 +24,14 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
-from translate.storage import po
-from translate.storage import rc
+from translate.storage import po, rc
+
 
 logger = logging.getLogger(__name__)
 
+
 class rc2po:
     """Convert a .rc file to a .po file for handling the translation."""
 
diff --git a/translate/convert/sub2po.py b/translate/convert/sub2po.py
index 2ecec19..d39495b 100644
--- a/translate/convert/sub2po.py
+++ b/translate/convert/sub2po.py
@@ -24,13 +24,14 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import logging
 
 from translate.storage import po
 
+
 logger = logging.getLogger(__name__)
 
+
 def convert_store(input_store, duplicatestyle="msgctxt"):
     """converts a subtitle file to a .po file..."""
     output_store = po.pofile()
@@ -51,7 +52,7 @@ def merge_store(template_store, input_store, blankmsgstr=False,
     """converts two subtitle files to a .po file..."""
     output_store = po.pofile()
     output_header = output_store.headers()
-    output_header.addnote("extracted from %s, %s" % \
+    output_header.addnote("extracted from %s, %s" %
                           (template_store.filename, input_store.filename),
                           "developer")
 
diff --git a/translate/convert/symb2po.py b/translate/convert/symb2po.py
index f4035f0..079012f 100644
--- a/translate/convert/symb2po.py
+++ b/translate/convert/symb2po.py
@@ -84,7 +84,7 @@ def build_output(units, template_header, template_dict):
         'Language-Team': template_dict.get('r_string_languagegroup_name', ''),
         'Content-Transfer-Encoding': '8bit',
         'Content-Type': 'text/plain; charset=UTF-8',
-        }
+    }
     output_store.updateheader(add=True, **header_entries)
     for id, source in units:
         if id in ignore:
diff --git a/translate/convert/test_accesskey.py b/translate/convert/test_accesskey.py
index bd8aa7f..94d8937 100644
--- a/translate/convert/test_accesskey.py
+++ b/translate/convert/test_accesskey.py
@@ -54,6 +54,13 @@ def test_unicode():
     label, akey = accesskey.extract(u"E&ḓiṱ")
     assert label, akey == (u"Eḓiṱ", u"ḓ")
     assert isinstance(label, unicode) and isinstance(akey, unicode)
+    assert accesskey.combine(u"Eḓiṱ", u"ḓ") == (u"E&ḓiṱ")
+
+
+def test_numeric():
+    """test combining and extracting numeric markers"""
+    assert accesskey.extract(u"&100%") == (u"100%", u"1")
+    assert accesskey.combine(u"100%", u"1") == u"&100%"
 
 
 def test_empty_string():
@@ -74,8 +81,26 @@ def test_combine_label_accesskey():
     assert accesskey.combine(u"File", u"F", u"~") == u"~File"
 
 
+def test_combine_label_accesskey_different_capitals():
+    """test that we can combine accesskey and label to create a label+accesskey
+    string when we have more then one case or case is wrong."""
+    # Prefer the correct case, even when an alternate case occurs first
+    assert accesskey.combine(u"Close Other Tabs", u"o") == u"Cl&ose Other Tabs"
+    assert accesskey.combine(u"Other Closed Tab", u"o") == u"Other Cl&osed Tab"
+    assert accesskey.combine(u"Close Other Tabs", u"O") == u"Close &Other Tabs"
+    # Correct case is missing from string, so use alternate case
+    assert accesskey.combine(u"Close Tabs", u"O") == u"Cl&ose Tabs"
+    assert accesskey.combine(u"Other Tabs", u"o") == u"&Other Tabs"
+
+
 def test_uncombinable():
     """test our behaviour when we cannot combine label and accesskey"""
     assert accesskey.combine(u"File", u"D") is None
     assert accesskey.combine(u"File", u"") is None
     assert accesskey.combine(u"", u"") is None
+
+
+def test_accesskey_already_in_text():
+    """test that we can combine if the accesskey is already in the text"""
+    assert accesskey.combine(u"Mail & Newsgroups", u"N") == u"Mail & &Newsgroups"
+    assert accesskey.extract(u"Mail & &Newsgroups") == (u"Mail & Newsgroups", u"N")
diff --git a/translate/convert/test_convert.py b/translate/convert/test_convert.py
index 6bc4a1b..7401bca 100644
--- a/translate/convert/test_convert.py
+++ b/translate/convert/test_convert.py
@@ -107,7 +107,7 @@ class TestConvertCommand:
             sys.stdout = stdout
         helpfile.close()
         help_string = self.read_testfile("help.txt")
-        print help_string
+        print(help_string)
         convertsummary = self.convertmodule.__doc__.split("\n")[0]
         # the convertsummary might be wrapped. this will probably unwrap it
         assert convertsummary in help_string.replace("\n", " ")
diff --git a/translate/convert/test_csv2po.py b/translate/convert/test_csv2po.py
index 8f92658..e866728 100644
--- a/translate/convert/test_csv2po.py
+++ b/translate/convert/test_csv2po.py
@@ -1,11 +1,9 @@
 #!/usr/bin/env python
 
-from translate.convert import csv2po
-from translate.convert import test_convert
+from translate.convert import csv2po, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import csvl10n
-from translate.storage.test_base import headerless_len, first_translatable
+from translate.storage import csvl10n, po
+from translate.storage.test_base import first_translatable, headerless_len
 
 
 def test_replacestrings():
@@ -32,7 +30,7 @@ class TestCSV2PO:
 
     def singleelement(self, storage):
         """checks that the pofile contains a single non-header element, and returns it"""
-        print str(storage)
+        print(str(storage))
         assert headerless_len(storage.units) == 1
         return first_translatable(storage)
 
@@ -74,7 +72,7 @@ wat lank aanhou"
         unit = self.singleelement(pofile)
         assert unit.getlocations() == ['Random comment\nwith continuation']
         assert unit.source == "Original text"
-        print unit.target
+        print(unit.target)
         assert unit.target == "Langdradige teks\nwat lank aanhou"
 
     def test_tabs(self):
@@ -82,7 +80,7 @@ wat lank aanhou"
         minicsv = ',"First column\tSecond column","Twee kolomme gesky met \t"'
         pofile = self.csv2po(minicsv)
         unit = self.singleelement(pofile)
-        print unit.source
+        print(unit.source)
         assert unit.source == "First column\tSecond column"
         assert not pofile.findunit("First column\tSecond column").target == "Twee kolomme gesky met \\t"
 
@@ -90,18 +88,18 @@ wat lank aanhou"
         """Test the escaping of quotes (and slash)"""
         minicsv = r''',"Hello ""Everyone""","Good day ""All"""
 ,"Use \"".","Gebruik \""."'''
-        print minicsv
+        print(minicsv)
         csvfile = csvl10n.csvfile(wStringIO.StringIO(minicsv))
-        print str(csvfile)
+        print(str(csvfile))
         pofile = self.csv2po(minicsv)
         unit = first_translatable(pofile)
         assert unit.source == 'Hello "Everyone"'
         assert pofile.findunit('Hello "Everyone"').target == 'Good day "All"'
-        print str(pofile)
+        print(str(pofile))
         for unit in pofile.units:
-            print unit.source
-            print unit.target
-            print
+            print(unit.source)
+            print(unit.target)
+            print()
 #        assert pofile.findunit('Use \\".').target == 'Gebruik \\".'
 
     def test_empties(self):
diff --git a/translate/convert/test_dtd2po.py b/translate/convert/test_dtd2po.py
index a47f56b..25c928b 100644
--- a/translate/convert/test_dtd2po.py
+++ b/translate/convert/test_dtd2po.py
@@ -3,11 +3,9 @@
 
 from pytest import mark
 
-from translate.convert import dtd2po
-from translate.convert import test_convert
+from translate.convert import dtd2po, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import dtd
+from translate.storage import dtd, po
 
 
 class TestDTD2PO:
@@ -37,7 +35,7 @@ class TestDTD2PO:
         """checks that the pofile contains a single non-header element, and returns it"""
         assert len(pofile.units) == 2
         assert pofile.units[0].isheader()
-        print pofile.units[1]
+        print(pofile.units[1])
         return pofile.units[1]
 
     def countelements(self, pofile):
@@ -83,7 +81,7 @@ class TestDTD2PO:
         dtdsource = """<!ENTITY test.metoo '"Bananas" for sale'>\n"""
         pofile = self.dtd2po(dtdsource)
         pounit = self.singleelement(pofile)
-        print str(pounit)
+        print(str(pounit))
         assert pounit.source == '"Bananas" for sale'
 
     def test_emptyentity(self):
@@ -107,7 +105,7 @@ class TestDTD2PO:
         dtdsource = '<!ENTITY credit.translation "Translators Names">\n'
         pofile = self.dtd2po(dtdsource, dtdtemplate)
         unit = self.singleelement(pofile)
-        print unit
+        print(unit)
         assert "credit.translation" in str(unit)
         # We don't want this to simply be seen as a header:
         assert len(unit.getid()) != 0
@@ -124,7 +122,7 @@ class TestDTD2PO:
 '''
         pofile = self.dtd2po(dtdsource)
         posource = str(pofile)
-        print posource
+        print(posource)
         assert posource.count('#.') == 5  # 1 Header extracted from, 3 comment lines, 1 autoinserted comment
 
     def test_localisation_note_merge(self):
@@ -134,7 +132,7 @@ class TestDTD2PO:
         dtdsource = dtdtemplate % ("note1.label", "note1.label") + dtdtemplate % ("note2.label", "note2.label")
         pofile = self.dtd2po(dtdsource)
         posource = str(pofile.units[1]) + str(pofile.units[2])
-        print posource
+        print(posource)
         assert posource.count('#.') == 2
         assert posource.count('msgctxt') == 2
 
@@ -258,7 +256,7 @@ Some other text
         dtdsource = '<!ENTITY mainWindow.titlemodifiermenuseparator " - with a newline\n    and more text">'
         pofile = self.dtd2po(dtdsource)
         unit = self.singleelement(pofile)
-        print repr(unit.source)
+        print(repr(unit.source))
         assert unit.source == " - with a newline \nand more text"
 
     def test_escaping_newline_tabs(self):
@@ -269,8 +267,8 @@ Some other text
         thedtd.parse(dtdsource)
         thepo = po.pounit()
         converter.convertstrings(thedtd, thepo)
-        print thedtd
-        print thepo.source
+        print(thedtd)
+        print(thepo.source)
         # \n in a dtd should also appear as \n in the PO file
         assert thepo.source == r"A hard coded newline.\nAnd tab\t and a \r carriage return."
 
@@ -304,7 +302,7 @@ Some other text
 <!ENTITY managecerts.button "ﺇﺩﺍﺭﺓ ﺎﻠﺸﻫﺍﺩﺎﺗ...">
 <!ENTITY managecerts.accesskey "ﺩ">'''
         pofile = self.dtd2po(dtdlanguage, dtdtemplate)
-        print pofile
+        print(pofile)
         assert pofile.units[3].source == "Manage Certificates..."
         assert pofile.units[3].target == u"ﺇﺩﺍﺭﺓ ﺎﻠﺸﻫﺍﺩﺎﺗ..."
         assert pofile.units[4].source == "M"
@@ -318,14 +316,14 @@ Some other text
         dtdlanguage = '''<!ENTITY useAutoScroll.label             "使用自動捲動(Autoscrolling)">
 <!ENTITY useAutoScroll.accesskey         "a">'''
         pofile = self.dtd2po(dtdlanguage, dtdtemplate)
-        print pofile
+        print(pofile)
         assert pofile.units[1].target == "使用自動捲動(&Autoscrolling)"
         # We assume that accesskeys with no associated key should be done as follows "XXXX (&A)"
         # TODO - check that we can unfold this from PO -> DTD
         dtdlanguage = '''<!ENTITY useAutoScroll.label             "使用自動捲動">
 <!ENTITY useAutoScroll.accesskey         "a">'''
         pofile = self.dtd2po(dtdlanguage, dtdtemplate)
-        print pofile
+        print(pofile)
         assert pofile.units[1].target == "使用自動捲動 (&A)"
 
     def test_exclude_entity_includes(self):
@@ -349,7 +347,7 @@ Some other text
         dtdtemplate = '''<!ENTITY unreadFolders.label "Unread">\n<!ENTITY viewPickerUnread.label "Unread">\n<!ENTITY unreadColumn.label "Unread">'''
         dtdlanguage = '''<!ENTITY viewPickerUnread.label "Непрочетени">\n<!ENTITY unreadFolders.label "Непрочетени">'''
         pofile = self.dtd2po(dtdlanguage, dtdtemplate)
-        print pofile
+        print(pofile)
         assert pofile.units[1].source == "Unread"
 
     def test_merge_without_template(self):
@@ -361,7 +359,7 @@ Some other text
         dtdtemplate = ''
         dtdsource = '<!ENTITY no.template "Target">'
         pofile = self.dtd2po(dtdsource, dtdtemplate)
-        print pofile
+        print(pofile)
         assert self.countelements(pofile) == 0
 
 
diff --git a/translate/convert/test_html2po.py b/translate/convert/test_html2po.py
index 2d06049..6711ef9 100644
--- a/translate/convert/test_html2po.py
+++ b/translate/convert/test_html2po.py
@@ -2,9 +2,7 @@
 
 from pytest import mark
 
-from translate.convert import html2po
-from translate.convert import po2html
-from translate.convert import test_convert
+from translate.convert import html2po, po2html, test_convert
 from translate.misc import wStringIO
 
 
@@ -31,15 +29,15 @@ class TestHTML2PO:
         if actual > 0:
             if pofile.units[0].isheader():
                 actual = actual - 1
-        print pofile
+        print(pofile)
         assert actual == expected
 
     def compareunit(self, pofile, unitnumber, expected):
         """helper to validate a PO message"""
         if not pofile.units[0].isheader():
             unitnumber = unitnumber - 1
-        print 'unit source: ' + pofile.units[unitnumber].source.encode('utf-8') + '|'
-        print 'expected: ' + expected.encode('utf-8') + '|'
+        print('unit source: ' + pofile.units[unitnumber].source.encode('utf-8') + '|')
+        print('expected: ' + expected.encode('utf-8') + '|')
         assert unicode(pofile.units[unitnumber].source) == unicode(expected)
 
     def check_single(self, markup, itemtext):
diff --git a/translate/convert/test_json2po.py b/translate/convert/test_json2po.py
index 03e89b2..3f8dc79 100644
--- a/translate/convert/test_json2po.py
+++ b/translate/convert/test_json2po.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.convert import json2po
-from translate.convert import test_convert
+from translate.convert import json2po, test_convert
 from translate.misc import wStringIO
 from translate.storage import jsonl10n
 
@@ -18,7 +17,7 @@ class TestJson2PO:
 
     def singleelement(self, storage):
         """checks that the pofile contains a single non-header element, and returns it"""
-        print str(storage)
+        print(str(storage))
         assert len(storage.units) == 1
         return storage.units[0]
 
@@ -71,7 +70,7 @@ msgstr ""
 
         poresult = self.json2po(jsonsource)
         assert poresult.units[0].isheader()
-        print len(poresult.units)
+        print(len(poresult.units))
         assert len(poresult.units) == 11
 
 
diff --git a/translate/convert/test_moz2po.py b/translate/convert/test_moz2po.py
index 2e93f69..e72061e 100644
--- a/translate/convert/test_moz2po.py
+++ b/translate/convert/test_moz2po.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.convert import moz2po
-from translate.convert import test_convert
+from translate.convert import moz2po, test_convert
 
 
 class TestMoz2PO:
diff --git a/translate/convert/test_mozfunny2prop.py b/translate/convert/test_mozfunny2prop.py
index d19e028..6056f5d 100644
--- a/translate/convert/test_mozfunny2prop.py
+++ b/translate/convert/test_mozfunny2prop.py
@@ -25,13 +25,13 @@ class TestInc2PO:
         """checks that the pofile contains a single non-header element, and returns it"""
         assert len(pofile.units) == 2
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return pofile.units[1]
 
     def countelements(self, pofile):
         """counts the number of non-header entries"""
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return len(pofile.units) - 1
 
     def test_simpleentry(self):
diff --git a/translate/convert/test_mozlang2po.py b/translate/convert/test_mozlang2po.py
index ba818e4..c931b18 100644
--- a/translate/convert/test_mozlang2po.py
+++ b/translate/convert/test_mozlang2po.py
@@ -1,12 +1,8 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from pytest import mark
-
-from translate.convert import mozlang2po
-from translate.convert import test_convert
+from translate.convert import mozlang2po, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
 from translate.storage import mozilla_lang as lang
 
 
@@ -32,13 +28,13 @@ class TestLang2PO:
         """checks that the pofile contains a single non-header element, and returns it"""
         assert len(pofile.units) == 2
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return pofile.units[1]
 
     def countelements(self, pofile):
         """counts the number of non-header entries"""
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return len(pofile.units) - 1
 
     def test_simpleentry(self):
diff --git a/translate/convert/test_oo2po.py b/translate/convert/test_oo2po.py
index 7b463b1..a191e02 100644
--- a/translate/convert/test_oo2po.py
+++ b/translate/convert/test_oo2po.py
@@ -3,18 +3,12 @@
 
 import os
 import urlparse
-try:
-    from urlparse import parse_qs
-except ImportError:
-    from cgi import parse_qs
-
-from translate.convert import oo2po
-from translate.convert import po2oo
-from translate.convert import test_convert
+from urlparse import parse_qs
+
+from translate.convert import oo2po, po2oo, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
+from translate.storage import oo, po
 from translate.storage.poheader import poheader
-from translate.storage import oo
 
 
 class TestOO2PO:
@@ -59,7 +53,7 @@ class TestOO2PO:
         oooutputfile = wStringIO.StringIO()
         po2oo.convertoo(poinputfile, oooutputfile, ootemplatefile, targetlanguage="en-US")
         ooresult = oooutputfile.getvalue()
-        print "original oo:\n", oosource, "po version:\n", posource, "output oo:\n", ooresult
+        print("original oo:\n", oosource, "po version:\n", posource, "output oo:\n", ooresult)
         return ooresult.split('\t')[10]
 
     def check_roundtrip(self, filename, text):
@@ -80,15 +74,15 @@ class TestOO2PO:
         pofile = self.convert(oosource)
         pounit = self.singleelement(pofile)
         poelementsrc = str(pounit)
-        print poelementsrc
+        print(poelementsrc)
         assert "Newline \n Newline" in pounit.source
         assert "Tab \t Tab" in pounit.source
         assert "CR \r CR" in pounit.source
 
     def test_roundtrip_escape(self):
         self.check_roundtrip('strings.src', r'The given command is not a SELECT statement.\nOnly queries are allowed.')
-        self.check_roundtrip('source\ui\dlg\AutoControls_tmpl.hrc', r';\t59\t,\t44\t:\t58\t{Tab}\t9\t{Space}\t32')
-        self.check_roundtrip('inc_openoffice\windows\msi_languages\Nsis.ulf', r'The installation files must be unpacked and copied to your hard disk in preparation for the installation. After that, the %PRODUCTNAME installation will start automatically.\r\n\r\nClick \'Next\' to continue.')
+        self.check_roundtrip('source\\ui\\dlg\\AutoControls_tmpl.hrc', r';\t59\t,\t44\t:\t58\t{Tab}\t9\t{Space}\t32')
+        self.check_roundtrip('inc_openoffice\\windows\\msi_languages\\Nsis.ulf', r'The installation files must be unpacked and copied to your hard disk in preparation for the installation. After that, the %PRODUCTNAME installation will start automatically.\r\n\r\nClick \'Next\' to continue.')
         self.check_roundtrip('file.xhp', r'\<ahelp\>')
         self.check_roundtrip('file.xhp', r'\<ahelp prop=\"value\"\>')
         self.check_roundtrip('file.xhp', r'\<ahelp prop=\"value\"\>marked up text\</ahelp\>')
@@ -106,7 +100,7 @@ class TestOO2PO:
         pofile = self.convert(oosource)
         pounit = self.singleelement(pofile)
         poelementsrc = str(pounit)
-        print poelementsrc
+        print(poelementsrc)
         assert pounit.source == r"\<"
 
     def test_escapes_helpcontent2(self):
@@ -115,7 +109,7 @@ class TestOO2PO:
         pofile = self.convert(oosource)
         pounit = self.singleelement(pofile)
         poelementsrc = str(pounit)
-        print poelementsrc
+        print(poelementsrc)
         assert pounit.source == r'size *2 \langle x \rangle'
 
     def test_msgid_bug_error_address(self):
@@ -125,14 +119,13 @@ class TestOO2PO:
         assert pofile.units[0].isheader()
         assert pofile.parseheader()["Report-Msgid-Bugs-To"]
         bug_url = urlparse.urlparse(pofile.parseheader()["Report-Msgid-Bugs-To"])
-        print bug_url
+        print(bug_url)
         assert bug_url[:3] == ("http", "qa.openoffice.org", "/issues/enter_bug.cgi")
         assert parse_qs(bug_url[4], True) == {u'comment': [u''],
-                                                       u'component': [u'l10n'],
-                                                       u'form_name': [u'enter_issue'],
-                                                       u'short_desc': [u'Localization issue in file: '],
-                                                       u'subcomponent': [u'ui'],
-                                                      }
+                                              u'component': [u'l10n'],
+                                              u'form_name': [u'enter_issue'],
+                                              u'short_desc': [u'Localization issue in file: '],
+                                              u'subcomponent': [u'ui'],}
 
     def test_x_comment_inclusion(self):
         """test that we can merge x-comment language entries into comment sections of the PO file"""
diff --git a/translate/convert/test_oo2xliff.py b/translate/convert/test_oo2xliff.py
index 57564bb..fe17191 100644
--- a/translate/convert/test_oo2xliff.py
+++ b/translate/convert/test_oo2xliff.py
@@ -3,13 +3,8 @@
 
 import os
 
-from translate.convert import test_oo2po
-from translate.convert import oo2xliff
-from translate.convert import xliff2oo
-from translate.convert import test_convert
-from translate.misc import wStringIO
-from translate.storage import xliff
-from translate.storage import oo
+from translate.convert import oo2xliff, test_convert, test_oo2po
+from translate.storage import oo, xliff
 
 
 class TestOO2XLIFF(test_oo2po.TestOO2PO):
diff --git a/translate/convert/test_php2po.py b/translate/convert/test_php2po.py
index 46fae75..3c69c2c 100644
--- a/translate/convert/test_php2po.py
+++ b/translate/convert/test_php2po.py
@@ -1,11 +1,9 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.convert import php2po
-from translate.convert import test_convert
+from translate.convert import php2po, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import php
+from translate.storage import php, po
 
 
 class TestPhp2PO:
@@ -35,13 +33,13 @@ class TestPhp2PO:
         """checks that the pofile contains a single non-header element, and returns it"""
         assert len(pofile.units) == 2
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return pofile.units[1]
 
     def countelements(self, pofile):
         """counts the number of non-header entries"""
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return len(pofile.units) - 1
 
     def test_simpleentry(self):
@@ -67,8 +65,8 @@ class TestPhp2PO:
         phpsource = """$lang['nb'] = '%s';""" % unistring
         pofile = self.php2po(phpsource)
         pounit = self.singleelement(pofile)
-        print repr(pofile.units[0].target)
-        print repr(pounit.source)
+        print(repr(pofile.units[0].target))
+        print(repr(pounit.source))
         assert pounit.source == u'Norsk bokm\u00E5l'
 
     def test_multiline(self):
@@ -77,7 +75,7 @@ class TestPhp2PO:
 of connections to this server. If so, use the Advanced IMAP Server Settings dialog to
 reduce the number of cached connections.';"""
         pofile = self.php2po(phpsource)
-        print repr(pofile.units[1].target)
+        print(repr(pofile.units[1].target))
         assert self.countelements(pofile) == 1
 
     def test_comments_before(self):
diff --git a/translate/convert/test_po2csv.py b/translate/convert/test_po2csv.py
index 4c069ba..4f167e1 100644
--- a/translate/convert/test_po2csv.py
+++ b/translate/convert/test_po2csv.py
@@ -1,12 +1,9 @@
 #!/usr/bin/env python
 
-from translate.convert import po2csv
-from translate.convert import csv2po
-from translate.convert import test_convert
+from translate.convert import csv2po, po2csv, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import csvl10n
-from translate.storage.test_base import headerless_len, first_translatable
+from translate.storage import csvl10n, po
+from translate.storage.test_base import first_translatable, headerless_len
 
 
 class TestPO2CSV:
@@ -102,20 +99,20 @@ msgstr "Gebruik \\\"."
 msgstr "Vind\\Opsies"
 '''
         csvfile = self.po2csv(minipo)
-        print minipo
-        print csvfile
+        print(minipo)
+        print(csvfile)
         assert csvfile.findunit(r'Find\Options').target == r'Vind\Opsies'
 
     def test_singlequotes(self):
         """Tests that single quotes are preserved correctly"""
         minipo = '''msgid "source 'source'"\nmsgstr "target 'target'"\n'''
         csvfile = self.po2csv(minipo)
-        print str(csvfile)
+        print(str(csvfile))
         assert csvfile.findunit("source 'source'").target == "target 'target'"
         # Make sure we don't mess with start quotes until writing
         minipo = '''msgid "'source'"\nmsgstr "'target'"\n'''
         csvfile = self.po2csv(minipo)
-        print str(csvfile)
+        print(str(csvfile))
         assert csvfile.findunit(r"'source'").target == r"'target'"
         # TODO check that we escape on writing not in the internal representation
 
diff --git a/translate/convert/test_po2dtd.py b/translate/convert/test_po2dtd.py
index 13c9a55..2241075 100644
--- a/translate/convert/test_po2dtd.py
+++ b/translate/convert/test_po2dtd.py
@@ -5,12 +5,9 @@ import warnings
 
 import pytest
 
-from translate.convert import po2dtd
-from translate.convert import dtd2po
-from translate.convert import test_convert
+from translate.convert import dtd2po, po2dtd, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import dtd
+from translate.storage import dtd, po
 
 
 class TestPO2DTD:
@@ -62,7 +59,7 @@ class TestPO2DTD:
         dtdresult = dtdoutputfile.getvalue()
         print_string = "Original DTD:\n%s\n\nPO version:\n%s\n\n"
         print_string = print_string + "Output DTD:\n%s\n################"
-        print print_string % (dtdsource, posource, dtdresult)
+        print(print_string % (dtdsource, posource, dtdresult))
         return dtdresult
 
     def roundtripstring(self, entitystring):
@@ -106,8 +103,13 @@ class TestPO2DTD:
 
     def test_missingaccesskey(self):
         """tests that proper warnings are given if access key is missing"""
-        simplepo = '''#: simple.label\n#: simple.accesskey\nmsgid "Simple &String"\nmsgstr "Dimpled Ring"\n'''
-        simpledtd = '''<!ENTITY simple.label "Simple String">\n<!ENTITY simple.accesskey "S">'''
+        simplepo = '''#: simple.label
+#: simple.accesskey
+msgid "Simple &String"
+msgstr "Dimpled Ring"
+'''
+        simpledtd = '''<!ENTITY simple.label "Simple String">
+<!ENTITY simple.accesskey "S">'''
         warnings.simplefilter("error")
         assert pytest.raises(Warning, self.merge2dtd, simpledtd, simplepo)
 
@@ -132,7 +134,7 @@ class TestPO2DTD:
             simpledtd = simpledtd_template % (en_label, en_akey)
             dtdfile = self.merge2dtd(simpledtd, simplepo)
             dtdfile.makeindex()
-            accel = dtd.unquotefromdtd(dtdfile.index["simple.accesskey"].definition)
+            accel = dtd.unquotefromdtd(dtdfile.id_index["simple.accesskey"].definition)
             assert accel == target_akey
 
     def test_accesskey_types(self):
@@ -145,7 +147,7 @@ class TestPO2DTD:
                 simpledtd = simpledtd_template % (label, accesskey)
                 dtdfile = self.merge2dtd(simpledtd, simplepo)
                 dtdfile.makeindex()
-                assert dtd.unquotefromdtd(dtdfile.index["simple.%s" % accesskey].definition) == "a"
+                assert dtd.unquotefromdtd(dtdfile.id_index["simple.%s" % accesskey].definition) == "a"
 
     def test_ampersandfix(self):
         """tests that invalid ampersands are fixed in the dtd"""
@@ -163,9 +165,77 @@ msgstr "&searchIntegration.engineName; &ileti aramasına izin ver"
 <!ENTITY searchIntegration.label       "Allow &searchIntegration.engineName; to search messages">'''
         dtdfile = self.merge2dtd(dtd_snippet, po_snippet)
         dtdsource = str(dtdfile)
-        print dtdsource
+        print(dtdsource)
         assert '"&searchIntegration.engineName; ileti aramasına izin ver"' in dtdsource
 
+    def test_accesskey_missing(self):
+        """tests that missing ampersands use the source accesskey"""
+        po_snippet = r'''#: key.label
+#: key.accesskey
+msgid "&Search"
+msgstr "Ileti"
+'''
+        dtd_snippet = r'''<!ENTITY key.accesskey      "S">
+<!ENTITY key.label       "Ileti">'''
+        dtdfile = self.merge2dtd(dtd_snippet, po_snippet)
+        dtdsource = str(dtdfile)
+        print(dtdsource)
+        assert '"Ileti"' in dtdsource
+        assert '""' not in dtdsource
+        assert '"S"' in dtdsource
+
+    def test_accesskey_and_amp_case_no_accesskey(self):
+        """tests that accesskey and & can work together
+
+        If missing, we use the source accesskey"""
+        po_snippet = r'''#: key.label
+#: key.accesskey
+msgid "Colour & &Light"
+msgstr "Lig en Kleur"
+'''
+        dtd_snippet = r'''<!ENTITY key.accesskey      "L">
+<!ENTITY key.label       "Colour & Light">'''
+        dtdfile = self.merge2dtd(dtd_snippet, po_snippet)
+        dtdsource = str(dtdfile)
+        print(dtdsource)
+        assert '"Lig en Kleur"' in dtdsource
+        assert '"L"' in dtdsource
+
+    def test_accesskey_and_amp_case_no_amp(self):
+        """tests that accesskey and & can work together
+
+        If present, we use the target accesskey"""
+        po_snippet = r'''#: key.label
+#: key.accesskey
+msgid "Colour & &Light"
+msgstr "Lig en &Kleur"
+'''
+        dtd_snippet = r'''<!ENTITY key.accesskey      "L">
+<!ENTITY key.label       "Colour & Light">'''
+        dtdfile = self.merge2dtd(dtd_snippet, po_snippet)
+        dtdsource = str(dtdfile)
+        print(dtdsource)
+        assert '"Lig en Kleur"' in dtdsource
+        assert '"K"' in dtdsource
+
+    def test_accesskey_and_amp_case_both_amp_and_accesskey(self):
+        """tests that accesskey and & can work together
+
+        If both a literal & and an accesskey marker are present, we use
+        the correct target accesskey"""
+        po_snippet = r'''#: key.label
+#: key.accesskey
+msgid "Colour & &Light"
+msgstr "Lig & &Kleur"
+'''
+        dtd_snippet = r'''<!ENTITY key.accesskey      "L">
+<!ENTITY key.label       "Colour & Light">'''
+        dtdfile = self.merge2dtd(dtd_snippet, po_snippet)
+        dtdsource = str(dtdfile)
+        print(dtdsource)
+        assert '"Lig & Kleur"' in dtdsource
+        assert '"K"' in dtdsource
+
     def test_entities_two(self):
         """test the error ouput when we find two entities"""
         simplestring = '''#: simple.string second.string\nmsgid "Simple String"\nmsgstr "Dimpled Ring"\n'''
@@ -201,7 +271,7 @@ msgstr "&searchIntegration.engineName; &ileti aramasına izin ver"
         dtdtemplate = '''<!ENTITY simple.label "Simple String">\n<!ENTITY simple.accesskey "S">\n'''
         dtdexpected = '''<!ENTITY simple.label "Dimpled Ring">\n<!ENTITY simple.accesskey "R">\n'''
         newdtd = self.convertdtd(posource, dtdtemplate)
-        print newdtd
+        print(newdtd)
         assert newdtd == dtdexpected
 
     def test_untranslated_with_template(self):
@@ -234,7 +304,7 @@ msgstr "simple string four"
 
 '''
         newdtd = self.convertdtd(posource, dtdtemplate, remove_untranslated=True)
-        print newdtd
+        print(newdtd)
         assert newdtd == dtdexpected
 
     def test_untranslated_without_template(self):
@@ -260,7 +330,7 @@ msgstr "simple string four"
 <!ENTITY simple.label3 "Simple string 3">
 '''
         newdtd = self.po2dtd(posource, remove_untranslated=True)
-        print newdtd
+        print(newdtd)
         assert str(newdtd) == dtdexpected
 
     def test_blank_source(self):
@@ -290,10 +360,10 @@ msgstr "Simple string 3"
 <!ENTITY simple.label3 "Simple string 3">
 '''
         newdtd_with_template = self.convertdtd(posource, dtdtemplate, remove_untranslated=True)
-        print newdtd_with_template
+        print(newdtd_with_template)
         assert newdtd_with_template == dtdexpected_with_template
         newdtd_no_template = self.po2dtd(posource, remove_untranslated=True)
-        print newdtd_no_template
+        print(newdtd_no_template)
         assert str(newdtd_no_template) == dtdexpected_no_template
 
     def test_newlines_escapes(self):
@@ -302,7 +372,7 @@ msgstr "Simple string 3"
         dtdtemplate = '<!ENTITY  simple.label "A hard coded newline.\n">\n'
         dtdexpected = '''<!ENTITY  simple.label "Hart gekoeerde nuwe lyne\n">\n'''
         dtdfile = self.merge2dtd(dtdtemplate, posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdexpected
 
     def test_roundtrip_simple(self):
@@ -356,6 +426,13 @@ msgstr "Simple string 3"
         self.check_roundtrip(r'''"Both Quotes "" '' "''',
                              r'''"Both Quotes "" '' "''')
 
+    def test_roundtrip_amp(self):
+        """Checks that quotes make it through a DTD->PO->DTD roundtrip.
+
+        Quotes may be escaped or not.
+        """
+        self.check_roundtrip('"Colour & Light"')
+
     def test_merging_entries_with_spaces_removed(self):
         """dtd2po removes pretty printed spaces, this tests that we can merge this back into the pretty printed dtd"""
         posource = '''#: simple.label\nmsgid "First line then "\n"next lines."\nmsgstr "Eerste lyne en dan volgende lyne."\n'''
@@ -363,7 +440,7 @@ msgstr "Simple string 3"
           '                                          next lines.">\n'
         dtdexpected = '<!ENTITY simple.label "Eerste lyne en dan volgende lyne.">\n'
         dtdfile = self.merge2dtd(dtdtemplate, posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdexpected
 
     def test_preserving_spaces(self):
@@ -372,7 +449,7 @@ msgstr "Simple string 3"
         dtdtemplate = '<!ENTITY     simple.label         "One">\n'
         dtdexpected = '<!ENTITY     simple.label         "Een">\n'
         dtdfile = self.merge2dtd(dtdtemplate, posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdexpected
 
     def test_preserving_spaces(self):
@@ -382,13 +459,13 @@ msgstr "Simple string 3"
         dtdtemplate = '<!ENTITY simple.label "One" >\n'
         dtdexpected = '<!ENTITY simple.label "Een" >\n'
         dtdfile = self.merge2dtd(dtdtemplate, posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdexpected
         # Space after >
         dtdtemplate = '<!ENTITY simple.label "One"> \n'
         dtdexpected = '<!ENTITY simple.label "Een"> \n'
         dtdfile = self.merge2dtd(dtdtemplate, posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdexpected
 
     def test_comments(self):
@@ -396,7 +473,7 @@ msgstr "Simple string 3"
         posource = '''#: name\nmsgid "Text"\nmsgstr "Teks"'''
         dtdtemplate = '''<!ENTITY name "%s">\n<!-- \n\nexample -->\n'''
         dtdfile = self.merge2dtd(dtdtemplate % "Text", posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdtemplate % "Teks"
 
     def test_duplicates(self):
@@ -427,7 +504,7 @@ msgstr "Dipukutshwayo3"
 <!ENTITY bookmarksButton.label "Dipukutshwayo3">
 '''
         dtdfile = self.merge2dtd(dtdtemplate, posource)
-        print dtdfile
+        print(dtdfile)
         assert str(dtdfile) == dtdexpected
 
 
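The bulk of this commit converts Python 2 `print` statements to `print()` calls so the test suite can run on Python 3 as well. A minimal sketch of the pattern (the `__future__` import makes the function form behave identically on Python 2.6+; the buffer and strings here are illustrative, not from the toolkit):

```python
from __future__ import print_function
import io

# On Python 2 the future import turns print into a function, so
# converted calls like print(dtdfile) behave the same on 2 and 3.
buf = io.StringIO()
print("original oo:", "converted oo:", sep="\n", file=buf)
captured = buf.getvalue()
# captured == "original oo:\nconverted oo:\n"
```

Without the future import, a Python 2 `print("a", "b")` would print the tuple `('a', 'b')`, which is why each call site had to be converted rather than left ambiguous.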
diff --git a/translate/convert/test_po2html.py b/translate/convert/test_po2html.py
index 44ee712..f5697c2 100644
--- a/translate/convert/test_po2html.py
+++ b/translate/convert/test_po2html.py
@@ -2,21 +2,20 @@
 
 from pytest import mark
 
-from translate.convert import po2html
-from translate.convert import test_convert
+from translate.convert import po2html, test_convert
 from translate.misc import wStringIO
 
 
 class TestPO2Html:
 
-    def converthtml(self, posource, htmltemplate):
+    def converthtml(self, posource, htmltemplate, includefuzzy=False):
         """helper to exercise the command line function"""
         inputfile = wStringIO.StringIO(posource)
-        print inputfile.getvalue()
+        print(inputfile.getvalue())
         outputfile = wStringIO.StringIO()
         templatefile = wStringIO.StringIO(htmltemplate)
-        assert po2html.converthtml(inputfile, outputfile, templatefile)
-        print outputfile.getvalue()
+        assert po2html.converthtml(inputfile, outputfile, templatefile, includefuzzy)
+        print(outputfile.getvalue())
         return outputfile.getvalue()
 
     def test_simple(self):
@@ -86,18 +85,42 @@ sin.
         htmlexpected = '<p>"ek is dom"</p>'
         assert htmlexpected in self.converthtml(posource, htmlsource)
 
-    def test_fuzzy_strings(self):
+    def test_states_translated(self):
+        """Test that we use target when translated"""
+        htmlsource = '<div>aaa</div>'
+        posource = 'msgid "aaa"\nmsgstr "bbb"\n'
+        htmltarget = '<div>bbb</div>'
+        assert htmltarget in self.converthtml(posource, htmlsource)
+        assert htmlsource not in self.converthtml(posource, htmlsource)
+
+    def test_states_untranslated(self):
+        """Test that we use source when a string is untranslated"""
+        htmlsource = '<div>aaa</div>'
+        posource = 'msgid "aaa"\nmsgstr ""\n'
+        htmltarget = htmlsource
+        assert htmltarget in self.converthtml(posource, htmlsource)
+
+    def test_states_fuzzy(self):
         """Test that we use source when a string is fuzzy
 
-        This fixes :bug:`3145`
+        This fixes :issue:`3145`
         """
         htmlsource = '<div>aaa</div>'
-        posource = '#: html:3\nmsgid "aaa"\nmsgstr "bbb"\n'
-        posource_fuzzy = '#: html:3\n#, fuzzy\nmsgid "aaa"\nmsgstr "bbb"\n'
-        htmlexpected = '<div>bbb</div>'
-        assert htmlexpected in self.converthtml(posource, htmlsource)
-        assert htmlexpected not in self.converthtml(posource_fuzzy, htmlsource)
-        assert htmlsource in self.converthtml(posource_fuzzy, htmlsource)
+        posource = '#: html:3\n#, fuzzy\nmsgid "aaa"\nmsgstr "bbb"\n'
+        htmltarget = '<div>bbb</div>'
+        # Don't use fuzzies
+        assert htmltarget not in self.converthtml(posource, htmlsource, includefuzzy=False)
+        assert htmlsource in self.converthtml(posource, htmlsource, includefuzzy=False)
+        # Use fuzzies
+        assert htmltarget in self.converthtml(posource, htmlsource, includefuzzy=True)
+        assert htmlsource not in self.converthtml(posource, htmlsource, includefuzzy=True)
+
+    def test_untranslated_attributes(self):
+        """Verify that untranslated attributes are output as source, not dropped."""
+        htmlsource = '<meta name="keywords" content="life, the universe, everything" />'
+        posource = '#: test.html+:-1\nmsgid "life, the universe, everything"\nmsgstr ""'
+        expected = '<meta name="keywords" content="life, the universe, everything" />'
+        assert expected in self.converthtml(posource, htmlsource)
 
 
 class TestPO2HtmlCommand(test_convert.TestConvertCommand, TestPO2Html):
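The new `test_states_*` tests above pin down a simple selection rule for each unit. A hypothetical sketch of that rule (illustrative only; this mirrors what the tests assert, not po2html's actual implementation):

```python
def choose_output(source, target, fuzzy=False, includefuzzy=False):
    """Return the text a converter should emit for one unit: the
    translation when it exists and is usable, else the source.
    Sketch of the behaviour the po2html state tests exercise.
    """
    # A fuzzy translation is only used when explicitly requested.
    if target and (not fuzzy or includefuzzy):
        return target
    return source
```

With `includefuzzy=False` a fuzzy "bbb" is ignored and the source "aaa" passes through, matching `test_states_fuzzy`.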
diff --git a/translate/convert/test_po2ical.py b/translate/convert/test_po2ical.py
index 3db351d..96447be 100644
--- a/translate/convert/test_po2ical.py
+++ b/translate/convert/test_po2ical.py
@@ -4,11 +4,11 @@
 import pytest
 pytest.importorskip("vobject")
 
-from translate.convert import po2ical
-from translate.convert import test_convert
+from translate.convert import po2ical, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
+
 icalboiler = '''BEGIN:VCALENDAR
 VERSION:2.0
 PRODID:-//hacksw/handcal//NONSGML v1.0//EN
@@ -44,7 +44,7 @@ class TestPO2Ical:
         #templateprop = properties.propfile(templatefile)
         convertor = po2ical.reical(templatefile, inputpo)
         outputical = convertor.convertstore()
-        print outputical
+        print(outputical)
         return outputical
 
     def test_simple_summary(self):
@@ -56,7 +56,7 @@ msgstr "Waarde"
         icaltemplate = icalboiler % "Value"
         icalexpected = icalboiler % "Waarde"
         icalfile = self.merge2ical(icaltemplate, posource)
-        print icalexpected
+        print(icalexpected)
         assert icalfile == icalexpected
 
     # FIXME we should also test for DESCRIPTION, LOCATION and COMMENT
diff --git a/translate/convert/test_po2ini.py b/translate/convert/test_po2ini.py
index 50cd83a..31f695c 100644
--- a/translate/convert/test_po2ini.py
+++ b/translate/convert/test_po2ini.py
@@ -3,8 +3,7 @@
 
 from pytest import importorskip
 
-from translate.convert import po2ini
-from translate.convert import test_convert
+from translate.convert import po2ini, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -29,7 +28,7 @@ class TestPO2Ini:
         templatefile = wStringIO.StringIO(inisource)
         convertor = po2ini.reini(templatefile, inputpo, dialect=dialect)
         outputini = convertor.convertstore()
-        print outputini
+        print(outputini)
         return outputini
 
     def test_merging_simple(self):
@@ -38,7 +37,7 @@ class TestPO2Ini:
         initemplate = '''[section]\nprop=value\n'''
         iniexpected = '''[section]\nprop=waarde\n'''
         inifile = self.merge2ini(initemplate, posource)
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
     def test_space_preservation(self):
@@ -47,7 +46,7 @@ class TestPO2Ini:
         initemplate = '''[section]\nprop  =  value\n'''
         iniexpected = '''[section]\nprop  =  waarde\n'''
         inifile = self.merge2ini(initemplate, posource)
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
     def test_merging_blank_entries(self):
@@ -60,7 +59,7 @@ msgstr ""'''
         initemplate = '[section]\naccesskey-accept=\n'
         iniexpected = '[section]\naccesskey-accept=\n'
         inifile = self.merge2ini(initemplate, posource)
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
     def test_merging_fuzzy(self):
@@ -69,7 +68,7 @@ msgstr ""'''
         initemplate = '''[section]\nprop=value\n'''
         iniexpected = '''[section]\nprop=value\n'''
         inifile = self.merge2ini(initemplate, posource)
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
     def test_merging_propertyless_template(self):
@@ -78,7 +77,7 @@ msgstr ""'''
         initemplate = "# A comment\n"
         iniexpected = initemplate
         inifile = self.merge2ini(initemplate, posource)
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
     def test_empty_value(self):
@@ -91,7 +90,7 @@ msgstr "translated"
         initemplate = '''[section]\nkey =\n'''
         iniexpected = '''[section]\nkey =translated\n'''
         inifile = self.merge2ini(initemplate, posource)
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
     def test_dialects_inno(self):
@@ -103,7 +102,7 @@ msgstr "ṽḁḽṻḝ\tṽḁḽṻḝ2\n"
         initemplate = '''[section]\nprop  =  value%tvalue%n\n'''
         iniexpected = '''[section]\nprop  =  ṽḁḽṻḝ%tṽḁḽṻḝ2%n\n'''
         inifile = self.merge2ini(initemplate, posource, "inno")
-        print inifile
+        print(inifile)
         assert inifile == iniexpected
 
 
diff --git a/translate/convert/test_po2moz.py b/translate/convert/test_po2moz.py
index d0b84c0..9e958ee 100644
--- a/translate/convert/test_po2moz.py
+++ b/translate/convert/test_po2moz.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.convert import po2moz
-from translate.convert import test_convert
+from translate.convert import po2moz, test_convert
 
 
 class TestPO2Moz:
diff --git a/translate/convert/test_po2mozlang.py b/translate/convert/test_po2mozlang.py
index 435a63a..469d9b0 100644
--- a/translate/convert/test_po2mozlang.py
+++ b/translate/convert/test_po2mozlang.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.convert import po2mozlang
-from translate.convert import test_convert
+from translate.convert import po2mozlang, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -22,7 +21,7 @@ class TestPO2Lang:
         posource = '''#: prop\nmsgid "Source"\nmsgstr "Target"\n'''
         propexpected = ''';Source\nTarget\n'''
         langfile = self.po2lang(posource)
-        print langfile
+        print(langfile)
         assert str(langfile) == propexpected
 
     def test_comment(self):
@@ -30,7 +29,7 @@ class TestPO2Lang:
         posource = '''#. Comment\n#: prop\nmsgid "Source"\nmsgstr "Target"\n'''
         propexpected = '''# Comment\n;Source\nTarget\n'''
         langfile = self.po2lang(posource)
-        print langfile
+        print(langfile)
         assert str(langfile) == propexpected
 
     def test_fuzzy(self):
@@ -38,7 +37,7 @@ class TestPO2Lang:
         posource = '''#. Comment\n#: prop\n#, fuzzy\nmsgid "Source"\nmsgstr "Target"\n'''
         propexpected = '''# Comment\n;Source\nSource\n'''
         langfile = self.po2lang(posource)
-        print langfile
+        print(langfile)
         assert str(langfile) == propexpected
 
     def test_ok_marker(self):
@@ -46,7 +45,7 @@ class TestPO2Lang:
         posource = '''#: prop\nmsgid "Same"\nmsgstr "Same"\n'''
         propexpected = ''';Same\nSame {ok}\n'''
         langfile = self.po2lang(posource)
-        print langfile
+        print(langfile)
         assert str(langfile) == propexpected
 
 
diff --git a/translate/convert/test_po2oo.py b/translate/convert/test_po2oo.py
index 88e51cd..09ac859 100644
--- a/translate/convert/test_po2oo.py
+++ b/translate/convert/test_po2oo.py
@@ -5,9 +5,7 @@ import warnings
 
 from pytest import mark
 
-from translate.convert import po2oo
-from translate.convert import oo2po
-from translate.convert import test_convert
+from translate.convert import oo2po, po2oo, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -41,7 +39,7 @@ class TestPO2OO:
         oooutputfile = wStringIO.StringIO()
         po2oo.convertoo(poinputfile, oooutputfile, ootemplatefile, targetlanguage="en-US")
         ooresult = oooutputfile.getvalue()
-        print "original oo:\n", oosource, "po version:\n", posource, "output oo:\n", ooresult
+        print("original oo:\n", oosource, "po version:\n", posource, "output oo:\n", ooresult)
         assert ooresult.startswith(oointro) and ooresult.endswith(oooutro)
         return ooresult[len(oointro):-len(oooutro)]
 
diff --git a/translate/convert/test_po2php.py b/translate/convert/test_po2php.py
index 3442014..b6f4a3e 100644
--- a/translate/convert/test_po2php.py
+++ b/translate/convert/test_po2php.py
@@ -3,8 +3,7 @@
 
 from pytest import mark
 
-from translate.convert import po2php
-from translate.convert import test_convert
+from translate.convert import po2php, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -27,7 +26,7 @@ class TestPO2Php:
         #templatephp = php.phpfile(templatefile)
         convertor = po2php.rephp(templatefile, inputpo)
         outputphp = convertor.convertstore()
-        print outputphp
+        print(outputphp)
         return outputphp
 
     def test_merging_simple(self):
@@ -36,7 +35,7 @@ class TestPO2Php:
         phptemplate = '''$lang['name'] = 'value';\n'''
         phpexpected = '''$lang['name'] = 'waarde';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_space_preservation(self):
@@ -45,7 +44,7 @@ class TestPO2Php:
         phptemplate = '''$lang['name']  =  'value';\n'''
         phpexpected = '''$lang['name']  =  'waarde';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_merging_blank_entries(self):
@@ -58,7 +57,7 @@ msgstr ""'''
         phptemplate = '''$lang['accesskey-accept'] = '';\n'''
         phpexpected = '''$lang['accesskey-accept'] = '';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_merging_fuzzy(self):
@@ -67,7 +66,7 @@ msgstr ""'''
         phptemplate = '''$lang['name']  =  'value';\n'''
         phpexpected = '''$lang['name']  =  'value';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_locations_with_spaces(self):
@@ -76,7 +75,7 @@ msgstr ""'''
         phptemplate = '''$lang[ 'name' ]  =  'value';\n'''
         phpexpected = '''$lang[ 'name' ]  =  'waarde';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_inline_comments(self):
@@ -85,7 +84,7 @@ msgstr ""'''
         phptemplate = '''$lang[ 'name' ]  =  'value'; //inline comment\n'''
         phpexpected = '''$lang[ 'name' ]  =  'waarde'; //inline comment\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_named_variables(self):
@@ -97,7 +96,7 @@ msgstr "Jaar"
         phptemplate = '''$dictYear = 'Year';\n'''
         phpexpected = '''$dictYear = 'Jaar';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert phpfile == [phpexpected]
 
     def test_multiline(self):
@@ -119,7 +118,7 @@ about to automatically upgrade your server to this version:
 <p>Once you do this you can not go back again.</p>
 <p>Are you sure you want to upgrade this server to this version?</p>';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile[0]
+        print(phpfile[0])
         assert phpfile[0] == phptemplate
 
     def test_hash_comment(self):
@@ -131,7 +130,7 @@ msgstr "stringetjie"
         phptemplate = '''# inside alt= stuffies\n$variable = 'stringy';\n'''
         phpexpected = '''# inside alt= stuffies\n$variable = 'stringetjie';\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert "".join(phpfile) == phpexpected
 
     def test_arrays(self):
@@ -140,7 +139,7 @@ msgstr "stringetjie"
         phptemplate = '''$lang = array(\n    'name' => 'value',\n);\n'''
         phpexpected = '''$lang = array(\n    'name' => 'waarde',\n);\n'''
         phpfile = self.merge2php(phptemplate, posource)
-        print phpfile
+        print(phpfile)
         assert "".join(phpfile) == phpexpected
 
     @mark.xfail(reason="Need to review if we want this behaviour")
@@ -150,7 +149,7 @@ msgstr "stringetjie"
         proptemplate = "# A comment\n"
         propexpected = proptemplate
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == [propexpected]
 
 
diff --git a/translate/convert/test_po2prop.py b/translate/convert/test_po2prop.py
index 4d89a90..702ed76 100644
--- a/translate/convert/test_po2prop.py
+++ b/translate/convert/test_po2prop.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.convert import po2prop
-from translate.convert import test_convert
+from translate.convert import po2prop, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -25,7 +24,7 @@ class TestPO2Prop:
         #templateprop = properties.propfile(templatefile)
         convertor = po2prop.reprop(templatefile, inputpo, personality=personality, remove_untranslated=remove_untranslated)
         outputprop = convertor.convertstore()
-        print outputprop
+        print(outputprop)
         return outputprop
 
     def test_merging_simple(self):
@@ -34,7 +33,7 @@ class TestPO2Prop:
         proptemplate = '''prop=value\n'''
         propexpected = '''prop=waarde\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_merging_untranslated(self):
@@ -43,7 +42,7 @@ class TestPO2Prop:
         proptemplate = '''prop=value\n'''
         propexpected = proptemplate
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_hard_newlines_preserved(self):
@@ -52,7 +51,7 @@ class TestPO2Prop:
         proptemplate = '''prop=\\nvalue\\n\\n\n'''
         propexpected = '''prop=\\nwaarde\\n\\n\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_space_preservation(self):
@@ -61,7 +60,7 @@ class TestPO2Prop:
         proptemplate = '''prop  =  value\n'''
         propexpected = '''prop  =  waarde\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_merging_blank_entries(self):
@@ -74,7 +73,7 @@ msgstr ""'''
         proptemplate = 'accesskey-accept=\n'
         propexpected = 'accesskey-accept=\n'
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_merging_fuzzy(self):
@@ -83,7 +82,90 @@ msgstr ""'''
         proptemplate = '''prop=value\n'''
         propexpected = '''prop=value\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
+        assert propfile == propexpected
+
+    def test_mozilla_accesskeys(self):
+        """Check merging of Mozilla accesskeys."""
+        posource = '''#: prop.label prop.accesskey
+msgid "&Value"
+msgstr "&Waarde"
+
+#: key.label key.accesskey
+msgid "&Key"
+msgstr "&Sleutel"
+'''
+        proptemplate = '''prop.label=Value
+prop.accesskey=V
+key.label=Key
+key.accesskey=K
+'''
+        propexpected = '''prop.label=Waarde
+prop.accesskey=W
+key.label=Sleutel
+key.accesskey=S
+'''
+        propfile = self.merge2prop(proptemplate, posource, personality="mozilla")
+        print(propfile)
+        assert propfile == propexpected
+
+    def test_mozilla_accesskeys_missing_accesskey(self):
+        """Check merging Mozilla accesskeys when the translation has no accesskey marker."""
+        posource = '''#: prop.label prop.accesskey
+# No accesskey because we forgot or language doesn't do accesskeys
+msgid "&Value"
+msgstr "Waarde"
+'''
+        proptemplate = '''prop.label=Value
+prop.accesskey=V
+'''
+        propexpected = '''prop.label=Waarde
+prop.accesskey=V
+'''
+        propfile = self.merge2prop(proptemplate, posource, personality="mozilla")
+        print(propfile)
+        assert propfile == propexpected
+
+    def test_mozilla_margin_whitespace(self):
+        """Check handling of Mozilla leading and trailing spaces"""
+        posource = '''#: sepAnd
+msgid " and "
+msgstr " و "
+
+#: sepComma
+msgid ", "
+msgstr "، "
+'''
+        proptemplate = r'''sepAnd = \u0020and\u0020
+sepComma = ,\u20
+'''
+        propexpected = r'''sepAnd = \u0020و\u0020
+sepComma = ،\u0020
+'''
+        propfile = self.merge2prop(proptemplate, posource, personality="mozilla")
+        print(propfile)
+        assert propfile == propexpected
+
+    def test_mozilla_all_whitespace(self):
+        """Check the all-whitespace Mozilla hack; remove this test when
+        the corresponding code is removed."""
+        posource = '''#: accesskey-accept
+msgctxt "accesskey-accept"
+msgid ""
+msgstr " "
+
+#: accesskey-help
+msgid "H"
+msgstr "م"
+'''
+        proptemplate = '''accesskey-accept=
+accesskey-help=H
+'''
+        propexpected = '''accesskey-accept=
+accesskey-help=م
+'''
+        propfile = self.merge2prop(proptemplate, posource, personality="mozilla")
+        print(propfile)
         assert propfile == propexpected
 
     def test_merging_propertyless_template(self):
@@ -92,7 +174,7 @@ msgstr ""'''
         proptemplate = "# A comment\n"
         propexpected = proptemplate
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_delimiters(self):
@@ -101,9 +183,9 @@ msgstr ""'''
         proptemplate = '''prop %s value\n'''
         propexpected = '''prop %s translated\n'''
         for delim in ['=', ':', '']:
-            print "testing '%s' as delimiter" % delim
+            print("testing '%s' as delimiter" % delim)
             propfile = self.merge2prop(proptemplate % delim, posource)
-            print propfile
+            print(propfile)
             assert propfile == propexpected % delim
 
     def test_empty_value(self):
@@ -116,7 +198,7 @@ msgstr "translated"
         proptemplate = '''key\n'''
         propexpected = '''key = translated\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_personalities(self):
@@ -146,10 +228,10 @@ msgstr "translated"
         posource = '''#: prop\nmsgid "value"\nmsgstr ""\n'''
         proptemplate = '''prop = value\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == proptemplate  # We use the existing values
         propfile = self.merge2prop(proptemplate, posource, remove_untranslated=True)
-        print propfile
+        print(propfile)
         assert propfile == ''  # We drop the key
 
     def test_merging_untranslated_multiline(self):
@@ -160,10 +242,10 @@ msgstr "translated"
 '''
         propexpected = '''prop = value1 value2\n'''
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected  # We use the existing values
         propfile = self.merge2prop(proptemplate, posource, remove_untranslated=True)
-        print propfile
+        print(propfile)
         assert propfile == ''  # We drop the key
 
     def test_merging_untranslated_comments(self):
@@ -172,10 +254,10 @@ msgstr "translated"
         proptemplate = '''# A comment\nprop = value\n'''
         propexpected = '# A comment\nprop = value\n'
         propfile = self.merge2prop(proptemplate, posource)
-        print propfile
+        print(propfile)
         assert propfile == propexpected  # We use the existing values
         propfile = self.merge2prop(proptemplate, posource, remove_untranslated=True)
-        print propfile
+        print(propfile)
         # FIXME ideally we should drop the comment as well as the unit
         assert propfile == '# A comment\n'  # We drop the key
 
@@ -195,7 +277,7 @@ prop2=value2
 
         propexpected = '''prop2=value2\n'''
         propfile = self.merge2prop(proptemplate, posource, remove_untranslated=True)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_merging_blank(self):
@@ -219,10 +301,10 @@ prop2=
 '''
 
         propfile = self.merge2prop(proptemplate, posource, remove_untranslated=False)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
         propfile = self.merge2prop(proptemplate, posource, remove_untranslated=True)
-        print propfile
+        print(propfile)
         assert propfile == propexpected
 
     def test_gaia_plurals(self):
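The Mozilla accesskey tests added above imply a derivation rule: when the translated label carries an `&` marker, the merged accesskey is regenerated from the character after the marker (`&Waarde` → `W`); when the translation lacks the marker, the template's original accesskey is kept (`V`). A hypothetical sketch of that rule (the function name and logic are assumptions for illustration, not translate-toolkit's API):

```python
def derive_accesskey(translated_label, template_accesskey):
    """Pick an accesskey for a merged .properties entry: the character
    following '&' in the translated label when present, otherwise fall
    back to the template's original accesskey.  Illustrative sketch.
    """
    marker = translated_label.find("&")
    if marker != -1 and marker + 1 < len(translated_label):
        return translated_label[marker + 1]
    return template_accesskey
```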
diff --git a/translate/convert/test_po2sub.py b/translate/convert/test_po2sub.py
index 56b4bea..4babb52 100644
--- a/translate/convert/test_po2sub.py
+++ b/translate/convert/test_po2sub.py
@@ -3,8 +3,7 @@
 
 from pytest import importorskip
 
-from translate.convert import po2sub
-from translate.convert import test_convert
+from translate.convert import po2sub, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -32,7 +31,7 @@ class TestPO2Sub:
         templatefile = wStringIO.StringIO(subsource)
         convertor = po2sub.resub(templatefile, inputpo)
         outputsub = convertor.convertstore()
-        print outputsub
+        print(outputsub)
         return outputsub
 
     def test_subrip(self):
@@ -62,7 +61,7 @@ Blah blah blah blah
 Koei koei koei koei
 '''
         subfile = self.merge2sub(subtemplate, posource)
-        print subexpected
+        print(subexpected)
         assert subfile == subexpected
 
 
diff --git a/translate/convert/test_po2tiki.py b/translate/convert/test_po2tiki.py
index 7f9a25f..ae29478 100644
--- a/translate/convert/test_po2tiki.py
+++ b/translate/convert/test_po2tiki.py
@@ -5,9 +5,7 @@
 # Author: Wil Clouser <wclouser at mozilla.com>
 # Date: 2008-12-01
 
-from translate.convert import po2tiki
-from translate.storage import tiki
-from translate.convert import test_convert
+from translate.convert import po2tiki, test_convert
 from translate.misc import wStringIO
 
 
diff --git a/translate/convert/test_po2tmx.py b/translate/convert/test_po2tmx.py
index 9ac27b4..d991767 100644
--- a/translate/convert/test_po2tmx.py
+++ b/translate/convert/test_po2tmx.py
@@ -1,11 +1,10 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.convert import po2tmx
-from translate.convert import test_convert
+from translate.convert import po2tmx, test_convert
 from translate.misc import wStringIO
-from translate.storage import tmx
 from translate.misc.xml_helpers import XML_NS
+from translate.storage import tmx
 
 
 class TestPO2TMX:
@@ -42,8 +41,8 @@ msgid "Applications"
 msgstr "Toepassings"
 """
         tmx = self.po2tmx(minipo)
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         assert tmx.translate("Applications") == "Toepassings"
         assert tmx.translate("bla") is None
         xmltext = str(tmx)
@@ -58,16 +57,16 @@ msgstr "Toepassings"
     def test_sourcelanguage(self):
         minipo = 'msgid "String"\nmsgstr "String"\n'
         tmx = self.po2tmx(minipo, sourcelanguage="xh")
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         header = tmx.document.find("header")
         assert header.get("srclang") == "xh"
 
     def test_targetlanguage(self):
         minipo = 'msgid "String"\nmsgstr "String"\n'
         tmx = self.po2tmx(minipo, targetlanguage="xh")
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         tuv = tmx.document.findall(".//%s" % tmx.namespaced("tuv"))[1]
         #tag[0] will be the source; we want the target tuv
         assert tuv.get("{%s}lang" % XML_NS) == "xh"
@@ -79,8 +78,8 @@ msgstr "Toepassings"
 msgstr "Eerste deel "
 "en ekstra"'''
         tmx = self.po2tmx(minipo)
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         assert tmx.translate('First part and extra') == 'Eerste deel en ekstra'
 
     def test_escapednewlines(self):
@@ -89,8 +88,8 @@ msgstr "Eerste deel "
 msgstr "Eerste lyn\nTweede lyn"
 '''
         tmx = self.po2tmx(minipo)
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         assert tmx.translate("First line\nSecond line") == "Eerste lyn\nTweede lyn"
 
     def test_escapedtabs(self):
@@ -99,8 +98,8 @@ msgstr "Eerste lyn\nTweede lyn"
 msgstr "Eerste kolom\tTweede kolom"
 '''
         tmx = self.po2tmx(minipo)
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         assert tmx.translate("First column\tSecond column") == "Eerste kolom\tTweede kolom"
 
     def test_escapedquotes(self):
@@ -112,8 +111,8 @@ msgid "Use \\\"."
 msgstr "Gebruik \\\"."
 '''
         tmx = self.po2tmx(minipo)
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         assert tmx.translate('Hello "Everyone"') == 'Good day "All"'
         assert tmx.translate(r'Use \".') == r'Gebruik \".'
 
@@ -130,8 +129,8 @@ msgid ""
 msgstr "Drie"
 '''
         tmx = self.po2tmx(minipo)
-        print "The generated xml:"
-        print str(tmx)
+        print("The generated xml:")
+        print(str(tmx))
         assert "<tu" not in str(tmx)
         assert len(tmx.units) == 0
 
@@ -141,7 +140,7 @@ msgstr "Drie"
 msgstr "Bézier-kurwe"
 '''
         tmx = self.po2tmx(minipo)
-        print str(tmx)
+        print(str(tmx))
         assert tmx.translate(u"Bézier curve") == u"Bézier-kurwe"
 
     def test_nonecomments(self):
@@ -151,7 +150,7 @@ msgid "Bézier curve"
 msgstr "Bézier-kurwe"
 '''
         tmx = self.po2tmx(minipo)
-        print str(tmx)
+        print(str(tmx))
         unit = tmx.findunits(u"Bézier curve")
         assert len(unit[0].getnotes()) == 0
 
@@ -162,7 +161,7 @@ msgid "Bézier curve"
 msgstr "Bézier-kurwe"
 '''
         tmx = self.po2tmx(minipo, comment='others')
-        print str(tmx)
+        print(str(tmx))
         unit = tmx.findunits(u"Bézier curve")
         assert unit[0].getnotes() == u"My comment rules"
 
@@ -173,9 +172,9 @@ msgid "Bézier curve"
 msgstr "Bézier-kurwe"
 '''
         tmx = self.po2tmx(minipo, comment='source')
-        print str(tmx)
+        print(str(tmx))
         unit = tmx.findunits(u"Bézier curve")
-        assert unit[0].getnotes() == u": ../PuzzleFourSided.h:45"
+        assert unit[0].getnotes() == u"../PuzzleFourSided.h:45"
 
     def test_typecomments(self):
         """Tests that type comments are imported."""
@@ -184,9 +183,10 @@ msgid "Bézier curve"
 msgstr "Bézier-kurwe"
 '''
         tmx = self.po2tmx(minipo, comment='type')
-        print str(tmx)
+        print(str(tmx))
         unit = tmx.findunits(u"Bézier curve")
-        assert unit[0].getnotes() == u", csharp-format"
+        assert unit[0].getnotes() == u"csharp-format"
+
 
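The two assertion changes above (notes now `../PuzzleFourSided.h:45` and `csharp-format`, without the leading `: ` / `, `) suggest comment text is stored without its PO marker. A hypothetical normalisation illustrating the new expectation (not the actual po2tmx code):

```python
def po_comment_text(line):
    # Strip a PO comment marker such as '#:', '#,' or '#.' together
    # with the separator after it, leaving only the note text.
    for prefix in ("#:", "#,", "#."):
        if line.startswith(prefix):
            return line[len(prefix):].strip()
    return line.lstrip("#").strip()
```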
 class TestPO2TMXCommand(test_convert.TestConvertCommand, TestPO2TMX):
     """Tests running actual po2tmx commands on files"""
diff --git a/translate/convert/test_po2ts.py b/translate/convert/test_po2ts.py
index 21471fb..296893f 100644
--- a/translate/convert/test_po2ts.py
+++ b/translate/convert/test_po2ts.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.convert import po2ts
-from translate.convert import test_convert
+from translate.convert import po2ts, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -27,7 +26,7 @@ class TestPO2TS:
 msgid "Term"
 msgstr "asdf"'''
         tsfile = self.po2ts(minipo)
-        print tsfile
+        print(tsfile)
         assert "<name>term.cpp</name>" in tsfile
         assert "<source>Term</source>" in tsfile
         assert "<translation>asdf</translation>" in tsfile
@@ -42,7 +41,7 @@ msgid "Source"
 msgstr "Target"
 '''
         tsfile = self.po2ts(posource)
-        print tsfile
+        print(tsfile)
         # The other section are a duplicate of test_simplentry
         # FIXME need to think about auto vs trans comments maybe in TS v1.1
         assert "<comment>Translator comment</comment>" in tsfile
@@ -54,7 +53,7 @@ msgstr "Target"
 msgid "Source"
 msgstr "Target"'''
         tsfile = self.po2ts(posource)
-        print tsfile
+        print(tsfile)
         assert '''<translation type="unfinished">Target</translation>''' in tsfile
 
     def test_obsolete(self):
@@ -64,7 +63,7 @@ msgstr "Target"'''
 msgid "Source"
 msgstr "Target"'''
         tsfile = self.po2ts(posource)
-        print tsfile
+        print(tsfile)
         assert '''<translation type="obsolete">Target</translation>''' in tsfile
 
     def test_duplicates(self):
@@ -78,7 +77,7 @@ msgid "English"
 msgstr "b"
 '''
         tsfile = self.po2ts(posource)
-        print tsfile
+        print(tsfile)
         assert tsfile.find("English") != tsfile.rfind("English")
 
 
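An aside on the conversion pattern applied throughout this patch (not part of the diff itself): single-argument calls such as `print(str(tmx))` behave identically under Python 2's print statement and Python 3's print function, but multi-argument calls like the converted `print("Expected:\n", poexpected1, ...)` only match across versions when `print_function` is imported from `__future__`. A minimal sketch of the difference:

```python
from __future__ import print_function  # makes print a function on Python 2 too

import io
from contextlib import redirect_stdout  # Python 3 helper, used only to capture output

buf = io.StringIO()
with redirect_stdout(buf):
    # Single argument: identical output on Python 2 (statement) and 3 (function).
    print("The generated xml:")
    # Multiple arguments: without the __future__ import, Python 2 would render
    # this as the tuple ('Expected:', 'x'); with it, both versions print "Expected: x".
    print("Expected:", "x")

assert buf.getvalue() == "The generated xml:\nExpected: x\n"
```

This is why a mechanical statement-to-call rewrite, as done in these test files, is safe for the single-argument cases but relies on the future import (or Python 3) for the comma-separated ones.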
diff --git a/translate/convert/test_po2txt.py b/translate/convert/test_po2txt.py
index 74134e6..5967cfd 100644
--- a/translate/convert/test_po2txt.py
+++ b/translate/convert/test_po2txt.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.convert import po2txt
-from translate.convert import test_convert
+from translate.convert import po2txt, test_convert
 from translate.misc import wStringIO
 
 
@@ -11,14 +10,14 @@ class TestPO2Txt:
     def po2txt(self, posource, txttemplate=None):
         """helper that converts po source to txt source without requiring files"""
         inputfile = wStringIO.StringIO(posource)
-        print inputfile.getvalue()
+        print(inputfile.getvalue())
         outputfile = wStringIO.StringIO()
         if txttemplate:
             templatefile = wStringIO.StringIO(txttemplate)
         else:
             templatefile = None
         assert po2txt.converttxt(inputfile, outputfile, templatefile)
-        print outputfile.getvalue()
+        print(outputfile.getvalue())
         return outputfile.getvalue()
 
     def test_basic(self):
diff --git a/translate/convert/test_po2xliff.py b/translate/convert/test_po2xliff.py
index acbccdd..db24634 100644
--- a/translate/convert/test_po2xliff.py
+++ b/translate/convert/test_po2xliff.py
@@ -1,9 +1,8 @@
 #!/usr/bin/env python
 
 from translate.convert import po2xliff
-from translate.storage import po
-from translate.storage import poxliff
 from translate.misc.xml_helpers import XML_NS, getText
+from translate.storage import po, poxliff
 
 
 class TestPO2XLIFF:
@@ -24,8 +23,8 @@ class TestPO2XLIFF:
     def test_minimal(self):
         minipo = '''msgid "red"\nmsgstr "rooi"\n'''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
-        print str(xliff)
+        print("The generated xml:")
+        print(str(xliff))
         assert len(xliff.units) == 1
         assert xliff.translate("red") == "rooi"
         assert xliff.translate("bla") is None
@@ -51,8 +50,8 @@ msgid "Applications"
 msgstr "Toepassings"
 """
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
-        print str(xliff)
+        print("The generated xml:")
+        print(str(xliff))
         assert xliff.translate("Applications") == "Toepassings"
         assert xliff.translate("bla") is None
         xmltext = str(xliff)
@@ -69,8 +68,8 @@ msgstr "Toepassings"
 msgstr "Eerste deel "
 "en ekstra"'''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
-        print str(xliff)
+        print("The generated xml:")
+        print(str(xliff))
         assert xliff.translate('First part and extra') == 'Eerste deel en ekstra'
 
     def test_escapednewlines(self):
@@ -79,9 +78,9 @@ msgstr "Eerste deel "
 msgstr "Eerste lyn\nTweede lyn"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert xliff.translate("First line\nSecond line") == "Eerste lyn\nTweede lyn"
         assert xliff.translate("First line\\nSecond line") is None
         assert xmltext.find("line\\nSecond") == -1
@@ -95,9 +94,9 @@ msgstr "Eerste lyn\nTweede lyn"
 msgstr "Eerste kolom\tTweede kolom"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert xliff.translate("First column\tSecond column") == "Eerste kolom\tTweede kolom"
         assert xliff.translate("First column\\tSecond column") is None
         assert xmltext.find("column\\tSecond") == -1
@@ -114,9 +113,9 @@ msgid "Use \\\"."
 msgstr "Gebruik \\\"."
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert xliff.translate('Hello "Everyone"') == 'Good day "All"'
         assert xliff.translate(r'Use \".') == r'Gebruik \".'
         assert xmltext.find(r'\"') > 0 or xmltext.find(r'\"') > 0
@@ -134,9 +133,9 @@ msgid "one"
 msgstr "kunye"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert xliff.translate("one") == "kunye"
         assert len(xliff.units) == 1
         node = xliff.units[0].xmlelement
@@ -155,9 +154,9 @@ msgid "one"
 msgstr "kunye"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert xliff.translate("one") == "kunye"
         assert len(xliff.units) == 1
         node = xliff.units[0].xmlelement
@@ -178,9 +177,9 @@ msgid "one"
 msgstr "kunye"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert xliff.translate("one") == "kunye"
         assert len(xliff.units) == 1
         node = xliff.units[0].xmlelement
@@ -201,9 +200,9 @@ msgstr ""
 "Content-Type: text/plain; charset=UTF-8\n"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert len(xliff.units) == 1
         unit = xliff.units[0]
         assert unit.source == unit.target == "Content-Type: text/plain; charset=UTF-8\n"
@@ -221,9 +220,9 @@ msgid "three"
 msgstr "raro"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert len(xliff.units) == 2
         assert xliff.units[0].isfuzzy()
         assert not xliff.units[1].isfuzzy()
@@ -235,9 +234,9 @@ msgstr[0] "inkomo"
 msgstr[1] "iinkomo"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert len(xliff.units) == 1
         assert xliff.translate("cow") == "inkomo"
 
@@ -249,9 +248,9 @@ msgstr[1] "iinkomo"
 msgstr[2] "iiinkomo"
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert len(xliff.units) == 1
         assert xliff.translate("cow") == "inkomo"
 
@@ -267,7 +266,7 @@ msgstr "Uno"
         minipo = r'''msgid "%s%s%s%s has made %s his or her buddy%s%s"
 msgstr "%s%s%s%s het %s sy/haar vriend/vriendin gemaak%s%s"'''
         xliff = self.po2xliff(minipo)
-        print xliff.units[0].source
+        print(xliff.units[0].source)
         assert xliff.units[0].source == "%s%s%s%s has made %s his or her buddy%s%s"
 
     def test_approved(self):
@@ -282,9 +281,9 @@ msgid "four"
 msgstr ""
 '''
         xliff = self.po2xliff(minipo)
-        print "The generated xml:"
+        print("The generated xml:")
         xmltext = str(xliff)
-        print xmltext
+        print(xmltext)
         assert len(xliff.units) == 3
         assert xliff.units[0].xmlelement.get("approved") != "yes"
         assert not xliff.units[0].isapproved()
diff --git a/translate/convert/test_pot2po.py b/translate/convert/test_pot2po.py
index 5cea190..e8fa27f 100644
--- a/translate/convert/test_pot2po.py
+++ b/translate/convert/test_pot2po.py
@@ -5,8 +5,7 @@ import warnings
 
 from pytest import mark
 
-from translate.convert import pot2po
-from translate.convert import test_convert
+from translate.convert import pot2po, test_convert
 from translate.misc import wStringIO
 from translate.storage import po
 
@@ -35,7 +34,7 @@ class TestPOT2PO:
         """checks that the pofile contains a single non-header unit, and returns it"""
         assert len(pofile.units) == 2
         assert pofile.units[0].isheader()
-        print pofile.units[1]
+        print(pofile.units[1])
         return pofile.units[1]
 
     def test_convertpot_blank(self):
@@ -113,7 +112,7 @@ msgstr[1] "%d handleidings."
         posource = '''#: simple.label\n#: simple.accesskey\nmsgid "A &hard coded newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n'''
         poexpected = '''#: simple.label\n#: simple.accesskey\n#, fuzzy\nmsgid "Its &hard coding a newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n'''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert str(self.singleunit(newpo)) == poexpected
 
     def test_merging_location_change(self):
@@ -122,7 +121,7 @@ msgstr[1] "%d handleidings."
         posource = '''#: simple.label%ssimple.accesskey\nmsgid "A &hard coded newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n''' % po.lsep
         poexpected = '''#: new_simple.label%snew_simple.accesskey\nmsgid "A &hard coded newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n''' % po.lsep
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert str(self.singleunit(newpo)) == poexpected
 
     def test_merging_location_and_whitespace_change(self):
@@ -133,7 +132,7 @@ msgstr[1] "%d handleidings."
         posource = '''#: doublespace.label%sdoublespace.accesskey\nmsgid "&We  have  spaces"\nmsgstr "&One  het  spasies"\n''' % po.lsep
         poexpected = '''#: singlespace.label%ssinglespace.accesskey\n#, fuzzy\nmsgid "&We have spaces"\nmsgstr "&One  het  spasies"\n''' % po.lsep
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert str(self.singleunit(newpo)) == poexpected
 
     def test_merging_location_ambiguous_with_disambiguous(self):
@@ -145,7 +144,7 @@ msgstr[1] "%d handleidings."
         poexpected1 = '''#: location.c:1\n#, fuzzy\nmsgid ""\n"_: location.c:1\\n"\n"Source"\nmsgstr "Target"\n'''
         poexpected2 = '''#: location.c:10\n#, fuzzy\nmsgid ""\n"_: location.c:10\\n"\n"Source"\nmsgstr "Target"\n'''
         newpo = self.convertpot(potsource, posource)
-        print "Expected:\n", poexpected1, "Actual:\n", newpo.units[1]
+        print("Expected:\n", poexpected1, "Actual:\n", newpo.units[1])
         assert str(newpo.units[1]) == poexpected1
         assert str(newpo.units[2]) == poexpected2
 
@@ -156,7 +155,7 @@ msgstr[1] "%d handleidings."
         posource = '''#: someline.c\nmsgid "&About"\nmsgstr "&Info"\n'''
         poexpected = '''#: someline.c\nmsgid "A&bout"\nmsgstr "&Info"\n'''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert str(self.singleunit(newpo)) == poexpected
 
     @mark.xfail(reason="Not Implemented - review if this is even correct")
@@ -210,10 +209,10 @@ msgstr "Sekuriteit"
         poexpected = posource
         newpo = self.convertpot(potsource, posource)
         newpounit = self.singleunit(newpo)
-        print "expected"
-        print poexpected
-        print "got:"
-        print str(newpounit)
+        print("expected")
+        print(poexpected)
+        print("got:")
+        print(str(newpounit))
         assert str(newpounit) == poexpected
 
     def test_merging_msgidcomments(self):
@@ -276,7 +275,7 @@ msgstr "Sertifikate"
         potsource = '''msgid "One"\nmsgid_plural "Two"\nmsgstr[0] ""\nmsgstr[1] ""\n'''
         posource = '''msgid "One"\nmsgid_plural "Two"\nmsgstr[0] "Een"\nmsgstr[1] "Twee"\nmsgstr[2] "Drie"\n'''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         newpounit = self.singleunit(newpo)
         assert str(newpounit) == posource
 
@@ -287,7 +286,7 @@ msgstr "Sertifikate"
         posource = '# Some comment\n#. Extracted comment\n#: obsoleteme:10\nmsgid "One"\nmsgstr "Een"\n'
         expected = '# Some comment\n#~ msgid "One"\n#~ msgstr "Een"\n'
         newpo = self.convertpot(potsource, posource)
-        print str(newpo)
+        print(str(newpo))
         newpounit = self.singleunit(newpo)
         assert str(newpounit) == expected
 
@@ -297,7 +296,7 @@ msgstr "Sertifikate"
         potsource = 'msgid ""\nmsgstr ""\n'
         posource = '#: obsoleteme:10\nmsgid "One"\nmsgstr ""\n'
         newpo = self.convertpot(potsource, posource)
-        print str(newpo)
+        print(str(newpo))
         # We should only have the header
         assert len(newpo.units) == 1
 
@@ -327,7 +326,7 @@ msgstr "Sertifikate"
         posource = '''#~ msgid "&About"\n#~ msgstr "&Omtrent"\n'''
         expected = '''#: resurect.c\nmsgid "&About"\nmsgstr "&Omtrent"\n'''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert len(newpo.units) == 2
         assert newpo.units[0].isheader()
         newpounit = self.singleunit(newpo)
@@ -341,7 +340,7 @@ msgstr "Sertifikate"
         expected1 = '''#: resurect1.c\nmsgid "About"\nmsgstr "Omtrent"\n'''
         expected2 = '''#: resurect2.c\n#, fuzzy\nmsgid ""\n"_: resurect2.c\\n"\n"About"\nmsgstr "Omtrent"\n'''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert len(newpo.units) == 3
         assert newpo.units[0].isheader()
         assert str(newpo.units[1]) == expected1
@@ -393,8 +392,8 @@ msgstr ""
 "X-Generator: Translate Toolkit 0.10rc2\n"
 '''
         newpo = self.convertpot(potsource, posource)
-        print 'Output Header:\n%s' % newpo
-        print 'Expected Header:\n%s' % expected
+        print('Output Header:\n%s' % newpo)
+        print('Expected Header:\n%s' % expected)
         assert str(newpo) == expected
 
     def test_merging_comments(self):
@@ -403,7 +402,7 @@ msgstr ""
         posource = '''#. Don't do it!\n#: file.py:2\nmsgid "One"\nmsgstr "Een"\n'''
         poexpected = '''#. Don't do it!\n#: file.py:1\nmsgid "One"\nmsgstr "Een"\n'''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         newpounit = self.singleunit(newpo)
         assert str(newpounit) == poexpected
 
@@ -414,7 +413,7 @@ msgstr ""
         poexpected = '''#: file.c:1\n#, c-format\nmsgid "%d pipes"\nmsgstr "%d pype"\n'''
         newpo = self.convertpot(potsource, posource)
         newpounit = self.singleunit(newpo)
-        print newpounit
+        print(newpounit)
         assert str(newpounit) == poexpected
 
         potsource = '''#: file.c:1\n#, c-format\nmsgid "%d computers"\nmsgstr ""\n'''
@@ -462,7 +461,7 @@ msgid "text"
 msgstr "teks"
 """
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert poexpected in str(newpo)
 
     def test_msgctxt_multiline(self):
@@ -636,7 +635,7 @@ msgid ""
 msgstr "trans"
 """
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert len(newpo.units) == 2
         assert newpo.units[0].isheader()
         unit = newpo.units[1]
@@ -674,7 +673,7 @@ msgid ""
 msgstr "trans"
 '''
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert len(newpo.units) == 2
         assert newpo.units[0].isheader()
         unit = newpo.units[1]
@@ -703,7 +702,7 @@ msgstr "Eerste eenheid"
 #~ msgstr "Ou eenheid3"
 """
         newpo = self.convertpot(potsource, posource)
-        print newpo
+        print(newpo)
         assert len(newpo.units) == 5
         assert newpo.units[1].getcontext() == 'newContext'
         # Search in unit string, because obsolete units can't return a context
@@ -774,8 +773,8 @@ msgid "R"
 msgstr ""
 '''
         newpo = self.convertpot(potsource, posource)
-        print 'Output:\n%s' % newpo
-        print 'Expected:\n%s' % expected
+        print('Output:\n%s' % newpo)
+        print('Expected:\n%s' % expected)
         assert str(newpo) == expected
 
 
diff --git a/translate/convert/test_prop2mozfunny.py b/translate/convert/test_prop2mozfunny.py
index 196df87..d204c90 100644
--- a/translate/convert/test_prop2mozfunny.py
+++ b/translate/convert/test_prop2mozfunny.py
@@ -13,7 +13,7 @@ class TestPO2Prop:
         outputfile = wStringIO.StringIO()
         result = prop2mozfunny.po2inc(inputfile, outputfile, templatefile)
         outputinc = outputfile.getvalue()
-        print outputinc
+        print(outputinc)
         assert result
         return outputinc
 
@@ -23,7 +23,7 @@ class TestPO2Prop:
         inctemplate = '''#define MOZ_LANG_TITLE Deutsch (DE)\n'''
         incexpected = inctemplate
         incfile = self.merge2inc(inctemplate, posource)
-        print incfile
+        print(incfile)
         assert incfile == incexpected
 
     def test_uncomment_contributors(self):
@@ -36,7 +36,7 @@ msgstr "<em:contributor>Mr Fury</em:contributor>"
         inctemplate = '''# #define MOZ_LANGPACK_CONTRIBUTORS <em:contributor>Joe Solon</em:contributor>\n'''
         incexpected = '''#define MOZ_LANGPACK_CONTRIBUTORS <em:contributor>Mr Fury</em:contributor>\n'''
         incfile = self.merge2inc(inctemplate, posource)
-        print incfile
+        print(incfile)
         assert incfile == incexpected
 
     def test_multiline_comment_newlines(self):
@@ -48,5 +48,5 @@ msgstr "<em:contributor>Mr Fury</em:contributor>"
 '''
         incexpected = inctemplate
         incfile = self.merge2inc(inctemplate, None)
-        print incfile
+        print(incfile)
         assert incfile == incexpected
diff --git a/translate/convert/test_prop2po.py b/translate/convert/test_prop2po.py
index 34e0dc8..6248460 100644
--- a/translate/convert/test_prop2po.py
+++ b/translate/convert/test_prop2po.py
@@ -3,20 +3,18 @@
 
 from pytest import mark
 
-from translate.convert import prop2po
-from translate.convert import test_convert
+from translate.convert import prop2po, test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import properties
+from translate.storage import po, properties
 
 
 class TestProp2PO:
 
-    def prop2po(self, propsource, proptemplate=None):
+    def prop2po(self, propsource, proptemplate=None, personality="java"):
         """helper that converts .properties source to po source without requiring files"""
         inputfile = wStringIO.StringIO(propsource)
-        inputprop = properties.propfile(inputfile)
-        convertor = prop2po.prop2po()
+        inputprop = properties.propfile(inputfile, personality=personality)
+        convertor = prop2po.prop2po(personality=personality)
         if proptemplate:
             templatefile = wStringIO.StringIO(proptemplate)
             templateprop = properties.propfile(templatefile)
@@ -37,13 +35,13 @@ class TestProp2PO:
         """checks that the pofile contains a single non-header element, and returns it"""
         assert len(pofile.units) == 2
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return pofile.units[1]
 
     def countelements(self, pofile):
         """counts the number of non-header entries"""
         assert pofile.units[0].isheader()
-        print pofile
+        print(pofile)
         return len(pofile.units) - 1
 
     def test_simpleentry(self):
@@ -104,8 +102,8 @@ class TestProp2PO:
         propsource = 'nb = %s\n' % unistring
         pofile = self.prop2po(propsource)
         pounit = self.singleelement(pofile)
-        print repr(pofile.units[0].target)
-        print repr(pounit.source)
+        print(repr(pofile.units[0].target))
+        print(repr(pounit.source))
         assert pounit.source == u'Norsk bokm\u00E5l'
 
     def test_multiline_escaping(self):
@@ -114,7 +112,7 @@ class TestProp2PO:
 of connections to this server. If so, use the Advanced IMAP Server Settings dialog to \
 reduce the number of cached connections."""
         pofile = self.prop2po(propsource)
-        print repr(pofile.units[1].target)
+        print(repr(pofile.units[1].target))
         assert self.countelements(pofile) == 1
 
     def test_comments(self):
@@ -135,20 +133,19 @@ prefPanel-smime=Security'''
 prefPanel-smime=
 '''
         pofile = self.prop2po(propsource)
-        print str(pofile)
+        print(str(pofile))
         #header comments:
         assert "#. # Comment\n#. # commenty 2" in str(pofile)
         pounit = self.singleelement(pofile)
         assert pounit.getnotes("developer") == "## @name GENERIC_ERROR\n## @loc none"
 
-    @mark.xfail(reason="Not Implemented")
     def test_folding_accesskeys(self):
         """check that we can fold various accesskeys into their associated label (bug #115)"""
-        propsource = r'''cmd_addEngine = Add Engines...
-cmd_addEngine_accesskey = A'''
-        pofile = self.prop2po(propsource)
+        propsource = r'''cmd_addEngine.label = Add Engines...
+cmd_addEngine.accesskey = A'''
+        pofile = self.prop2po(propsource, personality="mozilla")
         pounit = self.singleelement(pofile)
-        assert pounit.target == "&Add Engines..."
+        assert pounit.source == "&Add Engines..."
 
     def test_dont_translate(self):
         """check that we know how to ignore don't translate instructions in properties files (bug #116)"""
@@ -212,19 +209,14 @@ do=translate me
         (accelerators, merge criterion).
         """
         propsource = '''prop=value\n'''
-        convertor = prop2po.prop2po()
 
-        inputfile = wStringIO.StringIO(propsource)
-        inputprop = properties.propfile(inputfile, personality="mozilla")
-        outputpo = convertor.convertstore(inputprop, personality="mozilla")
+        outputpo = self.prop2po(propsource, personality="mozilla")
         assert "X-Accelerator-Marker" in str(outputpo)
         assert "X-Merge-On" in str(outputpo)
 
         # Even though the gaia flavour inherrits from mozilla, it should not
         # get the header
-        inputfile = wStringIO.StringIO(propsource)
-        inputprop = properties.propfile(inputfile, personality="gaia")
-        outputpo = convertor.convertstore(inputprop, personality="gaia")
+        outputpo = self.prop2po(propsource, personality="gaia")
         assert "X-Accelerator-Marker" not in str(outputpo)
         assert "X-Merge-On" not in str(outputpo)
 
@@ -239,19 +231,54 @@ message-multiedit-header[few]={{ n }} selected
 message-multiedit-header[many]={{ n }} selected
 message-multiedit-header[other]={{ n }} selected
 '''
-        convertor = prop2po.prop2po()
-        inputfile = wStringIO.StringIO(propsource)
-        inputprop = properties.propfile(inputfile, personality="gaia")
-        outputpo = convertor.convertstore(inputprop, personality="gaia")
+        outputpo = self.prop2po(propsource, personality="gaia")
         pounit = outputpo.units[-1]
         assert pounit.hasplural()
         assert pounit.getlocations() == [u'message-multiedit-header']
 
-        print outputpo
+        print(outputpo)
         zero_unit = outputpo.units[-2]
         assert not zero_unit.hasplural()
         assert zero_unit.source == u"Edit"
 
+    def test_successive_gaia_plurals(self):
+        """Test conversion of two successive gaia plural units."""
+        propsource = '''
+message-multiedit-header={[ plural(n) ]}
+message-multiedit-header[zero]=Edit
+message-multiedit-header[one]={{ n }} selected
+message-multiedit-header[two]={{ n }} selected
+message-multiedit-header[few]={{ n }} selected
+message-multiedit-header[many]={{ n }} selected
+message-multiedit-header[other]={{ n }} selected
+
+message-multiedit-header2={[ plural(n) ]}
+message-multiedit-header2[zero]=Edit 2
+message-multiedit-header2[one]={{ n }} selected 2
+message-multiedit-header2[two]={{ n }} selected 2
+message-multiedit-header2[few]={{ n }} selected 2
+message-multiedit-header2[many]={{ n }} selected 2
+message-multiedit-header2[other]={{ n }} selected 2
+'''
+        outputpo = self.prop2po(propsource, personality="gaia")
+        pounit = outputpo.units[-1]
+        assert pounit.hasplural()
+        assert pounit.getlocations() == [u'message-multiedit-header2']
+
+        pounit = outputpo.units[-3]
+        assert pounit.hasplural()
+        assert pounit.getlocations() == [u'message-multiedit-header']
+
+        print(outputpo)
+        zero_unit = outputpo.units[-2]
+        assert not zero_unit.hasplural()
+        assert zero_unit.source == u"Edit 2"
+
+        zero_unit = outputpo.units[-4]
+        assert not zero_unit.hasplural()
+        assert zero_unit.source == u"Edit"
+
+
 class TestProp2POCommand(test_convert.TestConvertCommand, TestProp2PO):
     """Tests running actual prop2po commands on files"""
     convertmodule = prop2po
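The new gaia-plural tests above exercise a `.properties` convention where a base key carries the `{[ plural(n) ]}` macro and `key[category]=...` lines supply the plural variants. As an illustrative aside (a simplified sketch of the format, not translate-toolkit's actual implementation), grouping such keys might look like:

```python
import re

def group_gaia_plurals(text):
    """Group gaia-style plural variant keys under their base key.

    A base key whose value is the "{[ plural(n) ]}" macro is followed by
    "key[category]=..." lines, one per CLDR plural category.
    """
    variants = {}
    order = []
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        m = re.match(r"^(.*)\[(zero|one|two|few|many|other)\]$", key)
        if m:
            base, category = m.groups()
            variants.setdefault(base, {})[category] = value
        elif value == "{[ plural(n) ]}":
            order.append(key)
    return [(base, variants.get(base, {})) for base in order]

plurals = group_gaia_plurals("""
message-multiedit-header={[ plural(n) ]}
message-multiedit-header[zero]=Edit
message-multiedit-header[one]={{ n }} selected
message-multiedit-header[other]={{ n }} selected
""")
assert plurals[0][0] == "message-multiedit-header"
assert plurals[0][1]["one"] == "{{ n }} selected"
```

The test_successive_gaia_plurals case added in this patch checks exactly this kind of grouping across two adjacent plural blocks, including the special handling of the `[zero]` variant as a separate singular unit.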
diff --git a/translate/convert/test_tiki2po.py b/translate/convert/test_tiki2po.py
index d1773fd..3fe7e09 100644
--- a/translate/convert/test_tiki2po.py
+++ b/translate/convert/test_tiki2po.py
@@ -5,9 +5,7 @@
 # Author: Wil Clouser <wclouser at mozilla.com>
 # Date: 2008-12-01
 
-from translate.convert import tiki2po
-from translate.storage import tiki
-from translate.convert import test_convert
+from translate.convert import test_convert, tiki2po
 from translate.misc import wStringIO
 
 
diff --git a/translate/convert/test_ts2po.py b/translate/convert/test_ts2po.py
index 49a7e50..a873809 100644
--- a/translate/convert/test_ts2po.py
+++ b/translate/convert/test_ts2po.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.convert import ts2po
-from translate.convert import test_convert
+from translate.convert import test_convert, ts2po
 from translate.misc import wStringIO
 
 
@@ -12,8 +11,8 @@ class TestTS2PO:
         converter = ts2po.ts2po()
         tsfile = wStringIO.StringIO(tssource)
         outputpo = converter.convertfile(tsfile)
-        print "The generated po:"
-        print str(outputpo)
+        print("The generated po:")
+        print(str(outputpo))
         return outputpo
 
     def test_blank(self):
diff --git a/translate/convert/test_txt2po.py b/translate/convert/test_txt2po.py
index 1e45691..523cd78 100644
--- a/translate/convert/test_txt2po.py
+++ b/translate/convert/test_txt2po.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.convert import txt2po
-from translate.convert import test_convert
+from translate.convert import test_convert, txt2po
 from translate.misc import wStringIO
 from translate.storage import txt
 
@@ -18,7 +17,7 @@ class TestTxt2PO:
 
     def singleelement(self, storage):
         """checks that the pofile contains a single non-header element, and returns it"""
-        print str(storage)
+        print(str(storage))
         assert len(storage.units) == 1
         return storage.units[0]
 
@@ -73,7 +72,7 @@ class TestDoku2po:
 
     def singleelement(self, storage):
         """checks that the pofile contains a single non-header element, and returns it"""
-        print str(storage)
+        print(str(storage))
         assert len(storage.units) == 1
         return storage.units[0]
 
diff --git a/translate/convert/test_xliff2po.py b/translate/convert/test_xliff2po.py
index 7420526..90bc180 100644
--- a/translate/convert/test_xliff2po.py
+++ b/translate/convert/test_xliff2po.py
@@ -1,14 +1,10 @@
 #!/usr/bin/env python
 
-from translate.convert import po2xliff
-from translate.convert import test_convert
-from translate.convert import xliff2po
+from translate.convert import test_convert, xliff2po
 from translate.misc import wStringIO
-from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import xliff
+from translate.storage import po, xliff
 from translate.storage.poheader import poheader
-from translate.storage.test_base import headerless_len, first_translatable
+from translate.storage.test_base import first_translatable, headerless_len
 
 
 class TestXLIFF2PO:
@@ -27,9 +23,9 @@ class TestXLIFF2PO:
         inputfile = wStringIO.StringIO(xliffsource)
         convertor = xliff2po.xliff2po()
         outputpo = convertor.convertstore(inputfile)
-        print "The generated po:"
-        print type(outputpo)
-        print str(outputpo)
+        print("The generated po:")
+        print(type(outputpo))
+        print(str(outputpo))
         return outputpo
 
     def test_minimal(self):
@@ -63,7 +59,7 @@ Content-Transfer-Encoding: 8bit'''
     <target>utshani</target>
   </trans-unit>''') % (headertext, headertext)
 
-        print minixlf
+        print(minixlf)
         pofile = self.xliff2po(minixlf)
         assert pofile.translate("gras") == "utshani"
         assert pofile.translate("bla") is None
@@ -208,7 +204,7 @@ garbage</note>
         </trans-unit>
 </group>'''
         pofile = self.xliff2po(minixlf)
-        print str(pofile)
+        print(str(pofile))
         potext = str(pofile)
         assert headerless_len(pofile.units) == 1
         assert potext.index('msgid_plural "cows"')
diff --git a/translate/convert/tiki2po.py b/translate/convert/tiki2po.py
index 111fc1a..4bc79dd 100644
--- a/translate/convert/tiki2po.py
+++ b/translate/convert/tiki2po.py
@@ -26,8 +26,7 @@ for examples and usage instructions.
 
 import sys
 
-from translate.storage import tiki
-from translate.storage import po
+from translate.storage import po, tiki
 
 
 class tiki2po:
diff --git a/translate/convert/ts2po.py b/translate/convert/ts2po.py
index d4335c1..4837603 100644
--- a/translate/convert/ts2po.py
+++ b/translate/convert/ts2po.py
@@ -25,8 +25,7 @@ for examples and usage instructions.
 """
 
 
-from translate.storage import po
-from translate.storage import ts
+from translate.storage import po, ts
 
 
 class ts2po:
diff --git a/translate/convert/txt2po.py b/translate/convert/txt2po.py
index a23861f..2aacb9b 100644
--- a/translate/convert/txt2po.py
+++ b/translate/convert/txt2po.py
@@ -24,8 +24,7 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import txt
-from translate.storage import po
+from translate.storage import po, txt
 
 
 class txt2po:
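The tiki2po/ts2po/txt2po changes only consolidate per-module from-imports into a single alphabetised statement (isort style); the names bound are exactly the same objects. Illustrated with stdlib modules rather than the toolkit's own:

```python
# Two separate statements...
from os import path
from os import sep

first_path, first_sep = path, sep

# ...bind exactly the same objects as the consolidated form.
from os import path, sep
```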
diff --git a/translate/convert/web2py2po.py b/translate/convert/web2py2po.py
index f01c3b9..120961c 100644
--- a/translate/convert/web2py2po.py
+++ b/translate/convert/web2py2po.py
@@ -18,8 +18,6 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 #
-# (c) 2009 Dominic König (dominic at nursix.org)
-#
 
 """Convert web2py translation dictionaries (.py) to GNU/gettext PO files.
 
diff --git a/translate/convert/xliff2odf.py b/translate/convert/xliff2odf.py
index 9aa716b..07dc3e3 100644
--- a/translate/convert/xliff2odf.py
+++ b/translate/convert/xliff2odf.py
@@ -24,17 +24,13 @@
 See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/odf2xliff.html
 for examples and usage instructions.
 """
-
-import cStringIO
 import zipfile
+from cStringIO import StringIO
 
 import lxml.etree as etree
 
-from translate.storage import factory
-from translate.storage.xml_extract import unit_tree
-from translate.storage.xml_extract import extract
-from translate.storage.xml_extract import generate
-from translate.storage import odf_shared, odf_io
+from translate.storage import factory, odf_io, odf_shared
+from translate.storage.xml_extract import extract, generate, unit_tree
 
 
 def first_child(unit_node):
@@ -45,7 +41,7 @@ def translate_odf(template, input_file):
 
     def load_dom_trees(template):
         odf_data = odf_io.open_odf(template)
-        return dict((filename, etree.parse(cStringIO.StringIO(data))) for filename, data in odf_data.iteritems())
+        return dict((filename, etree.parse(StringIO(data))) for filename, data in odf_data.iteritems())
 
     def load_unit_tree(input_file, dom_trees):
         store = factory.getobject(input_file)
@@ -109,7 +105,7 @@ def write_odf(xlf_data, template, output_file, dom_trees):
 def convertxliff(input_file, output_file, template):
     """reads in stdin using fromfileclass, converts using convertorclass, writes to stdout"""
     xlf_data = input_file.read()
-    dom_trees = translate_odf(template, cStringIO.StringIO(xlf_data))
+    dom_trees = translate_odf(template, StringIO(xlf_data))
     write_odf(xlf_data, template, output_file, dom_trees)
     output_file.close()
     return True
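Note that `from cStringIO import StringIO` is still Python 2 only; `cStringIO` is gone in Python 3, where `io.StringIO` (text) and `io.BytesIO` (bytes) take its place. A forward-compatible variant of the same pattern (an assumption on my part; this commit itself stays on the Python 2 module):

```python
try:
    from cStringIO import StringIO  # Python 2: fast C implementation
except ImportError:
    from io import StringIO         # Python 3 replacement for text data

# Wrap in-memory data in a file-like object, much as translate_odf()
# does with the XLIFF payload before handing it to the parser.
buf = StringIO(u'<xliff version="1.2"/>')
xlf_data = buf.read()
```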
diff --git a/translate/convert/xliff2oo.py b/translate/convert/xliff2oo.py
index 8777bb1..b83b044 100644
--- a/translate/convert/xliff2oo.py
+++ b/translate/convert/xliff2oo.py
@@ -24,19 +24,17 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
+import logging
 import os
-import sys
 import time
-import logging
 
-from translate.storage import oo
-from translate.storage import factory
-from translate.filters import pofilter
-from translate.filters import checks
-from translate.filters import autocorrect
+from translate.filters import autocorrect, checks, pofilter
+from translate.storage import factory, oo
+
 
 logger = logging.getLogger(__name__)
 
+
 class reoo:
 
     def __init__(self, templatefile, languages=None, timestamp=None, includefuzzy=False, long_keys=False, filteraction="exclude"):
diff --git a/translate/convert/xliff2po.py b/translate/convert/xliff2po.py
index 37e902c..7c3fb77 100644
--- a/translate/convert/xliff2po.py
+++ b/translate/convert/xliff2po.py
@@ -24,9 +24,8 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import po
-from translate.storage import xliff
 from translate.misc import wStringIO
+from translate.storage import po, xliff
 
 
 class xliff2po:
@@ -109,7 +108,10 @@ def convertxliff(inputfile, outputfile, templates, duplicatestyle="msgctxt"):
 
 def main(argv=None):
     from translate.convert import convert
-    formats = {"xlf": ("po", convertxliff)}
+    formats = {
+        "xlf": ("po", convertxliff),
+        "xliff": ("po", convertxliff),
+    }
     parser = convert.ConvertOptionParser(formats, usepots=True,
                                          description=__doc__)
     parser.add_duplicates_option()
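Registering both `xlf` and `xliff` means either file extension now reaches the same converter; each value in the mapping is an (output extension, converter callable) pair. A toy version of that extension dispatch (names are illustrative, not the ConvertOptionParser internals):

```python
def convertxliff(inputname):
    # Stand-in converter: the real one parses XLIFF and emits PO.
    return "po-from:" + inputname

formats = {
    "xlf": ("po", convertxliff),
    "xliff": ("po", convertxliff),
}

def convert(filename):
    # Look up the converter by file extension, as the option parser does.
    extension = filename.rsplit(".", 1)[-1]
    outputext, converter = formats[extension]
    return converter(filename), outputext
```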
diff --git a/translate/filters/autocorrect.py b/translate/filters/autocorrect.py
index 1558ca0..f115e50 100644
--- a/translate/filters/autocorrect.py
+++ b/translate/filters/autocorrect.py
@@ -22,11 +22,7 @@
 
 from translate.filters import decoration
 
-from translate.misc.typecheck import accepts, returns, IsOneOf
 
-
-@accepts(unicode, unicode)
-@returns(IsOneOf(unicode, type(None)))
 def correct(source, target):
     """Runs a set of easy and automatic corrections
 
diff --git a/translate/filters/checks.py b/translate/filters/checks.py
index 35758a5..bf954f5 100644
--- a/translate/filters/checks.py
+++ b/translate/filters/checks.py
@@ -30,20 +30,16 @@ When adding a new test here, please document and explain their behaviour on the
 :doc:`pofilter tests </commands/pofilter_tests>` page.
 """
 
-import re
 import logging
+import re
 
-from translate.filters import decoration
-from translate.filters import helpers
-from translate.filters import prefilters
-from translate.filters import spelling
-from translate.filters.decorators import (critical, functional, cosmetic,
-                                          extraction)
-from translate.lang import factory
-from translate.lang import data
-
+from translate.filters import decoration, helpers, prefilters, spelling
+from translate.filters.decorators import (cosmetic, critical, extraction,
+                                          functional)
+from translate.lang import data, factory
 from translate.misc import lru
 
+
 logger = logging.getLogger(__name__)
 
 # These are some regular expressions that are compiled for use in some tests
@@ -78,6 +74,8 @@ tag_re = re.compile("<[^>]+>")
 
 gconf_attribute_re = re.compile('"[a-z_]+?"')
 
+# XML/HTML tags in LibreOffice help and readme, exclude short tags
+lo_tag_re = re.compile('''<[/]??[a-z][a-z_\-]+?(?:| +[a-z]+?=".*?") *>''')
 
 def tagname(string):
     """Returns the name of the XML/HTML tag in string"""
@@ -167,11 +165,12 @@ class SeriousFilterFailure(FilterFailure):
 #the property/tag that is specified as None. A non-None value of "value"
 #indicates that the value of the attribute must be taken into account.
 common_ignoretags = [(None, "xml-lang", None)]
-common_canchangetags = [("img", "alt", None),
-                        (None, "title", None),
-                        (None, "dir", None),
-                        (None, "lang", None),
-                       ]
+common_canchangetags = [
+    ("img", "alt", None),
+    (None, "title", None),
+    (None, "dir", None),
+    (None, "lang", None),
+]
 # Actually the title tag is allowed on many tags in HTML (but probably not all)
 
 
@@ -446,13 +445,13 @@ class UnitChecker(object):
 
             try:
                 filterresult = self.run_test(filterfunction, unit)
-            except FilterFailure, e:
+            except FilterFailure as e:
                 filterresult = False
                 filtermessage = unicode(e)
-            except Exception, e:
+            except Exception as e:
                 if self.errorhandler is None:
-                    raise ValueError("error in filter %s: %r, %r, %s" % \
-                            (functionname, unit.source, unit.target, e))
+                    raise ValueError("error in filter %s: %r, %r, %s" %
+                                     (functionname, unit.source, unit.target, e))
                 else:
                     filterresult = self.errorhandler(functionname, unit.source,
                                                      unit.target, e)
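The `except FilterFailure, e` to `except FilterFailure as e` rewrites are the load-bearing part of this hunk: the comma form was removed in Python 3, while `as` has worked since Python 2.6. A condensed sketch of the converted error path (FilterFailure is a stand-in class here):

```python
class FilterFailure(Exception):
    """Stand-in for translate.filters.checks.FilterFailure."""

def run_test(unit):
    # Pretend the check rejected the unit.
    raise FilterFailure("startcaps: capitalisation differs")

try:
    filterresult = run_test("unit")
except FilterFailure as e:  # valid on Python 2.6+ and on Python 3
    filterresult = False
    filtermessage = str(e)
```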
@@ -508,7 +507,7 @@ class TranslationChecker(UnitChecker):
                 try:
                     if not test(self.str1, unicode(pluralform)):
                         filterresult = False
-                except FilterFailure, e:
+                except FilterFailure as e:
                     filterresult = False
                     filtermessages.extend(e.messages)
 
@@ -642,9 +641,9 @@ class StandardChecker(TranslationChecker):
         if self.config.notranslatewords:
             words1 = str1.split()
             if len(words1) == 1 and [word for word in words1 if word in self.config.notranslatewords]:
-            #currently equivalent to:
-            #   if len(words1) == 1 and words1[0] in self.config.notranslatewords:
-            #why do we only test for one notranslate word?
+                #currently equivalent to:
+                #   if len(words1) == 1 and words1[0] in self.config.notranslatewords:
+                #why do we only test for one notranslate word?
                 return True
 
         # we could also check for things like str1.isnumeric(), but the test
@@ -850,7 +849,7 @@ class StandardChecker(TranslationChecker):
                         if match2.group('fullvar') != match1.group('fullvar'):
                             raise FilterFailure(u"Different printf variable: %s" % match2.group())
 
-                if str1ord == None:
+                if str1ord is None:
                     raise FilterFailure(u"Added printf variable: %s" % match2.group())
             elif str2key:
                 str1key = None
@@ -868,7 +867,7 @@ class StandardChecker(TranslationChecker):
                         if match1.group('fullvar') != match2.group('fullvar'):
                             raise FilterFailure(u"Different printf variable: %s" % match2.group())
 
-                if str1key == None:
+                if str1key is None:
                     raise FilterFailure(u"Added printf variable: %s" % match2.group())
             else:
                 for var_num1, match1 in enumerate(printf_pat.finditer(str1)):
@@ -1514,7 +1513,8 @@ class StandardChecker(TranslationChecker):
                          "isreview", "notranslatewords", "musttranslatewords",
                          "emails", "simpleplurals", "urls", "printf",
                          "tabs", "newlines", "functions", "options",
-                         "blank", "nplurals", "gconf", "dialogsizes"),
+                         "blank", "nplurals", "gconf", "dialogsizes",
+                         "validxml"),
           "blank": ("simplecaps", "variables", "startcaps",
                     "accelerators", "brackets", "endpunc",
                     "acronyms", "xmltags", "startpunc",
@@ -1525,7 +1525,7 @@ class StandardChecker(TranslationChecker):
                     "isreview", "notranslatewords", "musttranslatewords",
                     "emails", "simpleplurals", "urls", "printf",
                     "tabs", "newlines", "functions", "options",
-                    "gconf", "dialogsizes"),
+                    "gconf", "dialogsizes", "validxml"),
           "credits": ("simplecaps", "variables", "startcaps",
                       "accelerators", "brackets", "endpunc",
                       "acronyms", "xmltags", "startpunc",
@@ -1533,7 +1533,8 @@ class StandardChecker(TranslationChecker):
                       "filepaths", "doublespacing",
                       "sentencecount", "numbers",
                       "emails", "simpleplurals", "urls", "printf",
-                      "tabs", "newlines", "functions", "options"),
+                      "tabs", "newlines", "functions", "options",
+                      "validxml"),
          "purepunc": ("startcaps", "options"),
          # This is causing some problems since Python 2.6, as
          # startcaps is now seen as an important one to always execute
@@ -1553,7 +1554,7 @@ class StandardChecker(TranslationChecker):
                           "startwhitespace", "endwhitespace",
                           "singlequoting", "doublequoting",
                           "filepaths", "purepunc", "doublewords", "printf",
-                          "newlines"),
+                          "newlines", "validxml"),
          }
 
 # code to actually run the tests (use unittest?)
@@ -1567,7 +1568,7 @@ openofficeconfig = CheckerConfig(
     ignoretags=[("alt", "xml-lang", None), ("ahelp", "visibility", "visible"),
                 ("img", "width", None), ("img", "height", None)],
     canchangetags=[("link", "name", None)],
-    )
+)
 
 
 class OpenOfficeChecker(StandardChecker):
@@ -1582,13 +1583,64 @@ class OpenOfficeChecker(StandardChecker):
         checkerconfig.update(openofficeconfig)
         StandardChecker.__init__(self, **kwargs)
 
+libreofficeconfig = CheckerConfig(
+    accelmarkers=["~"],
+    varmatches=[("&", ";"), ("%", "%"), ("%", None), ("%", 0), ("$(", ")"),
+                ("$", "$"), ("${", "}"), ("#", "#"), ("#", 1), ("#", 0),
+                ("($", ")"), ("$[", "]"), ("[", "]"), ("@", "@"),
+                ("$", None)],
+    ignoretags=[("alt", "xml-lang", None), ("ahelp", "visibility", "visible"),
+                ("img", "width", None), ("img", "height", None)],
+    canchangetags=[("link", "name", None)],
+)
+
+
+class LibreOfficeChecker(StandardChecker):
+
+    def __init__(self, **kwargs):
+        checkerconfig = kwargs.get("checkerconfig", None)
+
+        if checkerconfig is None:
+            checkerconfig = CheckerConfig()
+            kwargs["checkerconfig"] = checkerconfig
+
+        checkerconfig.update(libreofficeconfig)
+        checkerconfig.update(openofficeconfig)
+        StandardChecker.__init__(self, **kwargs)
+
+
+    @critical
+    def validxml(self, str1, str2):
+        """Check that all XML/HTML open/close tags has close/open
+        pair in the translation."""
+        for location in self.locations:
+            if location.endswith(".xrm") or location.endswith(".xhp"):
+                opentags = []
+                match = re.search(lo_tag_re, str2)
+                while match:
+                    acttag = match.group(0)
+                    if acttag.startswith("</"):
+                        if len(opentags) == 0:
+                            raise FilterFailure(u"There is no open tag for %s" % (acttag))
+                        opentag = opentags.pop()
+                        if tagname(acttag) != "/" + tagname(opentag):
+                            raise FilterFailure(u"Open tag %s and close tag %s "
+                                                 "don't match" % (opentag, acttag))
+                    else:
+                        opentags.append(acttag)
+                    str2 = str2[match.end(0):]
+                    match = re.search(lo_tag_re, str2)
+                if len(opentags) != 0:
+                    raise FilterFailure(u"There is no close tag for %s" % (opentags.pop()))
+        return True
+
 
 mozillaconfig = CheckerConfig(
     accelmarkers=["&"],
     varmatches=[("&", ";"), ("%", "%"), ("%", 1), ("$", "$"), ("$", None),
                 ("#", 1), ("${", "}"), ("$(^", ")"), ("{{", "}}"), ],
     criticaltests=["accelerators"],
-    )
+)
 
 
 class MozillaChecker(StandardChecker):
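The new `validxml` check walks the target string with `lo_tag_re` and keeps a stack of open tags, failing when a close tag has no matching open tag or when unclosed tags remain. A standalone sketch of the same stack algorithm, reusing the regex added above but restructured around `finditer` (self-closing tags are not handled, as in the original):

```python
import re

# Regex from the patch: tags of two or more letters, optionally with a
# single attribute; short tags like <b> are deliberately excluded.
lo_tag_re = re.compile(r'''<[/]??[a-z][a-z_\-]+?(?:| +[a-z]+?=".*?") *>''')

def tagname(tag):
    # '<link name="x">' -> 'link', '</link>' -> '/link'
    return tag.strip("<>").split()[0]

def validxml(target):
    """Return True when every open tag in target has a matching close tag."""
    opentags = []
    for match in lo_tag_re.finditer(target):
        acttag = match.group(0)
        if acttag.startswith("</"):
            if not opentags:
                return False          # close tag with no open tag
            if tagname(acttag) != "/" + tagname(opentags.pop()):
                return False          # open/close pair does not match
        else:
            opentags.append(acttag)
    return not opentags               # leftover open tags also fail
```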
@@ -1693,7 +1745,7 @@ class MozillaChecker(StandardChecker):
 
 drupalconfig = CheckerConfig(
     varmatches=[("%", None), ("@", None), ("!", None)],
-    )
+)
 
 
 class DrupalChecker(StandardChecker):
@@ -1713,7 +1765,7 @@ gnomeconfig = CheckerConfig(
     accelmarkers=["_"],
     varmatches=[("%", 1), ("$(", ")")],
     credit_sources=[u"translator-credits"],
-    )
+)
 
 
 class GnomeChecker(StandardChecker):
@@ -1751,7 +1803,7 @@ kdeconfig = CheckerConfig(
     accelmarkers=["&"],
     varmatches=[("%", 1)],
     credit_sources=[u"Your names", u"Your emails", u"ROLES_OF_TRANSLATORS"],
-    )
+)
 
 
 class KdeChecker(StandardChecker):
@@ -1803,6 +1855,7 @@ class TermChecker(StandardChecker):
 
 projectcheckers = {
     "openoffice": OpenOfficeChecker,
+    "libreoffice": LibreOfficeChecker,
     "mozilla": MozillaChecker,
     "kde": KdeChecker,
     "wx": KdeChecker,
@@ -1810,7 +1863,7 @@ projectcheckers = {
     "creativecommons": CCLicenseChecker,
     "drupal": DrupalChecker,
     "terminology": TermChecker,
-    }
+}
 
 
 class StandardUnitChecker(UnitChecker):
@@ -1872,8 +1925,8 @@ def runtests(str1, str2, ignorelist=()):
     failures = checker.run_filters(unit)
 
     for test in failures:
-        print "failure: %s: %s\n  %r\n  %r" % \
-              (test, failures[test]['message'], str1, str2)
+        print("failure: %s: %s\n  %r\n  %r" % \
+              (test, failures[test]['message'], str1, str2))
 
     return failures
 
@@ -1886,8 +1939,7 @@ def batchruntests(pairs):
         if runtests(str1, str2):
             passed += 1
 
-    print
-    print "total: %d/%d pairs passed" % (passed, numpairs)
+    print("\ntotal: %d/%d pairs passed" % (passed, numpairs))
 
 
 if __name__ == '__main__':
diff --git a/translate/filters/decoration.py b/translate/filters/decoration.py
index 8e40526..2a58090 100644
--- a/translate/filters/decoration.py
+++ b/translate/filters/decoration.py
@@ -262,9 +262,10 @@ _function_re = re.compile(r'''((?:
     \(\)                 # Must close with ()
 )+)
 ''', re.VERBOSE)  # shouldn't be locale aware
-    # pam_*_item() IO::String NULL() POE::Component::Client::LDAP->new()
-    # POE::Wheel::Null mechanize.UserAgent POSIX::sigaction()
-    # window.resizeBy() @fptr()
+# Reference functions:
+#   pam_*_item() IO::String NULL() POE::Component::Client::LDAP->new()
+#   POE::Wheel::Null mechanize.UserAgent POSIX::sigaction()
+#   window.resizeBy() @fptr()
 
 
 def getfunctions(str1):
diff --git a/translate/filters/decorators.py b/translate/filters/decorators.py
index bcd2783..030dc98 100644
--- a/translate/filters/decorators.py
+++ b/translate/filters/decorators.py
@@ -20,7 +20,7 @@
 
 """Decorators to categorize pofilter checks."""
 
-from translate.misc.decorators import decorate
+from functools import wraps
 
 
 #: Quality checks' failure categories
@@ -32,9 +32,9 @@ class Category(object):
     NO_CATEGORY = 0
 
 
-@decorate
 def critical(f):
 
+    @wraps(f)
     def critical_f(self, *args, **kwargs):
         if f.__name__ not in self.__class__.categories:
             self.__class__.categories[f.__name__] = Category.CRITICAL
@@ -44,9 +44,9 @@ def critical(f):
     return critical_f
 
 
-@decorate
 def functional(f):
 
+    @wraps(f)
     def functional_f(self, *args, **kwargs):
         if f.__name__ not in self.__class__.categories:
             self.__class__.categories[f.__name__] = Category.FUNCTIONAL
@@ -56,9 +56,9 @@ def functional(f):
     return functional_f
 
 
-@decorate
 def cosmetic(f):
 
+    @wraps(f)
     def cosmetic_f(self, *args, **kwargs):
         if f.__name__ not in self.__class__.categories:
             self.__class__.categories[f.__name__] = Category.COSMETIC
@@ -68,9 +68,9 @@ def cosmetic(f):
     return cosmetic_f
 
 
-@decorate
 def extraction(f):
 
+    @wraps(f)
     def extraction_f(self, *args, **kwargs):
         if f.__name__ not in self.__class__.categories:
             self.__class__.categories[f.__name__] = Category.EXTRACTION
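Replacing the project's `decorate` helper with `functools.wraps` matters because the category registry is keyed on `f.__name__`: without name preservation, every check would be registered under the wrapper's name (`critical_f`) instead of its own. A condensed sketch of the pattern, using a module-level registry instead of the class attribute used upstream:

```python
from functools import wraps

CRITICAL = 100
categories = {}

def critical(f):
    @wraps(f)  # preserves f.__name__, so the registry key is the check's name
    def critical_f(*args, **kwargs):
        if f.__name__ not in categories:
            categories[f.__name__] = CRITICAL
        return f(*args, **kwargs)
    return critical_f

@critical
def untranslated(source, target):
    # Toy check: fail-worthy when the target is empty.
    return target == ""

result = untranslated("File", "")
```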
diff --git a/translate/filters/pofilter.py b/translate/filters/pofilter.py
index df70b95..5345e4c 100644
--- a/translate/filters/pofilter.py
+++ b/translate/filters/pofilter.py
@@ -31,11 +31,10 @@ for full descriptions of all tests.
 
 import os
 
+from translate.filters import autocorrect, checks
+from translate.misc import optrecurse
 from translate.storage import factory
 from translate.storage.poheader import poheader
-from translate.filters import checks
-from translate.filters import autocorrect
-from translate.misc import optrecurse
 
 
 def build_checkerconfig(options):
@@ -203,7 +202,7 @@ class FilterOptionParser(optrecurse.RecursiveOptionParser):
         options.outputoptions = self.outputoptions
 
         if options.listfilters:
-            print options.checkfilter.getfilterdocs()
+            print(options.checkfilter.getfilterdocs())
         else:
             self.recursiveprocess(options)
 
@@ -252,6 +251,9 @@ def cmdlineparser():
     parser.add_option("", "--openoffice", dest="filterclass",
         action="store_const", default=None, const=checks.OpenOfficeChecker,
         help="use the standard checks for OpenOffice translations")
+    parser.add_option("", "--libreoffice", dest="filterclass",
+        action="store_const", default=None, const=checks.LibreOfficeChecker,
+        help="use the standard checks for LibreOffice translations")
     parser.add_option("", "--mozilla", dest="filterclass",
         action="store_const", default=None, const=checks.MozillaChecker,
         help="use the standard checks for Mozilla translations")
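The new `--libreoffice` switch follows the existing optparse idiom: `action="store_const"` stores the checker class object itself in `options.filterclass`, so later code can instantiate whichever checker the user picked. A self-contained sketch with stand-in checker classes:

```python
from optparse import OptionParser

class OpenOfficeChecker(object):
    pass

class LibreOfficeChecker(OpenOfficeChecker):
    pass

parser = OptionParser()
# store_const stashes the class (not an instance) in options.filterclass.
parser.add_option("--libreoffice", dest="filterclass",
                  action="store_const", default=None,
                  const=LibreOfficeChecker,
                  help="use the standard checks for LibreOffice translations")

options, args = parser.parse_args(["--libreoffice"])
```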
diff --git a/translate/filters/prefilters.py b/translate/filters/prefilters.py
index c2c768b..f5342e0 100644
--- a/translate/filters/prefilters.py
+++ b/translate/filters/prefilters.py
@@ -29,10 +29,10 @@ from translate.misc import quote
 
 def removekdecomments(str1):
     r"""Remove KDE-style PO comments.
-    
+
     KDE comments start with ``_:[space]`` and end with a literal ``\n``.
     Example::
-    
+
       "_: comment\n"
     """
     assert isinstance(str1, unicode)
@@ -68,7 +68,7 @@ def filteraccelerators(accelmarker):
         accelmarkerlen = len(accelmarker)
 
     def filtermarkedaccelerators(str1, acceptlist=None):
-	"""Modifies the accelerators in *str1* marked with the given
+        """Modifies the accelerators in *str1* marked with the given
         *accelmarker*, using a given *acceptlist* filter.
         """
         acclocs, badlocs = decoration.findaccelerators(str1, accelmarker, acceptlist)
@@ -134,7 +134,7 @@ def filtervariables(startmarker, endmarker, varfilter):
         endmarkerlen = len(endmarker)
 
     def filtermarkedvariables(str1):
-	"""Modifies the variables in *str1* marked with a given *\*marker*,
+        """Modifies the variables in *str1* marked with a given *\*marker*,
         using a given filter."""
         varlocs = decoration.findmarkedvariables(str1, startmarker, endmarker)
         fstr1, pos = "", 0
@@ -148,8 +148,7 @@ def filtervariables(startmarker, endmarker, varfilter):
 
 # a list of special words with punctuation
 # all apostrophes in the middle of the word are handled already
-wordswithpunctuation = ["'n", "'t",  # Afrikaans
-                       ]
+wordswithpunctuation = ["'n", "'t",]  # Afrikaans
 # map all the words to their non-punctified equivalent
 wordswithpunctuation = dict([(word, filter(str.isalnum, word)) for word in wordswithpunctuation])
 
diff --git a/translate/filters/spelling.py b/translate/filters/spelling.py
index 4b8f532..40ac793 100644
--- a/translate/filters/spelling.py
+++ b/translate/filters/spelling.py
@@ -23,13 +23,14 @@
 
 import logging
 
+
 logger = logging.getLogger(__name__)
 
 available = False
 
 try:
     # Enchant
-    from enchant import checker, DictNotFoundError, Error as EnchantError
+    from enchant import checker, Error as EnchantError
     available = True
     checkers = {}
 
@@ -39,7 +40,7 @@ try:
                 checkers[lang] = checker.SpellChecker(lang)
                 # some versions only report an error when checking something
                 checkers[lang].check(u'bla')
-            except EnchantError, e:
+            except EnchantError as e:
                 # sometimes this is raised instead of DictNotFoundError
                 logger.error(str(e))
                 checkers[lang] = None
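The spelling module's pattern, visible in the hunk above, is a lazy per-language cache in which failures are memoised as `None`, so a missing dictionary is probed (and logged) only once. A self-contained sketch with a stand-in factory; the real code builds an enchant `SpellChecker`:

```python
import logging

logger = logging.getLogger(__name__)

class DictNotFound(Exception):
    """Stand-in for enchant's dictionary-lookup errors."""

checkers = {}
factory_calls = []

def make_checker(lang):
    # Stand-in factory; records calls so caching is observable.
    factory_calls.append(lang)
    if lang == "xx":
        raise DictNotFound("no dictionary for %s" % lang)
    return "checker-%s" % lang

def get_checker(lang):
    # Cache one checker per language; memoise failures as None so the
    # expensive (and possibly failing) construction happens only once.
    if lang not in checkers:
        try:
            checkers[lang] = make_checker(lang)
        except DictNotFound as e:
            logger.error(str(e))
            checkers[lang] = None
    return checkers[lang]
```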
diff --git a/translate/filters/test_autocorrect.py b/translate/filters/test_autocorrect.py
index cf384d9..5642bbf 100644
--- a/translate/filters/test_autocorrect.py
+++ b/translate/filters/test_autocorrect.py
@@ -9,11 +9,11 @@ class TestAutocorrect:
     def correct(self, msgid, msgstr, expected):
         """helper to run correct function from autocorrect module"""
         corrected = autocorrect.correct(msgid, msgstr)
-        print repr(msgid)
-        print repr(msgstr)
-        print msgid.encode('utf-8')
-        print msgstr.encode('utf-8')
-        print (corrected or u"").encode('utf-8')
+        print(repr(msgid))
+        print(repr(msgstr))
+        print(msgid.encode('utf-8'))
+        print(msgstr.encode('utf-8'))
+        print((corrected or u"").encode('utf-8'))
         assert corrected == expected
 
     def test_empty_target(self):
diff --git a/translate/filters/test_checks.py b/translate/filters/test_checks.py
index 4de44d6..3cd11b9 100644
--- a/translate/filters/test_checks.py
+++ b/translate/filters/test_checks.py
@@ -18,7 +18,7 @@ def check_category(filterfunction):
 
     for klass in classes:
         categories = getattr(klass, 'categories', None)
-        has_category.append(categories is not None and \
+        has_category.append(categories is not None and
                             filterfunction.__name__ in categories)
 
     return True in has_category
@@ -29,7 +29,7 @@ def passes(filterfunction, str1, str2):
     str1, str2, no_message = strprep(str1, str2)
     try:
         filterresult = filterfunction(str1, str2)
-    except checks.FilterFailure, e:
+    except checks.FilterFailure as e:
         filterresult = False
 
     filterresult = filterresult and check_category(filterfunction)
@@ -42,13 +42,13 @@ def fails(filterfunction, str1, str2, message=None):
     str1, str2, message = strprep(str1, str2, message)
     try:
         filterresult = filterfunction(str1, str2)
-    except checks.SeriousFilterFailure, e:
+    except checks.SeriousFilterFailure as e:
         filterresult = True
-    except checks.FilterFailure, e:
+    except checks.FilterFailure as e:
         if message:
             exc_message = e.messages[0]
             filterresult = exc_message != message
-            print exc_message.encode('utf-8')
+            print(exc_message.encode('utf-8'))
         else:
             filterresult = False
 
@@ -62,11 +62,11 @@ def fails_serious(filterfunction, str1, str2, message=None):
     str1, str2, message = strprep(str1, str2, message)
     try:
         filterresult = filterfunction(str1, str2)
-    except checks.SeriousFilterFailure, e:
+    except checks.SeriousFilterFailure as e:
         if message:
             exc_message = e.messages[0]
             filterresult = exc_message != message
-            print exc_message.encode('utf-8')
+            print(exc_message.encode('utf-8'))
         else:
             filterresult = False
 
@@ -89,6 +89,7 @@ def test_construct():
     stdchecker = checks.StandardChecker()
     mozillachecker = checks.MozillaChecker()
     ooochecker = checks.OpenOfficeChecker()
+    loochecker = checks.LibreOfficeChecker()
     gnomechecker = checks.GnomeChecker()
     kdechecker = checks.KdeChecker()
 
@@ -101,6 +102,8 @@ def test_accelerator_markers():
     assert mozillachecker.config.accelmarkers == ["&"]
     ooochecker = checks.OpenOfficeChecker()
     assert ooochecker.config.accelmarkers == ["~"]
+    lochecker = checks.LibreOfficeChecker()
+    assert lochecker.config.accelmarkers == ["~"]
     gnomechecker = checks.GnomeChecker()
     assert gnomechecker.config.accelmarkers == ["_"]
     kdechecker = checks.KdeChecker()
@@ -149,6 +152,14 @@ def test_accelerators():
 
     # We don't want an accelerator for letters with a diacritic
     assert fails(ooochecker.accelerators, "F~ile", "L~êer")
+    lochecker = checks.LibreOfficeChecker()
+    assert passes(lochecker.accelerators, "~File", "~Fayile")
+    assert fails(lochecker.accelerators, "~File", "Fayile")
+    assert fails(lochecker.accelerators, "File", "~Fayile")
+
+    # We don't want an accelerator for letters with a diacritic
+    assert fails(lochecker.accelerators, "F~ile", "L~êer")
+
     # Bug 289: accept accented accelerator characters
     afchecker = checks.StandardChecker(checks.CheckerConfig(accelmarkers="&", targetlanguage="fi"))
     assert passes(afchecker.accelerators, "&Reload Frame", "P&äivitä kehys")
@@ -181,6 +192,9 @@ def test_acceleratedvariables():
     ooochecker = checks.OpenOfficeChecker()
     assert fails(ooochecker.acceleratedvariables, "%PRODUCTNAME% ~Options", "~%PRODUCTNAME% Ikhetho")
     assert passes(ooochecker.acceleratedvariables, "%PRODUCTNAME% ~Options", "%PRODUCTNAME% ~Ikhetho")
+    lochecker = checks.LibreOfficeChecker()
+    assert fails(lochecker.acceleratedvariables, "%PRODUCTNAME% ~Options", "~%PRODUCTNAME% Ikhetho")
+    assert passes(lochecker.acceleratedvariables, "%PRODUCTNAME% ~Options", "%PRODUCTNAME% ~Ikhetho")
 
 
 def test_acronyms():
@@ -263,6 +277,9 @@ def test_doublespacing():
     ooochecker = checks.OpenOfficeChecker()
     assert passes(ooochecker.doublespacing, "Execute %PROGRAMNAME Calc", "Blah %PROGRAMNAME Calc")
     assert passes(ooochecker.doublespacing, "Execute %PROGRAMNAME Calc", "Blah % PROGRAMNAME Calc")
+    lochecker = checks.LibreOfficeChecker()
+    assert passes(lochecker.doublespacing, "Execute %PROGRAMNAME Calc", "Blah %PROGRAMNAME Calc")
+    assert passes(lochecker.doublespacing, "Execute %PROGRAMNAME Calc", "Blah % PROGRAMNAME Calc")
 
 
 def test_doublewords():
@@ -343,6 +360,8 @@ def test_escapes():
     # Real example
     ooochecker = checks.OpenOfficeChecker()
     assert passes(ooochecker.escapes, ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32", ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32")
+    lochecker = checks.LibreOfficeChecker()
+    assert passes(lochecker.escapes, ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32", ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32")
 
 
 def test_newlines():
@@ -365,6 +384,8 @@ def test_newlines():
     # Real example
     ooochecker = checks.OpenOfficeChecker()
     assert fails(ooochecker.newlines, "The arrowhead was modified without saving.\nWould you like to save the arrowhead now?", "Ṱhoho ya musevhe yo khwinifhadzwa hu si na u seiva.Ni khou ṱoda u seiva thoho ya musevhe zwino?")
+    lochecker = checks.LibreOfficeChecker()
+    assert fails(lochecker.newlines, "The arrowhead was modified without saving.\nWould you like to save the arrowhead now?", "Ṱhoho ya musevhe yo khwinifhadzwa hu si na u seiva.Ni khou ṱoda u seiva thoho ya musevhe zwino?")
 
 
 def test_tabs():
@@ -377,6 +398,8 @@ def test_tabs():
     assert fails_serious(stdchecker.tabs, "A file", "'n Leer\t")
     ooochecker = checks.OpenOfficeChecker()
     assert passes(ooochecker.tabs, ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32", ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32")
+    lochecker = checks.LibreOfficeChecker()
+    assert passes(lochecker.tabs, ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32", ",\t44\t;\t59\t:\t58\t{Tab}\t9\t{space}\t32")
 
 
 def test_filepaths():
@@ -621,6 +644,8 @@ def test_singlequoting():
     assert passes(mozillachecker.singlequoting, "&Don't import anything", "&Moenie enigiets invoer nie")
     ooochecker = checks.OpenOfficeChecker()
     assert passes(ooochecker.singlequoting, "~Don't import anything", "~Moenie enigiets invoer nie")
+    lochecker = checks.LibreOfficeChecker()
+    assert passes(lochecker.singlequoting, "~Don't import anything", "~Moenie enigiets invoer nie")
 
     vichecker = checks.StandardChecker(checks.CheckerConfig(targetlanguage="vi"))
     assert passes(vichecker.singlequoting, "Save 'File'", u"Lưu « Tập tin »")
@@ -650,6 +675,9 @@ def test_simplecaps():
     ooochecker = checks.OpenOfficeChecker()
     assert passes(ooochecker.simplecaps, "SOLK (%PRODUCTNAME Link)", "SOLK (%PRODUCTNAME Thumanyo)")
     assert passes(ooochecker.simplecaps, "%STAROFFICE Image", "Tshifanyiso tsha %STAROFFICE")
+    lochecker = checks.LibreOfficeChecker()
+    assert passes(lochecker.simplecaps, "SOLK (%PRODUCTNAME Link)", "SOLK (%PRODUCTNAME Thumanyo)")
+    assert passes(lochecker.simplecaps, "%STAROFFICE Image", "Tshifanyiso tsha %STAROFFICE")
     assert passes(stdchecker.simplecaps, "Flies, flies, everywhere! Ack!", u"Vlieë, oral vlieë! Jig!")
 
 
@@ -872,41 +900,41 @@ def test_variables_mozilla():
 def test_variables_openoffice():
     """tests variables in OpenOffice translations"""
     # OpenOffice.org variables
-    ooochecker = checks.OpenOfficeChecker()
-    assert passes(ooochecker.variables, "Use the &brandShortname; instance.", "Gebruik die &brandShortname; weergawe.")
-    assert fails_serious(ooochecker.variables, "Use the &brandShortname; instance.", "Gebruik die &brandKortnaam; weergawe.")
-    assert passes(ooochecker.variables, "Save %file%", "Stoor %file%")
-    assert fails_serious(ooochecker.variables, "Save %file%", "Stoor %leer%")
-    assert passes(ooochecker.variables, "Save %file", "Stoor %file")
-    assert fails_serious(ooochecker.variables, "Save %file", "Stoor %leer")
-    assert passes(ooochecker.variables, "Save %1", "Stoor %1")
-    assert fails_serious(ooochecker.variables, "Save %1", "Stoor %2")
-    assert passes(ooochecker.variables, "Save %", "Stoor %")
-    assert fails_serious(ooochecker.variables, "Save %", "Stoor")
-    assert passes(ooochecker.variables, "Save $(file)", "Stoor $(file)")
-    assert fails_serious(ooochecker.variables, "Save $(file)", "Stoor $(leer)")
-    assert passes(ooochecker.variables, "Save $file$", "Stoor $file$")
-    assert fails_serious(ooochecker.variables, "Save $file$", "Stoor $leer$")
-    assert passes(ooochecker.variables, "Save ${file}", "Stoor ${file}")
-    assert fails_serious(ooochecker.variables, "Save ${file}", "Stoor ${leer}")
-    assert passes(ooochecker.variables, "Save #file#", "Stoor #file#")
-    assert fails_serious(ooochecker.variables, "Save #file#", "Stoor #leer#")
-    assert passes(ooochecker.variables, "Save #1", "Stoor #1")
-    assert fails_serious(ooochecker.variables, "Save #1", "Stoor #2")
-    assert passes(ooochecker.variables, "Save #", "Stoor #")
-    assert fails_serious(ooochecker.variables, "Save #", "Stoor")
-    assert passes(ooochecker.variables, "Save ($file)", "Stoor ($file)")
-    assert fails_serious(ooochecker.variables, "Save ($file)", "Stoor ($leer)")
-    assert passes(ooochecker.variables, "Save $[file]", "Stoor $[file]")
-    assert fails_serious(ooochecker.variables, "Save $[file]", "Stoor $[leer]")
-    assert passes(ooochecker.variables, "Save [file]", "Stoor [file]")
-    assert fails_serious(ooochecker.variables, "Save [file]", "Stoor [leer]")
-    assert passes(ooochecker.variables, "Save $file", "Stoor $file")
-    assert fails_serious(ooochecker.variables, "Save $file", "Stoor $leer")
-    assert passes(ooochecker.variables, "Use @EXTENSION@", "Gebruik @EXTENSION@")
-    assert fails_serious(ooochecker.variables, "Use @EXTENSUION@", "Gebruik @UITBRUIDING@")
-    # Same variable name twice
-    assert fails_serious(ooochecker.variables, r"""Start %PROGRAMNAME% as %PROGRAMNAME%""", "Begin %PROGRAMNAME%")
+    for ooochecker in (checks.OpenOfficeChecker(), checks.LibreOfficeChecker()):
+        assert passes(ooochecker.variables, "Use the &brandShortname; instance.", "Gebruik die &brandShortname; weergawe.")
+        assert fails_serious(ooochecker.variables, "Use the &brandShortname; instance.", "Gebruik die &brandKortnaam; weergawe.")
+        assert passes(ooochecker.variables, "Save %file%", "Stoor %file%")
+        assert fails_serious(ooochecker.variables, "Save %file%", "Stoor %leer%")
+        assert passes(ooochecker.variables, "Save %file", "Stoor %file")
+        assert fails_serious(ooochecker.variables, "Save %file", "Stoor %leer")
+        assert passes(ooochecker.variables, "Save %1", "Stoor %1")
+        assert fails_serious(ooochecker.variables, "Save %1", "Stoor %2")
+        assert passes(ooochecker.variables, "Save %", "Stoor %")
+        assert fails_serious(ooochecker.variables, "Save %", "Stoor")
+        assert passes(ooochecker.variables, "Save $(file)", "Stoor $(file)")
+        assert fails_serious(ooochecker.variables, "Save $(file)", "Stoor $(leer)")
+        assert passes(ooochecker.variables, "Save $file$", "Stoor $file$")
+        assert fails_serious(ooochecker.variables, "Save $file$", "Stoor $leer$")
+        assert passes(ooochecker.variables, "Save ${file}", "Stoor ${file}")
+        assert fails_serious(ooochecker.variables, "Save ${file}", "Stoor ${leer}")
+        assert passes(ooochecker.variables, "Save #file#", "Stoor #file#")
+        assert fails_serious(ooochecker.variables, "Save #file#", "Stoor #leer#")
+        assert passes(ooochecker.variables, "Save #1", "Stoor #1")
+        assert fails_serious(ooochecker.variables, "Save #1", "Stoor #2")
+        assert passes(ooochecker.variables, "Save #", "Stoor #")
+        assert fails_serious(ooochecker.variables, "Save #", "Stoor")
+        assert passes(ooochecker.variables, "Save ($file)", "Stoor ($file)")
+        assert fails_serious(ooochecker.variables, "Save ($file)", "Stoor ($leer)")
+        assert passes(ooochecker.variables, "Save $[file]", "Stoor $[file]")
+        assert fails_serious(ooochecker.variables, "Save $[file]", "Stoor $[leer]")
+        assert passes(ooochecker.variables, "Save [file]", "Stoor [file]")
+        assert fails_serious(ooochecker.variables, "Save [file]", "Stoor [leer]")
+        assert passes(ooochecker.variables, "Save $file", "Stoor $file")
+        assert fails_serious(ooochecker.variables, "Save $file", "Stoor $leer")
+        assert passes(ooochecker.variables, "Use @EXTENSION@", "Gebruik @EXTENSION@")
+        assert fails_serious(ooochecker.variables, "Use @EXTENSUION@", "Gebruik @UITBRUIDING@")
+        # Same variable name twice
+        assert fails_serious(ooochecker.variables, r"""Start %PROGRAMNAME% as %PROGRAMNAME%""", "Begin %PROGRAMNAME%")
 
 
 def test_variables_cclicense():
@@ -965,24 +993,24 @@ No s'ha pogut crear el servidor
 
 def test_ooxmltags():
     """Tests the xml tags in OpenOffice.org translations for quality as done in gsicheck"""
-    ooochecker = checks.OpenOfficeChecker()
-    #some attributes can be changed or removed
-    assert fails(ooochecker.xmltags, "<img src=\"a.jpg\" width=\"400\">", "<img src=\"b.jpg\" width=\"500\">")
-    assert passes(ooochecker.xmltags, "<img src=\"a.jpg\" width=\"400\">", "<img src=\"a.jpg\" width=\"500\">")
-    assert passes(ooochecker.xmltags, "<img src=\"a.jpg\" width=\"400\">", "<img src=\"a.jpg\">")
-    assert passes(ooochecker.xmltags, "<img src=\"a.jpg\">", "<img src=\"a.jpg\" width=\"400\">")
-    assert passes(ooochecker.xmltags, "<alt xml-lang=\"ab\">text</alt>", "<alt>teks</alt>")
-    assert passes(ooochecker.xmltags, "<ahelp visibility=\"visible\">bla</ahelp>", "<ahelp>blu</ahelp>")
-    assert fails(ooochecker.xmltags, "<ahelp visibility=\"visible\">bla</ahelp>", "<ahelp visibility=\"invisible\">blu</ahelp>")
-    assert fails(ooochecker.xmltags, "<ahelp visibility=\"invisible\">bla</ahelp>", "<ahelp>blu</ahelp>")
-    #some attributes can be changed, but not removed
-    assert passes(ooochecker.xmltags, "<link name=\"John\">", "<link name=\"Jan\">")
-    assert fails(ooochecker.xmltags, "<link name=\"John\">", "<link naam=\"Jan\">")
-
-    # Reported OOo error
-    ## Bug 1910
-    assert fails(ooochecker.xmltags, u"""<variable id="FehlendesElement">In a database file window, click the <emph>Queries</emph> icon, then choose <emph>Edit - Edit</emph>. When referenced fields no longer exist, you see this dialog</variable>""", u"""<variable id="FehlendesElement">Dans  une fenêtre de fichier de base de données, cliquez sur l'icône <emph>Requêtes</emph>, puis choisissez <emph>Éditer - Éditer</emp>. Lorsque les champs de référence n'existent plus, vous voyez cette boî [...]
-    assert fails(ooochecker.xmltags, "<variable> <emph></emph> <emph></emph> </variable>", "<variable> <emph></emph> <emph></emp> </variable>")
+    for ooochecker in (checks.OpenOfficeChecker(), checks.LibreOfficeChecker()):
+        #some attributes can be changed or removed
+        assert fails(ooochecker.xmltags, "<img src=\"a.jpg\" width=\"400\">", "<img src=\"b.jpg\" width=\"500\">")
+        assert passes(ooochecker.xmltags, "<img src=\"a.jpg\" width=\"400\">", "<img src=\"a.jpg\" width=\"500\">")
+        assert passes(ooochecker.xmltags, "<img src=\"a.jpg\" width=\"400\">", "<img src=\"a.jpg\">")
+        assert passes(ooochecker.xmltags, "<img src=\"a.jpg\">", "<img src=\"a.jpg\" width=\"400\">")
+        assert passes(ooochecker.xmltags, "<alt xml-lang=\"ab\">text</alt>", "<alt>teks</alt>")
+        assert passes(ooochecker.xmltags, "<ahelp visibility=\"visible\">bla</ahelp>", "<ahelp>blu</ahelp>")
+        assert fails(ooochecker.xmltags, "<ahelp visibility=\"visible\">bla</ahelp>", "<ahelp visibility=\"invisible\">blu</ahelp>")
+        assert fails(ooochecker.xmltags, "<ahelp visibility=\"invisible\">bla</ahelp>", "<ahelp>blu</ahelp>")
+        #some attributes can be changed, but not removed
+        assert passes(ooochecker.xmltags, "<link name=\"John\">", "<link name=\"Jan\">")
+        assert fails(ooochecker.xmltags, "<link name=\"John\">", "<link naam=\"Jan\">")
+
+        # Reported OOo error
+        ## Bug 1910
+        assert fails(ooochecker.xmltags, u"""<variable id="FehlendesElement">In a database file window, click the <emph>Queries</emph> icon, then choose <emph>Edit - Edit</emph>. When referenced fields no longer exist, you see this dialog</variable>""", u"""<variable id="FehlendesElement">Dans  une fenêtre de fichier de base de données, cliquez sur l'icône <emph>Requêtes</emph>, puis choisissez <emph>Éditer - Éditer</emp>. Lorsque les champs de référence n'existent plus, vous voyez cette [...]
+        assert fails(ooochecker.xmltags, "<variable> <emph></emph> <emph></emph> </variable>", "<variable> <emph></emph> <emph></emp> </variable>")
 
 
 def test_functions():
@@ -1106,6 +1134,31 @@ def test_gconf():
     assert passes(gnomechecker.gconf, 'Blah "gconf_setting"', 'Bleh "gconf_setting"')
     assert fails(gnomechecker.gconf, 'Blah "gconf_setting"', 'Bleh "gconf_steling"')
 
+def test_validxml():
+    """test whether validxml recognizes invalid xml/html expressions"""
+    lochecker = checks.LibreOfficeChecker()
+    # Test validity only for xrm and xhp files
+    lochecker.locations = ["description.xml"]
+    assert passes(lochecker.validxml, "","normal string")
+    assert passes(lochecker.validxml, "","<emph> only an open tag")
+    lochecker.locations = ["readme.xrm"]
+    assert passes(lochecker.validxml, "","normal string")
+    assert passes(lochecker.validxml, "","<tt>closed formula</tt>")
+    assert fails(lochecker.validxml, "","<tt> only an open tag")
+    lochecker.locations = ["wikisend.xhp"]
+    assert passes(lochecker.validxml, "","A <emph> well formed expression </emph>")
+    assert fails(lochecker.validxml, "","Missing <emph> close tag <emph>")
+    assert fails(lochecker.validxml, "","Missing open tag </emph>")
+    assert fails(lochecker.validxml, "","<ahelp hid=\".\"> open tag not match with close tag</link>")
+    assert passes(lochecker.validxml, "","Skip <IMG> because it is capitalized, so it is part of the text")
+    assert passes(lochecker.validxml, "","Skip the capitalized <Empty>, because it is just a pseudo tag, not a real one")
+    assert passes(lochecker.validxml, "","Skip the <br/> short tag, because there is no need to close it.")
+    # Larger tests
+    assert passes(lochecker.validxml, "","<bookmark_value>yazdırma; çizim varsayılanları</bookmark_value><bookmark_value>çizimler; yazdırma varsayılanları</bookmark_value><bookmark_value>sayfalar;sunumlarda sayfa adı yazdırma</bookmark_value><bookmark_value>yazdırma; sunumlarda tarihler</bookmark_value><bookmark_value>tarihler; sunumlarda  yazdırma</bookmark_value><bookmark_value>zamanlar; sunumları yazdırırken ekleme</bookmark_value><bookmark_value>yazdırma; sunumların gizli sayfaları</ [...]
+    assert fails(lochecker.validxml, "","Kullanıcı etkileşimi verisinin kaydedilmesini ve bu verilerin gönderilmesini dilediğiniz zaman etkinleştirebilir veya devre dışı bırakabilirsiniz.  <item type=\"menuitem\"><switchinline select=\"sys\"><caseinline select=\"MAC\">%PRODUCTNAME - Tercihler</caseinline><defaultinline>Araçlar - Seçenekler</defaultinline></switchinline> - %PRODUCTNAME - Gelişim Programı</item>'nı seçin. Daha fazla bilgi için web sitesinde gezinmek için <defaultinline>Bil [...]
+    assert fails(lochecker.validxml, "","<caseinline select=\"DRAW\">Bir sayfanın içerik menüsünde ek komutlar vardır:</caseinline><caseinline select=\"IMPRESS\">Bir sayfanın içerik menüsünde ek komutlar vardır:</caseinline></switchinline>")
+    assert fails(lochecker.validxml, "","<bookmark_value>sunum; sihirbazı başlatmak<bookmark_value>nesneler; her zaman taşınabilir (Impress/Draw)</bookmark_value><bookmark_value>çizimleri eğriltme</bookmark_value><bookmark_value>aralama; sunumdaki sekmeler</bookmark_value><bookmark_value>metin nesneleri; sunumlarda ve çizimlerde</bookmark_value>")
+
 
 def test_hassuggestion():
     """test that hassuggestion() works"""
diff --git a/translate/filters/test_decoration.py b/translate/filters/test_decoration.py
index 0638c18..13764a5 100644
--- a/translate/filters/test_decoration.py
+++ b/translate/filters/test_decoration.py
@@ -18,14 +18,14 @@ def test_spacestart():
 def test_isvalidaccelerator():
     """test the isvalidaccelerator() function"""
     # Mostly this tests the old code path where acceptlist is None
-    assert decoration.isvalidaccelerator(u"") == False
-    assert decoration.isvalidaccelerator(u"a") == True
-    assert decoration.isvalidaccelerator(u"1") == True
-    assert decoration.isvalidaccelerator(u"ḽ") == False
+    assert not decoration.isvalidaccelerator(u"")
+    assert decoration.isvalidaccelerator(u"a")
+    assert decoration.isvalidaccelerator(u"1")
+    assert not decoration.isvalidaccelerator(u"ḽ")
     # Test new code path where we actually have an acceptlist
-    assert decoration.isvalidaccelerator(u"a", u"aeiou") == True
-    assert decoration.isvalidaccelerator(u"ḽ", u"ḓṱḽṋṅ") == True
-    assert decoration.isvalidaccelerator(u"a", u"ḓṱḽṋṅ") == False
+    assert decoration.isvalidaccelerator(u"a", u"aeiou")
+    assert decoration.isvalidaccelerator(u"ḽ", u"ḓṱḽṋṅ")
+    assert not decoration.isvalidaccelerator(u"a", u"ḓṱḽṋṅ")
 
 
 def test_find_marked_variables():
diff --git a/translate/filters/test_pofilter.py b/translate/filters/test_pofilter.py
index 90d7ae9..e89f569 100644
--- a/translate/filters/test_pofilter.py
+++ b/translate/filters/test_pofilter.py
@@ -1,12 +1,10 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.storage import factory
-from translate.storage import xliff
-from translate.storage.test_base import headerless_len, first_translatable
-from translate.filters import pofilter
-from translate.filters import checks
+from translate.filters import checks, pofilter
 from translate.misc import wStringIO
+from translate.storage import factory, xliff
+from translate.storage.test_base import first_translatable, headerless_len
 
 
 class BaseTestFilter(object):
@@ -27,7 +25,7 @@ class BaseTestFilter(object):
         returns the resulting store."""
         if cmdlineoptions is None:
             cmdlineoptions = []
-        options, args = pofilter.cmdlineparser().parse_args([self.filename] + \
+        options, args = pofilter.cmdlineparser().parse_args([self.filename] +
                                                             cmdlineoptions)
         checkerclasses = [checks.StandardChecker, checks.StandardUnitChecker]
         if checkerconfig is None:
@@ -46,8 +44,8 @@ class BaseTestFilter(object):
         """checks that an obviously wrong string fails"""
         self.unit.target = "REST"
         filter_result = self.filter(self.translationstore)
-        print filter_result
-        print filter_result.units
+        print(filter_result)
+        print(filter_result.units)
         assert 'startcaps' in first_translatable(filter_result).geterrors()
 
     def test_variables_across_lines(self):
@@ -217,7 +215,7 @@ msgstr "koei"
         pofile = self.parse_text(posource)
         filter_result = self.filter(pofile)
         if headerless_len(filter_result.units):
-            print first_translatable(filter_result)
+            print(first_translatable(filter_result))
         assert headerless_len(filter_result.units) == 0
 
 
diff --git a/translate/i18n.py b/translate/i18n.py
deleted file mode 100644
index 60ba682..0000000
--- a/translate/i18n.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2009 Zuza Software Foundation
-#
-# This file is part of translate.
-#
-# translate is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-
-"""Internationalization functions and functionality
-"""
-
-import gettext
-import __builtin__
-if not '_' in __builtin__.__dict__:
-    gettext.install("translate-toolkit", unicode=1)
diff --git a/translate/lang/af.py b/translate/lang/af.py
index f73de4f..a059484 100644
--- a/translate/lang/af.py
+++ b/translate/lang/af.py
@@ -27,6 +27,7 @@ import re
 
 from translate.lang import common
 
+
 articlere = re.compile(r"'n\b")
 
 
@@ -44,11 +45,11 @@ class af(common.Common):
         \s+         # the spacing after the puntuation
         (?='n\s[A-Z]|[^'a-z\d]|'[^n])
         # lookahead that next part starts with caps or 'n followed by caps
-        """ % sentenceend, re.VERBOSE
-    )
+        """ % sentenceend, re.VERBOSE)
 
     specialchars = u"ëïêôûáéíóúý"
 
+    @classmethod
     def capsstart(cls, text):
         """Modify this for the indefinite article ('n)."""
         match = articlere.search(text, 0, 20)
@@ -60,7 +61,6 @@ class af(common.Common):
             if match:
                 return common.Common.capsstart(stripped[match.end():])
         return common.Common.capsstart(text)
-    capsstart = classmethod(capsstart)
 
 cyr2lat = {
    u"А": "A", u"а": "a",
diff --git a/translate/lang/ar.py b/translate/lang/ar.py
index 267367b..6c54a84 100644
--- a/translate/lang/ar.py
+++ b/translate/lang/ar.py
@@ -49,7 +49,7 @@ class ar(common.Common):
 
     ignoretests = ["startcaps", "simplecaps", "acronyms"]
 
+    @classmethod
     def punctranslate(cls, text):
         text = super(cls, cls).punctranslate(text)
         return reverse_quotes(text)
-    punctranslate = classmethod(punctranslate)
diff --git a/translate/lang/common.py b/translate/lang/common.py
index 1b481bb..3bd3a22 100644
--- a/translate/lang/common.py
+++ b/translate/lang/common.py
@@ -60,13 +60,15 @@ TODOs and Ideas for possible features:
   - phrases
 """
 
-import re
 import logging
+import re
 
 from translate.lang import data
 
+
 logger = logging.getLogger(__name__)
 
+
 class Common(object):
     """This class is the common parent class for all language classes."""
 
@@ -153,8 +155,8 @@ class Common(object):
     miscpunc = u"…±°¹²³·©®×£¥€"
     """The middle dot (·) is used by Greek and Georgian."""
 
-    punctuation = u"".join([commonpunc, quotes, invertedpunc, rtlpunc, CJKpunc,\
-            indicpunc, ethiopicpunc, miscpunc])
+    punctuation = u"".join([commonpunc, quotes, invertedpunc, rtlpunc, CJKpunc,
+                            indicpunc, ethiopicpunc, miscpunc])
     """We include many types of punctuation here, simply since this is only
     meant to determine if something is punctuation. Hopefully we catch some
     languages which might not be represented with modules. Most languages won't
@@ -174,8 +176,7 @@ class Common(object):
         [%s]        # the puntuation for sentence ending
         \s+         # the spacing after the puntuation
         (?=[^a-zа-џ\d])  # lookahead that next part starts with caps
-        """ % sentenceend, re.VERBOSE | re.UNICODE
-    )
+        """ % sentenceend, re.VERBOSE | re.UNICODE)
 
     puncdict = {}
     """A dictionary of punctuation transformation rules that can be used by
@@ -234,6 +235,7 @@ class Common(object):
             detail = "(%s)" % self.code
         return "<class 'translate.lang.common.Common%s'>" % detail
 
+    @classmethod
     def punctranslate(cls, text):
         """Converts the punctuation in a string according to the rules of the
         language."""
@@ -259,8 +261,8 @@ class Common(object):
             (len(text) < 2 or text[-2] != text[-1])):
             text = text[:-1] + cls.puncdict[text[-1] + u" "].rstrip()
         return text
-    punctranslate = classmethod(punctranslate)
 
+    @classmethod
     def length_difference(cls, length):
         """Returns an estimate to a likely change in length relative to an
         English string of length length."""
@@ -278,8 +280,8 @@ class Common(object):
         constant = max(5, int(40 * expansion_factor))
         # The default: return 5 + length/10
         return constant + int(expansion_factor * length)
-    length_difference = classmethod(length_difference)
 
+    @classmethod
     def alter_length(cls, text):
         """Converts the given string by adding or removing characters as an
         estimation of translation length (with English assumed as source
@@ -299,8 +301,8 @@ class Common(object):
             expanded.append(alter_it(subtext))
         text = u"\n\n".join(expanded)
         return text
-    alter_length = classmethod(alter_length)
 
+    @classmethod
     def character_iter(cls, text):
         """Returns an iterator over the characters in text."""
         #We don't return more than one consecutive whitespace character
@@ -311,13 +313,13 @@ class Common(object):
             prev = c
             if not (c in cls.punctuation):
                 yield c
-    character_iter = classmethod(character_iter)
 
+    @classmethod
     def characters(cls, text):
         """Returns a list of characters in text."""
         return [c for c in cls.character_iter(text)]
-    characters = classmethod(characters)
 
+    @classmethod
     def word_iter(cls, text):
         """Returns an iterator over the words in text."""
         #TODO: Consider replacing puctuation with space before split()
@@ -325,13 +327,13 @@ class Common(object):
             word = w.strip(cls.punctuation)
             if word:
                 yield word
-    word_iter = classmethod(word_iter)
 
+    @classmethod
     def words(cls, text):
         """Returns a list of words in text."""
         return [w for w in cls.word_iter(text)]
-    words = classmethod(words)
 
+    @classmethod
     def sentence_iter(cls, text, strip=True):
         """Returns an iterator over the sentences in text."""
         lastmatch = 0
@@ -348,21 +350,20 @@ class Common(object):
             remainder = remainder.strip()
         if remainder:
             yield remainder
-    sentence_iter = classmethod(sentence_iter)
 
+    @classmethod
     def sentences(cls, text, strip=True):
         """Returns a list of sentences in text."""
         return [s for s in cls.sentence_iter(text, strip=strip)]
-    sentences = classmethod(sentences)
 
+    @classmethod
     def capsstart(cls, text):
         """Determines whether the text starts with a capital letter."""
         stripped = text.lstrip().lstrip(cls.punctuation)
         return stripped and stripped[0].isupper()
-    capsstart = classmethod(capsstart)
 
+    @classmethod
     def numstart(cls, text):
         """Determines whether the text starts with a numeric value."""
         stripped = text.lstrip().lstrip(cls.punctuation)
         return stripped and stripped[0].isnumeric()
-    numstart = classmethod(numstart)
diff --git a/translate/lang/data.py b/translate/lang/data.py
index f2472e5..0d1f8ae 100644
--- a/translate/lang/data.py
+++ b/translate/lang/data.py
@@ -22,150 +22,150 @@
 
 
 languages = {
-'af': (u'Afrikaans', 2, '(n != 1)'),
-'ak': (u'Akan', 2, 'n > 1'),
-'am': (u'Amharic', 2, 'n > 1'),
-'an': (u'Aragonese', 2, '(n != 1)'),
-'ar': (u'Arabic', 6,
-       'n==0 ? 0 : n==1 ? 1 : n==2 ? 2 : n%100>=3 && n%100<=10 ? 3 : n%100>=11 ? 4 : 5'),
-'arn': (u'Mapudungun; Mapuche', 2, 'n > 1'),
-'ast': (u'Asturian; Bable; Leonese; Asturleonese', 2, '(n != 1)'),
-'az': (u'Azerbaijani', 2, '(n != 1)'),
-'be': (u'Belarusian', 3,
-       'n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
-'bg': (u'Bulgarian', 2, '(n != 1)'),
-'bn': (u'Bengali', 2, '(n != 1)'),
-'bn_IN': (u'Bengali (India)', 2, '(n != 1)'),
-'bo': (u'Tibetan', 1, '0'),
-'br': (u'Breton', 2, 'n > 1'),
-'bs': (u'Bosnian', 3,
-       'n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
-'ca': (u'Catalan; Valencian', 2, '(n != 1)'),
-'ca@valencia': (u'Catalan; Valencian (Valencia)', 2, '(n != 1)'),
-'cs': (u'Czech', 3, '(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2'),
-'csb': (u'Kashubian', 3,
-        'n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
-'cy': (u'Welsh', 2, '(n==2) ? 1 : 0'),
-'da': (u'Danish', 2, '(n != 1)'),
-'de': (u'German', 2, '(n != 1)'),
-'dz': (u'Dzongkha', 1, '0'),
-'el': (u'Greek, Modern (1453-)', 2, '(n != 1)'),
-'en': (u'English', 2, '(n != 1)'),
-'en_GB': (u'English (United Kingdom)', 2, '(n != 1)'),
-'en_ZA': (u'English (South Africa)', 2, '(n != 1)'),
-'eo': (u'Esperanto', 2, '(n != 1)'),
-'es': (u'Spanish; Castilian', 2, '(n != 1)'),
-'et': (u'Estonian', 2, '(n != 1)'),
-'eu': (u'Basque', 2, '(n != 1)'),
-'fa': (u'Persian', 1, '0'),
-'ff': (u'Fulah', 2, '(n != 1)'),
-'fi': (u'Finnish', 2, '(n != 1)'),
-'fil': (u'Filipino; Pilipino', 2, '(n > 1)'),
-'fo': (u'Faroese', 2, '(n != 1)'),
-'fr': (u'French', 2, '(n > 1)'),
-'fur': (u'Friulian', 2, '(n != 1)'),
-'fy': (u'Frisian', 2, '(n != 1)'),
-'ga': (u'Irish', 5, 'n==1 ? 0 : n==2 ? 1 : n<7 ? 2 : n<11 ? 3 : 4'),
-'gd': (u'Gaelic; Scottish Gaelic', 4, '(n==1 || n==11) ? 0 : (n==2 || n==12) ? 1 : (n > 2 && n < 20) ? 2 : 3'),
-'gl': (u'Galician', 2, '(n != 1)'),
-'gu': (u'Gujarati', 2, '(n != 1)'),
-'gun': (u'Gun', 2, '(n > 1)'),
-'ha': (u'Hausa', 2, '(n != 1)'),
-'he': (u'Hebrew', 2, '(n != 1)'),
-'hi': (u'Hindi', 2, '(n != 1)'),
-'hy': (u'Armenian', 1, '0'),
-'hr': (u'Croatian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'ht': (u'Haitian; Haitian Creole', 2, '(n != 1)'),
-'hu': (u'Hungarian', 2, '(n != 1)'),
-'ia': (u"Interlingua (International Auxiliary Language Association)", 2, '(n != 1)'),
-'id': (u'Indonesian', 1, '0'),
-'is': (u'Icelandic', 2, '(n != 1)'),
-'it': (u'Italian', 2, '(n != 1)'),
-'ja': (u'Japanese', 1, '0'),
-'jv': (u'Javanese', 2, '(n != 1)'),
-'ka': (u'Georgian', 1, '0'),
-'kk': (u'Kazakh', 1, '0'),
-'km': (u'Central Khmer', 1, '0'),
-'kn': (u'Kannada', 2, '(n != 1)'),
-'ko': (u'Korean', 1, '0'),
-'ku': (u'Kurdish', 2, '(n != 1)'),
-'kw': (u'Cornish', 4, '(n==1) ? 0 : (n==2) ? 1 : (n == 3) ? 2 : 3'),
-'ky': (u'Kirghiz; Kyrgyz', 1, '0'),
-'lb': (u'Luxembourgish; Letzeburgesch', 2, '(n != 1)'),
-'ln': (u'Lingala', 2, '(n > 1)'),
-'lo': (u'Lao', 1, '0'),
-'lt': (u'Lithuanian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'lv': (u'Latvian', 3, '(n%10==1 && n%100!=11 ? 0 : n != 0 ? 1 : 2)'),
-'mai': (u'Maithili', 2, '(n != 1)'),
-'mfe': (u'Morisyen', 2, '(n > 1)'),
-'mg': (u'Malagasy', 2, '(n > 1)'),
-'mi': (u'Maori', 2, '(n > 1)'),
-'mk': (u'Macedonian', 2, 'n==1 || n%10==1 ? 0 : 1'),
-'ml': (u'Malayalam', 2, '(n != 1)'),
-'mn': (u'Mongolian', 2, '(n != 1)'),
-'mr': (u'Marathi', 2, '(n != 1)'),
-'ms': (u'Malay', 1, '0'),
-'mt': (u'Maltese', 4,
-       '(n==1 ? 0 : n==0 || ( n%100>1 && n%100<11) ? 1 : (n%100>10 && n%100<20 ) ? 2 : 3)'),
-'nah': (u'Nahuatl languages', 2, '(n != 1)'),
-'nap': (u'Neapolitan', 2, '(n != 1)'),
-'nb': (u'Bokmål, Norwegian; Norwegian Bokmål', 2, '(n != 1)'),
-'ne': (u'Nepali', 2, '(n != 1)'),
-'nl': (u'Dutch; Flemish', 2, '(n != 1)'),
-'nn': (u'Norwegian Nynorsk; Nynorsk, Norwegian', 2, '(n != 1)'),
-'nqo': (u"N'Ko", 2, '(n > 1)'),
-'nso': (u'Pedi; Sepedi; Northern Sotho', 2, '(n != 1)'),
-'oc': (u'Occitan (post 1500)', 2, '(n > 1)'),
-'or': (u'Oriya', 2, '(n != 1)'),
-'pa': (u'Panjabi; Punjabi', 2, '(n != 1)'),
-'pap': (u'Papiamento', 2, '(n != 1)'),
-'pl': (u'Polish', 3,
-       '(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'pms': (u'Piemontese', 2, '(n != 1)'),
-'ps': (u'Pushto; Pashto', 2, '(n != 1)'),
-'pt': (u'Portuguese', 2, '(n != 1)'),
-'pt_BR': (u'Portuguese (Brazil)', 2, '(n != 1)'),
-'rm': (u'Romansh', 2, '(n != 1)'),
-'ro': (u'Romanian', 3, '(n==1 ? 0 : (n==0 || (n%100 > 0 && n%100 < 20)) ? 1 : 2);'),
-'ru': (u'Russian', 3,
-      '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'sah': (u'Yakut', 1, '0'),
-'sco': (u'Scots', 2, '(n != 1)'),
-'si': (u'Sinhala; Sinhalese', 2, '(n != 1)'),
-'sk': (u'Slovak', 3, '(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2'),
-'sl': (u'Slovenian', 4, '(n%100==1 ? 0 : n%100==2 ? 1 : n%100==3 || n%100==4 ? 2 : 3)'),
-'so': (u'Somali', 2, '(n != 1)'),
-'son': (u'Songhai languages', 2, '(n != 1)'),
-'sq': (u'Albanian', 2, '(n != 1)'),
-'sr': (u'Serbian', 3,
-       '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'st': (u'Sotho, Southern', 2, '(n != 1)'),
-'su': (u'Sundanese', 1, '0'),
-'sv': (u'Swedish', 2, '(n != 1)'),
-'sw': (u'Swahili', 2, '(n != 1)'),
-'ta': (u'Tamil', 2, '(n != 1)'),
-'te': (u'Telugu', 2, '(n != 1)'),
-'tg': (u'Tajik', 2, '(n != 1)'),
-'ti': (u'Tigrinya', 2, '(n > 1)'),
-'th': (u'Thai', 1, '0'),
-'tk': (u'Turkmen', 2, '(n != 1)'),
-'tr': (u'Turkish', 1, '0'),
-'tt': (u'Tatar', 1, '0'),
-'ug': (u'Uighur; Uyghur', 1, '0'),
-'uk': (u'Ukrainian', 3,
-       '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'vi': (u'Vietnamese', 1, '0'),
-'ve': (u'Venda', 2, '(n != 1)'),
-'wa': (u'Walloon', 2, '(n > 1)'),
-'wo': (u'Wolof', 2, '(n != 1)'),
-'yo': (u'Yoruba', 2, '(n != 1)'),
-# Chinese is difficult because the main divide is on script, not really
-# country. Simplified Chinese is used mostly in China, Singapore and Malaysia.
-# Traditional Chinese is used mostly in Hong Kong, Taiwan and Macau.
-'zh_CN': (u'Chinese (China)', 1, '0'),
-'zh_HK': (u'Chinese (Hong Kong)', 1, '0'),
-'zh_TW': (u'Chinese (Taiwan)', 1, '0'),
-'zu': (u'Zulu', 2, '(n != 1)'),
+    'af': (u'Afrikaans', 2, '(n != 1)'),
+    'ak': (u'Akan', 2, 'n > 1'),
+    'am': (u'Amharic', 2, 'n > 1'),
+    'an': (u'Aragonese', 2, '(n != 1)'),
+    'ar': (u'Arabic', 6,
+           'n==0 ? 0 : n==1 ? 1 : n==2 ? 2 : n%100>=3 && n%100<=10 ? 3 : n%100>=11 ? 4 : 5'),
+    'arn': (u'Mapudungun; Mapuche', 2, 'n > 1'),
+    'ast': (u'Asturian; Bable; Leonese; Asturleonese', 2, '(n != 1)'),
+    'az': (u'Azerbaijani', 2, '(n != 1)'),
+    'be': (u'Belarusian', 3,
+           'n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
+    'bg': (u'Bulgarian', 2, '(n != 1)'),
+    'bn': (u'Bengali', 2, '(n != 1)'),
+    'bn_IN': (u'Bengali (India)', 2, '(n != 1)'),
+    'bo': (u'Tibetan', 1, '0'),
+    'br': (u'Breton', 2, 'n > 1'),
+    'bs': (u'Bosnian', 3,
+           'n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
+    'ca': (u'Catalan; Valencian', 2, '(n != 1)'),
+    'ca@valencia': (u'Catalan; Valencian (Valencia)', 2, '(n != 1)'),
+    'cs': (u'Czech', 3, '(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2'),
+    'csb': (u'Kashubian', 3,
+            'n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
+    'cy': (u'Welsh', 2, '(n==2) ? 1 : 0'),
+    'da': (u'Danish', 2, '(n != 1)'),
+    'de': (u'German', 2, '(n != 1)'),
+    'dz': (u'Dzongkha', 1, '0'),
+    'el': (u'Greek, Modern (1453-)', 2, '(n != 1)'),
+    'en': (u'English', 2, '(n != 1)'),
+    'en_GB': (u'English (United Kingdom)', 2, '(n != 1)'),
+    'en_ZA': (u'English (South Africa)', 2, '(n != 1)'),
+    'eo': (u'Esperanto', 2, '(n != 1)'),
+    'es': (u'Spanish; Castilian', 2, '(n != 1)'),
+    'et': (u'Estonian', 2, '(n != 1)'),
+    'eu': (u'Basque', 2, '(n != 1)'),
+    'fa': (u'Persian', 1, '0'),
+    'ff': (u'Fulah', 2, '(n != 1)'),
+    'fi': (u'Finnish', 2, '(n != 1)'),
+    'fil': (u'Filipino; Pilipino', 2, '(n > 1)'),
+    'fo': (u'Faroese', 2, '(n != 1)'),
+    'fr': (u'French', 2, '(n > 1)'),
+    'fur': (u'Friulian', 2, '(n != 1)'),
+    'fy': (u'Frisian', 2, '(n != 1)'),
+    'ga': (u'Irish', 5, 'n==1 ? 0 : n==2 ? 1 : n<7 ? 2 : n<11 ? 3 : 4'),
+    'gd': (u'Gaelic; Scottish Gaelic', 4, '(n==1 || n==11) ? 0 : (n==2 || n==12) ? 1 : (n > 2 && n < 20) ? 2 : 3'),
+    'gl': (u'Galician', 2, '(n != 1)'),
+    'gu': (u'Gujarati', 2, '(n != 1)'),
+    'gun': (u'Gun', 2, '(n > 1)'),
+    'ha': (u'Hausa', 2, '(n != 1)'),
+    'he': (u'Hebrew', 2, '(n != 1)'),
+    'hi': (u'Hindi', 2, '(n != 1)'),
+    'hy': (u'Armenian', 1, '0'),
+    'hr': (u'Croatian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+    'ht': (u'Haitian; Haitian Creole', 2, '(n != 1)'),
+    'hu': (u'Hungarian', 2, '(n != 1)'),
+    'ia': (u"Interlingua (International Auxiliary Language Association)", 2, '(n != 1)'),
+    'id': (u'Indonesian', 1, '0'),
+    'is': (u'Icelandic', 2, '(n != 1)'),
+    'it': (u'Italian', 2, '(n != 1)'),
+    'ja': (u'Japanese', 1, '0'),
+    'jv': (u'Javanese', 2, '(n != 1)'),
+    'ka': (u'Georgian', 1, '0'),
+    'kk': (u'Kazakh', 1, '0'),
+    'km': (u'Central Khmer', 1, '0'),
+    'kn': (u'Kannada', 2, '(n != 1)'),
+    'ko': (u'Korean', 1, '0'),
+    'ku': (u'Kurdish', 2, '(n != 1)'),
+    'kw': (u'Cornish', 4, '(n==1) ? 0 : (n==2) ? 1 : (n == 3) ? 2 : 3'),
+    'ky': (u'Kirghiz; Kyrgyz', 1, '0'),
+    'lb': (u'Luxembourgish; Letzeburgesch', 2, '(n != 1)'),
+    'ln': (u'Lingala', 2, '(n > 1)'),
+    'lo': (u'Lao', 1, '0'),
+    'lt': (u'Lithuanian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+    'lv': (u'Latvian', 3, '(n%10==1 && n%100!=11 ? 0 : n != 0 ? 1 : 2)'),
+    'mai': (u'Maithili', 2, '(n != 1)'),
+    'mfe': (u'Morisyen', 2, '(n > 1)'),
+    'mg': (u'Malagasy', 2, '(n > 1)'),
+    'mi': (u'Maori', 2, '(n > 1)'),
+    'mk': (u'Macedonian', 2, 'n==1 || n%10==1 ? 0 : 1'),
+    'ml': (u'Malayalam', 2, '(n != 1)'),
+    'mn': (u'Mongolian', 2, '(n != 1)'),
+    'mr': (u'Marathi', 2, '(n != 1)'),
+    'ms': (u'Malay', 1, '0'),
+    'mt': (u'Maltese', 4,
+           '(n==1 ? 0 : n==0 || ( n%100>1 && n%100<11) ? 1 : (n%100>10 && n%100<20 ) ? 2 : 3)'),
+    'nah': (u'Nahuatl languages', 2, '(n != 1)'),
+    'nap': (u'Neapolitan', 2, '(n != 1)'),
+    'nb': (u'Bokmål, Norwegian; Norwegian Bokmål', 2, '(n != 1)'),
+    'ne': (u'Nepali', 2, '(n != 1)'),
+    'nl': (u'Dutch; Flemish', 2, '(n != 1)'),
+    'nn': (u'Norwegian Nynorsk; Nynorsk, Norwegian', 2, '(n != 1)'),
+    'nqo': (u"N'Ko", 2, '(n > 1)'),
+    'nso': (u'Pedi; Sepedi; Northern Sotho', 2, '(n != 1)'),
+    'oc': (u'Occitan (post 1500)', 2, '(n > 1)'),
+    'or': (u'Oriya', 2, '(n != 1)'),
+    'pa': (u'Panjabi; Punjabi', 2, '(n != 1)'),
+    'pap': (u'Papiamento', 2, '(n != 1)'),
+    'pl': (u'Polish', 3,
+           '(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+    'pms': (u'Piemontese', 2, '(n != 1)'),
+    'ps': (u'Pushto; Pashto', 2, '(n != 1)'),
+    'pt': (u'Portuguese', 2, '(n != 1)'),
+    'pt_BR': (u'Portuguese (Brazil)', 2, '(n != 1)'),
+    'rm': (u'Romansh', 2, '(n != 1)'),
+    'ro': (u'Romanian', 3, '(n==1 ? 0 : (n==0 || (n%100 > 0 && n%100 < 20)) ? 1 : 2);'),
+    'ru': (u'Russian', 3,
+          '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+    'sah': (u'Yakut', 1, '0'),
+    'sco': (u'Scots', 2, '(n != 1)'),
+    'si': (u'Sinhala; Sinhalese', 2, '(n != 1)'),
+    'sk': (u'Slovak', 3, '(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2'),
+    'sl': (u'Slovenian', 4, '(n%100==1 ? 0 : n%100==2 ? 1 : n%100==3 || n%100==4 ? 2 : 3)'),
+    'so': (u'Somali', 2, '(n != 1)'),
+    'son': (u'Songhai languages', 2, '(n != 1)'),
+    'sq': (u'Albanian', 2, '(n != 1)'),
+    'sr': (u'Serbian', 3,
+           '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+    'st': (u'Sotho, Southern', 2, '(n != 1)'),
+    'su': (u'Sundanese', 1, '0'),
+    'sv': (u'Swedish', 2, '(n != 1)'),
+    'sw': (u'Swahili', 2, '(n != 1)'),
+    'ta': (u'Tamil', 2, '(n != 1)'),
+    'te': (u'Telugu', 2, '(n != 1)'),
+    'tg': (u'Tajik', 2, '(n != 1)'),
+    'ti': (u'Tigrinya', 2, '(n > 1)'),
+    'th': (u'Thai', 1, '0'),
+    'tk': (u'Turkmen', 2, '(n != 1)'),
+    'tr': (u'Turkish', 1, '0'),
+    'tt': (u'Tatar', 1, '0'),
+    'ug': (u'Uighur; Uyghur', 1, '0'),
+    'uk': (u'Ukrainian', 3,
+           '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+    'vi': (u'Vietnamese', 1, '0'),
+    've': (u'Venda', 2, '(n != 1)'),
+    'wa': (u'Walloon', 2, '(n > 1)'),
+    'wo': (u'Wolof', 2, '(n != 1)'),
+    'yo': (u'Yoruba', 2, '(n != 1)'),
+    # Chinese is difficult because the main divide is on script, not really
+    # country. Simplified Chinese is used mostly in China, Singapore and Malaysia.
+    # Traditional Chinese is used mostly in Hong Kong, Taiwan and Macau.
+    'zh_CN': (u'Chinese (China)', 1, '0'),
+    'zh_HK': (u'Chinese (Hong Kong)', 1, '0'),
+    'zh_TW': (u'Chinese (Taiwan)', 1, '0'),
+    'zu': (u'Zulu', 2, '(n != 1)'),
 }
 """Dictionary of language data.
 The language code is the dictionary key (which may contain country codes
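Each value in the dictionary above is a tuple of (language name, number of plural forms, plural equation), where the equation is the C-style expression used in a PO file's Plural-Forms header. As a hedged illustration, CPython's gettext module can compile such an expression into a callable via `gettext.c2py` (a long-standing but undocumented module-level helper); this is not how the toolkit itself consumes the table, just a way to see the equations in action:

```python
import gettext

# The Polish entry above: 3 plural forms selected by a C-style expression.
polish_rule = '(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'

# gettext.c2py compiles the C ternary/boolean syntax into a Python function
# mapping a count n to a plural-form index (0, 1 or 2 for Polish).
plural = gettext.c2py(polish_rule)

print([plural(n) for n in (1, 2, 5, 22, 112)])  # -> [0, 1, 2, 1, 2]
```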
@@ -249,8 +249,9 @@ expansion_factors = {
 
 import gettext
 import locale
-import re
 import os
+import re
+
 
 iso639 = {}
 """ISO 639 language codes"""
@@ -268,7 +269,7 @@ def languagematch(languagecode, otherlanguagecode):
     if languagecode is None:
         return langcode_re.match(otherlanguagecode)
     return languagecode == otherlanguagecode or \
-           (otherlanguagecode.startswith(languagecode) and \
+           (otherlanguagecode.startswith(languagecode) and
             variant_re.match(otherlanguagecode[len(languagecode):]))
 
 dialect_name_re = re.compile(r"(.+)\s\(([^)\d]{,25})\)$")
@@ -413,9 +414,10 @@ def simplify_to_common(language_code, languages=languages):
     else:
         return simplify_to_common(simpler)
 
+
 def get_language(code):
     code = code.replace("-", "_").replace("@", "_").lower()
     if "_" in code:
         # convert ab_cd → ab_CD
-        code = "%s_%s" %(code.split("_")[0], code.split("_", 1)[1].upper())
+        code = "%s_%s" % (code.split("_")[0], code.split("_", 1)[1].upper())
     return languages.get(code, None)
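The normalisation performed by `get_language()` in the hunk above can be sketched standalone (`normalize_code` is a hypothetical helper name introduced here; the body mirrors the two steps in the diff):

```python
def normalize_code(code):
    # Mirror get_language() above: unify '-' and '@' separators to '_',
    # lower-case everything, then upper-case the country/variant part.
    code = code.replace("-", "_").replace("@", "_").lower()
    if "_" in code:
        # convert ab_cd -> ab_CD
        code = "%s_%s" % (code.split("_")[0], code.split("_", 1)[1].upper())
    return code

print(normalize_code("pt-br"), normalize_code("zh-TW"))  # pt_BR zh_TW
```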
diff --git a/translate/lang/el.py b/translate/lang/el.py
index d102124..7cfabe1 100644
--- a/translate/lang/el.py
+++ b/translate/lang/el.py
@@ -40,8 +40,7 @@ class el(common.Common):
        [%s]        # the punctuation for sentence ending
        \s+         # the spacing after the punctuation
         (?=[^a-zά-ώ\d])  # lookahead that next part starts with caps
-        """ % sentenceend, re.VERBOSE | re.UNICODE
-    )
+        """ % sentenceend, re.VERBOSE | re.UNICODE)
 
     puncdict = {
         u"?": u";",
diff --git a/translate/lang/es.py b/translate/lang/es.py
index 02b3248..0a89f2e 100644
--- a/translate/lang/es.py
+++ b/translate/lang/es.py
@@ -30,13 +30,14 @@ from translate.lang import common
 class es(common.Common):
     """This class represents Spanish."""
 
+    @classmethod
     def punctranslate(cls, text):
         """Implement some extra features for inverted punctuation.
         """
         text = super(cls, cls).punctranslate(text)
         # If the first sentence ends with ? or !, prepend inverted ¿ or ¡
         firstmatch = cls.sentencere.match(text)
-        if firstmatch == None:
+        if firstmatch is None:
             # only one sentence (if any) - use entire string
             first = text
         else:
@@ -51,4 +52,3 @@ class es(common.Common):
         elif first[-1] == '!':
             text = u"¡" + text
         return text
-    punctranslate = classmethod(punctranslate)
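The Spanish rule in the rewritten `punctranslate` can be sketched in isolation (simplified: the real method uses the class's inherited `sentencere` regex to find the first sentence, so the plain split below is only an approximation):

```python
def prepend_inverted(text):
    # Approximate the class logic above: look at the first sentence only,
    # and if it ends in '?' or '!', prepend the inverted Spanish mark.
    first = text.split(". ", 1)[0]
    if first.endswith("?"):
        return u"\u00bf" + text   # ¿
    if first.endswith("!"):
        return u"\u00a1" + text   # ¡
    return text

print(prepend_inverted(u"Desea guardarlo?"))  # ¿Desea guardarlo?
```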
diff --git a/translate/lang/fa.py b/translate/lang/fa.py
index b893597..b05740a 100644
--- a/translate/lang/fa.py
+++ b/translate/lang/fa.py
@@ -23,10 +23,10 @@
 .. seealso:: http://en.wikipedia.org/wiki/Persian_language
 """
 
-from translate.lang import common
-
 import re
 
+from translate.lang import common
+
 
 def guillemets(text):
 
@@ -69,8 +69,8 @@ class fa(common.Common):
     #TODO: check persian numerics
     #TODO: zwj and zwnj?
 
+    @classmethod
     def punctranslate(cls, text):
         """Implement "French" quotation marks."""
         text = super(cls, cls).punctranslate(text)
         return guillemets(text)
-    punctranslate = classmethod(punctranslate)
diff --git a/translate/lang/factory.py b/translate/lang/factory.py
index 5dd4bfe..82546fb 100644
--- a/translate/lang/factory.py
+++ b/translate/lang/factory.py
@@ -20,8 +20,8 @@
 
 """This module provides a factory to instantiate language classes."""
 
-from translate.lang import common
-from translate.lang import data
+from translate.lang import common, data
+
 
 prefix = "code_"
 
@@ -44,7 +44,7 @@ def getlanguage(code):
                             internal_code)
         langclass = getattr(module, internal_code)
         return langclass(code)
-    except ImportError, e:
+    except ImportError as e:
         simplercode = data.simplercode(code)
         if simplercode:
             relatedlanguage = getlanguage(simplercode)
diff --git a/translate/lang/fr.py b/translate/lang/fr.py
index ec197d3..b5d0bcc 100644
--- a/translate/lang/fr.py
+++ b/translate/lang/fr.py
@@ -68,6 +68,7 @@ class fr(common.Common):
     # TODO: consider adding % and $, but think about the consequences of how
     # they could be part of variables
 
+    @classmethod
     def punctranslate(cls, text):
         """Implement some extra features for quotation marks.
 
@@ -80,4 +81,3 @@ class fr(common.Common):
         # http ://
         text = text.replace(u"\u00a0://", "://")
         return guillemets(text)
-    punctranslate = classmethod(punctranslate)
diff --git a/translate/lang/hy.py b/translate/lang/hy.py
index 599ddf4..4b7fdc4 100644
--- a/translate/lang/hy.py
+++ b/translate/lang/hy.py
@@ -44,8 +44,7 @@ class hy(common.Common):
        [%s]        # the punctuation for sentence ending
        \s+         # the spacing after the punctuation
         (?=[^a-zա-ֆ\d])  # lookahead that next part starts with caps
-        """ % sentenceend, re.VERBOSE | re.UNICODE
-    )
+        """ % sentenceend, re.VERBOSE | re.UNICODE)
 
     puncdict = {
         u".": u"։",
diff --git a/translate/lang/identify.py b/translate/lang/identify.py
index d5bd253..c5f4694 100644
--- a/translate/lang/identify.py
+++ b/translate/lang/identify.py
@@ -25,9 +25,9 @@ models.
 
 from os import extsep, path
 
+from translate.lang.ngram import NGram
 from translate.misc.file_discovery import get_abs_data_filename
 from translate.storage.base import TranslationStore
-from translate.lang.ngram import NGram
 
 
 class LanguageIdentifier(object):
@@ -128,4 +128,4 @@ if __name__ == "__main__":
     import locale
     encoding = locale.getpreferredencoding()
     text = file(argv[1]).read().decode(encoding)
-    print "Language detected:", identifier.identify_lang(text)
+    print("Language detected:", identifier.identify_lang(text))
diff --git a/translate/lang/ngram.py b/translate/lang/ngram.py
index 2aa1192..6fc5719 100644
--- a/translate/lang/ngram.py
+++ b/translate/lang/ngram.py
@@ -26,10 +26,10 @@
 .. note:: Original code from http://thomas.mangin.me.uk/data/source/ngram.py
 """
 
-import sys
+import glob
 import re
+import sys
 from os import path
-import glob
 
 
 nb_ngrams = 400
@@ -118,12 +118,12 @@ class NGram:
                     for i, line in enumerate(lines):
                         ngram, _t, _f = line.partition(u'\t')
                         ngrams[ngram] = i
-                except AttributeError, e:
+                except AttributeError as e:
                     # Python2.4 doesn't have unicode.partition()
                     for i, line in enumerate(lines):
                         ngram = line.split(u'\t')[0]
                         ngrams[ngram] = i
-            except UnicodeDecodeError, e:
+            except UnicodeDecodeError as e:
                 continue
 
             if ngrams:
@@ -186,4 +186,4 @@ if __name__ == '__main__':
     text = sys.stdin.readline()
     from translate.misc.file_discovery import get_abs_data_filename
     l = NGram(get_abs_data_filename('langmodels'))
-    print l.classify(text)
+    print(l.classify(text))
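The classifier in ngram.py follows the rank-profile scheme of Cavnar & Trenkle (the module's `nb_ngrams = 400` is the profile size). A minimal sketch of that idea, using hypothetical helper names rather than the module's API:

```python
from collections import Counter

MAX_RANK = 400  # mirrors nb_ngrams in the module above

def profile(text, n=3, top=MAX_RANK):
    # Rank the `top` most frequent character n-grams of the text.
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return {g: rank for rank, (g, _) in enumerate(grams.most_common(top))}

def out_of_place(doc, model):
    # Cavnar-Trenkle "out of place" distance: sum of rank differences,
    # with a fixed penalty for n-grams missing from the language model.
    return sum(abs(rank - model[g]) if g in model else MAX_RANK
               for g, rank in doc.items())

en = profile("the cat sat on the mat and the hat was on the rat")
de = profile("der Hund lief durch den Wald und den Park zum Haus")
sample = profile("the rat sat on the mat")
print(out_of_place(sample, en) < out_of_place(sample, de))  # True
```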
diff --git a/translate/lang/nqo.py b/translate/lang/nqo.py
index e847f79..61d566f 100644
--- a/translate/lang/nqo.py
+++ b/translate/lang/nqo.py
@@ -48,7 +48,7 @@ class nqo(common.Common):
 
     ignoretests = ["startcaps", "simplecaps", "acronyms"]
 
+    @classmethod
     def punctranslate(cls, text):
         text = super(cls, cls).punctranslate(text)
         return reverse_quotes(text)
-    punctranslate = classmethod(punctranslate)
diff --git a/translate/lang/team.py b/translate/lang/team.py
index 20413a5..5b372db 100644
--- a/translate/lang/team.py
+++ b/translate/lang/team.py
@@ -24,8 +24,6 @@ the header of a Gettext PO file.
 
 import re
 
-from translate.misc.typecheck import accepts, returns, IsOneOf
-from translate.misc.typecheck.typeclasses import String
 
 __all__ = ['LANG_TEAM_CONTACT_SNIPPETS', 'guess_language']
 
@@ -411,8 +409,6 @@ def _snippet_guesser(snippets_dict, string, filter_=_nofilter):
     return None
 
 
-@accepts(unicode)
-@returns(IsOneOf(String, type(None)))
 def guess_language(team_string):
     """Gueses the language of a PO file based on the Language-Team entry"""
 
@@ -440,4 +436,4 @@ if __name__ == "__main__":
     from translate.storage import factory
     for fname in argv[1:]:
         store = factory.getobject(fname)
-        print fname, guess_language(store.parseheader().get('Language-Team', u""))
+        print(fname, guess_language(store.parseheader().get('Language-Team', u"")))
diff --git a/translate/lang/test_af.py b/translate/lang/test_af.py
index 186f38d..be5a590 100644
--- a/translate/lang/test_af.py
+++ b/translate/lang/test_af.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.lang import factory
-from translate.lang import af
+from translate.lang import af, factory
 
 
 def test_sentences():
@@ -35,8 +34,8 @@ def test_capsstart():
 def test_transliterate_cyrillic():
 
     def trans(text):
-        print ("Orig: %s" % text).encode("utf-8")
+        print(("Orig: %s" % text).encode("utf-8"))
         trans = af.tranliterate_cyrillic(text)
-        print ("Trans: %s" % trans).encode("utf-8")
+        print(("Trans: %s" % trans).encode("utf-8"))
         return trans
     assert trans(u"Борис Николаевич Ельцин") == u"Boris Nikolajewitj Jeltsin"
diff --git a/translate/lang/test_am.py b/translate/lang/test_am.py
index c172f7f..0275184 100644
--- a/translate/lang/test_am.py
+++ b/translate/lang/test_am.py
@@ -22,5 +22,5 @@ def test_sentences():
     assert sentences == []
 
     sentences = language.sentences(u"ለምልክቱ መግቢያ የተለየ መለያ። ይህ የሚጠቅመው የታሪኩን ዝርዝር ለማስቀመጥ ነው።")
-    print sentences
+    print(sentences)
     assert sentences == [u"ለምልክቱ መግቢያ የተለየ መለያ።", u"ይህ የሚጠቅመው የታሪኩን ዝርዝር ለማስቀመጥ ነው።"]
diff --git a/translate/lang/test_ar.py b/translate/lang/test_ar.py
index 2e4818c..32f0b89 100644
--- a/translate/lang/test_ar.py
+++ b/translate/lang/test_ar.py
@@ -11,7 +11,7 @@ def test_punctranslate():
     assert language.punctranslate(u"abc efg") == u"abc efg"
     assert language.punctranslate(u"abc efg.") == u"abc efg."
     assert language.punctranslate(u"abc, efg; d?") == u"abc، efg؛ d؟"
-    # See http://bugs.locamotion.org/show_bug.cgi?id=1819
+    # See https://github.com/translate/translate/issues/1819
     assert language.punctranslate(u"It is called “abc”") == u"It is called ”abc“"
 
 
@@ -22,10 +22,10 @@ def test_sentences():
     assert sentences == []
 
     sentences = language.sentences(u"يوجد بالفعل مجلد بالإسم \"%s\". أترغب في استبداله؟")
-    print sentences
+    print(sentences)
     assert sentences == [u"يوجد بالفعل مجلد بالإسم \"%s\".", u"أترغب في استبداله؟"]
     # This probably doesn't make sense: it is just the above reversed, to make sure
     # we test the '؟' as an end of sentence marker.
     sentences = language.sentences(u"أترغب في استبداله؟ يوجد بالفعل مجلد بالإسم \"%s\".")
-    print sentences
+    print(sentences)
     assert sentences == [u"أترغب في استبداله؟", u"يوجد بالفعل مجلد بالإسم \"%s\"."]
diff --git a/translate/lang/test_common.py b/translate/lang/test_common.py
index 78b1379..f334f5c 100644
--- a/translate/lang/test_common.py
+++ b/translate/lang/test_common.py
@@ -1,10 +1,10 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.lang import common
-
 from pytest import mark
 
+from translate.lang import common
+
 
 def test_characters():
     """Test the basic characters segmentation"""
@@ -37,15 +37,15 @@ def test_words():
 
 @mark.xfail("sys.version_info >= (2, 6)",
             reason="ZWS "
-	    "is not considered a space in Python 2.6+. Khmer should extend "
-	    "words() to include \\u200b in addition to other word breakers.")
+                   "is not considered a space in Python 2.6+. Khmer should extend "
+                   "words() to include \\u200b in addition to other word breakers.")
 def test_word_khmer():
     language = common.Common
     # Let's test Khmer with zero width space (\u200b)
     words = language.words(u"ផ្ដល់​យោបល់")
-    print u"ផ្ដល់​យោបល់"
-    print language.words(u"ផ្ដល់​យោបល់")
-    print [u"ផ្ដល់", u"យោបល់"]
+    print(u"ផ្ដល់​យោបល់")
+    print(language.words(u"ផ្ដល់​យោបល់"))
+    print([u"ផ្ដល់", u"យោបល់"])
     assert words == [u"ផ្ដល់", u"យោបល់"]
 
 
diff --git a/translate/lang/test_es.py b/translate/lang/test_es.py
index 2632995..7d50672 100644
--- a/translate/lang/test_es.py
+++ b/translate/lang/test_es.py
@@ -26,5 +26,5 @@ def test_sentences():
     assert sentences == []
 
     sentences = language.sentences(u"El archivo <b>%1</b> ha sido modificado. ¿Desea guardarlo?")
-    print sentences
+    print(sentences)
     assert sentences == [u"El archivo <b>%1</b> ha sido modificado.", u"¿Desea guardarlo?"]
diff --git a/translate/lang/test_identify.py b/translate/lang/test_identify.py
index 7c119cc..10635ab 100644
--- a/translate/lang/test_identify.py
+++ b/translate/lang/test_identify.py
@@ -166,7 +166,7 @@ class TestLanguageIdentifier(object):
         self.langident = LanguageIdentifier()
 
     def test_identify_lang(self):
-        assert self.langident.identify_lang('') == None
+        assert self.langident.identify_lang('') is None
         assert self.langident.identify_lang(TEXT) == 'de'
 
     def test_identify_store(self):
diff --git a/translate/lang/test_km.py b/translate/lang/test_km.py
index 1c9a3e8..93a92c1 100644
--- a/translate/lang/test_km.py
+++ b/translate/lang/test_km.py
@@ -10,8 +10,8 @@ def test_punctranslate():
     assert language.punctranslate(u"") == u""
     assert language.punctranslate(u"abc efg") == u"abc efg"
     assert language.punctranslate(u"abc efg.") == u"abc efg\u00a0។"
-    print language.punctranslate(u"abc efg. hij.").encode('utf-8')
-    print u"abc efg\u00a0។ hij\u00a0។".encode('utf-8')
+    print(language.punctranslate(u"abc efg. hij.").encode('utf-8'))
+    print(u"abc efg\u00a0។ hij\u00a0។".encode('utf-8'))
     assert language.punctranslate(u"abc efg. hij.") == u"abc efg\u00a0។ hij\u00a0។"
     assert language.punctranslate(u"abc efg!") == u"abc efg\u00a0!"
     assert language.punctranslate(u"abc efg? hij!") == u"abc efg\u00a0? hij\u00a0!"
@@ -25,5 +25,5 @@ def test_sentences():
     assert sentences == []
 
     sentences = language.sentences(u"លក្ខណៈ​​នេះ​អាច​ឲ្យ​យើងធ្វើ​ជាតូបនីយកម្មកម្មវិធី​កុំព្យូទ័រ​ ។ លក្ខណៈ​​នេះ​អាច​ឲ្យ​យើងធ្វើ​ជាតូបនីយកម្មកម្មវិធី​កុំព្យូទ័រ​ ។")
-    print sentences
+    print(sentences)
     assert sentences == [u"លក្ខណៈ​​នេះ​អាច​ឲ្យ​យើងធ្វើ​ជាតូបនីយកម្មកម្មវិធី​កុំព្យូទ័រ​ ។", u"លក្ខណៈ​​នេះ​អាច​ឲ្យ​យើងធ្វើ​ជាតូបនីយកម្មកម្មវិធី​កុំព្យូទ័រ​ ។"]
diff --git a/translate/lang/test_ko.py b/translate/lang/test_ko.py
index c221714..7263398 100644
--- a/translate/lang/test_ko.py
+++ b/translate/lang/test_ko.py
@@ -24,5 +24,5 @@ def test_sentences():
     assert sentences == []
 
     sentences = language.sentences(u"이 연락처에 바뀐 부분이 있습니다. 바뀐 사항을 저장하시겠습니까?")
-    print sentences
+    print(sentences)
     assert sentences == [u"이 연락처에 바뀐 부분이 있습니다.", u"바뀐 사항을 저장하시겠습니까?"]
diff --git a/translate/lang/test_nqo.py b/translate/lang/test_nqo.py
index 3005ccc..438e7a4 100644
--- a/translate/lang/test_nqo.py
+++ b/translate/lang/test_nqo.py
@@ -12,7 +12,7 @@ def test_punctranslate():
     assert language.punctranslate(u"abc efg.") == u"abc efg."
     assert language.punctranslate(u"abc efg!") == u"abc efg߹"
     assert language.punctranslate(u"abc, efg; d?") == u"abc߸ efg؛ d؟"
-    # See http://bugs.locamotion.org/show_bug.cgi?id=1819
+    # See https://github.com/translate/translate/issues/1819
     assert language.punctranslate(u"It is called “abc”") == u"It is called ”abc“"
 
 
@@ -25,8 +25,8 @@ def test_sentences():
     # this text probably does not make sense, I just copied it from Firefox
     # translation and added some punctuation marks
     sentences = language.sentences(u"ߡߍ߲ ߠߎ߬ ߦߋ߫ ߓߊ߯ߙߊ߫ ߟߊ߫ ߢߐ߲߮ ߝߍ߬ ߞߊ߬ ߓߟߐߟߐ ߟߊߞߊ߬ߣߍ߲ ߕߏ߫. ߖߊ߬ߡߊ ߣߌ߫ ߓߍ߯ ߛߊ߬ߥߏ ߘߐ߫.")
-    print sentences
+    print(sentences)
     assert sentences == [u"ߡߍ߲ ߠߎ߬ ߦߋ߫ ߓߊ߯ߙߊ߫ ߟߊ߫ ߢߐ߲߮ ߝߍ߬ ߞߊ߬ ߓߟߐߟߐ ߟߊߞߊ߬ߣߍ߲ ߕߏ߫.", u"ߖߊ߬ߡߊ ߣߌ߫ ߓߍ߯ ߛߊ߬ߥߏ ߘߐ߫."]
     sentences = language.sentences(u"ߡߍ߲ ߠߎ߬ ߦߋ߫ ߓߊ߯ߙߊ߫ ߟߊ߫ ߢߐ߲߮ ߝߍ߬ ߞߊ߬ ߓߟߐߟߐ ߟߊߞߊ߬ߣߍ߲ ߕߏ߫? ߖߊ߬ߡߊ ߣߌ߫ ߓߍ߯ ߛߊ߬ߥߏ ߘߐ߫.")
-    print sentences
+    print(sentences)
     assert sentences == [u"ߡߍ߲ ߠߎ߬ ߦߋ߫ ߓߊ߯ߙߊ߫ ߟߊ߫ ߢߐ߲߮ ߝߍ߬ ߞߊ߬ ߓߟߐߟߐ ߟߊߞߊ߬ߣߍ߲ ߕߏ߫?", u"ߖߊ߬ߡߊ ߣߌ߫ ߓߍ߯ ߛߊ߬ߥߏ ߘߐ߫."]
diff --git a/translate/lang/test_team.py b/translate/lang/test_team.py
index fd3c056..0c5a186 100644
--- a/translate/lang/test_team.py
+++ b/translate/lang/test_team.py
@@ -10,11 +10,11 @@ def test_simple():
     # standard regex guess
     assert guess_language(u"ab@li.org") == "ab"
     # We never suggest 'en', it's always a mistake
-    assert guess_language(u"en@li.org") == None
+    assert guess_language(u"en@li.org") is None
     # We can't have a single char language code
-    assert guess_language(u"C@li.org") == None
+    assert guess_language(u"C@li.org") is None
     # Testing regex postfilter
-    assert guess_language(u"LL@li.org") == None
+    assert guess_language(u"LL@li.org") is None
 
     # snippet guess based on contact info
     assert guess_language(u"assam@mm.assam-glug.org") == "as"
diff --git a/translate/lang/test_tr.py b/translate/lang/test_tr.py
index c1e4da0..e6ab87c 100644
--- a/translate/lang/test_tr.py
+++ b/translate/lang/test_tr.py
@@ -3,6 +3,7 @@
 
 from translate.lang import factory
 
+
 def test_sentences():
     """Tests basic functionality of sentence segmentation."""
     language = factory.getlanguage('tr')
diff --git a/translate/lang/tr.py b/translate/lang/tr.py
index b98fba0..1f8b3c3 100644
--- a/translate/lang/tr.py
+++ b/translate/lang/tr.py
@@ -23,6 +23,7 @@
 
 from translate.lang import common
 
+
 class tr(common.Common):
     """This class represents Turkish."""
 
diff --git a/translate/lang/vi.py b/translate/lang/vi.py
index 0353f5e..9af286c 100644
--- a/translate/lang/vi.py
+++ b/translate/lang/vi.py
@@ -23,8 +23,7 @@
 .. seealso:: http://en.wikipedia.org/wiki/Vietnamese_language
 """
 
-from translate.lang import common
-from translate.lang import fr
+from translate.lang import common, fr
 
 
 class vi(common.Common):
@@ -36,6 +35,7 @@ class vi(common.Common):
     for c in u":;!#":
         puncdict[c] = u" %s" % c
 
+    @classmethod
     def punctranslate(cls, text):
         """Implement some extra features for quotation marks.
 
@@ -45,7 +45,6 @@ class vi(common.Common):
         """
         text = super(cls, cls).punctranslate(text)
         return fr.guillemets(text)
-    punctranslate = classmethod(punctranslate)
 
     mozilla_nplurals = 2
     mozilla_pluralequation = "n!=1 ? 1 : 0"
diff --git a/translate/lang/zh.py b/translate/lang/zh.py
index 44a7139..e97e566 100644
--- a/translate/lang/zh.py
+++ b/translate/lang/zh.py
@@ -63,8 +63,8 @@ class zh(common.Common):
         u"% ": u"%",
     }
 
+    @classmethod
     def length_difference(cls, length):
         return 10 - length / 2
-    length_difference = classmethod(length_difference)
 
     ignoretests = ["startcaps", "simplecaps"]
diff --git a/translate/lang/zh_cn.py b/translate/lang/zh_cn.py
index 06ce1bd..225ab1d 100644
--- a/translate/lang/zh_cn.py
+++ b/translate/lang/zh_cn.py
@@ -25,6 +25,6 @@
 
 from translate.lang.zh import zh
 
+
 class zh_cn(zh):
     specialchars = u"←→↔×÷©…—‘’“”【】《》"
-
diff --git a/translate/lang/zh_hk.py b/translate/lang/zh_hk.py
index ca48d28..392a5f4 100644
--- a/translate/lang/zh_hk.py
+++ b/translate/lang/zh_hk.py
@@ -25,6 +25,6 @@
 
 from translate.lang.zh import zh
 
+
 class zh_hk(zh):
     specialchars = u"←→↔×÷©…—‘’“”「」『』【】《》"
-
diff --git a/translate/lang/zh_tw.py b/translate/lang/zh_tw.py
index 19fdb74..2db19d4 100644
--- a/translate/lang/zh_tw.py
+++ b/translate/lang/zh_tw.py
@@ -25,6 +25,6 @@
 
 from translate.lang.zh import zh
 
+
 class zh_tw(zh):
     specialchars = u"←→↔×÷©…—‘’“”「」『』【】《》"
-
diff --git a/translate/misc/autoencode.py b/translate/misc/autoencode.py
index 59e056d..c2033cc 100644
--- a/translate/misc/autoencode.py
+++ b/translate/misc/autoencode.py
@@ -22,6 +22,13 @@
 and uses this when converting to a string."""
 
 
+# Python 3 compatibility
+try:
+    unicode
+except NameError:
+    unicode = str
+
+
 class autoencode(unicode):
 
     def __new__(newtype, string=u"", encoding=None, errors=None):
@@ -40,7 +47,7 @@ class autoencode(unicode):
             elif errors is None:
                 try:
                     newstring = unicode.__new__(newtype, string, encoding)
-                except LookupError, e:
+                except LookupError as e:
                     raise ValueError(str(e))
             elif encoding is None:
                 newstring = unicode.__new__(newtype, string, errors)
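The shim added at the top of autoencode.py lets the unicode-subclassing idiom survive on Python 3. A self-contained sketch of the pattern (`AutoString` is an illustrative class, not the library's exact implementation):

```python
try:
    unicode  # Python 2: the type exists
except NameError:
    unicode = str  # Python 3: alias it to str, as in the diff above

class AutoString(unicode):
    """A unicode subclass that remembers the encoding it was decoded from."""

    def __new__(cls, string=u"", encoding=None, errors=None):
        if isinstance(string, bytes) and encoding is not None:
            string = string.decode(encoding, errors or "strict")
        obj = unicode.__new__(cls, string)
        obj.encoding = encoding  # carried along with the string value
        return obj

s = AutoString(b"caf\xc3\xa9", encoding="utf-8")
print(s, s.encoding)
```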
diff --git a/translate/misc/context.py b/translate/misc/context.py
deleted file mode 100644
index 71e2143..0000000
--- a/translate/misc/context.py
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2002-2006 Zuza Software Foundation
-#
-# This file is part of translate.
-#
-# translate is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-#
-
-import sys
-
-
-def with_(mgr, body):
-    """A function to mimic the with statement introduced in Python 2.5
-
-    The code below was taken from http://www.python.org/dev/peps/pep-0343/
-    """
-    exit = mgr.__exit__  # Not calling it yet
-    value = mgr.__enter__()
-    exc = True
-    try:
-        try:
-            if isinstance(value, (tuple, list)):
-                return body(*value)
-            else:
-                return body(value)
-        except:
-            # The exceptional case is handled here
-            exc = False
-            if not exit(*sys.exc_info()):
-                raise
-            # The exception is swallowed if exit() returns true
-    finally:
-        # The normal and non-local-goto cases are handled here
-        if exc:
-            exit(None, None, None)
diff --git a/translate/misc/contextlib.py b/translate/misc/contextlib.py
deleted file mode 100644
index 636fe2b..0000000
--- a/translate/misc/contextlib.py
+++ /dev/null
@@ -1,199 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2002-2006 Zuza Software Foundation
-#
-# This file is part of translate.
-# The file was copied from the Python 2.5 source.
-#
-# translate is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-#
-
-# NB! IMPORTANT SEMANTIC DIFFERENCE WITH THE OFFICIAL contextlib.
-# In Python 2.5+, if an exception is thrown in a 'with' statement
-# which uses a generator-based context manager (that is, a
-# context manager created by decorating a generator with
-# @contextmanager), the exception will be propagated to the
-# generator via the .throw method of the generator.
-#
-# This does not exist in Python 2.4. Thus, we just naively finish
-# off the context manager. This also means that generator-based
-# context managers can't deal with exceptions, so be warned.
-
-"""Utilities for with-statement contexts.  See PEP 343."""
-
-import sys
-
-__all__ = ["contextmanager", "nested", "closing"]
-
-
-class GeneratorContextManager(object):
-    """Helper for @contextmanager decorator."""
-
-    def __init__(self, gen):
-        self.gen = gen
-
-    def __enter__(self):
-        try:
-            return self.gen.next()
-        except StopIteration:
-            raise RuntimeError("generator didn't yield")
-
-    def __exit__(self, type, value, tb):
-        if type is None:
-            try:
-                self.gen.next()
-            except StopIteration:
-                return
-            else:
-                raise RuntimeError("generator didn't stop")
-        else:
-            if value is None:
-                # Need to force instantiation so we can reliably
-                # tell if we get the same exception back
-                value = type()
-            try:
-                try:
-                    self.gen.next()
-                except StopIteration:
-                    import traceback
-                    traceback.print_exception(type, value, tb)
-                    raise value
-            except StopIteration, exc:
-                # Suppress the exception *unless* it's the same exception that
-                # was passed to throw().  This prevents a StopIteration
-                # raised inside the "with" statement from being suppressed
-                return exc is not value
-
-
-def contextmanager(func):
-    """@contextmanager decorator.
-
-    Typical usage::
-
-        @contextmanager
-        def some_generator(<arguments>):
-            <setup>
-            try:
-                yield <value>
-            finally:
-                <cleanup>
-
-    This makes this::
-
-        with some_generator(<arguments>) as <variable>:
-            <body>
-
-    equivalent to this::
-
-        <setup>
-        try:
-            <variable> = <value>
-            <body>
-        finally:
-            <cleanup>
-
-    """
-
-    def helper(*args, **kwds):
-        return GeneratorContextManager(func(*args, **kwds))
-    try:
-        helper.__name__ = func.__name__
-        helper.__doc__ = func.__doc__
-        helper.__dict__ = func.__dict__
-    except:
-        pass
-    return helper
-
-
-@contextmanager
-def nested(*managers):
-    """Support multiple context managers in a single with-statement.
-
-    Code like this::
-
-        with nested(A, B, C) as (X, Y, Z):
-            <body>
-
-    is equivalent to this::
-
-        with A as X:
-            with B as Y:
-                with C as Z:
-                    <body>
-
-    """
-    exits = []
-    vars = []
-    exc = (None, None, None)
-    # Lambdas are an easy way to create unique objects. We don't want
-    # this to be None, since our answer might actually be None
-    undefined = lambda: 42
-    result = undefined
-
-    try:
-        for mgr in managers:
-            exit = mgr.__exit__
-            enter = mgr.__enter__
-            vars.append(enter())
-            exits.append(exit)
-        result = vars
-    except:
-        exc = sys.exc_info()
-
-    # If nothing has gone wrong, then result contains our return value
-    # and thus it is not equal to 'undefined'. Thus, yield the value.
-    if result != undefined:
-        yield result
-
-    while exits:
-        exit = exits.pop()
-        try:
-            if exit(*exc):
-                exc = (None, None, None)
-        except:
-            exc = sys.exc_info()
-    if exc != (None, None, None):
-        # Don't rely on sys.exc_info() still containing
-        # the right information. Another exception may
-        # have been raised and caught by an exit method
-        raise exc[0], exc[1], exc[2]
-
-
-class closing(object):
-    """Context to automatically close something at the end of a block.
-
-    Code like this::
-
-        with closing(<module>.open(<arguments>)) as f:
-            <block>
-
-    is equivalent to this::
-
-        f = <module>.open(<arguments>)
-        try:
-            <block>
-        finally:
-            f.close()
-
-    """
-
-    def __init__(self, thing):
-        self.thing = thing
-
-    def __enter__(self):
-        return self.thing
-
-    def __exit__(self, *exc_info):
-        self.thing.close()
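[Editor's note: the `closing` backport removed above behaves identically to the stdlib `contextlib.closing` that supersedes it. A quick sketch of the pattern; the `Resource` class is an invented stand-in for a socket or file-like object:

```python
from contextlib import closing  # stdlib equivalent of the removed backport


class Resource:
    """Toy object with a close() method, standing in for a socket or cursor."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


res = Resource()
with closing(res) as r:
    assert r is res          # __enter__ returns the wrapped object unchanged
    assert not r.closed      # still open inside the block
assert res.closed            # close() ran automatically on exit
```

Unlike a plain `with res:`, this works for any object exposing `close()`, even one that is not itself a context manager.]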
diff --git a/translate/misc/decorators.py b/translate/misc/decorators.py
deleted file mode 100644
index dfe7c4b..0000000
--- a/translate/misc/decorators.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2012 Zuza Software Foundation
-#
-# This file is part of translate.
-#
-# translate is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-
-"""Helpers for decorators."""
-
-
-def decorate(decorator):
-    """Simple decorator to turn functions into well-behaved decorators.
-
-    This is used to emulate ``functools.wraps``, which is only available
-    since Python 2.5.
-
-    Borrowed from Python's wiki:
-    http://wiki.python.org/moin/PythonDecoratorLibrary
-    """
-    def new_decorator(f):
-        g = decorator(f)
-        g.__name__ = f.__name__
-        g.__doc__ = f.__doc__
-        g.__dict__.update(f.__dict__)
-
-        return g
-
-    new_decorator.__name__ = decorator.__name__
-    new_decorator.__doc__ = decorator.__doc__
-    new_decorator.__dict__.update(decorator.__dict__)
-
-    return new_decorator
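[Editor's note: the `decorate` helper deleted here existed only to emulate `functools.wraps` on pre-2.5 Pythons. With `wraps` now assumed available, the same metadata copying looks like this; the `logged` decorator and `greet` function are invented for illustration:

```python
from functools import wraps


def logged(func):
    @wraps(func)  # copies __name__, __doc__ and __dict__, as decorate() did
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@logged
def greet(name):
    """Return a greeting."""
    return "Hello, %s" % name


assert greet.__name__ == "greet"            # not "wrapper"
assert greet.__doc__ == "Return a greeting."
assert greet("world") == "Hello, world"
```
]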
diff --git a/translate/misc/deprecation.py b/translate/misc/deprecation.py
new file mode 100644
index 0000000..067fd39
--- /dev/null
+++ b/translate/misc/deprecation.py
@@ -0,0 +1,45 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2014 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify it under
+# the terms of the GNU General Public License as published by the Free Software
+# Foundation; either version 2 of the License, or (at your option) any later
+# version.
+#
+# translate is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
+# A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, see <http://www.gnu.org/licenses/>.
+
+import warnings
+from functools import wraps
+
+
+def deprecated(message=""):
+    """Decorator that marks functions and methods as deprecated.
+
+    A warning will be emitted when the function or method is used. If a custom
+    message is provided, it will be shown after the default warning message.
+    """
+    def inner_render(func):
+        @wraps(func)
+        def new_func(*args, **kwargs):
+            msg = message  # Hack to avoid UnboundLocalError.
+            if msg:
+                msg = "\n" + msg
+            warnings.warn_explicit(
+                "Call to deprecated function {0}.{1}".format(func.__name__,
+                                                             msg),
+                category=DeprecationWarning,
+                filename=func.func_code.co_filename,
+                lineno=func.func_code.co_firstlineno + 1
+            )
+            return func(*args, **kwargs)
+        return new_func
+    return inner_render
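[Editor's note: the committed decorator targets Python 2 (`func.func_code`). A Python 3 adaptation of the same pattern, with an invented `old_api` function to trigger the warning, might look like:

```python
import warnings
from functools import wraps


def deprecated(message=""):
    """Mark a function as deprecated; warns on every call."""
    def inner_render(func):
        @wraps(func)
        def new_func(*args, **kwargs):
            msg = ("\n" + message) if message else ""
            warnings.warn_explicit(
                "Call to deprecated function {0}.{1}".format(func.__name__,
                                                             msg),
                category=DeprecationWarning,
                filename=func.__code__.co_filename,  # func_code in Python 2
                lineno=func.__code__.co_firstlineno + 1,
            )
            return func(*args, **kwargs)
        return new_func
    return inner_render


@deprecated("Use new_api() instead.")
def old_api():
    return 42


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()

assert result == 42                               # call still goes through
assert caught[0].category is DeprecationWarning   # warning was emitted
```

`warn_explicit` is used rather than `warn` so the warning is attributed to the deprecated function's own file and line, not to the wrapper.]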
diff --git a/translate/misc/dictutils.py b/translate/misc/dictutils.py
index 32fe365..bfa7a6d 100644
--- a/translate/misc/dictutils.py
+++ b/translate/misc/dictutils.py
@@ -22,13 +22,6 @@ order-sensitive dictionary"""
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
 
-def generalupper(str):
-    """this uses the object's upper method - works with string and unicode"""
-    if str is None:
-        return str
-    return str.upper()
-
-
 class cidict(dict):
 
     def __init__(self, fromdict=None):
@@ -40,7 +33,7 @@ class cidict(dict):
         if type(key) != str and type(key) != unicode:
             raise TypeError("cidict can only have str or unicode as key (got %r)" %
                             type(key))
-        for akey in self.iterkeys():
+        for akey in self.keys():
             if akey.lower() == key.lower():
                 return dict.__getitem__(self, akey)
         raise IndexError
@@ -49,7 +42,7 @@ class cidict(dict):
         if type(key) != str and type(key) != unicode:
             raise TypeError("cidict can only have str or unicode as key (got %r)" %
                             type(key))
-        for akey in self.iterkeys():
+        for akey in self.keys():
             if akey.lower() == key.lower():
                 return dict.__setitem__(self, akey, value)
         return dict.__setitem__(self, key, value)
@@ -64,7 +57,7 @@ class cidict(dict):
         if type(key) != str and type(key) != unicode:
             raise TypeError("cidict can only have str or unicode as key (got %r)" %
                             type(key))
-        for akey in self.iterkeys():
+        for akey in self.keys():
             if akey.lower() == key.lower():
                 return dict.__delitem__(self, akey)
         raise IndexError
@@ -73,7 +66,7 @@ class cidict(dict):
         if type(key) != str and type(key) != unicode:
             raise TypeError("cidict can only have str or unicode as key (got %r)" %
                             type(key))
-        for akey in self.iterkeys():
+        for akey in self.keys():
             if akey.lower() == key.lower():
                 return 1
         return 0
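[Editor's note: the `iterkeys()` → `keys()` swap above keeps `cidict`'s case-insensitive scan working on Python 3, where dicts no longer have `iterkeys()`. A minimal sketch of the lookup pattern, simplified from `translate.misc.dictutils.cidict`:

```python
class CIDict(dict):
    """Minimal case-insensitive dict illustrating the keys() scan."""

    def __getitem__(self, key):
        for akey in self.keys():  # keys() exists on both Python 2 and 3
            if akey.lower() == key.lower():
                return dict.__getitem__(self, akey)
        raise IndexError(key)

    def __setitem__(self, key, value):
        for akey in self.keys():
            if akey.lower() == key.lower():
                return dict.__setitem__(self, akey, value)
        return dict.__setitem__(self, key, value)


d = CIDict()
d["Hello"] = 1
assert d["HELLO"] == 1           # lookup ignores case
d["hello"] = 2                   # updates the existing "Hello" entry
assert d["Hello"] == 2
assert list(d.keys()) == ["Hello"]   # original spelling is preserved
```
]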
diff --git a/translate/misc/diff_match_patch.py b/translate/misc/diff_match_patch.py
index eec137a..8110bf1 100644
--- a/translate/misc/diff_match_patch.py
+++ b/translate/misc/diff_match_patch.py
@@ -1,1797 +1,29 @@
-#!/usr/bin/python2.4
-
-"""Diff Match and Patch
-
-Copyright 2006 Google Inc.
-http://code.google.com/p/google-diff-match-patch/
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2014 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""Module for providing backwards compatible diff-match-patch.
+
+Some old third-party apps, like Virtaal, rely on diff-match-patch being
+provided by Translate Toolkit.
 """
 
-"""Functions for diff, match and patch.
-
-Computes the difference between two texts to create a patch.
-Applies the patch onto another text, allowing for errors.
-"""
-
-__author__ = 'fraser@google.com (Neil Fraser)'
-
-import time
-import re
-
-class diff_match_patch:
-  """Class containing the diff, match and patch methods.
-
-  Also contains the behaviour settings.
-  """
-
-  def __init__(self):
-    """Inits a diff_match_patch object with default settings.
-    Redefine these in your program to override the defaults.
-    """
-
-    # Number of seconds to map a diff before giving up (0 for infinity).
-    self.Diff_Timeout = 1.0
-    # Cost of an empty edit operation in terms of edit characters.
-    self.Diff_EditCost = 4
-    # The size beyond which the double-ended diff activates.
-    # Double-ending is twice as fast, but less accurate.
-    self.Diff_DualThreshold = 32
-    # At what point is no match declared (0.0 = perfection, 1.0 = very loose).
-    self.Match_Threshold = 0.5
-    # How far to search for a match (0 = exact location, 1000+ = broad match).
-    # A match this many characters away from the expected location will add
-    # 1.0 to the score (0.0 is a perfect match).
-    self.Match_Distance = 1000
-    # When deleting a large block of text (over ~64 characters), how close does
-    # the contents have to match the expected contents. (0.0 = perfection,
-    # 1.0 = very loose).  Note that Match_Threshold controls how closely the
-    # end points of a delete need to match.
-    self.Patch_DeleteThreshold = 0.5
-    # Chunk size for context length.
-    self.Patch_Margin = 4
-
-    # How many bits in a number?
-    # Python has no maximum, thus to disable patch splitting set to 0.
-    # However to avoid long patches in certain pathological cases, use 32.
-    # Multiple short patches (using native ints) are much faster than long ones.
-    self.Match_MaxBits = 32
-
-  #  DIFF FUNCTIONS
-
-  # The data structure representing a diff is an array of tuples:
-  # [(DIFF_DELETE, "Hello"), (DIFF_INSERT, "Goodbye"), (DIFF_EQUAL, " world.")]
-  # which means: delete "Hello", add "Goodbye" and keep " world."
-  DIFF_DELETE = -1
-  DIFF_INSERT = 1
-  DIFF_EQUAL = 0
-
-  def diff_main(self, text1, text2, checklines=True):
-    """Find the differences between two texts.  Simplifies the problem by
-      stripping any common prefix or suffix off the texts before diffing.
-
-    :param text1: Old string to be diffed.
-    :param text2: New string to be diffed.
-    :param checklines: Optional speedup flag.  If present and false, then
-                       don't run a line-level diff first to identify the
-                       changed areas.
-                       Defaults to True, which does a faster, slightly
-                       less optimal diff.
-
-    :returns: Array of changes.
-    """
-
-    # Check for null inputs.
-    if text1 == None or text2 == None:
-      raise ValueError("Null inputs. (diff_main)")
-
-    # Check for equality (speedup).
-    if text1 == text2:
-      return [(self.DIFF_EQUAL, text1)]
-
-    # Trim off common prefix (speedup).
-    commonlength = self.diff_commonPrefix(text1, text2)
-    commonprefix = text1[:commonlength]
-    text1 = text1[commonlength:]
-    text2 = text2[commonlength:]
-
-    # Trim off common suffix (speedup).
-    commonlength = self.diff_commonSuffix(text1, text2)
-    if commonlength == 0:
-      commonsuffix = ''
-    else:
-      commonsuffix = text1[-commonlength:]
-      text1 = text1[:-commonlength]
-      text2 = text2[:-commonlength]
-
-    # Compute the diff on the middle block.
-    diffs = self.diff_compute(text1, text2, checklines)
-
-    # Restore the prefix and suffix.
-    if commonprefix:
-      diffs[:0] = [(self.DIFF_EQUAL, commonprefix)]
-    if commonsuffix:
-      diffs.append((self.DIFF_EQUAL, commonsuffix))
-    self.diff_cleanupMerge(diffs)
-    return diffs
-
-  def diff_compute(self, text1, text2, checklines):
-    """Find the differences between two texts.  Assumes that the texts do not
-      have any common prefix or suffix.
-
-    :param text1: Old string to be diffed.
-    :param text2: New string to be diffed.
-    :param checklines: Speedup flag.  If false, then don't run a
-                       line-level diff first to identify the changed areas.
-                       If True, then run a faster, slightly less optimal diff.
-
-    :returns: Array of changes.
-    """
-    if not text1:
-      # Just add some text (speedup).
-      return [(self.DIFF_INSERT, text2)]
-
-    if not text2:
-      # Just delete some text (speedup).
-      return [(self.DIFF_DELETE, text1)]
-
-    if len(text1) > len(text2):
-      (longtext, shorttext) = (text1, text2)
-    else:
-      (shorttext, longtext) = (text1, text2)
-    i = longtext.find(shorttext)
-    if i != -1:
-      # Shorter text is inside the longer text (speedup).
-      diffs = [(self.DIFF_INSERT, longtext[:i]), (self.DIFF_EQUAL, shorttext),
-               (self.DIFF_INSERT, longtext[i + len(shorttext):])]
-      # Swap insertions for deletions if diff is reversed.
-      if len(text1) > len(text2):
-        diffs[0] = (self.DIFF_DELETE, diffs[0][1])
-        diffs[2] = (self.DIFF_DELETE, diffs[2][1])
-      return diffs
-    longtext = shorttext = None  # Garbage collect.
-
-    # Check to see if the problem can be split in two.
-    hm = self.diff_halfMatch(text1, text2)
-    if hm:
-      # A half-match was found, sort out the return data.
-      (text1_a, text1_b, text2_a, text2_b, mid_common) = hm
-      # Send both pairs off for separate processing.
-      diffs_a = self.diff_main(text1_a, text2_a, checklines)
-      diffs_b = self.diff_main(text1_b, text2_b, checklines)
-      # Merge the results.
-      return diffs_a + [(self.DIFF_EQUAL, mid_common)] + diffs_b
-
-    # Perform a real diff.
-    if checklines and (len(text1) < 100 or len(text2) < 100):
-      checklines = False  # Too trivial for the overhead.
-    if checklines:
-      # Scan the text on a line-by-line basis first.
-      (text1, text2, linearray) = self.diff_linesToChars(text1, text2)
-
-    diffs = self.diff_map(text1, text2)
-    if not diffs:  # No acceptable result.
-      diffs = [(self.DIFF_DELETE, text1), (self.DIFF_INSERT, text2)]
-    if checklines:
-      # Convert the diff back to original text.
-      self.diff_charsToLines(diffs, linearray)
-      # Eliminate freak matches (e.g. blank lines)
-      self.diff_cleanupSemantic(diffs)
-
-      # Rediff any replacement blocks, this time character-by-character.
-      # Add a dummy entry at the end.
-      diffs.append((self.DIFF_EQUAL, ''))
-      pointer = 0
-      count_delete = 0
-      count_insert = 0
-      text_delete = ''
-      text_insert = ''
-      while pointer < len(diffs):
-        if diffs[pointer][0] == self.DIFF_INSERT:
-          count_insert += 1
-          text_insert += diffs[pointer][1]
-        elif diffs[pointer][0] == self.DIFF_DELETE:
-          count_delete += 1
-          text_delete += diffs[pointer][1]
-        elif diffs[pointer][0] == self.DIFF_EQUAL:
-          # Upon reaching an equality, check for prior redundancies.
-          if count_delete >= 1 and count_insert >= 1:
-            # Delete the offending records and add the merged ones.
-            a = self.diff_main(text_delete, text_insert, False)
-            diffs[pointer - count_delete - count_insert : pointer] = a
-            pointer = pointer - count_delete - count_insert + len(a)
-          count_insert = 0
-          count_delete = 0
-          text_delete = ''
-          text_insert = ''
-
-        pointer += 1
-
-      diffs.pop()  # Remove the dummy entry at the end.
-    return diffs
-
-  def diff_linesToChars(self, text1, text2):
-    """Split two texts into an array of strings.  Reduce the texts to a string
-    of hashes where each Unicode character represents one line.
-
-    :param text1: First string.
-    :param text2: Second string.
-
-    :returns: Three element tuple, containing the encoded text1,
-              the encoded text2 and the array of unique strings.
-              The zeroth element of the array of unique strings is
-              intentionally blank.
-    """
-    lineArray = []  # e.g. lineArray[4] == "Hello\n"
-    lineHash = {}   # e.g. lineHash["Hello\n"] == 4
-
-    # "\x00" is a valid character, but various debuggers don't like it.
-    # So we'll insert a junk entry to avoid generating a null character.
-    lineArray.append('')
-
-    def diff_linesToCharsMunge(text):
-      """Split a text into an array of strings.  Reduce the texts to a string
-      of hashes where each Unicode character represents one line.
-      Modifies linearray and linehash through being a closure.
-
-      :param text: String to encode.
-
-      :returns: Encoded string.
-      """
-      chars = []
-      # Walk the text, pulling out a substring for each line.
-      # text.split('\n') would temporarily double our memory footprint.
-      # Modifying text would create many large strings to garbage collect.
-      lineStart = 0
-      lineEnd = -1
-      while lineEnd < len(text) - 1:
-        lineEnd = text.find('\n', lineStart)
-        if lineEnd == -1:
-          lineEnd = len(text) - 1
-        line = text[lineStart:lineEnd + 1]
-        lineStart = lineEnd + 1
-
-        if line in lineHash:
-          chars.append(unichr(lineHash[line]))
-        else:
-          lineArray.append(line)
-          lineHash[line] = len(lineArray) - 1
-          chars.append(unichr(len(lineArray) - 1))
-      return "".join(chars)
-
-    chars1 = diff_linesToCharsMunge(text1)
-    chars2 = diff_linesToCharsMunge(text2)
-    return (chars1, chars2, lineArray)
-
-  def diff_charsToLines(self, diffs, lineArray):
-    """Rehydrate the text in a diff from a string of line hashes to real lines
-    of text.
-
-    :param diffs: Array of diff tuples.
-    :param lineArray: Array of unique strings.
-    """
-    for x in xrange(len(diffs)):
-      text = []
-      for char in diffs[x][1]:
-        text.append(lineArray[ord(char)])
-      diffs[x] = (diffs[x][0], "".join(text))
-
-  def diff_map(self, text1, text2):
-    """Explore the intersection points between the two texts.
-
-    :param text1: Old string to be diffed.
-    :param text2: New string to be diffed.
-
-    :returns: Array of diff tuples or None if no diff available.
-    """
-
-    # Unlike in most languages, Python counts time in seconds.
-    s_end = time.time() + self.Diff_Timeout  # Don't run for too long.
-    # Cache the text lengths to prevent multiple calls.
-    text1_length = len(text1)
-    text2_length = len(text2)
-    max_d = text1_length + text2_length - 1
-    doubleEnd = self.Diff_DualThreshold * 2 < max_d
-    # Python efficiency note: (x << 32) + y is the fastest way to combine
-    # x and y into a single hashable value.  Tested in Python 2.5.
-    # It is unclear why it is faster for v_map[d] to be indexed with an
-    # integer whereas footsteps is indexed with a string.
-    v_map1 = []
-    v_map2 = []
-    v1 = {}
-    v2 = {}
-    v1[1] = 0
-    v2[1] = 0
-    footsteps = {}
-    done = False
-    # If the total number of characters is odd, then the front path will
-    # collide with the reverse path.
-    front = (text1_length + text2_length) % 2
-    for d in xrange(max_d):
-      # Bail out if timeout reached.
-      if self.Diff_Timeout > 0 and time.time() > s_end:
-        return None
-
-      # Walk the front path one step.
-      v_map1.append({})
-      for k in xrange(-d, d + 1, 2):
-        if k == -d or k != d and v1[k - 1] < v1[k + 1]:
-          x = v1[k + 1]
-        else:
-          x = v1[k - 1] + 1
-        y = x - k
-        if doubleEnd:
-          footstep = str((x << 32) + y)
-          if front and footstep in footsteps:
-            done = True
-          if not front:
-            footsteps[footstep] = d
-
-        while (not done and x < text1_length and y < text2_length and
-               text1[x] == text2[y]):
-          x += 1
-          y += 1
-          if doubleEnd:
-            footstep = str((x << 32) + y)
-            if front and footstep in footsteps:
-              done = True
-            if not front:
-              footsteps[footstep] = d
-
-        v1[k] = x
-        v_map1[d][(x << 32) + y] = True
-        if x == text1_length and y == text2_length:
-          # Reached the end in single-path mode.
-          return self.diff_path1(v_map1, text1, text2)
-        elif done:
-          # Front path ran over reverse path.
-          v_map2 = v_map2[:footsteps[footstep] + 1]
-          a = self.diff_path1(v_map1, text1[:x], text2[:y])
-          b = self.diff_path2(v_map2, text1[x:], text2[y:])
-          return a + b
-
-      if doubleEnd:
-        # Walk the reverse path one step.
-        v_map2.append({})
-        for k in xrange(-d, d + 1, 2):
-          if k == -d or k != d and v2[k - 1] < v2[k + 1]:
-            x = v2[k + 1]
-          else:
-            x = v2[k - 1] + 1
-          y = x - k
-          footstep = str((text1_length - x << 32) + text2_length - y)
-          if not front and footstep in footsteps:
-            done = True
-          if front:
-            footsteps[footstep] = d
-          while (not done and x < text1_length and y < text2_length and
-                 text1[-x - 1] == text2[-y - 1]):
-            x += 1
-            y += 1
-            footstep = str((text1_length - x << 32) + text2_length - y)
-            if not front and footstep in footsteps:
-              done = True
-            if front:
-              footsteps[footstep] = d
-
-          v2[k] = x
-          v_map2[d][(x << 32) + y] = True
-          if done:
-            # Reverse path ran over front path.
-            v_map1 = v_map1[:footsteps[footstep] + 1]
-            a = self.diff_path1(v_map1, text1[:text1_length - x],
-                                text2[:text2_length - y])
-            b = self.diff_path2(v_map2, text1[text1_length - x:],
-                                text2[text2_length - y:])
-            return a + b
-
-    # Number of diffs equals number of characters, no commonality at all.
-    return None
-
-  def diff_path1(self, v_map, text1, text2):
-    """Work from the middle back to the start to determine the path.
-
-    :param v_map: Array of paths.
-    :param text1: Old string fragment to be diffed.
-    :param text2: New string fragment to be diffed.
-
-    :returns: Array of diff tuples.
-    """
-    path = []
-    x = len(text1)
-    y = len(text2)
-    last_op = None
-    for d in xrange(len(v_map) - 2, -1, -1):
-      while True:
-        if (x - 1 << 32) + y in v_map[d]:
-          x -= 1
-          if last_op == self.DIFF_DELETE:
-            path[0] = (self.DIFF_DELETE, text1[x] + path[0][1])
-          else:
-            path[:0] = [(self.DIFF_DELETE, text1[x])]
-          last_op = self.DIFF_DELETE
-          break
-        elif (x << 32) + y - 1 in v_map[d]:
-          y -= 1
-          if last_op == self.DIFF_INSERT:
-            path[0] = (self.DIFF_INSERT, text2[y] + path[0][1])
-          else:
-            path[:0] = [(self.DIFF_INSERT, text2[y])]
-          last_op = self.DIFF_INSERT
-          break
-        else:
-          x -= 1
-          y -= 1
-          assert text1[x] == text2[y], ("No diagonal.  " +
-              "Can't happen. (diff_path1)")
-          if last_op == self.DIFF_EQUAL:
-            path[0] = (self.DIFF_EQUAL, text1[x] + path[0][1])
-          else:
-            path[:0] = [(self.DIFF_EQUAL, text1[x])]
-          last_op = self.DIFF_EQUAL
-    return path
-
-  def diff_path2(self, v_map, text1, text2):
-    """Work from the middle back to the end to determine the path.
-
-    :param v_map: Array of paths.
-    :param text1: Old string fragment to be diffed.
-    :param text2: New string fragment to be diffed.
-
-    :returns: Array of diff tuples.
-    """
-    path = []
-    x = len(text1)
-    y = len(text2)
-    last_op = None
-    for d in xrange(len(v_map) - 2, -1, -1):
-      while True:
-        if (x - 1 << 32) + y in v_map[d]:
-          x -= 1
-          if last_op == self.DIFF_DELETE:
-            path[-1] = (self.DIFF_DELETE, path[-1][1] + text1[-x - 1])
-          else:
-            path.append((self.DIFF_DELETE, text1[-x - 1]))
-          last_op = self.DIFF_DELETE
-          break
-        elif (x << 32) + y - 1 in v_map[d]:
-          y -= 1
-          if last_op == self.DIFF_INSERT:
-            path[-1] = (self.DIFF_INSERT, path[-1][1] + text2[-y - 1])
-          else:
-            path.append((self.DIFF_INSERT, text2[-y - 1]))
-          last_op = self.DIFF_INSERT
-          break
-        else:
-          x -= 1
-          y -= 1
-          assert text1[-x - 1] == text2[-y - 1], ("No diagonal.  " +
-              "Can't happen. (diff_path2)")
-          if last_op == self.DIFF_EQUAL:
-            path[-1] = (self.DIFF_EQUAL, path[-1][1] + text1[-x - 1])
-          else:
-            path.append((self.DIFF_EQUAL, text1[-x - 1]))
-          last_op = self.DIFF_EQUAL
-    return path
-
-  def diff_commonPrefix(self, text1, text2):
-    """Determine the common prefix of two strings.
-
-    :param text1: First string.
-    :param text2: Second string.
-
-    :returns: The number of characters common to the start of each string.
-    """
-    # Quick check for common null cases.
-    if not text1 or not text2 or text1[0] != text2[0]:
-      return 0
-    # Binary search.
-    # Performance analysis: http://neil.fraser.name/news/2007/10/09/
-    pointermin = 0
-    pointermax = min(len(text1), len(text2))
-    pointermid = pointermax
-    pointerstart = 0
-    while pointermin < pointermid:
-      if text1[pointerstart:pointermid] == text2[pointerstart:pointermid]:
-        pointermin = pointermid
-        pointerstart = pointermin
-      else:
-        pointermax = pointermid
-      pointermid = int((pointermax - pointermin) / 2 + pointermin)
-    return pointermid
-
-  def diff_commonSuffix(self, text1, text2):
-    """Determine the common suffix of two strings.
-
-    :param text1: First string.
-    :param text2: Second string.
-
-    :returns: The number of characters common to the end of each string.
-    """
-    # Quick check for common null cases.
-    if not text1 or not text2 or text1[-1] != text2[-1]:
-      return 0
-    # Binary search.
-    # Performance analysis: http://neil.fraser.name/news/2007/10/09/
-    pointermin = 0
-    pointermax = min(len(text1), len(text2))
-    pointermid = pointermax
-    pointerend = 0
-    while pointermin < pointermid:
-      if (text1[-pointermid:len(text1) - pointerend] ==
-          text2[-pointermid:len(text2) - pointerend]):
-        pointermin = pointermid
-        pointerend = pointermin
-      else:
-        pointermax = pointermid
-      pointermid = int((pointermax - pointermin) / 2 + pointermin)
-    return pointermid
-
-  def diff_halfMatch(self, text1, text2):
-    """Do the two texts share a substring which is at least half the length of
-    the longer text?
-
-    :param text1: First string.
-    :param text2: Second string.
-
-    :returns: Five element Array, containing the prefix of text1, the
-              suffix of text1, the prefix of text2, the suffix of text2
-              and the common middle.  Or None if there was no match.
-    """
-    if len(text1) > len(text2):
-      (longtext, shorttext) = (text1, text2)
-    else:
-      (shorttext, longtext) = (text1, text2)
-    if len(longtext) < 10 or len(shorttext) < 1:
-      return None  # Pointless.
-
-    def diff_halfMatchI(longtext, shorttext, i):
-      """Does a substring of shorttext exist within longtext such that the
-      substring is at least half the length of longtext?
-      Closure, but does not reference any external variables.
-
-      :param longtext: Longer string.
-      :param shorttext: Shorter string.
-      :param i: Start index of quarter length substring within longtext.
-
-      :returns: Five element Array, containing the prefix of longtext, the
-                suffix of longtext, the prefix of shorttext, the suffix of
-                shorttext and the common middle.  Or None if there was no match.
-      """
-      seed = longtext[i:i + len(longtext) / 4]
-      best_common = ''
-      j = shorttext.find(seed)
-      while j != -1:
-        prefixLength = self.diff_commonPrefix(longtext[i:], shorttext[j:])
-        suffixLength = self.diff_commonSuffix(longtext[:i], shorttext[:j])
-        if len(best_common) < suffixLength + prefixLength:
-          best_common = (shorttext[j - suffixLength:j] +
-              shorttext[j:j + prefixLength])
-          best_longtext_a = longtext[:i - suffixLength]
-          best_longtext_b = longtext[i + prefixLength:]
-          best_shorttext_a = shorttext[:j - suffixLength]
-          best_shorttext_b = shorttext[j + prefixLength:]
-        j = shorttext.find(seed, j + 1)
-
-      if len(best_common) >= len(longtext) / 2:
-        return (best_longtext_a, best_longtext_b,
-                best_shorttext_a, best_shorttext_b, best_common)
-      else:
-        return None
-
-    # First check if the second quarter is the seed for a half-match.
-    hm1 = diff_halfMatchI(longtext, shorttext, (len(longtext) + 3) / 4)
-    # Check again based on the third quarter.
-    hm2 = diff_halfMatchI(longtext, shorttext, (len(longtext) + 1) / 2)
-    if not hm1 and not hm2:
-      return None
-    elif not hm2:
-      hm = hm1
-    elif not hm1:
-      hm = hm2
-    else:
-      # Both matched.  Select the longest.
-      if len(hm1[4]) > len(hm2[4]):
-        hm = hm1
-      else:
-        hm = hm2
-
-    # A half-match was found, sort out the return data.
-    if len(text1) > len(text2):
-      (text1_a, text1_b, text2_a, text2_b, mid_common) = hm
-    else:
-      (text2_a, text2_b, text1_a, text1_b, mid_common) = hm
-    return (text1_a, text1_b, text2_a, text2_b, mid_common)
-
-  def diff_cleanupSemantic(self, diffs):
-    """Reduce the number of edits by eliminating semantically trivial
-    equalities.
-
-    :param diffs: Array of diff tuples.
-    """
-    changes = False
-    equalities = []  # Stack of indices where equalities are found.
-    lastequality = None  # Always equal to equalities[-1][1]
-    pointer = 0  # Index of current position.
-    length_changes1 = 0  # Number of chars that changed prior to the equality.
-    length_changes2 = 0  # Number of chars that changed after the equality.
-    while pointer < len(diffs):
-      if diffs[pointer][0] == self.DIFF_EQUAL:  # equality found
-        equalities.append(pointer)
-        length_changes1 = length_changes2
-        length_changes2 = 0
-        lastequality = diffs[pointer][1]
-      else:  # an insertion or deletion
-        length_changes2 += len(diffs[pointer][1])
-        if (lastequality != None and (len(lastequality) <= length_changes1) and
-            (len(lastequality) <= length_changes2)):
-          # Duplicate record
-          diffs.insert(equalities[-1], (self.DIFF_DELETE, lastequality))
-          # Change second copy to insert.
-          diffs[equalities[-1] + 1] = (self.DIFF_INSERT,
-              diffs[equalities[-1] + 1][1])
-          # Throw away the equality we just deleted.
-          equalities.pop()
-          # Throw away the previous equality (it needs to be reevaluated).
-          if len(equalities) != 0:
-            equalities.pop()
-          if len(equalities):
-            pointer = equalities[-1]
-          else:
-            pointer = -1
-          length_changes1 = 0  # Reset the counters.
-          length_changes2 = 0
-          lastequality = None
-          changes = True
-      pointer += 1
-
-    if changes:
-      self.diff_cleanupMerge(diffs)
-
-    self.diff_cleanupSemanticLossless(diffs)
-
-  def diff_cleanupSemanticLossless(self, diffs):
-    """Look for single edits surrounded on both sides by equalities
-    which can be shifted sideways to align the edit to a word boundary.
-    e.g: The c<ins>at c</ins>ame. -> The <ins>cat </ins>came.
-
-    :param diffs: Array of diff tuples.
-    """
-
-    def diff_cleanupSemanticScore(one, two):
-      """Given two strings, compute a score representing whether the
-      internal boundary falls on logical boundaries.
-      Scores range from 5 (best) to 0 (worst).
-      Closure, but does not reference any external variables.
-
-      :param one: First string.
-      :param two: Second string.
-
-      :returns: The score.
-      """
-      if not one or not two:
-        # Edges are the best.
-        return 5
-
-      # Each port of this function behaves slightly differently due to
-      # subtle differences in each language's definition of things like
-      # 'whitespace'.  Since this function's purpose is largely cosmetic,
-      # the choice has been made to use each language's native features
-      # rather than force total conformity.
-      score = 0
-      # One point for non-alphanumeric.
-      if not one[-1].isalnum() or not two[0].isalnum():
-        score += 1
-        # Two points for whitespace.
-        if one[-1].isspace() or two[0].isspace():
-          score += 1
-          # Three points for line breaks.
-          if (one[-1] == "\r" or one[-1] == "\n" or
-              two[0] == "\r" or two[0] == "\n"):
-            score += 1
-            # Four points for blank lines.
-            if (re.search("\\n\\r?\\n$", one) or
-                re.match("^\\r?\\n\\r?\\n", two)):
-              score += 1
-      return score
-
-    pointer = 1
-    # Intentionally ignore the first and last element (don't need checking).
-    while pointer < len(diffs) - 1:
-      if (diffs[pointer - 1][0] == self.DIFF_EQUAL and
-          diffs[pointer + 1][0] == self.DIFF_EQUAL):
-        # This is a single edit surrounded by equalities.
-        equality1 = diffs[pointer - 1][1]
-        edit = diffs[pointer][1]
-        equality2 = diffs[pointer + 1][1]
-
-        # First, shift the edit as far left as possible.
-        commonOffset = self.diff_commonSuffix(equality1, edit)
-        if commonOffset:
-          commonString = edit[-commonOffset:]
-          equality1 = equality1[:-commonOffset]
-          edit = commonString + edit[:-commonOffset]
-          equality2 = commonString + equality2
-
-        # Second, step character by character right, looking for the best fit.
-        bestEquality1 = equality1
-        bestEdit = edit
-        bestEquality2 = equality2
-        bestScore = (diff_cleanupSemanticScore(equality1, edit) +
-            diff_cleanupSemanticScore(edit, equality2))
-        while edit and equality2 and edit[0] == equality2[0]:
-          equality1 += edit[0]
-          edit = edit[1:] + equality2[0]
-          equality2 = equality2[1:]
-          score = (diff_cleanupSemanticScore(equality1, edit) +
-              diff_cleanupSemanticScore(edit, equality2))
-          # The >= encourages trailing rather than leading whitespace on edits.
-          if score >= bestScore:
-            bestScore = score
-            bestEquality1 = equality1
-            bestEdit = edit
-            bestEquality2 = equality2
-
-        if diffs[pointer - 1][1] != bestEquality1:
-          # We have an improvement, save it back to the diff.
-          if bestEquality1:
-            diffs[pointer - 1] = (diffs[pointer - 1][0], bestEquality1)
-          else:
-            del diffs[pointer - 1]
-            pointer -= 1
-          diffs[pointer] = (diffs[pointer][0], bestEdit)
-          if bestEquality2:
-            diffs[pointer + 1] = (diffs[pointer + 1][0], bestEquality2)
-          else:
-            del diffs[pointer + 1]
-            pointer -= 1
-      pointer += 1
-
-  def diff_cleanupEfficiency(self, diffs):
-    """Reduce the number of edits by eliminating operationally trivial
-    equalities.
-
-    :param diffs: Array of diff tuples.
-    """
-    changes = False
-    equalities = []  # Stack of indices where equalities are found.
-    lastequality = ''  # Always equal to equalities[-1][1]
-    pointer = 0  # Index of current position.
-    pre_ins = False  # Is there an insertion operation before the last equality.
-    pre_del = False  # Is there a deletion operation before the last equality.
-    post_ins = False  # Is there an insertion operation after the last equality.
-    post_del = False  # Is there a deletion operation after the last equality.
-    while pointer < len(diffs):
-      if diffs[pointer][0] == self.DIFF_EQUAL:  # equality found
-        if (len(diffs[pointer][1]) < self.Diff_EditCost and
-            (post_ins or post_del)):
-          # Candidate found.
-          equalities.append(pointer)
-          pre_ins = post_ins
-          pre_del = post_del
-          lastequality = diffs[pointer][1]
-        else:
-          # Not a candidate, and can never become one.
-          equalities = []
-          lastequality = ''
-
-        post_ins = post_del = False
-      else:  # an insertion or deletion
-        if diffs[pointer][0] == self.DIFF_DELETE:
-          post_del = True
-        else:
-          post_ins = True
-
-        # Five types to be split:
-        # <ins>A</ins><del>B</del>XY<ins>C</ins><del>D</del>
-        # <ins>A</ins>X<ins>C</ins><del>D</del>
-        # <ins>A</ins><del>B</del>X<ins>C</ins>
-        # <ins>A</ins>X<ins>C</ins><del>D</del>
-        # <ins>A</ins><del>B</del>X<del>C</del>
-
-        if lastequality and ((pre_ins and pre_del and post_ins and post_del) or
-                             ((len(lastequality) < self.Diff_EditCost / 2) and
-                              (pre_ins + pre_del + post_ins + post_del) == 3)):
-          # Duplicate record
-          diffs.insert(equalities[-1], (self.DIFF_DELETE, lastequality))
-          # Change second copy to insert.
-          diffs[equalities[-1] + 1] = (self.DIFF_INSERT,
-              diffs[equalities[-1] + 1][1])
-          equalities.pop()  # Throw away the equality we just deleted
-          lastequality = ''
-          if pre_ins and pre_del:
-            # No changes made which could affect previous entry, keep going.
-            post_ins = post_del = True
-            equalities = []
-          else:
-            if len(equalities):
-              equalities.pop()  # Throw away the previous equality
-            if len(equalities):
-              pointer = equalities[-1]
-            else:
-              pointer = -1
-            post_ins = post_del = False
-          changes = True
-      pointer += 1
-
-    if changes:
-      self.diff_cleanupMerge(diffs)
-
-  def diff_cleanupMerge(self, diffs):
-    """Reorder and merge like edit sections.  Merge equalities.
-    Any edit section can move as long as it doesn't cross an equality.
-
-    :param diffs: Array of diff tuples.
-    """
-    diffs.append((self.DIFF_EQUAL, ''))  # Add a dummy entry at the end.
-    pointer = 0
-    count_delete = 0
-    count_insert = 0
-    text_delete = ''
-    text_insert = ''
-    while pointer < len(diffs):
-      if diffs[pointer][0] == self.DIFF_INSERT:
-        count_insert += 1
-        text_insert += diffs[pointer][1]
-        pointer += 1
-      elif diffs[pointer][0] == self.DIFF_DELETE:
-        count_delete += 1
-        text_delete += diffs[pointer][1]
-        pointer += 1
-      elif diffs[pointer][0] == self.DIFF_EQUAL:
-        # Upon reaching an equality, check for prior redundancies.
-        if count_delete != 0 or count_insert != 0:
-          if count_delete != 0 and count_insert != 0:
-            # Factor out any common prefixes.
-            commonlength = self.diff_commonPrefix(text_insert, text_delete)
-            if commonlength != 0:
-              x = pointer - count_delete - count_insert - 1
-              if x >= 0 and diffs[x][0] == self.DIFF_EQUAL:
-                diffs[x] = (diffs[x][0], diffs[x][1] +
-                            text_insert[:commonlength])
-              else:
-                diffs.insert(0, (self.DIFF_EQUAL, text_insert[:commonlength]))
-                pointer += 1
-              text_insert = text_insert[commonlength:]
-              text_delete = text_delete[commonlength:]
-            # Factor out any common suffixes.
-            commonlength = self.diff_commonSuffix(text_insert, text_delete)
-            if commonlength != 0:
-              diffs[pointer] = (diffs[pointer][0], text_insert[-commonlength:] +
-                  diffs[pointer][1])
-              text_insert = text_insert[:-commonlength]
-              text_delete = text_delete[:-commonlength]
-          # Delete the offending records and add the merged ones.
-          if count_delete == 0:
-            diffs[pointer - count_insert : pointer] = [
-                (self.DIFF_INSERT, text_insert)]
-          elif count_insert == 0:
-            diffs[pointer - count_delete : pointer] = [
-                (self.DIFF_DELETE, text_delete)]
-          else:
-            diffs[pointer - count_delete - count_insert : pointer] = [
-                (self.DIFF_DELETE, text_delete),
-                (self.DIFF_INSERT, text_insert)]
-          pointer = pointer - count_delete - count_insert + 1
-          if count_delete != 0:
-            pointer += 1
-          if count_insert != 0:
-            pointer += 1
-        elif pointer != 0 and diffs[pointer - 1][0] == self.DIFF_EQUAL:
-          # Merge this equality with the previous one.
-          diffs[pointer - 1] = (diffs[pointer - 1][0],
-                                diffs[pointer - 1][1] + diffs[pointer][1])
-          del diffs[pointer]
-        else:
-          pointer += 1
-
-        count_insert = 0
-        count_delete = 0
-        text_delete = ''
-        text_insert = ''
-
-    if diffs[-1][1] == '':
-      diffs.pop()  # Remove the dummy entry at the end.
-
-    # Second pass: look for single edits surrounded on both sides by equalities
-    # which can be shifted sideways to eliminate an equality.
-    # e.g: A<ins>BA</ins>C -> <ins>AB</ins>AC
-    changes = False
-    pointer = 1
-    # Intentionally ignore the first and last element (don't need checking).
-    while pointer < len(diffs) - 1:
-      if (diffs[pointer - 1][0] == self.DIFF_EQUAL and
-          diffs[pointer + 1][0] == self.DIFF_EQUAL):
-        # This is a single edit surrounded by equalities.
-        if diffs[pointer][1].endswith(diffs[pointer - 1][1]):
-          # Shift the edit over the previous equality.
-          diffs[pointer] = (diffs[pointer][0],
-              diffs[pointer - 1][1] +
-              diffs[pointer][1][:-len(diffs[pointer - 1][1])])
-          diffs[pointer + 1] = (diffs[pointer + 1][0],
-                                diffs[pointer - 1][1] + diffs[pointer + 1][1])
-          del diffs[pointer - 1]
-          changes = True
-        elif diffs[pointer][1].startswith(diffs[pointer + 1][1]):
-          # Shift the edit over the next equality.
-          diffs[pointer - 1] = (diffs[pointer - 1][0],
-                                diffs[pointer - 1][1] + diffs[pointer + 1][1])
-          diffs[pointer] = (diffs[pointer][0],
-              diffs[pointer][1][len(diffs[pointer + 1][1]):] +
-              diffs[pointer + 1][1])
-          del diffs[pointer + 1]
-          changes = True
-      pointer += 1
-
-    # If shifts were made, the diff needs reordering and another shift sweep.
-    if changes:
-      self.diff_cleanupMerge(diffs)
-
-  def diff_xIndex(self, diffs, loc):
-    """Given loc, a location in text1, compute and return the equivalent
-    location in text2.  e.g. "The cat" vs "The big cat", 1->1, 4->8
-
-    :param diffs: Array of diff tuples.
-    :param loc: Location within text1.
-
-    :returns: Location within text2.
-    """
-    chars1 = 0
-    chars2 = 0
-    last_chars1 = 0
-    last_chars2 = 0
-    for x in xrange(len(diffs)):
-      (op, text) = diffs[x]
-      if op != self.DIFF_INSERT:  # Equality or deletion.
-        chars1 += len(text)
-      if op != self.DIFF_DELETE:  # Equality or insertion.
-        chars2 += len(text)
-      if chars1 > loc:  # Overshot the location.
-        break
-      last_chars1 = chars1
-      last_chars2 = chars2
-
-    if len(diffs) != x and diffs[x][0] == self.DIFF_DELETE:
-      # The location was deleted.
-      return last_chars2
-    # Add the remaining character length.
-    return last_chars2 + (loc - last_chars1)
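[Editor's note: for reviewers skimming this removed hunk, the index-mapping loop in diff_xIndex above can be reproduced as a small standalone Python 3 sketch; `x_index` is a hypothetical name, not part of the toolkit.]

```python
# Standalone sketch of the diff_xIndex mapping, assuming diff tuples of
# (op, text) with the usual DIFF_* constants.
DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT = -1, 0, 1

def x_index(diffs, loc):
    """Map a location in text1 to the equivalent location in text2."""
    chars1 = chars2 = last_chars1 = last_chars2 = 0
    x = 0
    for x, (op, text) in enumerate(diffs):
        if op != DIFF_INSERT:   # equality or deletion consumes text1
            chars1 += len(text)
        if op != DIFF_DELETE:   # equality or insertion consumes text2
            chars2 += len(text)
        if chars1 > loc:        # overshot the requested location
            break
        last_chars1, last_chars2 = chars1, chars2
    if x < len(diffs) and diffs[x][0] == DIFF_DELETE:
        return last_chars2      # the location was deleted from text2
    return last_chars2 + (loc - last_chars1)

# "The cat" -> "The big cat"
diffs = [(DIFF_EQUAL, "The "), (DIFF_INSERT, "big "), (DIFF_EQUAL, "cat")]
```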
-
-  def diff_prettyHtml(self, diffs):
-    """Convert a diff array into a pretty HTML report.
-
-    :param diffs: Array of diff tuples.
-
-    :returns: HTML representation.
-    """
-    html = []
-    i = 0
-    for (op, data) in diffs:
-      text = (data.replace("&", "&amp;").replace("<", "&lt;")
-                 .replace(">", "&gt;").replace("\n", "&para;<BR>"))
-      if op == self.DIFF_INSERT:
-        html.append("<INS STYLE=\"background:#E6FFE6;\" TITLE=\"i=%i\">%s</INS>"
-            % (i, text))
-      elif op == self.DIFF_DELETE:
-        html.append("<DEL STYLE=\"background:#FFE6E6;\" TITLE=\"i=%i\">%s</DEL>"
-            % (i, text))
-      elif op == self.DIFF_EQUAL:
-        html.append("<SPAN TITLE=\"i=%i\">%s</SPAN>" % (i, text))
-      if op != self.DIFF_DELETE:
-        i += len(data)
-    return "".join(html)
-
-  def diff_text1(self, diffs):
-    """Compute and return the source text (all equalities and deletions).
-
-    :param diffs: Array of diff tuples.
-
-    :returns: Source text.
-    """
-    text = []
-    for (op, data) in diffs:
-      if op != self.DIFF_INSERT:
-        text.append(data)
-    return "".join(text)
-
-  def diff_text2(self, diffs):
-    """Compute and return the destination text (all equalities and insertions).
-
-    :param diffs: Array of diff tuples.
-
-    :returns: Destination text.
-    """
-    text = []
-    for (op, data) in diffs:
-      if op != self.DIFF_DELETE:
-        text.append(data)
-    return "".join(text)
-
-  def diff_levenshtein(self, diffs):
-    """Compute the Levenshtein distance; the number of inserted, deleted or
-    substituted characters.
-
-    :param diffs: Array of diff tuples.
-
-    :returns: Number of changes.
-    """
-    levenshtein = 0
-    insertions = 0
-    deletions = 0
-    for (op, data) in diffs:
-      if op == self.DIFF_INSERT:
-        insertions += len(data)
-      elif op == self.DIFF_DELETE:
-        deletions += len(data)
-      elif op == self.DIFF_EQUAL:
-        # A deletion and an insertion is one substitution.
-        levenshtein += max(insertions, deletions)
-        insertions = 0
-        deletions = 0
-    levenshtein += max(insertions, deletions)
-    return levenshtein
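[Editor's note: the accumulation logic above counts a paired insertion and deletion between two equalities as substitutions, taking the max of the two lengths. A minimal standalone sketch, under the same diff-tuple convention:]

```python
# Sketch of the Levenshtein computation over diff tuples: each equality
# flushes the pending insertion/deletion counts as max(ins, del).
DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT = -1, 0, 1

def diff_levenshtein(diffs):
    total = insertions = deletions = 0
    for op, data in diffs:
        if op == DIFF_INSERT:
            insertions += len(data)
        elif op == DIFF_DELETE:
            deletions += len(data)
        else:  # an equality flushes the pending edit counts
            total += max(insertions, deletions)
            insertions = deletions = 0
    return total + max(insertions, deletions)
```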
-
-  def diff_toDelta(self, diffs):
-    """Crush the diff into an encoded string which describes the operations
-    required to transform text1 into text2.
-    E.g. =3\t-2\t+ing  -> Keep 3 chars, delete 2 chars, insert 'ing'.
-    Operations are tab-separated.  Inserted text is escaped using %xx notation.
-
-    :param diffs: Array of diff tuples.
-
-    :returns: Delta text.
-    """
-    import urllib
-    text = []
-    for (op, data) in diffs:
-      if op == self.DIFF_INSERT:
-        # High ascii will raise UnicodeDecodeError.  Use Unicode instead.
-        data = data.encode("utf-8")
-        text.append("+" + urllib.quote(data, "!~*'();/?:@&=+$,# "))
-      elif op == self.DIFF_DELETE:
-        text.append("-%d" % len(data))
-      elif op == self.DIFF_EQUAL:
-        text.append("=%d" % len(data))
-    return "\t".join(text)
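[Editor's note: the hunk above targets Python 2 (`urllib.quote`); the same delta encoding can be sketched in Python 3 with `urllib.parse.quote`. A hypothetical standalone encoder, not part of the toolkit:]

```python
from urllib.parse import quote

DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT = -1, 0, 1

def to_delta(diffs):
    """Encode diffs in the tab-separated delta format, e.g. '=3', '-2', '+ing'."""
    out = []
    for op, data in diffs:
        if op == DIFF_INSERT:
            # %xx-escape inserted text, keeping the same safe-character set.
            out.append("+" + quote(data, safe="!~*'();/?:@&=+$,# "))
        elif op == DIFF_DELETE:
            out.append("-%d" % len(data))
        else:
            out.append("=%d" % len(data))
    return "\t".join(out)
```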
-
-  def diff_fromDelta(self, text1, delta):
-    """Given the original text1, and an encoded string which describes the
-    operations required to transform text1 into text2, compute the full diff.
-
-    :param text1: Source string for the diff.
-    :param delta: Delta text.
-
-    :returns: Array of diff tuples.
-
-    :raise ValueError: If invalid input.
-    """
-    import urllib
-    if type(delta) == unicode:
-      # Deltas should be composed of a subset of ascii chars, Unicode not
-      # required.  If this encode raises UnicodeEncodeError, delta is invalid.
-      delta = delta.encode("ascii")
-    diffs = []
-    pointer = 0  # Cursor in text1
-    tokens = delta.split("\t")
-    for token in tokens:
-      if token == "":
-        # Blank tokens are ok (from a trailing \t).
-        continue
-      # Each token begins with a one character parameter which specifies the
-      # operation of this token (delete, insert, equality).
-      param = token[1:]
-      if token[0] == "+":
-        param = urllib.unquote(param).decode("utf-8")
-        diffs.append((self.DIFF_INSERT, param))
-      elif token[0] == "-" or token[0] == "=":
-        try:
-          n = int(param)
-        except ValueError:
-          raise ValueError("Invalid number in diff_fromDelta: " + param)
-        if n < 0:
-          raise ValueError("Negative number in diff_fromDelta: " + param)
-        text = text1[pointer : pointer + n]
-        pointer += n
-        if token[0] == "=":
-          diffs.append((self.DIFF_EQUAL, text))
-        else:
-          diffs.append((self.DIFF_DELETE, text))
-      else:
-        # Anything else is an error.
-        raise ValueError("Invalid diff operation in diff_fromDelta: " +
-            token[0])
-    if pointer != len(text1):
-      raise ValueError(
-          "Delta length (%d) does not equal source text length (%d)." %
-         (pointer, len(text1)))
-    return diffs
-
-  #  MATCH FUNCTIONS
-
-  def match_main(self, text, pattern, loc):
-    """Locate the best instance of 'pattern' in 'text' near 'loc'.
-
-    :param text: The text to search.
-    :param pattern: The pattern to search for.
-    :param loc: The location to search around.
-
-    :returns: Best match index or -1.
-    """
-    # Check for null inputs.
-    if text == None or pattern == None:
-      raise ValueError("Null inputs. (match_main)")
-
-    loc = max(0, min(loc, len(text)))
-    if text == pattern:
-      # Shortcut (potentially not guaranteed by the algorithm)
-      return 0
-    elif not text:
-      # Nothing to match.
-      return -1
-    elif text[loc:loc + len(pattern)] == pattern:
-      # Perfect match at the perfect spot!  (Includes case of null pattern)
-      return loc
-    else:
-      # Do a fuzzy compare.
-      match = self.match_bitap(text, pattern, loc)
-      return match
-
-  def match_bitap(self, text, pattern, loc):
-    """Locate the best instance of 'pattern' in 'text' near 'loc' using the
-    Bitap algorithm.
-
-    :param text: The text to search.
-    :param pattern: The pattern to search for.
-    :param loc: The location to search around.
-
-    :returns: Best match index or -1.
-    """
-    # Python doesn't have a maxint limit, so ignore this check.
-    #if self.Match_MaxBits != 0 and len(pattern) > self.Match_MaxBits:
-    #  raise ValueError("Pattern too long for this application.")
-
-    # Initialise the alphabet.
-    s = self.match_alphabet(pattern)
-
-    def match_bitapScore(e, x):
-      """Compute and return the score for a match with e errors and x location.
-      Accesses loc and pattern through being a closure.
-
-      :param e: Number of errors in match.
-      :param x: Location of match.
-
-      :returns: Overall score for match (0.0 = good, 1.0 = bad).
-      """
-      accuracy = float(e) / len(pattern)
-      proximity = abs(loc - x)
-      if not self.Match_Distance:
-        # Dodge divide by zero error.
-        return proximity and 1.0 or accuracy
-      return accuracy + (proximity / float(self.Match_Distance))
-
-    # Highest score beyond which we give up.
-    score_threshold = self.Match_Threshold
-    # Is there a nearby exact match? (speedup)
-    best_loc = text.find(pattern, loc)
-    if best_loc != -1:
-      score_threshold = min(match_bitapScore(0, best_loc), score_threshold)
-      # What about in the other direction? (speedup)
-      best_loc = text.rfind(pattern, loc + len(pattern))
-      if best_loc != -1:
-        score_threshold = min(match_bitapScore(0, best_loc), score_threshold)
-
-    # Initialise the bit arrays.
-    matchmask = 1 << (len(pattern) - 1)
-    best_loc = -1
-
-    bin_max = len(pattern) + len(text)
-    # Empty initialization added to appease pychecker.
-    last_rd = None
-    for d in xrange(len(pattern)):
-      # Scan for the best match each iteration allows for one more error.
-      # Run a binary search to determine how far from 'loc' we can stray at
-      # this error level.
-      bin_min = 0
-      bin_mid = bin_max
-      while bin_min < bin_mid:
-        if match_bitapScore(d, loc + bin_mid) <= score_threshold:
-          bin_min = bin_mid
-        else:
-          bin_max = bin_mid
-        bin_mid = (bin_max - bin_min) / 2 + bin_min
-
-      # Use the result from this iteration as the maximum for the next.
-      bin_max = bin_mid
-      start = max(1, loc - bin_mid + 1)
-      finish = min(loc + bin_mid, len(text)) + len(pattern)
-
-      rd = range(finish + 1)
-      rd.append((1 << d) - 1)
-      for j in xrange(finish, start - 1, -1):
-        if len(text) <= j - 1:
-          # Out of range.
-          charMatch = 0
-        else:
-          charMatch = s.get(text[j - 1], 0)
-        if d == 0:  # First pass: exact match.
-          rd[j] = ((rd[j + 1] << 1) | 1) & charMatch
-        else:  # Subsequent passes: fuzzy match.
-          rd[j] = ((rd[j + 1] << 1) | 1) & charMatch | (
-              ((last_rd[j + 1] | last_rd[j]) << 1) | 1) | last_rd[j + 1]
-        if rd[j] & matchmask:
-          score = match_bitapScore(d, j - 1)
-          # This match will almost certainly be better than any existing match.
-          # But check anyway.
-          if score <= score_threshold:
-            # Told you so.
-            score_threshold = score
-            best_loc = j - 1
-            if best_loc > loc:
-              # When passing loc, don't exceed our current distance from loc.
-              start = max(1, 2 * loc - best_loc)
-            else:
-              # Already passed loc, downhill from here on in.
-              break
-      # No hope for a (better) match at greater error levels.
-      if match_bitapScore(d + 1, loc) > score_threshold:
-        break
-      last_rd = rd
-    return best_loc
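[Editor's note: the nested match_bitapScore closure above blends error rate with distance from the expected location. As a standalone sketch with the closed-over values made explicit parameters (names hypothetical), using the library's default Match_Distance of 1000:]

```python
def bitap_score(errors, x, loc, pattern_len, match_distance=1000):
    """Score a candidate match: 0.0 is perfect, 1.0 is very poor."""
    accuracy = errors / pattern_len
    proximity = abs(loc - x)
    if not match_distance:
        # With distance weighting disabled, any drift is disqualifying.
        return 1.0 if proximity else accuracy
    return accuracy + proximity / match_distance
```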
-
-  def match_alphabet(self, pattern):
-    """Initialise the alphabet for the Bitap algorithm.
-
-    :param pattern: The text to encode.
-
-    :returns: Hash of character locations.
-    """
-    s = {}
-    for char in pattern:
-      s[char] = 0
-    for i in xrange(len(pattern)):
-      s[pattern[i]] |= 1 << (len(pattern) - i - 1)
-    return s
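[Editor's note: the alphabet table above maps each character of the pattern to a bitmask of the positions where it occurs, with the first character on the highest bit. A compact Python 3 sketch of the same construction:]

```python
def match_alphabet(pattern):
    """Bitmask per character: bit (len - 1 - i) is set when the character
    occurs at index i, so the first character maps to the highest bit."""
    s = {}
    for i, char in enumerate(pattern):
        s[char] = s.get(char, 0) | (1 << (len(pattern) - i - 1))
    return s
```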
-
-  #  PATCH FUNCTIONS
-
-  def patch_addContext(self, patch, text):
-    """Increase the context until it is unique,
-    but don't let the pattern expand beyond Match_MaxBits.
-
-    :param patch: The patch to grow.
-    :param text: Source text.
-    """
-    if len(text) == 0:
-      return
-    pattern = text[patch.start2 : patch.start2 + patch.length1]
-    padding = 0
-
-    # Look for the first and last matches of pattern in text.  If two different
-    # matches are found, increase the pattern length.
-    while (text.find(pattern) != text.rfind(pattern) and (self.Match_MaxBits ==
-        0 or len(pattern) < self.Match_MaxBits - self.Patch_Margin -
-        self.Patch_Margin)):
-      padding += self.Patch_Margin
-      pattern = text[max(0, patch.start2 - padding) :
-                     patch.start2 + patch.length1 + padding]
-    # Add one chunk for good luck.
-    padding += self.Patch_Margin
-
-    # Add the prefix.
-    prefix = text[max(0, patch.start2 - padding) : patch.start2]
-    if prefix:
-      patch.diffs[:0] = [(self.DIFF_EQUAL, prefix)]
-    # Add the suffix.
-    suffix = text[patch.start2 + patch.length1 :
-                  patch.start2 + patch.length1 + padding]
-    if suffix:
-      patch.diffs.append((self.DIFF_EQUAL, suffix))
-
-    # Roll back the start points.
-    patch.start1 -= len(prefix)
-    patch.start2 -= len(prefix)
-    # Extend lengths.
-    patch.length1 += len(prefix) + len(suffix)
-    patch.length2 += len(prefix) + len(suffix)
-
-  def patch_make(self, a, b=None, c=None):
-    """Compute a list of patches to turn text1 into text2.
-    Use diffs if provided, otherwise compute it ourselves.
-    There are four ways to call this function, depending on what data is
-    available to the caller:
-    Method 1:
-    a = text1, b = text2
-    Method 2:
-    a = diffs
-    Method 3 (optimal):
-    a = text1, b = diffs
-    Method 4 (deprecated, use method 3):
-    a = text1, b = text2, c = diffs
-
-    :param a: text1 (methods 1,3,4) or Array of diff tuples for text1 to
-              text2 (method 2).
-    :param b: text2 (methods 1,4) or Array of diff tuples for text1 to
-              text2 (method 3) or undefined (method 2).
-    :param c: Array of diff tuples for text1 to text2 (method 4) or
-              undefined (methods 1,2,3).
-
-    :returns: Array of patch objects.
-    """
-    text1 = None
-    diffs = None
-    # Note that texts may arrive as 'str' or 'unicode'.
-    if isinstance(a, basestring) and isinstance(b, basestring) and c is None:
-      # Method 1: text1, text2
-      # Compute diffs from text1 and text2.
-      text1 = a
-      diffs = self.diff_main(text1, b, True)
-      if len(diffs) > 2:
-        self.diff_cleanupSemantic(diffs)
-        self.diff_cleanupEfficiency(diffs)
-    elif isinstance(a, list) and b is None and c is None:
-      # Method 2: diffs
-      # Compute text1 from diffs.
-      diffs = a
-      text1 = self.diff_text1(diffs)
-    elif isinstance(a, basestring) and isinstance(b, list) and c is None:
-      # Method 3: text1, diffs
-      text1 = a
-      diffs = b
-    elif (isinstance(a, basestring) and isinstance(b, basestring) and
-          isinstance(c, list)):
-      # Method 4: text1, text2, diffs
-      # text2 is not used.
-      text1 = a
-      diffs = c
-    else:
-      raise ValueError("Unknown call format to patch_make.")
-
-    if not diffs:
-      return []  # Get rid of the None case.
-    patches = []
-    patch = patch_obj()
-    char_count1 = 0  # Number of characters into the text1 string.
-    char_count2 = 0  # Number of characters into the text2 string.
-    prepatch_text = text1  # Recreate the patches to determine context info.
-    postpatch_text = text1
-    for x in xrange(len(diffs)):
-      (diff_type, diff_text) = diffs[x]
-      if len(patch.diffs) == 0 and diff_type != self.DIFF_EQUAL:
-        # A new patch starts here.
-        patch.start1 = char_count1
-        patch.start2 = char_count2
-      if diff_type == self.DIFF_INSERT:
-        # Insertion
-        patch.diffs.append(diffs[x])
-        patch.length2 += len(diff_text)
-        postpatch_text = (postpatch_text[:char_count2] + diff_text +
-                          postpatch_text[char_count2:])
-      elif diff_type == self.DIFF_DELETE:
-        # Deletion.
-        patch.length1 += len(diff_text)
-        patch.diffs.append(diffs[x])
-        postpatch_text = (postpatch_text[:char_count2] +
-                          postpatch_text[char_count2 + len(diff_text):])
-      elif (diff_type == self.DIFF_EQUAL and
-            len(diff_text) <= 2 * self.Patch_Margin and
-            len(patch.diffs) != 0 and len(diffs) != x + 1):
-        # Small equality inside a patch.
-        patch.diffs.append(diffs[x])
-        patch.length1 += len(diff_text)
-        patch.length2 += len(diff_text)
-
-      if (diff_type == self.DIFF_EQUAL and
-          len(diff_text) >= 2 * self.Patch_Margin):
-        # Time for a new patch.
-        if len(patch.diffs) != 0:
-          self.patch_addContext(patch, prepatch_text)
-          patches.append(patch)
-          patch = patch_obj()
-          # Unlike Unidiff, our patch lists have a rolling context.
-          # http://code.google.com/p/google-diff-match-patch/wiki/Unidiff
-          # Update prepatch text & pos to reflect the application of the
-          # just completed patch.
-          prepatch_text = postpatch_text
-          char_count1 = char_count2
-
-      # Update the current character count.
-      if diff_type != self.DIFF_INSERT:
-        char_count1 += len(diff_text)
-      if diff_type != self.DIFF_DELETE:
-        char_count2 += len(diff_text)
-
-    # Pick up the leftover patch if not empty.
-    if len(patch.diffs) != 0:
-      self.patch_addContext(patch, prepatch_text)
-      patches.append(patch)
-    return patches
-
-  def patch_deepCopy(self, patches):
-    """Given an array of patches, return another array that is identical.
-
-    :param patches: Array of patch objects.
-
-    :returns: Array of patch objects.
-    """
-    patchesCopy = []
-    for patch in patches:
-      patchCopy = patch_obj()
-      # No need to deep copy the tuples since they are immutable.
-      patchCopy.diffs = patch.diffs[:]
-      patchCopy.start1 = patch.start1
-      patchCopy.start2 = patch.start2
-      patchCopy.length1 = patch.length1
-      patchCopy.length2 = patch.length2
-      patchesCopy.append(patchCopy)
-    return patchesCopy
-
-  def patch_apply(self, patches, text):
-    """Merge a set of patches onto the text.  Return a patched text, as well
-    as a list of true/false values indicating which patches were applied.
-
-    :param patches: Array of patch objects.
-    :param text: Old text.
-
-    :returns: Two element Array, containing the new text and an array of
-              boolean values.
-    """
-    if not patches:
-      return (text, [])
-
-    # Deep copy the patches so that no changes are made to originals.
-    patches = self.patch_deepCopy(patches)
-
-    nullPadding = self.patch_addPadding(patches)
-    text = nullPadding + text + nullPadding
-    self.patch_splitMax(patches)
-
-    # delta keeps track of the offset between the expected and actual location
-    # of the previous patch.  If there are patches expected at positions 10 and
-    # 20, but the first patch was found at 12, delta is 2 and the second patch
-    # has an effective expected position of 22.
-    delta = 0
-    results = []
-    for patch in patches:
-      expected_loc = patch.start2 + delta
-      text1 = self.diff_text1(patch.diffs)
-      end_loc = -1
-      if len(text1) > self.Match_MaxBits:
-        # patch_splitMax will only provide an oversized pattern in the case of
-        # a monster delete.
-        start_loc = self.match_main(text, text1[:self.Match_MaxBits],
-                                    expected_loc)
-        if start_loc != -1:
-          end_loc = self.match_main(text, text1[-self.Match_MaxBits:],
-              expected_loc + len(text1) - self.Match_MaxBits)
-          if end_loc == -1 or start_loc >= end_loc:
-            # Can't find valid trailing context.  Drop this patch.
-            start_loc = -1
-      else:
-        start_loc = self.match_main(text, text1, expected_loc)
-      if start_loc == -1:
-        # No match found.  :(
-        results.append(False)
-        # Subtract the delta for this failed patch from subsequent patches.
-        delta -= patch.length2 - patch.length1
-      else:
-        # Found a match.  :)
-        results.append(True)
-        delta = start_loc - expected_loc
-        if end_loc == -1:
-          text2 = text[start_loc : start_loc + len(text1)]
-        else:
-          text2 = text[start_loc : end_loc + self.Match_MaxBits]
-        if text1 == text2:
-          # Perfect match, just shove the replacement text in.
-          text = (text[:start_loc] + self.diff_text2(patch.diffs) +
-                      text[start_loc + len(text1):])
-        else:
-          # Imperfect match.
-          # Run a diff to get a framework of equivalent indices.
-          diffs = self.diff_main(text1, text2, False)
-          if (len(text1) > self.Match_MaxBits and
-              self.diff_levenshtein(diffs) / float(len(text1)) >
-              self.Patch_DeleteThreshold):
-            # The end points match, but the content is unacceptably bad.
-            results[-1] = False
-          else:
-            self.diff_cleanupSemanticLossless(diffs)
-            index1 = 0
-            for (op, data) in patch.diffs:
-              if op != self.DIFF_EQUAL:
-                index2 = self.diff_xIndex(diffs, index1)
-              if op == self.DIFF_INSERT:  # Insertion
-                text = text[:start_loc + index2] + data + text[start_loc +
-                                                               index2:]
-              elif op == self.DIFF_DELETE:  # Deletion
-                text = text[:start_loc + index2] + text[start_loc +
-                    self.diff_xIndex(diffs, index1 + len(data)):]
-              if op != self.DIFF_DELETE:
-                index1 += len(data)
-    # Strip the padding off.
-    text = text[len(nullPadding):-len(nullPadding)]
-    return (text, results)
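The perfect-match branch of `patch_apply` above is plain string splicing. A minimal standalone sketch of that splice (the helper name here is hypothetical, not part of the original API):

```python
def apply_exact(text, start_loc, old, new):
    # Splice `new` over `old` at start_loc, as patch_apply does when
    # the matched region equals text1 exactly (the "perfect match"
    # branch above).
    assert text[start_loc:start_loc + len(old)] == old
    return text[:start_loc] + new + text[start_loc + len(old):]
```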
-
-  def patch_addPadding(self, patches):
-    """Add some padding on text start and end so that edges can match
-    something.  Intended to be called only from within patch_apply.
-
-    :param patches: Array of patch objects.
-
-    :returns: The padding string added to each side.
-    """
-    paddingLength = self.Patch_Margin
-    nullPadding = ""
-    for x in xrange(1, paddingLength + 1):
-      nullPadding += chr(x)
-
-    # Bump all the patches forward.
-    for patch in patches:
-      patch.start1 += paddingLength
-      patch.start2 += paddingLength
-
-    # Add some padding on start of first diff.
-    patch = patches[0]
-    diffs = patch.diffs
-    if not diffs or diffs[0][0] != self.DIFF_EQUAL:
-      # Add nullPadding equality.
-      diffs.insert(0, (self.DIFF_EQUAL, nullPadding))
-      patch.start1 -= paddingLength  # Should be 0.
-      patch.start2 -= paddingLength  # Should be 0.
-      patch.length1 += paddingLength
-      patch.length2 += paddingLength
-    elif paddingLength > len(diffs[0][1]):
-      # Grow first equality.
-      extraLength = paddingLength - len(diffs[0][1])
-      newText = nullPadding[len(diffs[0][1]):] + diffs[0][1]
-      diffs[0] = (diffs[0][0], newText)
-      patch.start1 -= extraLength
-      patch.start2 -= extraLength
-      patch.length1 += extraLength
-      patch.length2 += extraLength
-
-    # Add some padding on end of last diff.
-    patch = patches[-1]
-    diffs = patch.diffs
-    if not diffs or diffs[-1][0] != self.DIFF_EQUAL:
-      # Add nullPadding equality.
-      diffs.append((self.DIFF_EQUAL, nullPadding))
-      patch.length1 += paddingLength
-      patch.length2 += paddingLength
-    elif paddingLength > len(diffs[-1][1]):
-      # Grow last equality.
-      extraLength = paddingLength - len(diffs[-1][1])
-      newText = diffs[-1][1] + nullPadding[:extraLength]
-      diffs[-1] = (diffs[-1][0], newText)
-      patch.length1 += extraLength
-      patch.length2 += extraLength
-
-    return nullPadding
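The padding scheme can be sketched in isolation: characters `chr(1)` through `chr(Patch_Margin)` are glued onto both ends of the text so that patches touching the edges have context to match, and `patch_apply` strips the same amount afterwards. A small sketch, assuming the library's default `Patch_Margin` of 4:

```python
PATCH_MARGIN = 4  # default Patch_Margin in diff-match-patch (assumed here)
null_padding = "".join(chr(x) for x in range(1, PATCH_MARGIN + 1))

# The control characters chr(1)..chr(4) cannot occur in normal text,
# so they can never produce a false match against the document body.
padded = null_padding + "abc" + null_padding

# patch_apply strips the padding again once all patches are applied:
stripped = padded[len(null_padding):-len(null_padding)]
```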
-
-  def patch_splitMax(self, patches):
-    """Look through the patches and break up any which are longer than the
-    maximum limit of the match algorithm.
-
-    :param patches: Array of patch objects.
-    """
-    if self.Match_MaxBits == 0:
-      return
-    for x in xrange(len(patches)):
-      if patches[x].length1 > self.Match_MaxBits:
-        bigpatch = patches[x]
-        # Remove the big old patch.
-        del patches[x]
-        x -= 1
-        patch_size = self.Match_MaxBits
-        start1 = bigpatch.start1
-        start2 = bigpatch.start2
-        precontext = ''
-        while len(bigpatch.diffs) != 0:
-          # Create one of several smaller patches.
-          patch = patch_obj()
-          empty = True
-          patch.start1 = start1 - len(precontext)
-          patch.start2 = start2 - len(precontext)
-          if precontext:
-            patch.length1 = patch.length2 = len(precontext)
-            patch.diffs.append((self.DIFF_EQUAL, precontext))
-
-          while (len(bigpatch.diffs) != 0 and
-                 patch.length1 < patch_size - self.Patch_Margin):
-            (diff_type, diff_text) = bigpatch.diffs[0]
-            if diff_type == self.DIFF_INSERT:
-              # Insertions are harmless.
-              patch.length2 += len(diff_text)
-              start2 += len(diff_text)
-              patch.diffs.append(bigpatch.diffs.pop(0))
-              empty = False
-            elif (diff_type == self.DIFF_DELETE and len(patch.diffs) == 1 and
-                patch.diffs[0][0] == self.DIFF_EQUAL and
-                len(diff_text) > 2 * patch_size):
-              # This is a large deletion.  Let it pass in one chunk.
-              patch.length1 += len(diff_text)
-              start1 += len(diff_text)
-              empty = False
-              patch.diffs.append((diff_type, diff_text))
-              del bigpatch.diffs[0]
-            else:
-              # Deletion or equality.  Only take as much as we can stomach.
-              diff_text = diff_text[:patch_size - patch.length1 -
-                                    self.Patch_Margin]
-              patch.length1 += len(diff_text)
-              start1 += len(diff_text)
-              if diff_type == self.DIFF_EQUAL:
-                patch.length2 += len(diff_text)
-                start2 += len(diff_text)
-              else:
-                empty = False
-
-              patch.diffs.append((diff_type, diff_text))
-              if diff_text == bigpatch.diffs[0][1]:
-                del bigpatch.diffs[0]
-              else:
-                bigpatch.diffs[0] = (bigpatch.diffs[0][0],
-                                     bigpatch.diffs[0][1][len(diff_text):])
-
-          # Compute the head context for the next patch.
-          precontext = self.diff_text2(patch.diffs)
-          precontext = precontext[-self.Patch_Margin:]
-          # Append the end context for this patch.
-          postcontext = self.diff_text1(bigpatch.diffs)[:self.Patch_Margin]
-          if postcontext:
-            patch.length1 += len(postcontext)
-            patch.length2 += len(postcontext)
-            if len(patch.diffs) != 0 and patch.diffs[-1][0] == self.DIFF_EQUAL:
-              patch.diffs[-1] = (self.DIFF_EQUAL, patch.diffs[-1][1] +
-                                 postcontext)
-            else:
-              patch.diffs.append((self.DIFF_EQUAL, postcontext))
-
-          if not empty:
-            x += 1
-            patches.insert(x, patch)
-
-  def patch_toText(self, patches):
-    """Take a list of patches and return a textual representation.
-
-    :param patches: Array of patch objects.
-
-    :returns: Text representation of patches.
-    """
-    text = []
-    for patch in patches:
-      text.append(str(patch))
-    return "".join(text)
-
-  def patch_fromText(self, textline):
-    """Parse a textual representation of patches and return a list of patch
-    objects.
-
-    :param textline: Text representation of patches.
-
-    :returns: Array of patch objects.
-
-    :raises ValueError: If invalid input.
-    """
-    if type(textline) == unicode:
-      # Patches should be composed of a subset of ascii chars, Unicode not
-      # required.  If this encode raises UnicodeEncodeError, patch is invalid.
-      textline = textline.encode("ascii")
-    patches = []
-    if not textline:
-      return patches
-    text = textline.split('\n')
-    while len(text) != 0:
-      m = re.match("^@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@$", text[0])
-      if not m:
-        raise ValueError("Invalid patch string: " + text[0])
-      patch = patch_obj()
-      patches.append(patch)
-      patch.start1 = int(m.group(1))
-      if m.group(2) == '':
-        patch.start1 -= 1
-        patch.length1 = 1
-      elif m.group(2) == '0':
-        patch.length1 = 0
-      else:
-        patch.start1 -= 1
-        patch.length1 = int(m.group(2))
-
-      patch.start2 = int(m.group(3))
-      if m.group(4) == '':
-        patch.start2 -= 1
-        patch.length2 = 1
-      elif m.group(4) == '0':
-        patch.length2 = 0
-      else:
-        patch.start2 -= 1
-        patch.length2 = int(m.group(4))
-
-      del text[0]
-
-      import urllib
-      while len(text) != 0:
-        if text[0]:
-          sign = text[0][0]
-        else:
-          sign = ''
-        line = urllib.unquote(text[0][1:])
-        line = line.decode("utf-8")
-        if sign == '+':
-          # Insertion.
-          patch.diffs.append((self.DIFF_INSERT, line))
-        elif sign == '-':
-          # Deletion.
-          patch.diffs.append((self.DIFF_DELETE, line))
-        elif sign == ' ':
-          # Minor equality.
-          patch.diffs.append((self.DIFF_EQUAL, line))
-        elif sign == '@':
-          # Start of next patch.
-          break
-        elif sign == '':
-          # Blank line?  Whatever.
-          pass
-        else:
-          # WTF?
-          raise ValueError("Invalid patch mode: '%s'\n%s" % (sign, line))
-        del text[0]
-    return patches
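The header-decoding rules in `patch_fromText` (empty length means 1, `'0'` means an insertion point, otherwise shift to 0-based) can be condensed into a small standalone parser; the function and regex below mirror the logic above but are a sketch, not the original API:

```python
import re

HEADER = re.compile(r"^@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@$")

def parse_header(line):
    # Convert a 1-based GNU-style hunk header into the 0-based
    # (start1, length1, start2, length2) values patch_fromText stores,
    # including the '' -> length 1 and '0' -> length 0 special cases.
    m = HEADER.match(line)
    if not m:
        raise ValueError("Invalid patch string: " + line)

    def decode(start, length):
        start = int(start)
        if length == '':
            return start - 1, 1
        if length == '0':
            return start, 0
        return start - 1, int(length)

    return decode(m.group(1), m.group(2)) + decode(m.group(3), m.group(4))
```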
-
-
-class patch_obj:
-  """Class representing one patch operation.
-  """
-
-  def __init__(self):
-    """Initializes with an empty list of diffs.
-    """
-    self.diffs = []
-    self.start1 = None
-    self.start2 = None
-    self.length1 = 0
-    self.length2 = 0
-
-  def __str__(self):
-    """Emmulate GNU diff's format.
-    Header: @@ -382,8 +481,9 @@
-    Indicies are printed as 1-based, not 0-based.
+from __future__ import absolute_import  # Needed because of cyclic self-import.
 
-    :returns: The GNU diff string.
-    """
-    import urllib
-    if self.length1 == 0:
-      coords1 = str(self.start1) + ",0"
-    elif self.length1 == 1:
-      coords1 = str(self.start1 + 1)
-    else:
-      coords1 = str(self.start1 + 1) + "," + str(self.length1)
-    if self.length2 == 0:
-      coords2 = str(self.start2) + ",0"
-    elif self.length2 == 1:
-      coords2 = str(self.start2 + 1)
-    else:
-      coords2 = str(self.start2 + 1) + "," + str(self.length2)
-    text = ["@@ -", coords1, " +", coords2, " @@\n"]
-    # Escape the body of the patch with %xx notation.
-    for (op, data) in self.diffs:
-      if op == diff_match_patch.DIFF_INSERT:
-        text.append("+")
-      elif op == diff_match_patch.DIFF_DELETE:
-        text.append("-")
-      elif op == diff_match_patch.DIFF_EQUAL:
-        text.append(" ")
-      # High ascii will raise UnicodeDecodeError.  Use Unicode instead.
-      data = data.encode("utf-8")
-      text.append(urllib.quote(data, "!~*'();/?:@&=+$,# ") + "\n")
-    return "".join(text)
+from diff_match_patch import diff_match_patch
diff --git a/translate/misc/file_discovery.py b/translate/misc/file_discovery.py
index 9865655..5c94495 100644
--- a/translate/misc/file_discovery.py
+++ b/translate/misc/file_discovery.py
@@ -2,6 +2,7 @@
 # -*- coding: utf-8 -*-
 #
 # Copyright 2008 Zuza Software Foundation
+# Copyright 2014 F Wolff
 #
 # This file is part of translate.
 #
@@ -21,8 +22,8 @@
 
 __all__ = ['get_abs_data_filename']
 
-import sys
 import os
+import sys
 
 
 def get_abs_data_filename(path_parts, basedirs=None):
@@ -32,17 +33,22 @@ def get_abs_data_filename(path_parts, basedirs=None):
     :type  path_parts: list
     :param path_parts: The path parts that can be joined by ``os.path.join()``.
     """
-    if basedirs is None:
-        basedirs = []
-
     if isinstance(path_parts, str):
         path_parts = [path_parts]
 
-    BASE_DIRS = basedirs + [
-        os.path.dirname(unicode(__file__, sys.getfilesystemencoding())),
-        os.path.dirname(unicode(sys.executable, sys.getfilesystemencoding())),
+    DATA_DIRS = [
+        ["..", "share"],
     ]
 
+    BASE_DIRS = basedirs
+    if not basedirs:
+        # Useful for running from checkout or similar layout. This will find
+        # Toolkit's data files
+        base = os.path.dirname(unicode(__file__, sys.getfilesystemencoding()))
+        BASE_DIRS = [
+                os.path.join(base, os.path.pardir),
+        ]
+
     # Freedesktop standard
     if 'XDG_DATA_HOME' in os.environ:
         BASE_DIRS += [os.environ['XDG_DATA_HOME']]
@@ -53,10 +59,15 @@ def get_abs_data_filename(path_parts, basedirs=None):
     if 'RESOURCEPATH' in os.environ:
         BASE_DIRS += os.environ['RESOURCEPATH'].split(os.path.pathsep)
 
-    DATA_DIRS = [
-        ["..", "..", "share"],
-        ["..", "share"],
-        ["share"],
+    if getattr(sys, 'frozen', False):
+        # We know exactly what the layout is when we package for Windows, so
+        # let's avoid unnecessary paths
+        DATA_DIRS = [["share"]]
+        BASE_DIRS = []
+
+    BASE_DIRS += [
+            # installed linux (/usr/bin) as well as Windows
+            os.path.dirname(unicode(sys.executable, sys.getfilesystemencoding())),
     ]
 
     for basepath, data_dir in ((x, y) for x in BASE_DIRS for y in DATA_DIRS):
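The search at the end of `get_abs_data_filename` is a Cartesian product over base directories and relative data directories, returning the first combination that exists on disk. A minimal sketch of that loop (the function name and parameters are illustrative, not the toolkit's API):

```python
import os

def find_data_file(filename, base_dirs, data_dirs):
    # Try every (base, data_dir) combination, mirroring the generator
    # expression in get_abs_data_filename, and return the first
    # candidate that exists.
    for base in base_dirs:
        for parts in data_dirs:
            candidate = os.path.join(base, *(parts + [filename]))
            if os.path.exists(candidate):
                return candidate
    return None
```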
diff --git a/translate/misc/hash.py b/translate/misc/hash.py
deleted file mode 100644
index fe3263c..0000000
--- a/translate/misc/hash.py
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2008 Zuza Software Foundation
-#
-# This file is part of translate.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-
-"""This module contains some temporary glue to make us work with md5 hashes on
-old and new versions of Python. The function md5_f() wraps whatever is
-available."""
-
-try:
-    import hashlib
-    md5_f = hashlib.md5
-except ImportError:
-    import md5
-    md5_f = md5.new
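The glue module removed above only mattered for Python < 2.5, where `hashlib` did not exist. On any supported interpreter the fallback collapses to a plain alias:

```python
import hashlib

# hashlib has shipped with Python since 2.5, so the try/except
# around the old md5 module is no longer needed.
md5_f = hashlib.md5

digest = md5_f(b"translate").hexdigest()
```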
diff --git a/translate/misc/ini.py b/translate/misc/ini.py
deleted file mode 100644
index d72bdbe..0000000
--- a/translate/misc/ini.py
+++ /dev/null
@@ -1,576 +0,0 @@
-# Copyright (c) 2001, 2002, 2003 Python Software Foundation
-# Copyright (c) 2004 Paramjit Oberoi <param.cs.wisc.edu>
-# All Rights Reserved.  See LICENSE-PSF & LICENSE for details.
-
-"""Access and/or modify INI files
-
-* Compatible with ConfigParser
-* Preserves order of sections & options
-* Preserves comments/blank lines/etc
-* More convenient access to data
-
-Example:
-
-    >>> from StringIO import StringIO
-    >>> sio = StringIO('''# configure foo-application
-    ... [foo]
-    ... bar1 = qualia
-    ... bar2 = 1977
-    ... [foo-ext]
-    ... special = 1''')
-
-    >>> cfg = INIConfig(sio)
-    >>> print cfg.foo.bar1
-    qualia
-    >>> print cfg['foo-ext'].special
-    1
-    >>> cfg.foo.newopt = 'hi!'
-
-    >>> print cfg
-    # configure foo-application
-    [foo]
-    bar1 = qualia
-    bar2 = 1977
-    newopt = hi!
-    [foo-ext]
-    special = 1
-
-"""
-
-# An ini parser that supports ordered sections/options
-# Also supports updates, while preserving structure
-# Backward-compatible with ConfigParser
-
-import re
-try:
-    # test to see if the built-in type is available:
-    __test_set = set()
-    del __test_set
-    #if it is, we'll avoid the deprecation warning inside the sets module
-except Exception:
-    from sets import Set as set
-
-from iniparse import config
-from ConfigParser import DEFAULTSECT, ParsingError, MissingSectionHeaderError
-
-class LineType(object):
-    line = None
-
-    def __init__(self, line=None):
-        if line is not None:
-            self.line = line.strip('\n')
-
-    # Return the original line for unmodified objects
-    # Otherwise construct using the current attribute values
-    def __str__(self):
-        if self.line is not None:
-            return self.line
-        else:
-            return self.to_string()
-
-    # If an attribute is modified after initialization
-    # set line to None since it is no longer accurate.
-    def __setattr__(self, name, value):
-        if hasattr(self,name):
-            self.__dict__['line'] = None
-        self.__dict__[name] = value
-
-    def to_string(self):
-        raise Exception('This method must be overridden in derived classes')
-
-
-class SectionLine(LineType):
-    regex =  re.compile(r'^\['
-                        r'(?P<name>[^]]+)'
-                        r'\]\s*'
-                        r'((?P<csep>;|#)(?P<comment>.*))?$')
-
-    def __init__(self, name, comment=None, comment_separator=None,
-                             comment_offset=-1, line=None):
-        super(SectionLine, self).__init__(line)
-        self.name = name
-        self.comment = comment
-        self.comment_separator = comment_separator
-        self.comment_offset = comment_offset
-
-    def to_string(self):
-        out = '[' + self.name + ']'
-        if self.comment is not None:
-            # try to preserve indentation of comments
-            out = (out+' ').ljust(self.comment_offset)
-            out = out + self.comment_separator + self.comment
-        return out
-
-    def parse(cls, line):
-        m = cls.regex.match(line.rstrip())
-        if m is None:
-            return None
-        return cls(m.group('name'), m.group('comment'),
-                   m.group('csep'), m.start('csep'),
-                   line)
-    parse = classmethod(parse)
-
-
-class OptionLine(LineType):
-    def __init__(self, name, value, separator=' = ', comment=None,
-                 comment_separator=None, comment_offset=-1, line=None):
-        super(OptionLine, self).__init__(line)
-        self.name = name
-        self.value = value
-        self.separator = separator
-        self.comment = comment
-        self.comment_separator = comment_separator
-        self.comment_offset = comment_offset
-
-    def to_string(self):
-        out = '%s%s%s' % (self.name, self.separator, self.value)
-        if self.comment is not None:
-            # try to preserve indentation of comments
-            out = (out+' ').ljust(self.comment_offset)
-            out = out + self.comment_separator + self.comment
-        return out
-
-    regex = re.compile(r'^(?P<name>[^:=\s[][^:=\s]*)'
-                       r'(?P<sep>\s*[:=]\s*)'
-                       r'(?P<value>.*)$')
-
-    def parse(cls, line):
-        m = cls.regex.match(line.rstrip())
-        if m is None:
-            return None
-
-        name = m.group('name').rstrip()
-        value = m.group('value')
-        sep = m.group('sep')
-
-        # comments are not detected in the regex because
-        # ensuring total compatibility with ConfigParser
-        # requires that:
-        #     option = value    ;comment   // value=='value'
-        #     option = value;1  ;comment   // value=='value;1  ;comment'
-        #
-        # Doing this in a regex would be complicated.  I
-        # think this is a bug.  The whole issue of how to
-        # include ';' in the value needs to be addressed.
-        # Also, '#' doesn't mark comments in options...
-
-        coff = value.find(';')
-        if coff != -1 and value[coff-1].isspace():
-            comment = value[coff+1:]
-            csep = value[coff]
-            value = value[:coff].rstrip()
-            coff = m.start('value') + coff
-        else:
-            comment = None
-            csep = None
-            coff = -1
-
-        return cls(name, value, sep, comment, csep, coff, line)
-    parse = classmethod(parse)
-
-
-class CommentLine(LineType):
-    regex = re.compile(r'^(?P<csep>[;#]|[rR][eE][mM])'
-                       r'(?P<comment>.*)$')
-
-    def __init__(self, comment='', separator='#', line=None):
-        super(CommentLine, self).__init__(line)
-        self.comment = comment
-        self.separator = separator
-
-    def to_string(self):
-        return self.separator + self.comment
-
-    def parse(cls, line):
-        m = cls.regex.match(line.rstrip())
-        if m is None:
-            return None
-        return cls(m.group('comment'), m.group('csep'), line)
-    parse = classmethod(parse)
-
-
-class EmptyLine(LineType):
-    # could make this a singleton
-    def to_string(self):
-        return ''
-
-    def parse(cls, line):
-        if line.strip(): return None
-        return cls(line)
-    parse = classmethod(parse)
-
-
-class ContinuationLine(LineType):
-    regex = re.compile(r'^\s+(?P<value>.*)$')
-
-    def __init__(self, value, value_offset=8, line=None):
-        super(ContinuationLine, self).__init__(line)
-        self.value = value
-        self.value_offset = value_offset
-
-    def to_string(self):
-        return ' '*self.value_offset + self.value
-
-    def parse(cls, line):
-        m = cls.regex.match(line.rstrip())
-        if m is None:
-            return None
-        return cls(m.group('value'), m.start('value'), line)
-    parse = classmethod(parse)
-
-
-class LineContainer(object):
-    def __init__(self, d=None):
-        self.contents = []
-        self.orgvalue = None
-        if d:
-            if isinstance(d, list): self.extend(d)
-            else: self.add(d)
-
-    def add(self, x):
-        self.contents.append(x)
-
-    def extend(self, x):
-        for i in x: self.add(i)
-
-    def get_name(self):
-        return self.contents[0].name
-
-    def set_name(self, data):
-        self.contents[0].name = data
-
-    def get_value(self):
-        if self.orgvalue is not None:
-            return self.orgvalue
-        elif len(self.contents) == 1:
-            return self.contents[0].value
-        else:
-            return '\n'.join([str(x.value) for x in self.contents
-                              if not isinstance(x, (CommentLine, EmptyLine))])
-
-    def set_value(self, data):
-        self.orgvalue = data
-        lines = str(data).split('\n')
-        linediff = len(lines) - len(self.contents)
-        if linediff > 0:
-            for _ in range(linediff):
-                self.add(ContinuationLine(''))
-        elif linediff < 0:
-            self.contents = self.contents[:linediff]
-        for i,v in enumerate(lines):
-            self.contents[i].value = v
-
-    name = property(get_name, set_name)
-    value = property(get_value, set_value)
-
-    def __str__(self):
-        s = [str(x) for x in self.contents]
-        return '\n'.join(s)
-
-    def finditer(self, key):
-        for x in self.contents[::-1]:
-            if hasattr(x, 'name') and x.name==key:
-                yield x
-
-    def find(self, key):
-        for x in self.finditer(key):
-            return x
-        raise KeyError(key)
-
-
-def _make_xform_property(myattrname, srcattrname=None):
-    private_attrname = myattrname + 'value'
-    private_srcname = myattrname + 'source'
-    if srcattrname is None:
-        srcattrname = myattrname
-
-    def getfn(self):
-        srcobj = getattr(self, private_srcname)
-        if srcobj is not None:
-            return getattr(srcobj, srcattrname)
-        else:
-            return getattr(self, private_attrname)
-
-    def setfn(self, value):
-        srcobj = getattr(self, private_srcname)
-        if srcobj is not None:
-            setattr(srcobj, srcattrname, value)
-        else:
-            setattr(self, private_attrname, value)
-
-    return property(getfn, setfn)
-
-
-class INISection(config.ConfigNamespace):
-    _lines = None
-    _options = None
-    _defaults = None
-    _optionxformvalue = None
-    _optionxformsource = None
-    def __init__(self, lineobj, defaults = None,
-                       optionxformvalue=None, optionxformsource=None):
-        self._lines = [lineobj]
-        self._defaults = defaults
-        self._optionxformvalue = optionxformvalue
-        self._optionxformsource = optionxformsource
-        self._options = {}
-
-    _optionxform = _make_xform_property('_optionxform')
-
-    def __getitem__(self, key):
-        if key == '__name__':
-            return self._lines[-1].name
-        if self._optionxform: key = self._optionxform(key)
-        try:
-            return self._options[key].value
-        except KeyError:
-            if self._defaults and key in self._defaults._options:
-                return self._defaults._options[key].value
-            else:
-                raise
-
-    def __setitem__(self, key, value):
-        if self._optionxform: xkey = self._optionxform(key)
-        else: xkey = key
-        if xkey not in self._options:
-            # create a dummy object - value may have multiple lines
-            obj = LineContainer(OptionLine(key, ''))
-            self._lines[-1].add(obj)
-            self._options[xkey] = obj
-        # the set_value() function in LineContainer
-        # automatically handles multi-line values
-        self._options[xkey].value = value
-
-    def __delitem__(self, key):
-        if self._optionxform: key = self._optionxform(key)
-        for l in self._lines:
-            remaining = []
-            for o in l.contents:
-                if isinstance(o, LineContainer):
-                    n = o.name
-                    if self._optionxform: n = self._optionxform(n)
-                    if key != n: remaining.append(o)
-                else:
-                    remaining.append(o)
-            l.contents = remaining
-        del self._options[key]
-
-    def __iter__(self):
-        d = set()
-        for l in self._lines:
-            for x in l.contents:
-                if isinstance(x, LineContainer):
-                    if self._optionxform:
-                        ans = self._optionxform(x.name)
-                    else:
-                        ans = x.name
-                    if ans not in d:
-                        yield ans
-                        d.add(ans)
-        if self._defaults:
-            for x in self._defaults:
-                if x not in d:
-                    yield x
-                    d.add(x)
-
-    def new_namespace(self, name):
-        raise Exception('No sub-sections allowed', name)
-
-
-def make_comment(line):
-    return CommentLine(line.rstrip())
-
-
-def readline_iterator(f):
-    """iterate over a file by only using the file object's readline method"""
-
-    have_newline = False
-    while True:
-        line = f.readline()
-
-        if not line:
-            if have_newline:
-                yield ""
-            return
-
-        if line.endswith('\n'):
-            have_newline = True
-        else:
-            have_newline = False
-
-        yield line
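The subtlety in `readline_iterator` is the trailing empty string: if the last line ended with a newline, one final `""` is yielded so callers see the implicit empty last line. A Python 3 sketch of the same helper, exercised on an in-memory file:

```python
import io

def readline_iterator(f):
    # Iterate a file using only readline(), yielding a final "" when
    # the last line ended with a newline (mirrors the helper above).
    have_newline = False
    while True:
        line = f.readline()
        if not line:
            if have_newline:
                yield ""
            return
        have_newline = line.endswith('\n')
        yield line

lines = list(readline_iterator(io.StringIO("a\nb\n")))
```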
-
-
-class INIConfig(config.ConfigNamespace):
-    _data = None
-    _sections = None
-    _defaults = None
-    _optionxformvalue = None
-    _optionxformsource = None
-    _sectionxformvalue = None
-    _sectionxformsource = None
-    _parse_exc = None
-    def __init__(self, fp=None, defaults = None, parse_exc=True,
-                 optionxformvalue=str.lower, optionxformsource=None,
-                 sectionxformvalue=None, sectionxformsource=None):
-        self._data = LineContainer()
-        self._parse_exc = parse_exc
-        self._optionxformvalue = optionxformvalue
-        self._optionxformsource = optionxformsource
-        self._sectionxformvalue = sectionxformvalue
-        self._sectionxformsource = sectionxformsource
-        self._sections = {}
-        if defaults is None: defaults = {}
-        self._defaults = INISection(LineContainer(), optionxformsource=self)
-        for name, value in defaults.iteritems():
-            self._defaults[name] = value
-        if fp is not None:
-            self.readfp(fp)
-
-    _optionxform = _make_xform_property('_optionxform', 'optionxform')
-    _sectionxform = _make_xform_property('_sectionxform', 'optionxform')
-
-    def __getitem__(self, key):
-        if key == DEFAULTSECT:
-            return self._defaults
-        if self._sectionxform: key = self._sectionxform(key)
-        return self._sections[key]
-
-    def __setitem__(self, key, value):
-        raise Exception('Values must be inside sections', key, value)
-
-    def __delitem__(self, key):
-        if self._sectionxform: key = self._sectionxform(key)
-        for line in self._sections[key]._lines:
-            self._data.contents.remove(line)
-        del self._sections[key]
-
-    def __iter__(self):
-        d = set()
-        for x in self._data.contents:
-            if isinstance(x, LineContainer):
-                if x.name not in d:
-                    yield x.name
-                    d.add(x.name)
-
-    def new_namespace(self, name):
-        if self._data.contents:
-            self._data.add(EmptyLine())
-        obj = LineContainer(SectionLine(name))
-        self._data.add(obj)
-        if self._sectionxform: name = self._sectionxform(name)
-        if name in self._sections:
-            ns = self._sections[name]
-            ns._lines.append(obj)
-        else:
-            ns = INISection(obj, defaults=self._defaults,
-                            optionxformsource=self)
-            self._sections[name] = ns
-        return ns
-
-    def __str__(self):
-        return str(self._data)
-
-    _line_types = [EmptyLine, CommentLine,
-                   SectionLine, OptionLine,
-                   ContinuationLine]
-
-    def _parse(self, line):
-        for linetype in self._line_types:
-            lineobj = linetype.parse(line)
-            if lineobj:
-                return lineobj
-        else:
-            # can't parse line
-            return None
-
-    def readfp(self, fp):
-        cur_section = None
-        cur_option = None
-        cur_section_name = None
-        cur_option_name = None
-        pending_lines = []
-        try:
-            fname = fp.name
-        except AttributeError:
-            fname = '<???>'
-        linecount = 0
-        exc = None
-        line = None
-
-        for line in readline_iterator(fp):
-            lineobj = self._parse(line)
-            linecount += 1
-
-            if not cur_section and not isinstance(lineobj,
-                                (CommentLine, EmptyLine, SectionLine)):
-                if self._parse_exc:
-                    raise MissingSectionHeaderError(fname, linecount, line)
-                else:
-                    lineobj = make_comment(line)
-
-            if lineobj is None:
-                if self._parse_exc:
-                    if exc is None: exc = ParsingError(fname)
-                    exc.append(linecount, line)
-                lineobj = make_comment(line)
-
-            if isinstance(lineobj, ContinuationLine):
-                if cur_option:
-                    cur_option.extend(pending_lines)
-                    pending_lines = []
-                    cur_option.add(lineobj)
-                else:
-                    # illegal continuation line - convert to comment
-                    if self._parse_exc:
-                        if exc is None: exc = ParsingError(fname)
-                        exc.append(linecount, line)
-                    lineobj = make_comment(line)
-
-            if isinstance(lineobj, OptionLine):
-                cur_section.extend(pending_lines)
-                pending_lines = []
-                cur_option = LineContainer(lineobj)
-                cur_section.add(cur_option)
-                if self._optionxform:
-                    cur_option_name = self._optionxform(cur_option.name)
-                else:
-                    cur_option_name = cur_option.name
-                if cur_section_name == DEFAULTSECT:
-                    optobj = self._defaults
-                else:
-                    optobj = self._sections[cur_section_name]
-                optobj._options[cur_option_name] = cur_option
-
-            if isinstance(lineobj, SectionLine):
-                self._data.extend(pending_lines)
-                pending_lines = []
-                cur_section = LineContainer(lineobj)
-                self._data.add(cur_section)
-                cur_option = None
-                cur_option_name = None
-                if cur_section.name == DEFAULTSECT:
-                    self._defaults._lines.append(cur_section)
-                    cur_section_name = DEFAULTSECT
-                else:
-                    if self._sectionxform:
-                        cur_section_name = self._sectionxform(cur_section.name)
-                    else:
-                        cur_section_name = cur_section.name
-                    if cur_section_name not in self._sections:
-                        self._sections[cur_section_name] = \
-                                INISection(cur_section, defaults=self._defaults,
-                                           optionxformsource=self)
-                    else:
-                        self._sections[cur_section_name]._lines.append(cur_section)
-
-            if isinstance(lineobj, (CommentLine, EmptyLine)):
-                pending_lines.append(lineobj)
-
-        self._data.extend(pending_lines)
-        if line and line[-1]=='\n':
-            self._data.add(EmptyLine())
-
-        if exc:
-            raise exc
-
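As an aside, the removed `_parse` method above dispatches by trying each line type in turn and returning the first successful parse. That first-match pattern can be sketched minimally as follows; the regex patterns and type names here are simplified stand-ins, not the actual iniparse classes:

```python
import re

# Order matters, as in the removed _line_types list: comment is tried
# before continuation so an indented comment is not folded into an option.
LINE_TYPES = [
    ("empty", re.compile(r"^\s*$")),
    ("comment", re.compile(r"^\s*[;#]")),
    ("section", re.compile(r"^\[(?P<name>[^]]+)\]\s*$")),
    ("continuation", re.compile(r"^\s+\S")),
    ("option", re.compile(r"^(?P<key>[^\s:=][^:=]*)[:=](?P<value>.*)$")),
]

def classify(line):
    """Return the kind of the first matching line type, or None."""
    for kind, pattern in LINE_TYPES:
        if pattern.match(line):
            return kind
    return None  # can't parse line
```

`readfp` then routes each classified line: options attach to the current section, continuations extend the current option, and anything unparseable is either raised as a `ParsingError` or downgraded to a comment.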
diff --git a/translate/misc/lru.py b/translate/misc/lru.py
index 3ad4220..46215a1 100644
--- a/translate/misc/lru.py
+++ b/translate/misc/lru.py
@@ -18,9 +18,9 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
+import gc
 from collections import deque
 from weakref import WeakValueDictionary
-import gc
 
 
 class LRUCachingDict(WeakValueDictionary):
@@ -92,7 +92,7 @@ class LRUCachingDict(WeakValueDictionary):
     def __getitem__(self, key):
         value = WeakValueDictionary.__getitem__(self, key)
         # check boundaries to minimiza duplicate references
-        while len(self.queue) > 0  and self.queue[0][0] == key:
+        while len(self.queue) > 0 and self.queue[0][0] == key:
             # item at left end of queue pop it since it'll be appended
             # to right
             self.queue.popleft()
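The class being touched here pairs a `WeakValueDictionary` with a `deque` of strong references: only values still present in the queue (or referenced elsewhere) survive garbage collection. A much-simplified sketch of that idea, not the toolkit's actual `LRUCachingDict`:

```python
import gc
from collections import deque
from weakref import WeakValueDictionary

class TinyLRUDict(WeakValueDictionary):
    """The deque holds strong references to the most recently set values,
    so entries that fall off the queue (and have no other referents)
    become collectable and vanish from the weak mapping."""

    def __init__(self, maxsize=128):
        super().__init__()
        self.maxsize = maxsize
        self.queue = deque()

    def __setitem__(self, key, value):
        WeakValueDictionary.__setitem__(self, key, value)
        self.queue.append((key, value))   # strong reference keeps value alive
        while len(self.queue) > self.maxsize:
            self.queue.popleft()          # oldest entry becomes collectable
```

The real class additionally deduplicates queue entries on access, which is what the whitespace-only change in this hunk is adjusting.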
diff --git a/translate/misc/optrecurse.py b/translate/misc/optrecurse.py
index cd2c45a..b039dd5 100644
--- a/translate/misc/optrecurse.py
+++ b/translate/misc/optrecurse.py
@@ -18,20 +18,17 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-import re
-import sys
-import os.path
 import fnmatch
 import logging
-import traceback
 import optparse
-try:
-    from cStringIO import StringIO
-except ImportError:
-    from StringIO import StringIO
+import os.path
+import re
+import sys
+import traceback
+from cStringIO import StringIO
 
-from translate.misc import progressbar
 from translate import __version__
+from translate.misc import progressbar
 
 
 class ManPageOption(optparse.Option, object):
@@ -297,12 +294,13 @@ class RecursiveOptionParser(optparse.OptionParser, object):
         errorleveloption = optparse.Option(None, "--errorlevel",
                 dest="errorlevel", default="message",
                 choices=self.errorleveltypes, metavar="ERRORLEVEL",
-                help="show errorlevel as: %s" % \
+                help="show errorlevel as: %s" %
                      (", ".join(self.errorleveltypes)))
         self.define_option(errorleveloption)
 
     def getformathelp(self, formats):
         """Make a nice help string for describing formats..."""
+        formats = sorted(formats)
         if None in formats:
             formats = filter(lambda format: format is not None, formats)
         if len(formats) == 0:
@@ -468,7 +466,7 @@ class RecursiveOptionParser(optparse.OptionParser, object):
                 try:
                     self.warning("Output directory does not exist. Attempting to create")
                     os.mkdir(options.output)
-                except IOError, e:
+                except IOError as e:
                     self.error(optparse.OptionValueError("Output directory does not exist, attempt to create failed"))
             if isinstance(options.input, list):
                 inputfiles = self.recurseinputfilelist(options)
@@ -504,7 +502,7 @@ class RecursiveOptionParser(optparse.OptionParser, object):
                 fulloutputpath = self.getfulloutputpath(options, outputpath)
                 if options.recursiveoutput and outputpath:
                     self.checkoutputsubdir(options, os.path.dirname(outputpath))
-            except Exception, error:
+            except Exception as error:
                 if isinstance(error, KeyboardInterrupt):
                     raise
                 self.warning("Couldn't handle input file %s" %
@@ -514,7 +512,7 @@ class RecursiveOptionParser(optparse.OptionParser, object):
                 success = self.processfile(fileprocessor, options,
                                            fullinputpath, fulloutputpath,
                                            fulltemplatepath)
-            except Exception, error:
+            except Exception as error:
                 if isinstance(error, KeyboardInterrupt):
                     raise
                 self.warning("Error processing: input %s, output %s, template %s" %
diff --git a/translate/misc/ourdom.py b/translate/misc/ourdom.py
index a1bde04..7b677aa 100644
--- a/translate/misc/ourdom.py
+++ b/translate/misc/ourdom.py
@@ -27,8 +27,8 @@ as minidom.parseString, since the functionality provided here will not be in
 those objects.
 """
 
-from xml.dom import minidom
-from xml.dom import expatbuilder
+from xml.dom import expatbuilder, minidom
+
 
 # helper functions we use to do xml the way we want, used by modified
 # classes below
diff --git a/translate/misc/profiling.py b/translate/misc/profiling.py
deleted file mode 100644
index deeb71d..0000000
--- a/translate/misc/profiling.py
+++ /dev/null
@@ -1,122 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2008-2009 Zuza Software Foundation
-#
-# This file is part of Virtaal.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-
-def label(code):
-    if isinstance(code, str):
-        return ('~', 0, code)    # built-in functions ('~' sorts at the end)
-    else:
-        return '%s %s:%d' % (code.co_name, code.co_filename, code.co_firstlineno)
-
-
-class KCacheGrind(object):
-    def __init__(self, profiler):
-        self.data = profiler.getstats()
-        self.out_file = None
-
-    def output(self, out_file):
-        self.out_file = out_file
-        print >> out_file, 'events: Ticks'
-        self._print_summary()
-        for entry in self.data:
-            self._entry(entry)
-
-    def _print_summary(self):
-        max_cost = 0
-        for entry in self.data:
-            totaltime = int(entry.totaltime * 1000)
-            max_cost = max(max_cost, totaltime)
-        print >> self.out_file, 'summary: %d' % (max_cost,)
-
-    def _entry(self, entry):
-        out_file = self.out_file
-        code = entry.code
-        inlinetime = int(entry.inlinetime * 1000)
-        #print >> out_file, 'ob=%s' % (code.co_filename,)
-        if isinstance(code, str):
-            print >> out_file, 'fi=~'
-        else:
-            print >> out_file, 'fi=%s' % (code.co_filename,)
-        print >> out_file, 'fn=%s' % (label(code),)
-        if isinstance(code, str):
-            print >> out_file, '0 ', inlinetime
-        else:
-            print >> out_file, '%d %d' % (code.co_firstlineno, inlinetime)
-        # recursive calls are counted in entry.calls
-        if entry.calls:
-            calls = entry.calls
-        else:
-            calls = []
-        if isinstance(code, str):
-            lineno = 0
-        else:
-            lineno = code.co_firstlineno
-        for subentry in calls:
-            self._subentry(lineno, subentry)
-        print >> out_file
-
-    def _subentry(self, lineno, subentry):
-        out_file = self.out_file
-        code = subentry.code
-        totaltime = int(subentry.totaltime * 1000)
-        #print >> out_file, 'cob=%s' % (code.co_filename,)
-        print >> out_file, 'cfn=%s' % (label(code),)
-        if isinstance(code, str):
-            print >> out_file, 'cfi=~'
-            print >> out_file, 'calls=%d 0' % (subentry.callcount,)
-        else:
-            print >> out_file, 'cfi=%s' % (code.co_filename,)
-            print >> out_file, 'calls=%d %d' % (
-                subentry.callcount, code.co_firstlineno)
-        print >> out_file, '%d %d' % (lineno, totaltime)
-
-
-def profile_func(filename=None, mode='w+'):
-    """Function/method decorator that will cause only the decorated callable to
-    be profiled (with a :class:`KCacheGrind` profiler) and saved to the
-    specified file.
-
-    :type filename: str
-    :param filename: The filename to write the profile to. If not specified the
-                     decorated function's name is used, followed by
-                     ``_func.profile``.
-    :type mode: str
-    :param mode: The mode in which to open ``filename``. Default is ``w+``.
-    """
-    def proffunc(f):
-        def profiled_func(*args, **kwargs):
-            import cProfile
-            import logging
-
-            logging.info('Profiling function %s' % (f.__name__))
-
-            try:
-                profile_file = open(filename or '%s_func.profile' % (f.__name__), mode)
-                profiler = cProfile.Profile()
-                retval = profiler.runcall(f, *args, **kwargs)
-                k_cache_grind = KCacheGrind(profiler)
-                k_cache_grind.output(profile_file)
-                profile_file.close()
-            except IOError:
-                logging.exception(_("Could not open profile file '%(filename)s'") % {"filename": filename})
-
-            return retval
-
-        return profiled_func
-    return proffunc
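The deleted decorator wraps a callable with `cProfile.Profile.runcall` and then serializes the stats. The same decorator pattern can be sketched in modern Python with plain `pstats` reporting in place of the removed KCacheGrind output format:

```python
import cProfile
import functools
import pstats

def profile_func(func):
    """Decorator sketch: run the wrapped callable under cProfile and
    print the top entries by cumulative time."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        result = profiler.runcall(func, *args, **kwargs)
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)
        return result
    return wrapper

@profile_func
def work(n):
    return sum(i * i for i in range(n))
```

Unlike the removed version, this sketch prints to stdout rather than writing a `*_func.profile` file.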
diff --git a/translate/misc/progressbar.py b/translate/misc/progressbar.py
index 591980e..e8b7baa 100644
--- a/translate/misc/progressbar.py
+++ b/translate/misc/progressbar.py
@@ -97,7 +97,7 @@ class ProgressBar:
     def show(self, verbosemessage):
         """displays the progress bar"""
         # pylint: disable=W0613
-        print self
+        print(self)
 
 
 class MessageProgressBar(ProgressBar):
diff --git a/translate/misc/quote.py b/translate/misc/quote.py
index a93f5d6..448858b 100644
--- a/translate/misc/quote.py
+++ b/translate/misc/quote.py
@@ -22,9 +22,8 @@
 of delimiters"""
 
 import logging
-import htmlentitydefs
 
-from translate.misc.typecheck import accepts, returns
+from six.moves import html_entities
 
 
 def find_all(searchin, substr):
@@ -190,27 +189,66 @@ def extractwithoutquotes(source, startdelim, enddelim, escape=None,
     return (extracted, instring)
 
 
- at accepts(unicode)
- at returns(unicode)
-def htmlentityencode(source):
-    """encodes source using HTML entities e.g. © -> ©"""
+def _encode_entity_char(char, codepoint2name):
+    charnum = ord(char)
+    if charnum in codepoint2name:
+        return u"&%s;" % codepoint2name[charnum]
+    else:
+        return char
+
+def entityencode(source, codepoint2name):
+    """Encode ``source`` using entities from ``codepoint2name``.
+
+    :param unicode source: Source string to encode
+    :param dict codepoint2name: Dictionary mapping code points to entity names
+           (without the leading ``&`` or the trailing ``;``)
+    """
     output = u""
+    inentity = False
     for char in source:
-        charnum = ord(char)
-        if charnum in htmlentitydefs.codepoint2name:
-            output += u"&%s;" % htmlentitydefs.codepoint2name[charnum]
+        if char == "&":
+            inentity = True
+            possibleentity = ""
+            continue
+        if inentity:
+            if char == ";":
+                output += "&" + possibleentity + ";"
+                inentity = False
+            elif char == " ":
+                output += _encode_entity_char("&", codepoint2name) + \
+                          entityencode(possibleentity + char, codepoint2name)
+                inentity = False
+            else:
+                possibleentity += char
         else:
-            output += str(char)
+            output += _encode_entity_char(char, codepoint2name)
+    if inentity:
+        # Handle nonentities at end of string.
+        output += _encode_entity_char("&", codepoint2name) + \
+                  entityencode(possibleentity, codepoint2name)
+
     return output
 
 
- at accepts(unicode)
- at returns(unicode)
-def htmlentitydecode(source):
-    """decodes source using HTML entities e.g. © -> ©"""
+def _has_entity_end(source):
+    for char in source:
+        if char == ";":
+            return True
+        elif char == " ":
+            return False
+    return False
+
+def entitydecode(source, name2codepoint):
+    """Decode ``source`` using entities from ``name2codepoint``.
+
+    :param unicode source: Source string to decode
+    :param dict name2codepoint: Dictionary mapping entity names (without the
+           leading ``&`` or the trailing ``;``) to code points
+    """
     output = u""
     inentity = False
-    for char in source:
+    for i, char in enumerate(source):
+        char = source[i]
         if char == "&":
             inentity = True
             possibleentity = ""
@@ -218,8 +256,12 @@ def htmlentitydecode(source):
         if inentity:
             if char == ";":
                 if (len(possibleentity) > 0 and
-                    possibleentity in htmlentitydefs.name2codepoint):
-                    output += unichr(htmlentitydefs.name2codepoint[possibleentity])
+                    possibleentity in name2codepoint):
+                    entchar = unichr(name2codepoint[possibleentity])
+                    if entchar == u'&' and _has_entity_end(source[i+1:]):
+                        output += "&" + possibleentity + ";"
+                    else:
+                        output += entchar
                     inentity = False
                 else:
                     output += "&" + possibleentity + ";"
@@ -231,11 +273,28 @@ def htmlentitydecode(source):
                 possibleentity += char
         else:
             output += char
+    if inentity:
+        # Handle nonentities at end of string.
+        output += "&" + possibleentity
     return output
 
 
- at accepts(unicode)
- at returns(unicode)
+def htmlentityencode(source):
+    """Encode ``source`` using HTML entities e.g. © -> ``©``
+
+    :param unicode source: Source string to encode
+    """
+    return entityencode(source, html_entities.codepoint2name)
+
+
+def htmlentitydecode(source):
+    """Decode source using HTML entities e.g. ``©`` -> ©.
+
+    :param unicode source: Source string to decode
+    """
+    return entitydecode(source, html_entities.name2codepoint)
+
+
 def javapropertiesencode(source):
     """Encodes source in the escaped-unicode encoding used by Java
     .properties files
@@ -254,8 +313,6 @@ def javapropertiesencode(source):
     return output
 
 
- at accepts(unicode)
- at returns(unicode)
 def mozillapropertiesencode(source):
     """Encodes source in the escaped-unicode encoding used by Mozilla
     .properties files.
@@ -268,18 +325,40 @@ def mozillapropertiesencode(source):
             output += char
     return output
 
+def escapespace(char):
+    assert(len(char) == 1)
+    if char.isspace():
+        return u"\\u%04X" %(ord(char))
+    return char
+
+def mozillaescapemarginspaces(source):
+    """Escape leading and trailing spaces for Mozilla .properties files."""
+    if not source:
+        return u""
+
+    if len(source) == 1 and source.isspace():
+        # FIXME: This is hack for people using white-space to mark empty
+        # Mozilla strings translated, drop this once we have better way to
+        # handle this in Pootle.
+        return u""
+
+    if len(source) == 1:
+        return escapespace(source)
+    else:
+        return escapespace(source[0]) + source[1:-1] + escapespace(source[-1])
+
 propertyescapes = {
     # escapes that are self-escaping
     "\\": "\\", "'": "'", '"': '"',
     # control characters that we keep
     "f": "\f", "n": "\n", "r": "\r", "t": "\t",
-    }
+}
 
 controlchars = {
     # the reverse of the above...
     "\\": "\\\\",
     "\f": "\\f", "\n": "\\n", "\r": "\\r", "\t": "\\t",
-    }
+}
 
 
 def escapecontrols(source):
@@ -289,8 +368,6 @@ def escapecontrols(source):
     return source
 
 
- at accepts(unicode)
- at returns(unicode)
 def propertiesdecode(source):
     """Decodes source from the escaped-unicode encoding used by .properties
     files.
@@ -314,8 +391,7 @@ def propertiesdecode(source):
             # we just return the character, unescaped
             # if people want to escape them they can use escapecontrols
             return unichr(i)
-        else:
-            return "\\u%04x" % i
+        return "\\u%04x" % i
 
     while s < len(source):
         c = source[s]
@@ -342,16 +418,18 @@ def propertiesdecode(source):
             digits = 4
             x = 0
             for digit in range(digits):
-                x <<= 4
                 if s + digit >= len(source):
                     digits = digit
                     break
                 c = source[s + digit].lower()
-                if c.isdigit():
-                    x += ord(c) - ord('0')
-                elif c in "abcdef":
-                    x += ord(c) - ord('a') + 10
+                if c.isdigit() or c in "abcdef":
+                    x <<= 4
+                    if c.isdigit():
+                        x += ord(c) - ord('0')
+                    else:
+                        x += ord(c) - ord('a') + 10
                 else:
+                    digits = digit
                     break
             s += digits
             output += unichr2(x)
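The `six.moves.html_entities` module imported above resolves to `htmlentitydefs` on Python 2 and `html.entities` on Python 3; both expose the `codepoint2name`/`name2codepoint` tables that the new generalized `entityencode`/`entitydecode` take as parameters. The core per-character encoding step can be sketched as:

```python
from html.entities import codepoint2name

def encode_char(char):
    """Replace a single character with its named HTML entity, if any."""
    name = codepoint2name.get(ord(char))
    return "&%s;" % name if name else char

def naive_entityencode(text):
    # Unlike the toolkit's entityencode, this sketch makes no attempt to
    # leave entities already present in the input untouched; that is the
    # job of the inentity/possibleentity state machine in the diff above.
    return "".join(encode_char(c) for c in text)
```

The extra bookkeeping in the real function exists precisely so that an input such as `&amp;` is passed through rather than double-encoded to `&amp;amp;`.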
diff --git a/translate/misc/sparse.py b/translate/misc/sparse.py
index 8701b16..d6a9bc2 100644
--- a/translate/misc/sparse.py
+++ b/translate/misc/sparse.py
@@ -54,8 +54,8 @@ class ParserError(ValueError):
         """takes a message and the number of the token that caused the error"""
         tokenpos = parser.findtokenpos(tokennum)
         line, charpos = parser.getlinepos(tokenpos)
-        ValueError.__init__(self, "%s at line %d, char %d (token %r)" % \
-            (message, line, charpos, parser.tokens[tokennum]))
+        ValueError.__init__(self, "%s at line %d, char %d (token %r)" %
+                                  (message, line, charpos, parser.tokens[tokennum]))
         self.parser = parser
         self.tokennum = tokennum
 
diff --git a/translate/misc/test_multistring.py b/translate/misc/test_multistring.py
index f569ff7..d6dd0ac 100644
--- a/translate/misc/test_multistring.py
+++ b/translate/misc/test_multistring.py
@@ -2,8 +2,7 @@
 
 import pytest
 
-from translate.misc import multistring
-from translate.misc import test_autoencode
+from translate.misc import multistring, test_autoencode
 
 
 class TestMultistring(test_autoencode.TestAutoencode):
diff --git a/translate/misc/test_optrecurse.py b/translate/misc/test_optrecurse.py
index fc6e02c..a067860 100644
--- a/translate/misc/test_optrecurse.py
+++ b/translate/misc/test_optrecurse.py
@@ -16,5 +16,5 @@ class TestRecursiveOptionParser:
         dirname = os.path.join("some", "path", "to")
         fullpath = os.path.join(dirname, filename)
         root = os.path.join(dirname, name)
-        print fullpath
+        print(fullpath)
         assert self.parser.splitext(fullpath) == (root, extension)
diff --git a/translate/misc/test_quote.py b/translate/misc/test_quote.py
index e555a6d..75d57be 100644
--- a/translate/misc/test_quote.py
+++ b/translate/misc/test_quote.py
@@ -74,8 +74,18 @@ class TestEncoding:
         assert quote.mozillapropertiesencode(u"abcḓ") == u"abcḓ"
         assert quote.mozillapropertiesencode(u"abc\n") == u"abc\\n"
 
+    def test_escapespace(self):
+        assert quote.escapespace(u" ") == u"\\u0020"
+        assert quote.escapespace(u"\t") == u"\\u0009"
+
+    def test_mozillaescapemarginspaces(self):
+        assert quote.mozillaescapemarginspaces(u" ") == u""
+        assert quote.mozillaescapemarginspaces(u"A") == u"A"
+        assert quote.mozillaescapemarginspaces(u" abc ") == u"\\u0020abc\\u0020"
+        assert quote.mozillaescapemarginspaces(u"  abc ") == u"\\u0020 abc\\u0020"
+
     def test_mozilla_control_escapes(self):
-        """test that we do \uNNNN escapes for certain control characters instead of converting to UTF-8 characters"""
+        r"""test that we do \uNNNN escapes for certain control characters instead of converting to UTF-8 characters"""
         prefix, suffix = "bling", "blang"
         for control in (u"\u0005", u"\u0006", u"\u0007", u"\u0011"):
             string = prefix + control + suffix
@@ -87,6 +97,14 @@ class TestEncoding:
         assert quote.propertiesdecode(u"abc\u1E13") == u"abcḓ"
         assert quote.propertiesdecode(u"abc\N{LEFT CURLY BRACKET}") == u"abc{"
         assert quote.propertiesdecode(u"abc\\") == u"abc\\"
+        assert quote.propertiesdecode(u"abc\\") == u"abc\\"
+
+    def test_properties_decode_slashu(self):
+        assert quote.propertiesdecode(u"abc\u1e13") == u"abcḓ"
+        assert quote.propertiesdecode(u"abc\u0020") == u"abc "
+        # NOTE Java only accepts 4 digit unicode, Mozilla accepts two
+        # unfortunately, but it seems harmless to accept both.
+        assert quote.propertiesdecode("abc\u20") == u"abc "
 
     def _html_encoding_helper(self, pairs):
         for from_, to in pairs:
@@ -98,16 +116,25 @@ class TestEncoding:
         raw_encoded = [(u"€", u"€"), (u"©", u"©"), (u'"', u""")]
         self._html_encoding_helper(raw_encoded)
 
+    def test_htmlencoding_existing_entities(self):
+        """test that we don't mess existing entities"""
+        assert quote.htmlentityencode(u"&") == u"&"
+
     def test_htmlencoding_passthrough(self):
         """test that we can encode and decode things that look like HTML entities but aren't"""
-        raw_encoded = [(u"copy quot", u"copy quot"),      # Raw text should have nothing done to it.
-                      ]
+        raw_encoded = [(u"copy quot", u"copy quot"),]     # Raw text should have nothing done to it.
         self._html_encoding_helper(raw_encoded)
 
     def test_htmlencoding_nonentities(self):
         """tests to give us full coverage"""
         for encoded, real in [(u"Some &; text", u"Some &; text"),
                               (u"&copy ", u"&copy "),
-                              (u"&rogerrabbit;", u"&rogerrabbit;"),
-                             ]:
+                              (u"&copy", u"&copy"),
+                              (u"&rogerrabbit;", u"&rogerrabbit;"),]:
             assert quote.htmlentitydecode(encoded) == real
+
+        for decoded, real in [(u"Some &; text", u"Some &; text"),
+                              (u"&copy ", u"&copy "),
+                              (u"&copy", u"&copy"),
+                              (u"&rogerrabbit;", u"&rogerrabbit;"),]:
+            assert quote.htmlentityencode(decoded) == real
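The new `test_properties_decode_slashu` cases above exercise the relaxed digit loop that this commit adds to `propertiesdecode`: up to four hex digits are consumed after `\u`, stopping early at the first non-hex character so that Mozilla-style two-digit escapes also decode. That loop can be sketched in isolation as:

```python
def decode_u_escape(rest):
    """Consume up to four hex digits following a literal backslash-u,
    stopping at the first non-hex character, so both "\\u20" and
    "\\u0020" decode to a space. A simplified sketch of the relaxed
    digit loop; returns (character, digits consumed)."""
    value = 0
    consumed = 0
    for ch in rest[:4]:
        if ch.lower() in "0123456789abcdef":
            value = (value << 4) + int(ch, 16)
            consumed += 1
        else:
            break
    return chr(value), consumed
```

As the NOTE in the diff says, Java only accepts four-digit escapes, but accepting shorter ones as well is considered harmless here.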
diff --git a/translate/misc/textwrap.py b/translate/misc/textwrap.py
deleted file mode 100644
index b66471e..0000000
--- a/translate/misc/textwrap.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Text wrapping and filling.
-"""
-
-# Copyright (C) 1999-2001 Gregory P. Ward.
-# Copyright (C) 2002, 2003 Python Software Foundation.
-# Written by Greg Ward <gward at python.net>
-# 2013 F Wolff
-
-import re
-
-
-__all__ = ['TextWrapper', 'wrap', 'fill']
-
-
-class TextWrapper:
-    """
-    Object for wrapping/filling text.  The public interface consists of
-    the wrap() and fill() methods; the other methods are just there for
-    subclasses to override in order to tweak the default behaviour.
-    If you want to completely replace the main wrapping algorithm,
-    you'll probably have to override _wrap_chunks().
-
-    Several instance attributes control various aspects of wrapping:
-      width (default: 70)
-        the maximum width of wrapped lines (unless break_long_words
-        is false)
-      break_long_words (default: true)
-        Break words longer than 'width'.  If false, those words will not
-        be broken, and some lines might be longer than 'width'.
-    """
-
-    # This funky little regex is just the trick for splitting
-    # text up into word-wrappable chunks.  E.g.
-    #   "Hello there -- you goof-ball, use the -b option!"
-    # splits into
-    #   Hello /there /--/ /you /goof-/ball,/ /use/ /the /-b/ /option!
-    # (after stripping out empty strings).
-    wordsep_re = re.compile(
-        r'(\s+|'                                  # any whitespace
-        r'[\w\!"\'\&\.\,\?]+\s+|'                 # space should go with a word
-        r'[^\s\w]*\w+[a-zA-Z]-(?=\w+[a-zA-Z])|'   # hyphenated words
-        r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))')   # em-dash
-
-    def __init__(self,
-                 width=70,
-                 break_long_words=True):
-        self.width = width
-        self.break_long_words = break_long_words
-
-
-    # -- Private methods -----------------------------------------------
-    # (possibly useful for subclasses to override)
-
-    def _split(self, text):
-        """_split(text : string) -> [string]
-
-        Split the text to wrap into indivisible chunks.  Chunks are
-        not quite the same as words; see wrap_chunks() for full
-        details.  As an example, the text
-          Look, goof-ball -- use the -b option!
-        breaks into the following chunks:
-          'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ',
-          'use', ' ', 'the', ' ', '-b', ' ', 'option!'
-        """
-        chunks = self.wordsep_re.split(text)
-        chunks = filter(None, chunks)
-        return chunks
-
-    def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
-        """_handle_long_word(chunks : [string],
-                             cur_line : [string],
-                             cur_len : int, width : int)
-
-        Handle a chunk of text (most likely a word, not whitespace) that
-        is too long to fit in any line.
-        """
-        space_left = max(width - cur_len, 1)
-
-        # If we're allowed to break long words, then do so: put as much
-        # of the next chunk onto the current line as will fit.
-        if self.break_long_words:
-            cur_line.append(reversed_chunks[-1][:space_left])
-            reversed_chunks[-1] = reversed_chunks[-1][space_left:]
-
-        # Otherwise, we have to preserve the long word intact.  Only add
-        # it to the current line if there's nothing already there --
-        # that minimizes how much we violate the width constraint.
-        elif not cur_line:
-            cur_line.append(reversed_chunks.pop())
-
-        # If we're not allowed to break long words, and there's already
-        # text on the current line, do nothing.  Next time through the
-        # main loop of _wrap_chunks(), we'll wind up here again, but
-        # cur_len will be zero, so the next line will be entirely
-        # devoted to the long word that we can't handle right now.
-
-    def _wrap_chunks(self, chunks):
-        """_wrap_chunks(chunks : [string]) -> [string]
-
-        Wrap a sequence of text chunks and return a list of lines of
-        length 'self.width' or less.  (If 'break_long_words' is false,
-        some lines may be longer than this.)  Chunks correspond roughly
-        to words and the whitespace between them: each chunk is
-        indivisible (modulo 'break_long_words'), but a line break can
-        come between any two chunks.  Chunks should not have internal
-        whitespace; ie. a chunk is either all whitespace or a "word".
-        Whitespace chunks will be removed from the beginning and end of
-        lines, but apart from that whitespace is preserved.
-        """
-        lines = []
-        if self.width <= 0:
-            raise ValueError("invalid width %r (must be > 0)" % self.width)
-
-        # Arrange in reverse order so items can be efficiently popped
-        # from a stack of chucks.
-        chunks.reverse()
-
-        while chunks:
-
-            # Start the list of chunks that will make up the current line.
-            # cur_len is just the length of all the chunks in cur_line.
-            cur_line = []
-            cur_len = 0
-
-            # Maximum width for this line.
-            width = self.width
-
-            while chunks:
-                l = len(chunks[-1])
-
-                # Can at least squeeze this chunk onto the current line.
-                if cur_len + l <= width:
-                    cur_line.append(chunks.pop())
-                    cur_len += l
-
-                # Nope, this line is full.
-                else:
-                    break
-
-            # The current line is full, and the next chunk is too big to
-            # fit on *any* line (not just this one).
-            if chunks and len(chunks[-1]) > width:
-                self._handle_long_word(chunks, cur_line, cur_len, width)
-
-            # Convert current line back to a string and store it in list
-            # of all lines (return value).
-            if cur_line:
-                lines.append(''.join(cur_line))
-
-        return lines
-
-
-    # -- Public interface ----------------------------------------------
-
-    def wrap(self, text):
-        """wrap(text : string) -> [string]
-
-        Reformat the single paragraph in 'text' so it fits in lines of
-        no more than 'self.width' columns, and return a list of wrapped
-        lines.  Tabs in 'text' are expanded with string.expandtabs(),
-        and all other whitespace characters (including newline) are
-        converted to space.
-        """
-        chunks = self._split(text)
-        return self._wrap_chunks(chunks)
-
-    def fill(self, text):
-        """fill(text : string) -> string
-
-        Reformat the single paragraph in 'text' to fit in lines of no
-        more than 'self.width' columns, and return a new string
-        containing the entire wrapped paragraph.
-        """
-        return "\n".join(self.wrap(text))
-
-
-# -- Convenience interface ---------------------------------------------
-
-def wrap(text, width=70, **kwargs):
-    """Wrap a single paragraph of text, returning a list of wrapped lines.
-
-    Reformat the single paragraph in 'text' so it fits in lines of no
-    more than 'width' columns, and return a list of wrapped lines.  By
-    default, tabs in 'text' are expanded with string.expandtabs(), and
-    all other whitespace characters (including newline) are converted to
-    space.  See TextWrapper class for available keyword args to customize
-    wrapping behaviour.
-    """
-    w = TextWrapper(width=width, **kwargs)
-    return w.wrap(text)
-
-def fill(text, width=70, **kwargs):
-    """Fill a single paragraph of text, returning a new string.
-
-    Reformat the single paragraph in 'text' to fit in lines of no more
-    than 'width' columns, and return a new string containing the entire
-    wrapped paragraph.  As with wrap(), tabs are expanded and other
-    whitespace characters converted to space.  See TextWrapper class for
-    available keyword args to customize wrapping behaviour.
-    """
-    w = TextWrapper(width=width, **kwargs)
-    return w.fill(text)
diff --git a/translate/misc/typecheck/__init__.py b/translate/misc/typecheck/__init__.py
deleted file mode 100644
index 334ace0..0000000
--- a/translate/misc/typecheck/__init__.py
+++ /dev/null
@@ -1,1559 +0,0 @@
-__all__ = ['accepts', 'returns', 'yields', 'TypeCheckError', 'Length', 'Empty'
-          ,'TypeSignatureError', 'And', 'Any', 'Class', 'Exact', 'HasAttr'
-          ,'IsAllOf', 'IsCallable', 'IsIterable', 'IsNoneOf', 'IsOneOf'
-          ,'IsOnlyOneOf', 'Not', 'Or', 'Self', 'Xor', 'YieldSeq'
-          ,'register_type', 'is_registered_type', 'unregister_type'
-          ,'Function']
-
-import inspect
-import types
-
-from types import GeneratorType, FunctionType, MethodType, ClassType, TypeType
-
-# Controls whether typechecking is on (True) or off (False)
-enable_checking = True
-
-# Pretty little wrapper function around __typecheck__
-def check_type(type, func, val):
-    type.__typecheck__(func, val)
-
-### Internal exception classes (these MUST NOT get out to the user)
-### typecheck_{args,return,yield} should catch these and convert them to
-### appropriate Type{Check,Signature}Error instances
-
-# We can't inherit from object because raise doesn't like new-style classes
-# We can't use super() because we can't inherit from object
-class _TC_Exception(Exception):
-    def error_message(self):
-        raise NotImplementedError("Incomplete _TC_Exception subclass (%s)" % str(self.__class__))
-        
-    def format_bad_object(self, bad_object):
-        return ("for %s, " % str(bad_object), self)
-
-class _TC_LengthError(_TC_Exception):
-    def __init__(self, wrong, right=None):
-        _TC_Exception.__init__(self)
-    
-        self.wrong = wrong
-        self.right = right
-        
-    def error_message(self):
-        m = None
-        if self.right is not None:
-            m = ", expected %d" % self.right
-        return "length was %d%s" % (self.wrong, m or "")
-        
-class _TC_TypeError(_TC_Exception):
-    def __init__(self, wrong, right):
-        _TC_Exception.__init__(self)
-    
-        self.wrong = calculate_type(wrong)
-        self.right = right
-        
-    def error_message(self):
-        return "expected %s, got %s" % (self.right, self.wrong)
-
-class _TC_NestedError(_TC_Exception):
-    def __init__(self, inner_exception):
-        self.inner = inner_exception
-    
-    def error_message(self):
-        try:
-            return ", " + self.inner.error_message()
-        except:
-            print "'%s'" % self.inner.message
-            raw_input()
-            raise
-
-class _TC_IndexError(_TC_NestedError):
-    def __init__(self, index, inner_exception):
-        _TC_NestedError.__init__(self, inner_exception)
-    
-        self.index = index
-        
-    def error_message(self):
-        return ("at index %d" % self.index) + _TC_NestedError.error_message(self)
-
-# _TC_DictError exists as a wrapper around dict-related exceptions.
-# It provides a single place to sort the bad dictionary's keys in the error
-# message.
-class _TC_DictError(_TC_NestedError):
-    def format_bad_object(self, bad_object):
-        message = "for {%s}, " % ', '.join(["%s: %s" % (repr(k), repr(bad_object[k])) for k in sorted(bad_object.keys())])
-        
-        if not isinstance(self.inner, _TC_LengthError):
-            return (message, self)
-        return (message, self.inner)
-        
-    def error_message(self):
-        raise NotImplementedError("Incomplete _TC_DictError subclass: " + str(self.__class__))
-
-class _TC_KeyError(_TC_DictError):
-    def __init__(self, key, inner_exception):
-        _TC_NestedError.__init__(self, inner_exception)
-        
-        self.key = key
-
-    def error_message(self):
-        return ("for key %s" % repr(self.key)) + _TC_NestedError.error_message(self)
-        
-class _TC_KeyValError(_TC_KeyError):
-    def __init__(self, key, val, inner_exception):
-        _TC_KeyError.__init__(self, key, inner_exception)
-
-        self.val = val
-        
-    def error_message(self):
-        return ("at key %s, value %s" % (repr(self.key), repr(self.val))) + _TC_NestedError.error_message(self)
-        
-class _TC_GeneratorError(_TC_NestedError):
-    def __init__(self, yield_no, inner_exception):
-        _TC_NestedError.__init__(self, inner_exception)
-        
-        self.yield_no = yield_no
-        
-    def error_message(self):
-        raise RuntimeError("_TC_GeneratorError.message should never be called")
-        
-    def format_bad_object(self, bad_object):
-        bad_obj, start_message = self.inner.format_bad_object(bad_object)
-        message = "At yield #%d: %s" % (self.yield_no, bad_obj)
-        return (message, start_message)
-
-### These next three exceptions exist to give HasAttr better error messages     
-class _TC_AttrException(_TC_Exception):
-    def __init__(self, attr):
-        _TC_Exception.__init__(self, attr)
-        
-        self.attr = attr
-
-class _TC_AttrError(_TC_AttrException, _TC_NestedError):
-    def __init__(self, attr, inner_exception):
-        _TC_AttrException.__init__(self, attr)
-        _TC_NestedError.__init__(self, inner_exception)
-        
-    def error_message(self):
-        return ("as for attribute %s" % self.attr) + _TC_NestedError.error_message(self)
-        
-class _TC_MissingAttrError(_TC_AttrException):
-    def error_message(self):
-        return "missing attribute %s" % self.attr
-
-# This is like _TC_LengthError for YieldSeq     
-class _TC_YieldCountError(_TC_Exception):
-    def __init__(self, expected):
-        _TC_Exception.__init__(self, expected)
-        
-        self.expected = expected
-        
-    def format_bad_object(self, bad_object):
-        return ("", self)
-        
-    def error_message(self):
-        plural = "s"
-        if self.expected == 1:
-            plural = ""
-        
-        return "only expected the generator to yield %d time%s" % (self.expected, plural)
-
-# This exists to provide more detailed error messages about why a given
-# Xor() assertion failed
-class _TC_XorError(_TC_NestedError):
-    def __init__(self, matched_conds, inner_exception):
-        assert matched_conds in (0, 2)
-        assert isinstance(inner_exception, _TC_TypeError)
-        
-        _TC_Exception.__init__(self, matched_conds, inner_exception)
-        _TC_NestedError.__init__(self, inner_exception)
-        self.matched_conds = matched_conds
-        
-    def error_message(self):
-        if self.matched_conds == 0:
-            m = "neither assertion"
-        else:
-            m = "both assertions"
-    
-        return _TC_NestedError.error_message(self) + " (matched %s)" % m
-        
-class _TC_FunctionError(_TC_Exception):
-    def __init__(self, checking_func, obj):
-        self.checking_func = checking_func
-        self.rejected_obj = obj
-        
-    def error_message(self):
-        return " was rejected by %s" % self.checking_func
-        
-    def format_bad_object(self, bad_object):
-        return (str(bad_object), self)
-        
-class _TC_ExactError(_TC_Exception):
-    def __init__(self, wrong, right):
-        self.wrong = wrong
-        self.right = right
-        
-    def error_message(self):
-        return "expected %s, got %s" % (self.right, self.wrong)
-
-### The following exist to provide detailed TypeSignatureErrors
-class _TS_Exception(Exception):
-    def error_message(self):
-        raise NotImplementedError("Incomplete _TS_Exception subclass (%s)" % str(self.__class__))
-
-# This is used when there was an error related to an auto-unpacked tuple
-# in the function's signature
-class _TS_TupleError(_TS_Exception):
-    def __init__(self, parameters, types):
-        parameters = _rec_tuple(parameters)
-        types = _rec_tuple(types)
-        _TS_Exception.__init__(self, parameters, types)
-        
-        self.parameters = parameters
-        self.types = types
-        
-    def error_message(self):
-        return "the signature type %s does not match %s" % (str(self.types), str(self.parameters))
-        
-class _TS_ExtraKeywordError(_TS_Exception):
-    def __init__(self, keyword):
-        _TS_Exception.__init__(self, keyword)
-        
-        self.keyword = keyword
-        
-    def error_message(self):
-        return "the keyword '%s' in the signature is not in the function" % self.keyword
-        
-class _TS_ExtraPositionalError(_TS_Exception):
-    def __init__(self, type):
-        _TS_Exception.__init__(self, type)
-        
-        self.type = type
-        
-    def error_message(self):
-        return "an extra positional type has been supplied"
-    
-class _TS_MissingTypeError(_TS_Exception):
-    def __init__(self, parameter):
-        _TS_Exception.__init__(self, parameter)
-        
-        self.parameter = parameter
-        
-    def error_message(self):
-        return "parameter '%s' lacks a type" % self.parameter
-
-# If the user has given a keyword parameter a type both positionally and
-# with a keyword argument, this will be raised      
-class _TS_TwiceTypedError(_TS_Exception):
-    def __init__(self, parameter, kw_type, pos_type):
-        _TS_Exception.__init__(self, parameter, kw_type, pos_type)
-        
-        self.parameter = parameter
-        self.kw_type = kw_type
-        self.pos_type = pos_type
-        
-    def error_message(self):
-        return "parameter '%s' is provided two types (%s and %s)" % (self.parameter, str(self.kw_type), str(self.pos_type))
-
-### The following functions are the way new type handlers are registered
-### The Type function will iterate over all registered type handlers;
-### the first handler to return a non-None value is considered the winner
-#########################################################################
-
-_hooks = ("__typesig__", "__startchecking__", "__stopchecking__", "__switchchecking__")
-
-_registered_types = set()
-_registered_hooks = dict([(_h, set()) for _h in _hooks])
-
-def _manage_registration(add_remove, reg_type):
-    if not isinstance(reg_type, (types.ClassType, types.TypeType)):
-        raise ValueError("registered types must be classes or types")
-    
-    valid = False
-    for hook in _hooks:
-        if hasattr(reg_type, hook):
-            getattr(_registered_hooks[hook], add_remove)(reg_type)
-            valid = True
-
-    if valid:
-        getattr(_registered_types, add_remove)(reg_type)
-    else:
-        raise ValueError("registered types must have at least one of the following methods: " + ", ".join(_hooks))
-
-def register_type(reg_type):
-    _manage_registration('add', reg_type)
-    
-def unregister_type(reg_type):
-    _manage_registration('remove', reg_type)
-    
-def is_registered_type(reg_type):
-    return reg_type in _registered_types
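The registration scheme above accepts a type only if it defines at least one of the known hook methods, and `Type()` later dispatches by walking the registered handlers until one returns non-None. The same pattern in a minimal standalone sketch (names here are hypothetical, not the module's API):

```python
_registry = set()

def register(handler):
    """Accept a handler only if it defines the dispatch hook."""
    if not hasattr(handler, "__typesig__"):
        raise ValueError("handler must define __typesig__")
    _registry.add(handler)

class IntHandler:
    @staticmethod
    def __typesig__(obj):
        # Return a truthy checker for objects we understand, else None
        if obj is int:
            return "int-checker"

register(IntHandler)

# Dispatch: the first handler returning a non-None value wins
assert next(h.__typesig__(int) for h in _registry) == "int-checker"
```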
-
-### Factory function; this is what should be used to dispatch
-### type-checker class requests
-
-def Type(obj):
-    # Note that registered types cannot count on being run in a certain order;
-    # their __typesig__ methods must be sufficiently flexible to account for
-    # this
-    for reg_type in _registered_hooks['__typesig__']:
-        v = reg_type.__typesig__(obj)
-        if v is not None:
-            return v
-
-    raise AssertionError("Object is of type '%s'; not a type" % str(type(obj)))
-
-def __checking(start_stop, *args):
-    attr = '__%schecking__' % start_stop
-
-    for reg_type in _registered_hooks[attr]:
-        getattr(reg_type, attr)(*args)
-    
-def start_checking(function):
-    __checking('start', function)
-
-def stop_checking(function):
-    __checking('stop', function)
-    
-def switch_checking(from_func, to_func):
-    for reg_type in _registered_types:
-        if hasattr(reg_type, '__switchchecking__'):
-            getattr(reg_type, '__switchchecking__')(from_func, to_func)
-        else:
-            if hasattr(reg_type, '__stopchecking__'):
-                getattr(reg_type, '__stopchecking__')(from_func)
-            if hasattr(reg_type, '__startchecking__'):
-                getattr(reg_type, '__startchecking__')(to_func)
-
-### Deduce the type of a data structure
-###
-### XXX: Find a way to allow registered utility classes
-### to hook into this
-def calculate_type(obj):
-    if isinstance(obj, types.InstanceType):
-        return obj.__class__
-    elif isinstance(obj, dict):
-        if len(obj) == 0:
-            return {}
-
-        key_types = set()
-        val_types = set()
-        
-        for (k,v) in obj.items():
-            key_types.add( calculate_type(k) )
-            val_types.add( calculate_type(v) )
-            
-        if len(key_types) == 1:
-            key_types = key_types.pop()
-        else:
-            key_types = Or(*key_types)
-            
-        if len(val_types) == 1:
-            val_types = val_types.pop()
-        else:
-            val_types = Or(*val_types)
-            
-        return {key_types: val_types}
-    elif isinstance(obj, tuple):
-        return tuple([calculate_type(t) for t in obj])
-    elif isinstance(obj, list):
-        length = len(obj)
-        if length == 0:
-            return []
-        obj = [calculate_type(o) for o in obj]
-
-        partitions = [1]
-        partitions.extend([i for i in range(2, int(length/2)+1) if length%i==0])
-        partitions.append(length)
-
-        def evaluate(items_per):
-            parts = length / items_per
-
-            for i in range(0, parts):
-                for j in range(0, items_per):
-                    if obj[items_per * i + j] != obj[j]:
-                        raise StopIteration
-            return obj[0:items_per]
-
-        for items_per in partitions:
-            try:
-                return evaluate(items_per)
-            except StopIteration:
-                continue
-    else:
-        return type(obj)
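The list branch of `calculate_type` searches the divisors of the list's length for the shortest repeating pattern of element types. The same idea as a standalone sketch (a simplification: the real function recurses into nested containers and merges alternatives with `Or`):

```python
def deduce_list_pattern(obj):
    """Return the shortest repeating pattern of element types in a list,
    mirroring the partition search in calculate_type (sketch only)."""
    types_seq = [type(o) for o in obj]
    length = len(types_seq)
    if length == 0:
        return []
    # Candidate pattern lengths: divisors of the list length, smallest first
    for items_per in (d for d in range(1, length + 1) if length % d == 0):
        pattern = types_seq[:items_per]
        if all(types_seq[i] == pattern[i % items_per] for i in range(length)):
            return pattern
    return types_seq

# [5, 5.0, 6, 6.0] alternates int/float, so the pattern is [int, float]
assert deduce_list_pattern([5, 5.0, 6, 6.0]) == [int, float]
# A homogeneous list collapses to a single-element pattern
assert deduce_list_pattern([1, 2, 3]) == [int]
```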
-
-### The following classes are the work-horses of the typechecker
-
-# The base class for all the other utility classes
-class CheckType(object):
-    def __repr__(self):
-        return type(self).name + '(' + ', '.join(sorted(repr(t) for t in self._types)) + ')'
-
-    __str__ = __repr__
-
-    def __eq__(self, other):
-        return not self != other
-        
-    def __ne__(self, other):
-        return not self == other
-
-    def __hash__(self):
-        raise NotImplementedError("Incomplete CheckType subclass: %s" % self.__class__)
-    
-    def __typecheck__(self, func, obj):
-        raise NotImplementedError("Incomplete CheckType subclass: %s" % self.__class__)
-    
-    @classmethod
-    def __typesig__(cls, obj):
-        if isinstance(obj, CheckType):
-            return obj
-            
-class Single(CheckType):
-    name = "Single"
-
-    def __init__(self, type):
-        if not isinstance(type, (types.ClassType, types.TypeType)):
-            raise TypeError("Cannot type-check a %s" % type(type))
-        else:
-            self.type = type
-            
-        self._types = [self.type]
-
-    def __typecheck__(self, func, to_check):
-        if not isinstance(to_check, self.type):
-            raise _TC_TypeError(to_check, self.type)
-        
-    def __eq__(self, other):
-        if other.__class__ is not self.__class__:
-            return False
-        return self.type == other.type
-        
-    def __hash__(self):
-        return hash(str(hash(self.__class__)) + str(hash(self.type)))
-    
-    # XXX Is this really a good idea?
-    # Removing this only breaks 3 tests; that seems suspiciously low
-    def __repr__(self):
-        return repr(self.type)
-        
-    @classmethod
-    def __typesig__(cls, obj):
-        if isinstance(obj, (types.ClassType, types.TypeType)):
-            return Single(obj)
-
-### Provide a way to enforce the empty-ness of iterators    
-class Empty(Single):
-    name = "Empty"
-    
-    def __init__(self, type):
-        if not hasattr(type, '__len__'):
-            raise TypeError("Can only assert emptiness for types with __len__ methods")
-        
-        Single.__init__(self, type)
-
-    def __typecheck__(self, func, to_check):
-        Single.__typecheck__(self, func, to_check)
-        
-        if len(to_check) > 0:
-            err = _TC_LengthError(len(to_check), 0)
-            if isinstance(to_check, dict):
-                raise _TC_DictError(err)
-            raise err
-
-class Dict(CheckType):
-    name = "Dict"
-
-    def __init__(self, key, val):
-        self.__check_key = Type(key)
-        self.__check_val = Type(val)
-        
-        self.type = {key: val}
-        self._types = [key, val]
-        
-    def __typecheck__(self, func, to_check):
-        if not isinstance(to_check, types.DictType):
-            raise _TC_TypeError(to_check, self.type)
-        
-        for (k, v) in to_check.items():
-            # Check the key
-            try:
-                check_type(self.__check_key, func, k)
-            except _TC_Exception, inner:
-                raise _TC_KeyError(k, inner)
-
-            # Check the value
-            try:
-                check_type(self.__check_val, func, v)
-            except _TC_Exception, inner:
-                raise _TC_KeyValError(k, v, inner)
-        
-    def __eq__(self, other):
-        if other.__class__ is not self.__class__:
-            return False
-        return self.type == other.type
-        
-    def __hash__(self):
-        cls = self.__class__
-        key = self.__check_key
-        val = self.__check_val
-        
-        def strhash(obj):
-            return str(hash(obj))
-    
-        return hash(''.join(map(strhash, [cls, key, val])))
-    
-    @classmethod
-    def __typesig__(cls, obj):
-        if isinstance(obj, dict):
-            if len(obj) == 0:
-                return Empty(dict)
-            return Dict(obj.keys()[0], obj.values()[0])
-
-### Provide typechecking for the built-in list() type
-class List(CheckType):
-    name = "List"
-
-    def __init__(self, *type):
-        self._types = [Type(t) for t in type]
-        self.type = [t.type for t in self._types]
-
-    def __typecheck__(self, func, to_check):
-        if not isinstance(to_check, list):
-            raise _TC_TypeError(to_check, self.type)
-        if len(to_check) % len(self._types):
-            raise _TC_LengthError(len(to_check))
-        
-        # lists can be patterned, meaning that [int, float]
-        # requires that the to-be-checked list contain an alternating
-        # sequence of integers and floats. The pattern must be completed
-        # (e.g, [5, 5.0, 6, 6.0] but not [5, 5.0, 6]) for the list to
-        # typecheck successfully.
-        #
-        # A list with a single type, [int], is a sub-case of patterned
-        # lists
-        #
-        # XXX: Investigate speed increases by special-casing single-typed
-        # lists 
-        pat_len = len(self._types)
-        type_tuples = [(i, val, self._types[i % pat_len]) for (i, val)
-                    in enumerate(to_check)]
-        for (i, val, type) in type_tuples:
-            try:
-                check_type(type, func, val)
-            except _TC_Exception, e:
-                raise _TC_IndexError(i, e)
-        
-    def __eq__(self, other):
-        if other.__class__ is not self.__class__:
-            return False
-            
-        if len(self._types) != len(other._types):
-            return False
-        
-        for (s, o) in zip(self._types, other._types):
-            if s != o:
-                return False
-        return True
-        
-    def __hash__(self):
-        def strhash(obj):
-            return str(hash(obj))
-            
-        return hash(''.join(map(strhash, [self.__class__] + self._types)))
-    
-    @classmethod    
-    def __typesig__(cls, obj):
-        if isinstance(obj, list):
-            if len(obj) == 0:
-                return Empty(list)
-            return List(*obj)
-
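The patterned-list rule described in the comments of `List.__typecheck__` (an alternating sequence that must complete, enforced by the modulo length check) can be shown with a standalone helper. This is a hypothetical illustration, not part of the module:

```python
def check_patterned_list(values, pattern):
    """Sketch of List.__typecheck__'s pattern rule: [int, float] requires
    the list to alternate ints and floats, and the pattern must complete."""
    if len(values) % len(pattern):
        return False  # incomplete pattern, analogous to _TC_LengthError
    return all(isinstance(v, pattern[i % len(pattern)])
               for i, v in enumerate(values))

# [5, 5.0, 6, 6.0] completes the [int, float] pattern twice
assert check_patterned_list([5, 5.0, 6, 6.0], [int, float])
# [5, 5.0, 6] leaves the pattern incomplete, so it fails
assert not check_patterned_list([5, 5.0, 6], [int, float])
```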
-### Provide typechecking for the built-in tuple() class
-class Tuple(List):
-    name = "Tuple"
-
-    def __init__(self, *type):
-        List.__init__(self, *type)
-        
-        self.type = tuple(self.type)
-
-    def __typecheck__(self, func, to_check):
-        # Note that tuples of varying length (e.g., (int, int) and (int, int, int))
-        # are separate types, not merely differences in length like lists
-        if not isinstance(to_check, types.TupleType) or len(to_check) != len(self._types):
-            raise _TC_TypeError(to_check, self.type)
-
-        for (i, (val, type)) in enumerate(zip(to_check, self._types)):
-            try:
-                check_type(type, func, val)
-            except _TC_Exception, inner:
-                raise _TC_IndexError(i, inner)
-        
-    @classmethod
-    def __typesig__(cls, obj):
-        if isinstance(obj, tuple):
-            return Tuple(*obj)
-            
-class TypeVariables(CheckType):
-    # This is a stack of {typevariable -> type} mappings
-    # It is intentional that it is class-wide; it maintains
-    # the mappings of the outer functions if we descend into
-    # nested typechecked functions
-    __mapping_stack = []
-
-    # This is the {typevariable -> type} mapping for the function
-    # currently being checked
-    __active_mapping = None
-
-    # This dict maps generators to their mappings
-    __gen_mappings = {}
-
-    def __init__(self, name):
-        self.type = name
-
-    def __str__(self):
-        return "TypeVariable(%s)" % self.type
-
-    __repr__ = __str__
-
-    def __hash__(self):
-        return hash(''.join([str(o) for o in self.__class__
-                                           , hash(type(self.type))
-                                           , hash(self.type)]))
-
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return type(self.type) is type(other.type) and self.type == other.type
-
-    def __typecheck__(self, func, to_check):
-        name = self.type
-        if isinstance(func, GeneratorType):
-            active = self.__class__.__gen_mappings[func]
-        else:
-            active = self.__class__.__active_mapping
-
-        # We have to do this because u'a' == 'a'
-        lookup = (name, type(name))
-        if lookup in active:
-            check_type(active[lookup], func, to_check)
-        else:
-            # This is the first time we've encountered this
-            # typevariable for this function call.
-            #
-            # In this case, we automatically approve the object
-            active[lookup] = Type(calculate_type(to_check))
-
-    @classmethod
-    def __typesig__(cls, obj):
-        if isinstance(obj, basestring):
-            return cls(obj)
-
-    @classmethod
-    def __startchecking__(cls, func):
-        if isinstance(func, GeneratorType):
-            cls.__gen_mappings.setdefault(func, {})
-        elif isinstance(func, FunctionType):
-            cls.__mapping_stack.append(cls.__active_mapping)
-            cls.__active_mapping = {}
-        else:
-            raise TypeError(func)
-
-    @classmethod
-    def __switchchecking__(cls, from_func, to_func):
-        if isinstance(from_func, FunctionType):
-            if isinstance(to_func, GeneratorType):
-                cls.__gen_mappings[to_func] = cls.__active_mapping
-                cls.__stopchecking__(from_func)
-            elif isinstance(to_func, FunctionType):
-                cls.__stopchecking__(from_func)
-                cls.__startchecking__(to_func)
-            else:
-                raise TypeError(to_func)
-        else:
-            raise TypeError(from_func)
-
-    @classmethod
-    def __stopchecking__(cls, func):
-        if isinstance(func, GeneratorType):
-            del cls.__gen_mappings[func]
-        elif isinstance(func, FunctionType):
-            cls.__active_mapping = cls.__mapping_stack.pop()
-        else:
-            raise TypeError(func)
-            
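`TypeVariables` implements first-use binding: the first value seen for a named variable within a call fixes that variable's type, and later uses must match it. A minimal sketch of the idea (standalone, with hypothetical names; the real class also manages per-generator and nested-call mappings):

```python
def check_with_typevars(sig, args):
    """Bind each type variable to the type of the first value it sees,
    then require subsequent uses of the same name to match (sketch)."""
    bound = {}
    for name, val in zip(sig, args):
        if name in bound:
            if type(val) is not bound[name]:
                return False  # later use disagrees with the first binding
        else:
            bound[name] = type(val)  # first use: auto-approve and record
    return True

# "a" binds to int on the first argument, and the second int matches
assert check_with_typevars(["a", "a"], [1, 2])
# the second argument is a str, contradicting the int binding
assert not check_with_typevars(["a", "a"], [1, "x"])
```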
-class Function(CheckType):
-    def __init__(self, func):
-        self._func = func
-        self.type = self
-    
-    @classmethod
-    def __typesig__(cls, obj):
-        if isinstance(obj, (FunctionType, MethodType)):
-            return cls(obj)
-            
-        # Snag callable class instances (that aren't types or classes)
-        if type(obj) not in (types.ClassType, type) and callable(obj):
-            return cls(obj)
-            
-    def __typecheck__(self, func, to_check):
-        if False == self._func(to_check):
-            raise _TC_FunctionError(self._func, to_check)
-            
-    def __str__(self):
-        return "Function(%s)" % self._func
-        
-    def __repr__(self):
-        return str(self)
-    
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self._func is other._func
-        
-    def __hash__(self):
-        return hash(str(self.__class__) + str(hash(self._func)))
-        
-# Register some of the above types so that Type() knows about them
-for c in (CheckType, List, Tuple, Dict, Single, TypeVariables, Function):
-    register_type(c)
-
-### The following are utility classes intended to make writing complex
-### signatures easier.
-######################################################################
-
-### Instances of Any() automatically approve of the object they're supposed
-### to be checking (ie, they don't actually check it; use this with caution)
-class Any(CheckType):
-    name = "Any"
-    
-    def __init__(self):
-        self.type = object
-    
-    def __typecheck__(self, func, to_check):
-        pass
-        
-    def __str__(self):
-        return "Any()"
-        
-    __repr__ = __str__
-
-    # All instances of this class are equal     
-    def __eq__(self, other):
-        return other.__class__ is self.__class__
-        
-    def __hash__(self):
-        return hash(self.__class__)
-
-### Base class for Or() and And()
-class _Boolean(CheckType):
-    def __init__(self, first_type, second_type, *types):
-        self._types = set()
-        
-        for t in (first_type, second_type)+types:
-            if type(t) is type(self):
-                self._types.update(t._types)
-            else:
-                self._types.add(Type(t))
-                
-        if len(self._types) < 2:
-            raise TypeError("there must be at least 2 distinct parameters to __init__()")
-
-        self.type = self
-        
-    def __eq__(self, other):
-        if other.__class__ is not self.__class__:
-            return False
-
-        return self._types == other._types
-        
-    def __hash__(self):
-        return hash(str(hash(self.__class__)) + str(hash(frozenset(self._types))))
-
-class Or(_Boolean):
-    name = "Or"
-
-    def __typecheck__(self, func, to_check):
-        for type in self._types:
-            try:
-                check_type(type, func, to_check)
-                return
-            except _TC_Exception:
-                pass
-
-        raise _TC_TypeError(to_check, self)
-
-class And(_Boolean):
-    name = "And"
-
-    def __typecheck__(self, func, to_check):
-        for type in self._types:
-            try:
-                check_type(type, func, to_check)
-            except _TC_Exception, e:
-                raise _TC_TypeError(to_check, self)
-
-class Not(Or):
-    name = "Not"
-    
-    # We override _Boolean's __init__ so that we can accept a single
-    # condition
-    def __init__(self, first_type, *types):
-        self._types = set([Type(t) for t in (first_type,)+types])
-                        
-        self.type = self
-    
-    def __typecheck__(self, func, to_check):
-        # Or does our work for us, but we invert its result
-        try:
-            Or.__typecheck__(self, func, to_check)
-        except _TC_Exception:
-            return
-        raise _TC_TypeError(to_check, self)
-        
-class Xor(_Boolean):
-    name = "Xor"
-
-    def __typecheck__(self, func, to_check):
-        already_met_1_cond = False
-        
-        for typ in self._types:
-            try:
-                check_type(typ, func, to_check)
-            except _TC_Exception:
-                pass
-            else:
-                if already_met_1_cond:
-                    raise _TC_XorError(2, _TC_TypeError(to_check, self))
-                already_met_1_cond = True
-                    
-        if not already_met_1_cond:
-            raise _TC_XorError(0, _TC_TypeError(to_check, self))        
-        
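`Xor` requires that exactly one of its component conditions match: zero matches and two-or-more matches both fail, with `_TC_XorError` recording which case occurred. The semantics in miniature (a standalone sketch using plain predicates instead of type assertions):

```python
def xor_check(value, checks):
    """Sketch of Xor: the value must satisfy exactly one predicate."""
    matched = sum(1 for check in checks if check(value))
    return matched == 1

is_int = lambda v: isinstance(v, int)
is_positive = lambda v: isinstance(v, (int, float)) and v > 0

assert xor_check(-3, [is_int, is_positive])        # int only: passes
assert not xor_check(3, [is_int, is_positive])     # matches both: fails
assert not xor_check(-3.5, [is_int, is_positive])  # matches neither: fails
```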
-class IsCallable(CheckType):
-    def __init__(self):
-        self.type = self
-        
-    def __str__(self):
-        return "IsCallable()"
-        
-    __repr__ = __str__
-    
-    # They're all the same
-    # XXX Change IsCallable to a singleton class    
-    def __hash__(self):
-        return id(self.__class__)
-    
-    def __eq__(self, other):
-        return self.__class__ is other.__class__
-    
-    def __typecheck__(self, func, to_check):
-        if not callable(to_check):
-            raise _TC_TypeError(to_check, 'a callable')
-            
-class HasAttr(CheckType):
-    def __init__(self, set_1, set_2=None):
-        attr_sets = {list: [], dict: {}}
-    
-        for (arg_1, arg_2) in ((set_1, set_2), (set_2, set_1)):
-            for t in (list, dict):
-                if isinstance(arg_1, t):
-                    attr_sets[t] = arg_1
-                    if isinstance(arg_2, t):
-                        raise TypeError("can only have one list and/or one dict")
-                        
-        self._attr_types = dict.fromkeys(attr_sets[list], Any())
-        
-        for (attr, typ) in attr_sets[dict].items():
-            self._attr_types[attr] = Type(typ)
-        
-    def __typecheck__(self, func, to_check):
-        for (attr, typ) in self._attr_types.items():
-            if not hasattr(to_check, attr):
-                raise _TC_MissingAttrError(attr)
-                
-            try:
-                check_type(typ, func, getattr(to_check, attr))
-            except _TC_Exception, e:
-                raise _TC_AttrError(attr, e)
-                
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self._attr_types == other._attr_types
-        
-    def __hash__(self):
-        return hash(str(hash(self.__class__)) + str(hash(str(self._attr_types))))
-        
-    def __str__(self):
-        any_type = []
-        spec_type = {}
-        
-        any = Any()
-        
-        for (attr, typ) in self._attr_types.items():
-            if typ == any:
-                any_type.append(attr)
-            else:
-                spec_type[attr] = typ
-
-        msg = [t for t in (any_type, spec_type) if len(t)]              
-    
-        return "HasAttr(" + ', '.join(map(str, msg)) + ")"
-        
-    __repr__ = __str__
-                
-class IsIterable(CheckType):
-    def __init__(self):
-        self.type = self
-        
-    def __eq__(self, other):
-        return self.__class__ is other.__class__
-    
-    # They're all the same
-    # XXX Change IsIterable to a singleton class    
-    def __hash__(self):
-        return id(self.__class__)
-        
-    def __str__(self):
-        return "IsIterable()"
-        
-    __repr__ = __str__
-        
-    def __typecheck__(self, func, to_check):
-        if not (hasattr(to_check, '__iter__') and callable(to_check.__iter__)):
-            raise _TC_TypeError(to_check, "an iterable")
-            
-class YieldSeq(CheckType):
-    _index_map = {}
-
-    def __init__(self, type_1, type_2, *types):
-        self.type = self
-        
-        self._type = [type_1, type_2] + list(types)
-        self._types = [Type(t) for t in self._type]
-        
-    def __str__(self):
-        return "YieldSeq(" + ", ".join(map(str, self._type)) + ")"
-        
-    __repr__ = __str__
-    
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self._types == other._types
-        
-    def __hash__(self):
-        return hash(str(self.__class__) + str([hash(t) for t in self._types]))
-    
-    # We have to use __{start,stop}checking__ so that the indexes get
-    # reset every time we run through the typechecking sequence
-    @classmethod
-    def __startchecking__(cls, gen):
-        if isinstance(gen, GeneratorType):
-            cls._index_map[gen] = {}
-    
-    @classmethod    
-    def __stopchecking__(cls, gen):
-        if gen in cls._index_map:
-            del cls._index_map[gen]
-    
-    def __typecheck__(self, gen, to_check):
-        index_map = self.__class__._index_map
-        
-        # There might be multiple YieldSeqs per signature
-        if self not in index_map[gen]:
-            index_map[gen][self] = -1
-        index = index_map[gen]
-
-        if index[self] >= len(self._types)-1:
-            raise _TC_YieldCountError(len(self._types))
-            
-        index[self] += 1        
-        check_type(self._types[index[self]], gen, to_check)
-                
-register_type(YieldSeq)
-
-class Exact(CheckType):
-    def __init__(self, obj):
-        self.type = self
-        self._obj = obj
-        
-    def __hash__(self):
-        try:
-            obj_hash = str(hash(self._obj))
-        except TypeError:
-            obj_hash = str(type(self._obj)) + str(self._obj)
-        
-        return hash(str(self.__class__) + obj_hash)
-        
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self._obj == other._obj
-        
-    def __typecheck__(self, func, to_check):
-        if self._obj != to_check:
-            raise _TC_ExactError(to_check, self._obj)
-            
-class Length(CheckType):
-    def __init__(self, length):
-        self.type = self
-        self._length = int(length)
-        
-    def __hash__(self):
-        return hash(str(self.__class__) + str(self._length))
-        
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self._length == other._length
-        
-    def __typecheck__(self, func, to_check):
-        try:
-            length = len(to_check)
-        except TypeError:
-            raise _TC_TypeError(to_check, "something with a __len__ method")
-            
-        if length != self._length:
-            raise _TC_LengthError(length, self._length)
-
-import sys          
-class Class(CheckType):
-    def __init__(self, class_name):
-        self.type = self
-        self.class_name = class_name
-        self.class_obj = None
-        self._frame = sys._getframe(1)
-        
-    def __hash__(self):
-        return hash(str(self.__class__) + self.class_name)
-        
-    def __str__(self):
-        return "Class('%s')" % self.class_name
-        
-    __repr__ = __str__
-        
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self.class_name == other.class_name
-        
-    def __typecheck__(self, func, to_check):
-        if self.class_obj is None:
-            class_name = self.class_name
-            frame = self._frame
-        
-            for f_dict in (frame.f_locals, frame.f_globals):
-                if class_name in f_dict:
-                    if self is not f_dict[class_name]:
-                        self.class_obj = f_dict[class_name]
-                        self._frame = None
-                        break
-            else:
-                raise NameError("name '%s' is not defined" % class_name)
-        
-        if not isinstance(to_check, self.class_obj):
-            raise _TC_TypeError(to_check, self.class_obj)
-            
-class Typeclass(CheckType):
-    bad_members = dict.fromkeys(['__class__', '__new__', '__init__'], True)
-
-    def __init__(self, *types):
-        if len(types) == 0:
-            raise TypeError("Must supply at least one type to __init__()")
-    
-        self.type = self
-    
-        self._cache = set()
-        self._interface = set()
-        self._instances = set()
-        for t in types:
-            self.add_instance(t)
-                
-        self._calculate_interface()
-        
-    def recalculate_interface(self):
-        self._cache = self._instances.copy()
-        self._calculate_interface()
-                
-    def instances(self):
-        return list(self._instances)
-        
-    def interface(self):
-        return list(self._interface)
-        
-    def has_instance(self, instance):
-        return instance in self._instances
-        
-    def add_instance(self, instance):
-        if isinstance(instance, self.__class__):
-            for inst in instance.instances():
-                self._instances.add(inst)
-                self._cache.add(inst)
-        elif isinstance(instance, (ClassType, TypeType)):
-            self._instances.add(instance)
-            self._cache.add(instance)
-        else:
-            raise TypeError("All instances must be classes or types")
-        
-    def intersect(self, other):
-        if isinstance(other, self.__class__):
-            new_instances = other.instances()
-        else:
-            new_instances = other
-            
-        self._instances.update(new_instances)
-        self._cache.update(new_instances)
-        self._calculate_interface()
-        
-    def _calculate_interface(self):
-        bad_members = self.bad_members
-    
-        for instance in self._instances:
-            inst_attrs = []
-        
-            for attr, obj in instance.__dict__.items():
-                if callable(obj) and attr not in bad_members:
-                    inst_attrs.append(attr)
-            
-            if len(self._interface) == 0:
-                self._interface = set(inst_attrs)
-            else:
-                self._interface.intersection_update(inst_attrs)
-                
-    def __typecheck__(self, func, to_check):
-        if to_check.__class__ in self._cache:
-            return
-            
-        for method in self._interface:
-            if not hasattr(to_check, method):
-                raise _TC_MissingAttrError(method)
-
-            attr = getattr(to_check, method)
-            if not callable(attr):
-                raise _TC_AttrError(method, _TC_TypeError(attr, IsCallable()))
-                
-        self._cache.add(to_check.__class__)
-        
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self._instances == other._instances
-        
-    def __hash__(self):
-        return hash(str(self.__class__) + str(hash(frozenset(self._instances))))
-        
-    def __repr__(self):
-        return object.__repr__(self)
-        
-    def __str__(self):
-        return 'Typeclass(' + ', '.join(map(str, self._instances)) + ')'
-
-# The current implementation of Self relies on the TypeVariables machinery
-_Self = TypeVariables("this is the class of the invocant")
-def Self():
-    return _Self
-    
-### Aliases
-###########
-
-IsOneOf = Or
-IsAllOf = And
-IsNoneOf = Not
-IsOnlyOneOf = Xor
-
-### This is the public side of the module
-#########################################
-
-# This is for backwards compatibility with v0.1.6 and earlier
-class TypeCheckException(Exception):
-    pass
-
-class TypeCheckError(TypeCheckException):
-    def __init__(self, prefix, bad_object, exception):
-        TypeCheckException.__init__(self, prefix, bad_object, exception)
-        
-        self.prefix = prefix
-        self.internal = exception
-        self.bad_object = bad_object
-        
-        (bad_obj_str, start_message) = exception.format_bad_object(bad_object)
-        self.__message = prefix + bad_obj_str + start_message.error_message()
-        
-    def __str__(self):
-        return self.__message
-        
-class TypeSignatureError(Exception):
-    def __init__(self, internal_exc):
-        Exception.__init__(self, internal_exc)
-        
-        self.internal = internal_exc
-        self.__message = internal_exc.error_message()
-
-    def __str__(self):
-        return self.__message
-
-### Begin helper classes/functions for typecheck_args
-#####################################################
-def _rec_tuple(obj):
-    if isinstance(obj, list):
-        return tuple(_rec_tuple(o) for o in obj)
-    return obj
-
-def _rec_tuple_str(obj):
-    if not isinstance(obj, (list, tuple)):
-        return obj
-
-    if len(obj) == 1:
-        return '(%s,)' % _rec_tuple_str(obj[0])
-
-    return '(' + ', '.join(_rec_tuple_str(o) for o in obj) + ')'
-
-def _gen_arg_to_param(func, (posargs, varargs, varkw, defaults)):
-    sig_args = list()
-    dic_args = list()
-    
-    for obj in posargs:
-        if isinstance(obj, list):
-            rts = _rec_tuple_str(obj)
-        
-            sig_args.append(rts)
-            dic_args.append((_rec_tuple(obj), rts))
-        else:
-            sig_args.append(str(obj))
-            dic_args.append(('"%s"' % obj, obj))
-    
-    func_code = ''
-    if varargs:
-        dic_args.append(('"%s"' % varargs, varargs))
-        sig_args.append('*' + varargs)
-        func_code = '\n\t%s = list(%s)' % (varargs, varargs)
-    if varkw:
-        dic_args.append(('"%s"' % varkw, varkw))
-        sig_args.append('**' + varkw)
-
-    func_name = func.func_name + '_'
-    while func_name in dic_args:
-        func_name += '_'
-
-    func_def = 'def %s(' % func.func_name
-    func_return = func_code \
-                + '\n\treturn {' \
-                + ', '.join('%s: %s' % kv for kv in dic_args) \
-                + '}'
-    
-    locals = {}
-    exec func_def + ','.join(sig_args) + '):' + func_return in locals
-    func = locals[func.func_name]
-    func.func_defaults = defaults
-    return func
-
-def _validate_tuple(ref, obj):
-    if not isinstance(ref, (list, tuple)):
-        return
-    if not isinstance(obj, (list, tuple)):
-        raise _TS_TupleError(ref, obj)
-
-    if len(ref) != len(obj):
-        raise _TS_TupleError(ref, obj)
-        
-    try:
-        for r, o in zip(ref, obj):
-            _validate_tuple(r, o)
-    except _TS_TupleError:
-        raise _TS_TupleError(ref, obj)
-
-def _param_to_type((params, varg_name, kwarg_name), vargs, kwargs):
-    vargs = list(vargs)
-    kwargs = dict(kwargs)
-    
-    # Make parameter names to values
-    param_value = dict()
-    
-    # There are excess positional arguments, but no *args parameter
-    if len(params) < len(vargs) and varg_name is None:
-        raise _TS_ExtraPositionalError(vargs[len(params)])
-    # There are not enough positional args and no kwargs to draw from
-    if len(params) > len(vargs) and len(kwargs) == 0:
-        raise _TS_MissingTypeError(params[len(vargs)])
-    
-    # No reason to do this if there aren't any vargs
-    if len(vargs):
-        for p, a in zip(params, vargs):
-            # Make sure all auto-unpacked tuples match up
-            _validate_tuple(p, a)
-            param_value[_rec_tuple(p)] = a
-        
-    # No reason to do all this work if there aren't any kwargs
-    if len(kwargs) > 0:
-        # All params that still need values
-        params = set([k for k in params if k not in param_value])
-        if kwarg_name and kwarg_name not in param_value:
-            params.add(kwarg_name)
-        if varg_name and varg_name not in param_value:
-            params.add(varg_name)
-            
-        # Lift this out of the loop
-        no_double_star = kwarg_name is None
-        
-        # All parameter slots have been filled, but there are still keyword
-        # args remaining with no **kwargs parameter present
-        if len(params) == 0 and no_double_star:
-            raise _TS_ExtraKeywordError(kwargs.keys()[0])
-        
-        # Match up remaining keyword args with open parameter slots
-        for p, a in kwargs.items():
-            if p in param_value:
-                raise _TS_TwiceTypedError(p, a, param_value[p])
-            if p not in params and no_double_star:
-                raise _TS_ExtraKeywordError(p)
-
-            # Make sure all auto-unpacked tuples match up
-            _validate_tuple(p, a)
-
-            # Bookkeeping
-            params.remove(p)
-            param_value[p] = a
-        
-        # Any elements left in params indicate that the parameter is missing
-        # a value
-        if len(params):
-            raise _TS_MissingTypeError(params.pop())
-
-    return param_value
-
-def _make_fake_function(func):
-    def fake_function(*vargs, **kwargs):
-        # We call start_checking here, but __check_result
-        # has to call stop_checking on its own. The reason
-        # for this is so that typecheck_yield can call
-        # stop_checking on the function and then start_checking
-        # on the generator
-        start_checking(func)
-
-        # If either one of these operations fails, we need to call
-        # stop_checking()
-        try:
-            fake_function.__check_args(vargs, kwargs)
-            result = func(*vargs, **kwargs)
-        except:
-            stop_checking(func)
-            raise
-
-        return fake_function.__check_result(func, result)
-
-    # These are the default implementations of __check_args
-    # and __check_results
-    def _pass_args(vargs, kwargs):
-        pass
-    def _pass_result(func, result):
-        stop_checking(func)
-        return result
-
-    fake_function.__check_args = _pass_args
-    fake_function.__check_result = _pass_result
-    fake_function.__wrapped_func = func
-
-    # Mock-up the fake function to look as much like the
-    # real function as possible
-    fake_function.__module__ = func.__module__
-    fake_function.__name__ = func.__name__
-    fake_function.__doc__ = func.__doc__
-
-    return fake_function
-
-###################################################
-### End helper classes/functions for typecheck_args
-
-def typecheck_args(*v_sig, **kw_sig):
-    # typecheck_args is run to obtain the real decorator
-    def decorator(func):
-        if hasattr(func, '__wrapped_func'):
-            if hasattr(func, 'type_args'):
-                raise RuntimeError('Cannot use the same typecheck_* function more than once on the same function')
-            wrapped_func = func.__wrapped_func
-        else:
-            wrapped_func = func
-
-        param_list, varg_name, kwarg_name, defaults = inspect.getargspec(wrapped_func)
-        args_to_params = _gen_arg_to_param(wrapped_func, (param_list, varg_name, kwarg_name, defaults))
-
-        try:        
-            param_types = _param_to_type((param_list, varg_name, kwarg_name), v_sig, kw_sig)
-        except _TS_Exception, e:
-            raise TypeSignatureError(e)
-        
-        ### We need to fix-up the types of the *vargs and **kwargs parameters
-        #####################################################################
-        if varg_name:
-            if not isinstance(param_types[varg_name], list):
-                param_types[varg_name] = [param_types[varg_name]]
-
-        if kwarg_name:
-            if not isinstance(param_types[kwarg_name], dict):
-                param_types[kwarg_name] = {str: param_types[kwarg_name]}
-        
-        #####################################################################
-        ### /Fix-up
-        
-        # Convert the signatures to types now, rather than rebuild them in every function call
-        check_param_types = dict()
-        for k, v in param_types.items():
-            check_param_types[k] = Type(v)
-
-        def __check_args(__vargs, __kwargs):
-            # Type-checking can be turned on and off by toggling the
-            # value of the global enable_checking variable
-            if enable_checking:
-                arg_dict = args_to_params(*__vargs, **__kwargs)
-
-                # Type-check the keyword arguments
-                try:
-                    for name, val in arg_dict.items():
-                        check_type(check_param_types[name], wrapped_func, val)
-                except _TC_Exception, e:
-                    str_name = _rec_tuple_str(name)
-                    raise TypeCheckError("Argument %s: " % str_name, val, e)
-
-        if hasattr(func, '__check_result'):
-            # This is one of our wrapper functions, probably created by
-            # typecheck_yield or typecheck_return
-            fake_function = func
-        else:
-            # We need to build a wrapper
-            fake_function = _make_fake_function(func)
-
-        # Specify how argument checking should be done
-        fake_function.__check_args = __check_args
-
-        ### Add the publicly-accessible signature information
-        fake_function.type_args = param_types
-
-        return fake_function
-    return decorator
-    
-# Refactor this out of typecheck_{return,yield} 
-def _decorator(signature, conflict_field, twice_field, check_result_func):
-    def decorator(func):
-        if hasattr(func, '__check_result'):
-            # This is one of our wrapper functions, probably created by
-            # typecheck_args
-            if hasattr(func, conflict_field):
-                raise RuntimeError("Cannot use typecheck_return and typecheck_yield on the same function")
-            elif hasattr(func, twice_field):
-                raise RuntimeError('Cannot use the same typecheck_* function more than once on the same function')
-
-            fake_function = func
-        else:
-            fake_function = _make_fake_function(func)
-                    
-        setattr(fake_function, twice_field, signature)
-        fake_function.__check_result = check_result_func
-        return fake_function
-    return decorator
-
-def typecheck_return(*signature):
-    if len(signature) == 1:
-        signature = signature[0]
-    sig_types = Type(signature)
-
-    def __check_return(func, return_vals):
-        if enable_checking:
-            try:
-                check_type(sig_types, func, return_vals)
-            except _TC_Exception, e:
-                stop_checking(func)
-                raise TypeCheckError("Return value: ", return_vals, e)
-
-        stop_checking(func)
-        return return_vals
-    return _decorator(signature, 'type_yield', 'type_return', __check_return)
-
-class Fake_generator(object):
-    def __init__(self, real_gen, signature):
-        # The generator should have the same yield signature
-        # as the function that produced it; however, we don't
-        # copy the args signature because the generator
-        # doesn't take arguments
-        self.type_yield = signature
-
-        self.__yield_no = 0     
-        self.__real_gen = real_gen
-        self.__sig_types = Type(signature)
-        self.__needs_stopping = True
-
-    def next(self):
-        gen = self.__real_gen
-    
-        self.__yield_no += 1
-
-        try:
-            return_vals = gen.next()
-        except StopIteration:
-            if self.__needs_stopping:
-                stop_checking(gen)
-                self.__needs_stopping = False
-            raise
-
-        if enable_checking:
-            try:
-                check_type(self.__sig_types, gen, return_vals)
-            except _TC_Exception, e:
-                # Insert this error into the chain so we can know
-                # which yield the error occurred at
-                middle_exc = _TC_GeneratorError(self.__yield_no, e)
-                raise TypeCheckError("", return_vals, middle_exc)
-
-        # Everything checks out. Return the results
-        return return_vals
-        
-    def __del__(self):
-        if self.__needs_stopping:
-            stop_checking(self.__real_gen)
-
-def typecheck_yield(*signature):
-    if len(signature) == 1:
-        signature = signature[0]
-
-    def __check_yield(func, gen):
-        # If the return value isn't a generator, we blow up
-        if not isinstance(gen, types.GeneratorType):
-            stop_checking(func)
-            raise TypeError("typecheck_yield only works for generators")
-
-        # Inform all listening classes that they might want to preserve any information
-        # from the function to the generator (*hint* TypeVariables *hint*)
-        #
-        # stop_checking() will not be invoked on the generator until it raises
-        # StopIteration or its refcount drops to 0
-        switch_checking(func, gen)
-    
-        # Otherwise, we build ourselves a fake generator
-        return Fake_generator(gen, signature)
-    return _decorator(signature, 'type_return', 'type_yield', __check_yield)
-
-_null_decorator = lambda *args, **kwargs: lambda f: f
-typecheck = _null_decorator
-accepts = _null_decorator
-returns = _null_decorator
-yields = _null_decorator
-
-# Aliases
-def enable_typechecking():
-    global typecheck
-    global accepts
-    global returns
-    global yields
-  
-    typecheck = typecheck_args
-    accepts = typecheck_args
-    returns = typecheck_return
-    yields = typecheck_yield
-
-import os
-if "PYTHONTYPECHECK" in os.environ:
-    enable_typechecking()
-
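The deleted module above is Python 2-only throughout (`except _TC_Exception, e`, `exec ... in locals`, `func.func_name`, and tuple parameters such as `def _param_to_type((params, varg_name, kwarg_name), ...)`), which is presumably why it was dropped rather than ported. For readers curious what the decorators did, here is a minimal, hypothetical Python 3 sketch of the same argument-checking idea; the names `accepts` and `add` are illustrative only and are not the deleted module's API:

```python
import functools
import inspect

def accepts(**sig):
    """Minimal sketch of an argument-checking decorator.

    Maps parameter names to expected types and raises TypeError on a
    mismatch. Illustration only, not the removed typecheck_args API.
    """
    def decorator(func):
        params = inspect.signature(func).parameters
        unknown = set(sig) - set(params)
        if unknown:
            raise TypeError("unknown parameters: %s" % sorted(unknown))

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Bind the call exactly as Python would, then check each
            # supplied argument against its declared type.
            bound = inspect.signature(func).bind(*args, **kwargs)
            for name, value in bound.arguments.items():
                expected = sig.get(name)
                if expected is not None and not isinstance(value, expected):
                    raise TypeError(
                        "argument %r: expected %s, got %s"
                        % (name, expected.__name__, type(value).__name__))
            return func(*args, **kwargs)
        return wrapper
    return decorator

@accepts(x=int, y=int)
def add(x, y):
    return x + y
```

The real module additionally handled `*args`/`**kwargs` signatures, generators, and type variables; this sketch covers only plain positional/keyword checking.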
diff --git a/translate/misc/typecheck/doctest_support.py b/translate/misc/typecheck/doctest_support.py
deleted file mode 100644
index 0933dda..0000000
--- a/translate/misc/typecheck/doctest_support.py
+++ /dev/null
@@ -1,36 +0,0 @@
-"""
-This module allows doctest to find typechecked functions.
-
-Currently, doctest verifies functions to make sure that their
-globals() dict is the __dict__ of their module. In the case of
-decorated functions, the globals() dict is *not* the right one.
-
-To enable support for doctest do:
-    
-    import typecheck.doctest_support
-
-This import must occur before any calls to doctest methods.
-"""
-
-def __DocTestFinder_from_module(self, module, object):
-    """
-    Return true if the given object is defined in the given
-    module.
-    """
-    import inspect
-    
-    if module is None:
-        return True 
-    elif inspect.isfunction(object) or inspect.isclass(object):
-        return module.__name__ == object.__module__
-    elif inspect.getmodule(object) is not None:
-        return module is inspect.getmodule(object)
-    elif hasattr(object, '__module__'):
-        return module.__name__ == object.__module__
-    elif isinstance(object, property):
-        return True # [XX] no way to be sure.
-    else:
-        raise ValueError("object must be a class or function")
-
-import doctest as __doctest
-__doctest.DocTestFinder._from_module = __DocTestFinder_from_module
\ No newline at end of file
diff --git a/translate/misc/typecheck/mixins.py b/translate/misc/typecheck/mixins.py
deleted file mode 100644
index 11ac7bb..0000000
--- a/translate/misc/typecheck/mixins.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from translate.misc.typecheck import _TC_NestedError, _TC_TypeError, check_type, Or
-from translate.misc.typecheck import register_type, _TC_Exception
-
-class _TC_IterationError(_TC_NestedError):
-    def __init__(self, iteration, value, inner_exception):
-        _TC_NestedError.__init__(self, inner_exception)
-    
-        self.iteration = iteration
-        self.value = value
-        
-    def error_message(self):
-        return ("at iteration %d (value: %s)" % (self.iteration, repr(self.value))) + _TC_NestedError.error_message(self)
-
-### This is the shadow class behind UnorderedIteratorMixin.
-### Again, it tries to pretend it doesn't exist by mimicking
-### the class of <obj> as much as possible.
-###
-### This mixin provides typechecking for iterator classes
-### where you don't care about the order of the types (ie,
-### you simply Or() the types together, as opposed to patterned
-### lists, which would be ordered mixins)
-class _UnorderedIteratorMixin(object):
-    def __init__(self, class_name, obj):
-        vals = [o for o in obj]
-    
-        self.type = self
-        self._type = Or(*vals)
-        self.__cls = obj.__class__
-        self.__vals = vals
-        # This is necessary because it's a huge pain in the ass
-        # to get the "raw" name of the class once it's created
-        self.__cls_name = class_name
-
-    def __typecheck__(self, func, to_check):
-        if not isinstance(to_check, self.__cls):
-            raise _TC_TypeError(to_check, self)
-
-        for i, item in enumerate(to_check):
-            try:
-                check_type(self._type, func, item)
-            except _TC_Exception, e:
-                raise _TC_IterationError(i, item, e)
-
-    @classmethod    
-    def __typesig__(cls, obj):
-        if isinstance(obj, cls):
-            return obj
-
-    def __str__(self):
-        return "%s(%s)" % (self.__cls_name, str(self._type))
-
-    __repr__ = __str__
-
-### This is included in a class's parent-class section like so:
-###  class MyClass(UnorderedIteratorMixin("MyClass")):
-###    blah blah blah
-###
-### This serves as a class factory, whose produced classes
-### attempt to mask the fact they exist. Their purpose
-### is to redirect __typesig__ calls to appropriate
-### instances of _UnorderedIteratorMixin
-def UnorderedIteratorMixin(class_name):
-    class UIM(object):
-        @classmethod
-        def __typesig__(cls, obj):
-            if isinstance(obj, cls):
-                return _UnorderedIteratorMixin(class_name, obj)
-
-        def __repr__(self):
-            return "%s%s" % (class_name, str(tuple(e for e in self)))
-
-    # We register each produced class anew
-    # If someone needs to unregister these classes, they should
-    # save a copy of it before including it in the class-definition:
-    #
-    # my_UIM = UnorderedIteratorMixin("FooClass")
-    # class FooClass(my_UIM):
-    #   ...
-    #
-    # Alternatively, you could just look in FooClass.__bases__ later; whatever
-    register_type(UIM)
-    return UIM
-    
-register_type(_UnorderedIteratorMixin)
diff --git a/translate/misc/typecheck/sets.py b/translate/misc/typecheck/sets.py
deleted file mode 100644
index 42743bd..0000000
--- a/translate/misc/typecheck/sets.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from translate.misc.typecheck import CheckType, _TC_TypeError, check_type, Type
-from translate.misc.typecheck import register_type, Or, _TC_Exception, _TC_KeyError
-from translate.misc.typecheck import _TC_LengthError
-
-### Provide typechecking for the built-in set() class
-###
-### XXX: Investigate rewriting this in terms of
-### UnorderedIteratorMixin or Or()          
-class Set(CheckType):
-    def __init__(self, set_list):
-        self.type = set(set_list)
-        self._types = [Type(t) for t in self.type]
-        
-        # self._type is used to build _TC_TypeError
-        if len(self._types) > 1:
-            self._type = Or(*self.type)
-        elif len(self._types) == 1:
-            # XXX Is there an easier way to get this?
-            t = self.type.pop()
-            self._type = t
-            self.type.add(t)
-    
-    def __str__(self):
-        return "Set(" + str([e for e in self.type]) + ")"
-        
-    __repr__ = __str__
-    
-    def __typecheck__(self, func, to_check):
-        if not isinstance(to_check, set):
-            raise _TC_TypeError(to_check, self.type)
-            
-        if len(self._types) == 0 and len(to_check) > 0:
-            raise _TC_LengthError(len(to_check), 0)
-            
-        for obj in to_check:
-            error = False
-            for type in self._types:
-                try:
-                    check_type(type, func, obj)
-                except _TC_Exception:
-                    error = True
-                    continue
-                else:
-                    error = False
-                    break
-            if error:
-                raise _TC_KeyError(obj, _TC_TypeError(obj, self._type))
-
-    def __eq__(self, other):
-        if self.__class__ is not other.__class__:
-            return False
-        return self.type == other.type
-        
-    def __hash__(self):
-        return hash(str(hash(self.__class__)) + str(hash(frozenset(self.type))))
-            
-    @classmethod
-    def __typesig__(self, obj):
-        if isinstance(obj, set):
-            return Set(obj)
-
-register_type(Set)
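The deleted `Set` checker accepts a set when every element matches at least one of the declared member types, and an empty signature matches only an empty set. A hedged Python 3 restatement of that acceptance rule (the function name is illustrative, not the deleted API, and it returns a bool rather than raising the module's exception hierarchy):

```python
def check_set(types, to_check):
    """Return True if `to_check` is a set whose every element matches
    at least one type in `types` -- the same acceptance rule as the
    removed Set checker. Illustrative sketch only.
    """
    if not isinstance(to_check, set):
        return False
    if not types:
        # An empty signature, Set([]), only matches an empty set.
        return len(to_check) == 0
    return all(isinstance(obj, tuple(types)) for obj in to_check)
```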
diff --git a/translate/misc/typecheck/typeclasses.py b/translate/misc/typecheck/typeclasses.py
deleted file mode 100644
index fcab3f4..0000000
--- a/translate/misc/typecheck/typeclasses.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from translate.misc.typecheck import Typeclass
-
-### Number
-####################################################
-
-_numbers = [int, float, complex, long, bool]
-try:
-    from decimal import Decimal
-    _numbers.append(Decimal)
-    del Decimal
-except ImportError:
-    pass
-    
-Number = Typeclass(*_numbers)
-del _numbers
-    
-### String -- subinstance of ImSequence
-####################################################
-
-String = Typeclass(str, unicode)
-    
-### ImSequence -- immutable sequences
-####################################################
-
-ImSequence = Typeclass(tuple, xrange, String)
-
-### MSequence -- mutable sequences
-####################################################
-
-MSequence = Typeclass(list)
-
-### Mapping
-####################################################
-
-Mapping = Typeclass(dict)
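A `Typeclass` passes any object that implements the intersection of its instance classes' callable interfaces, with constructors excluded (see `_calculate_interface` and `bad_members` in the deleted code above). The core computation can be sketched in Python 3 as follows; the helper names are hypothetical, not the deleted API:

```python
def common_interface(*classes):
    """Compute the method names shared by all given classes, skipping
    constructors -- the same idea as Typeclass._calculate_interface.
    Illustrative sketch only.
    """
    skip = {"__class__", "__new__", "__init__"}
    interface = None
    for cls in classes:
        methods = {name for name in dir(cls)
                   if name not in skip and callable(getattr(cls, name, None))}
        interface = methods if interface is None else interface & methods
    return interface or set()

def conforms(obj, interface):
    """True if `obj` exposes a callable attribute for every name in
    `interface` -- duck typing, as in Typeclass.__typecheck__."""
    return all(callable(getattr(obj, name, None)) for name in interface)
```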
diff --git a/translate/misc/wStringIO.py b/translate/misc/wStringIO.py
index e8a5e3d..937344a 100644
--- a/translate/misc/wStringIO.py
+++ b/translate/misc/wStringIO.py
@@ -75,7 +75,7 @@ class StringIO:
     def read(self, n=None):
         if self.closed:
             raise ValueError("I/O operation on closed file")
-        if n == None:
+        if n is None:
             r = self.buf.read()
         else:
             r = self.buf.read(n)
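The `n is None` change above is more than style: `==` dispatches to `__eq__`, which a class may overload to claim equality with anything, including None, while `is` tests object identity and cannot be fooled. A small illustration with a hypothetical class:

```python
class AlwaysEqual:
    """Hypothetical class whose __eq__ claims equality with everything,
    including None -- showing why None tests should use `is`."""
    def __eq__(self, other):
        return True

obj = AlwaysEqual()
assert obj == None        # the overloaded equality test lies
assert obj is not None    # the identity test is reliable
```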
diff --git a/translate/misc/wsgiserver/LICENSE.txt b/translate/misc/wsgiserver/LICENSE.txt
new file mode 100644
index 0000000..a9b9bb3
--- /dev/null
+++ b/translate/misc/wsgiserver/LICENSE.txt
@@ -0,0 +1,25 @@
+Copyright (c) 2004-2011, CherryPy Team (team at cherrypy.org)
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice,
+      this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright notice,
+      this list of conditions and the following disclaimer in the documentation
+      and/or other materials provided with the distribution.
+    * Neither the name of the CherryPy Team nor the names of its contributors
+      may be used to endorse or promote products derived from this software
+      without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/translate/misc/wsgiserver/__init__.py b/translate/misc/wsgiserver/__init__.py
index 9ee5381..ee6190f 100644
--- a/translate/misc/wsgiserver/__init__.py
+++ b/translate/misc/wsgiserver/__init__.py
@@ -6,4 +6,9 @@ __all__ = ['HTTPRequest', 'HTTPConnection', 'HTTPServer',
            'Gateway', 'WSGIGateway', 'WSGIGateway_10', 'WSGIGateway_u0',
            'WSGIPathInfoDispatcher', 'get_ssl_adapter_class']
 
-from wsgiserver import *
+import sys
+if sys.version_info < (3, 0):
+    from wsgiserver2 import *
+else:
+    # Le sigh. Boo for backward-incompatible syntax.
+    exec('from .wsgiserver3 import *')
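The `__init__.py` change above dispatches to a Python-2 or Python-3 backend module at import time. The selection logic can be sketched as a standalone helper (function name and the version cutoff mirror the patch; the module names are taken from it):

```python
import sys

def pick_backend(version_info=None):
    """Return the wsgiserver backend module name for a given
    interpreter version, mirroring the py2/py3 split above."""
    vi = version_info if version_info is not None else sys.version_info
    return 'wsgiserver2' if vi < (3, 0) else 'wsgiserver3'

print(pick_backend((2, 7)))   # wsgiserver2
print(pick_backend((3, 4)))   # wsgiserver3
```

The `exec(...)` in the patch itself exists because the Python-3 branch must not be parsed by old interpreters; a plain conditional like the sketch above is enough once that constraint is gone.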
diff --git a/translate/misc/wsgiserver/wsgiserver.py b/translate/misc/wsgiserver/wsgiserver2.py
similarity index 98%
copy from translate/misc/wsgiserver/wsgiserver.py
copy to translate/misc/wsgiserver/wsgiserver2.py
index cf34f2b..656f430 100644
--- a/translate/misc/wsgiserver/wsgiserver.py
+++ b/translate/misc/wsgiserver/wsgiserver2.py
@@ -107,9 +107,9 @@ def format_exc(limit=None):
     finally:
         etype = value = tb = None
 
+import operator
 
 from urllib import unquote
-from urlparse import urlparse
 import warnings
 
 if sys.version_info >= (3, 0):
@@ -1495,13 +1495,29 @@ class ThreadPool(object):
 
     def grow(self, amount):
         """Spawn new worker threads (not above self.max)."""
-        for i in range(amount):
-            if self.max > 0 and len(self._threads) >= self.max:
-                break
-            worker = WorkerThread(self.server)
-            worker.setName("CP Server " + worker.getName())
-            self._threads.append(worker)
-            worker.start()
+        if self.max > 0:
+            budget = max(self.max - len(self._threads), 0)
+        else:
+            # self.max <= 0 indicates no maximum
+            budget = float('inf')
+
+        n_new = min(amount, budget)
+
+        workers = [self._spawn_worker() for i in range(n_new)]
+        while not self._all(operator.attrgetter('ready'), workers):
+            time.sleep(.1)
+        self._threads.extend(workers)
+
+    def _spawn_worker(self):
+        worker = WorkerThread(self.server)
+        worker.setName("CP Server " + worker.getName())
+        worker.start()
+        return worker
+
+    def _all(func, items):
+        results = [func(item) for item in items]
+        return reduce(operator.and_, results, True)
+    _all = staticmethod(_all)
 
     def shrink(self, amount):
         """Kill off worker threads (not below self.min)."""
@@ -1512,13 +1528,17 @@ class ThreadPool(object):
                 self._threads.remove(t)
                 amount -= 1
 
-        if amount > 0:
-            for i in range(min(amount, len(self._threads) - self.min)):
-                # Put a number of shutdown requests on the queue equal
-                # to 'amount'. Once each of those is processed by a worker,
-                # that worker will terminate and be culled from our list
-                # in self.put.
-                self._queue.put(_SHUTDOWNREQUEST)
+        # calculate the number of threads above the minimum
+        n_extra = max(len(self._threads) - self.min, 0)
+
+        # don't remove more than amount
+        n_to_remove = min(amount, n_extra)
+
+        # put shutdown requests on the queue equal to the number of threads
+        # to remove. As each request is processed by a worker, that worker
+        # will terminate and be culled from the list.
+        for n in range(n_to_remove):
+            self._queue.put(_SHUTDOWNREQUEST)
 
     def stop(self, timeout=5):
         # Must shut down threads here so the code that calls
@@ -1568,14 +1588,6 @@ try:
 except ImportError:
     try:
         from ctypes import windll, WinError
-        import ctypes.wintypes
-        _SetHandleInformation = windll.kernel32.SetHandleInformation
-        _SetHandleInformation.argtypes = [
-            ctypes.wintypes.HANDLE,
-            ctypes.wintypes.DWORD,
-            ctypes.wintypes.DWORD,
-        ]
-        _SetHandleInformation.restype = ctypes.wintypes.BOOL
     except ImportError:
         def prevent_socket_inheritance(sock):
             """Dummy function, since neither fcntl nor ctypes are available."""
@@ -1583,7 +1595,7 @@ except ImportError:
     else:
         def prevent_socket_inheritance(sock):
             """Mark the given socket fd as non-inheritable (Windows)."""
-            if not _SetHandleInformation(sock.fileno(), 1, 0):
+            if not windll.kernel32.SetHandleInformation(sock.fileno(), 1, 0):
                 raise WinError()
 else:
     def prevent_socket_inheritance(sock):
@@ -1647,7 +1659,7 @@ class HTTPServer(object):
     timeout = 10
     """The timeout in seconds for accepted connections (default 10)."""
 
-    version = "CherryPy/3.2.2"
+    version = "CherryPy/3.2.4"
     """A version string for the HTTPServer."""
 
     software = None
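The `ThreadPool.grow`/`shrink` rewrite in the hunks above replaces loop-and-break logic with explicit budget arithmetic. The arithmetic in isolation (helper names are mine, not part of the patch; `maximum <= 0` meaning "unbounded" is taken from it):

```python
def grow_budget(amount, current, maximum):
    """How many workers grow() may spawn: capped so the pool never
    exceeds `maximum`, unless maximum <= 0, which means no cap."""
    if maximum > 0:
        budget = max(maximum - current, 0)
    else:
        budget = float('inf')
    return min(amount, budget)

def shrink_count(amount, current, minimum):
    """How many shutdown requests shrink() should queue: never dip
    below `minimum`, never remove more than `amount`."""
    n_extra = max(current - minimum, 0)
    return min(amount, n_extra)

print(grow_budget(10, 8, 10))    # room for only 2 more workers
print(shrink_count(10, 12, 10))  # only 2 workers above the minimum
```

Expressing the caps as `min`/`max` over a computed budget makes the invariants (`min <= pool size <= max`) visible at a glance, where the old per-iteration `break` hid them inside the loop.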
diff --git a/translate/misc/wsgiserver/wsgiserver.py b/translate/misc/wsgiserver/wsgiserver3.py
similarity index 79%
rename from translate/misc/wsgiserver/wsgiserver.py
rename to translate/misc/wsgiserver/wsgiserver3.py
index cf34f2b..4bf0381 100644
--- a/translate/misc/wsgiserver/wsgiserver.py
+++ b/translate/misc/wsgiserver/wsgiserver3.py
@@ -70,7 +70,7 @@ number of requests and their responses, so we run a nested loop::
 
 __all__ = ['HTTPRequest', 'HTTPConnection', 'HTTPServer',
            'SizeCheckWrapper', 'KnownLengthRFile', 'ChunkedRFile',
-           'CP_fileobject',
+           'CP_makefile',
            'MaxSizeExceeded', 'NoSSLError', 'FatalSSLAlert',
            'WorkerThread', 'ThreadPool', 'SSLAdapter',
            'CherryPyWSGIServer',
@@ -83,34 +83,20 @@ try:
 except:
     import Queue as queue
 import re
-import rfc822
+import email.utils
 import socket
 import sys
 if 'win' in sys.platform and not hasattr(socket, 'IPPROTO_IPV6'):
     socket.IPPROTO_IPV6 = 41
-try:
-    import cStringIO as StringIO
-except ImportError:
-    import StringIO
-DEFAULT_BUFFER_SIZE = -1
-
-_fileobject_uses_str_type = isinstance(socket._fileobject(None)._rbuf, basestring)
+if sys.version_info < (3,1):
+    import io
+else:
+    import _pyio as io
+DEFAULT_BUFFER_SIZE = io.DEFAULT_BUFFER_SIZE
 
 import threading
 import time
-import traceback
-def format_exc(limit=None):
-    """Like print_exc() but return a string. Backport for Python 2.3."""
-    try:
-        etype, value, tb = sys.exc_info()
-        return ''.join(traceback.format_exception(etype, value, tb, limit))
-    finally:
-        etype = value = tb = None
-
-
-from urllib import unquote
-from urlparse import urlparse
-import warnings
+from traceback import format_exc
 
 if sys.version_info >= (3, 0):
     bytestr = bytes
@@ -233,7 +219,7 @@ def read_headers(rfile, hdict=None):
         if k in comma_separated_headers:
             existing = hdict.get(hname)
             if existing:
-                v = ", ".join((existing, v))
+                v = b", ".join((existing, v))
         hdict[hname] = v
 
     return hdict
@@ -276,7 +262,7 @@ class SizeCheckWrapper(object):
             self._check_length()
             res.append(data)
             # See http://www.cherrypy.org/ticket/421
-            if len(data) < 256 or data[-1:] == "\n":
+            if len(data) < 256 or data[-1:].decode() == "\n":
                 return EMPTY.join(res)
 
     def readlines(self, sizehint=0):
@@ -320,7 +306,7 @@ class KnownLengthRFile(object):
 
     def read(self, size=None):
         if self.remaining == 0:
-            return ''
+            return b''
         if size is None:
             size = self.remaining
         else:
@@ -332,7 +318,7 @@ class KnownLengthRFile(object):
 
     def readline(self, size=None):
         if self.remaining == 0:
-            return ''
+            return b''
         if size is None:
             size = self.remaining
         else:
@@ -631,8 +617,9 @@ class HTTPRequest(object):
 
         try:
             method, uri, req_protocol = request_line.strip().split(SPACE, 2)
-            rp = int(req_protocol[5]), int(req_protocol[7])
-        except (ValueError, IndexError):
+            # Slice with [x:y] rather than index: on Python 3, indexing a
+            # byte string yields an int (the ordinal), not a one-byte string
+            rp = int(req_protocol[5:6]), int(req_protocol[7:8])
+        except ValueError:
             self.simple_response("400 Bad Request", "Malformed Request-Line")
             return False
 
@@ -661,12 +648,12 @@ class HTTPRequest(object):
         # safely decoded." http://www.ietf.org/rfc/rfc2396.txt, sec 2.4.2
         # Therefore, "/this%2Fpath" becomes "/this%2Fpath", not "/this/path".
         try:
-            atoms = [unquote(x) for x in quoted_slash.split(path)]
+            atoms = [self.unquote_bytes(x) for x in quoted_slash.split(path)]
         except ValueError:
             ex = sys.exc_info()[1]
             self.simple_response("400 Bad Request", ex.args[0])
             return False
-        path = "%2F".join(atoms)
+        path = b"%2F".join(atoms)
         self.path = path
 
         # Note that, like wsgiref and most other HTTP servers,
@@ -685,7 +672,8 @@ class HTTPRequest(object):
         # Notice that, in (b), the response will be "HTTP/1.1" even though
         # the client only understands 1.0. RFC 2616 10.5.6 says we should
         # only return 505 if the _major_ version is different.
-        sp = int(self.server.protocol[5]), int(self.server.protocol[7])
+        # Slice with [x:y] rather than index: on Python 3, indexing a
+        # byte string yields an int (the ordinal), not a one-byte string
+        sp = int(self.server.protocol[5:6]), int(self.server.protocol[7:8])
 
         if sp[0] != rp[0]:
             self.simple_response("505 HTTP Version Not Supported")
@@ -693,7 +681,6 @@ class HTTPRequest(object):
 
         self.request_protocol = req_protocol
         self.response_protocol = "HTTP/%s.%s" % min(rp, sp)
-
         return True
 
     def read_request_headers(self):
@@ -708,7 +695,7 @@ class HTTPRequest(object):
             return False
 
         mrbs = self.server.max_request_body_size
-        if mrbs and int(self.inheaders.get("Content-Length", 0)) > mrbs:
+        if mrbs and int(self.inheaders.get(b"Content-Length", 0)) > mrbs:
             self.simple_response("413 Request Entity Too Large",
                 "The entity sent with the request exceeds the maximum "
                 "allowed bytes.")
@@ -717,25 +704,25 @@ class HTTPRequest(object):
         # Persistent connection support
         if self.response_protocol == "HTTP/1.1":
             # Both server and client are HTTP/1.1
-            if self.inheaders.get("Connection", "") == "close":
+            if self.inheaders.get(b"Connection", b"") == b"close":
                 self.close_connection = True
         else:
             # Either the server or client (or both) are HTTP/1.0
-            if self.inheaders.get("Connection", "") != "Keep-Alive":
+            if self.inheaders.get(b"Connection", b"") != b"Keep-Alive":
                 self.close_connection = True
 
         # Transfer-Encoding support
         te = None
         if self.response_protocol == "HTTP/1.1":
-            te = self.inheaders.get("Transfer-Encoding")
+            te = self.inheaders.get(b"Transfer-Encoding")
             if te:
-                te = [x.strip().lower() for x in te.split(",") if x.strip()]
+                te = [x.strip().lower() for x in te.split(b",") if x.strip()]
 
         self.chunked_read = False
 
         if te:
             for enc in te:
-                if enc == "chunked":
+                if enc == b"chunked":
                     self.chunked_read = True
                 else:
                     # Note that, even if we see "chunked", we must reject
@@ -761,12 +748,12 @@ class HTTPRequest(object):
         #
         # We used to do 3, but are now doing 1. Maybe we'll do 2 someday,
         # but it seems like it would be a big slowdown for such a rare case.
-        if self.inheaders.get("Expect", "") == "100-continue":
+        if self.inheaders.get(b"Expect", b"") == b"100-continue":
             # Don't use simple_response here, because it emits headers
             # we don't want. See http://www.cherrypy.org/ticket/951
-            msg = self.server.protocol + " 100 Continue\r\n\r\n"
+            msg = self.server.protocol.encode('ascii') + b" 100 Continue\r\n\r\n"
             try:
-                self.conn.wfile.sendall(msg)
+                self.conn.wfile.write(msg)
             except socket.error:
                 x = sys.exc_info()[1]
                 if x.args[0] not in socket_errors_to_ignore:
@@ -796,15 +783,13 @@ class HTTPRequest(object):
         if uri == ASTERISK:
             return None, None, uri
 
-        i = uri.find('://')
-        if i > 0 and QUESTION_MARK not in uri[:i]:
+        scheme, sep, remainder = uri.partition(b'://')
+        if sep and QUESTION_MARK not in scheme:
             # An absoluteURI.
             # If there's a scheme (and it must be http or https), then:
             # http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
-            scheme, remainder = uri[:i].lower(), uri[i + 3:]
-            authority, path = remainder.split(FORWARD_SLASH, 1)
-            path = FORWARD_SLASH + path
-            return scheme, authority, path
+            authority, path_a, path_b = remainder.partition(FORWARD_SLASH)
+            return scheme.lower(), authority, path_a+path_b
 
         if uri.startswith(FORWARD_SLASH):
             # An abs_path.
@@ -813,13 +798,25 @@ class HTTPRequest(object):
             # An authority.
             return None, uri, None
 
+    def unquote_bytes(self, path):
+        """Take a quoted byte string and unquote its %-encoded values."""
+        res = path.split(b'%')
+
+        for i in range(1, len(res)):
+            item = res[i]
+            try:
+                res[i] = bytes([int(item[:2], 16)]) + item[2:]
+            except ValueError:
+                raise
+        return b''.join(res)
+
     def respond(self):
         """Call the gateway and write its iterable output."""
         mrbs = self.server.max_request_body_size
         if self.chunked_read:
             self.rfile = ChunkedRFile(self.conn.rfile, mrbs)
         else:
-            cl = int(self.inheaders.get("Content-Length", 0))
+            cl = int(self.inheaders.get(b"Content-Length", 0))
             if mrbs and mrbs < cl:
                 if not self.sent_headers:
                     self.simple_response("413 Request Entity Too Large",
@@ -834,15 +831,15 @@ class HTTPRequest(object):
             self.sent_headers = True
             self.send_headers()
         if self.chunked_write:
-            self.conn.wfile.sendall("0\r\n\r\n")
+            self.conn.wfile.write(b"0\r\n\r\n")
 
     def simple_response(self, status, msg=""):
         """Write a simple response back to the client."""
         status = str(status)
-        buf = [self.server.protocol + SPACE +
-               status + CRLF,
-               "Content-Length: %s\r\n" % len(msg),
-               "Content-Type: text/plain\r\n"]
+        buf = [bytes(self.server.protocol, "ascii") + SPACE +
+               bytes(status, "ISO-8859-1") + CRLF,
+               bytes("Content-Length: %s\r\n" % len(msg), "ISO-8859-1"),
+               b"Content-Type: text/plain\r\n"]
 
         if status[:3] in ("413", "414"):
             # Request Entity Too Large / Request-URI Too Long
@@ -851,7 +848,7 @@ class HTTPRequest(object):
                 # This will not be true for 414, since read_request_line
                 # usually raises 414 before reading the whole line, and we
                 # therefore cannot know the proper response_protocol.
-                buf.append("Connection: close\r\n")
+                buf.append(b"Connection: close\r\n")
             else:
                 # HTTP/1.0 had no 413/414 status nor Connection header.
                 # Emit 400 instead and trust the message body is enough.
@@ -864,7 +861,7 @@ class HTTPRequest(object):
             buf.append(msg)
 
         try:
-            self.conn.wfile.sendall("".join(buf))
+            self.conn.wfile.write(b"".join(buf))
         except socket.error:
             x = sys.exc_info()[1]
             if x.args[0] not in socket_errors_to_ignore:
@@ -873,10 +870,10 @@ class HTTPRequest(object):
     def write(self, chunk):
         """Write unbuffered data to the client."""
         if self.chunked_write and chunk:
-            buf = [hex(len(chunk))[2:], CRLF, chunk, CRLF]
-            self.conn.wfile.sendall(EMPTY.join(buf))
+            buf = [bytes(hex(len(chunk)), 'ASCII')[2:], CRLF, chunk, CRLF]
+            self.conn.wfile.write(EMPTY.join(buf))
         else:
-            self.conn.wfile.sendall(chunk)
+            self.conn.wfile.write(chunk)
 
     def send_headers(self):
         """Assert, process, and send the HTTP response message-headers.
@@ -889,7 +886,7 @@ class HTTPRequest(object):
         if status == 413:
             # Request Entity Too Large. Close conn to avoid garbage.
             self.close_connection = True
-        elif "content-length" not in hkeys:
+        elif b"content-length" not in hkeys:
             # "All 1xx (informational), 204 (no content),
             # and 304 (not modified) responses MUST NOT
             # include a message-body." So no point chunking.
@@ -897,23 +894,23 @@ class HTTPRequest(object):
                 pass
             else:
                 if (self.response_protocol == 'HTTP/1.1'
-                    and self.method != 'HEAD'):
+                    and self.method != b'HEAD'):
                     # Use the chunked transfer-coding
                     self.chunked_write = True
-                    self.outheaders.append(("Transfer-Encoding", "chunked"))
+                    self.outheaders.append((b"Transfer-Encoding", b"chunked"))
                 else:
                     # Closing the conn is the only way to determine len.
                     self.close_connection = True
 
-        if "connection" not in hkeys:
+        if b"connection" not in hkeys:
             if self.response_protocol == 'HTTP/1.1':
                 # Both server and client are HTTP/1.1 or better
                 if self.close_connection:
-                    self.outheaders.append(("Connection", "close"))
+                    self.outheaders.append((b"Connection", b"close"))
             else:
                 # Server and/or client are HTTP/1.0
                 if not self.close_connection:
-                    self.outheaders.append(("Connection", "Keep-Alive"))
+                    self.outheaders.append((b"Connection", b"Keep-Alive"))
 
         if (not self.close_connection) and (not self.chunked_read):
             # Read any remaining request body data on the socket.
@@ -932,17 +929,19 @@ class HTTPRequest(object):
             if remaining > 0:
                 self.rfile.read(remaining)
 
-        if "date" not in hkeys:
-            self.outheaders.append(("Date", rfc822.formatdate()))
+        if b"date" not in hkeys:
+            self.outheaders.append(
+                (b"Date", email.utils.formatdate(usegmt=True).encode('ISO-8859-1')))
 
-        if "server" not in hkeys:
-            self.outheaders.append(("Server", self.server.server_name))
+        if b"server" not in hkeys:
+            self.outheaders.append(
+                (b"Server", self.server.server_name.encode('ISO-8859-1')))
 
-        buf = [self.server.protocol + SPACE + self.status + CRLF]
+        buf = [self.server.protocol.encode('ascii') + SPACE + self.status + CRLF]
         for k, v in self.outheaders:
             buf.append(k + COLON + SPACE + v + CRLF)
         buf.append(CRLF)
-        self.conn.wfile.sendall(EMPTY.join(buf))
+        self.conn.wfile.write(EMPTY.join(buf))
 
 
 class NoSSLError(Exception):
@@ -955,305 +954,36 @@ class FatalSSLAlert(Exception):
     pass
 
 
-class CP_fileobject(socket._fileobject):
+class CP_BufferedWriter(io.BufferedWriter):
     """Faux file object attached to a socket object."""
 
-    def __init__(self, *args, **kwargs):
-        self.bytes_read = 0
-        self.bytes_written = 0
-        socket._fileobject.__init__(self, *args, **kwargs)
-
-    def sendall(self, data):
-        """Sendall for non-blocking sockets."""
-        while data:
-            try:
-                bytes_sent = self.send(data)
-                data = data[bytes_sent:]
-            except socket.error, e:
-                if e.args[0] not in socket_errors_nonblocking:
-                    raise
-
-    def send(self, data):
-        bytes_sent = self._sock.send(data)
-        self.bytes_written += bytes_sent
-        return bytes_sent
+    def write(self, b):
+        self._checkClosed()
+        if isinstance(b, str):
+            raise TypeError("can't write str to binary stream")
 
-    def flush(self):
-        if self._wbuf:
-            buffer = "".join(self._wbuf)
-            self._wbuf = []
-            self.sendall(buffer)
+        with self._write_lock:
+            self._write_buf.extend(b)
+            self._flush_unlocked()
+            return len(b)
 
-    def recv(self, size):
-        while True:
+    def _flush_unlocked(self):
+        self._checkClosed("flush of closed file")
+        while self._write_buf:
             try:
-                data = self._sock.recv(size)
-                self.bytes_read += len(data)
-                return data
-            except socket.error, e:
-                if (e.args[0] not in socket_errors_nonblocking
-                    and e.args[0] not in socket_error_eintr):
-                    raise
+                # ssl sockets only accept 'bytes', not bytearrays
+                # so perhaps we should conditionally wrap this for perf?
+                n = self.raw.write(bytes(self._write_buf))
+            except io.BlockingIOError as e:
+                n = e.characters_written
+            del self._write_buf[:n]
 
-    if not _fileobject_uses_str_type:
-        def read(self, size=-1):
-            # Use max, disallow tiny reads in a loop as they are very inefficient.
-            # We never leave read() with any leftover data from a new recv() call
-            # in our internal buffer.
-            rbufsize = max(self._rbufsize, self.default_bufsize)
-            # Our use of StringIO rather than lists of string objects returned by
-            # recv() minimizes memory usage and fragmentation that occurs when
-            # rbufsize is large compared to the typical return value of recv().
-            buf = self._rbuf
-            buf.seek(0, 2)  # seek end
-            if size < 0:
-                # Read until EOF
-                self._rbuf = StringIO.StringIO()  # reset _rbuf.  we consume it via buf.
-                while True:
-                    data = self.recv(rbufsize)
-                    if not data:
-                        break
-                    buf.write(data)
-                return buf.getvalue()
-            else:
-                # Read until size bytes or EOF seen, whichever comes first
-                buf_len = buf.tell()
-                if buf_len >= size:
-                    # Already have size bytes in our buffer?  Extract and return.
-                    buf.seek(0)
-                    rv = buf.read(size)
-                    self._rbuf = StringIO.StringIO()
-                    self._rbuf.write(buf.read())
-                    return rv
-
-                self._rbuf = StringIO.StringIO()  # reset _rbuf.  we consume it via buf.
-                while True:
-                    left = size - buf_len
-                    # recv() will malloc the amount of memory given as its
-                    # parameter even though it often returns much less data
-                    # than that.  The returned data string is short lived
-                    # as we copy it into a StringIO and free it.  This avoids
-                    # fragmentation issues on many platforms.
-                    data = self.recv(left)
-                    if not data:
-                        break
-                    n = len(data)
-                    if n == size and not buf_len:
-                        # Shortcut.  Avoid buffer data copies when:
-                        # - We have no data in our buffer.
-                        # AND
-                        # - Our call to recv returned exactly the
-                        #   number of bytes we were asked to read.
-                        return data
-                    if n == left:
-                        buf.write(data)
-                        del data  # explicit free
-                        break
-                    assert n <= left, "recv(%d) returned %d bytes" % (left, n)
-                    buf.write(data)
-                    buf_len += n
-                    del data  # explicit free
-                    #assert buf_len == buf.tell()
-                return buf.getvalue()
-
-        def readline(self, size=-1):
-            buf = self._rbuf
-            buf.seek(0, 2)  # seek end
-            if buf.tell() > 0:
-                # check if we already have it in our buffer
-                buf.seek(0)
-                bline = buf.readline(size)
-                if bline.endswith('\n') or len(bline) == size:
-                    self._rbuf = StringIO.StringIO()
-                    self._rbuf.write(buf.read())
-                    return bline
-                del bline
-            if size < 0:
-                # Read until \n or EOF, whichever comes first
-                if self._rbufsize <= 1:
-                    # Speed up unbuffered case
-                    buf.seek(0)
-                    buffers = [buf.read()]
-                    self._rbuf = StringIO.StringIO()  # reset _rbuf.  we consume it via buf.
-                    data = None
-                    recv = self.recv
-                    while data != "\n":
-                        data = recv(1)
-                        if not data:
-                            break
-                        buffers.append(data)
-                    return "".join(buffers)
-
-                buf.seek(0, 2)  # seek end
-                self._rbuf = StringIO.StringIO()  # reset _rbuf.  we consume it via buf.
-                while True:
-                    data = self.recv(self._rbufsize)
-                    if not data:
-                        break
-                    nl = data.find('\n')
-                    if nl >= 0:
-                        nl += 1
-                        buf.write(data[:nl])
-                        self._rbuf.write(data[nl:])
-                        del data
-                        break
-                    buf.write(data)
-                return buf.getvalue()
-            else:
-                # Read until size bytes or \n or EOF seen, whichever comes first
-                buf.seek(0, 2)  # seek end
-                buf_len = buf.tell()
-                if buf_len >= size:
-                    buf.seek(0)
-                    rv = buf.read(size)
-                    self._rbuf = StringIO.StringIO()
-                    self._rbuf.write(buf.read())
-                    return rv
-                self._rbuf = StringIO.StringIO()  # reset _rbuf.  we consume it via buf.
-                while True:
-                    data = self.recv(self._rbufsize)
-                    if not data:
-                        break
-                    left = size - buf_len
-                    # did we just receive a newline?
-                    nl = data.find('\n', 0, left)
-                    if nl >= 0:
-                        nl += 1
-                        # save the excess data to _rbuf
-                        self._rbuf.write(data[nl:])
-                        if buf_len:
-                            buf.write(data[:nl])
-                            break
-                        else:
-                            # Shortcut.  Avoid data copy through buf when returning
-                            # a substring of our first recv().
-                            return data[:nl]
-                    n = len(data)
-                    if n == size and not buf_len:
-                        # Shortcut.  Avoid data copy through buf when
-                        # returning exactly all of our first recv().
-                        return data
-                    if n >= left:
-                        buf.write(data[:left])
-                        self._rbuf.write(data[left:])
-                        break
-                    buf.write(data)
-                    buf_len += n
-                    #assert buf_len == buf.tell()
-                return buf.getvalue()
-    else:
-        def read(self, size=-1):
-            if size < 0:
-                # Read until EOF
-                buffers = [self._rbuf]
-                self._rbuf = ""
-                if self._rbufsize <= 1:
-                    recv_size = self.default_bufsize
-                else:
-                    recv_size = self._rbufsize
-
-                while True:
-                    data = self.recv(recv_size)
-                    if not data:
-                        break
-                    buffers.append(data)
-                return "".join(buffers)
-            else:
-                # Read until size bytes or EOF seen, whichever comes first
-                data = self._rbuf
-                buf_len = len(data)
-                if buf_len >= size:
-                    self._rbuf = data[size:]
-                    return data[:size]
-                buffers = []
-                if data:
-                    buffers.append(data)
-                self._rbuf = ""
-                while True:
-                    left = size - buf_len
-                    recv_size = max(self._rbufsize, left)
-                    data = self.recv(recv_size)
-                    if not data:
-                        break
-                    buffers.append(data)
-                    n = len(data)
-                    if n >= left:
-                        self._rbuf = data[left:]
-                        buffers[-1] = data[:left]
-                        break
-                    buf_len += n
-                return "".join(buffers)
-
-        def readline(self, size=-1):
-            data = self._rbuf
-            if size < 0:
-                # Read until \n or EOF, whichever comes first
-                if self._rbufsize <= 1:
-                    # Speed up unbuffered case
-                    assert data == ""
-                    buffers = []
-                    while data != "\n":
-                        data = self.recv(1)
-                        if not data:
-                            break
-                        buffers.append(data)
-                    return "".join(buffers)
-                nl = data.find('\n')
-                if nl >= 0:
-                    nl += 1
-                    self._rbuf = data[nl:]
-                    return data[:nl]
-                buffers = []
-                if data:
-                    buffers.append(data)
-                self._rbuf = ""
-                while True:
-                    data = self.recv(self._rbufsize)
-                    if not data:
-                        break
-                    buffers.append(data)
-                    nl = data.find('\n')
-                    if nl >= 0:
-                        nl += 1
-                        self._rbuf = data[nl:]
-                        buffers[-1] = data[:nl]
-                        break
-                return "".join(buffers)
-            else:
-                # Read until size bytes or \n or EOF seen, whichever comes first
-                nl = data.find('\n', 0, size)
-                if nl >= 0:
-                    nl += 1
-                    self._rbuf = data[nl:]
-                    return data[:nl]
-                buf_len = len(data)
-                if buf_len >= size:
-                    self._rbuf = data[size:]
-                    return data[:size]
-                buffers = []
-                if data:
-                    buffers.append(data)
-                self._rbuf = ""
-                while True:
-                    data = self.recv(self._rbufsize)
-                    if not data:
-                        break
-                    buffers.append(data)
-                    left = size - buf_len
-                    nl = data.find('\n', 0, left)
-                    if nl >= 0:
-                        nl += 1
-                        self._rbuf = data[nl:]
-                        buffers[-1] = data[:nl]
-                        break
-                    n = len(data)
-                    if n >= left:
-                        self._rbuf = data[left:]
-                        buffers[-1] = data[:left]
-                        break
-                    buf_len += n
-                return "".join(buffers)
 
+def CP_makefile(sock, mode='r', bufsize=DEFAULT_BUFFER_SIZE):
+    if 'r' in mode:
+        return io.BufferedReader(socket.SocketIO(sock, mode), bufsize)
+    else:
+        return CP_BufferedWriter(socket.SocketIO(sock, mode), bufsize)
 
 class HTTPConnection(object):
     """An HTTP connection (active socket).
@@ -1270,7 +1000,7 @@ class HTTPConnection(object):
     wbufsize = DEFAULT_BUFFER_SIZE
     RequestHandlerClass = HTTPRequest
 
-    def __init__(self, server, sock, makefile=CP_fileobject):
+    def __init__(self, server, sock, makefile=CP_makefile):
         self.server = server
         self.socket = sock
         self.rfile = makefile(sock, "rb", self.rbufsize)
@@ -1338,7 +1068,7 @@ class HTTPConnection(object):
         except NoSSLError:
             if req and not req.sent_headers:
                 # Unwrap our wfile
-                self.wfile = CP_fileobject(self.socket._sock, "wb", self.wbufsize)
+                self.wfile = CP_makefile(self.socket._sock, "wb", self.wbufsize)
                 req.simple_response("400 Bad Request",
                     "The client sent a plain HTTP request, but "
                     "this server only speaks HTTPS on this port.")
@@ -1365,8 +1095,8 @@ class HTTPConnection(object):
             # want this server to send a FIN TCP segment immediately. Note this
             # must be called *before* calling socket.close(), because the latter
             # drops its reference to the kernel socket.
-            if hasattr(self.socket, '_sock'):
-                self.socket._sock.close()
+            # Python 3 *probably* fixed this with socket._real_close; hard to tell.
+##            self.socket._sock.close()
             self.socket.close()
         else:
             # On the other hand, sometimes we want to hang around for a bit
@@ -1495,13 +1225,24 @@ class ThreadPool(object):
 
     def grow(self, amount):
         """Spawn new worker threads (not above self.max)."""
-        for i in range(amount):
-            if self.max > 0 and len(self._threads) >= self.max:
-                break
-            worker = WorkerThread(self.server)
-            worker.setName("CP Server " + worker.getName())
-            self._threads.append(worker)
-            worker.start()
+        if self.max > 0:
+            budget = max(self.max - len(self._threads), 0)
+        else:
+            # self.max <= 0 indicates no maximum
+            budget = float('inf')
+
+        n_new = min(amount, budget)
+
+        workers = [self._spawn_worker() for i in range(n_new)]
+        while not all(worker.ready for worker in workers):
+            time.sleep(.1)
+        self._threads.extend(workers)
+
+    def _spawn_worker(self):
+        worker = WorkerThread(self.server)
+        worker.setName("CP Server " + worker.getName())
+        worker.start()
+        return worker
 
     def shrink(self, amount):
         """Kill off worker threads (not below self.min)."""
@@ -1512,13 +1253,17 @@ class ThreadPool(object):
                 self._threads.remove(t)
                 amount -= 1
 
-        if amount > 0:
-            for i in range(min(amount, len(self._threads) - self.min)):
-                # Put a number of shutdown requests on the queue equal
-                # to 'amount'. Once each of those is processed by a worker,
-                # that worker will terminate and be culled from our list
-                # in self.put.
-                self._queue.put(_SHUTDOWNREQUEST)
+        # calculate the number of threads above the minimum
+        n_extra = max(len(self._threads) - self.min, 0)
+
+        # don't remove more than amount
+        n_to_remove = min(amount, n_extra)
+
+        # put shutdown requests on the queue equal to the number of threads
+        # to remove. As each request is processed by a worker, that worker
+        # will terminate and be culled from the list.
+        for n in range(n_to_remove):
+            self._queue.put(_SHUTDOWNREQUEST)
 
     def stop(self, timeout=5):
         # Must shut down threads here so the code that calls
@@ -1568,14 +1313,6 @@ try:
 except ImportError:
     try:
         from ctypes import windll, WinError
-        import ctypes.wintypes
-        _SetHandleInformation = windll.kernel32.SetHandleInformation
-        _SetHandleInformation.argtypes = [
-            ctypes.wintypes.HANDLE,
-            ctypes.wintypes.DWORD,
-            ctypes.wintypes.DWORD,
-        ]
-        _SetHandleInformation.restype = ctypes.wintypes.BOOL
     except ImportError:
         def prevent_socket_inheritance(sock):
             """Dummy function, since neither fcntl nor ctypes are available."""
@@ -1583,7 +1320,7 @@ except ImportError:
     else:
         def prevent_socket_inheritance(sock):
             """Mark the given socket fd as non-inheritable (Windows)."""
-            if not _SetHandleInformation(sock.fileno(), 1, 0):
+            if not windll.kernel32.SetHandleInformation(sock.fileno(), 1, 0):
                 raise WinError()
 else:
     def prevent_socket_inheritance(sock):
@@ -1647,7 +1384,7 @@ class HTTPServer(object):
     timeout = 10
     """The timeout in seconds for accepted connections (default 10)."""
 
-    version = "CherryPy/3.2.2"
+    version = "CherryPy/3.2.4"
     """A version string for the HTTPServer."""
 
     software = None
@@ -1769,25 +1506,6 @@ class HTTPServer(object):
         if self.software is None:
             self.software = "%s Server" % self.version
 
-        # SSL backward compatibility
-        if (self.ssl_adapter is None and
-            getattr(self, 'ssl_certificate', None) and
-            getattr(self, 'ssl_private_key', None)):
-            warnings.warn(
-                    "SSL attributes are deprecated in CherryPy 3.2, and will "
-                    "be removed in CherryPy 3.3. Use an ssl_adapter attribute "
-                    "instead.",
-                    DeprecationWarning
-                )
-            try:
-                from translate.misc.wsgiserver.ssl_pyopenssl import pyOpenSSLAdapter
-            except ImportError:
-                pass
-            else:
-                self.ssl_adapter = pyOpenSSLAdapter(
-                    self.ssl_certificate, self.ssl_private_key,
-                    getattr(self, 'ssl_certificate_chain', None))
-
         # Select the appropriate socket
         if isinstance(self.bind_addr, basestring):
             # AF_UNIX socket
@@ -1848,7 +1566,6 @@ class HTTPServer(object):
             except:
                 self.error_log("Error in HTTPServer.tick", level=logging.ERROR,
                                traceback=True)
-
             if self.interrupt:
                 while self.interrupt is True:
                     # Wait for self.stop() to complete. See _set_interrupt.
@@ -1902,7 +1619,7 @@ class HTTPServer(object):
             if hasattr(s, 'settimeout'):
                 s.settimeout(self.timeout)
 
-            makefile = CP_fileobject
+            makefile = CP_makefile
             ssl_env = {}
             # if ssl cert and key are set, we try to be a secure HTTP server
             if self.ssl_adapter is not None:
@@ -1918,7 +1635,7 @@ class HTTPServer(object):
 
                     wfile = makefile(s, "wb", DEFAULT_BUFFER_SIZE)
                     try:
-                        wfile.sendall("".join(buf))
+                        wfile.write("".join(buf).encode('ISO-8859-1'))
                     except socket.error:
                         x = sys.exc_info()[1]
                         if x.args[0] not in socket_errors_to_ignore:
@@ -2045,10 +1762,9 @@ class Gateway(object):
 # of such classes (in which case they will be lazily loaded).
 ssl_adapters = {
     'builtin': 'translate.misc.wsgiserver.ssl_builtin.BuiltinSSLAdapter',
-    'pyopenssl': 'translate.misc.wsgiserver.ssl_pyopenssl.pyOpenSSLAdapter',
     }
 
-def get_ssl_adapter_class(name='pyopenssl'):
+def get_ssl_adapter_class(name='builtin'):
     """Return an SSL adapter class for the given name."""
     adapter = ssl_adapters[name.lower()]
     if isinstance(adapter, basestring):
@@ -2151,11 +1867,18 @@ class WSGIGateway(Gateway):
         # exc_info tuple."
         if self.req.sent_headers:
             try:
-                raise exc_info[0], exc_info[1], exc_info[2]
+                raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
             finally:
                 exc_info = None
 
-        self.req.status = status
+        # According to PEP 3333, when using Python 3, the response status
+        # and headers must be bytes masquerading as unicode; that is, they
+        # must be of type "str" but are restricted to code points in the
+        # "latin-1" set.
+        if not isinstance(status, str):
+            raise TypeError("WSGI response status is not of type str.")
+        self.req.status = status.encode('ISO-8859-1')
+
         for k, v in headers:
             if not isinstance(k, str):
                 raise TypeError("WSGI response header key %r is not of type str." % k)
@@ -2163,7 +1886,7 @@ class WSGIGateway(Gateway):
                 raise TypeError("WSGI response header value %r is not of type str." % v)
             if k.lower() == 'content-length':
                 self.remaining_bytes_out = int(v)
-        self.req.outheaders.extend(headers)
+            self.req.outheaders.append((k.encode('ISO-8859-1'), v.encode('ISO-8859-1')))
 
         return self.write
 
@@ -2213,23 +1936,23 @@ class WSGIGateway_10(WSGIGateway):
             # the *real* server protocol is (and what features to support).
             # See http://www.faqs.org/rfcs/rfc2145.html.
             'ACTUAL_SERVER_PROTOCOL': req.server.protocol,
-            'PATH_INFO': req.path,
-            'QUERY_STRING': req.qs,
+            'PATH_INFO': req.path.decode('ISO-8859-1'),
+            'QUERY_STRING': req.qs.decode('ISO-8859-1'),
             'REMOTE_ADDR': req.conn.remote_addr or '',
             'REMOTE_PORT': str(req.conn.remote_port or ''),
-            'REQUEST_METHOD': req.method,
+            'REQUEST_METHOD': req.method.decode('ISO-8859-1'),
             'REQUEST_URI': req.uri,
             'SCRIPT_NAME': '',
             'SERVER_NAME': req.server.server_name,
             # Bah. "SERVER_PROTOCOL" is actually the REQUEST protocol.
-            'SERVER_PROTOCOL': req.request_protocol,
+            'SERVER_PROTOCOL': req.request_protocol.decode('ISO-8859-1'),
             'SERVER_SOFTWARE': req.server.software,
             'wsgi.errors': sys.stderr,
             'wsgi.input': req.rfile,
             'wsgi.multiprocess': False,
             'wsgi.multithread': True,
             'wsgi.run_once': False,
-            'wsgi.url_scheme': req.scheme,
+            'wsgi.url_scheme': req.scheme.decode('ISO-8859-1'),
             'wsgi.version': (1, 0),
             }
 
@@ -2241,8 +1964,9 @@ class WSGIGateway_10(WSGIGateway):
             env["SERVER_PORT"] = str(req.server.bind_addr[1])
 
         # Request headers
-        for k, v in req.inheaders.iteritems():
-            env["HTTP_" + k.upper().replace("-", "_")] = v
+        for k, v in req.inheaders.items():
+            k = k.decode('ISO-8859-1').upper().replace("-", "_")
+            env["HTTP_" + k] = v.decode('ISO-8859-1')
 
         # CONTENT_TYPE/CONTENT_LENGTH
         ct = env.pop("HTTP_CONTENT_TYPE", None)
@@ -2269,23 +1993,20 @@ class WSGIGateway_u0(WSGIGateway_10):
         """Return a new environ dict targeting the given wsgi.version"""
         req = self.req
         env_10 = WSGIGateway_10.get_environ(self)
-        env = dict([(k.decode('ISO-8859-1'), v) for k, v in env_10.iteritems()])
-        env[u'wsgi.version'] = ('u', 0)
+        env = env_10.copy()
+        env['wsgi.version'] = ('u', 0)
 
         # Request-URI
-        env.setdefault(u'wsgi.url_encoding', u'utf-8')
+        env.setdefault('wsgi.url_encoding', 'utf-8')
         try:
-            for key in [u"PATH_INFO", u"SCRIPT_NAME", u"QUERY_STRING"]:
-                env[key] = env_10[str(key)].decode(env[u'wsgi.url_encoding'])
+            # SCRIPT_NAME is the empty string, who cares what encoding it is?
+            env["PATH_INFO"] = req.path.decode(env['wsgi.url_encoding'])
+            env["QUERY_STRING"] = req.qs.decode(env['wsgi.url_encoding'])
         except UnicodeDecodeError:
             # Fall back to latin 1 so apps can transcode if needed.
-            env[u'wsgi.url_encoding'] = u'ISO-8859-1'
-            for key in [u"PATH_INFO", u"SCRIPT_NAME", u"QUERY_STRING"]:
-                env[key] = env_10[str(key)].decode(env[u'wsgi.url_encoding'])
-
-        for k, v in sorted(env.items()):
-            if isinstance(v, str) and k not in ('REQUEST_URI', 'wsgi.input'):
-                env[k] = v.decode('ISO-8859-1')
+            env['wsgi.url_encoding'] = 'ISO-8859-1'
+            env["PATH_INFO"] = env_10["PATH_INFO"]
+            env["QUERY_STRING"] = env_10["QUERY_STRING"]
 
         return env
 
@@ -2307,7 +2028,7 @@ class WSGIPathInfoDispatcher(object):
             pass
 
         # Sort the apps by len(path), descending
-        apps.sort(cmp=lambda x,y: cmp(len(x[0]), len(y[0])))
+        apps.sort()
         apps.reverse()
 
         # The path_prefix strings must start, but not end, with a slash.
diff --git a/translate/misc/xml_helpers.py b/translate/misc/xml_helpers.py
index 80f09e8..9d8a8ca 100644
--- a/translate/misc/xml_helpers.py
+++ b/translate/misc/xml_helpers.py
@@ -24,6 +24,7 @@ import re
 
 from lxml import etree
 
+
 # some useful xpath expressions
 xml_preserve_ancestors = etree.XPath("ancestor-or-self::*[attribute::xml:space='preserve']")
 """All ancestors with xml:space='preserve'"""
diff --git a/translate/misc/xmlwrapper.py b/translate/misc/xmlwrapper.py
deleted file mode 100644
index 02bd4f4..0000000
--- a/translate/misc/xmlwrapper.py
+++ /dev/null
@@ -1,159 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Copyright 2004, 2005 Zuza Software Foundation
-#
-# This file is part of translate.
-#
-# translate is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-
-"""simpler wrapper to the elementtree XML parser"""
-
-import sys
-try:
-    from xml.etree import ElementTree
-except ImportError:
-    from elementtree import ElementTree
-    elementmod = 'elementtree'
-else:
-    elementmod = 'xml.etree'
-
-# this is needed to prevent expat-version conflicts with wx >= 2.5.2.2
-from xml.parsers import expat
-
-# don't try this in Sphinx autodoc as xml.etree is Mock()ed
-if sys.modules[elementmod].__path__ != '/dev/null':
-    basicfixtag = ElementTree.fixtag
-
-
-def makefixtagproc(namespacemap):
-    """Constructs an alternative fixtag procedure that will use appropriate
-    names for namespaces."""
-
-    def fixtag(tag, namespaces):
-        """Given a decorated tag (of the form {uri}tag), return prefixed
-        tag and namespace declaration, if any."""
-        if isinstance(tag, ElementTree.QName):
-            tag = tag.text
-        namespace_uri, tag = tag[1:].split("}", 1)
-        prefix = namespaces.get(namespace_uri)
-        if prefix is None:
-            if namespace_uri in namespacemap:
-                prefix = namespacemap[namespace_uri]
-            else:
-                prefix = "ns%d" % len(namespaces)
-            namespaces[namespace_uri] = prefix
-            xmlns = ("xmlns:%s" % prefix, namespace_uri)
-        else:
-            xmlns = None
-        return "%s:%s" % (prefix, tag), xmlns
-    return fixtag
-
-
-def splitnamespace(fulltag):
-    if '{' in fulltag:
-        namespace = fulltag[fulltag.find('{'):fulltag.find('}')+1]
-    else:
-        namespace = ""
-    tag = fulltag.replace(namespace, "", 1)
-    return namespace, tag
-
-
-class XMLWrapper:
-    """simple wrapper for xml objects"""
-
-    def __init__(self, obj):
-        """construct object from the elementtree item"""
-        self.obj = obj
-        self.namespace, self.tag = splitnamespace(self.obj.tag)
-        self.attrib = {}
-        for fullkey, value in self.obj.attrib.iteritems():
-            namespace, key = splitnamespace(fullkey)
-            self.attrib[key] = value
-
-    def getchild(self, searchtag, tagclass=None):
-        """get a child with the given tag name"""
-        if tagclass is None:
-            tagclass = XMLWrapper
-        for childobj in self.obj.getiterator():
-            # getiterator() includes self...
-            if childobj == self.obj:
-                continue
-            childns, childtag = splitnamespace(childobj.tag)
-            if childtag == searchtag:
-                child = tagclass(childobj)
-                return child
-        raise KeyError("could not find child with tag %r" % searchtag)
-
-    def getchildren(self, searchtag, tagclass=None, excludetags=[]):
-        """get all children with the given tag name"""
-        if tagclass is None:
-            tagclass = XMLWrapper
-        childobjects = []
-        for childobj in self.obj.getiterator():
-            # getiterator() includes self...
-            if childobj == self.obj:
-                continue
-            childns, childtag = splitnamespace(childobj.tag)
-            if childtag == searchtag:
-                childobjects.append(childobj)
-        children = [tagclass(childobj) for childobj in childobjects]
-        return children
-
-    def gettext(self, searchtag):
-        """get some contained text"""
-        return self.getchild(searchtag).obj.text
-
-    def getxml(self, encoding=None):
-        return ElementTree.tostring(self.obj, encoding)
-
-    def getplaintext(self, excludetags=[]):
-        text = ""
-        if self.obj.text != None:
-            text += self.obj.text
-        for child in self.obj._children:
-            simplechild = XMLWrapper(child)
-            if simplechild.tag not in excludetags:
-                text += simplechild.getplaintext(excludetags)
-        if self.obj.tail != None:
-            text += self.obj.tail
-        return text
-
-    def getvalues(self, searchtag):
-        """get some contained values..."""
-        values = [child.obj.text for child in self.getchildren(searchtag)]
-        return values
-
-    def __repr__(self):
-        """return a representation of the object"""
-        return self.tag + ':' + repr(self.__dict__)
-
-    def getattr(self, attrname):
-        """gets an attribute of the tag"""
-        return self.attrib[attrname]
-
-    def write(self, file, encoding="UTF-8"):
-        """writes the object as XML to a file..."""
-        e = ElementTree.ElementTree(self.obj)
-        e.write(file, encoding)
-
-
-def BuildTree(xmlstring):
-    parser = ElementTree.XMLTreeBuilder()
-    parser.feed(xmlstring)
-    return parser.close()
-
-
-def MakeElement(tag, attrib={}, **extraargs):
-    return ElementTree.Element(tag, attrib, **extraargs)
diff --git a/translate/search/indexing/CommonIndexer.py b/translate/search/indexing/CommonIndexer.py
index 2f572dd..151bb5e 100644
--- a/translate/search/indexing/CommonIndexer.py
+++ b/translate/search/indexing/CommonIndexer.py
@@ -98,15 +98,15 @@ class CommonDatabase(object):
         """
         # just do some checks
         if self.QUERY_TYPE is None:
-            raise NotImplementedError("Incomplete indexer implementation: " \
-                    + "'QUERY_TYPE' is undefined")
+            raise NotImplementedError("Incomplete indexer implementation: "
+                                      "'QUERY_TYPE' is undefined")
         if self.INDEX_DIRECTORY_NAME is None:
-            raise NotImplementedError("Incomplete indexer implementation: " \
-                    + "'INDEX_DIRECTORY_NAME' is undefined")
+            raise NotImplementedError("Incomplete indexer implementation: "
+                                      "'INDEX_DIRECTORY_NAME' is undefined")
         self.location = os.path.join(basedir, self.INDEX_DIRECTORY_NAME)
         if (not create_allowed) and (not os.path.exists(self.location)):
-            raise OSError("Indexer: the database does not exist - and I am" \
-                    + " not configured to create it.")
+            raise OSError("Indexer: the database does not exist - and I am"
+                          " not configured to create it.")
         if analyzer is None:
             self.analyzer = self.ANALYZER_DEFAULT
         else:
@@ -122,8 +122,8 @@ class CommonDatabase(object):
         :param optimize: should the index be optimized if possible?
         :type optimize: bool
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'flush' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'flush' is missing")
 
     def make_query(self, args, require_all=True, analyzer=None):
         """Create simple queries (strings or field searches) or
@@ -193,8 +193,8 @@ class CommonDatabase(object):
                         require_all=require_all, analyzer=analyzer))
             else:
                 # other types of queries are not supported
-                raise ValueError("Unable to handle query type: %s" \
-                        % str(type(query)))
+                raise ValueError("Unable to handle query type: %s" %
+                                 str(type(query)))
         # return the combined query
         return self._create_query_combined(result, require_all)
 
@@ -208,8 +208,8 @@ class CommonDatabase(object):
         :return: the resulting query object
         :rtype: ``xapian.Query`` | ``PyLucene.Query``
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_create_query_for_query' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_create_query_for_query' is missing")
 
     def _create_query_for_string(self, text, require_all=True,
             analyzer=None):
@@ -234,8 +234,8 @@ class CommonDatabase(object):
         :return: resulting query object
         :rtype: xapian.Query | PyLucene.Query
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_create_query_for_string' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_create_query_for_string' is missing")
 
     def _create_query_for_field(self, field, value, analyzer=None):
         """Generate a field query.
@@ -257,8 +257,8 @@ class CommonDatabase(object):
         :return: resulting query object
         :rtype: ``xapian.Query`` | ``PyLucene.Query``
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_create_query_for_field' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_create_query_for_field' is missing")
 
     def _create_query_combined(self, queries, require_all=True):
         """generate a combined query
@@ -271,8 +271,8 @@ class CommonDatabase(object):
         :return: the resulting combined query object
         :rtype: ``xapian.Query`` | ``PyLucene.Query``
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_create_query_combined' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_create_query_combined' is missing")
 
     def index_document(self, data):
         """Add the given data to the database.
@@ -300,8 +300,8 @@ class CommonDatabase(object):
                     elif isinstance(value, basestring):
                         terms = [value]
                     else:
-                        raise ValueError("Invalid data type to be indexed: %s" \
-                                % str(type(data)))
+                        raise ValueError("Invalid data type to be indexed: %s" %
+                                         str(type(data)))
                     for one_term in terms:
                         self._add_plain_term(doc, self._decode(one_term),
                                 (self.ANALYZER_DEFAULT & self.ANALYZER_TOKENIZE > 0))
@@ -317,8 +317,8 @@ class CommonDatabase(object):
                 self._add_plain_term(doc, self._decode(dataset),
                         (self.ANALYZER_DEFAULT & self.ANALYZER_TOKENIZE > 0))
             else:
-                raise ValueError("Invalid data type to be indexed: %s" \
-                        % str(type(data)))
+                raise ValueError("Invalid data type to be indexed: %s" %
+                                 str(type(data)))
         self._add_document_to_index(doc)
 
     def _create_empty_document(self):
@@ -327,8 +327,8 @@ class CommonDatabase(object):
         :return: the new document object
         :rtype: ``xapian.Document`` | ``PyLucene.Document``
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_create_empty_document' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_create_empty_document' is missing")
 
     def _add_plain_term(self, document, term, tokenize=True):
         """Add a term to a document.
@@ -340,8 +340,8 @@ class CommonDatabase(object):
         :param tokenize: should the term be tokenized automatically
         :type tokenize: bool
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_add_plain_term' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_add_plain_term' is missing")
 
     def _add_field_term(self, document, field, term, tokenize=True):
         """Add a field term to a document.
@@ -355,8 +355,8 @@ class CommonDatabase(object):
         :param tokenize: should the term be tokenized automatically
         :type tokenize: bool
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_add_field_term' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_add_field_term' is missing")
 
     def _add_document_to_index(self, document):
         """Add a prepared document to the index database.
@@ -364,8 +364,8 @@ class CommonDatabase(object):
         :param document: the document to be added
         :type document: xapian.Document | PyLucene.Document
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'_add_document_to_index' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'_add_document_to_index' is missing")
 
     def begin_transaction(self):
         """begin a transaction
@@ -378,24 +378,24 @@ class CommonDatabase(object):
 
         Database types that do not support transactions may silently ignore it.
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'begin_transaction' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'begin_transaction' is missing")
 
     def cancel_transaction(self):
         """cancel an ongoing transaction
 
         See 'start_transaction' for details.
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'cancel_transaction' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'cancel_transaction' is missing")
 
     def commit_transaction(self):
         """Submit the currently ongoing transaction and write changes to disk.
 
         See 'start_transaction' for details.
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'commit_transaction' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'commit_transaction' is missing")
 
     def get_query_result(self, query):
         """return an object containing the results of a query
@@ -405,8 +405,8 @@ class CommonDatabase(object):
         :return: an object that allows access to the results
         :rtype: subclass of CommonEnquire
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'get_query_result' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'get_query_result' is missing")
 
     def delete_document_by_id(self, docid):
         """Delete a specified document.
@@ -414,8 +414,8 @@ class CommonDatabase(object):
         :param docid: the document ID to be deleted
         :type docid: int
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'delete_document_by_id' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'delete_document_by_id' is missing")
 
     def search(self, query, fieldnames):
         """Return a list of the contents of specified fields for all
@@ -428,8 +428,8 @@ class CommonDatabase(object):
         :return: a list of dicts containing the specified field(s)
         :rtype: list of dicts
         """
-        raise NotImplementedError("Incomplete indexer implementation: " \
-                + "'search' is missing")
+        raise NotImplementedError("Incomplete indexer implementation: "
+                                  "'search' is missing")
 
     def delete_doc(self, ident):
         """Delete the documents returned by a query.
@@ -464,8 +464,8 @@ class CommonDatabase(object):
         else:
             # invalid element type in list (not necessarily caught in the
             # lines above)
-            raise TypeError("description of documents to-be-deleted is not " \
-                    + "supported: list of %s" % type(ident_list[0]))
+            raise TypeError("description of documents to-be-deleted is not "
+                            "supported: list of %s" % type(ident_list[0]))
         # we successfully created a query - now iterate through the result
         # no documents deleted so far ...
         remove_list = []
@@ -567,7 +567,7 @@ class CommonDatabase(object):
         if isinstance(text, str):
             try:
                 result = unicode(text.decode("UTF-8"))
-            except UnicodeEncodeError, e:
+            except UnicodeEncodeError as e:
                 result = unicode(text.decode("charmap"))
         elif not isinstance(text, unicode):
             result = unicode(text)
@@ -603,8 +603,8 @@ class CommonEnquire(object):
                     ["rank", "percent", "document", "docid"]
 
         """
-        raise NotImplementedError("Incomplete indexing implementation: " \
-                + "'get_matches' for the 'Enquire' class is missing")
+        raise NotImplementedError("Incomplete indexing implementation: "
+                                  "'get_matches' for the 'Enquire' class is missing")
 
     def get_matches_count(self):
         """Return the estimated number of matches.
diff --git a/translate/search/indexing/PyLuceneIndexer.py b/translate/search/indexing/PyLuceneIndexer.py
index 39439e3..53a571f 100644
--- a/translate/search/indexing/PyLuceneIndexer.py
+++ b/translate/search/indexing/PyLuceneIndexer.py
@@ -25,10 +25,9 @@ interface for the PyLucene (v2.x) indexing engine
 take a look at PyLuceneIndexer1.py for the PyLucene v1.x interface
 """
 
-import re
+import logging
 import os
 import time
-import logging
 
 # try to import the PyLucene package (with the two possible names)
 # remember the type of the detected package (compiled with jcc (>=v2.3) or
@@ -92,7 +91,7 @@ class PyLuceneDatabase(CommonIndexer.CommonDatabase):
             # try to open an existing database
             tempreader = PyLucene.IndexReader.open(self.location)
             tempreader.close()
-        except PyLucene.JavaError, err_msg:
+        except PyLucene.JavaError as err_msg:
             # Write an error out, in case this is a real problem instead of an absence of an index
             # TODO: turn the following two lines into debug output
             #errorstr = str(e).strip() + "\n" + self.errorhandler.traceback_str()
@@ -106,17 +105,17 @@ class PyLuceneDatabase(CommonIndexer.CommonDatabase):
                 if not os.path.isdir(parent_path):
                     # recursively create all directories up to parent_path
                     os.makedirs(parent_path)
-            except IOError, err_msg:
-                raise OSError("Indexer: failed to create the parent " \
-                        + "directory (%s) of the indexing database: %s" \
-                        % (parent_path, err_msg))
+            except IOError as err_msg:
+                raise OSError("Indexer: failed to create the parent "
+                              "directory (%s) of the indexing database: %s" % (
+                              parent_path, err_msg))
             try:
                 tempwriter = PyLucene.IndexWriter(self.location,
                         self.pyl_analyzer, True)
                 tempwriter.close()
-            except PyLucene.JavaError, err_msg:
-                raise OSError("Indexer: failed to open or create a Lucene" \
-                        + " database (%s): %s" % (self.location, err_msg))
+            except PyLucene.JavaError as err_msg:
+                raise OSError("Indexer: failed to open or create a Lucene"
+                              " database (%s): %s" % (self.location, err_msg))
         # the indexer is initialized - now we prepare the searcher
         # windows file locking seems inconsistent, so we try 10 times
         numtries = 0
@@ -130,15 +129,15 @@ class PyLuceneDatabase(CommonIndexer.CommonDatabase):
                         self.location)
                     self.searcher = PyLucene.IndexSearcher(self.reader)
                     break
-                except PyLucene.JavaError, e:
+                except PyLucene.JavaError as e:
                     # store error message for possible later re-raise (below)
                     lock_error_msg = e
                     time.sleep(0.01)
                     numtries += 1
             else:
                 # locking failed for 10 times
-                raise OSError("Indexer: failed to lock index database" \
-                              + " (%s)" % lock_error_msg)
+                raise OSError("Indexer: failed to lock index database"
+                              " (%s)" % lock_error_msg)
         finally:
             pass
         #    self.dir_lock.release()
@@ -486,14 +485,14 @@ class PyLuceneDatabase(CommonIndexer.CommonDatabase):
             if self.reader is None or self.searcher is None:
                 self.reader = PyLucene.IndexReader.open(self.location)
                 self.searcher = PyLucene.IndexSearcher(self.reader)
-            elif self.index_version != self.reader.getCurrentVersion( \
-                    self.location):
+            elif (self.index_version !=
+                      self.reader.getCurrentVersion(self.location)):
                 self.searcher.close()
                 self.reader.close()
                 self.reader = PyLucene.IndexReader.open(self.location)
                 self.searcher = PyLucene.IndexSearcher(self.reader)
                 self.index_version = self.reader.getCurrentVersion(self.location)
-        except PyLucene.JavaError, e:
+        except PyLucene.JavaError as e:
             # TODO: add some debugging output?
             #self.errorhandler.logerror("Error attempting to read index - try reindexing: "+str(e))
             pass
@@ -536,11 +535,11 @@ class PyLuceneHits(CommonIndexer.CommonEnquire):
 
 
 def _occur(required, prohibited):
-    if required == True and prohibited == False:
+    if required and not prohibited:
         return PyLucene.BooleanClause.Occur.MUST
-    elif required == False and prohibited == False:
+    elif not required and not prohibited:
         return PyLucene.BooleanClause.Occur.SHOULD
-    elif required == False and prohibited == True:
+    elif not required and prohibited:
         return PyLucene.BooleanClause.Occur.MUST_NOT
     else:
         # It is an error to specify a clause as both required
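The `_occur` cleanup above swaps `== True`/`== False` comparisons for plain truthiness tests, the idiomatic spelling recommended by PEP 8. A sketch of the same dispatch, with string stand-ins for the `PyLucene.BooleanClause.Occur` constants (which need the Java bridge and so are assumptions here):

```python
def occur(required, prohibited):
    # Truthiness tests replace the old "== True" / "== False" comparisons.
    # "MUST"/"SHOULD"/"MUST_NOT" stand in for PyLucene.BooleanClause.Occur.*
    if required and not prohibited:
        return "MUST"
    elif not required and not prohibited:
        return "SHOULD"
    elif not required and prohibited:
        return "MUST_NOT"
    # A clause cannot be both required and prohibited.
    raise ValueError("clause is both required and prohibited")

assert occur(True, False) == "MUST"
assert occur(False, False) == "SHOULD"
assert occur(False, True) == "MUST_NOT"
```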
diff --git a/translate/search/indexing/XapianIndexer.py b/translate/search/indexing/XapianIndexer.py
index 4f44d4f..2aa3a57 100644
--- a/translate/search/indexing/XapianIndexer.py
+++ b/translate/search/indexing/XapianIndexer.py
@@ -104,10 +104,10 @@ class XapianDatabase(CommonIndexer.CommonDatabase):
             # try to open an existing database
             try:
                 self.reader = xapian.Database(self.location)
-            except xapian.DatabaseOpeningError, err_msg:
-                raise ValueError("Indexer: failed to open xapian database " \
-                        + "(%s) - maybe it is not a xapian database: %s" \
-                        % (self.location, str(err_msg)))
+            except xapian.DatabaseOpeningError as err_msg:
+                raise ValueError("Indexer: failed to open xapian database "
+                                 "(%s) - maybe it is not a xapian database: %s" % (
+                                 self.location, str(err_msg)))
         else:
             # create a new database
             if not create_allowed:
@@ -118,17 +118,17 @@ class XapianDatabase(CommonIndexer.CommonDatabase):
                 if not os.path.isdir(parent_path):
                     # recursively create all directories up to parent_path
                     os.makedirs(parent_path)
-            except IOError, err_msg:
-                raise OSError("Indexer: failed to create the parent " \
-                        + "directory (%s) of the indexing database: %s" \
-                        % (parent_path, str(err_msg)))
+            except IOError as err_msg:
+                raise OSError("Indexer: failed to create the parent "
+                              "directory (%s) of the indexing database: %s" % (
+                              parent_path, str(err_msg)))
             try:
                 self.writer = xapian.WritableDatabase(self.location,
                         xapian.DB_CREATE_OR_OPEN)
                 self.flush()
-            except xapian.DatabaseOpeningError, err_msg:
-                raise OSError("Indexer: failed to open or create a xapian " \
-                        + "database (%s): %s" % (self.location, str(err_msg)))
+            except xapian.DatabaseOpeningError as err_msg:
+                raise OSError("Indexer: failed to open or create a xapian "
+                              "database (%s): %s" % (self.location, str(err_msg)))
 
     def __del__(self):
         self.reader = None
@@ -307,8 +307,7 @@ class XapianDatabase(CommonIndexer.CommonDatabase):
             term_gen.set_document(document)
             term_gen.index_text(term, 1, field.upper())
         else:
-            document.add_term(_truncate_term_length("%s%s" % \
-                        (field.upper(), term)))
+            document.add_term(_truncate_term_length("%s%s" % (field.upper(), term)))
 
     def _add_document_to_index(self, document):
         """add a prepared document to the index database
@@ -412,11 +411,11 @@ class XapianDatabase(CommonIndexer.CommonDatabase):
             self._delete_stale_lock()
             try:
                 self.writer = xapian.WritableDatabase(self.location, xapian.DB_OPEN)
-            except xapian.DatabaseOpeningError, err_msg:
+            except xapian.DatabaseOpeningError as err_msg:
 
-                raise ValueError("Indexer: failed to open xapian database " \
-                                 + "(%s) - maybe it is not a xapian database: %s" \
-                                 % (self.location, str(err_msg)))
+                raise ValueError("Indexer: failed to open xapian database "
+                                 "(%s) - maybe it is not a xapian database: %s" % (
+                                 self.location, str(err_msg)))
 
     def _writer_close(self):
         """close indexing write access and remove database lock"""
@@ -435,10 +434,10 @@ class XapianDatabase(CommonIndexer.CommonDatabase):
                 self.reader = xapian.Database(self.location)
             else:
                 self.reader.reopen()
-        except xapian.DatabaseOpeningError, err_msg:
-            raise ValueError("Indexer: failed to open xapian database " \
-                             + "(%s) - maybe it is not a xapian database: %s" \
-                             % (self.location, str(err_msg)))
+        except xapian.DatabaseOpeningError as err_msg:
+            raise ValueError("Indexer: failed to open xapian database "
+                             "(%s) - maybe it is not a xapian database: %s" % (
+                             self.location, str(err_msg)))
 
 
 class XapianEnquire(CommonIndexer.CommonEnquire):
diff --git a/translate/search/indexing/__init__.py b/translate/search/indexing/__init__.py
index cbd867c..d0d5638 100644
--- a/translate/search/indexing/__init__.py
+++ b/translate/search/indexing/__init__.py
@@ -21,12 +21,13 @@
 
 """Interface for different indexing engines for the Translate Toolkit."""
 
+import logging
 import os
 import shutil
-import logging
 
 import CommonIndexer
 
+
 """ TODO for indexing engines:
     * get rid of jToolkit.glock dependency
     * add partial matching at the beginning of a term
@@ -53,8 +54,9 @@ def _get_available_indexers():
             # we should not import ourself
             continue
         mod_path = os.path.join(indexer_dir, mod_file)
-        if (not mod_path.endswith(".py")) or (not os.path.isfile(mod_path)) \
-                or (not os.access(mod_path, os.R_OK)):
+        if (not mod_path.endswith(".py") or
+            not os.path.isfile(mod_path) or
+            not os.access(mod_path, os.R_OK)):
             # no file / wrong extension / not readable -> skip it
             continue
         # strip the ".py" prefix
@@ -66,8 +68,8 @@ def _get_available_indexers():
             # maybe it is unusable or dependencies are missing
             continue
         # the module function "is_available" must return "True"
-        if not (hasattr(module, "is_available") and \
-                callable(module.is_available) and \
+        if not (hasattr(module, "is_available") and
+                callable(module.is_available) and
                 module.is_available()):
             continue
         for item in dir(module):
@@ -179,4 +181,4 @@ def get_indexer(basedir, preference=None):
 if __name__ == "__main__":
     # show all supported indexing engines (with fulfilled requirements)
     for ONE_INDEX in _AVAILABLE_INDEXERS:
-        print ONE_INDEX
+        print(ONE_INDEX)
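Throughout these files the old `except SomeError, e` form is rewritten as `except SomeError as e`. The comma form is a `SyntaxError` on Python 3, while `as` works on 2.6+ and 3.x alike and cannot be confused with catching a tuple of exception types. A self-contained sketch of the decode-with-fallback shape used in `CommonIndexer` (the fallback codec here is `latin-1` for illustration; note that a `decode` call raises `UnicodeDecodeError`, not `UnicodeEncodeError`):

```python
def read_utf8(raw):
    # "except ... as exc" is valid on Python 2.6+ and on Python 3;
    # the old "except UnicodeDecodeError, exc" spelling is not.
    try:
        return raw.decode("UTF-8")
    except UnicodeDecodeError as exc:
        # Fall back to a single-byte codec when the input is not UTF-8.
        return raw.decode("latin-1")

assert read_utf8(b"caf\xc3\xa9") == u"caf\u00e9"   # valid UTF-8
assert read_utf8(b"\xff") == u"\u00ff"             # falls back to latin-1
```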
diff --git a/translate/search/indexing/test_indexers.py b/translate/search/indexing/test_indexers.py
index dd14d92..30bc15c 100644
--- a/translate/search/indexing/test_indexers.py
+++ b/translate/search/indexing/test_indexers.py
@@ -21,18 +21,20 @@
 
 
 import os
-import sys
 import shutil
+import sys
+
 import pytest
 
 import __init__ as indexing
 import CommonIndexer
 
+
 # following block only needs running under pytest; unclear how to detect it?
 
 # check whether any indexer is present at all
 noindexer = True
-for indexer in [ "lucene", "PyLucene", "xapian" ]:
+for indexer in ["lucene", "PyLucene", "xapian"]:
     try:
         __import__(indexer)
     except ImportError:
@@ -300,6 +302,7 @@ def test_or_queries():
     # clean up
     clean_database()
 
+
 def test_string_queries():
     """test if string queries work as expected"""
     # clean up everything first
@@ -322,6 +325,7 @@ def test_string_queries():
     # clean up
     clean_database()
 
+
 def test_lower_upper_case():
     """test if case is ignored for queries and for indexed terms"""
     # clean up everything first
@@ -436,23 +440,23 @@ def _show_database_pylucene(database):
     database.flush()
     reader = database.reader
     for index in range(reader.maxDoc()):
-        print reader.document(index).toString().encode("charmap")
+        print(reader.document(index).toString().encode("charmap"))
 
 
 def _show_database_xapian(database):
     import xapian
     doccount = database.reader.get_doccount()
     max_doc_index = database.reader.get_lastdocid()
-    print "Database overview: %d items up to index %d" % (doccount, max_doc_index)
+    print("Database overview: %d items up to index %d" % (doccount, max_doc_index))
     for index in range(1, max_doc_index + 1):
         try:
             document = database.reader.get_document(index)
         except xapian.DocNotFoundError:
             continue
         # print the document's terms and their positions
-        print "\tDocument [%d]: %s" % (index,
+        print("\tDocument [%d]: %s" % (index,
                 str([(one_term.term, [posi for posi in one_term.positer])
-                for one_term in document.termlist()]))
+                for one_term in document.termlist()])))
 
 
 def _get_number_of_docs(database):
@@ -475,8 +479,8 @@ def report_whitelisted_success(db, name):
     supposed to fail for a specific indexing engine.
     As this test works now for the engine, the whitelisting should be removed.
     """
-    print "the test '%s' works again for '%s' - please remove the exception" \
-            % (name, get_engine_name(db))
+    print("the test '%s' works again for '%s' - please remove the exception"
+        % (name, get_engine_name(db)))
 
 
 def report_whitelisted_failure(db, name):
@@ -485,8 +489,8 @@ def report_whitelisted_failure(db, name):
     Since the test behaves as expected (it fails), this is just for reminding
     developers on these open issues of the indexing engine support.
     """
-    print "the test '%s' fails - as expected for '%s'" % (name,
-            get_engine_name(db))
+    print("the test '%s' fails - as expected for '%s'" % (name,
+            get_engine_name(db)))
 
 
 def assert_whitelisted(db, assert_value, white_list_engines, name_of_check):
@@ -517,11 +521,9 @@ if __name__ == "__main__":
         clean_database()
         engine_name = get_engine_name(_get_indexer(DATABASE))
         if engine_name == default_engine:
-            print "************ running tests for '%s' *****************" \
-                    % engine_name
+            print("****** running tests for '%s' ******" % engine_name)
         else:
-            print "************ SKIPPING tests for '%s' *****************" \
-                    % default_engine
+            print("****** SKIPPING tests for '%s' ******" % default_engine)
             continue
         test_create_database()
         test_open_database()
diff --git a/translate/search/lshtein.py b/translate/search/lshtein.py
index 5ec12ad..0038242 100644
--- a/translate/search/lshtein.py
+++ b/translate/search/lshtein.py
@@ -159,4 +159,4 @@ class LevenshteinComparer:
 if __name__ == "__main__":
     from sys import argv
     comparer = LevenshteinComparer()
-    print "Similarity:\n%s" % comparer.similarity(argv[1], argv[2], 50)
+    print("Similarity:\n%s" % comparer.similarity(argv[1], argv[2], 50))
diff --git a/translate/search/match.py b/translate/search/match.py
index 900f30d..d3a8241 100644
--- a/translate/search/match.py
+++ b/translate/search/match.py
@@ -21,15 +21,13 @@
 """Class to perform translation memory matching from a store of
 translation units."""
 
-import itertools
 import heapq
+import itertools
 import re
 
-from translate.search import lshtein
-from translate.search import terminology
-from translate.storage import base
-from translate.storage import po
 from translate.misc.multistring import multistring
+from translate.search import lshtein, terminology
+from translate.storage import base, po
 
 
 def sourcelen(unit):
diff --git a/translate/services/tmserver.py b/translate/services/tmserver.py
index d8ddbd1..73a2c9d 100644
--- a/translate/services/tmserver.py
+++ b/translate/services/tmserver.py
@@ -21,19 +21,13 @@
 """A translation memory server using tmdb for storage, communicates
 with clients using JSON over HTTP."""
 
-#import urllib
+import json
 import logging
-from cgi import parse_qs
-from optparse import OptionParser
-try:
-    import json  # available since Python 2.6
-except ImportError:
-    import simplejson as json  # API compatible with the json module
+from argparse import ArgumentParser
+from urlparse import parse_qs
 
-from translate.misc import selector
-from translate.misc import wsgi
-from translate.storage import base
-from translate.storage import tmdb
+from translate.misc import selector, wsgi
+from translate.storage import base, tmdb
 
 
 class TMServer(object):
@@ -128,10 +122,10 @@ class TMServer(object):
     @selector.opliant
     def upload_store(self, environ, start_response, sid, slang, tlang):
         """add units from uploaded file to tmdb"""
-        import StringIO
+        from cStringIO import StringIO
         from translate.storage import factory
         start_response("200 OK", [('Content-type', 'text/plain')])
-        data = StringIO.StringIO(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
+        data = StringIO(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
         data.name = sid
         store = factory.getobject(data)
         count = self.tmdb.add_store(store, slang, tlang)
@@ -157,54 +151,51 @@ class TMServer(object):
 
 
 def main():
-    parser = OptionParser()
-    parser.add_option("-d", "--tmdb", dest="tmdbfile", default=":memory:",
-                      help="translation memory database file")
-    parser.add_option("-f", "--import-translation-file", dest="tmfiles",
-                      action="append",
-                      help="translation file to import into the database")
-    parser.add_option("-t", "--import-target-lang", dest="target_lang",
-                      help="target language of translation files")
-    parser.add_option("-s", "--import-source-lang", dest="source_lang",
-                      help="source language of translation files")
-    parser.add_option("-b", "--bind", dest="bind", default="localhost",
-                      help="adress to bind server to (default: localhost)")
-    parser.add_option("-p", "--port", dest="port", type="int", default=8888,
-                      help="port to listen on (default: 8888)")
-    parser.add_option("--max-candidates", dest="max_candidates", type="int",
-                      default=3,
-                      help="Maximum number of candidates")
-    parser.add_option("--min-similarity", dest="min_similarity", type="int",
-                      default=75,
-                      help="minimum similarity")
-    parser.add_option("--max-length", dest="max_length", type="int",
-                      default=1000,
-                      help="Maxmimum string length")
-    parser.add_option("--debug", action="store_true", dest="debug",
-                      default=False,
-                      help="enable debugging features")
-
-    (options, args) = parser.parse_args()
+    parser = ArgumentParser()
+    parser.add_argument("-d", "--tmdb", dest="tmdbfile", default=":memory:",
+                        help="translation memory database file")
+    parser.add_argument("-f", "--import-translation-file", dest="tmfiles",
+                        action="append",
+                        help="translation file to import into the database")
+    parser.add_argument("-t", "--import-target-lang", dest="target_lang",
+                        help="target language of translation files")
+    parser.add_argument("-s", "--import-source-lang", dest="source_lang",
+                        help="source language of translation files")
+    parser.add_argument("-b", "--bind", dest="bind", default="localhost",
+                        help="address to bind server to (default: localhost)")
+    parser.add_argument("-p", "--port", dest="port", type=int, default=8888,
+                        help="port to listen on (default: 8888)")
+    parser.add_argument("--max-candidates", dest="max_candidates", type=int,
+                        default=3,
+                        help="Maximum number of candidates")
+    parser.add_argument("--min-similarity", dest="min_similarity", type=int,
+                        default=75,
+                        help="minimum similarity")
+    parser.add_argument("--max-length", dest="max_length", type=int,
+                        default=1000,
+                        help="Maximum string length")
+    parser.add_argument("--debug", action="store_true", dest="debug",
+                        default=False,
+                        help="enable debugging features")
+
+    args = parser.parse_args()
 
     #setup debugging
     format = '%(asctime)s %(levelname)s %(message)s'
-    level = options.debug and logging.DEBUG or logging.WARNING
-    if options.debug:
+    level = args.debug and logging.DEBUG or logging.WARNING
+    if args.debug:
         format = '%(levelname)7s %(module)s.%(funcName)s:%(lineno)d: %(message)s'
-        import sys
-        if sys.version_info[:2] < (2, 5):
-            format = '%(levelname)7s %(module)s [%(filename)s:%(lineno)d]: %(message)s'
 
     logging.basicConfig(level=level, format=format)
 
-    application = TMServer(options.tmdbfile, options.tmfiles,
-                           max_candidates=options.max_candidates,
-                           min_similarity=options.min_similarity,
-                           max_length=options.max_length,
+    application = TMServer(args.tmdbfile, args.tmfiles,
+                           max_candidates=args.max_candidates,
+                           min_similarity=args.min_similarity,
+                           max_length=args.max_length,
                            prefix="/tmserver",
-                           source_lang=options.source_lang,
-                           target_lang=options.target_lang)
-    wsgi.launch_server(options.bind, options.port, application.rest)
+                           source_lang=args.source_lang,
+                           target_lang=args.target_lang)
+    wsgi.launch_server(args.bind, args.port, application.rest)
 
 
 if __name__ == '__main__':
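The `tmserver` change above is a mechanical optparse-to-argparse migration: `OptionParser` becomes `ArgumentParser`, `add_option` becomes `add_argument`, the string `type="int"` becomes the callable `type=int`, and `(options, args) = parser.parse_args()` collapses to `args = parser.parse_args()`. A minimal sketch with a subset of the server's options:

```python
from argparse import ArgumentParser

parser = ArgumentParser(prog="tmserver")
parser.add_argument("-b", "--bind", dest="bind", default="localhost",
                    help="address to bind server to (default: localhost)")
# argparse takes a conversion callable, not optparse's type name string.
parser.add_argument("-p", "--port", dest="port", type=int, default=8888,
                    help="port to listen on (default: 8888)")

# parse_args returns a single namespace (no separate positional list).
args = parser.parse_args(["--port", "9000"])
assert args.bind == "localhost"
assert args.port == 9000  # converted by type=int
```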
diff --git a/translate/storage/_factory_classes.py b/translate/storage/_factory_classes.py
index 84326aa..2d0d8b2 100644
--- a/translate/storage/_factory_classes.py
+++ b/translate/storage/_factory_classes.py
@@ -22,20 +22,21 @@
 just for the sake of the Windows installer to easily pick up all the stuff
 that we need and ensure they make it into the installer."""
 
+import catkeys
 import csvl10n
+import mo
 import omegat
 import po
-import mo
 import qm
-import utx
-import wordfast
-import catkeys
 import qph
 import tbx
 import tmx
 import ts2
+import utx
+import wordfast
 import xliff
+
 try:
     import trados
-except ImportError, e:
+except ImportError as e:
     pass
diff --git a/translate/storage/aresource.py b/translate/storage/aresource.py
old mode 100644
new mode 100755
index ae1efa5..d815e19
--- a/translate/storage/aresource.py
+++ b/translate/storage/aresource.py
@@ -2,6 +2,7 @@
 # -*- coding: utf-8 -*-
 #
 # Copyright 2012 Michal Čihař
+# Copyright 2014 Luca De Petrillo
 #
 # This file is part of the Translate Toolkit.
 #
@@ -18,15 +19,18 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-"""Module for handling Android String resource files."""
+"""Module for handling Android String and Plurals resource files."""
 
 import re
+import os
 
 from lxml import etree
 
 from translate.lang import data
 from translate.storage import base, lisa
+from translate.misc.multistring import multistring
 
+from babel.core import Locale
 
 EOF = None
 WHITESPACE = ' \n\t'  # Whitespace that we collapse.
@@ -35,19 +39,23 @@ MULTIWHITESPACE = re.compile('[ \n\t]{2}')
 
 class AndroidResourceUnit(base.TranslationUnit):
     """A single entry in the Android String resource file."""
-    rootNode = "string"
-    languageNode = "string"
 
     @classmethod
     def createfromxmlElement(cls, element):
-        term = cls(None, xmlelement = element)
+        term = None
+        # Actually this class supports only plurals and string tags
+        if ((element.tag == "plurals") or (element.tag == "string")):
+            term = cls(None, xmlelement=element)
         return term
 
     def __init__(self, source, empty=False, xmlelement=None, **kwargs):
         if xmlelement is not None:
             self.xmlelement = xmlelement
         else:
-            self.xmlelement = etree.Element(self.rootNode)
+            if self.hasplurals(source):
+                self.xmlelement = etree.Element("plurals")
+            else:
+                self.xmlelement = etree.Element("string")
             self.xmlelement.tail = '\n'
         if source is not None:
             self.setid(source)
@@ -157,16 +165,16 @@ class AndroidResourceUnit(base.TranslationUnit):
                         # in the clauses below without issue.
                         pass
                     elif c == 'n' or c == 'N':
-                        text[i-1 : i+1] = '\n' # an actual newline
+                        text[i-1 : i+1] = '\n'  # an actual newline
                         i -= 1
                     elif c == 't' or c == 'T':
-                        text[i-1 : i+1] = '\t' # an actual tab
+                        text[i-1 : i+1] = '\t'  # an actual tab
                         i -= 1
                     elif c == ' ':
-                        text[i-1 : i+1] = ' ' # an actual space
+                        text[i-1 : i+1] = ' '  # an actual space
                         i -= 1
                     elif c in '"\'@':
-                        text[i-1 : i] = '' # remove the backslash
+                        text[i-1 : i] = ''  # remove the backslash
                         i -= 1
                     elif c == 'u':
                         # Unicode sequence. Android is nice enough to deal
@@ -205,7 +213,7 @@ class AndroidResourceUnit(base.TranslationUnit):
         # Join the string together again, but w/o EOF marker
         return "".join(text[:-1])
 
-    def escape(self, text):
+    def escape(self, text, add_quote=True):
         '''
         Escape all the characters which need to be escaped in an Android XML file.
         '''
@@ -226,7 +234,7 @@ class AndroidResourceUnit(base.TranslationUnit):
         if text.startswith('@'):
             text = '\\@' + text[1:]
         # Quote strings with more whitespace
-        if text[0] in WHITESPACE or text[-1] in WHITESPACE or len(MULTIWHITESPACE.findall(text)) > 0:
+        if add_quote and (text[0] in WHITESPACE or text[-1] in WHITESPACE or len(MULTIWHITESPACE.findall(text)) > 0):
             return '"%s"' % text
         return text
 
@@ -241,7 +249,7 @@ class AndroidResourceUnit(base.TranslationUnit):
 
     source = property(getsource, setsource)
 
-    def settarget(self, target):
+    def set_xml_text_value(self, target, xmltarget):
         if '<' in target:
             # Handle text with possible markup
             target = target.replace('&', '&amp;')
@@ -254,27 +262,95 @@ class AndroidResourceUnit(base.TranslationUnit):
                 newstring = etree.fromstring('<string>%s</string>' % target)
             # Update text
             if newstring.text is None:
-                self.xmlelement.text = ''
+                xmltarget.text = ''
             else:
-                self.xmlelement.text = newstring.text
+                xmltarget.text = newstring.text
             # Remove old elements
-            for x in self.xmlelement.iterchildren():
-                self.xmlelement.remove(x)
+            for x in xmltarget.iterchildren():
+                xmltarget.remove(x)
+            # Escape all text parts
+            for x in newstring.iter():
+                x.text = self.escape(x.text, False)
+                if x.prefix is not None:
+                    x.prefix = self.escape(x.prefix, False)
+                if x.tail is not None:
+                    x.tail = self.escape(x.tail, False)
             # Add new elements
             for x in newstring.iterchildren():
-                self.xmlelement.append(x)
+                xmltarget.append(x)
         else:
             # Handle text only
-            self.xmlelement.text = self.escape(target)
+            xmltarget.text = self.escape(target)
+
+    def settarget(self, target):
+        if (self.hasplurals(self.source) or self.hasplurals(target)):
+            # Fix the root tag if mismatching
+            if self.xmlelement.tag != "plurals":
+                old_id = self.getid()
+                self.xmlelement = etree.Element("plurals")
+                self.setid(old_id)
+
+            lang_tags = set(Locale(self.gettargetlanguage()).plural_form.tags)
+            # Ensure that the implicit default "other" rule is present (usually omitted by Babel)
+            lang_tags.add('other')
+
+            # Get plural tags in the right order.
+            plural_tags = [tag for tag in ['zero', 'one', 'two', 'few', 'many', 'other'] if tag in lang_tags]
+
+            # Get string list to handle, wrapping non multistring/list targets into a list.
+            if isinstance(target, multistring):
+                plural_strings = target.strings
+            elif isinstance(target, list):
+                plural_strings = target
+            else:
+                plural_strings = [target]
+
+            # Sync plural_strings elements to plural_tags count.
+            if len(plural_strings) < len(plural_tags):
+                plural_strings += [''] * (len(plural_tags) - len(plural_strings))
+            plural_strings = plural_strings[:len(plural_tags)]
+
+            # Rebuild plurals.
+            for entry in self.xmlelement.iterchildren():
+                self.xmlelement.remove(entry)
+
+            self.xmlelement.text = "\n\t"
+
+            for plural_tag, plural_string in zip(plural_tags, plural_strings):
+                item = etree.Element("item")
+                item.set("quantity", plural_tag)
+                self.set_xml_text_value(plural_string, item)
+                item.tail = "\n\t"
+                self.xmlelement.append(item)
+            # Remove the tab from last item
+            item.tail = "\n"
+        else:
+            # Fix the root tag if mismatching
+            if self.xmlelement.tag != "string":
+                old_id = self.getid()
+                self.xmlelement = etree.Element("string")
+                self.setid(old_id)
+
+            self.set_xml_text_value(target, self.xmlelement)
+
         super(AndroidResourceUnit, self).settarget(target)
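The plural handling added in `settarget` above pads or truncates the incoming string list so one entry lines up with each of the language's CLDR plural tags. That alignment step can be sketched standalone; `lang_tags` here is a stand-in for the `Locale(...).plural_form.tags` lookup via Babel used in the diff:

```python
# Sketch of the plural-tag alignment performed in settarget() above.
# CLDR defines at most these six plural categories, in this canonical order.
CLDR_ORDER = ['zero', 'one', 'two', 'few', 'many', 'other']

def align_plural_strings(lang_tags, strings):
    """Pair one string with each plural tag of the language.

    *lang_tags* stands in for Babel's Locale(lang).plural_form.tags; the
    implicit default "other" rule is added here because Babel usually
    omits it, mirroring the diff.
    """
    tags = set(lang_tags)
    tags.add('other')
    plural_tags = [t for t in CLDR_ORDER if t in tags]
    strings = list(strings)
    # Sync string count to tag count: pad with '' or truncate.
    if len(strings) < len(plural_tags):
        strings += [''] * (len(plural_tags) - len(strings))
    return list(zip(plural_tags, strings[:len(plural_tags)]))
```

Each resulting pair then becomes one `<item quantity="...">` element under the `<plurals>` root.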
 
-    def gettarget(self, lang=None):
+    def get_xml_text_value(self, xmltarget):
         # Grab inner text
-        target = self.unescape(self.xmlelement.text or u'')
+        target = self.unescape(xmltarget.text or u'')
         # Include markup as well
-        target += u''.join([data.forceunicode(etree.tostring(child, encoding='utf-8')) for child in self.xmlelement.iterchildren()])
+        target += u''.join([data.forceunicode(etree.tostring(child, encoding='utf-8')) for child in xmltarget.iterchildren()])
         return target
 
+    def gettarget(self, lang=None):
+        if (self.xmlelement.tag == "plurals"):
+            target = []
+            for entry in self.xmlelement.iterchildren():
+                target.append(self.get_xml_text_value(entry))
+            return multistring(target)
+        else:
+            return self.get_xml_text_value(self.xmlelement)
+
     target = property(gettarget, settarget)
 
     def getlanguageNode(self, lang=None, index=None):
@@ -317,11 +393,18 @@ class AndroidResourceUnit(base.TranslationUnit):
     def __eq__(self, other):
         return (str(self) == str(other))
 
+    def hasplurals(self, thing):
+        if isinstance(thing, multistring):
+            return True
+        elif isinstance(thing, list):
+            return True
+        return False
+
 
 class AndroidResourceFile(lisa.LISAfile):
     """Class representing an Android String resource file store."""
     UnitClass = AndroidResourceUnit
-    Name = _("Android String Resource")
+    Name = "Android String Resource"
     Mimetypes = ["application/xml"]
     Extensions = ["xml"]
     rootNode = "resources"
@@ -334,3 +417,41 @@ class AndroidResourceFile(lisa.LISAfile):
         XML again."""
         self.namespace = self.document.getroot().nsmap.get(None, None)
         self.body = self.document.getroot()
+
+    def parse(self, xml):
+        """Populates this object from the given xml string"""
+        if not hasattr(self, 'filename'):
+            self.filename = getattr(xml, 'name', '')
+        if hasattr(xml, "read"):
+            xml.seek(0)
+            posrc = xml.read()
+            xml = posrc
+        parser = etree.XMLParser(strip_cdata=False)
+        self.document = etree.fromstring(xml, parser).getroottree()
+        self._encoding = self.document.docinfo.encoding
+        self.initbody()
+        assert self.document.getroot().tag == self.namespaced(self.rootNode)
+
+        for entry in self.document.getroot().iterchildren():
+            term = self.UnitClass.createfromxmlElement(entry)
+            if term is not None:
+                self.addunit(term, new=False)
+
+    def gettargetlanguage(self):
+        target_lang = super(AndroidResourceFile, self).gettargetlanguage()
+
+        # If targetlanguage isn't set, we try to extract it from the filename path (if any).
+        if (target_lang is None) and hasattr(self, 'filename') and self.filename:
+            # Android standards expect resource files to be in a directory named "values[-<lang>[-r<region>]]".
+            parent_dir = os.path.split(os.path.dirname(self.filename))[1]
+            match = re.search('^values-(\w*)', parent_dir)
+            if (match is not None):
+                target_lang = match.group(1)
+            elif (parent_dir == 'values'):
+                # If the resource file is inside the "values" directory, then it is the default/source language.
+                target_lang = self.sourcelanguage
+
+            # Cache it
+            self.settargetlanguage(target_lang)
+
+        return target_lang
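The `gettargetlanguage` fallback above infers the language from the Android resource directory name (`values[-<lang>[-r<region>]]`). A minimal standalone version of that path logic, with a hypothetical `default_lang` parameter standing in for `self.sourcelanguage`:

```python
import os
import re

def lang_from_android_path(filename, default_lang='en'):
    """Infer a target language from an Android "values-<lang>" directory.

    default_lang is a stand-in for the store's source language, returned
    for files in the plain "values" directory.
    """
    parent_dir = os.path.split(os.path.dirname(filename))[1]
    match = re.search(r'^values-(\w*)', parent_dir)
    if match is not None:
        return match.group(1)
    if parent_dir == 'values':
        return default_lang
    return None
```

Note that `\w*` stops at the first `-`, so a region-qualified directory such as `values-pt-rBR` yields just the language part.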
diff --git a/translate/storage/base.py b/translate/storage/base.py
index 3333d8f..f410d59 100644
--- a/translate/storage/base.py
+++ b/translate/storage/base.py
@@ -25,11 +25,8 @@ try:
     import cPickle as pickle
 except ImportError:
     import pickle
-from exceptions import NotImplementedError
 
-import translate.i18n
 from translate.misc.multistring import multistring
-from translate.misc.typecheck import accepts, Self, IsOneOf
 from translate.storage.placeables import (StringElem, general,
                                           parse as rich_parse)
 from translate.storage.workflow import StateEnum as states
@@ -45,7 +42,7 @@ def force_override(method, baseclass):
         actualclass = method.im_class
     if actualclass != baseclass:
         raise NotImplementedError(
-            "%s does not reimplement %s as required by %s" % \
+            "%s does not reimplement %s as required by %s" %
             (actualclass.__name__, method.__name__, baseclass.__name__))
 
 
@@ -69,16 +66,16 @@ class TranslationUnit(object):
 
     A translation unit consists of the following:
 
-      - A *source* string. This is the original translatable text.
-      - A *target* string. This is the translation of the *source*.
-      - Zero or more *notes* on the unit. Notes would typically be some
-        comments from a translator on the unit, or some comments originating
-        from the source code.
-      - Zero or more *locations*. Locations indicate where in the original
-        source code this unit came from.
-      - Zero or more *errors*. Some tools (eg.
-        :mod:`~translate.filters.pofilter`)
-        can run checks on translations and produce error messages.
+    - A *source* string. This is the original translatable text.
+    - A *target* string. This is the translation of the *source*.
+    - Zero or more *notes* on the unit. Notes would typically be some comments
+      from a translator on the unit, or some comments originating from the
+      source code.
+    - Zero or more *locations*. Locations indicate where in the original source
+      code this unit came from.
+    - Zero or more *errors*. Some tools (eg.
+      :mod:`~translate.filters.pofilter`) can run checks on translations and
+      produce error messages.
 
     """
 
diff --git a/translate/storage/benchmark.py b/translate/storage/benchmark.py
index 7beea0b..bb27b44 100644
--- a/translate/storage/benchmark.py
+++ b/translate/storage/benchmark.py
@@ -17,15 +17,14 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
+import argparse
 import cProfile
 import os
 import pstats
 import random
 import sys
-import argparse
 
-from translate.storage import factory
-from translate.storage import placeables
+from translate.storage import factory, placeables
 
 
 class TranslateBenchmarker:
@@ -87,7 +86,7 @@ class TranslateBenchmarker:
                 parsedfile = self.StoreClass(open(pofilename, 'r'))
                 count += len(parsedfile.units)
                 self.parsedfiles.append(parsedfile)
-        print "counted %d units" % count
+        print("counted %d units" % count)
 
     def parse_placeables(self):
         """parses placeables"""
@@ -97,7 +96,7 @@ class TranslateBenchmarker:
                 placeables.parse(unit.source, placeables.general.parsers)
                 placeables.parse(unit.target, placeables.general.parsers)
             count += len(parsedfile.units)
-        print "counted %d units" % count
+        print("counted %d units" % count)
 
 
 if __name__ == "__main__":
@@ -123,7 +122,7 @@ if __name__ == "__main__":
                             globals(), fromlist=_module)
         storeclass = getattr(module, _class)
     else:
-        print "StoreClass: '%s' is not a base class that the class factory can load" % storetype
+        print("StoreClass: '%s' is not a base class that the class factory can load" % storetype)
         sys.exit()
 
     sample_files = [
@@ -146,7 +145,7 @@ if __name__ == "__main__":
         if args.podir is None:
             benchmarker.create_sample_files(*sample_file_sizes)
         benchmarker.parse_files(file_dir=args.podir)
-        methods = [] # [("create_sample_files", "*sample_file_sizes")]
+        methods = []  # [("create_sample_files", "*sample_file_sizes")]
 
         if args.check_parsing:
             methods.append(("parse_files", ""))
@@ -156,10 +155,10 @@ if __name__ == "__main__":
 
         for methodname, methodparam in methods:
             #print methodname, "%d dirs, %d files, %d strings, %d/%d words" % sample_file_sizes
-            print "_______________________________________________________"
+            print("_______________________________________________________")
             statsfile = "%s_%s" % (methodname, storetype) + '_%d_%d_%d_%d_%d.stats' % sample_file_sizes
             cProfile.run('benchmarker.%s(%s)' % (methodname, methodparam), statsfile)
             stats = pstats.Stats(statsfile)
             stats.sort_stats('time').print_stats(20)
-            print "_______________________________________________________"
+            print("_______________________________________________________")
         benchmarker.clear_test_dir()
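The benchmark driver above runs each method under `cProfile.run` into a `.stats` file and prints the top 20 entries by time. The same pattern works without an intermediate file via the in-memory `Profile` API; `parse_demo` here is illustrative, not part of the toolkit:

```python
import cProfile
import pstats
from io import StringIO  # cStringIO in the Python 2 code above

def parse_demo():
    # Stand-in for benchmarker.parse_files(): some measurable work.
    return sum(len(str(i)) for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
parse_demo()
profiler.disable()

# Direct the report into a string instead of stdout.
report = StringIO()
stats = pstats.Stats(profiler, stream=report)
stats.sort_stats('time').print_stats(20)
```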
diff --git a/translate/storage/bundleprojstore.py b/translate/storage/bundleprojstore.py
index ec5590b..a56692f 100644
--- a/translate/storage/bundleprojstore.py
+++ b/translate/storage/bundleprojstore.py
@@ -25,6 +25,7 @@ from zipfile import ZipFile
 
 from translate.storage.projstore import *
 
+
 __all__ = ['BundleProjectStore', 'InvalidBundleError']
 
 
@@ -147,7 +148,7 @@ class BundleProjectStore(ProjectStore):
         """Try and find a project file name for the given real file name."""
         try:
             fname = super(BundleProjectStore, self).get_proj_filename(realfname)
-        except ValueError, ve:
+        except ValueError as ve:
             fname = None
         if fname:
             return fname
@@ -243,8 +244,9 @@ class BundleProjectStore(ProjectStore):
         if hasattr(infile, 'seek'):
             infile.seek(0)
         self.zip.writestr(pfname, infile.read())
-        self._files[pfname] = None  # Clear the cached file object to force the
-                                    # file to be read from the zip file.
+        # Clear the cached file object to force the file to be read from the
+        # zip file.
+        self._files[pfname] = None
 
     def _zip_delete(self, fnames):
         """Delete the files with the given names from the zip file (``self.zip``)."""
diff --git a/translate/storage/catkeys.py b/translate/storage/catkeys.py
index 6f356ee..e4a4c3b 100644
--- a/translate/storage/catkeys.py
+++ b/translate/storage/catkeys.py
@@ -51,11 +51,11 @@ Escaping
 """
 
 import csv
-import sys
 
 from translate.lang import data
 from translate.storage import base
 
+
 FIELDNAMES_HEADER = ["version", "language", "mimetype", "checksum"]
 """Field names for the catkeys header"""
 
@@ -63,10 +63,10 @@ FIELDNAMES = ["source", "context", "comment", "target"]
 """Field names for a catkeys TU"""
 
 FIELDNAMES_HEADER_DEFAULTS = {
-"version": "1",
-"language": "",
-"mimetype": "",
-"checksum": "",
+    "version": "1",
+    "language": "",
+    "mimetype": "",
+    "checksum": "",
 }
 """Default or minimum header entries for a catkeys file"""
 
@@ -223,7 +223,7 @@ class CatkeysUnit(base.TranslationUnit):
 
 class CatkeysFile(base.TranslationStore):
     """A catkeys translation memory file"""
-    Name = _("Haiku catkeys file")
+    Name = "Haiku catkeys file"
     Mimetypes = ["application/x-catkeys"]
     Extensions = ["catkeys"]
 
diff --git a/translate/storage/cpo.py b/translate/storage/cpo.py
index 1b95812..1b42dbc 100644
--- a/translate/storage/cpo.py
+++ b/translate/storage/cpo.py
@@ -29,23 +29,21 @@ to have a look at gettext-tools/libgettextpo/gettext-po.h from the gettext
 package for the public API of the library.
 """
 
-from __future__ import with_statement
-
-from ctypes import c_size_t, c_int, c_uint, c_char_p, c_long, CFUNCTYPE, POINTER
-from ctypes import Structure, cdll
 import ctypes.util
+import logging
 import os
 import re
 import sys
-import logging
 import tempfile
+from ctypes import (CFUNCTYPE, POINTER, Structure, c_char_p, c_int, c_long,
+                    c_size_t, c_uint, cdll)
 
 from translate.lang import data
 from translate.misc.multistring import multistring
-from translate.storage import base, pocommon
-from translate.storage import pypo
+from translate.storage import base, pocommon, pypo
 from translate.storage.pocommon import encodingToUse
 
+
 logger = logging.getLogger(__name__)
 
 lsep = " "
@@ -75,11 +73,11 @@ class po_xerror_handler(Structure):
 
 class po_error_handler(Structure):
     _fields_ = [
-    ('error', CFUNCTYPE(None, c_int, c_int, STRING)),
-    ('error_at_line', CFUNCTYPE(None, c_int, c_int, STRING, c_uint, STRING)),
-    ('multiline_warning', CFUNCTYPE(None, STRING, STRING)),
-    ('multiline_error', CFUNCTYPE(None, STRING, STRING)),
-]
+        ('error', CFUNCTYPE(None, c_int, c_int, STRING)),
+        ('error_at_line', CFUNCTYPE(None, c_int, c_int, STRING, c_uint, STRING)),
+        ('multiline_warning', CFUNCTYPE(None, STRING, STRING)),
+        ('multiline_error', CFUNCTYPE(None, STRING, STRING)),
+    ]
 
 
 # Callback functions for po_xerror_handler
@@ -101,6 +99,7 @@ def xerror2_cb(severity, message1, filename1, lineno1, column1, multiline_p1,
     if severity >= 1:
         raise ValueError(message_text1)
 
+
 # Setup return and parameter types
 def setup_call_types(gpo):
     # File access
@@ -167,7 +166,7 @@ else:
 
 if gpo:
     setup_call_types(gpo)
-    
+
 # Setup the po_xerror_handler
 xerror_handler = po_xerror_handler()
 xerror_handler.xerror = xerror_prototype(xerror_cb)
@@ -368,7 +367,7 @@ class pounit(pocommon.pounit):
         return id
 
     def getnotes(self, origin=None):
-        if origin == None:
+        if origin is None:
             comments = gpo.po_message_comments(self._gpo_message) + \
                        gpo.po_message_extracted_comments(self._gpo_message)
         elif origin == "translator":
@@ -566,6 +565,7 @@ class pounit(pocommon.pounit):
         context = data.forceunicode(context)
         gpo.po_message_set_msgctxt(self._gpo_message, context.encode(self.CPO_ENC))
 
+    @classmethod
     def buildfromunit(cls, unit, encoding=None):
         """Build a native unit from a foreign unit, preserving as much
         information as possible."""
@@ -600,7 +600,6 @@ class pounit(pocommon.pounit):
             return newunit
         else:
             return base.TranslationUnit.buildfromunit(unit)
-    buildfromunit = classmethod(buildfromunit)
 
 
 class pofile(pocommon.pofile):
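The cpo change above swaps the old `buildfromunit = classmethod(buildfromunit)` post-assignment for the `@classmethod` decorator. The two spellings are equivalent, as this toy class pair shows:

```python
class OldStyle(object):
    def ident(cls):
        return cls.__name__
    ident = classmethod(ident)  # pre-decorator spelling, as before the diff

class NewStyle(object):
    @classmethod                # decorator spelling, as after the diff
    def ident(cls):
        return cls.__name__
```

The decorator form keeps the declaration next to the method body, which is why the diff drops the trailing assignment line.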
diff --git a/translate/storage/csvl10n.py b/translate/storage/csvl10n.py
index 7f7a5ee..ad3fba0 100644
--- a/translate/storage/csvl10n.py
+++ b/translate/storage/csvl10n.py
@@ -22,12 +22,9 @@
 or entire files (csvfile) for use with localisation
 """
 
-import csv
 import codecs
-try:
-    import cStringIO as StringIO
-except:
-    import StringIO
+import csv
+from cStringIO import StringIO
 
 from translate.misc import sparse
 from translate.storage import base
@@ -273,7 +270,7 @@ class csvunit(base.TranslationUnit):
             'context': from_unicode(self.context, encoding),
             'translator_comments': from_unicode(self.translator_comments, encoding),
             'developer_comments': from_unicode(self.developer_comments, encoding),
-            }
+        }
 
         return output
 
@@ -326,7 +323,7 @@ def valid_fieldnames(fieldnames):
 
 def detect_header(sample, dialect, fieldnames):
     """Test if file has a header or not, also returns number of columns in first row"""
-    inputfile = StringIO.StringIO(sample)
+    inputfile = StringIO(sample)
     try:
         reader = csv.reader(inputfile, dialect)
     except csv.Error:
@@ -348,7 +345,7 @@ class csvfile(base.TranslationStore):
     """This class represents a .csv file with various lines.
     The default format contains three columns: location, source, target"""
     UnitClass = csvunit
-    Name = _("Comma Separated Value")
+    Name = "Comma Separated Value"
     Mimetypes = ['text/comma-separated-values', 'text/csv']
     Extensions = ["csv"]
 
@@ -423,7 +420,7 @@ class csvfile(base.TranslationStore):
         return source.encode(encoding)
 
     def getoutput(self):
-        outputfile = StringIO.StringIO()
+        outputfile = StringIO()
         writer = csv.DictWriter(outputfile, self.fieldnames, extrasaction='ignore', dialect=self.dialect)
         # write header
         hdict = dict(map(None, self.fieldnames, self.fieldnames))
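The `dict(map(None, self.fieldnames, self.fieldnames))` idiom above is Python 2 only (`map` with `None` zip-pads); it merely builds a `{name: name}` mapping so that `writerow` emits a header row. A sketch of the same header-writing step using `DictWriter.writeheader()`, which is the portable equivalent:

```python
import csv
from io import StringIO

fieldnames = ['location', 'source', 'target']
out = StringIO()
writer = csv.DictWriter(out, fieldnames, extrasaction='ignore')
# Equivalent to writer.writerow({name: name for name in fieldnames}).
writer.writeheader()
# extrasaction='ignore' silently drops keys outside fieldnames.
writer.writerow({'location': 'file.c:1', 'source': 'Hello',
                 'target': 'Hallo', 'notes': 'dropped'})
```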
diff --git a/translate/storage/directory.py b/translate/storage/directory.py
index 0c39ba5..71757e2 100644
--- a/translate/storage/directory.py
+++ b/translate/storage/directory.py
@@ -25,7 +25,7 @@
 
 #TODO: consider also providing directories as we currently provide files
 
-from os import path
+import os
 
 from translate.storage import factory
 
@@ -52,7 +52,7 @@ class Directory:
     def unit_iter(self):
         """Iterator over all the units in all the files in this directory."""
         for dirname, filename in self.file_iter():
-            store = factory.getobject(path.join(dirname, filename))
+            store = factory.getobject(os.path.join(dirname, filename))
             #TODO: don't regenerate all the storage objects
             for unit in store.unit_iter():
                 yield unit
@@ -65,9 +65,8 @@ class Directory:
         """Populate the internal file data."""
         self.filedata = []
 
-        def addfile(arg, dirname, fnames):
+        for dirpath, dirnames, filenames in os.walk(self.dir):
+            fnames = dirnames + filenames
             for fname in fnames:
-                if path.isfile(path.join(dirname, fname)):
-                    self.filedata.append((dirname, fname))
-
-        path.walk(self.dir, addfile, None)
+                if os.path.isfile(os.path.join(dirpath, fname)):
+                    self.filedata.append((dirpath, fname))
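The directory.py hunk above ports the removed `os.path.walk` callback API to `os.walk`. Note the ported loop iterates `dirnames + filenames` and re-checks `isfile`, although `filenames` alone already holds every regular file; a minimal sketch of the collection logic:

```python
import os
import tempfile

def collect_files(top):
    """Return (dirpath, filename) pairs for every regular file under top."""
    filedata = []
    for dirpath, dirnames, filenames in os.walk(top):
        for fname in filenames:  # filenames already excludes directories
            if os.path.isfile(os.path.join(dirpath, fname)):
                filedata.append((dirpath, fname))
    return filedata

# Demonstrate on a throwaway tree: tmp/po/af.po
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, 'po'))
open(os.path.join(tmp, 'po', 'af.po'), 'w').close()
found = collect_files(tmp)
```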
diff --git a/translate/storage/dtd.py b/translate/storage/dtd.py
index bdd9170..362d309 100644
--- a/translate/storage/dtd.py
+++ b/translate/storage/dtd.py
@@ -57,7 +57,7 @@ Escaping in regular DTD
     - The % character is escaped using &#037; or &#37; or &#x25;
     - The " character is escaped using &quot;
     - The ' character is escaped using &apos; (partial roundtrip)
-    - The & character is escaped using &amp; (not yet implemented)
+    - The & character is escaped using &amp;
     - The < character is escaped using &lt; (not yet implemented)
     - The > character is escaped using &gt; (not yet implemented)
 
@@ -84,17 +84,18 @@ Escaping in Android DTD
     - The " character is escaped using \"
 """
 
-from translate.storage import base
-from translate.misc import quote
-
 import re
 import warnings
+from cStringIO import StringIO
 try:
     from lxml import etree
-    import StringIO
 except ImportError:
     etree = None
 
+from translate.misc import quote
+from translate.storage import base
+
+
 labelsuffixes = (".label", ".title")
+"""Label suffixes: entries with this suffix are able to be combined with accesskeys
+found in entries ending with :attr:`.accesskeysuffixes`"""
@@ -123,11 +124,16 @@ def unquotefromandroid(source):
     return value
 
 
+_DTD_CODEPOINT2NAME = {
+    ord("%"): "#037",  # Always escape % sign as &#037;.
+    ord("&"): "amp",
+   #ord("<"): "lt",  # Not really so useful.
+   #ord(">"): "gt",  # Not really so useful.
+}
+
 def quotefordtd(source):
     """Quotes and escapes a line for regular DTD files."""
-    source = source.replace("%", "&#037;")  # Always escape % sign as &#037;.
-    #source = source.replace("<", "&lt;")  # Not really so useful.
-    #source = source.replace(">", "&gt;")  # Not really so useful.
+    source = quote.entityencode(source, _DTD_CODEPOINT2NAME)
     if '"' in source:
         source = source.replace("'", "&apos;")  # This seems not to be run.
         if '="' not in source:  # Avoid escaping " chars in href attributes.
@@ -140,6 +146,19 @@ def quotefordtd(source):
     return value.encode('utf-8')
 
 
+_DTD_NAME2CODEPOINT = {
+    "quot":   ord('"'),
+    "amp":    ord("&"),
+   #"lt":     ord("<"),  # Not really so useful.
+   #"gt":     ord(">"),  # Not really so useful.
+   # FIXME these should probably be handled in a more general way
+    "#x0022": ord('"'),
+    "#187":   ord(u"»"),
+    "#037":   ord("%"),
+    "#37":    ord("%"),
+    "#x25":   ord("%"),
+}
+
 def unquotefromdtd(source):
     """unquotes a quoted dtd definition"""
     # extract the string, get rid of quoting
@@ -152,15 +171,7 @@ def unquotefromdtd(source):
     extracted = extracted.decode('utf-8')
     if quotechar == "'":
         extracted = extracted.replace("&apos;", "'")
-    extracted = extracted.replace("&quot;", "\"")
-    extracted = extracted.replace("&#x0022;", "\"")
-    # FIXME these should probably be handled with a lookup
-    extracted = extracted.replace("&#187;", u"»")
-    extracted = extracted.replace("&#037;", "%")
-    extracted = extracted.replace("&#37;", "%")
-    extracted = extracted.replace("&#x25;", "%")
-    #extracted = extracted.replace("&lt;", "<")  # Not really so useful.
-    #extracted = extracted.replace("&gt;", ">")  # Not really so useful.
+    extracted = quote.entitydecode(extracted, _DTD_NAME2CODEPOINT)
     return extracted
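The refactor above replaces the chains of per-entity `str.replace` calls with table-driven `quote.entityencode`/`quote.entitydecode` helpers fed by the two mapping dicts. A self-contained sketch of what such helpers do; the names mirror the diff, but these bodies are illustrative, not the toolkit's actual implementation:

```python
import re

# Cut-down versions of the tables defined in the diff.
CODEPOINT2NAME = {ord('%'): '#037', ord('&'): 'amp'}
NAME2CODEPOINT = {'quot': ord('"'), 'amp': ord('&'),
                  '#037': ord('%'), '#37': ord('%'), '#x25': ord('%')}

def entityencode(text, table):
    """Replace each mapped character with its &name; entity."""
    return ''.join('&%s;' % table[ord(c)] if ord(c) in table else c
                   for c in text)

def entitydecode(text, table):
    """Replace each known &name; entity with its character; leave others."""
    def sub(match):
        name = match.group(1)
        return chr(table[name]) if name in table else match.group(0)
    return re.sub(r'&([#\w]+);', sub, text)
```

A lookup table makes the equivalences (e.g. `&#037;`, `&#37;` and `&#x25;` all meaning `%`) data instead of repeated code.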
 
 
@@ -184,7 +195,7 @@ def removeinvalidamps(name, value):
 
     def is_valid_entity_name(name):
         """Check that supplied *name* is a valid entity name."""
-        if name.replace('.', '').isalnum():
+        if name.replace('.', '').replace('_', '').isalnum():
             return True
         elif name[0] == '#' and name[1:].isalnum():
             return True
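The `is_valid_entity_name` fix above extends the alphanumeric check to also strip underscores, so entity names like `some_entity.label` validate. Its rules can be exercised in isolation (this mirrors the patched check as shown in the hunk, with the implied final `return False`):

```python
def is_valid_entity_name(name):
    """Accept letters/digits plus '.' and '_', or a '#'-prefixed
    numeric character reference."""
    if name.replace('.', '').replace('_', '').isalnum():
        return True
    elif name[0] == '#' and name[1:].isalnum():
        return True
    return False
```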
@@ -552,7 +563,7 @@ class dtdfile(base.TranslationStore):
                     linesprocessed = newdtd.parse("\n".join(lines[start:end]))
                     if linesprocessed >= 1 and (not newdtd.isnull() or newdtd.unparsedlines):
                         self.units.append(newdtd)
-                except Exception, e:
+                except Exception as e:
                     warnings.warn("%s\nError occurred between lines %d and %d:\n%s" % (e, start + 1, end, "\n".join(lines[start:end])))
                 start += linesprocessed
 
@@ -572,11 +583,11 @@ class dtdfile(base.TranslationStore):
         return "".join(sources)
 
     def makeindex(self):
-        """makes self.index dictionary keyed on entities"""
-        self.index = {}
+        """makes self.id_index dictionary keyed on entities"""
+        self.id_index = {}
         for dtd in self.units:
             if not dtd.isnull():
-                self.index[dtd.entity] = dtd
+                self.id_index[dtd.entity] = dtd
 
     def _valid_store(self):
         """Validate the store to determine if it is valid
@@ -590,8 +601,8 @@ class dtdfile(base.TranslationStore):
         if etree is not None and not self.android:
             try:
                 # #expand is a Mozilla hack and are removed as they are not valid in DTDs
-                dtd = etree.DTD(StringIO.StringIO(re.sub("#expand", "", self.getoutput())))
-            except etree.DTDParseError, e:
+                dtd = etree.DTD(StringIO(re.sub("#expand", "", self.getoutput())))
+            except etree.DTDParseError as e:
                 warnings.warn("DTD parse error: %s" % e.error_log)
                 return False
         return True
diff --git a/translate/storage/fpo.py b/translate/storage/fpo.py
index 29f25a7..15c3050 100644
--- a/translate/storage/fpo.py
+++ b/translate/storage/fpo.py
@@ -28,15 +28,16 @@ directly, but can be used once cpo has been established to work."""
 # - previous msgid and msgctxt
 # - accept only unicodes everywhere
 
-import re
 import copy
-import cStringIO
+import re
+from cStringIO import StringIO
 
 from translate.lang import data
 from translate.misc.multistring import multistring
-from translate.storage import pocommon, base, cpo, poparser
+from translate.storage import base, cpo, pocommon, poparser
 from translate.storage.pocommon import encodingToUse
 
+
 lsep = " "
 """Separator for #: entries"""
 
@@ -124,7 +125,7 @@ class pounit(pocommon.pounit):
 
     def getnotes(self, origin=None):
         """Return comments based on origin value (programmer, developer, source code and translator)"""
-        if origin == None:
+        if origin is None:
             comments = u"\n".join(self.othercomments)
             comments += u"\n".join(self.automaticcomments)
         elif origin == "translator":
@@ -329,7 +330,7 @@ class pounit(pocommon.pounit):
 
     def parse(self, src):
         raise DeprecationWarning("Should not be parsing with a unit")
-        return poparser.parse_unit(poparser.ParseState(cStringIO.StringIO(src), pounit), self)
+        return poparser.parse_unit(poparser.ParseState(StringIO(src), pounit), self)
 
     def __str__(self):
         """convert to a string. double check that unicode is handled somehow here"""
@@ -388,6 +389,7 @@ class pounit(pocommon.pounit):
             id = u"%s\04%s" % (context, id)
         return id
 
+    @classmethod
     def buildfromunit(cls, unit):
         """Build a native unit from a foreign unit, preserving as much
         information as possible."""
@@ -420,7 +422,6 @@ class pounit(pocommon.pounit):
             return newunit
         else:
             return base.TranslationUnit.buildfromunit(unit)
-    buildfromunit = classmethod(buildfromunit)
 
 
 class pofile(pocommon.pofile):
@@ -503,7 +504,7 @@ class pofile(pocommon.pofile):
             del self._cpo_store
             if tmp_header_added:
                 self.units = self.units[1:]
-        except Exception, e:
+        except Exception as e:
             raise base.ParseError(e)
 
     def removeduplicates(self, duplicatestyle="merge"):
@@ -554,7 +555,7 @@ class pofile(pocommon.pofile):
         self._cpo_store = cpo.pofile(encoding=self._encoding, noheader=True)
         try:
             self._build_cpo_from_self()
-        except UnicodeEncodeError, e:
+        except UnicodeEncodeError as e:
             self._encoding = "utf-8"
             self.updateheader(add=True, Content_Type="text/plain; charset=UTF-8")
             self._build_cpo_from_self()
diff --git a/translate/storage/html.py b/translate/storage/html.py
index ba1386a..fb3bf22 100644
--- a/translate/storage/html.py
+++ b/translate/storage/html.py
@@ -22,15 +22,17 @@
 """module for parsing html files for translation"""
 
 import re
-from htmlentitydefs import name2codepoint
-import HTMLParser
+
+from six.moves import html_parser
+from six.moves.html_entities import name2codepoint
 
 from translate.storage import base
 from translate.storage.base import ParseError
 
+
 # Override the piclose tag from simple > to ?> otherwise we consume HTML
 # within the processing instructions
-HTMLParser.piclose = re.compile('\?>')
+html_parser.piclose = re.compile('\?>')
 
 
 strip_html_re = re.compile(r'''
@@ -116,23 +118,52 @@ class htmlunit(base.TranslationUnit):
         return self.locations
 
 
-class htmlfile(HTMLParser.HTMLParser, base.TranslationStore):
+class htmlfile(html_parser.HTMLParser, base.TranslationStore):
     UnitClass = htmlunit
 
-    MARKINGTAGS = ["p", "title", "h1", "h2", "h3", "h4", "h5", "h6", "th",
-                   "td", "div", "li", "dt", "dd", "address", "caption", "pre"]
+    MARKINGTAGS = [
+        "address",
+        "caption",
+        "div",
+        "dt", "dd",
+        "figcaption",
+        "h1", "h2", "h3", "h4", "h5", "h6",
+        "li",
+        "p",
+        "pre",
+        "title",
+        "th", "td",
+    ]
     """Text in these tags that will be extracted from the HTML document"""
 
     MARKINGATTRS = []
     """Text from tags with these attributes will be extracted from the HTML
     document"""
 
-    INCLUDEATTRS = ["alt", "summary", "standby", "abbr", "content"]
+    INCLUDEATTRS = [
+        "alt",
+        "abbr",
+        "content",
+        "standby",
+        "summary",
+        "title"
+    ]
     """Text from these attributes are extracted"""
 
-    SELF_CLOSING_TAGS = [u"area", u"base", u"basefont", u"br", u"col",
-                         u"frame", u"hr", u"img", u"input", u"link", u"meta",
-                         u"param"]
+    SELF_CLOSING_TAGS = [
+        u"area",
+        u"base",
+        u"basefont",
+        u"br",
+        u"col",
+        u"frame",
+        u"hr",
+        u"img",
+        u"input",
+        u"link",
+        u"meta",
+        u"param",
+    ]
     """HTML self-closing tags.  Tags that should be specified as <img /> but
     might be <img>.
     `Reference <http://learnwebsitemaking.com/htmlselfclosingtags.html>`_"""
@@ -154,7 +185,7 @@ class htmlfile(HTMLParser.HTMLParser, base.TranslationStore):
         else:
             self.callback = callback
         self.includeuntaggeddata = includeuntaggeddata
-        HTMLParser.HTMLParser.__init__(self)
+        html_parser.HTMLParser.__init__(self)
 
         if inputfile is not None:
             htmlsrc = inputfile.read()
diff --git a/translate/storage/ical.py b/translate/storage/ical.py
index 2c0a6ab..466febb 100644
--- a/translate/storage/ical.py
+++ b/translate/storage/ical.py
@@ -25,14 +25,13 @@ specification.
 
 The iCalendar specification uses the following naming conventions:
 
-    - Component: an event, journal entry, timezone, etc
-    - Property: a property of a component: summary, description, start
-      time, etc
-    - Attribute: an attribute of a property, e.g. language
+- Component: an event, journal entry, timezone, etc
+- Property: a property of a component: summary, description, start time, etc
+- Attribute: an attribute of a property, e.g. language
 
 The following are localisable in this implementation:
 
-    - VEVENT component: SUMMARY, DESCRIPTION, COMMENT and LOCATION properties
+- VEVENT component: SUMMARY, DESCRIPTION, COMMENT and LOCATION properties
 
 While other items could be localised this is not seen as important until use
 cases arise.  In such a case simply adjusting the component.name and
@@ -52,8 +51,9 @@ Future Format Support
     `vCard <http://en.wikipedia.org/wiki/VCard>`_ it is possible to expand
     this format to understand those if needed.
 """
+
 import re
-from StringIO import StringIO
+from cStringIO import StringIO
 
 import vobject
 
diff --git a/translate/storage/ini.py b/translate/storage/ini.py
index 539fb72..2abd87c 100644
--- a/translate/storage/ini.py
+++ b/translate/storage/ini.py
@@ -31,42 +31,47 @@ b : a string
 """
 
 import re
-from StringIO import StringIO
+from cStringIO import StringIO
+
+from iniparse import INIConfig
 
-from translate.misc.ini import INIConfig
 from translate.storage import base
 
-_dialects = {}
+
+dialects = {}
 
 
-def register_dialect(name, dialect):
-    """Register the dialect"""
-    _dialects[name] = dialect
+def register_dialect(dialect):
+    """Decorator that registers the dialect."""
+    dialects[dialect.name] = dialect
+    return dialect
 
 
 class Dialect(object):
     """Base class for differentiating dialect options and functions"""
-    pass
+    name = None
 
 
+ at register_dialect
 class DialectDefault(Dialect):
+    name = 'default'
 
     def unescape(self, text):
         return text
 
     def escape(self, text):
         return text.encode('utf-8')
-register_dialect("default", DialectDefault)
 
 
+ at register_dialect
 class DialectInno(DialectDefault):
+    name = 'inno'
 
     def unescape(self, text):
         return text.replace("%n", "\n").replace("%t", "\t")
 
     def escape(self, text):
         return text.replace("\t", "%t").replace("\n", "%n").encode('utf-8')
-register_dialect("inno", DialectInno)
 
 
 class iniunit(base.TranslationUnit):
@@ -92,7 +97,7 @@ class inifile(base.TranslationStore):
     def __init__(self, inputfile=None, unitclass=iniunit, dialect="default"):
         """construct an INI file, optionally reading in from inputfile."""
         self.UnitClass = unitclass
-        self._dialect = _dialects.get(dialect, DialectDefault)()  # fail correctly/use getattr/
+        self._dialect = dialects.get(dialect, DialectDefault)()  # fail correctly/use getattr/
         base.TranslationStore.__init__(self, unitclass=unitclass)
         self.units = []
         self.filename = ''
@@ -112,7 +117,7 @@ class inifile(base.TranslationStore):
             return ""
 
     def parse(self, input):
-        """parse the given file or file source string"""
+        """Parse the given file or file source string."""
         if hasattr(input, 'name'):
             self.filename = input.name
         elif not getattr(self, 'filename', ''):
@@ -121,12 +126,15 @@ class inifile(base.TranslationStore):
             inisrc = input.read()
             input.close()
             input = inisrc
+
         if isinstance(input, str):
             input = StringIO(input)
             self._inifile = INIConfig(input, optionxformvalue=None)
         else:
             self._inifile = INIConfig(file(input), optionxformvalue=None)
+
         for section in self._inifile:
             for entry in self._inifile[section]:
-                newunit = self.addsourceunit(self._dialect.unescape(self._inifile[section][entry]))
+                source = self._dialect.unescape(self._inifile[section][entry])
+                newunit = self.addsourceunit(source)
                 newunit.addlocation("[%s]%s" % (section, entry))
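The ini.py hunks above convert `register_dialect` from a function called after each class body into a class decorator. A hypothetical minimal version of that registry pattern, with the class registering itself at definition time:

```python
# Registry of dialect classes, keyed by each class's own name attribute.
dialects = {}

def register_dialect(dialect):
    """Decorator that stores the class in the registry and returns it."""
    dialects[dialect.name] = dialect
    return dialect

@register_dialect
class DialectDefault(object):
    name = 'default'

@register_dialect
class DialectInno(DialectDefault):
    name = 'inno'

# Lookup is unchanged; only the registration point moved into the decorator.
available = sorted(dialects)
```

The win is locality: the name lives on the class and registration cannot be forgotten after the class body, which is exactly what the trailing `register_dialect("inno", DialectInno)` calls risked.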
diff --git a/translate/storage/jsonl10n.py b/translate/storage/jsonl10n.py
index 23e61e3..9a2526a 100644
--- a/translate/storage/jsonl10n.py
+++ b/translate/storage/jsonl10n.py
@@ -18,61 +18,60 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-"""Class that manages JSON data files for translation
+r"""Class that manages JSON data files for translation
 
 JSON is an acronym for JavaScript Object Notation, it is an open standard
 designed for human-readable data interchange.
 
 JSON basic types:
 
-  - Number (integer or real)
-  - String (double-quoted Unicode with backslash escaping)
-  - Boolean (true or false)
-  - Array (an ordered sequence of values, comma-separated and enclosed
-    in square brackets)
-  - Object (a collection of key:value pairs, comma-separated and
-    enclosed in curly braces)
-  - null
-
-Example::
-
-  {
-       "firstName": "John",
-       "lastName": "Smith",
-       "age": 25,
-       "address": {
-           "streetAddress": "21 2nd Street",
-           "city": "New York",
-           "state": "NY",
-           "postalCode": "10021"
-       },
-       "phoneNumber": [
-           {
-             "type": "home",
-             "number": "212 555-1234"
-           },
-           {
-             "type": "fax",
-             "number": "646 555-4567"
-           }
-       ]
+- Number (integer or real)
+- String (double-quoted Unicode with backslash escaping)
+- Boolean (true or false)
+- Array (an ordered sequence of values, comma-separated and enclosed in square
+  brackets)
+- Object (a collection of key:value pairs, comma-separated and enclosed in
+  curly braces)
+- null
+
+Example:
+
+.. code-block:: json
+
+   {
+        "firstName": "John",
+        "lastName": "Smith",
+        "age": 25,
+        "address": {
+            "streetAddress": "21 2nd Street",
+            "city": "New York",
+            "state": "NY",
+            "postalCode": "10021"
+        },
+        "phoneNumber": [
+            {
+              "type": "home",
+              "number": "212 555-1234"
+            },
+            {
+              "type": "fax",
+              "number": "646 555-4567"
+            }
+        ]
    }
 
 
 TODO:
 
-  - Handle \u and other escapes in Unicode
-  - Manage data type storage and conversion. True -> "True" -> True
-  - Sort the extracted data to the order of the JSON file
+- Handle ``\u`` and other escapes in Unicode
+- Manage data type storage and conversion. True --> "True" --> True
+- Sort the extracted data to the order of the JSON file
 
 """
 
+import json
 import os
-from StringIO import StringIO
-try:
-    import json as json  # available since Python 2.6
-except ImportError:
-    import simplejson as json  # API compatible with the json module
+from cStringIO import StringIO
 
 from translate.storage import base
 
@@ -183,9 +182,9 @@ class JsonFile(base.TranslationStore):
                                                           i, name_node, data):
                     yield x
         # apply filter
-        elif (stop is None \
-            or (isinstance(last_node, dict) and name_node in stop) \
-            or (isinstance(last_node, list) and name_last_node in stop)):
+        elif (stop is None or
+              (isinstance(last_node, dict) and name_node in stop) or
+              (isinstance(last_node, list) and name_last_node in stop)):
 
             if isinstance(data, str) or isinstance(data, unicode):
                 yield (prev, data, last_node, name_node)
@@ -213,7 +212,7 @@ class JsonFile(base.TranslationStore):
             input = StringIO(input)
         try:
             self._file = json.load(input)
-        except ValueError, e:
+        except ValueError as e:
             raise base.ParseError(e.message)
 
         for k, data, ref, item in self._extract_translatables(self._file,
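The `except ValueError, e` to `except ValueError as e` change in the jsonl10n.py hunk is the standard Python 3 porting fix: the comma form is Python-2-only syntax, while `as` works on Python 2.6+ and 3. A small sketch of the pattern around `json` parsing:

```python
import json

def safe_load(text):
    """Parse JSON, turning a parse failure into an error string."""
    try:
        return json.loads(text)
    except ValueError as e:  # valid on Python 2.6+ and required on Python 3
        return "parse error: %s" % e

ok = safe_load('{"a": 1}')
bad = safe_load('{not json}')
```

On Python 3 the `as`-bound name is also deleted when the `except` block exits, another reason the old comma syntax could not be kept.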
diff --git a/translate/storage/lisa.py b/translate/storage/lisa.py
index 4287e5b..7c7b964 100644
--- a/translate/storage/lisa.py
+++ b/translate/storage/lisa.py
@@ -20,17 +20,15 @@
 
 """Parent class for LISA standards (TMX, TBX, XLIFF)"""
 
-import re
-
 try:
     from lxml import etree
-    from translate.misc.xml_helpers import getText, getXMLlang, setXMLlang, \
-                                           getXMLspace, setXMLspace, namespaced
-except ImportError, e:
+    from translate.misc.xml_helpers import (getText, getXMLlang, getXMLspace,
+                                            namespaced, setXMLlang, setXMLspace)
+except ImportError as e:
     raise ImportError("lxml is not installed. It might be possible to continue without support for XML formats.")
 
-from translate.storage import base
 from translate.lang import data
+from translate.storage import base
 
 
 class LISAunit(base.TranslationUnit):
@@ -160,7 +158,7 @@ class LISAunit(base.TranslationUnit):
                     terms = languageNode.iter(self.namespaced(self.textNode))
                     try:
                         languageNode = terms.next()
-                    except StopIteration, e:
+                    except StopIteration as e:
                         pass
                 languageNode.text = text
         else:
@@ -258,11 +256,11 @@ class LISAunit(base.TranslationUnit):
     rid = property(lambda self: self.xmlelement.attrib[self.namespaced('rid')],
                    lambda self, value: self._set_property(self.namespaced('rid'), value))
 
+    @classmethod
     def createfromxmlElement(cls, element):
         term = cls(None, empty=True)
         term.xmlelement = element
         return term
-    createfromxmlElement = classmethod(createfromxmlElement)
 
 
 class LISAfile(base.TranslationStore):
@@ -339,12 +337,7 @@ class LISAfile(base.TranslationStore):
             xml.seek(0)
             posrc = xml.read()
             xml = posrc
-        if etree.LXML_VERSION >= (2, 1, 0):
-            #Since version 2.1.0 we can pass the strip_cdata parameter to
-            #indicate that we don't want cdata to be converted to raw XML
-            parser = etree.XMLParser(strip_cdata=False)
-        else:
-            parser = etree.XMLParser()
+        parser = etree.XMLParser(strip_cdata=False)
         self.document = etree.fromstring(xml, parser).getroottree()
         self._encoding = self.document.docinfo.encoding
         self.initbody()
diff --git a/translate/storage/mo.py b/translate/storage/mo.py
index 439a1f8..0145265 100644
--- a/translate/storage/mo.py
+++ b/translate/storage/mo.py
@@ -45,11 +45,10 @@ import re
 import struct
 
 from translate.misc.multistring import multistring
-from translate.storage import base
-from translate.storage import po
-from translate.storage import poheader
+from translate.storage import base, po, poheader
 
-MO_MAGIC_NUMBER = 0x950412deL
+
+MO_MAGIC_NUMBER = 0x950412de
 
 
 def mounpack(filename='messages.mo'):
@@ -133,7 +132,7 @@ class mounit(base.TranslationUnit):
 class mofile(poheader.poheader, base.TranslationStore):
     """A class representing a .mo file."""
     UnitClass = mounit
-    Name = _("Gettext MO file")
+    Name = "Gettext MO file"
     Mimetypes = ["application/x-gettext-catalog", "application/x-mo"]
     Extensions = ["mo", "gmo"]
     _binary = True
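Dropping the `L` suffix from `0x950412deL` in the mo.py hunk is another Python 3 fix: long literals with a trailing `L` are a `SyntaxError` on Python 3, while the plain literal works on both (Python 2 promotes `int` to `long` automatically). A sketch of how a `.mo` reader typically uses this magic number to detect byte order (a common pattern, not necessarily the toolkit's exact code):

```python
import struct

MO_MAGIC_NUMBER = 0x950412de  # no "L" suffix: valid on Python 2 and 3

def detect_byteorder(header):
    """Return '<' or '>' depending on which byte order the magic matches."""
    (value,) = struct.unpack("<I", header[:4])
    if value == MO_MAGIC_NUMBER:
        return "<"
    (value,) = struct.unpack(">I", header[:4])
    if value == MO_MAGIC_NUMBER:
        return ">"
    raise ValueError("not a MO file")

little = struct.pack("<I", MO_MAGIC_NUMBER)
big = struct.pack(">I", MO_MAGIC_NUMBER)
```

Because the same 32-bit magic is stored in the file's native byte order, reading it both ways is enough to decide how to unpack the rest of the header.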
diff --git a/translate/storage/mozilla_lang.py b/translate/storage/mozilla_lang.py
index e3d0783..a1117eb 100644
--- a/translate/storage/mozilla_lang.py
+++ b/translate/storage/mozilla_lang.py
@@ -23,8 +23,7 @@
 
 """A class to manage Mozilla .lang files."""
 
-from translate.storage import base
-from translate.storage import txt
+from translate.storage import base, txt
 
 
 class LangUnit(base.TranslationUnit):
@@ -39,7 +38,7 @@ class LangUnit(base.TranslationUnit):
             unchanged = " {ok}"
         else:
             unchanged = ""
-        if self.target == "" or self.target is None:
+        if not self.istranslated():
             target = self.source
         else:
             target = self.target
@@ -59,7 +58,7 @@ class LangStore(txt.TxtFile):
     """We extend TxtFile, since that has a lot of useful stuff for encoding"""
     UnitClass = LangUnit
 
-    Name = _("Mozilla .lang")
+    Name = "Mozilla .lang"
     Extensions = ['lang']
 
     def __init__(self, inputfile=None, flavour=None, encoding="utf-8", mark_active=False):
@@ -92,7 +91,7 @@ class LangStore(txt.TxtFile):
                 readyTrans = False  # We already have our translation
                 continue
 
-            if line.startswith('#'): # A comment
+            if line.startswith('#'):  # A comment
                 comment += line[1:].strip() + "\n"
 
             if line.startswith(';'):
diff --git a/translate/storage/odf_shared.py b/translate/storage/odf_shared.py
index b32139e..ee116ef 100644
--- a/translate/storage/odf_shared.py
+++ b/translate/storage/odf_shared.py
@@ -42,7 +42,8 @@ def define_tables():
         (text_uri, 'ruby-base'),
         (text_uri, 's'),
         (text_uri, 'span'),
-        (text_uri, 'tab')]
+        (text_uri, 'tab')
+    ]
 
     no_translate_content_elements = [
 
@@ -173,7 +174,7 @@ def define_tables():
 
         # From translate
         (text_uri, 'tracked-changes'),
-        ]
+    ]
 
     globals()['inline_elements'] = inline_elements
     globals()['no_translate_content_elements'] = no_translate_content_elements
diff --git a/translate/storage/omegat.py b/translate/storage/omegat.py
index f5ac646..5fac9d8 100644
--- a/translate/storage/omegat.py
+++ b/translate/storage/omegat.py
@@ -41,10 +41,10 @@ Encoding
 
 import csv
 import locale
-import sys
 
 from translate.storage import base
 
+
 OMEGAT_FIELDNAMES = ["source", "target", "comment"]
 """Field names for an OmegaT glossary unit"""
 
@@ -139,7 +139,7 @@ class OmegaTUnit(base.TranslationUnit):
 
 class OmegaTFile(base.TranslationStore):
     """An OmegaT glossary file"""
-    Name = _("OmegaT Glossary")
+    Name = "OmegaT Glossary"
     Mimetypes = ["application/x-omegat-glossary"]
     Extensions = ["utf8"]
 
@@ -199,7 +199,7 @@ class OmegaTFile(base.TranslationStore):
 
 class OmegaTFileTab(OmegaTFile):
     """An OmegaT glossary file in the default system encoding"""
-    Name = _("OmegaT Glossary")
+    Name = "OmegaT Glossary"
     Mimetypes = ["application/x-omegat-glossary"]
     Extensions = ["tab"]
 
diff --git a/translate/storage/oo.py b/translate/storage/oo.py
index ec59572..f282e27 100644
--- a/translate/storage/oo.py
+++ b/translate/storage/oo.py
@@ -36,8 +36,8 @@ import os
 import re
 import warnings
 
-from translate.misc import quote
-from translate.misc import wStringIO
+from translate.misc import quote, wStringIO
+
 
 # File normalisation
 
@@ -182,8 +182,8 @@ class ooline(object):
     def setparts(self, parts):
         """create a line from its tab-delimited parts"""
         if len(parts) != 15:
-            warnings.warn("oo line contains %d parts, it should contain 15: %r" % \
-                    (len(parts), parts))
+            warnings.warn("oo line contains %d parts, it should contain 15: %r" %
+                          (len(parts), parts))
             newparts = list(parts)
             if len(newparts) < 15:
                 newparts = newparts + [""] * (15 - len(newparts))
diff --git a/translate/storage/php.py b/translate/storage/php.py
index 6455e78..4889fb1 100644
--- a/translate/storage/php.py
+++ b/translate/storage/php.py
@@ -22,26 +22,32 @@
 entire files :class:`phpfile`. These files are used in translating many
 PHP based applications.
 
-Only PHP files written with these conventions are supported::
-
-  $lang['item'] = "vale";  # Array of values
-  $some_entity = "value";  # Named variables
-  define("ENTITY", "value");
-  $lang = array(
-     'item1' => 'value1'    ,   #Supports space before comma
-     'item2' => 'value2',
-  );
-  $lang = array(    # Nested arrays
-     'item1' => 'value1',
-     'item2' => array(
-        'key' => 'value'    ,   #Supports space before comma
-        'key2' => 'value2',
-     ),
-  );
-
-Nested arrays without key for nested array are not supported::
-
-  $lang = array(array('key' => 'value'));
+Only PHP files written with these conventions are supported:
+
+.. code-block:: php
+
+   <?php
+   $lang['item'] = "vale";  # Array of values
+   $some_entity = "value";  # Named variables
+   define("ENTITY", "value");
+   $lang = array(
+      'item1' => 'value1'    ,   #Supports space before comma
+      'item2' => 'value2',
+   );
+   $lang = array(    # Nested arrays
+      'item1' => 'value1',
+      'item2' => array(
+         'key' => 'value'    ,   #Supports space before comma
+         'key2' => 'value2',
+      ),
+   );
+
+Nested arrays without key for nested array are not supported:
+
+.. code-block:: php
+
+   <?php
+   $lang = array(array('key' => 'value'));
 
 The working of PHP strings and specifically the escaping conventions which
 differ between single quote (') and double quote (") characters are
@@ -73,10 +79,11 @@ def phpencode(text, quotechar="'"):
         # pretty layout that might have appeared in muliline entries we might
         # lose some "blah\nblah" layouts but that's probably not the most
         # frequent use case. See bug 588
-        escapes = [("\\", "\\\\"), ("\r", "\\r"), ("\t", "\\t"),
-                   ("\v", "\\v"), ("\f", "\\f"), ("\\\\$", "\\$"),
-                   ('"', '\\"'), ("\\\\", "\\"),
-                  ]
+        escapes = [
+            ("\\", "\\\\"), ("\r", "\\r"), ("\t", "\\t"),
+            ("\v", "\\v"), ("\f", "\\f"), ("\\\\$", "\\$"),
+            ('"', '\\"'), ("\\\\", "\\"),
+        ]
         for a, b in escapes:
             text = text.replace(a, b)
         return text
@@ -88,7 +95,7 @@ def phpdecode(text, quotechar="'"):
     """Convert PHP escaped string to a Python string."""
 
     def decode_octal_hex(match):
-        """decode Octal \NNN and Hex values"""
+        r"""decode Octal \NNN and Hex values"""
         if "octal" in match.groupdict():
             return match.groupdict()['octal'].decode("string_escape")
         elif "hex" in match.groupdict():
@@ -101,9 +108,10 @@ def phpdecode(text, quotechar="'"):
     if quotechar == '"':
         # We do not escape \$ as it is used by variables and we can't
         # roundtrip that item.
-        escapes = [('\\"', '"'), ("\\\\", "\\"), ("\\n", "\n"), ("\\r", "\r"),
-                   ("\\t", "\t"), ("\\v", "\v"), ("\\f", "\f"),
-                  ]
+        escapes = [
+            ('\\"', '"'), ("\\\\", "\\"), ("\\n", "\n"), ("\\r", "\r"),
+            ("\\t", "\t"), ("\\v", "\v"), ("\\f", "\f"),
+        ]
         for a, b in escapes:
             text = text.replace(a, b)
         text = re.sub(r"(?P<octal>\\[0-7]{1,3})", decode_octal_hex, text)
@@ -201,6 +209,13 @@ class phpfile(base.TranslationStore):
             inputfile.close()
             self.parse(phpsrc)
 
+    def __str__(self):
+        """Convert the units back to lines."""
+        lines = []
+        for unit in self.units:
+            lines.append(str(unit))
+        return "".join(lines)
+
     def parse(self, phpsrc):
         """Read the source of a PHP file in and include them as units."""
         newunit = phpunit()
@@ -244,8 +259,9 @@ class phpfile(base.TranslationStore):
                 newunit.addnote(line, "developer")
                 continue
 
-            # If an array starts in the current line.
-            if line.lower().replace(" ", "").find('array(') != -1:
+            # If an array starts in the current line and is using array syntax
+            if (line.lower().replace(" ", "").find('array(') != -1 and
+                line.lower().replace(" ", "").find('array()') == -1):
                 # If this is a nested array.
                 if inarray:
                     prename = prename + line[:line.find('=')].strip() + "->"
@@ -324,23 +340,24 @@ class phpfile(base.TranslationStore):
                 if invalue:
                     value = line
 
-            # Get the end delimiter position (colonpos)
-            colonpos = value.rfind(enddel)
+            # Get the end delimiter position.
+            enddelpos = value.rfind(enddel)
 
-            while colonpos != -1:
+            # Process the current line until all entries on it are parsed.
+            while enddelpos != -1:
                 # Check if the latest non-whitespace character before the end
-                # delimiter is the valuequote
-                if value[:colonpos].rstrip()[-1] == valuequote:
+                # delimiter is the valuequote.
+                if value[:enddelpos].rstrip()[-1] == valuequote:
                     # Save the value string without trailing whitespaces and
-                    # without the ending quotes
-                    newunit.value = lastvalue + value[:colonpos].rstrip()[:-1]
+                    # without the ending quotes.
+                    newunit.value = lastvalue + value[:enddelpos].rstrip()[:-1]
                     newunit.escape_type = valuequote
                     lastvalue = ""
                     invalue = False
 
                 # If there is more text (a comment) after the translation.
-                if not invalue and colonpos != (len(value) - 1):
-                    commentinlinepos = value.find("//", colonpos)
+                if not invalue and enddelpos != (len(value) - 1):
+                    commentinlinepos = value.find("//", enddelpos)
                     if commentinlinepos != -1:
                         newunit.addnote(value[commentinlinepos+2:].strip(),
                                         "developer")
@@ -352,18 +369,18 @@ class phpfile(base.TranslationStore):
                     value = ""
                     newunit = phpunit()
 
-                # Update end delimiter position (colonpos) to the previous last
-                # appearance of end delimiter.
-                colonpos = value.rfind(enddel, 0, colonpos)
+                # Update end delimiter position to the previous last appearance
+                # of the end delimiter, because it might be several entries in
+                # the same line.
+                enddelpos = value.rfind(enddel, 0, enddelpos)
+            else:
+                # After processing current line, if we are not in an array,
+                # fall back to default dialect (PHP simple variable syntax).
+                if not inarray:
+                    equaldel = "="
+                    enddel = ";"
 
             # If this is part of a multiline translation, just append it to the
             # previous translation lines.
             if invalue:
                 lastvalue = lastvalue + value + "\n"
-
-    def __str__(self):
-        """Convert the units back to lines."""
-        lines = []
-        for unit in self.units:
-            lines.append(str(unit))
-        return "".join(lines)
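The php.py parser rework above leans on Python's `while`/`else`: the `else` suite runs once when the loop condition becomes false (here, when `rfind` returns -1), which is where the parser falls back to the simple-variable dialect. A small illustration of the same right-to-left `rfind` loop shape:

```python
def rfind_all(text, needle):
    """Collect every index of needle in text, scanning right to left."""
    positions = []
    pos = text.rfind(needle)
    while pos != -1:
        positions.append(pos)
        # Narrow the search window to everything before the last hit.
        pos = text.rfind(needle, 0, pos)
    else:
        # Runs exactly once, after the loop finishes normally (no break) --
        # the spot the php.py parser uses to reset its dialect state.
        positions.reverse()
    return positions

found = rfind_all("a;b;c;", ";")
```

Since this loop never uses `break`, the `else` here is equivalent to code placed after the loop; the diff presumably uses the construct to keep the fallback visually attached to the scanning loop.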
diff --git a/translate/storage/placeables/__init__.py b/translate/storage/placeables/__init__.py
index fe12efa..143806c 100644
--- a/translate/storage/placeables/__init__.py
+++ b/translate/storage/placeables/__init__.py
@@ -54,6 +54,7 @@ from base import __all__ as all_your_base
 from strelem import StringElem
 from parse import parse
 
+
 __all__ = [
     'base', 'interfaces', 'general', 'parse', 'StringElem', 'xliff'
 ] + all_your_base
diff --git a/translate/storage/placeables/general.py b/translate/storage/placeables/general.py
index 6a3499c..9edfae0 100644
--- a/translate/storage/placeables/general.py
+++ b/translate/storage/placeables/general.py
@@ -27,6 +27,7 @@ import re
 
 from translate.storage.placeables.base import G, Ph, StringElem
 
+
 __all__ = ['AltAttrPlaceable', 'XMLEntityPlaceable', 'XMLTagPlaceable', 'parsers', 'to_general_placeables']
 
 
@@ -57,12 +58,12 @@ class AltAttrPlaceable(G):
 
 
 class NewlinePlaceable(Ph):
-    """Matches new-lines."""
+    """Placeable for new-lines."""
 
     iseditable = False
     isfragile = True
     istranslatable = False
-    regex = re.compile(r'\n')
+    regex = re.compile(r'\r\n|\n|\r')
     parse = classmethod(regex_parse)
 
 
@@ -148,8 +149,8 @@ class JavaMessageFormatPlaceable(Ph):
 
 class FormattingPlaceable(Ph):
     """Placeable representing string formatting variables."""
-    #For more information, see  man 3 printf
-    #We probably don't want to support absolutely everything
+    # For more information, see  man 3 printf
+    # We probably don't want to support absolutely everything
 
     iseditable = False
     istranslatable = False
@@ -186,7 +187,7 @@ class FilePlaceable(Ph):
 
     istranslatable = False
     regex = re.compile(r"(~/|/|\./)([-A-Za-z0-9_\$\.\+\!\*\(\),;:@&=\?/~\#\%]|\\){3,}")
-    #TODO: Handle Windows drive letters. Some common Windows paths won't be
+    # TODO: Handle Windows drive letters. Some common Windows paths won't be
     # handled correctly while not allowing spaces, such as
     #     "C:\Documents and Settings"
     #     "C:\Program Files"
@@ -301,17 +302,16 @@ class OptionPlaceable(Ph):
 
 
 def to_general_placeables(tree, classmap={
-        G: (AltAttrPlaceable,),
-        Ph: (NumberPlaceable,
-             XMLEntityPlaceable,
-             XMLTagPlaceable,
-             UrlPlaceable,
-             FilePlaceable,
-             EmailPlaceable,
-             OptionPlaceable,
-             PunctuationPlaceable,
-            ),
-        }):
+                                    G: (AltAttrPlaceable,),
+                                    Ph: (NumberPlaceable,
+                                         XMLEntityPlaceable,
+                                         XMLTagPlaceable,
+                                         UrlPlaceable,
+                                         FilePlaceable,
+                                         EmailPlaceable,
+                                         OptionPlaceable,
+                                         PunctuationPlaceable,),
+                                }):
     if not isinstance(tree, StringElem):
         return tree
 
diff --git a/translate/storage/placeables/lisa.py b/translate/storage/placeables/lisa.py
index e90d697..ba5c8d2 100644
--- a/translate/storage/placeables/lisa.py
+++ b/translate/storage/placeables/lisa.py
@@ -21,9 +21,10 @@
 from lxml import etree
 
 from translate.misc.xml_helpers import normalize_xml_space
-from translate.storage.placeables import base, xliff, StringElem
+from translate.storage.placeables import StringElem, base, xliff
 from translate.storage.xml_extract import misc
 
+
 __all__ = ['xml_to_strelem', 'strelem_to_xml']
 # Use the above functions as entry points into this module. The rest are
 # used by these functions.
@@ -211,7 +212,7 @@ def strelem_to_xml(parent_node, elem):
 def parse_xliff(pstr):
     try:
         return xml_to_strelem(etree.fromstring('<source>%s</source>' % (pstr)))
-    except Exception, exc:
+    except Exception as exc:
         raise
         return None
 xliff.parsers = [parse_xliff]
diff --git a/translate/storage/placeables/parse.py b/translate/storage/placeables/parse.py
index b12ab24..adf697f 100644
--- a/translate/storage/placeables/parse.py
+++ b/translate/storage/placeables/parse.py
@@ -23,7 +23,7 @@ Contains the ``parse`` function that parses normal strings into StringElem-
 based "rich" string element trees.
 """
 
-from translate.storage.placeables import base, StringElem
+from translate.storage.placeables import StringElem, base
 
 
 def parse(tree, parse_funcs):
diff --git a/translate/storage/placeables/strelem.py b/translate/storage/placeables/strelem.py
index 86e2110..1f2bad0 100644
--- a/translate/storage/placeables/strelem.py
+++ b/translate/storage/placeables/strelem.py
@@ -2,6 +2,7 @@
 # -*- coding: utf-8 -*-
 #
 # Copyright 2009 Zuza Software Foundation
+# Copyright 2013-2014 F Wolff
 #
 # This file is part of the Translate Toolkit.
 #
@@ -93,14 +94,14 @@ class StringElem(object):
         if not isinstance(rhs, StringElem):
             return False
 
-        return  self.id == rhs.id and \
-                self.iseditable == rhs.iseditable and \
-                self.istranslatable == rhs.istranslatable and \
-                self.isvisible == rhs.isvisible and \
-                self.rid == rhs.rid and \
-                self.xid == rhs.xid and \
-                len(self.sub) == len(rhs.sub) and \
-                not [i for i in range(len(self.sub)) if self.sub[i] != rhs.sub[i]]
+        return self.id == rhs.id and \
+               self.iseditable == rhs.iseditable and \
+               self.istranslatable == rhs.istranslatable and \
+               self.isvisible == rhs.isvisible and \
+               self.rid == rhs.rid and \
+               self.xid == rhs.xid and \
+               len(self.sub) == len(rhs.sub) and \
+               not [i for i in range(len(self.sub)) if self.sub[i] != rhs.sub[i]]
 
     def __ge__(self, rhs):
         """Emulate the ``unicode`` class."""
@@ -154,7 +155,7 @@ class StringElem(object):
         elemstr = ', '.join([repr(elem) for elem in self.sub])
         return '<%(class)s(%(id)s%(rid)s%(xid)s[%(subs)s])>' % {
             'class': self.__class__.__name__,
-            'id': self.id  is not None and 'id="%s" ' % (self.id) or '',
+            'id': self.id is not None and 'id="%s" ' % (self.id) or '',
             'rid': self.rid is not None and 'rid="%s" ' % (self.rid) or '',
             'xid': self.xid is not None and 'xid="%s" ' % (self.xid) or '',
             'subs': elemstr,
@@ -185,10 +186,10 @@ class StringElem(object):
                     elem.sub[i] = f(elem.sub[i])
 
     def copy(self):
-	"""Returns a copy of the sub-tree.  This should be overridden in
-	sub-classes with more data.
+        """Returns a copy of the sub-tree.  This should be overridden in
+        sub-classes with more data.
 
-	.. note:: ``self.renderer`` is **not** copied."""
+        .. note:: ``self.renderer`` is **not** copied."""
         #logging.debug('Copying instance of class %s' % (self.__class__.__name__))
         cp = self.__class__(id=self.id, xid=self.xid, rid=self.rid)
         for sub in self.sub:
@@ -366,7 +367,7 @@ class StringElem(object):
         for node in marked_nodes:
             try:
                 self.delete_elem(node)
-            except ElementNotFoundError, e:
+            except ElementNotFoundError as e:
                 pass
 
         if start['elem'] is not end['elem']:
@@ -515,7 +516,7 @@ class StringElem(object):
     def insert(self, offset, text, preferred_parent=None):
         """Insert the given text at the specified offset of this string-tree's
             string (Unicode) representation."""
-        if offset < 0 or offset > len(self) + 1:
+        if offset < 0 or offset > len(self):
             raise IndexError('Index out of range: %d' % (offset))
         if isinstance(text, (str, unicode)):
             text = StringElem(text)
@@ -563,12 +564,22 @@ class StringElem(object):
             return False
 
         # Case 2 #
-        if offset >= len(self):
+        if offset == len(self):
             #logging.debug('Case 2')
             last = self.flatten()[-1]
             parent = self.get_ancestor_where(last, lambda x: x.iseditable)
             if parent is None:
                 parent = self
+            preferred_type = type(preferred_parent)
+            oelem_type = type(oelem)
+            len_oelem = len(oelem)
+            if preferred_parent is oelem:
+                # The preferred parent is still in this StringElem
+                return oelem.insert(len_oelem, text)
+            elif oelem_type == preferred_type:
+                # oelem has the right type
+                return oelem.insert(len_oelem, text)
+
             parent.sub.append(checkleaf(parent, text))
             return True
 
@@ -601,7 +612,15 @@ class StringElem(object):
             bparent = self.get_parent_elem(before)
             # bparent cannot be a leaf (because it has before as a child), so
             # we insert the text as StringElem(text)
-            bindex = bparent.sub.index(before)
+            # We need the index of `before`, but can't use .index(), since we
+            # need to test identity, otherwise we might hit an earlier
+            # occurrence of an identical string (likely with lots of newlines,
+            # for example).
+            bindex = 0
+            for child in bparent.sub:
+                if child is before:
+                    break
+                bindex += 1
             bparent.sub.insert(bindex + 1, text)
             return True
 
@@ -624,16 +643,21 @@ class StringElem(object):
                 # Both are the wrong type, so we add it as if neither were
                 # editable
                 bparent = self.get_parent_elem(before)
-                bindex = bparent.sub.index(before)
+                # As above, we can't use .index()
+                bindex = 0
+                for child in bparent.sub:
+                    if child is before:
+                        break
+                    bindex += 1
                 bparent.sub.insert(bindex + 1, text)
                 return True
 
-            return before.insert(len(before) + 1, text)  # Reinterpret as a case 2
+            return before.insert(len(before), text)  # Reinterpret as a case 2
 
         # 4.3 #
         elif before.iseditable and not oelem.iseditable:
             #logging.debug('Case 4.3')
-            return before.insert(len(before) + 1, text)  # Reinterpret as a case 2
+            return before.insert(len(before), text)  # Reinterpret as a case 2
 
         # 4.4 #
         elif not before.iseditable and oelem.iseditable:
@@ -811,14 +835,14 @@ class StringElem(object):
         if verbose:
             out += u' ' + repr(self)
 
-        print out
+        print(out)
 
         for elem in self.sub:
             if isinstance(elem, StringElem):
                 elem.print_tree(indent + 1, verbose=verbose)
             else:
-                print (u'%s%s[%s]' % (indent_prefix, indent_prefix,
-                                      elem)).encode('utf-8')
+                print((u'%s%s[%s]' % (indent_prefix, indent_prefix,
+                                      elem)).encode('utf-8'))
 
     def prune(self):
         """Remove unnecessary nodes to make the tree optimal."""
diff --git a/translate/storage/placeables/terminology.py b/translate/storage/placeables/terminology.py
index 86da9e1..35406a5 100644
--- a/translate/storage/placeables/terminology.py
+++ b/translate/storage/placeables/terminology.py
@@ -22,7 +22,8 @@
 Contains the placeable that represents a terminology term.
 """
 
-from translate.storage.placeables import base, StringElem
+from translate.storage.placeables import StringElem, base
+
 
 __all__ = ['TerminologyPlaceable', 'parsers']
 
diff --git a/translate/storage/placeables/test_base.py b/translate/storage/placeables/test_base.py
index ab7f9cb..9a16787 100644
--- a/translate/storage/placeables/test_base.py
+++ b/translate/storage/placeables/test_base.py
@@ -2,6 +2,7 @@
 # -*- coding: utf-8 -*-
 #
 # Copyright 2009 Zuza Software Foundation
+# Copyright 2014 F Wolff
 #
 # This file is part of the Translate Toolkit.
 #
@@ -20,7 +21,7 @@
 
 from pytest import mark
 
-from translate.storage.placeables import base, general, parse, xliff, StringElem
+from translate.storage.placeables import StringElem, base, general, parse, xliff
 
 
 class TestStringElem:
@@ -147,8 +148,14 @@ class TestStringElem:
 
         # Test inserting at the end
         elem = self.elem.copy()
-        elem.insert(len(elem) + 1, u'xxx')
+        elem.insert(len(elem), u'xxx')
         assert elem.flatten()[-1] == StringElem(u'xxx')
+        assert unicode(elem).endswith('&brandLong;</a>xxx')
+
+        elem = self.elem.copy()
+        elem.insert(len(elem), u">>>", preferred_parent=elem.sub[-1])
+        assert unicode(elem.flatten()[-1]) == u'</a>>>>'
+        assert unicode(elem).endswith('&brandLong;</a>>>>')
 
         # Test inserting in the middle of an existing string
         elem = self.elem.copy()
diff --git a/translate/storage/placeables/test_general.py b/translate/storage/placeables/test_general.py
index 9022ad1..7f7de16 100644
--- a/translate/storage/placeables/test_general.py
+++ b/translate/storage/placeables/test_general.py
@@ -103,17 +103,17 @@ def test_placeable_formatting():
     assert fp.parse(u'%1$s was kicked by %2$s')[0] == fp([u'%1$s'])
     assert fp.parse(u'There were %Id cows')[1] == fp([u'%Id'])
     assert fp.parse(u'There were % d cows')[1] == fp([u'% d'])
-    #only a real space is allowed as formatting flag
+    # only a real space is allowed as formatting flag
     assert fp.parse(u'There were %\u00a0d cows') is None
     assert fp.parse(u"There were %'f cows")[1] == fp([u"%'f"])
     assert fp.parse(u"There were %#x cows")[1] == fp([u"%#x"])
 
-    #field width
+    # field width
     assert fp.parse(u'There were %3d cows')[1] == fp([u'%3d'])
     assert fp.parse(u'There were %33d cows')[1] == fp([u'%33d'])
     assert fp.parse(u'There were %*d cows')[1] == fp([u'%*d'])
 
-    #numbered variables
+    # numbered variables
     assert fp.parse(u'There were %1$d cows')[1] == fp([u'%1$d'])
 
 
diff --git a/translate/storage/placeables/test_lisa.py b/translate/storage/placeables/test_lisa.py
index 69b29fb..48b0d60 100644
--- a/translate/storage/placeables/test_lisa.py
+++ b/translate/storage/placeables/test_lisa.py
@@ -21,7 +21,7 @@
 
 from lxml import etree
 
-from translate.storage.placeables import lisa, StringElem
+from translate.storage.placeables import StringElem, lisa
 from translate.storage.placeables.xliff import Bx, Ex, G, UnknownXML, X
 
 
@@ -46,7 +46,7 @@ def test_xml_to_strelem():
 def test_xml_space():
     source = etree.fromstring(u'<source xml:space="default"> a <x id="foo[1]/bar[1]/baz[1]"/> </source>')
     elem = lisa.xml_to_strelem(source)
-    print elem.sub
+    print(elem.sub)
     assert elem.sub == [StringElem(u'a '), X(id=u'foo[1]/bar[1]/baz[1]'), StringElem(u' ')]
 
 
diff --git a/translate/storage/placeables/test_terminology.py b/translate/storage/placeables/test_terminology.py
index 16a298e..3619b59 100644
--- a/translate/storage/placeables/test_terminology.py
+++ b/translate/storage/placeables/test_terminology.py
@@ -18,11 +18,12 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-from StringIO import StringIO
+from cStringIO import StringIO
 
 from translate.search.match import terminologymatcher
-from translate.storage.placeables import base, general, parse, StringElem
-from translate.storage.placeables.terminology import parsers as term_parsers, TerminologyPlaceable
+from translate.storage.placeables import StringElem, base, general, parse
+from translate.storage.placeables.terminology import (TerminologyPlaceable,
+                                                      parsers as term_parsers)
 from translate.storage.pypo import pofile
 
 
diff --git a/translate/storage/placeables/xliff.py b/translate/storage/placeables/xliff.py
index e5026c1..61268e2 100644
--- a/translate/storage/placeables/xliff.py
+++ b/translate/storage/placeables/xliff.py
@@ -23,6 +23,7 @@
 from translate.storage.placeables import base
 from translate.storage.placeables.strelem import StringElem
 
+
 __all__ = [
     'Bpt', 'Ept', 'X', 'Bx', 'Ex', 'G', 'It', 'Sub', 'Ph', 'UnknownXML',
     'parsers', 'to_xliff_placeables'
@@ -98,7 +99,7 @@ class UnknownXML(StringElem):
         return '<%(class)s{%(tag)s}(%(id)s%(rid)s%(xid)s[%(subs)s])>' % {
             'class': self.__class__.__name__,
             'tag': tag,
-            'id': self.id  is not None and 'id="%s" ' % (self.id) or '',
+            'id': self.id is not None and 'id="%s" ' % (self.id) or '',
             'rid': self.rid is not None and 'rid="%s" ' % (self.rid) or '',
             'xid': self.xid is not None and 'xid="%s" ' % (self.xid) or '',
             'subs': elemstr,
@@ -106,8 +107,8 @@ class UnknownXML(StringElem):
 
     # METHODS #
     def copy(self):
-	"""Returns a copy of the sub-tree.  This should be overridden in
-	sub-classes with more data.
+        """Returns a copy of the sub-tree.  This should be overridden in
+        sub-classes with more data.
 
         .. note:: ``self.renderer`` is **not** copied.
         """
diff --git a/translate/storage/po.py b/translate/storage/po.py
index 7982517..48ca840 100644
--- a/translate/storage/po.py
+++ b/translate/storage/po.py
@@ -27,16 +27,23 @@ Python based parser is used (slower but very well tested)."""
 
 import logging
 import os
+import platform
 
-if os.getenv('USECPO'):
-    if os.getenv('USECPO') == "1":
+
+usecpo = os.getenv('USECPO')
+
+if platform.python_implementation() == "CPython":
+    if usecpo == "1":
         logging.info("Using cPO")
         from translate.storage.cpo import *  # pylint: disable=W0401,W0614
-    elif os.getenv('USECPO') == "2":
-        logging.info("Using new cPO")
+    elif usecpo == "2":
+        logging.info("Using new fPO")
         from translate.storage.fpo import *  # pylint: disable=W0401,W0614
     else:
         logging.info("Using Python PO")
         from translate.storage.pypo import *  # pylint: disable=W0401,W0614
 else:
+    if usecpo:
+        logging.error("cPO and fPO do not work on %s; defaulting to PyPO" %
+                      platform.python_implementation())
     from translate.storage.pypo import *  # pylint: disable=W0401
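The backend-selection logic introduced above, reduced to a sketch: pick a PO backend from the `USECPO` environment variable, but only on CPython, since the C-based backends need CPython's C API. The returned names are stand-ins for the `cpo`/`fpo`/`pypo` modules:

```python
import os
import platform

def choose_backend(env=os.environ):
    """Mirror the patched module-selection logic as a plain function."""
    usecpo = env.get("USECPO")
    if platform.python_implementation() == "CPython":
        if usecpo == "1":
            return "cpo"      # the C-based parser
        elif usecpo == "2":
            return "fpo"      # the newer C-backed parser
        return "pypo"         # pure Python, the well-tested default
    return "pypo"             # cPO/fPO are unavailable off CPython

assert choose_backend({}) == "pypo"
assert choose_backend({"USECPO": "2"}) in ("fpo", "pypo")
```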
diff --git a/translate/storage/pocommon.py b/translate/storage/pocommon.py
index c3db669..b44e556 100644
--- a/translate/storage/pocommon.py
+++ b/translate/storage/pocommon.py
@@ -21,10 +21,9 @@
 import re
 import urllib
 
-from translate.storage import base
-from translate.storage import poheader
+from translate.storage import base, poheader
 from translate.storage.workflow import StateEnum as state
-from translate.misc.typecheck import accepts, returns
+
 
 msgid_comment_re = re.compile("_: (.*?)\n")
 
@@ -45,15 +44,13 @@ def quote_plus(text):
     return urllib.quote_plus(text.encode("utf-8"))
 
 
-@accepts(unicode)
-@returns(unicode)
 def unquote_plus(text):
     """unquote('%7e/abc+def') -> '~/abc def'"""
     try:
         if isinstance(text, unicode):
             text = text.encode('utf-8')
         return urllib.unquote_plus(text).decode('utf-8')
-    except UnicodeEncodeError, e:
+    except UnicodeEncodeError as e:
         # for some reason there is a non-ascii character here. Let's assume it
         # is already unicode (because of originally decoding the file)
         return text
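For context, the behaviour `unquote_plus` preserves here, written against Python 3's `urllib.parse` (the patched code itself targets Python 2's `urllib`, whose calls have the same semantics):

```python
# unquote('%7e/abc+def') -> '~/abc def', exactly as the docstring above says.
from urllib.parse import quote_plus, unquote_plus

assert unquote_plus("%7e/abc+def") == "~/abc def"
# quote_plus/unquote_plus round-trip a path with spaces:
assert unquote_plus(quote_plus("~/abc def")) == "~/abc def"
```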
@@ -185,7 +182,7 @@ class pounit(base.TranslationUnit):
 def encodingToUse(encoding):
     """Tests whether the given encoding is known in the python runtime, or returns utf-8.
     This function is used to ensure that a valid encoding is always used."""
-    if encoding == "CHARSET" or encoding == None:
+    if encoding == "CHARSET" or encoding is None:
         return 'utf-8'
     return encoding
 #    if encoding is None: return False
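The `encoding == None` → `encoding is None` change follows PEP 8: `is None` is an identity test and cannot be fooled by a custom `__eq__`. A self-contained version of the patched function:

```python
def encodingToUse(encoding):
    """Return a usable encoding, defaulting to utf-8 for missing values."""
    if encoding == "CHARSET" or encoding is None:
        return "utf-8"
    return encoding

assert encodingToUse(None) == "utf-8"
assert encodingToUse("CHARSET") == "utf-8"
assert encodingToUse("iso-8859-1") == "iso-8859-1"
```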
@@ -198,7 +195,7 @@ def encodingToUse(encoding):
 
 
 class pofile(poheader.poheader, base.TranslationStore):
-    Name = _("Gettext PO file")  # pylint: disable=E0602
+    Name = "Gettext PO file"  # pylint: disable=E0602
     Mimetypes = ["text/x-gettext-catalog", "text/x-gettext-translation", "text/x-po", "text/x-pot"]
     Extensions = ["po", "pot"]
     # We don't want windows line endings on Windows:
diff --git a/translate/storage/poheader.py b/translate/storage/poheader.py
index 8b99510..326222f 100644
--- a/translate/storage/poheader.py
+++ b/translate/storage/poheader.py
@@ -23,8 +23,15 @@
 import re
 import time
 
+try:
+    from collections import OrderedDict
+except ImportError:
+    # Python <= 2.6 fallback
+    from translate.misc.dictutils import ordereddict as OrderedDict
+
 from translate import __version__
-from translate.misc import dictutils
+from translate.misc.dictutils import cidict
+
 
 author_re = re.compile(r".*<\S+@\S+>.*\d{4,4}")
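The try/except import fallback used above is a common pattern for supporting older interpreters. A generic sketch — the `dict` stand-in below is illustrative only; the patch falls back to the bundled `translate.misc.dictutils.ordereddict`:

```python
# Prefer the stdlib class, fall back to a substitute where it is missing.
try:
    from collections import OrderedDict
except ImportError:        # only on interpreters older than Python 2.7
    OrderedDict = dict     # hypothetical stand-in for illustration

# PO headers are order-sensitive, which is why a plain pre-3.7 dict won't do:
headervalues = OrderedDict()
headervalues["Project-Id-Version"] = "PACKAGE VERSION"
headervalues["POT-Creation-Date"] = "2014-08-25 14:37+0000"
assert list(headervalues)[0] == "Project-Id-Version"
```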
 
@@ -34,13 +41,13 @@ default_header = {
     "Last-Translator": "FULL NAME <EMAIL@ADDRESS>",
     "Language-Team": "LANGUAGE <LL@li.org>",
     "Plural-Forms": "nplurals=INTEGER; plural=EXPRESSION;",
-    }
+}
 
 
 def parseheaderstring(input):
     """Parses an input string with the definition of a PO header and returns
     the interpreted values as a dictionary."""
-    headervalues = dictutils.ordereddict()
+    headervalues = OrderedDict()
     for line in input.split("\n"):
         if not line or ":" not in line:
             continue
@@ -74,8 +81,8 @@ def update(existing, add=False, **kwargs):
     :return: Updated dictionary of header entries
     :rtype: dict
     """
-    headerargs = dictutils.ordereddict()
-    fixedargs = dictutils.cidict()
+    headerargs = OrderedDict()
+    fixedargs = cidict()
     for key, value in kwargs.items():
         key = key.replace("_", "-")
         if key.islower():
@@ -120,7 +127,7 @@ class poheader(object):
         "Content-Transfer-Encoding",
         "Plural-Forms",
         "X-Generator",
-        ]
+    ]
 
     def init_headers(self, charset='UTF-8', encoding='8bit', **kwargs):
         """sets default values for po headers"""
@@ -173,7 +180,7 @@ class poheader(object):
         if report_msgid_bugs_to is None:
             report_msgid_bugs_to = ""
 
-        defaultargs = dictutils.ordereddict()
+        defaultargs = OrderedDict()
         defaultargs["Project-Id-Version"] = project_id_version
         defaultargs["Report-Msgid-Bugs-To"] = report_msgid_bugs_to
         defaultargs["POT-Creation-Date"] = pot_creation_date
diff --git a/translate/storage/poparser.py b/translate/storage/poparser.py
index 1ea5e50..f303610 100644
--- a/translate/storage/poparser.py
+++ b/translate/storage/poparser.py
@@ -20,6 +20,7 @@
 
 import re
 
+
 """
 From the GNU gettext manual:
      WHITE-SPACE
diff --git a/translate/storage/poxliff.py b/translate/storage/poxliff.py
index 4c76b3b..e4f292c 100644
--- a/translate/storage/poxliff.py
+++ b/translate/storage/poxliff.py
@@ -24,9 +24,10 @@ XLIFF.
 This way the API supports plurals as if it was a PO file, for example.
 """
 
-from lxml import etree
 import re
 
+from lxml import etree
+
 from translate.misc.multistring import multistring
 from translate.storage import base, lisa, poheader, xliff
 from translate.storage.placeables import general
@@ -85,7 +86,7 @@ class PoXliffUnit(xliff.xliffunit):
 #            return self.units[0].getlanguageNodes()
 
     def setsource(self, source, sourcelang="en"):
-#        TODO: consider changing from plural to singular, etc.
+        # TODO: consider changing from plural to singular, etc.
         self._rich_source = None
         if not hasplurals(source):
             super(PoXliffUnit, self).setsource(source, sourcelang)
@@ -261,6 +262,7 @@ class PoXliffUnit(xliff.xliffunit):
     def istranslatable(self):
         return super(PoXliffUnit, self).istranslatable() and not self.isheader()
 
+    @classmethod
     def createfromxmlElement(cls, element, namespace=None):
         if element.tag.endswith("trans-unit"):
             object = cls(None, empty=True)
@@ -277,7 +279,6 @@ class PoXliffUnit(xliff.xliffunit):
             subunit.namespace = namespace
             group.units.append(subunit)
         return group
-    createfromxmlElement = classmethod(createfromxmlElement)
 
     def hasplural(self):
         return self.xmlelement.tag == self.namespaced("group")
@@ -394,7 +395,7 @@ class PoXliffFile(xliff.xlifffile, poheader.poheader):
                 self.addunit(nextplural, new=False)
                 try:
                     nextplural = pluralunit_iter.next()
-                except StopIteration, i:
+                except StopIteration as i:
                     nextplural = None
             else:
                 self.addunit(term, new=False)
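The `except StopIteration as i` change is the Python 3-compatible spelling of exception handling around manual iteration. A version-neutral sketch of the same advance-or-None pattern:

```python
def take_or_none(iterator):
    """Advance the iterator, treating exhaustion as 'no more units'."""
    try:
        return next(iterator)
    except StopIteration:
        return None

units = iter(["plural-1", "plural-2"])
assert take_or_none(units) == "plural-1"
assert take_or_none(units) == "plural-2"
assert take_or_none(units) is None
```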
diff --git a/translate/storage/project.py b/translate/storage/project.py
index 05eb338..ad25709 100644
--- a/translate/storage/project.py
+++ b/translate/storage/project.py
@@ -23,6 +23,7 @@ import os
 from translate.convert import factory as convert_factory
 from translate.storage.projstore import ProjectStore
 
+
 __all__ = ['Project']
 
 
diff --git a/translate/storage/projstore.py b/translate/storage/projstore.py
index 109628c..6298170 100644
--- a/translate/storage/projstore.py
+++ b/translate/storage/projstore.py
@@ -19,7 +19,6 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
 import os
-from StringIO import StringIO
 
 from lxml import etree
 
@@ -108,11 +107,11 @@ class ProjectStore(object):
     # SPECIAL METHODS #
     def __in__(self, lhs):
         """@returns ``True`` if ``lhs`` is a file name or file object in the project store."""
-        return  lhs in self._sourcefiles or \
-                lhs in self._targetfiles or \
-                lhs in self._transfiles or \
-                lhs in self._files or \
-                lhs in self._files.values()
+        return lhs in self._sourcefiles or \
+               lhs in self._targetfiles or \
+               lhs in self._transfiles or \
+               lhs in self._files or \
+               lhs in self._files.values()
 
     # METHODS #
     def append_file(self, afile, fname, ftype='trans', delete_orig=False):
diff --git a/translate/storage/properties.py b/translate/storage/properties.py
index 6f3ac45..cffee57 100644
--- a/translate/storage/properties.py
+++ b/translate/storage/properties.py
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 #
-# Copyright 2004-2006 Zuza Software Foundation
+# Copyright 2004-2014 Zuza Software Foundation
 #
 # This file is part of translate.
 #
@@ -30,11 +30,11 @@ parsing and handling of the various dialects.
 
 Currently we support:
 
-    - Java .properties
-    - Mozilla .properties
-    - Adobe Flex files
-    - MacOS X .strings files
-    - Skype .lang files
+- Java .properties
+- Mozilla .properties
+- Adobe Flex files
+- MacOS X .strings files
+- Skype .lang files
 
 The following provides references and descriptions of the various
 dialects supported:
@@ -77,55 +77,65 @@ Skype
 
 A simple summary of what is permissible follows.
 
-Comments supported::
+Comments supported:
 
-    # a comment
-    ! a comment
-    // a comment (only at the beginning of a line)
-    /* a comment (not across multiple lines) */
+.. code-block:: properties
 
-Name and Value pairs::
+   # a comment
+   ! a comment
+   // a comment (only at the beginning of a line)
+   /* a comment (not across multiple lines) */
 
-    # Delimiters
-    key = value
-    key : value
-    key value
+Name and Value pairs:
 
-    # Space in key and around value
-    \ key\ = \ value
+.. code-block:: properties
 
-    # Note that the b and c are escaped for reST rendering
-    b = a string with escape sequences \\t \\n \\r \\\\ \\" \\' \\ (space) \u0123
-    c = a string with a continuation line \\
-        continuation line
+   # Delimiters
+   key = value
+   key : value
+   key value
 
-    # Special cases
-    # key with no value
-    key
-    # value no key (extractable in prop2po but not mergeable in po2prop)
-    =value
+   # Space in key and around value
+   \ key\ = \ value
+
+   # Note that the b and c are escaped for reST rendering
+   b = a string with escape sequences \\t \\n \\r \\\\ \\" \\' \\ (space) \u0123
+   c = a string with a continuation line \\
+       continuation line
+
+   # Special cases
+   # key with no value
+   key
+   # value no key (extractable in prop2po but not mergeable in po2prop)
+   =value
+
+   # .strings specific
+   "key" = "value";
 
-    # .strings specific
-    "key" = "value";
 """
 
 import re
-import warnings
-import logging
 
 from translate.lang import data
 from translate.misc import quote
-from translate.misc.typecheck import accepts, returns, IsOneOf
+from translate.misc.deprecation import deprecated
 from translate.storage import base
 
+
+labelsuffixes = (".label", ".title")
+"""Label suffixes: entries with this suffix can be combined with accesskeys
+found in entries ending with :attr:`.accesskeysuffixes`"""
+accesskeysuffixes = (".accesskey", ".accessKey", ".akey")
+"""Accesskey Suffixes: entries with this suffix may be combined with labels
+ending in :attr:`.labelsuffixes` into accelerator notation"""
+
+
 # the rstripeols convert dos <-> unix nicely as well
 # output will be appropriate for the platform
 
 eol = "\n"
 
 
-@accepts(unicode, [unicode])
-@returns(IsOneOf(type(None), unicode), int)
 def _find_delimiter(line, delimiters):
     """Find the type and position of the delimiter in a property line.
 
@@ -176,17 +186,16 @@ def _find_delimiter(line, delimiters):
     return (mindelimiter, minpos)
 
 
+@deprecated("Use Dialect.find_delimiter instead")
 def find_delimeter(line):
     """Misspelled function that is kept around in case someone relies on it.
 
-    Deprecated."""
-    warnings.warn("deprecated use Dialect.find_delimiter instead",
-                  DeprecationWarning)
+    .. deprecated:: 1.7.0
+       Use :func:`find_delimiter` instead
+    """
     return _find_delimiter(line, DialectJava.delimiters)
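A hedged sketch of what a `@deprecated` decorator like the one applied above typically does — wrap the callable and emit a `DeprecationWarning` on each call. The real `translate.misc.deprecation` may differ in detail:

```python
import functools
import warnings

def deprecated(reason):
    """Mark a callable as deprecated, warning each time it is invoked."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated: %s" % (func.__name__, reason),
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("Use find_delimiter instead")
def find_delimeter(line):           # toy body, not the real function
    return line.find("=")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert find_delimeter("key=value") == 3
assert caught[0].category is DeprecationWarning
```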
 
 
-@accepts(unicode)
-@returns(bool)
 def is_line_continuation(line):
     """Determine whether *line* has a line continuation marker.
 
@@ -211,8 +220,7 @@ def is_line_continuation(line):
         count += 1
     return (count % 2) == 1  # Odd is a line continuation, even is not
 
-@accepts(unicode)
-@returns(bool)
+
 def is_comment_one_line(line):
     """Determine whether a *line* is a one-line comment.
 
@@ -231,8 +239,6 @@ def is_comment_one_line(line):
     return False
 
 
-@accepts(unicode)
-@returns(bool)
 def is_comment_start(line):
     """Determine whether a *line* starts a new multi-line comment.
 
@@ -245,8 +251,6 @@ def is_comment_start(line):
     return stripped.startswith('/*') and not stripped.endswith('*/')
 
 
-@accepts(unicode)
-@returns(bool)
 def is_comment_end(line):
     """Determine whether a *line* ends a new multi-line comment.
 
@@ -259,8 +263,6 @@ def is_comment_end(line):
     return not stripped.startswith('/*') and stripped.endswith('*/')
 
 
-@accepts(unicode)
-@returns(unicode)
 def _key_strip(key):
     """Cleanup whitespace found around a key
 
@@ -280,7 +282,9 @@ default_dialect = "java"
 
 
 def register_dialect(dialect):
+    """Decorator that registers the dialect."""
     dialects[dialect.name] = dialect
+    return dialect
 
 
 def get_dialect(dialect=default_dialect):
@@ -297,6 +301,7 @@ class Dialect(object):
     value_wrap_char = u""
     drop_comments = []
 
+    @classmethod
     def encode(cls, string, encoding=None):
         """Encode the string"""
         # FIXME: dialects are a bad idea, not possible for subclasses
@@ -304,71 +309,78 @@ class Dialect(object):
         if encoding != "utf-8":
             return quote.javapropertiesencode(string or u"")
         return string or u""
-    encode = classmethod(encode)
 
+    @classmethod
     def find_delimiter(cls, line):
         """Find the delimiter"""
         return _find_delimiter(line, cls.delimiters)
-    find_delimiter = classmethod(find_delimiter)
 
+    @classmethod
     def key_strip(cls, key):
         """Strip unneeded characters from the key"""
         return _key_strip(key)
-    key_strip = classmethod(key_strip)
 
+    @classmethod
     def value_strip(cls, value):
         """Strip unneeded characters from the value"""
         return value.lstrip()
-    value_strip = classmethod(value_strip)
 
 
+@register_dialect
 class DialectJava(Dialect):
     name = "java"
     default_encoding = "iso-8859-1"
     delimiters = [u"=", u":", u" "]
-register_dialect(DialectJava)
 
 
+@register_dialect
 class DialectJavaUtf8(DialectJava):
     name = "java-utf8"
     default_encoding = "utf-8"
     delimiters = [u"=", u":", u" "]
 
+    @classmethod
     def encode(cls, string, encoding=None):
         return quote.mozillapropertiesencode(string or u"")
-    encode = classmethod(encode)
-register_dialect(DialectJavaUtf8)
 
 
+@register_dialect
 class DialectFlex(DialectJava):
     name = "flex"
     default_encoding = "utf-8"
-register_dialect(DialectFlex)
 
 
+@register_dialect
 class DialectMozilla(DialectJavaUtf8):
     name = "mozilla"
     delimiters = [u"="]
-register_dialect(DialectMozilla)
+
+    @classmethod
+    def encode(cls, string, encoding=None):
+        """Encode the string"""
+        string = quote.mozillapropertiesencode(string or u"")
+        string = quote.mozillaescapemarginspaces(string or u"")
+        return string
 
 
+@register_dialect
 class DialectGaia(DialectMozilla):
     name = "gaia"
     delimiters = [u"="]
-register_dialect(DialectGaia)
 
 
+@register_dialect
 class DialectSkype(Dialect):
     name = "skype"
     default_encoding = "utf-16"
     delimiters = [u"="]
 
+    @classmethod
     def encode(cls, string, encoding=None):
         return quote.mozillapropertiesencode(string or u"")
-    encode = classmethod(encode)
-register_dialect(DialectSkype)
 
 
+@register_dialect
 class DialectStrings(Dialect):
     name = "strings"
     default_encoding = "utf-16"
@@ -380,6 +392,7 @@ class DialectStrings(Dialect):
     out_delimiter_wrappers = u' '
     drop_comments = ["/* No comment provided by engineer. */"]
 
+    @classmethod
     def key_strip(cls, key):
         """Strip unneeded characters from the key"""
         newkey = key.rstrip().rstrip('"')
@@ -388,8 +401,8 @@ class DialectStrings(Dialect):
             newkey += key[len(newkey):len(newkey)+1]
         ret = newkey.lstrip().lstrip('"')
         return ret.replace('\\"', '"')
-    key_strip = classmethod(key_strip)
 
+    @classmethod
     def value_strip(cls, value):
         """Strip unneeded characters from the value"""
         newvalue = value.rstrip().rstrip(';').rstrip('"')
@@ -398,12 +411,16 @@ class DialectStrings(Dialect):
             newvalue += value[len(newvalue):len(newvalue)+1]
         ret = newvalue.lstrip().lstrip('"')
         return ret.replace('\\"', '"')
-    value_strip = classmethod(value_strip)
 
+    @classmethod
     def encode(cls, string, encoding=None):
         return string.replace("\n", r"\n").replace("\t", r"\t")
-    encode = classmethod(encode)
-register_dialect(DialectStrings)
+
+
+@register_dialect
+class DialectStringsUtf8(DialectStrings):
+    name = "strings-utf8"
+    default_encoding = "utf-8"
 
 
 class propunit(base.TranslationUnit):
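The mechanical change running through this hunk — replacing the `name = classmethod(name)` rebinding with the `@classmethod` decorator — is behaviour-preserving; the decorator form is simply the idiomatic spelling:

```python
class OldStyle:
    def encode(cls, string):
        return (string or "").upper()
    encode = classmethod(encode)   # pre-decorator spelling

class NewStyle:
    @classmethod
    def encode(cls, string):       # identical behaviour, idiomatic form
        return (string or "").upper()

assert OldStyle.encode("abc") == NewStyle.encode("abc") == "ABC"
```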
@@ -551,6 +568,7 @@ class propfile(base.TranslationStore):
             propsrc = inputfile.read()
             inputfile.close()
             self.parse(propsrc)
+            self.makeindex()
 
     def parse(self, propsrc):
         """Read the source of a properties file in and include them
@@ -558,6 +576,9 @@ class propfile(base.TranslationStore):
         text, encoding = self.detect_encoding(propsrc,
             default_encodings=[self.personality.default_encoding, 'utf-8',
                                'utf-16'])
+        if not text:
+            raise IOError("Cannot detect encoding for %s." % (self.filename or
+                                                              "given string"))
         self.encoding = encoding
         propsrc = text
 
@@ -624,9 +645,8 @@ class propfile(base.TranslationStore):
         uret = u"".join(lines)
         return uret.encode(self.encoding)
 
-
 class javafile(propfile):
-    Name = _("Java Properties")
+    Name = "Java Properties"
     Extensions = ['properties']
 
     def __init__(self, *args, **kwargs):
@@ -636,7 +656,7 @@ class javafile(propfile):
 
 
 class javautf8file(propfile):
-    Name = _("Java Properties (UTF-8)")
+    Name = "Java Properties (UTF-8)"
     Extensions = ['properties']
 
     def __init__(self, *args, **kwargs):
@@ -646,9 +666,19 @@ class javautf8file(propfile):
 
 
 class stringsfile(propfile):
-    Name = _("OS X Strings")
+    Name = "OS X Strings"
     Extensions = ['strings']
 
     def __init__(self, *args, **kwargs):
         kwargs['personality'] = "strings"
         super(stringsfile, self).__init__(*args, **kwargs)
+
+
+class stringsutf8file(propfile):
+    Name = "OS X Strings (UTF-8)"
+    Extensions = ['strings']
+
+    def __init__(self, *args, **kwargs):
+        kwargs['personality'] = "strings-utf8"
+        kwargs['encoding'] = "utf-8"
+        super(stringsutf8file, self).__init__(*args, **kwargs)
diff --git a/translate/storage/pypo.py b/translate/storage/pypo.py
index 786c80f..5208984 100644
--- a/translate/storage/pypo.py
+++ b/translate/storage/pypo.py
@@ -23,18 +23,19 @@
 files (pofile).
 """
 
-from __future__ import generators
 import copy
-import cStringIO
 import re
 import textwrap
+from cStringIO import StringIO
 
 from translate.lang import data
-from translate.misc.multistring import multistring
 from translate.misc import quote
-from translate.storage import pocommon, base, poparser
+from translate.misc.deprecation import deprecated
+from translate.misc.multistring import multistring
+from translate.storage import base, pocommon, poparser
 from translate.storage.pocommon import encodingToUse
 
+
 lsep = "\n#: "
 """Separator for #: entries"""
 
@@ -68,18 +69,12 @@ def unescapehandler(escape):
     return po_unescape_map.get(escape, escape)
 
 
-try:
-    wrapper = textwrap.TextWrapper(
-            width=77,
-            replace_whitespace=False,
-            expand_tabs=False,
-            drop_whitespace=False
-    )
-except TypeError:
-    # Python < 2.6 didn't support drop_whitespace
-    from translate.misc import textwrap
-    wrapper = textwrap.TextWrapper(width=77)
-
+wrapper = textwrap.TextWrapper(
+        width=77,
+        replace_whitespace=False,
+        expand_tabs=False,
+        drop_whitespace=False
+)
 wrapper.wordsep_re = re.compile(
     r'(\s+|'                                  # any whitespace
     r'\w*\\.|'                                # any escape should not be split
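The simplified `TextWrapper` above (the pre-2.6 fallback is dropped): with `drop_whitespace=False` and `replace_whitespace=False`, the wrapped lines can be joined back into the original string, which is what PO output round-tripping relies on. A small check with a narrower width for readability:

```python
import textwrap

wrapper = textwrap.TextWrapper(
    width=20,
    replace_whitespace=False,
    expand_tabs=False,
    drop_whitespace=False,   # keep inter-word spaces on line boundaries
)
text = "one two three four five six seven"
lines = wrapper.wrap(text)
assert len(lines) > 1                 # it actually wrapped
assert "".join(lines) == text         # and is losslessly re-joinable
```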
@@ -110,12 +105,15 @@ def quoteforpo(text):
     return polines
 
 
+@deprecated("Use pypo.unescape() instead")
 def extractpoline(line):
     """Remove quote and unescape line from po file.
 
     :param line: a quoted line from a po file (msgid or msgstr)
 
     .. deprecated:: 1.10
+       Replaced by :func:`unescape`. :func:`extractpoline` is kept to allow
+       tests of correctness, and in case of external users.
     """
     extracted = quote.extractwithoutquotes(line, '"', '"', '\\', includeescapes=unescapehandler)[0]
     return extracted
@@ -325,7 +323,7 @@ class pounit(pocommon.pounit):
 
         :param origin: programmer, developer, source code, translator or None
         """
-        if origin == None:
+        if origin is None:
             comments = u"".join([comment[2:] for comment in self.othercomments])
             comments += u"".join([comment[3:] for comment in self.automaticcomments])
         elif origin == "translator":
@@ -547,6 +545,7 @@ class pounit(pocommon.pounit):
             self.set_state_n(self.STATE[self.S_UNTRANSLATED][0])
         else:
             self.set_state_n(self.STATE[self.S_TRANSLATED][0])
+        self._domarkfuzzy(present)
 
     def _domarkfuzzy(self, present=True):
         self.settypecomment("fuzzy", present)
@@ -577,7 +576,7 @@ class pounit(pocommon.pounit):
         return len(self.msgid_plural) > 0
 
     def parse(self, src):
-        return poparser.parse_unit(poparser.ParseState(cStringIO.StringIO(src), pounit), self)
+        return poparser.parse_unit(poparser.ParseState(StringIO(src), pounit), self)
 
     def _getmsgpartstr(self, partname, partlines, partcomments=""):
         if isinstance(partlines, dict):
@@ -761,18 +760,15 @@ class pofile(pocommon.pofile):
     def parse(self, input):
         """Parses the given file or file source string."""
         if True:
-#        try:
             if hasattr(input, 'name'):
                 self.filename = input.name
             elif not getattr(self, 'filename', ''):
                 self.filename = ''
             if isinstance(input, str):
-                input = cStringIO.StringIO(input)
+                input = StringIO(input)
             # clear units to get rid of automatically generated headers before parsing
             self.units = []
             poparser.parse_units(poparser.ParseState(input, pounit), self)
-#        except Exception, e:
-#            raise base.ParseError(e)
 
     def removeduplicates(self, duplicatestyle="merge"):
         """Make sure each msgid is unique ; merge comments etc from
@@ -825,7 +821,7 @@ class pofile(pocommon.pofile):
         if isinstance(output, unicode):
             try:
                 return output.encode(getattr(self, "_encoding", "UTF-8"))
-            except UnicodeEncodeError, e:
+            except UnicodeEncodeError as e:
                 self.updateheader(add=True, Content_Type="text/plain; charset=UTF-8")
                 self._encoding = "UTF-8"
                 for unit in self.units:
@@ -866,7 +862,7 @@ class pofile(pocommon.pofile):
                 self._encoding.lower() != "charset"):
                 try:
                     line = line.decode(self._encoding)
-                except UnicodeError, e:
+                except UnicodeError as e:
                     raise UnicodeError("Error decoding line with encoding %r: %s. Line is %r" %
                                        (self._encoding, e, line))
             newlines.append(line)
diff --git a/translate/storage/qm.py b/translate/storage/qm.py
index 76a5f06..b10f632 100644
--- a/translate/storage/qm.py
+++ b/translate/storage/qm.py
@@ -59,16 +59,16 @@ http://qt.gitorious.org/+kde-developers/qt/kde-qt/blobs/master/tools/linguist/sh
 """
 
 import codecs
-import struct
-import sys
 import logging
+import struct
 
 from translate.misc.multistring import multistring
 from translate.storage import base
 
+
 logger = logging.getLogger(__name__)
 
-QM_MAGIC_NUMBER = (0x3CB86418L, 0xCAEF9C95L, 0xCD211CBFL, 0x60A1BDDDL)
+QM_MAGIC_NUMBER = (0x3CB86418, 0xCAEF9C95, 0xCD211CBF, 0x60A1BDDD)
 
 
 def qmunpack(file_='messages.qm'):
@@ -89,7 +89,7 @@ class qmunit(base.TranslationUnit):
 class qmfile(base.TranslationStore):
     """A class representing a .qm file."""
     UnitClass = qmunit
-    Name = _("Qt .qm file")
+    Name = "Qt .qm file"
     Mimetypes = ["application/x-qm"]
     Extensions = ["qm"]
     _binary = True
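The qm.py hunk drops the Python 2 `L` long-integer suffix: plain literals are sufficient, since Python promotes large ints automatically and Python 3 has no `long` type. A sketch of checking a header against the magic number, assuming the 16-byte big-endian layout the 4-tuple of 32-bit words suggests (`looks_like_qm` is illustrative, not the module's API):

```python
import struct

QM_MAGIC_NUMBER = (0x3CB86418, 0xCAEF9C95, 0xCD211CBF, 0x60A1BDDD)

def looks_like_qm(header_bytes):
    """Return True if the first 16 bytes match the Qt .qm magic number."""
    if len(header_bytes) < 16:
        return False
    return struct.unpack(">4L", header_bytes[:16]) == QM_MAGIC_NUMBER
```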
diff --git a/translate/storage/qph.py b/translate/storage/qph.py
index 173965b..445fbfc 100644
--- a/translate/storage/qph.py
+++ b/translate/storage/qph.py
@@ -35,8 +35,6 @@ provides the reference implementation for the Qt Linguist product.
 from lxml import etree
 
 from translate.lang import data
-from translate.misc.typecheck import accepts, Self, IsOneOf
-from translate.misc.typecheck.typeclasses import String
 from translate.storage import lisa
 
 
@@ -69,7 +67,6 @@ class QphUnit(lisa.LISAunit):
 
         return filter(not_none, [self._getsourcenode(), self._gettargetnode()])
 
-    @accepts(Self(), unicode, IsOneOf(String, type(None)), String)
     def addnote(self, text, origin=None, position="append"):
         """Add a note specifically in a "definition" tag"""
         current_notes = self.getnotes(origin)
@@ -95,7 +92,7 @@ class QphUnit(lisa.LISAunit):
 class QphFile(lisa.LISAfile):
     """Class representing a QPH file store."""
     UnitClass = QphUnit
-    Name = _("Qt Phrase Book")
+    Name = "Qt Phrase Book"
     Mimetypes = ["application/x-qph"]
     Extensions = ["qph"]
     rootNode = "QPH"
@@ -150,14 +147,6 @@ class QphFile(lisa.LISAfile):
 
         We have to override this to ensure we mimic the Qt convention:
             - no XML declaration
-            - plain DOCTYPE that lxml seems to ignore
         """
-        # A bug in lxml means we have to output the doctype ourselves. For
-        # more information, see:
-        # http://codespeak.net/pipermail/lxml-dev/2008-October/004112.html
-        # The problem was fixed in lxml 2.1.3
-        output = etree.tostring(self.document, pretty_print=True,
-                                xml_declaration=False, encoding='utf-8')
-        if not "<!DOCTYPE QPH>" in output[:30]:
-            output = "<!DOCTYPE QPH>" + output
-        return output
+        return etree.tostring(self.document, pretty_print=True,
+                              xml_declaration=False, encoding='utf-8')
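With the lxml 2.1.3 DOCTYPE workaround removed, serialization reduces to a single `etree.tostring` call. A sketch of the simplified form, using a throwaway tree rather than a real QPH store:

```python
from lxml import etree

def serialize_qph(document):
    """Serialise a tree the way Qt Linguist expects:
    pretty printed, UTF-8 encoded, no XML declaration."""
    return etree.tostring(document, pretty_print=True,
                          xml_declaration=False, encoding='utf-8')
```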
diff --git a/translate/storage/rc.py b/translate/storage/rc.py
index a6d1b7b..bff1f26 100644
--- a/translate/storage/rc.py
+++ b/translate/storage/rc.py
@@ -100,7 +100,7 @@ class rcunit(base.TranslationUnit):
         return [self.name]
 
     def addnote(self, text, origin=None, position="append"):
-        self.comments.append(note)
+        self.comments.append(text)
 
     def getnotes(self, origin=None):
         return '\n'.join(self.comments)
@@ -135,7 +135,8 @@ class rcfile(base.TranslationStore):
                          (?:
                          LANGUAGE\s+[^\n]*|                              # Language details
                          /\*.*?\*/[^\n]*|                                      # Comments
-                         (?:[0-9A-Z_]+\s+(?:MENU|DIALOG|DIALOGEX)|STRINGTABLE)\s  # Translatable section
+                         \/\/[^\n\r]*|                                  # One line comments
+                         (?:[0-9A-Z_]+\s+(?:MENU|DIALOG|DIALOGEX|TEXTINCLUDE)|STRINGTABLE)\s  # Translatable section or include text (visual studio)
                          .*?
                          (?:
                          BEGIN(?:\s*?POPUP.*?BEGIN.*?END\s*?)+?END|BEGIN.*?END|  # FIXME Need a much better approach to nesting menus
@@ -167,10 +168,9 @@ class rcfile(base.TranslationStore):
         processsection = False
         self.blocks = BLOCKS_RE.findall(rcsrc)
         for blocknum, block in enumerate(self.blocks):
-            #print block.split("\n")[0]
             processblock = None
             if block.startswith("LANGUAGE"):
-                if self.lang == None or self.sublang == None or re.match("LANGUAGE\s+%s,\s*%s\s*$" % (self.lang, self.sublang), block) is not None:
+                if self.lang is None or self.sublang is None or re.match("LANGUAGE\s+%s,\s*%s\s*$" % (self.lang, self.sublang), block) is not None:
                     processsection = True
                 else:
                     processsection = False
@@ -181,11 +181,10 @@ class rcfile(base.TranslationStore):
                     else:
                         processblock = False
 
-            if not (processblock == True or (processsection == True and processblock != False)):
+            if not (processblock or (processsection and processblock is None)):
                 continue
 
             if block.startswith("STRINGTABLE"):
-                #print "stringtable:\n %s------\n" % block
                 for match in STRINGTABLE_RE.finditer(block):
                     if not match.groupdict()['value']:
                         continue
@@ -194,13 +193,15 @@ class rcfile(base.TranslationStore):
                     newunit.match = match
                     self.addunit(newunit)
             if block.startswith("/*"):  # Comments
-                #print "comment"
-                pass
+                continue
+            if block.startswith("//"):  # One line comments
+                continue
+            if re.match("[0-9A-Z_]+\s+TEXTINCLUDE", block) is not None:  # TEXTINCLUDE is editor specific, not part of the app.
+                continue
             if re.match("[0-9A-Z_]+\s+DIALOG", block) is not None:
                 dialog = re.match("(?P<dialogname>[0-9A-Z_]+)\s+(?P<dialogtype>DIALOGEX|DIALOG)", block).groupdict()
                 dialogname = dialog["dialogname"]
                 dialogtype = dialog["dialogtype"]
-                #print "dialog: %s" % dialogname
                 for match in DIALOG_RE.finditer(block):
                     if not match.groupdict()['value']:
                         continue
@@ -218,7 +219,6 @@ class rcfile(base.TranslationStore):
                     self.addunit(newunit)
             if re.match("[0-9A-Z_]+\s+MENU", block) is not None:
                 menuname = re.match("(?P<menuname>[0-9A-Z_]+)\s+MENU", block).groupdict()["menuname"]
-                #print "menu: %s" % menuname
                 for match in MENU_RE.finditer(block):
                     if not match.groupdict()['value']:
                         continue
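The rc.py parser now also skips `//` one-line comments and Visual Studio's editor-specific TEXTINCLUDE blocks. A sketch of the block-classification checks, assuming blocks have already been split as in `rcfile.parse` (`is_skippable_rc_block` is a hypothetical helper, not part of the module):

```python
import re

def is_skippable_rc_block(block):
    """Blocks the parser ignores: comments and Visual Studio TEXTINCLUDE."""
    if block.startswith("/*"):   # block comment
        return True
    if block.startswith("//"):   # one-line comment
        return True
    if re.match(r"[0-9A-Z_]+\s+TEXTINCLUDE", block):  # editor metadata, not app strings
        return True
    return False
```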
diff --git a/translate/storage/statistics.py b/translate/storage/statistics.py
index 4904779..8e5cde2 100644
--- a/translate/storage/statistics.py
+++ b/translate/storage/statistics.py
@@ -25,6 +25,7 @@
 from translate import lang
 from translate.lang import factory
 
+
 # calling classifyunits() in the constructor is probably not ideal.
 # idea: have a property for .classification that calls it if necessary
 
diff --git a/translate/storage/statsdb.py b/translate/storage/statsdb.py
index 1cd9364..24fd1d2 100644
--- a/translate/storage/statsdb.py
+++ b/translate/storage/statsdb.py
@@ -23,16 +23,13 @@
 
 """
 
-try:
-    from sqlite3 import dbapi2
-except ImportError:
-    from pysqlite2 import dbapi2
+import logging
 import os.path
 import re
-import sys
 import stat
+import sys
 import thread
-import logging
+from sqlite3 import dbapi2
 from UserDict import UserDict
 
 from translate import __version__ as toolkitversion
@@ -41,11 +38,20 @@ from translate.misc.multistring import multistring
 from translate.storage import factory
 from translate.storage.workflow import StateEnum
 
+
 logger = logging.getLogger(__name__)
 
 #kdepluralre = re.compile("^_n: ") #Restore this if you really need support for old kdeplurals
 brtagre = re.compile("<br\s*?/?>")
-xmltagre = re.compile("<[^>]+>")
+# xmltagre is a direct copy of the one from placeables/general.py
+xmltagre = re.compile(r'''
+        <                         # start of opening tag
+        ([\w.:]+)                 # tag name, possibly namespaced
+        (\s([\w.:]+=              # space and attribute name followed by =
+            ((".*?")|('.*?'))     # attribute value, single or double quoted
+        )?)*/?>                   # end of opening tag, possibly self closing
+        |</([\w.]+)>              # or a closing tag
+        ''', re.VERBOSE)
 numberre = re.compile("\\D\\.\\D")
 
 extended_state_strings = {
@@ -55,7 +61,7 @@ extended_state_strings = {
     StateEnum.NEEDS_REVIEW: "needs-review",
     StateEnum.UNREVIEWED: "unreviewed",
     StateEnum.FINAL: "final",
-    }
+}
 
 UNTRANSLATED = StateEnum.EMPTY
 FUZZY = StateEnum.NEEDS_WORK
@@ -74,8 +80,8 @@ def wordcount(string):
     string = brtagre.sub("\n", string)
     string = xmltagre.sub("", string)
     string = numberre.sub(" ", string)
-    #TODO: This should still use the correct language to count in the target
-    #language
+    # TODO: This should still use the correct language to count in the target
+    # language
     return len(Common.words(string))
 
 
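The new verbose pattern is stricter than the old `<[^>]+>`: it only strips spans that actually parse as XML tags, so a stray `<` used as a comparison operator in a translation (e.g. `a < b > c`, which the old pattern would have mangled) survives word counting. A small demonstration with the same pattern:

```python
import re

# same pattern as the statsdb xmltagre above
xmltagre = re.compile(r'''
    <                         # start of opening tag
    ([\w.:]+)                 # tag name, possibly namespaced
    (\s([\w.:]+=              # space and attribute name followed by =
        ((".*?")|('.*?'))     # attribute value, single or double quoted
    )?)*/?>                   # end of opening tag, possibly self closing
    |</([\w.]+)>              # or a closing tag
    ''', re.VERBOSE)

print(xmltagre.sub("", 'Click <a href="x">here</a> or press 1 < 2'))
```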
@@ -103,7 +109,7 @@ def wordsinunit(unit):
 class Record(UserDict):
 
     def __init__(self, record_keys, record_values=None, compute_derived_values=lambda x: x):
-        if record_values == None:
+        if record_values is None:
             record_values = (0 for _i in record_keys)
         self.record_keys = record_keys
         self.data = dict(zip(record_keys, record_values))
@@ -188,6 +194,7 @@ class FileTotals(object):
                 untranslated            INTEGER NOT NULL,
                 translatedtargetwords   INTEGER NOT NULL);""")
 
+    @classmethod
     def new_record(cls, state_for_db=None, sourcewords=None, targetwords=None):
         record = Record(cls.keys, compute_derived_values=cls._compute_derived_values)
         if state_for_db is not None:
@@ -203,8 +210,7 @@ class FileTotals(object):
                 record['fuzzysourcewords'] = sourcewords
         return record
 
-    new_record = classmethod(new_record)
-
+    @classmethod
     def _compute_derived_values(cls, record):
         record["total"] = record["untranslated"] + \
                           record["translated"] + \
@@ -213,7 +219,6 @@ class FileTotals(object):
                                      record["translatedsourcewords"] + \
                                      record["fuzzysourcewords"]
         record["review"] = 0
-    _compute_derived_values = classmethod(_compute_derived_values)
 
     def __getitem__(self, fileid):
         result = self.cur.execute("""
@@ -446,11 +451,11 @@ class StatsCache(object):
                     index = unitindex
                 # what about plurals in .source and .target?
                 unit_state_for_db = statefordb(unit)
-                unitvalues.append((unit.getid(), fileid, index, \
-                                unit.source, unit.target, \
-                                sourcewords, targetwords, \
-                                unit_state_for_db,
-                                unit.get_state_id()))
+                unitvalues.append((unit.getid(), fileid, index,
+                                   unit.source, unit.target,
+                                   sourcewords, targetwords,
+                                   unit_state_for_db,
+                                   unit.get_state_id()))
                 file_totals_record = file_totals_record + FileTotals.new_record(unit_state_for_db, sourcewords, targetwords)
         # XXX: executemany is non-standard
         self.cur.executemany("""INSERT INTO units
@@ -491,7 +496,7 @@ class StatsCache(object):
                 "units": value[1],
                 "sourcewords": value[2],
                 "targetwords": value[3],
-                }
+            }
         return stats
 
     def filetotals(self, filename, store=None, extended=False):
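The statsdb hunks replace the pre-decorator idiom `name = classmethod(name)` with the equivalent `@classmethod` decorator. A minimal illustration of the equivalence (the class below is a sketch, not the real FileTotals):

```python
class FileTotalsSketch(object):
    """Illustrative only; mirrors the shape of FileTotals.new_record."""
    keys = ('translated', 'fuzzy', 'untranslated')

    @classmethod
    def new_record(cls):
        # the decorator binds the first argument to the class itself
        return dict.fromkeys(cls.keys, 0)

# the removed old-style spelling did the same thing:
#     def new_record(cls): ...
#     new_record = classmethod(new_record)
```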
diff --git a/translate/storage/subtitles.py b/translate/storage/subtitles.py
index 991780e..ff34b88 100644
--- a/translate/storage/subtitles.py
+++ b/translate/storage/subtitles.py
@@ -28,27 +28,25 @@
 """
 
 import os
-from StringIO import StringIO
 import tempfile
+from cStringIO import StringIO
 
 try:
-    from aeidon import Subtitle
-    from aeidon import documents
+    from aeidon import Subtitle, documents, newlines
     from aeidon.encodings import detect
+    from aeidon.files import (AdvSubStationAlpha, MicroDVD, SubRip,
+                              SubStationAlpha, new)
     from aeidon.util import detect_format as determine
-    from aeidon.files import new
-    from aeidon.files import MicroDVD, SubStationAlpha, AdvSubStationAlpha, SubRip
-    from aeidon import newlines
 except ImportError:
-    from gaupol.subtitle import Subtitle
-    from gaupol import documents
+    from gaupol import FormatDeterminer, documents
     from gaupol.encodings import detect
-    from gaupol import FormatDeterminer
+    from gaupol.files import (AdvSubStationAlpha, MicroDVD, SubRip,
+                              SubStationAlpha, new)
+    from gaupol.newlines import newlines
+    from gaupol.subtitle import Subtitle
+    from translate.storage import base
     _determiner = FormatDeterminer()
     determine = _determiner.determine
-    from gaupol.files import new
-    from gaupol.files import MicroDVD, SubStationAlpha, AdvSubStationAlpha, SubRip
-    from gaupol.newlines import newlines
 
 from translate.storage import base
 
@@ -115,7 +113,7 @@ class SubtitleFile(base.TranslationStore):
                 newunit._start = subtitle.start
                 newunit._end = subtitle.end
                 newunit._duration = subtitle.duration_seconds
-        except Exception, e:
+        except Exception as e:
             raise base.ParseError(e)
 
     def _parsefile(self, storefile):
@@ -164,7 +162,7 @@ class SubtitleFile(base.TranslationStore):
 
 class SubRipFile(SubtitleFile):
     """specialized class for SubRipFile's only"""
-    Name = _("SubRip subtitles file")
+    Name = "SubRip subtitles file"
     Extensions = ['srt']
 
     def __init__(self, *args, **kwargs):
@@ -177,7 +175,7 @@ class SubRipFile(SubtitleFile):
 
 class MicroDVDFile(SubtitleFile):
     """specialized class for SubRipFile's only"""
-    Name = _("MicroDVD subtitles file")
+    Name = "MicroDVD subtitles file"
     Extensions = ['sub']
 
     def __init__(self, *args, **kwargs):
@@ -190,7 +188,7 @@ class MicroDVDFile(SubtitleFile):
 
 class AdvSubStationAlphaFile(SubtitleFile):
     """specialized class for SubRipFile's only"""
-    Name = _("Advanced Substation Alpha subtitles file")
+    Name = "Advanced Substation Alpha subtitles file"
     Extensions = ['ass']
 
     def __init__(self, *args, **kwargs):
@@ -203,7 +201,7 @@ class AdvSubStationAlphaFile(SubtitleFile):
 
 class SubStationAlphaFile(SubtitleFile):
     """specialized class for SubRipFile's only"""
-    Name = _("Substation Alpha subtitles file")
+    Name = "Substation Alpha subtitles file"
     Extensions = ['ssa']
 
     def __init__(self, *args, **kwargs):
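The subtitles.py hunks regroup the aeidon-first, gaupol-fallback imports and also swap `StringIO.StringIO` for the faster `cStringIO`. The guarded-import pattern underlying both is the usual way to prefer an optional or faster implementation; a self-contained sketch that also works on Python 3 (where `cStringIO` no longer exists):

```python
try:
    from cStringIO import StringIO   # fast C implementation (Python 2)
except ImportError:
    from io import StringIO          # Python 3 fallback with the same API

buf = StringIO()
buf.write('subtitle text')
```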
diff --git a/translate/storage/symbian.py b/translate/storage/symbian.py
index a72a3ac..3c5836d 100644
--- a/translate/storage/symbian.py
+++ b/translate/storage/symbian.py
@@ -20,6 +20,7 @@
 
 import re
 
+
 charset_re = re.compile('CHARACTER_SET[ ]+(?P<charset>.*)')
 header_item_or_end_re = re.compile('(((?P<key>[^ ]+)(?P<space>[ ]*:[ ]*)(?P<value>.*))|(?P<end_comment>[*]/))')
 header_item_re = re.compile('(?P<key>[^ ]+)(?P<space>[ ]*:[ ]*)(?P<value>.*)')
diff --git a/translate/storage/tbx.py b/translate/storage/tbx.py
index 5e3e0a3..e72d124 100644
--- a/translate/storage/tbx.py
+++ b/translate/storage/tbx.py
@@ -53,7 +53,7 @@ Provisional work is done to make several languages possible."""
 class tbxfile(lisa.LISAfile):
     """Class representing a TBX file store."""
     UnitClass = tbxunit
-    Name = _("TBX Glossary")
+    Name = "TBX Glossary"
     Mimetypes = ["application/x-tbx"]
     Extensions = ["tbx"]
     rootNode = "martif"
diff --git a/translate/storage/test_aresource.py b/translate/storage/test_aresource.py
index 61d3df4..c435032 100644
--- a/translate/storage/test_aresource.py
+++ b/translate/storage/test_aresource.py
@@ -4,14 +4,22 @@
 from lxml import etree
 
 from translate.storage import aresource, test_monolingual
+from translate.misc.multistring import multistring
+from translate.storage.base import TranslationStore
 
 
 class TestAndroidResourceUnit(test_monolingual.TestMonolingualUnit):
     UnitClass = aresource.AndroidResourceUnit
 
-    def __check_escape(self, string, xml):
+    def __check_escape(self, string, xml, target_language=None):
         """Helper that checks that a string is output with the right escape."""
         unit = self.UnitClass("Test String")
+
+        if (target_language is not None):
+            store = TranslationStore()
+            store.settargetlanguage(target_language)
+            unit._store = store
+
         unit.target = string
 
         print("unit.target:", repr(unit.target))
@@ -21,12 +29,7 @@ class TestAndroidResourceUnit(test_monolingual.TestMonolingualUnit):
 
     def __check_parse(self, string, xml):
         """Helper that checks that a string is parsed correctly."""
-        if etree.LXML_VERSION >= (2, 1, 0):
-            # Since version 2.1.0 we can pass the strip_cdata parameter to
-            # indicate that we don't want cdata to be converted to raw XML.
-            parser = etree.XMLParser(strip_cdata=False)
-        else:
-            parser = etree.XMLParser()
+        parser = etree.XMLParser(strip_cdata=False)
 
         translatable = 'translatable="false"' not in xml
         et = etree.fromstring(xml, parser)
@@ -83,6 +86,12 @@ class TestAndroidResourceUnit(test_monolingual.TestMonolingualUnit):
                '</string>\n\n')
         self.__check_escape(string, xml)
 
+    def test_escape_html_code_quote(self):
+        string = 'some <b>html code</b> \'here\''
+        xml = ('<string name="Test String">some <b>html code</b> \\\'here\\\''
+               '</string>\n\n')
+        self.__check_escape(string, xml)
+
     def test_escape_arrows(self):
         string = '<<< arrow'
         xml = '<string name="Test String"><<< arrow</string>\n\n'
@@ -105,6 +114,14 @@ class TestAndroidResourceUnit(test_monolingual.TestMonolingualUnit):
         xml = '<string name="Test String"></string>\n\n'
         self.__check_escape(string, xml)
 
+    def test_plural_escape_message_with_newline(self):
+        mString = multistring(['one message\nwith newline', 'other message\nwith newline'])
+        xml = ('<plurals name="Test String">\n\t'
+                 '<item quantity="one">one message\\nwith newline</item>\n\t'
+                 '<item quantity="other">other message\\nwith newline</item>\n'
+               '</plurals>\n')
+        self.__check_escape(mString, xml, 'en')
+
     ############################ Check string parse ###########################
 
     def test_parse_message_with_newline(self):
@@ -197,6 +214,59 @@ class TestAndroidResourceUnit(test_monolingual.TestMonolingualUnit):
                '</string>\n\n')
         self.__check_parse(string, xml)
 
+    def test_plural_parse_message_with_newline(self):
+        mString = multistring(['one message\nwith newline', 'other message\nwith newline'])
+        xml = ('<plurals name="Test String">\n\t'
+                 '<item quantity="one">one message\\nwith newline</item>\n\t'
+                 '<item quantity="other">other message\\nwith newline</item>\n'
+               '</plurals>\n')
+        self.__check_parse(mString, xml)
+
 
 class TestAndroidResourceFile(test_monolingual.TestMonolingualStore):
     StoreClass = aresource.AndroidResourceFile
+
+    def test_targetlanguage_default_handlings(self):
+        store = self.StoreClass()
+
+        # Initial value is None
+        assert store.gettargetlanguage() is None
+
+        # sourcelanguage shouldn't change the targetlanguage
+        store.setsourcelanguage('en')
+        assert store.gettargetlanguage() is None
+
+        # targetlanguage setter works correctly
+        store.settargetlanguage('de')
+        assert store.gettargetlanguage() == 'de'
+
+        # explicit targetlanguage wins over filename
+        store.filename = 'dommy/values-it/res.xml'
+        assert store.gettargetlanguage() == 'de'
+
+    def test_targetlanguage_auto_detection_filename(self):
+        store = self.StoreClass()
+
+        # Check language auto_detection
+        store.filename = 'project/values-it/res.xml'
+        assert store.gettargetlanguage() == 'it'
+
+    def test_targetlanguage_auto_detection_filename_default_language(self):
+        store = self.StoreClass()
+
+        store.setsourcelanguage('en')
+
+        # Check language auto_detection
+        store.filename = 'project/values/res.xml'
+        assert store.gettargetlanguage() == 'en'
+
+    def test_targetlanguage_auto_detection_invalid_filename(self):
+        store = self.StoreClass()
+
+        store.setsourcelanguage('en')
+
+        store.filename = 'project/invalid_directory/res.xml'
+        assert store.gettargetlanguage() is None
+
+        store.filename = 'invalid_directory'
+        assert store.gettargetlanguage() is None
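The new tests exercise deriving the target language from an Android resource path: an explicit `settargetlanguage` wins, a `values-xx` directory yields `xx`, a bare `values` directory falls back to the source language, and anything else yields None. A hypothetical helper mirroring that behaviour (not the store's actual implementation):

```python
import re

def language_from_android_path(filename, sourcelanguage=None):
    """Derive a target language from an Android res/ path, or None."""
    for part in filename.replace('\\', '/').split('/'):
        match = re.match(r'values-(\w+)$', part)
        if match:
            return match.group(1)          # e.g. values-it -> 'it'
        if part == 'values':
            # the default values/ directory holds the source language
            return sourcelanguage
    return None
```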
diff --git a/translate/storage/test_base.py b/translate/storage/test_base.py
index 213afd4..8246417 100644
--- a/translate/storage/test_base.py
+++ b/translate/storage/test_base.py
@@ -52,10 +52,10 @@ def test_force_override():
             base.force_override(self.test, BaseClass)
             return True
 
+        @classmethod
         def classtest(cls):
             base.force_override(cls.classtest, BaseClass)
             return True
-        classtest = classmethod(classtest)
 
     class DerivedClass(BaseClass):
         pass
@@ -86,7 +86,7 @@ class TestTranslationUnit:
     def test_create(self):
         """tests a simple creation with a source string"""
         unit = self.unit
-        print 'unit.source:', unit.source
+        print('unit.source:', unit.source)
         assert unit.source == "Test String"
 
     def test_eq(self):
@@ -129,8 +129,8 @@ class TestTranslationUnit:
                     '\n', '\t', '\r', '\r\n', '\\r', '\\', '\\\r']
         for special in specials:
             unit.source = special
-            print "unit.source:", repr(unit.source)
-            print "special:", repr(special)
+            print("unit.source:", repr(unit.source))
+            print("special:", repr(special))
             assert unit.source == special
 
     def test_difficult_escapes(self):
@@ -143,8 +143,8 @@ class TestTranslationUnit:
                     '\\r\\n', '\\\\r\\n', '\\r\\\\n', '\\\\n\\\\r']
         for special in specials:
             unit.source = special
-            print "unit.source:", repr(unit.source) + '|'
-            print "special:", repr(special) + '|'
+            print("unit.source:", repr(unit.source) + '|')
+            print("special:", repr(special) + '|')
             assert unit.source == special
 
     def test_note_sanity(self):
@@ -234,8 +234,8 @@ class TestTranslationStore(object):
         """Tests adding a new unit with a source string"""
         store = self.StoreClass()
         unit = store.addsourceunit("Test String")
-        print str(unit)
-        print str(store)
+        print(str(unit))
+        print(str(store))
         assert headerless_len(store.units) == 1
         assert unit.source == "Test String"
 
@@ -271,13 +271,13 @@ class TestTranslationStore(object):
             store2unit = store2.units[n]
             match = store1unit == store2unit
             if not match:
-                print "match failed between elements %d of %d" % ((n + 1), headerless_len(store1.units))
-                print "store1:"
-                print str(store1)
-                print "store2:"
-                print str(store2)
-                print "store1.units[%d].__dict__:" % n, store1unit.__dict__
-                print "store2.units[%d].__dict__:" % n, store2unit.__dict__
+                print("match failed between elements %d of %d" % ((n + 1), headerless_len(store1.units)))
+                print("store1:")
+                print(str(store1))
+                print("store2:")
+                print(str(store2))
+                print("store1.units[%d].__dict__:" % n, store1unit.__dict__)
+                print("store2.units[%d].__dict__:" % n, store2unit.__dict__)
                 assert store1unit == store2unit
 
     def test_parse(self):
@@ -340,8 +340,8 @@ class TestTranslationStore(object):
         if not (self.StoreClass.Name and self.StoreClass.Name in supported_dict):
             return
         detail = supported_dict[self.StoreClass.Name]  # will start to get problematic once translated
-        print "Factory:", detail[0]
-        print "StoreClass:", self.StoreClass.Extensions
+        print("Factory:", detail[0])
+        print("StoreClass:", self.StoreClass.Extensions)
         for ext in detail[0]:
             assert ext in self.StoreClass.Extensions
         for ext in self.StoreClass.Extensions:
@@ -354,8 +354,8 @@ class TestTranslationStore(object):
         if not (self.StoreClass.Name and self.StoreClass.Name in supported_dict):
             return
         detail = supported_dict[self.StoreClass.Name]  # will start to get problematic once translated
-        print "Factory:", detail[1]
-        print "StoreClass:", self.StoreClass.Mimetypes
+        print("Factory:", detail[1])
+        print("StoreClass:", self.StoreClass.Mimetypes)
         for ext in detail[1]:
             assert ext in self.StoreClass.Mimetypes
         for ext in self.StoreClass.Mimetypes:
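The test-suite hunks convert Python 2 `print` statements into `print()` calls. With `from __future__ import print_function`, the call form behaves identically on Python 2.6+ and 3, and gains features like redirection via the `file` argument; a short sketch:

```python
from __future__ import print_function  # harmless no-op on Python 3

from io import StringIO

# print() as a function can target any file-like object
buf = StringIO()
print("unit.source:", repr("Test String"), file=buf)
```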
diff --git a/translate/storage/test_catkeys.py b/translate/storage/test_catkeys.py
index b82dd86..b3aca34 100644
--- a/translate/storage/test_catkeys.py
+++ b/translate/storage/test_catkeys.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.storage import test_base
-from translate.storage import catkeys
+from translate.storage import catkeys, test_base
 
 
 class TestCatkeysUnit(test_base.TestTranslationUnit):
@@ -20,8 +19,8 @@ class TestCatkeysUnit(test_base.TestTranslationUnit):
                     '\\\n', '\\\t', '\\\\r', '\\\\"']
         for special in specials:
             unit.source = special
-            print "unit.source:", repr(unit.source) + '|'
-            print "special:", repr(special) + '|'
+            print("unit.source:", repr(unit.source) + '|')
+            print("special:", repr(special) + '|')
             assert unit.source == special
 
     def test_newlines(self):
diff --git a/translate/storage/test_cpo.py b/translate/storage/test_cpo.py
index ed8a219..6ad5a2e 100644
--- a/translate/storage/test_cpo.py
+++ b/translate/storage/test_cpo.py
@@ -1,11 +1,16 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from pytest import raises, mark, importorskip
+import sys
+
+from pytest import importorskip, mark, raises
+cpo = importorskip("not sys.platform.startswith('linux')")
 
 from translate.misc import wStringIO
 from translate.misc.multistring import multistring
 from translate.storage import test_po
+
+
 cpo = importorskip("translate.storage.cpo")
 
 
@@ -84,8 +89,8 @@ class TestCPOFile(test_po.TestPOFile):
         pofile = self.poparse(posource)
         thepo = pofile.units[0]
         thepo.msgidcomment = "first comment"
-        print pofile
-        print "Blah", thepo.source
+        print(pofile)
+        print("Blah", thepo.source)
         assert thepo.source == "test me"
         thepo.msgidcomment = "second comment"
         assert str(pofile).count("_:") == 1
@@ -97,7 +102,7 @@ class TestCPOFile(test_po.TestPOFile):
         pofile = self.poparse(posource)
         assert len(pofile.units) == 2
         pofile.removeduplicates("msgctxt")
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 2
         assert str(pofile.units[0]).count("source1") == 2
         assert str(pofile.units[1]).count("source2") == 2
@@ -110,8 +115,8 @@ class TestCPOFile(test_po.TestPOFile):
         assert len(pofile.units) == 2
         pofile.removeduplicates("merge")
         assert len(pofile.units) == 2
-        print pofile.units[0].msgidcomments
-        print pofile.units[1].msgidcomments
+        print(pofile.units[0].msgidcomments)
+        print(pofile.units[1].msgidcomments)
         assert po.unquotefrompo(pofile.units[0].msgidcomments) == "_: source1\n"
         assert po.unquotefrompo(pofile.units[1].msgidcomments) == "_: source2\n"
 
@@ -147,7 +152,7 @@ class TestCPOFile(test_po.TestPOFile):
         posource = u'''#: nb\nmsgid "Norwegian Bokm\xe5l"\nmsgstr ""\n'''
         pofile = self.StoreClass(wStringIO.StringIO(posource.encode("UTF-8")), encoding="UTF-8")
         assert len(pofile.units) == 1
-        print str(pofile)
+        print(str(pofile))
         thepo = pofile.units[0]
 #        assert str(pofile) == posource.encode("UTF-8")
         # extra test: what if we set the msgid to a unicode? this happens in prop2po etc
@@ -165,7 +170,7 @@ class TestCPOFile(test_po.TestPOFile):
         """checks the content of all the expected sections of a PO message"""
         posource = '# other comment\n#. automatic comment\n#: source comment\n#, fuzzy\nmsgid "One"\nmsgstr "Een"\n'
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 1
         assert str(pofile) == posource
 
@@ -173,8 +178,8 @@ class TestCPOFile(test_po.TestPOFile):
         """Tests for correct output of mulitline obsolete messages"""
         posource = '#~ msgid ""\n#~ "Old thing\\n"\n#~ "Second old thing"\n#~ msgstr ""\n#~ "Ou ding\\n"\n#~ "Tweede ou ding"\n'
         pofile = self.poparse(posource)
-        print "Source:\n%s" % posource
-        print "Output:\n%s" % str(pofile)
+        print("Source:\n%s" % posource)
+        print("Output:\n%s" % str(pofile))
         assert len(pofile.units) == 1
         assert pofile.units[0].isobsolete()
         assert not pofile.units[0].istranslatable()
@@ -184,6 +189,6 @@ class TestCPOFile(test_po.TestPOFile):
         """tests behaviour of unassociated comments."""
         oldsource = '# old lonesome comment\n\nmsgid "one"\nmsgstr "een"\n'
         oldfile = self.poparse(oldsource)
-        print "__str__", str(oldfile)
+        print("__str__", str(oldfile))
         assert len(oldfile.units) == 1
         assert str(oldfile).find("# old lonesome comment\nmsgid") >= 0
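test_cpo guards the optional C-accelerated module with pytest's `importorskip`, which returns the imported module when available and skips the collecting test module (rather than erroring) when it is not. A sketch of the pattern, using a stdlib module name for illustration instead of `translate.storage.cpo`:

```python
import pytest

# returns the imported module, or triggers a pytest skip if the import fails
json = pytest.importorskip("json")  # always importable, so never skips here

def test_roundtrip():
    assert json.loads(json.dumps({"a": 1})) == {"a": 1}
```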
diff --git a/translate/storage/test_csvl10n.py b/translate/storage/test_csvl10n.py
index 1426433..7722af8 100644
--- a/translate/storage/test_csvl10n.py
+++ b/translate/storage/test_csvl10n.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.storage import csvl10n
-from translate.storage import test_base
+from translate.storage import csvl10n, test_base
 
 
 class TestCSVUnit(test_base.TestTranslationUnit):
diff --git a/translate/storage/test_directory.py b/translate/storage/test_directory.py
index 4658dd9..f6bf356 100644
--- a/translate/storage/test_directory.py
+++ b/translate/storage/test_directory.py
@@ -12,7 +12,7 @@ class TestDirectory(object):
 
     def setup_method(self, method):
         """sets up a test directory"""
-        print "setup_method called on", self.__class__.__name__
+        print("setup_method called on", self.__class__.__name__)
         self.testdir = "%s_testdir" % (self.__class__.__name__)
         self.cleardir(self.testdir)
         os.mkdir(self.testdir)
@@ -46,7 +46,7 @@ class TestDirectory(object):
 
     def test_created(self):
         """test that the directory actually exists"""
-        print self.testdir
+        print(self.testdir)
         assert os.path.isdir(self.testdir)
 
     def test_basic(self):
diff --git a/translate/storage/test_dtd.py b/translate/storage/test_dtd.py
index 684c97d..0b83a40 100644
--- a/translate/storage/test_dtd.py
+++ b/translate/storage/test_dtd.py
@@ -18,13 +18,10 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-import warnings
-
 from pytest import mark
 
 from translate.misc import wStringIO
-from translate.storage import dtd
-from translate.storage import test_monolingual
+from translate.storage import dtd, test_monolingual
 
 
 def test_roundtrip_quoting():
@@ -62,19 +59,15 @@ def test_roundtrip_quoting():
     for special in specials:
         quoted_special = dtd.quotefordtd(special)
         unquoted_special = dtd.unquotefromdtd(quoted_special)
-        print "special: %r\nquoted: %r\nunquoted: %r\n" % (special,
+        print("special: %r\nquoted: %r\nunquoted: %r\n" % (special,
                                                            quoted_special,
-                                                           unquoted_special)
+                                                           unquoted_special))
         assert special == unquoted_special
 
 
 @mark.xfail(reason="Not Implemented")
 def test_quotefordtd_unimplemented_cases():
     """Test unimplemented quoting DTD cases."""
-    assert dtd.quotefordtd("Color & Light") == '"Color & Light"'
-    assert dtd.quotefordtd("Color & █") == '"Color & █"'
-    assert dtd.quotefordtd("Color&Light &red;") == '"Color&Light &red;"'
-    assert dtd.quotefordtd("Color & Light; Yes") == '"Color & Light; Yes"'
     assert dtd.quotefordtd("Between <p> and </p>") == ('"Between <p> and'
                                                        ' </p>"')
 
@@ -95,13 +88,16 @@ def test_quotefordtd():
     assert dtd.quotefordtd("A \"thing\"") == '"A "thing""'
     # The " character is not escaped when it indicates an attribute value.
     assert dtd.quotefordtd("<a href=\"http") == "'<a href=\"http'"
+    # &
+    assert dtd.quotefordtd("Color & Light") == '"Color & Light"'
+    assert dtd.quotefordtd("Color & █") == '"Color & █"'
+    assert dtd.quotefordtd("Color&Light &red;") == '"Color&Light &red;"'
+    assert dtd.quotefordtd("Color & Light; Yes") == '"Color & Light; Yes"'
 
 
 @mark.xfail(reason="Not Implemented")
 def test_unquotefromdtd_unimplemented_cases():
     """Test unimplemented unquoting DTD cases."""
-    assert dtd.unquotefromdtd('"Color & Light"') == "Color & Light"
-    assert dtd.unquotefromdtd('"Color & █"') == "Color & █"
     assert dtd.unquotefromdtd('"<p> and </p>"') == "<p> and </p>"
 
 
@@ -117,6 +113,9 @@ def test_unquotefromdtd():
     assert dtd.unquotefromdtd('"&blockAttackSites;"') == "&blockAttackSites;"
     assert dtd.unquotefromdtd('"&intro-point2-a;"') == "&intro-point2-a;"
     assert dtd.unquotefromdtd('"&basePBMenu.label"') == "&basePBMenu.label"
+    # &
+    assert dtd.unquotefromdtd('"Color & Light"') == "Color & Light"
+    assert dtd.unquotefromdtd('"Color & █"') == "Color & █"
     # nbsp
     assert dtd.unquotefromdtd('"&#x00A0;"') == "&#x00A0;"
     # '
@@ -138,9 +137,9 @@ def test_android_roundtrip_quoting():
     for special in specials:
         quoted_special = dtd.quoteforandroid(special)
         unquoted_special = dtd.unquotefromandroid(quoted_special)
-        print "special: %r\nquoted: %r\nunquoted: %r\n" % (special,
+        print("special: %r\nquoted: %r\nunquoted: %r\n" % (special,
                                                            quoted_special,
-                                                           unquoted_special)
+                                                           unquoted_special))
         assert special == unquoted_special
 
 
@@ -161,11 +160,16 @@ def test_unquotefromandroid():
 def test_removeinvalidamp(recwarn):
     """tests the removeinvalidamps function"""
 
-    def tester(actual, expected):
+    def tester(actual, expected=None):
+        if expected is None:
+            expected = actual
         assert dtd.removeinvalidamps("test.name", actual) == expected
-    tester("Valid &entity; included", "Valid &entity; included")
-    tester("Valid &entity.name; included", "Valid &entity.name; included")
-    tester("Valid Ӓ included", "Valid Ӓ included")
+    # No errors
+    tester("Valid &entity; included")
+    tester("Valid &entity.name; included")
+    tester("Valid Ӓ included")
+    tester("Valid &entity_name;")
+    # Errors that require & removal
     tester("This &amp is broken", "This amp is broken")
     tester("Mad & &amp &", "Mad  amp &")
     dtd.removeinvalidamps("simple.warningtest", "Dimpled &Ring")
@@ -239,7 +243,7 @@ class TestDTD(test_monolingual.TestMonolingualStore):
         dtdfile = self.dtdparse(dtdsource)
         assert len(dtdfile.units) == 1
         dtdunit = dtdfile.units[0]
-        print dtdunit
+        print(dtdunit)
         assert dtdunit.isnull()
 
     def test_newlines_in_entity(self):
@@ -252,16 +256,16 @@ class TestDTD(test_monolingual.TestMonolingualStore):
 ">
 '''
         dtdregen = self.dtdregen(dtdsource)
-        print dtdregen
-        print dtdsource
+        print(dtdregen)
+        print(dtdsource)
         assert dtdsource == dtdregen
 
     def test_conflate_comments(self):
         """Tests that comments don't run onto the same line"""
         dtdsource = '<!-- test comments -->\n<!-- getting conflated -->\n<!ENTITY sample.txt "hello">\n'
         dtdregen = self.dtdregen(dtdsource)
-        print dtdsource
-        print dtdregen
+        print(dtdsource)
+        print(dtdregen)
         assert dtdsource == dtdregen
 
     def test_localisation_notes(self):
@@ -295,8 +299,8 @@ class TestDTD(test_monolingual.TestMonolingualStore):
         # FIXME: The following line is necessary, because of dtdfile's inability to remember the spacing of
         # the source DTD file when converting back to DTD.
         dtdregen = self.dtdregen(dtdsource).replace('realBrandDTD SYSTEM', 'realBrandDTD\n SYSTEM')
-        print dtdsource
-        print dtdregen
+        print(dtdsource)
+        print(dtdregen)
         assert dtdsource == dtdregen
 
     @mark.xfail(reason="Not Implemented")
diff --git a/translate/storage/test_factory.py b/translate/storage/test_factory.py
index 530e647..08c596a 100644
--- a/translate/storage/test_factory.py
+++ b/translate/storage/test_factory.py
@@ -1,9 +1,9 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
+import os
 from bz2 import BZ2File
 from gzip import GzipFile
-import os
 
 from translate.misc import wStringIO
 from translate.storage import factory
@@ -35,6 +35,7 @@ class BaseTestFactory:
         """removes the attributes set up by setup_method"""
         self.cleardir(self.testdir)
 
+    @classmethod
     def cleardir(self, dirname):
         """removes the given directory"""
         if os.path.exists(dirname):
@@ -46,7 +47,6 @@ class BaseTestFactory:
         if os.path.exists(dirname):
             os.rmdir(dirname)
         assert not os.path.exists(dirname)
-    cleardir = classmethod(cleardir)
 
     def test_getclass(self):
         assert classname("file.po") == "pofile"
@@ -170,5 +170,5 @@ class TestWordfastFactory(BaseTestFactory):
     from translate.storage import wordfast
     expected_instance = wordfast.WordfastTMFile
     filename = 'dummy.txt'
-    file_content = '''%20070801~103212	%User ID,S,S SMURRAY,SMS Samuel Murray-Smit,SM Samuel Murray-Smit,MW Mary White,DS Deepak Shota,MT! Machine translation (15),AL! Alignment (10),SM Samuel Murray,	%TU=00000075	%AF-ZA	%Wordfast TM v.5.51r/00	%EN-ZA	%---80597535	Subject (5),EL,EL Electronics,AC Accounting,LE Legal,ME Mechanics,MD Medical,LT Literary,AG Agriculture,CO Commercial	Client (5),LS,LS LionSoft Corp,ST SuperTron Inc,CA CompArt Ltd			
-20070801~103248	SM	0	AF-ZA	Langeraad en duimpie	EN-ZA	Big Ben and Little John	EL	LS'''
+    file_content = ('''%20070801~103212	%User ID,S,S SMURRAY,SMS Samuel Murray-Smit,SM Samuel Murray-Smit,MW Mary White,DS Deepak Shota,MT! Machine translation (15),AL! Alignment (10),SM Samuel Murray,	%TU=00000075	%AF-ZA	%Wordfast TM v.5.51r/00	%EN-ZA	%---80597535	Subject (5),EL,EL Electronics,AC Accounting,LE Legal,ME Mechanics,MD Medical,LT Literary,AG Agriculture,CO Commercial	Client (5),LS,LS LionSoft Corp,ST SuperTron Inc,CA CompArt Ltd			\n'''
+'''20070801~103248	SM	0	AF-ZA	Langeraad en duimpie	EN-ZA	Big Ben and Little John	EL	LS''')
diff --git a/translate/storage/test_html.py b/translate/storage/test_html.py
index b50da6e..b0b5b72 100644
--- a/translate/storage/test_html.py
+++ b/translate/storage/test_html.py
@@ -20,10 +20,9 @@
 
 """Tests for the HTML classes"""
 
-from pytest import raises, mark
+from pytest import mark, raises
 
-from translate.storage import base
-from translate.storage import html
+from translate.storage import base, html
 
 
 def test_guess_encoding():
@@ -36,7 +35,7 @@ def test_guess_encoding():
 def test_strip_html():
     assert html.strip_html("<a>Something</a>") == "Something"
     assert html.strip_html("You are <a>Something</a>") == "You are <a>Something</a>"
-    #assert html.strip_html("<b>You</b> are <a>Something</a>") == "<b>You</b> are <a>Something</a>"
+    assert html.strip_html("<b>You</b> are <a>Something</a>") == "<b>You</b> are <a>Something</a>"
     assert html.strip_html('<strong><font class="headingwhite">Projects</font></strong>') == "Projects"
     assert html.strip_html("<strong>Something</strong> else.") == "<strong>Something</strong> else."
     assert html.strip_html("<h1><strong>Something</strong> else.</h1>") == "<strong>Something</strong> else."
@@ -81,5 +80,95 @@ class TestHTMLParsing:
         interpreted as tags"""
         h = html.htmlfile()
         store = h.parsestring("<p>We are here</p><script>Some </tag>like data<script></p>")
-        print store.units[0].source
+        print(store.units[0].source)
         assert len(store.units) == 1
+
+
+class TestHTMLExtraction(object):
+
+    h = html.htmlfile
+
+    def test_extraction_tag_figcaption(self):
+        """Check that we can extract figcaption"""
+        h = html.htmlfile()
+        # Example from http://www.w3schools.com/tags/tag_figcaption.asp
+        store = h.parsestring("""
+               <figure>
+                   <img src="img_pulpit.jpg" alt="The Pulpit Rock" width="304" height="228">
+                   <figcaption>Fig1. - A view of the pulpit rock in Norway.</figcaption>
+               </figure>""")
+        print(store.units[0].source)
+        assert len(store.units) == 2
+        assert store.units[0].source == "The Pulpit Rock"
+        assert store.units[1].source == "Fig1. - A view of the pulpit rock in Norway."
+
+    def test_extraction_tag_caption_td_th(self):
+        """Check that we can extract table related translatable: th, td and caption"""
+        h = html.htmlfile()
+        # Example from http://www.w3schools.com/tags/tag_caption.asp
+        store = h.parsestring("""
+            <table>
+                <caption>Monthly savings</caption>
+                <tr>
+                    <th>Month</th>
+                    <th>Savings</th>
+                </tr>
+                <tr>
+                    <td>January</td>
+                    <td>$100</td>
+                </tr>
+            </table>""")
+        print(store.units[0].source)
+        assert len(store.units) == 5
+        assert store.units[0].source == "Monthly savings"
+        assert store.units[1].source == "Month"
+        assert store.units[2].source == "Savings"
+        assert store.units[3].source == "January"
+        assert store.units[4].source == "$100"
+
+    def test_extraction_attr_alt(self):
+        """Check that we can extract the alt attribute"""
+        h = html.htmlfile()
+        # Example from http://www.netmechanic.com/news/vol6/html_no1.htm
+        store = h.parsestring("""
+            <img src="cafeteria.jpg" height="200" width="200" alt="UAHC campers enjoy a meal in the camp cafeteria">
+        """)
+        assert len(store.units) == 1
+        assert store.units[0].source == "UAHC campers enjoy a meal in the camp cafeteria"
+
+
+    def test_extraction_attr_title(self):
+        """Check that we can extract title attribute"""
+        h = html.htmlfile()
+
+        # Example from http://www.w3schools.com/tags/att_global_title.asp
+        store = h.parsestring("""
+            <p><abbr title="World Health Organization">WHO</abbr> was founded in 1948.</p>
+            <p title="Free Web tutorials">W3Schools.com</p>""")
+        print(store.units[0].source)
+        assert len(store.units) == 4
+        assert store.units[0].source == "World Health Organization"
+        # FIXME: this is not ideal; we need to either drop title= since we've
+        # extracted it already, or not extract it earlier
+        assert store.units[1].source == '<abbr title="World Health Organization">WHO</abbr> was founded in 1948.'
+        assert store.units[2].source == "Free Web tutorials"
+        assert store.units[3].source == "W3Schools.com"
+
+        # Example from http://www.netmechanic.com/news/vol6/html_no1.htm
+        store = h.parsestring("""
+            <table width="100" border="2" title="Henry Jacobs Camp summer 2003 schedule">
+        """)
+        assert len(store.units) == 1
+        assert store.units[0].source == "Henry Jacobs Camp summer 2003 schedule"
+        # FIXME this doesn't extract as I'd have expected
+        #store = h.parsestring("""
+        #    <a href="page1.html" title="HS Jacobs - a UAHC camp in Utica, MS">Henry S. Jacobs Camp</a>
+        #""")
+        #assert len(store.units) == 2
+        #assert store.units[0].source == "HS Jacobs - a UAHC camp in Utica, MS"
+        #assert store.units[1].source == "Henry S. Jacobs Camp"
+        store = h.parsestring("""
+            <form name="application" title="Henry Jacobs camper application" method="  " action="  ">
+        """)
+        assert len(store.units) == 1
+        assert store.units[0].source == "Henry Jacobs camper application"
diff --git a/translate/storage/test_mo.py b/translate/storage/test_mo.py
index 62bc99a..3324ceb 100644
--- a/translate/storage/test_mo.py
+++ b/translate/storage/test_mo.py
@@ -1,13 +1,12 @@
 #!/usr/bin/env python
 
 import os
-import sys
-import StringIO
 import subprocess
+import sys
+from cStringIO import StringIO
+
+from translate.storage import factory, mo, test_base
 
-from translate.storage import factory
-from translate.storage import mo
-from translate.storage import test_base
 
 # get directory of this test
 dir = os.path.dirname(os.path.abspath(__file__))
@@ -21,6 +20,7 @@ os.environ["PYTHONPATH"] = os.pathsep.join(sys.path)
 os.environ["PATH"] = os.pathsep.join([os.path.join(dir, "translate", "tools"),
                                       os.environ["PATH"]])
 
+
 class TestMOUnit(test_base.TestTranslationUnit):
     UnitClass = mo.mounit
 
@@ -139,8 +139,8 @@ class TestMOFile(test_base.TestTranslationStore):
 
     def test_output(self):
         for posource in posources:
-            print "PO source file"
-            print posource
+            print("PO source file")
+            print(posource)
             PO_FILE, MO_MSGFMT, MO_POCOMPILE = self.get_mo_and_po()
 
             out_file = open(PO_FILE, 'w')
@@ -150,7 +150,7 @@ class TestMOFile(test_base.TestTranslationStore):
             subprocess.call(['msgfmt', PO_FILE, '-o', MO_MSGFMT])
             subprocess.call(['pocompile', '--errorlevel=traceback', PO_FILE, MO_POCOMPILE])
 
-            store = factory.getobject(StringIO.StringIO(posource))
+            store = factory.getobject(StringIO(posource))
             if store.isempty() and not os.path.exists(MO_POCOMPILE):
                 # pocompile doesn't create MO files for empty PO files, so we
                 # can skip the checks here.
@@ -161,11 +161,11 @@ class TestMOFile(test_base.TestTranslationStore):
 
             try:
                 mo_msgfmt = mo_msgfmt_f.read()
-                print "msgfmt output:"
-                print repr(mo_msgfmt)
+                print("msgfmt output:")
+                print(repr(mo_msgfmt))
                 mo_pocompile = mo_pocompile_f.read()
-                print "pocompile output:"
-                print repr(mo_pocompile)
+                print("pocompile output:")
+                print(repr(mo_pocompile))
                 assert mo_msgfmt == mo_pocompile
             finally:
                 mo_msgfmt_f.close()
diff --git a/translate/storage/test_monolingual.py b/translate/storage/test_monolingual.py
index f5f22df..8d44adb 100644
--- a/translate/storage/test_monolingual.py
+++ b/translate/storage/test_monolingual.py
@@ -41,11 +41,11 @@ class TestMonolingualStore(test_base.TestTranslationStore):
             store2unit = store2.units[n]
 
             if str(store1unit) != str(store2unit):
-                print("match failed between elements %d of %d" % ((n + 1), len(store1.units)))
+                print(("match failed between elements %d of %d" % ((n + 1), len(store1.units))))
                 print("store1:")
-                print(str(store1))
+                print((str(store1)))
                 print("store2:")
-                print(str(store2))
-                print("store1.units[%d].__dict__:" % n, store1unit.__dict__)
-                print("store2.units[%d].__dict__:" % n, store2unit.__dict__)
+                print((str(store2)))
+                print(("store1.units[%d].__dict__:" % n, store1unit.__dict__))
+                print(("store2.units[%d].__dict__:" % n, store2unit.__dict__))
                 assert str(store1unit) == str(store2unit)
diff --git a/translate/storage/test_mozilla_lang.py b/translate/storage/test_mozilla_lang.py
index 46463b5..60e82f4 100644
--- a/translate/storage/test_mozilla_lang.py
+++ b/translate/storage/test_mozilla_lang.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.storage import mozilla_lang
-from translate.storage import test_base
+from translate.storage import mozilla_lang, test_base
 
 
 class TestMozLangUnit(test_base.TestTranslationUnit):
@@ -17,7 +16,7 @@ class TestMozLangUnit(test_base.TestTranslationUnit):
         assert str(unit).endswith(" {ok}")
 
     def test_untranslated(self):
-	"""The target is always written to files and is never blank. If it is
+        """The target is always written to files and is never blank. If it is
+        truly untranslated then it won't end with '{ok}'."""
         unit = self.UnitClass("Open")
         assert unit.target is None
diff --git a/translate/storage/test_omegat.py b/translate/storage/test_omegat.py
index ca04afe..e063f98 100644
--- a/translate/storage/test_omegat.py
+++ b/translate/storage/test_omegat.py
@@ -3,8 +3,7 @@
 
 from pytest import mark
 
-from translate.storage import omegat as ot
-from translate.storage import test_base
+from translate.storage import omegat as ot, test_base
 
 
 class TestOtUnit(test_base.TestTranslationUnit):
@@ -14,7 +13,7 @@ class TestOtUnit(test_base.TestTranslationUnit):
 class TestOtFile(test_base.TestTranslationStore):
     StoreClass = ot.OmegaTFile
 
-    @mark.xfail(reason="This doesn't work, due to two store classes handling different " \
-        "extensions, but factory listing it as one supported file type")
+    @mark.xfail(reason="This doesn't work, due to two store classes handling different "
+                       "extensions, but factory listing it as one supported file type")
     def test_extensions(self):
         assert False
diff --git a/translate/storage/test_php.py b/translate/storage/test_php.py
index c3c0bbe..1a63a9d 100644
--- a/translate/storage/test_php.py
+++ b/translate/storage/test_php.py
@@ -3,9 +3,8 @@
 
 from pytest import mark
 
-from translate.storage import php
-from translate.storage import test_monolingual
 from translate.misc import wStringIO
+from translate.storage import php, test_monolingual
 
 
 def test_php_escaping_single_quote():
@@ -143,8 +142,7 @@ $foo = "bar";
         assert phpunit._comments == ["""/*""",
                                      """ * Comment line 1""",
                                      """ * Comment line 2""",
-                                     """ */"""
-                                    ]
+                                     """ */"""]
 
     def test_comment_blocks(self):
         """check that we don't process name value pairs in comment blocks"""
@@ -172,7 +170,7 @@ $foo='bar';
         phpfile = self.phpparse(phpsource)
         assert len(phpfile.units) == 1
         phpunit = phpfile.units[0]
-        assert phpunit.getoutput() == phpsource
+        assert str(phpunit) == phpsource
 
     def test_multiline(self):
         """check that we preserve newlines in a multiline message"""
@@ -198,6 +196,18 @@ $foo='bar';
             assert phpunit.name == "$lang->'item1'"
             assert phpunit.source == "value1"
 
+    def test_parsing_array_no_array_syntax(self):
+        """Parse an array assignment that doesn't use the array() syntax."""
+        phpsource = '''global $_LANGPDF;
+        $_LANGPDF = array();
+        $_LANGPDF['PDF065ab3a28ca4f16f55f103adc7d0226f'] = 'Delivery';
+        '''
+        phpfile = self.phpparse(phpsource)
+        assert len(phpfile.units) == 1
+        phpunit = phpfile.units[0]
+        assert phpunit.name == "$_LANGPDF['PDF065ab3a28ca4f16f55f103adc7d0226f']"
+        assert phpunit.source == "Delivery"
+
     def test_parsing_arrays_keys_with_spaces(self):
         """Ensure that our identifiers can have spaces. Bug #1683"""
         phpsource = '''$lang = array(
@@ -218,7 +228,7 @@ $foo='bar';
          'item 3' => 'value3',
       );'''
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[1]
         assert phpunit.name == "$lang->'item 3'"
@@ -229,7 +239,7 @@ $foo='bar';
         phpsource = """define("_FINISH", "Rematar");
 define('_POSTEDON', 'Enviado o');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define("_FINISH"'
@@ -243,7 +253,7 @@ define('_POSTEDON', 'Enviado o');"""
         phpsource = """define( "_FINISH", "Rematar");
 define( '_CM_POSTED', 'Enviado');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define( "_FINISH"'
@@ -257,7 +267,7 @@ define( '_CM_POSTED', 'Enviado');"""
         phpsource = """define("_RELOAD",       "Recargar");
 define('_CM_POSTED',    'Enviado');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define("_RELOAD"'
@@ -273,7 +283,7 @@ define('_CM_POSTED',    'Enviado');"""
         phpsource = """define( "_FINISH",       "Rematar");
 define(  '_UPGRADE_CHARSET',    'Upgrade charset');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define( "_FINISH"'
@@ -287,7 +297,7 @@ define(  '_UPGRADE_CHARSET',    'Upgrade charset');"""
         phpsource = """define("_POSTEDON","Enviado o");
 define('_UPGRADE_CHARSET','Upgrade charset');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define("_POSTEDON"'
@@ -296,7 +306,6 @@ define('_UPGRADE_CHARSET','Upgrade charset');"""
         assert phpunit.name == "define('_UPGRADE_CHARSET'"
         assert phpunit.source == "Upgrade charset"
 
-
     def test_parsing_define_no_spaces_after_equaldel_but_before_key(self):
         """Parse define syntax without spaces after the equal delimiter but
         with spaces before the key
@@ -304,7 +313,7 @@ define('_UPGRADE_CHARSET','Upgrade charset');"""
         phpsource = """define( "_FINISH","Rematar");
 define( '_CM_POSTED','Enviado');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define( "_FINISH"'
@@ -319,7 +328,7 @@ define( '_CM_POSTED','Enviado');"""
 define('_YOUR_USERNAME', 'O seu nome de usuario: "cookie"');
 define("_REGISTER", "Register <a href=\"register.php\">here</a>");"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 3
         phpunit = phpfile.units[0]
         assert phpunit.name == "define('_SETTINGS_COOKIEPREFIX'"
@@ -336,7 +345,7 @@ define("_REGISTER", "Register <a href=\"register.php\">here</a>");"""
         phpsource = """define("_POSTEDON", "Enviado o");// Keep this short
 define('_CM_POSTED', 'Enviado'); // Posted date"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define("_POSTEDON"'
@@ -356,7 +365,7 @@ define("_FINISH", "Rematar");
 // It appears besides posts
 define('_CM_POSTED', 'Enviado');"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 2
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define("_FINISH"'
@@ -366,8 +375,7 @@ define('_CM_POSTED', 'Enviado');"""
         assert phpunit.name == "define('_CM_POSTED'"
         assert phpunit.source == "Enviado"
         assert phpunit._comments == ["// This means it was published",
-                                     "// It appears besides posts"
-                                    ]
+                                     "// It appears besides posts"]
 
     def test_parsing_define_spaces_before_end_delimiter(self):
         """Parse define syntax with spaces before the end delimiter"""
@@ -375,7 +383,7 @@ define('_CM_POSTED', 'Enviado');"""
 define("_FINISH", "Rematar"     );
 define("_RELOAD", "Recargar");"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 3
         phpunit = phpfile.units[0]
         assert phpunit.name == 'define("_POSTEDON"'
@@ -394,7 +402,7 @@ define("_RELOAD", "Recargar");"""
 $month_feb = 'Feb'  ;
 $month_mar = 'Mar';"""
         phpfile = self.phpparse(phpsource)
-        print len(phpfile.units)
+        print(len(phpfile.units))
         assert len(phpfile.units) == 3
         phpunit = phpfile.units[0]
         assert phpunit.name == '$month_jan'
@@ -593,3 +601,17 @@ $month_mar = 'Mar';
         phpunit = phpfile.units[3]
         assert phpunit.name == '$month_mar'
         assert phpunit.source == "Mar"
+
+    def test_simpledefinition_after_define(self):
+        """Check that a simple definition after define is parsed correctly."""
+        phpsource = """define("_FINISH", "Rematar");
+$lang['mediaselect'] = 'Bestand selectie';"""
+        phpfile = self.phpparse(phpsource)
+        print(len(phpfile.units))
+        assert len(phpfile.units) == 2
+        phpunit = phpfile.units[0]
+        assert phpunit.name == 'define("_FINISH"'
+        assert phpunit.source == "Rematar"
+        phpunit = phpfile.units[1]
+        assert phpunit.name == "$lang['mediaselect']"
+        assert phpunit.source == "Bestand selectie"
diff --git a/translate/storage/test_po.py b/translate/storage/test_po.py
index f270e2e..d7dff09 100644
--- a/translate/storage/test_po.py
+++ b/translate/storage/test_po.py
@@ -1,13 +1,11 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from pytest import raises, mark
+from pytest import mark, raises
 
 from translate.misc import wStringIO
 from translate.misc.multistring import multistring
-from translate.storage import po
-from translate.storage import pypo
-from translate.storage import test_base
+from translate.storage import po, pypo, test_base
 
 
 def test_roundtrip_quoting():
@@ -19,7 +17,7 @@ def test_roundtrip_quoting():
     for special in specials:
         quoted_special = pypo.quoteforpo(special)
         unquoted_special = pypo.unquotefrompo(quoted_special)
-        print "special: %r\nquoted: %r\nunquoted: %r\n" % (special, quoted_special, unquoted_special)
+        print("special: %r\nquoted: %r\nunquoted: %r\n" % (special, quoted_special, unquoted_special))
         assert special == unquoted_special
 
 
@@ -69,7 +67,7 @@ class TestPOUnit(test_base.TestTranslationUnit):
 
     def test_adding_empty_note(self):
         unit = self.UnitClass("bla")
-        print str(unit)
+        print(str(unit))
         assert not '#' in str(unit)
         for empty_string in ["", " ", "\t", "\n"]:
             unit.addnote(empty_string)
@@ -89,7 +87,7 @@ class TestPOUnit(test_base.TestTranslationUnit):
 
         assert not unit.isreview()
         unit.markreviewneeded()
-        print unit.getnotes()
+        print(unit.getnotes())
         assert unit.isreview()
         unit.markreviewneeded(False)
         assert not unit.isreview()
@@ -119,7 +117,7 @@ class TestPOUnit(test_base.TestTranslationUnit):
         # plain text, no plural test
         unit = self.UnitClass("Tree")
         unit.target = "ki"
-        assert unit.hasplural() == False
+        assert not unit.hasplural()
 
         # plural test with multistring
         unit.setsource(["Tree", "Trees"])
@@ -131,14 +129,14 @@ class TestPOUnit(test_base.TestTranslationUnit):
         # test of msgid with no plural and msgstr with plural
         unit = self.UnitClass("Tree")
         assert raises(ValueError, unit.settarget, [u"ki", u"ni ki"])
-        assert unit.hasplural() == False
+        assert not unit.hasplural()
 
     def test_wrapping_bug(self):
         """This tests for a wrapping bug that existed at some stage."""
         unit = self.UnitClass("")
         message = 'Projeke ya Pootle ka boyona e ho <a href="http://translate.sourceforge.net/">translate.sourceforge.net</a> moo o ka fumanang dintlha ka source code, di mailing list jwalo jwalo.'
         unit.target = message
-        print unit.target
+        print(unit.target)
         assert unit.target == message
 
     def test_extract_msgidcomments_from_text(self):
@@ -192,7 +190,7 @@ class TestPOFile(test_base.TestTranslationStore):
         else:
             newunit = oldpofile.UnitClass()
         oldunit.merge(newunit, authoritative=authoritative)
-        print oldunit
+        print(oldunit)
         return str(oldunit)
 
     def poreflow(self, posource):
@@ -256,7 +254,7 @@ msgstr "TRANSLATED-STRING"'''
         """check that the po class can handle Unicode characters"""
         posource = 'msgid ""\nmsgstr ""\n"Content-Type: text/plain; charset=UTF-8\\n"\n\n#: test.c\nmsgid "test"\nmsgstr "rest\xe2\x80\xa6"\n'
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 2
 
     def test_plurals(self):
@@ -269,7 +267,7 @@ msgstr[1] "Koeie"
         assert len(pofile.units) == 1
         unit = pofile.units[0]
         assert isinstance(unit.target, multistring)
-        print unit.target.strings
+        print(unit.target.strings)
         assert unit.target == "Koei"
         assert unit.target.strings == ["Koei", "Koeie"]
 
@@ -281,7 +279,7 @@ msgstr[0] "Sheep"
         assert len(pofile.units) == 1
         unit = pofile.units[0]
         assert isinstance(unit.target, multistring)
-        print unit.target.strings
+        print(unit.target.strings)
         assert unit.target == "Sheep"
         assert unit.target.strings == ["Sheep"]
 
@@ -304,7 +302,7 @@ msgstr[1] "Kóeie"
         u = pofile.units[-1]
 
         locations = u.getlocations()
-        print locations
+        print(locations)
         assert len(locations) == 1
         assert locations[0] == u"programming/C/programming.xml:44(para)"
         assert isinstance(locations[0], unicode)
@@ -321,13 +319,13 @@ msgstr "Een\\n"
         pofile = self.poparse(posource)
         assert len(pofile.units) == 1
         unit = pofile.units[0]
-        assert unit.hasplural() == True
+        assert unit.hasplural()
         assert isinstance(unit.source, multistring)
-        print unit.source.strings
+        print(unit.source.strings)
         assert unit.source == "Singular"
         assert unit.source.strings == ["Singular", "Plural"]
         assert isinstance(unit.target, multistring)
-        print unit.target.strings
+        print(unit.target.strings)
         assert unit.target == "Een"
         assert unit.target.strings == ["Een", "Twee", "Drie"]
 
@@ -349,7 +347,7 @@ msgstr "POT-Creation-Date: 2006-03-08 17:30+0200\n"
         posource = '#, fuzzy\nmsgid "ball"\nmsgstr "bal"\n'
         expectednonfuzzy = 'msgid "ball"\nmsgstr "bal"\n'
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         assert pofile.units[0].isfuzzy()
         pofile.units[0].markfuzzy(False)
         assert not pofile.units[0].isfuzzy()
@@ -359,13 +357,13 @@ msgstr "POT-Creation-Date: 2006-03-08 17:30+0200\n"
         expectednonfuzzy = '#, python-format\nmsgid "ball"\nmsgstr "bal"\n'
         expectedfuzzyagain = '#, fuzzy, python-format\nmsgid "ball"\nmsgstr "bal"\n'  # must be sorted
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         assert pofile.units[0].isfuzzy()
         pofile.units[0].markfuzzy(False)
         assert not pofile.units[0].isfuzzy()
         assert str(pofile) == expectednonfuzzy
         pofile.units[0].markfuzzy()
-        print str(pofile)
+        print(str(pofile))
         assert str(pofile) == expectedfuzzyagain
 
         # test the same, but with flags in a different order
@@ -373,14 +371,14 @@ msgstr "POT-Creation-Date: 2006-03-08 17:30+0200\n"
         expectednonfuzzy = '#, python-format\nmsgid "ball"\nmsgstr "bal"\n'
         expectedfuzzyagain = '#, fuzzy, python-format\nmsgid "ball"\nmsgstr "bal"\n'  # must be sorted
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         assert pofile.units[0].isfuzzy()
         pofile.units[0].markfuzzy(False)
         assert not pofile.units[0].isfuzzy()
-        print str(pofile)
+        print(str(pofile))
         assert str(pofile) == expectednonfuzzy
         pofile.units[0].markfuzzy()
-        print str(pofile)
+        print(str(pofile))
         assert str(pofile) == expectedfuzzyagain
 
     @mark.xfail(reason="Check differing behaviours between pypo and cpo")
@@ -389,7 +387,7 @@ msgstr "POT-Creation-Date: 2006-03-08 17:30+0200\n"
         posource = '#. The automatic one\n#: test.c\nmsgid "test"\nmsgstr ""\n'
         pofile = self.poparse(posource)
         unit = pofile.units[0]
-        print str(pofile)
+        print(str(pofile))
         assert not unit.isobsolete()
         unit.makeobsolete()
         assert str(unit) == ""
@@ -407,7 +405,7 @@ msgstr "POT-Creation-Date: 2006-03-08 17:30+0200\n"
         posource = 'msgid "thing\nmsgstr "ding"\nmsgid "Second thing"\nmsgstr "Tweede ding"\n'
         pofile = self.poparse(posource)
         assert len(pofile.units) == 2
-        print repr(pofile.units[0].source)
+        print(repr(pofile.units[0].source))
         assert pofile.units[0].source == u"thing"
 
     def test_malformed_obsolete_units(self):
@@ -463,7 +461,7 @@ msgstr "een"
         unit = pofile.units[1]
         assert unit.isobsolete()
 
-        print str(pofile)
+        print(str(pofile))
         # Doesn't work with CPO if obsolete units are mixed with non-obsolete units
         assert str(pofile) == posource
         unit.resurrect()
@@ -493,20 +491,20 @@ msgstr "een"
         pofile = self.poparse(posource)
         assert len(pofile.units) == 3
         unit = pofile.units[2]
-        print str(unit)
+        print(str(unit))
         assert unit.isobsolete()
         assert unit.isfuzzy()
         assert not unit.istranslatable()
 
-        print posource
-        print str(pofile)
+        print(posource)
+        print(str(pofile))
         assert str(pofile) == posource
 
     def test_header_escapes(self):
         pofile = self.StoreClass()
         pofile.updateheader(add=True, **{"Report-Msgid-Bugs-To": r"http://qa.openoffice.org/issues/enter_bug.cgi?subcomponent=ui&comment=&short_desc=Localization%20issue%20in%20file%3A%20dbaccess\source\core\resource.oo&component=l10n&form_name=enter_issue"})
         filecontents = str(pofile)
-        print filecontents
+        print(filecontents)
         # We need to make sure that the \r didn't get misrepresented as a
         # carriage return, but as a slash (escaped) followed by a normal 'r'
         assert r'\source\core\resource' in pofile.header().target
@@ -517,12 +515,12 @@ msgstr "een"
         posource = '#. The automatic one\n#: test.c\nmsgid "test"\nmsgstr "rest"\n'
         poexpected = '#~ msgid "test"\n#~ msgstr "rest"\n'
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         unit = pofile.units[0]
         assert not unit.isobsolete()
         unit.makeobsolete()
         assert unit.isobsolete()
-        print pofile
+        print(pofile)
         assert str(unit) == poexpected
 
     def test_makeobsolete_plural(self):
@@ -538,12 +536,12 @@ msgstr[1] "Koeie"
 #~ msgstr[1] "Koeie"
 '''
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         unit = pofile.units[0]
         assert not unit.isobsolete()
         unit.makeobsolete()
         assert unit.isobsolete()
-        print pofile
+        print(pofile)
         assert str(unit) == poexpected
 
     def test_makeobsolete_msgctxt(self):
@@ -551,28 +549,28 @@ msgstr[1] "Koeie"
         posource = '#: test.c\nmsgctxt "Context"\nmsgid "test"\nmsgstr "rest"\n'
         poexpected = '#~ msgctxt "Context"\n#~ msgid "test"\n#~ msgstr "rest"\n'
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         unit = pofile.units[0]
         assert not unit.isobsolete()
         assert unit.istranslatable()
         unit.makeobsolete()
         assert unit.isobsolete()
         assert not unit.istranslatable()
-        print pofile
+        print(pofile)
         assert str(unit) == poexpected
 
     def test_makeobsolete_msgidcomments(self):
         """Tests making a unit with msgidcomments obsolete"""
         posource = '#: first.c\nmsgid ""\n"_: first.c\\n"\n"test"\nmsgstr "rest"\n\n#: second.c\nmsgid ""\n"_: second.c\\n"\n"test"\nmsgstr "rest"'
         poexpected = '#~ msgid ""\n#~ "_: first.c\\n"\n#~ "test"\n#~ msgstr "rest"\n'
-        print "Source:\n%s" % posource
-        print "Expected:\n%s" % poexpected
+        print("Source:\n%s" % posource)
+        print("Expected:\n%s" % poexpected)
         pofile = self.poparse(posource)
         unit = pofile.units[0]
         assert not unit.isobsolete()
         unit.makeobsolete()
         assert unit.isobsolete()
-        print "Result:\n%s" % pofile
+        print("Result:\n%s" % pofile)
         assert str(unit) == poexpected
 
     def test_multiline_obsolete(self):
@@ -583,8 +581,8 @@ msgstr[1] "Koeie"
         assert len(pofile.units) == 1
         unit = pofile.units[0]
         assert unit.isobsolete()
-        print str(pofile)
-        print posource
+        print(str(pofile))
+        print(posource)
         assert str(pofile) == posource
 
     def test_merge_duplicates(self):
@@ -595,7 +593,7 @@ msgstr[1] "Koeie"
         pofile.removeduplicates("merge")
         assert len(pofile.units) == 1
         assert pofile.units[0].getlocations() == ["source1", "source2"]
-        print pofile
+        print(pofile)
 
     def test_merge_mixed_sources(self):
         """checks that merging works with different source location styles"""
@@ -610,9 +608,9 @@ msgid "test"
 msgstr ""
 '''
         pofile = self.poparse(posource)
-        print str(pofile)
+        print(str(pofile))
         pofile.removeduplicates("merge")
-        print str(pofile)
+        print(str(pofile))
         assert len(pofile.units) == 1
         assert pofile.units[0].getlocations() == ["source1", "source2"]
 
@@ -823,10 +821,10 @@ msgid "I cannot locate the project\\"
 msgstr "プロジェクトが見つかりませんでした"
 '''
         pofile1 = self.poparse(posource)
-        print pofile1.units[1].source
+        print(pofile1.units[1].source)
         assert pofile1.units[1].source == u"I cannot locate the project\\"
         pofile2 = self.poparse(str(pofile1))
-        print str(pofile2)
+        print(str(pofile2))
         assert str(pofile1) == str(pofile2)
 
     def test_unfinished_lines(self):
@@ -842,11 +840,11 @@ msgstr "start thing dingis fish"
 "
 '''
         pofile1 = self.poparse(posource)
-        print repr(pofile1.units[1].target)
+        print(repr(pofile1.units[1].target))
         assert pofile1.units[1].target == u"start thing dingis fish"
         pofile2 = self.poparse(str(pofile1))
         assert pofile2.units[1].target == u"start thing dingis fish"
-        print str(pofile2)
+        print(str(pofile2))
         assert str(pofile1) == str(pofile2)
 
     def test_encoding_change(self):
@@ -884,7 +882,7 @@ msgstr[0] ""
 '''
         pofile = self.poparse(posource)
         unit = pofile.units[1]
-        print str(unit)
+        print(str(unit))
         assert "msgid_plural" in str(unit)
         assert not unit.istranslated()
         assert unit.get_state_n() == 0
@@ -914,7 +912,7 @@ msgstr ""
 msgid "bla\t12345 12345 12345 12345 12345 12 12345 12345 12345 12345 12345 12345 123"
 msgstr "bla\t12345 12345 12345 12345 12345 15 12345 12345 12345 12345 12345 12345 123"
 '''
-        posource_wanted =r'''#: 7
+        posource_wanted = r'''#: 7
 msgid ""
 "bla\t12345 12345 12345 12345 12345 12 12345 12345 12345 12345 12345 12345 123"
 msgstr ""
diff --git a/translate/storage/test_poheader.py b/translate/storage/test_poheader.py
index 9621fad..8f6a0b9 100644
--- a/translate/storage/test_poheader.py
+++ b/translate/storage/test_poheader.py
@@ -4,12 +4,15 @@
 import os
 import time
 
-from translate.storage import po
-from translate.storage import poxliff
-from translate.storage import poheader
-from translate.misc.dictutils import ordereddict
-from translate.misc import wStringIO
+try:
+    from collections import OrderedDict
+except ImportError:
+    # Python <= 2.6 fallback
+    from translate.misc.dictutils import ordereddict as OrderedDict
+
 from translate.lang.team import guess_language
+from translate.misc import wStringIO
+from translate.storage import po, poheader, poxliff
 
 
 def test_parseheaderstring():
@@ -20,8 +23,7 @@ this item must get ignored because there is no colon sign in it
 item3: three
 '''
     d = poheader.parseheaderstring(source)
-    print type(d)
-    assert type(d) == ordereddict
+    print(type(d))
     assert len(d) == 3
     assert d['item1'] == 'one'
     assert d['item2'] == 'two:two'
@@ -45,7 +47,7 @@ def test_update():
     d = poheader.update({}, add=True, test_me='hello')
     assert d['Test-Me'] == 'hello'
     # is the order correct ?
-    d = ordereddict()
+    d = OrderedDict()
     d['Project-Id-Version'] = 'abc'
     d['POT-Creation-Date'] = 'now'
     d = poheader.update(d, add=True, Test='hello', Report_Msgid_Bugs_To='bugs at list.org')
@@ -144,7 +146,7 @@ def test_timezones():
 def test_header_blank():
 
     def compare(pofile):
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 1
         header = pofile.header()
         assert header.isheader()
@@ -220,7 +222,7 @@ msgstr ""
 '''
     for colon in ("", ";"):
         pofile = poparse(posource % colon)
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 1
         header = pofile.units[0]
         assert header.isheader()
@@ -241,7 +243,7 @@ msgstr ""
 "10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\n"
 '''
     pofile = poparse(posource)
-    print pofile
+    print(pofile)
     assert len(pofile.units) == 1
     header = pofile.units[0]
     assert header.isheader()
@@ -269,7 +271,7 @@ msgstr ""
 
     pofile.header().addnote("Khaled Hosny <khaledhosny at domain.org>, 2006, 2007, 2008.")
     pofile.updatecontributor("Khaled Hosny", "khaledhosny at domain.org")
-    print str(pofile)
+    print(str(pofile))
     assert "# Khaled Hosny <khaledhosny at domain.org>, 2006, 2007, 2008, %s." % time.strftime("%Y") in str(pofile)
 
 
@@ -281,7 +283,7 @@ msgstr ""
 '''
 
     pofile = poparse(posource)
-    assert pofile.gettargetlanguage() == None
+    assert pofile.gettargetlanguage() is None
 
     posource += '"Language-Team: translate-discuss-af at lists.sourceforge.net\\n"\n'
     pofile = poparse(posource)
@@ -304,7 +306,7 @@ msgstr ""
 '''
 
     pofile = poparse(posource)
-    assert pofile.getprojectstyle() == None
+    assert pofile.getprojectstyle() is None
 
     posource += '"X-Accelerator-Marker: ~\\n"\n'
     pofile = poparse(posource)
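(Aside, not part of the patch: the import hunk at the top of test_poheader.py swaps the project-local `translate.misc.dictutils.ordereddict` for the stdlib `collections.OrderedDict`, with a guarded fallback for Python <= 2.6. The same try/except import pattern, sketched standalone; the `dict` fallback here is a placeholder standing in for the project-local class.)

```python
try:
    # Python >= 2.7 / 3.1 ships an insertion-ordered dict in the stdlib.
    from collections import OrderedDict
except ImportError:
    # Placeholder fallback for this sketch; the real commit falls back
    # to translate.misc.dictutils.ordereddict on old interpreters.
    OrderedDict = dict

# Header fields must keep their insertion order, as test_update checks.
d = OrderedDict()
d['Project-Id-Version'] = 'abc'
d['POT-Creation-Date'] = 'now'
ordered_keys = list(d)
```

Keeping a single `OrderedDict` name at import time means the rest of the module never needs to know which implementation it got.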
diff --git a/translate/storage/test_poxliff.py b/translate/storage/test_poxliff.py
index 4e2992e..d39f080 100644
--- a/translate/storage/test_poxliff.py
+++ b/translate/storage/test_poxliff.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 
 from translate.misc.multistring import multistring
-from translate.storage import poxliff
-from translate.storage import test_xliff
+from translate.storage import poxliff, test_xliff
 
 
 class TestPOXLIFFUnit(test_xliff.TestXLIFFUnit):
@@ -11,8 +10,8 @@ class TestPOXLIFFUnit(test_xliff.TestXLIFFUnit):
     def test_plurals(self):
         """Tests that plurals are handled correctly."""
         unit = self.UnitClass(multistring(["Cow", "Cows"]))
-        print type(unit.source)
-        print repr(unit.source)
+        print(type(unit.source))
+        print(repr(unit.source))
         assert isinstance(unit.source, multistring)
         assert unit.source.strings == ["Cow", "Cows"]
         assert unit.source == "Cow"
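(Aside, not part of the patch: the plural test above leans on a property of `translate.misc.multistring` — it compares equal to its first string while still carrying all plural forms. A toy re-implementation of that behaviour, illustrative only and much simpler than the real class.)

```python
class ToyMultiString(str):
    """Toy stand-in for translate.misc.multistring: a str subclass
    whose value is the first form, with all forms kept on .strings."""

    def __new__(cls, strings):
        # The instance *is* the first string, so plain comparisons
        # like unit.source == "Cow" keep working.
        obj = super(ToyMultiString, cls).__new__(cls, strings[0])
        obj.strings = list(strings)
        return obj


m = ToyMultiString(["Cow", "Cows"])
```

This is why the assertions above can check both `unit.source == "Cow"` and `unit.source.strings == ["Cow", "Cows"]` against the same object.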
diff --git a/translate/storage/test_properties.py b/translate/storage/test_properties.py
index bd61496..c70a3ab 100644
--- a/translate/storage/test_properties.py
+++ b/translate/storage/test_properties.py
@@ -1,11 +1,10 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from pytest import deprecated_call
+from pytest import deprecated_call, raises
 
 from translate.misc import wStringIO
-from translate.storage import properties
-from translate.storage import test_monolingual
+from translate.storage import properties, test_monolingual
 
 
 def test_find_delimiter_pos_simple():
@@ -55,12 +54,12 @@ def test_find_delimiter_deprecated_fn():
 
 
 def test_is_line_continuation():
-    assert properties.is_line_continuation(u"") == False
-    assert properties.is_line_continuation(u"some text") == False
-    assert properties.is_line_continuation(u"""some text\\""") == True
-    assert properties.is_line_continuation(u"""some text\\\\""") == False  # Escaped \
-    assert properties.is_line_continuation(u"""some text\\\\\\""") == True  # Odd num. \ is line continuation
-    assert properties.is_line_continuation(u"""\\\\\\""") == True
+    assert not properties.is_line_continuation(u"")
+    assert not properties.is_line_continuation(u"some text")
+    assert properties.is_line_continuation(u"""some text\\""")
+    assert not properties.is_line_continuation(u"""some text\\\\""")  # Escaped \
+    assert properties.is_line_continuation(u"""some text\\\\\\""")  # Odd num. \ is line continuation
+    assert properties.is_line_continuation(u"""\\\\\\""")
 
 
 def test_key_strip():
@@ -150,15 +149,16 @@ class TestProp(test_monolingual.TestMonolingualStore):
 
     def test_whitespace_handling(self):
         """check that we remove extra whitespace around property"""
-        whitespaces = (('key = value', 'key', 'value'),      # Standard for baseline
-                       (' key =  value', 'key', 'value'),    # Extra \s before key and value
-                       ('\ key\ = value', '\ key\ ', 'value'),  # extra space at start and end of key
-                       ('key = \ value ', 'key', ' value '),  # extra space at start end end of value
-                      )
+        whitespaces = (
+            ('key = value', 'key', 'value'),      # Standard for baseline
+            (' key =  value', 'key', 'value'),    # Extra \s before key and value
+            ('\ key\ = value', '\ key\ ', 'value'),  # extra space at start and end of key
+            ('key = \ value ', 'key', ' value '),  # extra space at start end end of value
+        )
         for propsource, key, value in whitespaces:
             propfile = self.propparse(propsource)
             propunit = propfile.units[0]
-            print repr(propsource), repr(propunit.name), repr(propunit.source)
+            print(repr(propsource), repr(propunit.name), repr(propunit.source))
             assert propunit.name == key
             assert propunit.source == value
             # let's reparse the output to ensure good serialisation->parsing roundtrip:
@@ -173,7 +173,7 @@ class TestProp(test_monolingual.TestMonolingualStore):
         delimiters = [":", "=", " "]
         for delimiter in delimiters:
             propsource = "key%svalue" % delimiter
-            print "source: '%s'\ndelimiter: '%s'" % (propsource, delimiter)
+            print("source: '%s'\ndelimiter: '%s'" % (propsource, delimiter))
             propfile = self.propparse(propsource)
             assert len(propfile.units) == 1
             propunit = propfile.units[0]
@@ -188,8 +188,8 @@ class TestProp(test_monolingual.TestMonolingualStore):
 key=value
 ''' % comment_marker
             propfile = self.propparse(propsource)
-            print repr(propsource)
-            print "Comment marker: '%s'" % comment_marker
+            print(repr(propsource))
+            print("Comment marker: '%s'" % comment_marker)
             assert len(propfile.units) == 1
             propunit = propfile.units[0]
             assert propunit.comments == ['%s A comment' % comment_marker]
@@ -208,7 +208,7 @@ key=value
         for propsource in proplist:
             propfile = self.propparse(propsource)
             propunit = propfile.units[0]
-            print propunit
+            print(propunit)
             assert propunit.name == "Truth"
             assert propunit.source == "Beauty"
 
@@ -218,7 +218,7 @@ key=value
         prop_store = self.propparse(prop_source)
         assert len(prop_store.units) == 1
         unit = prop_store.units[0]
-        print unit
+        print(unit)
         assert unit.name == u"\:\="
 
     def test_fullspec_line_continuation(self):
@@ -228,10 +228,10 @@ key=value
                                   kiwi, mango
 """
         prop_store = self.propparse(prop_source)
-        print prop_store
+        print(prop_store)
         assert len(prop_store.units) == 1
         unit = prop_store.units[0]
-        print unit
+        print(unit)
         assert properties._find_delimiter(prop_source, [u"=", u":", u" "]) == (' ', 6)
         assert unit.name == u"fruits"
         assert unit.source == u"apple, banana, pear, cantaloupe, watermelon, kiwi, mango"
@@ -242,7 +242,7 @@ key=value
         prop_store = self.propparse(prop_source)
         assert len(prop_store.units) == 1
         unit = prop_store.units[0]
-        print unit
+        print(unit)
         assert unit.name == u"cheeses"
         assert unit.source == u""
 
@@ -264,6 +264,15 @@ key=value
         assert propunit.name == ur'I am a “key”'
         assert propfile.personality.encode(propunit.source) == u'I am a “value”'
 
+    def test_mac_strings_utf8(self):
+        """Ensure we can handle Unicode"""
+        propsource = ur'''"I am a “key”" = "I am a “value”";'''.encode('utf-8')
+        propfile = self.propparse(propsource, personality="strings-utf8")
+        assert len(propfile.units) == 1
+        propunit = propfile.units[0]
+        assert propunit.name == ur'I am a “key”'
+        assert propfile.personality.encode(propunit.source) == u'I am a “value”'
+
     def test_mac_strings_newlines(self):
         """test newlines \n within a strings files"""
         propsource = ur'''"key" = "value\nvalue";'''.encode('utf-16')
@@ -358,3 +367,9 @@ key=value
         bom = propsource[:2]
         assert result.startswith(bom)
         assert bom not in result[2:]
+
+    def test_raise_ioerror_if_cannot_detect_encoding(self):
+        """Test that IOError is thrown if file encoding cannot be detected."""
+        propsource = u"key = ąćęłńóśźż".encode("cp1250")
+        with raises(IOError):
+            self.propparse(propsource, personality="strings")
diff --git a/translate/storage/test_pypo.py b/translate/storage/test_pypo.py
index 20eef47..150b95a 100644
--- a/translate/storage/test_pypo.py
+++ b/translate/storage/test_pypo.py
@@ -5,8 +5,7 @@ from pytest import raises
 
 from translate.misc import wStringIO
 from translate.misc.multistring import multistring
-from translate.storage import pypo
-from translate.storage import test_po
+from translate.storage import pypo, test_po
 
 
 class TestHelpers():
@@ -57,8 +56,8 @@ class TestHelpers():
     def test_quoteforpo_escaped_quotes(self):
         """Ensure that we don't break \" in two when wrapping
 
-	See :bug:`3140`
-	"""
+        See :issue:`3140`
+        """
         assert pypo.quoteforpo('''You can get a copy of your Recovery Key by going to &syncBrand.shortName.label; Options on your other device, and selecting  "My Recovery Key" under "Manage Account".''') == [u'""', u'"You can get a copy of your Recovery Key by going to "', u'"&syncBrand.shortName.label; Options on your other device, and selecting  \\""', u'"My Recovery Key\\" under \\"Manage Account\\"."']
 
 
@@ -136,13 +135,13 @@ class TestPYPOUnit(test_po.TestPOUnit):
         str_max = "123456789 123456789 123456789 123456789 123456789 123456789 123456789 1"
         unit = self.UnitClass(str_max)
         expected = 'msgid "%s"\nmsgstr ""\n' % str_max
-        print expected, str(unit)
+        print(expected, str(unit))
         assert str(unit) == expected
         # at this length we wrap
         str_wrap = str_max + '2'
         unit = self.UnitClass(str_wrap)
         expected = 'msgid ""\n"%s"\nmsgstr ""\n' % str_wrap
-        print expected, str(unit)
+        print(expected, str(unit))
         assert str(unit) == expected
 
     def test_wrap_on_newlines(self):
@@ -151,7 +150,7 @@ class TestPYPOUnit(test_po.TestPOUnit):
         postring = ('"123456789\\n"\n' * 3)[:-1]
         unit = self.UnitClass(string)
         expected = 'msgid ""\n%s\nmsgstr ""\n' % postring
-        print expected, str(unit)
+        print(expected, str(unit))
         assert str(unit) == expected
 
         # Now check for long newlines segments
@@ -166,7 +165,7 @@ class TestPYPOUnit(test_po.TestPOUnit):
 msgstr ""
 '''
         unit = self.UnitClass(longstring)
-        print expected, str(unit)
+        print(expected, str(unit))
         assert str(unit) == expected
 
     def test_wrap_on_max_line_length(self):
@@ -174,10 +173,10 @@ msgstr ""
         string = "1 3 5 7 N " * 11
         expected = 'msgid ""\n%s\nmsgstr ""\n' % '"1 3 5 7 N 1 3 5 7 N 1 3 5 7 N 1 3 5 7 N 1 3 5 7 N 1 3 5 7 N 1 3 5 7 N 1 3 5 "\n"7 N 1 3 5 7 N 1 3 5 7 N 1 3 5 7 N "'
         unit = self.UnitClass(string)
-        print "Expected:"
-        print expected
-        print "Actual:"
-        print str(unit)
+        print("Expected:")
+        print(expected)
+        print("Actual:")
+        print(str(unit))
         assert str(unit) == expected
 
     def test_spacing_max_line(self):
@@ -190,10 +189,10 @@ msgstr ""
 msgstr ""
 '''
         unit = self.UnitClass(idstring)
-        print "Expected:"
-        print expected
-        print "Actual:"
-        print str(unit)
+        print("Expected:")
+        print(expected)
+        print("Actual:")
+        print(str(unit))
         assert str(unit) == expected
 
 
@@ -216,7 +215,7 @@ class TestPYPOFile(test_po.TestPOFile):
         pofile = self.poparse(posource)
         assert len(pofile.units) == 2
         pofile.removeduplicates("msgctxt")
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 2
         assert str(pofile.units[0]).count("source1") == 2
         assert str(pofile.units[1]).count("source2") == 2
@@ -228,8 +227,8 @@ class TestPYPOFile(test_po.TestPOFile):
         assert len(pofile.units) == 2
         pofile.removeduplicates("merge")
         assert len(pofile.units) == 2
-        print pofile.units[0].msgidcomments
-        print pofile.units[1].msgidcomments
+        print(pofile.units[0].msgidcomments)
+        print(pofile.units[1].msgidcomments)
         assert pypo.unquotefrompo(pofile.units[0].msgidcomments) == "_: source1\n"
         assert pypo.unquotefrompo(pofile.units[1].msgidcomments) == "_: source2\n"
 
@@ -238,7 +237,7 @@ class TestPYPOFile(test_po.TestPOFile):
         posource = u'''#: nb\nmsgid "Norwegian Bokm\xe5l"\nmsgstr ""\n'''
         pofile = self.StoreClass(wStringIO.StringIO(posource.encode("UTF-8")), encoding="UTF-8")
         assert len(pofile.units) == 1
-        print str(pofile)
+        print(str(pofile))
         thepo = pofile.units[0]
         assert str(thepo) == posource.encode("UTF-8")
         # extra test: what if we set the msgid to a unicode? this happens in prop2po etc
@@ -256,7 +255,7 @@ class TestPYPOFile(test_po.TestPOFile):
         """checks the content of all the expected sections of a PO message"""
         posource = '# other comment\n#. automatic comment\n#: source comment\n#, fuzzy\nmsgid "One"\nmsgstr "Een"\n'
         pofile = self.poparse(posource)
-        print pofile
+        print(pofile)
         assert len(pofile.units) == 1
         assert str(pofile) == posource
         assert pofile.units[0].othercomments == ["# other comment\n"]
@@ -268,7 +267,7 @@ class TestPYPOFile(test_po.TestPOFile):
         """tests behaviour of unassociated comments."""
         oldsource = '# old lonesome comment\n\nmsgid "one"\nmsgstr "een"\n'
         oldfile = self.poparse(oldsource)
-        print str(oldfile)
+        print(str(oldfile))
         assert len(oldfile.units) == 1
 
     def test_prevmsgid_parse(self):
diff --git a/translate/storage/test_qm.py b/translate/storage/test_qm.py
index 91b78fe..a4aa839 100644
--- a/translate/storage/test_qm.py
+++ b/translate/storage/test_qm.py
@@ -3,8 +3,7 @@
 
 import pytest
 
-from translate.storage import test_base
-from translate.storage import qm
+from translate.storage import qm, test_base
 
 
 class TestQtUnit(test_base.TestTranslationUnit):
diff --git a/translate/storage/test_qph.py b/translate/storage/test_qph.py
index 868aa03..2050e67 100644
--- a/translate/storage/test_qph.py
+++ b/translate/storage/test_qph.py
@@ -20,10 +20,8 @@
 
 """Tests for Qt Linguist phase book storage class"""
 
-from translate.storage import qph
-from translate.storage import test_base
-from translate.storage.placeables import parse
-from translate.storage.placeables import xliff
+from translate.storage import qph, test_base
+from translate.storage.placeables import parse, xliff
 
 
 xliffparsers = []
@@ -53,7 +51,7 @@ class TestQphFile(test_base.TestTranslationStore):
         qphfile.addsourceunit("Bla")
         assert len(qphfile.units) == 1
         newfile = qph.QphFile.parsestring(str(qphfile))
-        print str(qphfile)
+        print(str(qphfile))
         assert len(newfile.units) == 1
         assert newfile.units[0].source == "Bla"
         assert newfile.findunit("Bla").source == "Bla"
@@ -64,7 +62,7 @@ class TestQphFile(test_base.TestTranslationStore):
         qphunit = qphfile.addsourceunit("Concept")
         qphunit.source = "Term"
         newfile = qph.QphFile.parsestring(str(qphfile))
-        print str(qphfile)
+        print(str(qphfile))
         assert newfile.findunit("Concept") is None
         assert newfile.findunit("Term") is not None
 
@@ -73,7 +71,7 @@ class TestQphFile(test_base.TestTranslationStore):
         qphunit = qphfile.addsourceunit("Concept")
         qphunit.target = "Konsep"
         newfile = qph.QphFile.parsestring(str(qphfile))
-        print str(qphfile)
+        print(str(qphfile))
         assert newfile.findunit("Concept").target == "Konsep"
 
     def test_language(self):
diff --git a/translate/storage/test_rc.py b/translate/storage/test_rc.py
index 0810181..8b779c5 100644
--- a/translate/storage/test_rc.py
+++ b/translate/storage/test_rc.py
@@ -1,4 +1,5 @@
-from translate.storage import rc
+from translate.misc import wStringIO
+from translate.storage import rc, test_monolingual
 
 
 def test_escaping():
@@ -10,3 +11,226 @@ second line''') == "First line second line"
     assert rc.escape_to_python("A backslash \\\\ in a string") == "A backslash \\ in a string"
     assert rc.escape_to_python(r'''First line " \
  "second line''') == "First line second line"
+
+
+class TestRcFile(object):
+    StoreClass = rc.rcfile
+
+    def source_parse(self, source):
+        """Helper that parses source without requiring files."""
+        dummy_file = wStringIO.StringIO(source)
+        parsed_file = self.StoreClass(dummy_file)
+        return parsed_file
+
+    def source_regenerate(self, source):
+        """Helper that converts source to store object and back."""
+        return str(self.source_parse(source))
+
+    def test_parse_only_comments(self):
+        """Test parsing a RC string with only comments."""
+        rc_source = """
+/*
+ * Mini test file.
+ * Multiline comments.
+ */
+
+// Test file, one line comment. //
+
+#include "other_file.h" // This must be ignored
+
+LANGUAGE LANG_ENGLISH, SUBLANG_DEFAULT
+
+/////////////////////////////////////////////////////////////////////////////
+//
+// Icon
+//
+
+// Icon with lowest ID value placed first to ensure application icon
+// remains consistent on all systems.
+IDR_MAINFRAME           ICON                    "res\\ico00007.ico"
+IDR_MAINFRAME1          ICON                    "res\\idr_main.ico"
+IDR_MAINFRAME2          ICON                    "res\\ico00006.ico"
+
+
+/////////////////////////////////////////////////////////////////////////////
+//
+// Commented STRINGTABLE must be ignored
+//
+
+/*
+STRINGTABLE
+BEGIN
+    IDP_REGISTRONOV         "Data isn't valid"
+    IDS_ACTIVARINSTALACION  "You need to try again and again."
+    IDS_NOREGISTRADO        "Error when making something important"
+    IDS_REGISTRADO          "All done correctly.\nThank you very much."
+    IDS_ACTIVADA            "This is what you do:\n%s"
+    IDS_ERRORACTIV          "Error doing things"
+END
+*/
+
+#ifndef APSTUDIO_INVOKED
+/////////////////////////////////////////////////////////////////////////////
+//
+// Generated from the TEXTINCLUDE 3 resource.
+//
+#define _AFX_NO_SPLITTER_RESOURCES
+#define _AFX_NO_OLE_RESOURCES
+#define _AFX_NO_TRACKER_RESOURCES
+#define _AFX_NO_PROPERTY_RESOURCES
+
+#if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_ESN)
+// This will change the default language
+LANGUAGE 10, 3
+#pragma code_page(1252)
+#include "res\regGHC.rc2"  // Recursos editados que no son de Microsoft Visual C++
+#include "afxres.rc"         // Standar components
+#endif
+
+/////////////////////////////////////////////////////////////////////////////
+#endif    // not APSTUDIO_INVOKED
+"""
+        rc_file = self.source_parse(rc_source)
+        assert len(rc_file.units) == 0
+
+    def test_parse_only_textinclude(self):
+        """Test parsing a RC string with TEXTINCLUDE blocks and comments."""
+        rc_source = """
+#include "other_file.h" // This must be ignored
+
+LANGUAGE LANG_ENGLISH, SUBLANG_DEFAULT
+
+#ifdef APSTUDIO_INVOKED
+/////////////////////////////////////////////////////////////////////////////
+//
+// TEXTINCLUDE
+//
+
+1 TEXTINCLUDE
+BEGIN
+    "resource.h\0"
+END
+
+2 TEXTINCLUDE
+BEGIN
+    "#include ""afxres.h""\r\n"
+    "\0"
+END
+
+3 TEXTINCLUDE
+BEGIN
+    "LANGUAGE 10, 3\r\n"  // This language must be ignored, is a string.
+    "And this strings don't need to be translated!"
+END
+
+#endif    // APSTUDIO_INVOKED
+"""
+        rc_file = self.source_parse(rc_source)
+        assert len(rc_file.units) == 0
+
+    def test_parse_dialog(self):
+        """Test parsing a RC string with a DIALOG block."""
+        rc_source = """
+#include "other_file.h" // This must be ignored
+
+LANGUAGE LANG_ENGLISH, SUBLANG_DEFAULT
+
+/////////////////////////////////////////////////////////////////////////////
+//
+// Dialog
+//
+
+IDD_REGGHC_DIALOG DIALOGEX 0, 0, 211, 191
+STYLE DS_SETFONT | DS_MODALFRAME | DS_FIXEDSYS | WS_POPUP | WS_VISIBLE | WS_CAPTION | WS_SYSMENU
+EXSTYLE WS_EX_APPWINDOW
+CAPTION "License dialog"
+FONT 8, "MS Shell Dlg", 0, 0, 0x1
+BEGIN
+    PUSHBUTTON      "Help",ID_HELP,99,162,48,15
+    PUSHBUTTON      "Close",IDCANCEL,151,162,48,15
+    PUSHBUTTON      "Activate instalation",IDC_BUTTON1,74,76,76,18
+    CTEXT           "My very good program",IDC_STATIC1,56,21,109,19,SS_SUNKEN
+    CTEXT           "You can use it without registering it",IDC_STATIC,35,131,128,19,SS_SUNKEN
+    PUSHBUTTON      "Offline",IDC_OFFLINE,149,108,42,13
+    PUSHBUTTON      "See license",IDC_LICENCIA,10,162,85,15
+    RTEXT           "If you don't have internet, please use magic.",IDC_STATIC,23,105,120,18
+    ICON            IDR_MAINFRAME,IDC_STATIC,44,74,20,20
+    CTEXT           "Use your finger to activate the program.",IDC_ACTIVADA,17,50,175,17
+    ICON            IDR_MAINFRAME1,IDC_STATIC6,18,19,20,20
+END
+"""
+        rc_file = self.source_parse(rc_source)
+        assert len(rc_file.units) == 10
+        rc_unit = rc_file.units[0]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.CAPTION"
+        assert rc_unit.source == "License dialog"
+        rc_unit = rc_file.units[1]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.PUSHBUTTON.ID_HELP"
+        assert rc_unit.source == "Help"
+        rc_unit = rc_file.units[2]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.PUSHBUTTON.IDCANCEL"
+        assert rc_unit.source == "Close"
+        rc_unit = rc_file.units[3]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.PUSHBUTTON.IDC_BUTTON1"
+        assert rc_unit.source == "Activate instalation"
+        rc_unit = rc_file.units[4]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.CTEXT.IDC_STATIC1"
+        assert rc_unit.source == "My very good program"
+        rc_unit = rc_file.units[5]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.CTEXT.IDC_STATIC"
+        assert rc_unit.source == "You can use it without registering it"
+        rc_unit = rc_file.units[6]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.PUSHBUTTON.IDC_OFFLINE"
+        assert rc_unit.source == "Offline"
+        rc_unit = rc_file.units[7]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.PUSHBUTTON.IDC_LICENCIA"
+        assert rc_unit.source == "See license"
+        rc_unit = rc_file.units[8]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.RTEXT.IDC_STATIC"
+        assert rc_unit.source == "If you don't have internet, please use magic."
+        rc_unit = rc_file.units[9]
+        assert rc_unit.name == "DIALOGEX.IDD_REGGHC_DIALOG.CTEXT.IDC_ACTIVADA"
+        assert rc_unit.source == "Use your finger to activate the program."
+
+    def test_parse_stringtable(self):
+        """Test parsing a RC string with a STRINGTABLE block."""
+        rc_source = """
+#include "other_file.h" // This must be ignored
+
+LANGUAGE LANG_ENGLISH, SUBLANG_DEFAULT
+
+/////////////////////////////////////////////////////////////////////////////
+//
+// String Table
+//
+
+STRINGTABLE
+BEGIN
+    IDP_REGISTRONOV         "Data isn't valid"
+    IDS_ACTIVARINSTALACION  "You need to try again and again."
+    IDS_NOREGISTRADO        "Error when making something important"
+    IDS_REGISTRADO          "All done correctly.\nThank you very much."
+    IDS_ACTIVADA            "This is what you do:\n%s"
+    IDS_ERRORACTIV          "Error doing things"
+END
+"""
+        rc_file = self.source_parse(rc_source)
+        assert len(rc_file.units) == 6
+        rc_unit = rc_file.units[0]
+        assert rc_unit.name == "STRINGTABLE.IDP_REGISTRONOV"
+        assert rc_unit.source == "Data isn't valid"
+        rc_unit = rc_file.units[1]
+        assert rc_unit.name == "STRINGTABLE.IDS_ACTIVARINSTALACION"
+        assert rc_unit.source == "You need to try again and again."
+        rc_unit = rc_file.units[2]
+        assert rc_unit.name == "STRINGTABLE.IDS_NOREGISTRADO"
+        assert rc_unit.source == "Error when making something important"
+        rc_unit = rc_file.units[3]
+        assert rc_unit.name == "STRINGTABLE.IDS_REGISTRADO"
+        assert rc_unit.source == "All done correctly.\nThank you very much."
+        rc_unit = rc_file.units[4]
+        assert rc_unit.name == "STRINGTABLE.IDS_ACTIVADA"
+        assert rc_unit.source == "This is what you do:\n%s"
+        rc_unit = rc_file.units[5]
+        assert rc_unit.name == "STRINGTABLE.IDS_ERRORACTIV"
+        assert rc_unit.source == "Error doing things"
diff --git a/translate/storage/test_statsdb.py b/translate/storage/test_statsdb.py
index 4e4992b..61b294c 100644
--- a/translate/storage/test_statsdb.py
+++ b/translate/storage/test_statsdb.py
@@ -2,14 +2,10 @@
 
 import os
 import os.path
-import warnings
 
-import pytest
-
-from translate import storage
-from translate.storage import statsdb, factory
-from translate.misc import wStringIO
 from translate.filters import checks
+from translate.storage import factory, statsdb
+
 
 fr_terminology_extract = r"""
 msgid ""
@@ -81,7 +77,7 @@ def rm_rf(path):
     for dirpath, _, filenames in os.walk(path):
         for filename in filenames:
             os.remove(os.path.join(dirpath, filename))
-    os.removedirs(path)
+    os.rmdir(dirpath)
 
 
 class TestStatsDb:
@@ -140,12 +136,12 @@ class TestStatsDb:
     def test_if_cached_after_filestats(self):
         f, cache = self.setup_file_and_db(jtoolkit_extract)
         cache.filestats(f.filename, checks.UnitChecker())
-        assert self.make_file_and_return_id(cache, f.filename) != None
+        assert self.make_file_and_return_id(cache, f.filename) is not None
 
     def test_if_cached_after_unitstats(self):
         f, cache = self.setup_file_and_db(jtoolkit_extract)
         cache.unitstats(f.filename, checks.UnitChecker())
-        assert self.make_file_and_return_id(cache, f.filename) != None
+        assert self.make_file_and_return_id(cache, f.filename) is not None
 
     def test_singletonness(self):
         f1, cache1 = self.setup_file_and_db(jtoolkit_extract)
diff --git a/translate/storage/test_tbx.py b/translate/storage/test_tbx.py
index 5496a3a..3b62256 100644
--- a/translate/storage/test_tbx.py
+++ b/translate/storage/test_tbx.py
@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 
-from translate.storage import tbx
-from translate.storage import test_base
+from translate.storage import tbx, test_base
 
 
 class TestTBXUnit(test_base.TestTranslationUnit):
@@ -17,7 +16,7 @@ class TestTBXfile(test_base.TestTranslationStore):
         tbxfile.addsourceunit("Bla")
         assert len(tbxfile.units) == 1
         newfile = tbx.tbxfile.parsestring(str(tbxfile))
-        print str(tbxfile)
+        print(str(tbxfile))
         assert len(newfile.units) == 1
         assert newfile.units[0].source == "Bla"
         assert newfile.findunit("Bla").source == "Bla"
@@ -28,7 +27,7 @@ class TestTBXfile(test_base.TestTranslationStore):
         tbxunit = tbxfile.addsourceunit("Concept")
         tbxunit.source = "Term"
         newfile = tbx.tbxfile.parsestring(str(tbxfile))
-        print str(tbxfile)
+        print(str(tbxfile))
         assert newfile.findunit("Concept") is None
         assert newfile.findunit("Term") is not None
 
@@ -37,5 +36,5 @@ class TestTBXfile(test_base.TestTranslationStore):
         tbxunit = tbxfile.addsourceunit("Concept")
         tbxunit.target = "Konsep"
         newfile = tbx.tbxfile.parsestring(str(tbxfile))
-        print str(tbxfile)
+        print(str(tbxfile))
         assert newfile.findunit("Concept").target == "Konsep"
diff --git a/translate/storage/test_tmx.py b/translate/storage/test_tmx.py
index 5107471..de76b4a 100644
--- a/translate/storage/test_tmx.py
+++ b/translate/storage/test_tmx.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 
-from translate.storage import tmx
-from translate.storage import test_base
 from translate.misc import wStringIO
+from translate.storage import test_base, tmx
 
 
 class TestTMXUnit(test_base.TestTranslationUnit):
@@ -35,7 +34,7 @@ class TestTMXfile(test_base.TestTranslationStore):
     def tmxparse(self, tmxsource):
         """helper that parses tmx source without requiring files"""
         dummyfile = wStringIO.StringIO(tmxsource)
-        print tmxsource
+        print(tmxsource)
         tmxfile = tmx.tmxfile(dummyfile)
         return tmxfile
 
@@ -50,16 +49,16 @@ class TestTMXfile(test_base.TestTranslationStore):
         tmxfile = tmx.tmxfile()
         tmxfile.addtranslation("A string of characters", "en", "'n String karakters", "af")
         newfile = self.tmxparse(str(tmxfile))
-        print str(tmxfile)
+        print(str(tmxfile))
         assert newfile.translate("A string of characters") == "'n String karakters"
-        
+
     def test_withcomment(self):
         """tests that addtranslation() stores string's comments correctly"""
         tmxfile = tmx.tmxfile()
         tmxfile.addtranslation("A string of chars",
                                "en", "'n String karakters", "af", "comment")
         newfile = self.tmxparse(str(tmxfile))
-        print str(tmxfile)
+        print(str(tmxfile))
         assert newfile.findunit("A string of chars").getnotes() == "comment"
 
     def test_withnewlines(self):
@@ -67,7 +66,7 @@ class TestTMXfile(test_base.TestTranslationStore):
         tmxfile = tmx.tmxfile()
         tmxfile.addtranslation("First line\nSecond line", "en", "Eerste lyn\nTweede lyn", "af")
         newfile = self.tmxparse(str(tmxfile))
-        print str(tmxfile)
+        print(str(tmxfile))
         assert newfile.translate("First line\nSecond line") == "Eerste lyn\nTweede lyn"
 
     def test_xmlentities(self):
@@ -76,8 +75,8 @@ class TestTMXfile(test_base.TestTranslationStore):
         tmxfile.addtranslation("Mail & News", "en", "Nuus & pos", "af")
         tmxfile.addtranslation("Five < ten", "en", "Vyf < tien", "af")
         xmltext = str(tmxfile)
-        print "The generated xml:"
-        print xmltext
+        print("The generated xml:")
+        print(xmltext)
         assert tmxfile.translate('Mail & News') == 'Nuus & pos'
         assert xmltext.index('Mail & News')
         assert xmltext.find('Mail & News') == -1
diff --git a/translate/storage/test_trados.py b/translate/storage/test_trados.py
index ddf2634..8dc4f77 100644
--- a/translate/storage/test_trados.py
+++ b/translate/storage/test_trados.py
@@ -1,22 +1,21 @@
 # -*- coding: utf-8 -*-
 
-from pytest import mark, importorskip
-importorskip("BeautifulSoup")
+from pytest import importorskip
+importorskip("bs4")
 
-from translate.storage import test_base
 from translate.storage import trados
 
 
 def test_unescape():
     # NBSP
-    assert trados.unescape(ur"Ordre du jour\~:") == u"Ordre du jour\u00a0:"
-    assert trados.unescape(ur"Association for Road Safety \endash  Conference") == u"Association for Road Safety –  Conference"
+    assert trados.unescape(u"Ordre du jour\\~:") == u"Ordre du jour\u00a0:"
+    assert trados.unescape(u"Association for Road Safety \\endash  Conference") == u"Association for Road Safety –  Conference"
 
 
 def test_escape():
     # NBSP
-    assert trados.escape(u"Ordre du jour\u00a0:") == ur"Ordre du jour\~:"
-    assert trados.escape(u"Association for Road Safety –  Conference") == ur"Association for Road Safety \endash  Conference"
+    assert trados.escape(u"Ordre du jour\u00a0:") == u"Ordre du jour\\~:"
+    assert trados.escape(u"Association for Road Safety –  Conference") == u"Association for Road Safety \\endash  Conference"
 
 #@mark.xfail(reason="Lots to implement")
 #class TestTradosTxtTmUnit(test_base.TestTranslationUnit):
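The escape/unescape pair exercised above maps RTF-style Trados escapes to Unicode characters. A sketch covering only the two sequences the tests check (NBSP and en dash), assumed from the assertions rather than taken from `translate.storage.trados`:

```python
NBSP = "\u00a0"
ENDASH = "\u2013"

def trados_unescape(text):
    # Replace the RTF escape sequences with their Unicode characters.
    return text.replace(r"\~", NBSP).replace(r"\endash", ENDASH)

def trados_escape(text):
    # Inverse mapping: Unicode characters back to RTF escapes.
    return text.replace(NBSP, r"\~").replace(ENDASH, r"\endash")
```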
diff --git a/translate/storage/test_ts2.py b/translate/storage/test_ts2.py
index ae2f0f4..5decda3 100644
--- a/translate/storage/test_ts2.py
+++ b/translate/storage/test_ts2.py
@@ -18,15 +18,17 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
-"""Tests for Qt Linguist storage class"""
+"""Tests for Qt Linguist storage class
+
+Reference implementation & tests:
+gitorious:qt5-tools/src/qttools/tests/auto/linguist/lconvert/data
+"""
 
 from lxml import etree
 
 from translate.misc.multistring import multistring
-from translate.storage import ts2 as ts
-from translate.storage import test_base
-from translate.storage.placeables import parse
-from translate.storage.placeables import xliff
+from translate.storage import test_base, ts2 as ts
+from translate.storage.placeables import parse, xliff
 from translate.storage.placeables.lisa import xml_to_strelem
 
 
@@ -57,7 +59,7 @@ class TestTSfile(test_base.TestTranslationStore):
         tsfile.addsourceunit("Bla")
         assert len(tsfile.units) == 1
         newfile = ts.tsfile.parsestring(str(tsfile))
-        print str(tsfile)
+        print(str(tsfile))
         assert len(newfile.units) == 1
         assert newfile.units[0].source == "Bla"
         assert newfile.findunit("Bla").source == "Bla"
@@ -68,7 +70,7 @@ class TestTSfile(test_base.TestTranslationStore):
         tsunit = tsfile.addsourceunit("Concept")
         tsunit.source = "Term"
         newfile = ts.tsfile.parsestring(str(tsfile))
-        print str(tsfile)
+        print(str(tsfile))
         assert newfile.findunit("Concept") is None
         assert newfile.findunit("Term") is not None
 
@@ -77,7 +79,7 @@ class TestTSfile(test_base.TestTranslationStore):
         tsunit = tsfile.addsourceunit("Concept")
         tsunit.target = "Konsep"
         newfile = ts.tsfile.parsestring(str(tsfile))
-        print str(tsfile)
+        print(str(tsfile))
         assert newfile.findunit("Concept").target == "Konsep"
 
     def test_plurals(self):
@@ -86,7 +88,7 @@ class TestTSfile(test_base.TestTranslationStore):
         tsunit = tsfile.addsourceunit("File(s)")
         tsunit.target = [u"Leêr", u"Leêrs"]
         newfile = ts.tsfile.parsestring(str(tsfile))
-        print str(tsfile)
+        print(str(tsfile))
         checkunit = newfile.findunit("File(s)")
         assert checkunit.target == [u"Leêr", u"Leêrs"]
         assert checkunit.hasplural()
@@ -112,6 +114,32 @@ class TestTSfile(test_base.TestTranslationStore):
         tsfile = ts.tsfile.parsestring(tsstr)
         assert tsfile.getsourcelanguage() == 'en'
 
+    def test_edit(self):
+        """test editing works well"""
+        tsstr = '''<?xml version='1.0' encoding='utf-8'?>
+<!DOCTYPE TS>
+<TS version="2.0" language="hu">
+<context>
+    <name>MainWindow</name>
+    <message>
+        <source>ObsoleteString</source>
+        <translation type="obsolete">Groepen</translation>
+    </message>
+    <message>
+        <source>SourceString</source>
+        <translation>TargetString</translation>
+    </message>
+</context>
+</TS>
+'''
+        tsfile = ts.tsfile.parsestring(tsstr)
+        tsfile.units[1].settarget('TestTarget')
+        tsfile.units[1].markfuzzy(True)
+        newtsstr = tsstr.decode('utf-8').replace(
+            '>TargetString', ' type="unfinished">TestTarget'
+        ).encode('utf-8')
+        assert newtsstr == str(tsfile)
+
     def test_locations(self):
         """test that locations work well"""
         tsstr = '''<?xml version="1.0" encoding="utf-8"?>
@@ -181,7 +209,42 @@ class TestTSfile(test_base.TestTranslationStore):
         assert len(tsfile.units) == 2
         assert len(tsfile2.units) == 2
 
-        tsfile2.units[0].merge(tsfile.units[0]) #fuzzy
-        tsfile2.units[1].merge(tsfile.units[1]) #not fuzzy
-        assert tsfile2.units[0].isfuzzy() == True
-        assert tsfile2.units[1].isfuzzy() == False
+        tsfile2.units[0].merge(tsfile.units[0])  # fuzzy
+        tsfile2.units[1].merge(tsfile.units[1])  # not fuzzy
+        assert tsfile2.units[0].isfuzzy()
+        assert not tsfile2.units[1].isfuzzy()
+
+    def test_getid(self):
+        """test that getid works well"""
+        tsstr = """<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE TS>
+<TS version="2.1">
+<context>
+    <name>Dialog2</name>
+    <message numerus="yes">
+        <source>%n files</source>
+        <translation type="unfinished">
+            <numerusform></numerusform>
+        </translation>
+    </message>
+    <message id="this_is_some_id" numerus="yes">
+        <source>%n cars</source>
+        <translation type="unfinished">
+            <numerusform></numerusform>
+        </translation>
+    </message>
+    <message>
+        <source>Age: %1</source>
+        <translation type="unfinished"></translation>
+    </message>
+    <message id="this_is_another_id">
+        <source>func3</source>
+        <translation type="unfinished"></translation>
+    </message>
+</context>
+</TS>"""
+
+        tsfile = ts.tsfile.parsestring(tsstr)
+        assert tsfile.units[0].getid() == "Dialog2%n files"
+        assert tsfile.units[1].getid() == "Dialog2\nthis_is_some_id%n cars"
+        assert tsfile.units[3].getid() == "Dialog2\nthis_is_another_idfunc3"
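The three `getid()` assertions above imply an id scheme of context name, then an optional explicit message id after a newline, then the source text. A reconstruction from those assertions alone (not the actual `ts2` implementation):

```python
def ts_getid(context_name, source, message_id=None):
    """Compose the id the assertions above imply: context name,
    optionally '\\n' + the explicit message id, then the source."""
    if message_id:
        return "%s\n%s%s" % (context_name, message_id, source)
    return "%s%s" % (context_name, source)

print(ts_getid("Dialog2", "%n files"))
print(ts_getid("Dialog2", "%n cars", "this_is_some_id"))
```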
diff --git a/translate/storage/test_txt.py b/translate/storage/test_txt.py
index 305a6b4..85ed704 100644
--- a/translate/storage/test_txt.py
+++ b/translate/storage/test_txt.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 
 from translate.misc import wStringIO
-from translate.storage import txt
-from translate.storage import test_monolingual
+from translate.storage import test_monolingual, txt
 
 
 class TestTxtUnit(test_monolingual.TestMonolingualUnit):
@@ -35,8 +34,8 @@ class TestTxtFile(test_monolingual.TestMonolingualStore):
         txtsource = '''One\nOne\n\nTwo\n---\n\nThree'''
         txtfile = self.txtparse(txtsource)
         assert len(txtfile.units) == 3
-        print txtsource
-        print str(txtfile)
-        print "*%s*" % txtfile.units[0]
+        print(txtsource)
+        print(str(txtfile))
+        print("*%s*" % txtfile.units[0])
         assert str(txtfile) == txtsource
         assert self.txtregen(txtsource) == txtsource
diff --git a/translate/storage/test_utx.py b/translate/storage/test_utx.py
index 3067e66..db0f4fc 100644
--- a/translate/storage/test_utx.py
+++ b/translate/storage/test_utx.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.storage import utx
-from translate.storage import test_base
+from translate.storage import test_base, utx
 
 
 class TestUtxUnit(test_base.TestTranslationUnit):
diff --git a/translate/storage/test_wordfast.py b/translate/storage/test_wordfast.py
index d76cd66..6a1e949 100644
--- a/translate/storage/test_wordfast.py
+++ b/translate/storage/test_wordfast.py
@@ -1,8 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.storage import test_base
-from translate.storage import wordfast as wf
+from translate.storage import test_base, wordfast as wf
 
 
 class TestWFTime(object):
@@ -10,14 +9,14 @@ class TestWFTime(object):
     def test_timestring(self):
         """Setting and getting times set using a timestring"""
         wftime = wf.WordfastTime()
-        assert wftime.timestring == None
+        assert wftime.timestring is None
         wftime.timestring = "19710820~050000"
         assert wftime.time[:6] == (1971, 8, 20, 5, 0, 0)
 
     def test_time(self):
         """Setting and getting times set using time tuple"""
         wftime = wf.WordfastTime()
-        assert wftime.time == None
+        assert wftime.time is None
         wftime.time = (1999, 3, 27)
         wftime.timestring = "19990327~000000"
 
@@ -37,8 +36,8 @@ class TestWFUnit(test_base.TestTranslationUnit):
                     '\\\n', '\\\t', '\\\\r', '\\\\"']
         for special in specials:
             unit.source = special
-            print "unit.source:", repr(unit.source) + '|'
-            print "special:", repr(special) + '|'
+            print("unit.source:", repr(unit.source) + '|')
+            print("special:", repr(special) + '|')
             assert unit.source == special
 
     def test_wordfast_escaping(self):
@@ -46,7 +45,7 @@ class TestWFUnit(test_base.TestTranslationUnit):
 
         def compare(real, escaped):
             unit = self.UnitClass(real)
-            print real.encode('utf-8'), unit.source.encode('utf-8')
+            print(real.encode('utf-8'), unit.source.encode('utf-8'))
             assert unit.source == real
             assert unit.dict['source'] == escaped
             unit.target = real
diff --git a/translate/storage/test_xliff.py b/translate/storage/test_xliff.py
index 443b0a3..a3a7e5f 100644
--- a/translate/storage/test_xliff.py
+++ b/translate/storage/test_xliff.py
@@ -2,10 +2,9 @@
 
 from lxml import etree
 
-from translate.storage import xliff, lisa
-from translate.storage import test_base
+from translate.storage import lisa, test_base, xliff
 from translate.storage.placeables import StringElem
-from translate.storage.placeables.xliff import X, G
+from translate.storage.placeables.xliff import G, X
 
 
 class TestXLIFFUnit(test_base.TestTranslationUnit):
@@ -67,7 +66,7 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         xlifffile.addsourceunit("Bla")
         assert len(xlifffile.units) == 1
         newfile = xliff.xlifffile.parsestring(str(xlifffile))
-        print str(xlifffile)
+        print(str(xlifffile))
         assert len(newfile.units) == 1
         assert newfile.units[0].source == "Bla"
         assert newfile.findunit("Bla").source == "Bla"
@@ -86,7 +85,7 @@ class TestXLIFFfile(test_base.TestTranslationStore):
     </xliff:file>
 </xliff:xliff>'''
         xlifffile = xliff.xlifffile.parsestring(xlfsource)
-        print str(xlifffile)
+        print(str(xlifffile))
         assert xlifffile.units[0].source == "File 1"
 
     def test_rich_source(self):
@@ -105,7 +104,7 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         assert x_placeable.tail == 'baz'
 
         xliffunit.rich_source[0].print_tree(2)
-        print xliffunit.rich_source
+        print(xliffunit.rich_source)
         assert xliffunit.rich_source == [StringElem([StringElem(u'foo'), X(id='bar'), StringElem(u'baz')])]
 
         # Test 2
@@ -153,7 +152,7 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         assert target_dom_node.text == u'foobaz'
 
         assert g_placeable.tag == u'g'
-        print 'g_placeable.text: %s (%s)' % (g_placeable.text, type(g_placeable.text))
+        print('g_placeable.text: %s (%s)' % (g_placeable.text, type(g_placeable.text)))
         assert g_placeable.text is None
         assert g_placeable.attrib[u'id'] == u'oof'
         assert g_placeable.tail is None
@@ -171,7 +170,7 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         xliffunit = xlifffile.addsourceunit("Concept")
         xliffunit.source = "Term"
         newfile = xliff.xlifffile.parsestring(str(xlifffile))
-        print str(xlifffile)
+        print(str(xlifffile))
         assert newfile.findunit("Concept") is None
         assert newfile.findunit("Term") is not None
 
@@ -180,20 +179,20 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         xliffunit = xlifffile.addsourceunit("Concept")
         xliffunit.target = "Konsep"
         newfile = xliff.xlifffile.parsestring(str(xlifffile))
-        print str(xlifffile)
+        print(str(xlifffile))
         assert newfile.findunit("Concept").target == "Konsep"
 
     def test_sourcelanguage(self):
         xlifffile = xliff.xlifffile(sourcelanguage="xh")
         xmltext = str(xlifffile)
-        print xmltext
+        print(xmltext)
         assert xmltext.find('source-language="xh"') > 0
         #TODO: test that it also works for new files.
 
     def test_targetlanguage(self):
         xlifffile = xliff.xlifffile(sourcelanguage="zu", targetlanguage="af")
         xmltext = str(xlifffile)
-        print xmltext
+        print(xmltext)
         assert xmltext.find('source-language="zu"') > 0
         assert xmltext.find('target-language="af"') > 0
 
@@ -230,8 +229,8 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         assert not notenodes[2].get("from") == "Mom"
         assert not "from" in notenodes[0].attrib
         assert unit.getnotes() == "Please buy bread\nPlease buy milk\nDon't forget the beer"
-        assert unit.correctorigin(notenodes[2], "ad") == True
-        assert unit.correctorigin(notenodes[2], "om") == False
+        assert unit.correctorigin(notenodes[2], "ad")
+        assert not unit.correctorigin(notenodes[2], "om")
 
     def test_alttrans(self):
         """Test xliff <alt-trans> accessors"""
@@ -269,13 +268,13 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         # test that the source node is before the target node:
         alt = unit.getalttrans()[0]
         altformat = etree.tostring(alt.xmlelement)
-        print altformat
+        print(altformat)
         assert altformat.find("<source") < altformat.find("<target")
 
         # test that a new target is still before alt-trans (bug 1098)
         unit.target = u"newester target"
         unitformat = str(unit)
-        print unitformat
+        print(unitformat)
         assert unitformat.find("<source") < unitformat.find("<target") < unitformat.find("<alt-trans")
 
     def test_fuzzy(self):
@@ -297,7 +296,7 @@ class TestXLIFFfile(test_base.TestTranslationStore):
         #be uncommented
         unit.target = None
         assert unit.target is None
-        print unit
+        print(unit)
         unit.markfuzzy(True)
         assert 'approved="no"' in str(unit)
         #assert unit.isfuzzy()
diff --git a/translate/storage/test_zip.py b/translate/storage/test_zip.py
index 9274c47..e95771c 100644
--- a/translate/storage/test_zip.py
+++ b/translate/storage/test_zip.py
@@ -5,8 +5,7 @@
 import os
 from zipfile import ZipFile
 
-from translate.storage import directory
-from translate.storage import zip
+from translate.storage import directory, zip
 
 
 class TestZIPFile(object):
@@ -14,7 +13,7 @@ class TestZIPFile(object):
 
     def setup_method(self, method):
         """sets up a test directory"""
-        print "setup_method called on", self.__class__.__name__
+        print("setup_method called on", self.__class__.__name__)
         self.testzip = "%s_testzip.zip" % (self.__class__.__name__)
         self.cleardir(self.testzip)
         self.zip = ZipFile(self.testzip, mode="w")
@@ -44,7 +43,7 @@ class TestZIPFile(object):
 
     def test_created(self):
         """test that the directory actually exists"""
-        print self.testzip
+        print(self.testzip)
         assert os.path.isfile(self.testzip)
 
     def test_basic(self):
diff --git a/translate/storage/tiki.py b/translate/storage/tiki.py
index 02eb721..ded07b4 100644
--- a/translate/storage/tiki.py
+++ b/translate/storage/tiki.py
@@ -20,7 +20,9 @@
 
 """Class that manages TikiWiki files for translation.  Tiki files are <strike>ugly and
 inconsistent</strike> formatted as a single large PHP array with several special
-sections identified by comments.  Example current as of 2008-12-01::
+sections identified by comments.  Example current as of 2008-12-01:
+
+.. code-block:: php
 
   <?php
     // Many comments at the top
diff --git a/translate/storage/tmdb.py b/translate/storage/tmdb.py
index a745a59..dcb19de 100644
--- a/translate/storage/tmdb.py
+++ b/translate/storage/tmdb.py
@@ -26,10 +26,7 @@ import math
 import re
 import threading
 import time
-try:
-    from sqlite3 import dbapi2
-except ImportError:
-    from pysqlite2 import dbapi2
+from sqlite3 import dbapi2
 
 from translate.lang import data
 from translate.search.lshtein import LevenshteinComparer
@@ -65,7 +62,7 @@ class TMDB(object):
             self._tm_dbs[db_file] = {}
         self._tm_db = self._tm_dbs[db_file]
 
-        #FIXME: do we want to do any checks before we initialize the DB?
+        # FIXME: do we want to do any checks before we initialize the DB?
         self.init_database()
         self.fulltext = False
         self.init_fulltext()
@@ -125,7 +122,7 @@ CREATE UNIQUE INDEX IF NOT EXISTS targets_uniq_idx ON targets (sid, text, lang);
     def init_fulltext(self):
         """detects if fts3 fulltext indexing module exists, initializes fulltext table if it does"""
 
-        #HACKISH: no better way to detect fts3 support except trying to
+        # HACKISH: no better way to detect fts3 support except trying to
         # construct a dummy table?!
         try:
             script = """
@@ -170,7 +167,7 @@ END;
             logging.debug("created fulltext triggers")
             self.fulltext = True
 
-        except dbapi2.OperationalError, e:
+        except dbapi2.OperationalError as e:
             self.fulltext = False
             logging.debug("failed to initialize fts3 support: " + str(e))
             script = """
@@ -194,7 +191,7 @@ DROP TRIGGER IF EXISTS sources_delete_trig;
 
     def add_unit(self, unit, source_lang=None, target_lang=None, commit=True):
         """inserts unit in the database"""
-        #TODO: is that really the best way to handle unspecified
+        # TODO: is that really the best way to handle unspecified
         # source and target languages? what about conflicts between
         # unit attributes and passed arguments
         if unit.getsourcelanguage():
@@ -207,10 +204,11 @@ DROP TRIGGER IF EXISTS sources_delete_trig;
         if not target_lang:
             raise LanguageError("undefined target language")
 
-        unitdict = {"source": unit.source,
-                    "target": unit.target,
-                    "context": unit.getcontext(),
-                   }
+        unitdict = {
+            "source": unit.source,
+            "target": unit.target,
+            "context": unit.getcontext(),
+        }
         self.add_dict(unitdict, source_lang, target_lang, commit)
 
     def add_dict(self, unit, source_lang, target_lang, commit=True):
@@ -234,8 +232,8 @@ DROP TRIGGER IF EXISTS sources_delete_trig;
                 sid = self.cursor.fetchone()
                 (sid,) = sid
             try:
-                #FIXME: get time info from translation store
-                #FIXME: do we need so store target length?
+                # FIXME: get time info from translation store
+                # FIXME: do we need to store target length?
                 self.cursor.execute("INSERT INTO targets (sid, text, lang, time) VALUES (?, ?, ?, ?)",
                                     (sid,
                                      unit["target"],
diff --git a/translate/storage/tmx.py b/translate/storage/tmx.py
index 9684c5d..83c8365 100644
--- a/translate/storage/tmx.py
+++ b/translate/storage/tmx.py
@@ -86,7 +86,7 @@ class tmxunit(lisa.LISAunit):
 
     def adderror(self, errorname, errortext):
         """Adds an error message to this unit."""
-        #TODO: consider factoring out: some duplication between XLIFF and TMX
+        # TODO: consider factoring out: some duplication between XLIFF and TMX
         text = errorname
         if errortext:
             text += ': ' + errortext
@@ -94,7 +94,7 @@ class tmxunit(lisa.LISAunit):
 
     def geterrors(self):
         """Get all error messages."""
-        #TODO: consider factoring out: some duplication between XLIFF and TMX
+        # TODO: consider factoring out: some duplication between XLIFF and TMX
         notelist = self._getnotelist(origin="pofilter")
         errordict = {}
         for note in notelist:
@@ -107,7 +107,7 @@ class tmxunit(lisa.LISAunit):
 
         We don't want to make a deep copy - this could duplicate the whole XML
         tree. For now we just serialise and reparse the unit's XML."""
-        #TODO: check performance
+        # TODO: check performance
         new_unit = self.__class__(None, empty=True)
         new_unit.xmlelement = etree.fromstring(etree.tostring(self.xmlelement))
         return new_unit
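The `copy()` above clones a unit by serialising and reparsing its XML instead of deep-copying. The same trick can be sketched with the stdlib `ElementTree` (lxml's `tostring`/`fromstring` API is analogous):

```python
import xml.etree.ElementTree as ET

original = ET.fromstring('<tu><tuv><seg>Hello</seg></tuv></tu>')

# Serialise and reparse: yields a fully detached copy of the subtree without
# dragging the rest of the document along (the concern noted in the docstring).
clone = ET.fromstring(ET.tostring(original))
clone.find('.//seg').text = 'Goodbye'
```

Mutating the clone leaves the original element untouched, which is the property the unit copy relies on.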
@@ -116,7 +116,7 @@ class tmxunit(lisa.LISAunit):
 class tmxfile(lisa.LISAfile):
     """Class representing a TMX file store."""
     UnitClass = tmxunit
-    Name = _("TMX Translation Memory")
+    Name = "TMX Translation Memory"
     Mimetypes = ["application/x-tmx"]
     Extensions = ["tmx"]
     rootNode = "tmx"
@@ -135,7 +135,8 @@ class tmxfile(lisa.LISAfile):
         headernode.set("segtype", "sentence")
         headernode.set("o-tmf", "UTF-8")
         headernode.set("adminlang", "en")
-        #TODO: consider adminlang. Used for notes, etc. Possibly same as targetlanguage
+        # TODO: consider adminlang. Used for notes, etc. Possibly same as
+        # targetlanguage
         headernode.set("srclang", self.sourcelanguage)
         headernode.set("datatype", "PlainText")
         #headernode.set("creationdate", "YYYYMMDDTHHMMSSZ"
diff --git a/translate/storage/trados.py b/translate/storage/trados.py
index 14fbbf4..920a701 100644
--- a/translate/storage/trados.py
+++ b/translate/storage/trados.py
@@ -20,7 +20,10 @@
 
 """Manage the Trados .txt Translation Memory format
 
-A Trados file looks like this::
+A Trados file looks like this:
+
+.. code-block:: xml
+
     <TrU>
     <CrD>18012000, 13:18:35
     <CrU>CAROL-ANN
@@ -35,6 +38,7 @@ A Trados file looks like this::
     <Seg L=EN_GB>Road Safety Education in our Schools
     <Seg L=DE_DE>Verkehrserziehung an Schulen
     </TrU>
+
 """
 
 import re
@@ -42,34 +46,35 @@ import time
 
 try:
     # FIXME see if we can't use lxml
-    from BeautifulSoup import BeautifulStoneSoup
+    from bs4 import BeautifulSoup
 except ImportError:
-    raise ImportError("BeautifulSoup is not installed. Support for Trados txt is disabled.")
+    raise ImportError("BeautifulSoup 4 is not installed. Support for Trados txt is disabled.")
 
 from translate.storage import base
 
+
 TRADOS_TIMEFORMAT = "%d%m%Y, %H:%M:%S"
 """Time format used by Trados .txt"""
 
 RTF_ESCAPES = {
-ur"\emdash": u"—",
-ur"\endash": u"–",
-# Nonbreaking space equal to width of character "m" in current font.
-ur"\emspace": u"\u2003",
-# Nonbreaking space equal to width of character "n" in current font.
-ur"\enspace": u"\u2002",
-#ur"\qmspace": "",    # One-quarter em space.
-ur"\bullet": u"•",     # Bullet character.
-ur"\lquote": u"‘",     # Left single quotation mark. \u2018
-ur"\rquote": u"’",     # Right single quotation mark. \u2019
-ur"\ldblquote": u"“",  # Left double quotation mark. \u201C
-ur"\rdblquote": u"”",  # Right double quotation mark. \u201D
-ur"\~": u"\u00a0",  # Nonbreaking space
-ur"\-": u"\u00ad",  # Optional hyphen.
-ur"\_": u"‑",  # Nonbreaking hyphen \U2011
-# A hexadecimal value, based on the specified character set (may be used to
-# identify 8-bit values).
-#ur"\'hh": "",
+    u"\\emdash": u"—",
+    u"\\endash": u"–",
+    # Nonbreaking space equal to width of character "m" in current font.
+    u"\\emspace": u"\u2003",
+    # Nonbreaking space equal to width of character "n" in current font.
+    u"\\enspace": u"\u2002",
+    #u"\\qmspace": "",    # One-quarter em space.
+    u"\\bullet": u"•",     # Bullet character.
+    u"\\lquote": u"‘",     # Left single quotation mark. \u2018
+    u"\\rquote": u"’",     # Right single quotation mark. \u2019
+    u"\\ldblquote": u"“",  # Left double quotation mark. \u201C
+    u"\\rdblquote": u"”",  # Right double quotation mark. \u201D
+    u"\\~": u"\u00a0",  # Nonbreaking space
+    u"\\-": u"\u00ad",  # Optional hyphen.
+    u"\\_": u"‑",  # Nonbreaking hyphen \U2011
+    # A hexadecimal value, based on the specified character set (may be used to
+    # identify 8-bit values).
+    #u"\\'hh": "",
 }
 """RTF control to Unicode map. See
 http://msdn.microsoft.com/en-us/library/aa140283(v=office.10).aspx
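A map like `RTF_ESCAPES` is typically consumed by substituting each control word with its Unicode value. A hedged sketch of that substitution over a small subset of the table (the helper name `unescape_rtf` is illustrative, not the module's API):

```python
import re

# A small subset of the RTF_ESCAPES table above.
rtf_escapes = {
    "\\emdash": "\u2014",
    "\\endash": "\u2013",
    "\\~": "\u00a0",
}

# Longest control words first, so a word is never matched as a shorter prefix.
pattern = re.compile("|".join(re.escape(key) for key in
                              sorted(rtf_escapes, key=len, reverse=True)))

def unescape_rtf(text):
    """Replace known RTF control words with their Unicode equivalents."""
    return pattern.sub(lambda match: rtf_escapes[match.group(0)], text)
```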
@@ -155,7 +160,7 @@ class TradosUnit(base.TranslationUnit):
     target = property(gettarget, None)
 
 
-class TradosSoup(BeautifulStoneSoup):
+class TradosSoup(BeautifulSoup):
 
     MARKUP_MASSAGE = [
         (re.compile('<(?P<fulltag>(?P<tag>[^\s\/]+).*?)>(?P<content>.+)\r'),
@@ -165,7 +170,7 @@ class TradosSoup(BeautifulStoneSoup):
 
 class TradosTxtTmFile(base.TranslationStore):
     """A Trados translation memory file"""
-    Name = _("Trados Translation Memory")
+    Name = "Trados Translation Memory"
     Mimetypes = ["application/x-trados-tm"]
     Extensions = ["txt"]
 
diff --git a/translate/storage/ts.py b/translate/storage/ts.py
index 1f1b0ef..8f8fd09 100644
--- a/translate/storage/ts.py
+++ b/translate/storage/ts.py
@@ -85,7 +85,8 @@ class QtTsParser:
     def getxml(self):
         """return the ts file as xml"""
         xml = self.document.toprettyxml(indent="    ", encoding="utf-8")
-        #This line causes empty lines in the translation text to be removed (when there are two newlines)
+        # This line causes empty lines in the translation text to be removed
+        # (when there are two newlines)
         xml = "\n".join([line for line in xml.split("\n") if line.strip()])
         return xml
 
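The blank-line stripping in `getxml()` above compensates for `minidom.toprettyxml()` keeping pre-existing whitespace text nodes while adding its own newlines. A minimal reproduction of the clean-up:

```python
import xml.dom.minidom

document = xml.dom.minidom.parseString(
    "<TS>\n\n<message>\n\n<source>Hi</source>\n</message>\n</TS>")
xml_text = document.toprettyxml(indent="    ")

# toprettyxml() keeps whitespace-only text nodes and adds its own newlines,
# so the output can contain runs of blank lines; drop them as getxml() does.
xml_text = "\n".join(line for line in xml_text.split("\n") if line.strip())
```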
diff --git a/translate/storage/ts2.py b/translate/storage/ts2.py
index b7cfbc0..a6be907 100644
--- a/translate/storage/ts2.py
+++ b/translate/storage/ts2.py
@@ -41,25 +41,26 @@ from translate.storage import base, lisa
 from translate.storage.placeables import general
 from translate.storage.workflow import StateEnum as state
 
+
 # TODO: handle translation types
 
 NPLURALS = {
-'jp': 1,
-'en': 2,
-'fr': 2,
-'lv': 3,
-'ga': 3,
-'cs': 3,
-'sk': 3,
-'mk': 3,
-'lt': 3,
-'ru': 3,
-'pl': 3,
-'ro': 3,
-'sl': 4,
-'mt': 4,
-'cy': 5,
-'ar': 6,
+    'jp': 1,
+    'en': 2,
+    'fr': 2,
+    'lv': 3,
+    'ga': 3,
+    'cs': 3,
+    'sk': 3,
+    'mk': 3,
+    'lt': 3,
+    'ru': 3,
+    'pl': 3,
+    'ro': 3,
+    'sl': 4,
+    'mt': 4,
+    'cy': 5,
+    'ar': 6,
 }
 
 
@@ -101,8 +102,8 @@ class tsunit(lisa.LISAunit):
         if purpose == "target":
             purpose = "translation"
         langset = etree.Element(self.namespaced(purpose))
-        #TODO: check language
-#        lisa.setXMLlang(langset, lang)
+        # TODO: check language
+        #lisa.setXMLlang(langset, lang)
 
         langset.text = text
         return langset
@@ -195,7 +196,7 @@ class tsunit(lisa.LISAunit):
             note.text = text.strip()
 
     def getnotes(self, origin=None):
-        #TODO: consider only responding when origin has certain values
+        # TODO: consider only responding when origin has certain values
         comments = []
         if origin in ["programmer", "developer", "source code", None]:
             notenode = self.xmlelement.find(self.namespaced("extracomment"))
@@ -248,13 +249,17 @@ class tsunit(lisa.LISAunit):
             self._settype(None)
 
     def getid(self):
-        if self.source is None:
-            return None
         context_name = self.getcontext()
-        #XXX: context_name is not supposed to be able to be None (the <name>
+        if self.source is None and context_name is None:
+            return None
+
+        # XXX: context_name is not supposed to be able to be None (the <name>
         # tag is compulsory in the <context> tag)
         if context_name is not None:
-            return context_name + self.source
+            if self.source:
+                return context_name + self.source
+            else:
+                return context_name
         else:
             return self.source
 
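The reworked `getid()` can be summarised as a pure function over the two inputs (`make_unit_id` is an illustrative name, not toolkit API):

```python
def make_unit_id(context_name, source):
    """Sketch of the reworked getid(): the id is the context name plus the
    source when both are present, whichever one exists otherwise, and None
    only for a unit with neither."""
    if source is None and context_name is None:
        return None
    if context_name is not None:
        if source:
            return context_name + source
        return context_name
    return source
```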
@@ -280,6 +285,9 @@ class tsunit(lisa.LISAunit):
         commentnode = self.xmlelement.find(self.namespaced("comment"))
         if commentnode is not None and commentnode.text is not None:
             contexts.append(commentnode.text)
+        message_id = self.xmlelement.get('id')
+        if message_id is not None:
+            contexts.append(message_id)
         contexts = filter(None, contexts)
         return '\n'.join(contexts)
 
@@ -312,7 +320,7 @@ class tsunit(lisa.LISAunit):
 
     def merge(self, otherunit, overwrite=False, comments=True, authoritative=False):
         super(tsunit, self).merge(otherunit, overwrite, comments)
-        #TODO: check if this is necessary:
+        # TODO: check if this is necessary:
         if otherunit.isfuzzy():
             self.markfuzzy()
         else:
@@ -346,7 +354,7 @@ class tsunit(lisa.LISAunit):
 class tsfile(lisa.LISAfile):
     """Class representing a TS file store."""
     UnitClass = tsunit
-    Name = _("Qt Linguist Translation File")
+    Name = "Qt Linguist Translation File"
     Mimetypes = ["application/x-linguist"]
     Extensions = ["ts"]
     rootNode = "TS"
@@ -472,18 +480,6 @@ class tsfile(lisa.LISAfile):
             return 1
 
     def __str__(self):
-        """Converts to a string containing the file's XML.
-
-        We have to override this to ensure mimic the Qt convention:
-            - no XML decleration
-            - plain DOCTYPE that lxml seems to ignore
-        """
-        # A bug in lxml means we have to output the doctype ourselves. For
-        # more information, see:
-        # http://codespeak.net/pipermail/lxml-dev/2008-October/004112.html
-        # The problem was fixed in lxml 2.1.3
-        output = etree.tostring(self.document, pretty_print=True,
-                                xml_declaration=False, encoding='utf-8')
-        if not "<!DOCTYPE TS>" in output[:30]:
-            output = "<!DOCTYPE TS>" + output
-        return output
+        """Converts to a string containing the file's XML."""
+        return etree.tostring(self.document, pretty_print=True,
+                              xml_declaration=True, encoding='utf-8')
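The simplified `__str__` now asks `etree.tostring` for an XML declaration instead of hand-prepending a `<!DOCTYPE TS>` line. The stdlib equivalent looks like this (the toolkit itself uses lxml, whose `pretty_print` the stdlib approximates with `ET.indent`, Python 3.9+):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<TS><message><source>Hi</source></message></TS>")
ET.indent(root)  # stdlib stand-in for lxml's pretty_print (Python 3.9+)

# xml_declaration=True emits the <?xml ...?> prologue the old code suppressed.
output = ET.tostring(root, encoding="utf-8", xml_declaration=True)
```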
diff --git a/translate/storage/txt.py b/translate/storage/txt.py
index fd26c7f..5bd06b0 100644
--- a/translate/storage/txt.py
+++ b/translate/storage/txt.py
@@ -31,6 +31,7 @@ import re
 
 from translate.storage import base
 
+
 dokuwiki = []
 dokuwiki.append(("Dokuwiki heading", re.compile(r"( ?={2,6}[\s]*)(.+)"), re.compile("([\s]*={2,6}[\s]*)$")))
 dokuwiki.append(("Dokuwiki bullet", re.compile(r"([\s]{2,}\*[\s]*)(.+)"), re.compile("[\s]+$")))
@@ -42,10 +43,10 @@ mediawiki.append(("MediaWiki bullet", re.compile(r"(\*+[\s]*)(.+)"), re.compile(
 mediawiki.append(("MediaWiki numbered item", re.compile(r"(#+[\s]*)(.+)"), re.compile("[\s]+$")))
 
 flavours = {
-"dokuwiki": dokuwiki,
-"mediawiki": mediawiki,
-None: [],
-"plain": [],
+    "dokuwiki": dokuwiki,
+    "mediawiki": mediawiki,
+    None: [],
+    "plain": [],
 }
 
 
diff --git a/translate/storage/utx.py b/translate/storage/utx.py
index 36f7efd..92b09e5 100644
--- a/translate/storage/utx.py
+++ b/translate/storage/utx.py
@@ -45,7 +45,6 @@ Encoding
 """
 
 import csv
-import sys
 import time
 
 from translate.storage import base
@@ -125,7 +124,9 @@ class UtxUnit(base.TranslationUnit):
 
     def addnote(self, text, origin=None, position="append"):
         currentnote = self._get_field('comment')
-        if position == "append" and currentnote is not None and currentnote != u'':
+        if (position == "append" and
+            currentnote is not None and
+            currentnote != u''):
             self._set_field('comment', currentnote + '\n' + text)
         else:
             self._set_field('comment', text)
@@ -162,7 +163,7 @@ class UtxUnit(base.TranslationUnit):
 
 class UtxFile(base.TranslationStore):
     """A UTX dictionary file"""
-    Name = _("UTX Dictionary")
+    Name = "UTX Dictionary"
     Mimetypes = ["text/x-utx"]
     Extensions = ["utx"]
 
@@ -174,9 +175,12 @@ class UtxFile(base.TranslationStore):
         self.filename = ''
         self.extension = ''
         self._fieldnames = ['src', 'tgt', 'src:pos']
-        self._header = {"version": "1.00",
-                        "source_language": "en",
-                        "date_created": time.strftime("%FT%TZ%z", time.localtime(time.time()))}
+        self._header = {
+            "version": "1.00",
+            "source_language": "en",
+            "date_created": time.strftime("%FT%TZ%z",
+                                          time.localtime(time.time()))
+        }
         if inputfile is not None:
             self.parse(inputfile)
 
@@ -210,15 +214,16 @@ class UtxFile(base.TranslationStore):
 
     def _write_header(self):
         """Create a UTX header"""
-        header = "#UTX-S %(version)s; %(src)s/%(tgt)s; %(date)s" % \
-                  {"version": self._header["version"],
-                   "src": self._header["source_language"],
-                   "tgt": self._header.get("target_language", ""),
-                   "date": self._header["date_created"],
-                  }
+        header = "#UTX-S %(version)s; %(src)s/%(tgt)s; %(date)s" % {
+                    "version": self._header["version"],
+                    "src": self._header["source_language"],
+                    "tgt": self._header.get("target_language", ""),
+                    "date": self._header["date_created"],
+                 }
         items = []
         for key, value in self._header.iteritems():
-            if key in ["version", "source_language", "target_language", "date_created"]:
+            if key in ["version", "source_language",
+                       "target_language", "date_created"]:
                 continue
             items.append("%s: %s" % (key, value))
         if len(items):
@@ -254,9 +259,10 @@ class UtxFile(base.TranslationStore):
             header_length = self._read_header(input)
         except:
             raise base.ParseError("Cannot parse header")
-        lines = csv.DictReader(input.split(UtxDialect.lineterminator)[header_length:],
-                               fieldnames=self._fieldnames,
-                               dialect="utx")
+        lines = csv.DictReader(
+                    input.split(UtxDialect.lineterminator)[header_length:],
+                    fieldnames=self._fieldnames,
+                    dialect="utx")
         for line in lines:
             newunit = UtxUnit()
             newunit.dict = line
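`csv.DictReader` maps each record onto the fixed `_fieldnames` list. A minimal sketch of the parsing above (the tab delimiter here is an assumption standing in for the registered "utx" dialect):

```python
import csv

# Tab-delimited stand-in for the registered "utx" dialect (assumption:
# UTX glossary bodies are tab-separated src/tgt/pos records).
body = "hello\thallo\tnoun\nworld\tWelt\tnoun"
fieldnames = ["src", "tgt", "src:pos"]

lines = csv.DictReader(body.split("\n"), fieldnames=fieldnames, delimiter="\t")
rows = [dict(line) for line in lines]
```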
diff --git a/translate/storage/versioncontrol/__init__.py b/translate/storage/versioncontrol/__init__.py
index 4556bac..ea25da6 100644
--- a/translate/storage/versioncontrol/__init__.py
+++ b/translate/storage/versioncontrol/__init__.py
@@ -35,6 +35,7 @@ import os
 import re
 import subprocess
 
+
 DEFAULT_RCS = ["svn", "cvs", "darcs", "git", "bzr", "hg"]
 """the names of all supported revision control systems
 
@@ -54,9 +55,9 @@ def __get_rcs_class(name):
             module = __import__("translate.storage.versioncontrol.%s" % name,
                     globals(), {}, name)
             # the module function "is_available" must return "True"
-            if (hasattr(module, "is_available") and \
-                    callable(module.is_available) and \
-                    module.is_available()):
+            if (hasattr(module, "is_available") and
+                callable(module.is_available) and
+                module.is_available()):
                 # we found an appropriate module
                 rcs_class = getattr(module, name)
             else:
@@ -93,7 +94,7 @@ def run_command(command, cwd=None):
         (output, error) = proc.communicate()
         ret = proc.returncode
         return ret, output, error
-    except OSError, err_msg:
+    except OSError as err_msg:
         # failed to run the program (e.g. the executable was not found)
         return -1, "", err_msg
 
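The `except OSError as err_msg` change above is the Python 3 exception syntax (the comma form was Python 2 only). A runnable sketch of the surrounding `run_command` helper:

```python
import subprocess

def run_command(command, cwd=None):
    """Run an external command, returning (returncode, stdout, stderr).

    A failure to launch (e.g. missing executable) is reported as
    returncode -1 rather than an exception, matching the helper above.
    """
    try:
        proc = subprocess.Popen(command, cwd=cwd,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                universal_newlines=True)
        output, error = proc.communicate()
        return proc.returncode, output, error
    except OSError as err_msg:  # Python 3 "as" syntax; Python 2 used a comma
        return -1, "", err_msg

ret, out, err = run_command(["definitely-not-a-real-binary-123"])
```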
@@ -107,6 +108,7 @@ def prepare_filelist(files):
 def youngest_ancestor(files):
     return os.path.commonprefix([os.path.dirname(f) for f in files])
 
+
 class GenericRevisionControlSystem(object):
     """The super class for all version control classes.
 
@@ -158,8 +160,8 @@ class GenericRevisionControlSystem(object):
         location = os.path.normpath(location)
         result = self._find_rcs_directory(location, oldest_parent)
         if result is None:
-            raise IOError("Could not find revision control information: %s" \
-                    % location)
+            raise IOError("Could not find revision control information: %s" %
+                          location)
 
         self.root_dir, self.location_abs, self.location_rel = result
         if not os.path.isdir(location):
@@ -243,11 +245,11 @@ class GenericRevisionControlSystem(object):
         something like :attr:`RCS_METADIR`
         """
         if self.RCS_METADIR is None:
-            raise IOError("Incomplete RCS interface implementation: " \
-                    + "self.RCS_METADIR is None")
+            raise IOError("Incomplete RCS interface implementation: "
+                          "self.RCS_METADIR is None")
         if self.SCAN_PARENTS is None:
-            raise IOError("Incomplete RCS interface implementation: " \
-                    + "self.SCAN_PARENTS is None")
+            raise IOError("Incomplete RCS interface implementation: "
+                          "self.SCAN_PARENTS is None")
         # we do not check for implemented functions - they raise
         # NotImplementedError exceptions anyway
         return True
@@ -260,23 +262,23 @@ class GenericRevisionControlSystem(object):
 
     def getcleanfile(self, revision=None):
         """Dummy to be overridden by real implementations"""
-        raise NotImplementedError("Incomplete RCS interface implementation:" \
-                + " 'getcleanfile' is missing")
+        raise NotImplementedError("Incomplete RCS interface implementation:"
+                                  " 'getcleanfile' is missing")
 
     def commit(self, message=None, author=None):
         """Dummy to be overridden by real implementations"""
-        raise NotImplementedError("Incomplete RCS interface implementation:" \
-                + " 'commit' is missing")
+        raise NotImplementedError("Incomplete RCS interface implementation:"
+                                  " 'commit' is missing")
 
     def add(self, files, message=None, author=None):
         """Dummy to be overridden by real implementations"""
-        raise NotImplementedError("Incomplete RCS interface implementation:" \
-                + " 'add' is missing")
+        raise NotImplementedError("Incomplete RCS interface implementation:"
+                                  " 'add' is missing")
 
     def update(self, revision=None, needs_revert=True):
         """Dummy to be overridden by real implementations"""
-        raise NotImplementedError("Incomplete RCS interface implementation:" \
-                + " 'update' is missing")
+        raise NotImplementedError("Incomplete RCS interface implementation:"
+                                  " 'update' is missing")
 
 
 def get_versioned_objects_recursive(
@@ -289,17 +291,16 @@ def get_versioned_objects_recursive(
     if versioning_systems is None:
         versioning_systems = DEFAULT_RCS[:]
 
-    def scan_directory(arg, dirname, fnames):
+    for dirpath, dirnames, filenames in os.walk(location):
+        fnames = dirnames + filenames
         for fname in fnames:
-            full_fname = os.path.join(dirname, fname)
+            full_fname = os.path.join(dirpath, fname)
             if os.path.isfile(full_fname):
                 try:
                     rcs_objs.append(get_versioned_object(full_fname,
                             versioning_systems, follow_symlinks))
                 except IOError:
                     pass
-
-    os.path.walk(location, scan_directory, None)
     return rcs_objs
 
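The rewrite above replaces the callback-style `os.path.walk` (removed in Python 3) with `os.walk`, which yields `(dirpath, dirnames, filenames)` tuples. A self-contained sketch of the same traversal (the `example.po` file name is illustrative):

```python
import os
import tempfile

# os.path.walk(location, callback, arg) was removed in Python 3;
# os.walk yields (dirpath, dirnames, filenames) tuples instead.
root = tempfile.mkdtemp()
open(os.path.join(root, "example.po"), "w").close()  # illustrative file name

found = []
for dirpath, dirnames, filenames in os.walk(root):
    for fname in dirnames + filenames:
        full_fname = os.path.join(dirpath, fname)
        if os.path.isfile(full_fname):
            found.append(os.path.basename(full_fname))
```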
 
@@ -307,8 +308,7 @@ def get_versioned_object(
         location,
         versioning_systems=None,
         follow_symlinks=True,
-        oldest_parent=None,
-    ):
+        oldest_parent=None):
     """return a versioned object for the given file"""
     if versioning_systems is None:
         versioning_systems = DEFAULT_RCS[:]
@@ -411,4 +411,4 @@ if __name__ == "__main__":
         import translate.storage.versioncontrol
         # print the names of locally available version control systems
         for rcs in get_available_version_control_systems():
-            print rcs
+            print(rcs)
diff --git a/translate/storage/versioncontrol/bzr.py b/translate/storage/versioncontrol/bzr.py
index 83e9f62..89a7e9f 100644
--- a/translate/storage/versioncontrol/bzr.py
+++ b/translate/storage/versioncontrol/bzr.py
@@ -19,9 +19,9 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
 
-import os.path
-from translate.storage.versioncontrol import GenericRevisionControlSystem
-from translate.storage.versioncontrol import run_command, prepare_filelist, youngest_ancestor
+from translate.storage.versioncontrol import (GenericRevisionControlSystem,
+                                              prepare_filelist, run_command,
+                                              youngest_ancestor)
 
 
 def is_available():
@@ -68,15 +68,15 @@ class bzr(GenericRevisionControlSystem):
             command = ["bzr", "revert", self.location_abs]
             exitcode, output_revert, error = run_command(command)
             if exitcode != 0:
-                raise IOError("[BZR] revert of '%s' failed: %s" \
-                        % (self.location_abs, error))
+                raise IOError("[BZR] revert of '%s' failed: %s" % (
+                              self.location_abs, error))
 
         # bzr pull
         command = ["bzr", "pull"]
         exitcode, output_pull, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[BZR] pull of '%s' failed: %s" \
-                    % (self.location_abs, error))
+            raise IOError("[BZR] pull of '%s' failed: %s" % (
+                          self.location_abs, error))
         return output_revert + output_pull
 
     def add(self, files, message=None, author=None):
@@ -85,8 +85,8 @@ class bzr(GenericRevisionControlSystem):
         command = ["bzr", "add"] + files
         exitcode, output, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[BZR] add in '%s' failed: %s" \
-                    % (self.location_abs, error))
+            raise IOError("[BZR] add in '%s' failed: %s" % (
+                          self.location_abs, error))
 
         # go down as deep as possible in the tree to avoid accidental commits
         # TODO: explicitly commit files by name
@@ -106,14 +106,14 @@ class bzr(GenericRevisionControlSystem):
         command.append(self.location_abs)
         exitcode, output_commit, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[BZR] commit of '%s' failed: %s" \
-                    % (self.location_abs, error))
+            raise IOError("[BZR] commit of '%s' failed: %s" % (
+                          self.location_abs, error))
         # bzr push
         command = ["bzr", "push"]
         exitcode, output_push, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[BZR] push of '%s' failed: %s" \
-                    % (self.location_abs, error))
+            raise IOError("[BZR] push of '%s' failed: %s" % (
+                          self.location_abs, error))
         return output_commit + output_push
 
     def getcleanfile(self, revision=None):
@@ -122,6 +122,6 @@ class bzr(GenericRevisionControlSystem):
         command = ["bzr", "cat", self.location_abs]
         exitcode, output, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[BZR] cat failed for '%s': %s" \
-                    % (self.location_abs, error))
+            raise IOError("[BZR] cat failed for '%s': %s" % (
+                          self.location_abs, error))
         return output
diff --git a/translate/storage/versioncontrol/cvs.py b/translate/storage/versioncontrol/cvs.py
index 087ef39..63669dc 100644
--- a/translate/storage/versioncontrol/cvs.py
+++ b/translate/storage/versioncontrol/cvs.py
@@ -20,8 +20,9 @@
 
 import os
 
-from translate.storage.versioncontrol import GenericRevisionControlSystem
-from translate.storage.versioncontrol import run_command, prepare_filelist, youngest_ancestor
+from translate.storage.versioncontrol import (GenericRevisionControlSystem,
+                                              prepare_filelist, run_command,
+                                              youngest_ancestor)
 
 
 def is_available():
@@ -52,8 +53,8 @@ class cvs(GenericRevisionControlSystem):
         command.append(path)
         exitcode, output, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[CVS] Could not read '%s' from '%s': %s / %s" % \
-                    (path, cvsroot, output, error))
+            raise IOError("[CVS] Could not read '%s' from '%s': %s / %s" % (
+                          path, cvsroot, output, error))
         return output
 
     def getcleanfile(self, revision=None):
@@ -73,16 +74,16 @@ class cvs(GenericRevisionControlSystem):
 
     def update(self, revision=None, needs_revert=True):
         """Does a clean update of the given path"""
-        #TODO: take needs_revert parameter into account
+        # TODO: take needs_revert parameter into account
         working_dir = os.path.dirname(self.location_abs)
         filename = self.location_abs
         filename_backup = filename + os.path.extsep + "bak"
         # rename the file to be updated
         try:
             os.rename(filename, filename_backup)
-        except OSError, error:
-            raise IOError("[CVS] could not move the file '%s' to '%s': %s" % \
-                    (filename, filename_backup, error))
+        except OSError as error:
+            raise IOError("[CVS] could not move the file '%s' to '%s': %s" % (
+                          filename, filename_backup, error))
         command = ["cvs", "-Q", "update", "-C"]
         if revision:
             command.extend(["-r", revision])
diff --git a/translate/storage/versioncontrol/darcs.py b/translate/storage/versioncontrol/darcs.py
index b092e6e..f73cdbf 100644
--- a/translate/storage/versioncontrol/darcs.py
+++ b/translate/storage/versioncontrol/darcs.py
@@ -19,10 +19,9 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
 
-import os
-
-from translate.storage.versioncontrol import GenericRevisionControlSystem
-from translate.storage.versioncontrol import run_command, prepare_filelist, youngest_ancestor
+from translate.storage.versioncontrol import (GenericRevisionControlSystem,
+                                              prepare_filelist, run_command,
+                                              youngest_ancestor)
 
 
 def is_available():
@@ -64,8 +63,8 @@ class darcs(GenericRevisionControlSystem):
         command = ["darcs", "add", "--repodir", self.root_dir] + files
         exitcode, output, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[Darcs] Error running darcs command '%s': %s" \
-                    % (command, error))
+            raise IOError("[Darcs] Error running darcs command '%s': %s" % (
+                          command, error))
 
         # go down as deep as possible in the tree to avoid accidental commits
         # TODO: explicitly commit files by name
@@ -86,14 +85,14 @@ class darcs(GenericRevisionControlSystem):
         command.append(self.location_rel)
         exitcode, output_record, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[Darcs] Error running darcs command '%s': %s" \
-                    % (command, error))
+            raise IOError("[Darcs] Error running darcs command '%s': %s" % (
+                          command, error))
         # push changes
         command = ["darcs", "push", "-a", "--repodir", self.root_dir]
         exitcode, output_push, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[Darcs] Error running darcs command '%s': %s" \
-                    % (command, error))
+            raise IOError("[Darcs] Error running darcs command '%s': %s" % (
+                          command, error))
         return output_record + output_push
 
     def getcleanfile(self, revision=None):
@@ -108,7 +107,7 @@ class darcs(GenericRevisionControlSystem):
             darcs_file = open(filename)
             output = darcs_file.read()
             darcs_file.close()
-        except IOError, error:
-            raise IOError("[Darcs] error reading original file '%s': %s" % \
-                    (filename, error))
+        except IOError as error:
+            raise IOError("[Darcs] error reading original file '%s': %s" % (
+                          filename, error))
         return output
diff --git a/translate/storage/versioncontrol/git.py b/translate/storage/versioncontrol/git.py
index 3286bae..4019a68 100644
--- a/translate/storage/versioncontrol/git.py
+++ b/translate/storage/versioncontrol/git.py
@@ -24,8 +24,8 @@
 
 import os
 
-from translate.storage.versioncontrol import GenericRevisionControlSystem
-from translate.storage.versioncontrol import run_command, prepare_filelist
+from translate.storage.versioncontrol import (GenericRevisionControlSystem,
+                                              prepare_filelist, run_command)
 
 
 def is_available():
@@ -80,8 +80,8 @@ class git(GenericRevisionControlSystem):
         command = self._get_git_command(args)
         exitcode, output, error = run_command(command, self.root_dir)
         if exitcode != 0:
-            raise IOError("[GIT] add of files in '%s') failed: %s" \
-                    % (self.root_dir, error))
+            raise IOError("[GIT] add of files in '%s') failed: %s" % (
+                          self.root_dir, error))
 
         return output + self.commit(message, author, add=False)
 
@@ -93,8 +93,8 @@ class git(GenericRevisionControlSystem):
             command = self._get_git_command(["add", self.location_rel])
             exitcode, output_add, error = run_command(command, self.root_dir)
             if exitcode != 0:
-                raise IOError("[GIT] add of ('%s', '%s') failed: %s" \
-                        % (self.root_dir, self.location_rel, error))
+                raise IOError("[GIT] add of ('%s', '%s') failed: %s" % (
+                              self.root_dir, self.location_rel, error))
 
         if not self._has_changes():
             raise IOError("[GIT] no changes to commit")
@@ -111,14 +111,14 @@ class git(GenericRevisionControlSystem):
                 msg = error
             else:
                 msg = output_commit
-            raise IOError("[GIT] commit of ('%s', '%s') failed: %s" \
-                    % (self.root_dir, self.location_rel, msg))
+            raise IOError("[GIT] commit of ('%s', '%s') failed: %s" % (
+                          self.root_dir, self.location_rel, msg))
         # push changes
         command = self._get_git_command(["push"])
         exitcode, output_push, error = run_command(command, self.root_dir)
         if exitcode != 0:
-            raise IOError("[GIT] push of ('%s', '%s') failed: %s" \
-                    % (self.root_dir, self.location_rel, error))
+            raise IOError("[GIT] push of ('%s', '%s') failed: %s" % (
+                          self.root_dir, self.location_rel, error))
         return output_add + output_commit + output_push
 
     def getcleanfile(self, revision=None):
@@ -127,6 +127,6 @@ class git(GenericRevisionControlSystem):
         command = self._get_git_command(["show", "HEAD:%s" % self.location_rel])
         exitcode, output, error = run_command(command, self.root_dir)
         if exitcode != 0:
-            raise IOError("[GIT] 'show' failed for ('%s', %s): %s" \
-                    % (self.root_dir, self.location_rel, error))
+            raise IOError("[GIT] 'show' failed for ('%s', %s): %s" % (
+                          self.root_dir, self.location_rel, error))
         return output
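The git.py hunks above all rewrap the same pattern: run a command, and raise `IOError` with the captured stderr when the exit code is non-zero. A minimal sketch of that pattern, with a simplified stand-in for the module's `run_command` helper (the real helper lives in `translate.storage.versioncontrol` and has a richer signature):

```python
import subprocess

def run_command(command, cwd=None):
    """Run a command and return (exitcode, stdout, stderr) -- a simplified
    stand-in for the run_command helper the versioncontrol modules use."""
    proc = subprocess.Popen(command, cwd=cwd,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = proc.communicate()
    return proc.returncode, output.decode(), error.decode()

def show_head_file(root_dir, location_rel):
    # Same shape as git.getcleanfile() above: raise IOError on failure,
    # return the captured stdout on success.
    command = ["git", "show", "HEAD:%s" % location_rel]
    exitcode, output, error = run_command(command, root_dir)
    if exitcode != 0:
        raise IOError("[GIT] 'show' failed for ('%s', %s): %s" % (
                      root_dir, location_rel, error))
    return output
```

The reindentation in these hunks only moves the `%` operator before the line break so the continuation backslash can go; the raised message is unchanged.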
diff --git a/translate/storage/versioncontrol/hg.py b/translate/storage/versioncontrol/hg.py
index 65dcccb..9161e34 100644
--- a/translate/storage/versioncontrol/hg.py
+++ b/translate/storage/versioncontrol/hg.py
@@ -19,10 +19,9 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
 
-import os
-
-from translate.storage.versioncontrol import GenericRevisionControlSystem
-from translate.storage.versioncontrol import run_command, prepare_filelist, youngest_ancestor
+from translate.storage.versioncontrol import (GenericRevisionControlSystem,
+                                              prepare_filelist, run_command,
+                                              youngest_ancestor)
 
 
 def is_available():
@@ -118,14 +117,14 @@ class hg(GenericRevisionControlSystem):
         command.append(self.location_abs)
         exitcode, output_commit, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[Mercurial] Error running '%s': %s" \
-                    % (command, error))
+            raise IOError("[Mercurial] Error running '%s': %s" % (
+                          command, error))
         # push changes
         command = ["hg", "-R", self.root_dir, "push"]
         exitcode, output_push, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[Mercurial] Error running '%s': %s" \
-                    % (command, error))
+            raise IOError("[Mercurial] Error running '%s': %s" % (
+                          command, error))
         return output_commit + output_push
 
     def getcleanfile(self, revision=None):
@@ -135,6 +134,6 @@ class hg(GenericRevisionControlSystem):
                 self.location_abs]
         exitcode, output, error = run_command(command)
         if exitcode != 0:
-            raise IOError("[Mercurial] Error running '%s': %s" \
-                    % (command, error))
+            raise IOError("[Mercurial] Error running '%s': %s" % (
+                          command, error))
         return output
diff --git a/translate/storage/versioncontrol/svn.py b/translate/storage/versioncontrol/svn.py
index 00c4e23..889e20e 100644
--- a/translate/storage/versioncontrol/svn.py
+++ b/translate/storage/versioncontrol/svn.py
@@ -19,10 +19,9 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
 
-import os
-
-from translate.storage.versioncontrol import GenericRevisionControlSystem
-from translate.storage.versioncontrol import run_command, prepare_filelist, youngest_ancestor
+from translate.storage.versioncontrol import (GenericRevisionControlSystem,
+                                              prepare_filelist, run_command,
+                                              youngest_ancestor)
 
 
 def is_available():
@@ -54,7 +53,7 @@ class svn(GenericRevisionControlSystem):
     """Class to manage items under revision control of Subversion."""
 
     RCS_METADIR = ".svn"
-    SCAN_PARENTS = False
+    SCAN_PARENTS = True
 
     def update(self, revision=None, needs_revert=True):
         """update the working copy - remove local modifications if necessary"""
@@ -83,7 +82,7 @@ class svn(GenericRevisionControlSystem):
     def add(self, files, message=None, author=None):
         """Add and commit the new files."""
         files = prepare_filelist(files)
-        command = ["svn", "add", "-q", "--non-interactive", "--parents"] + files
+        command = ["svn", "add", "-q", "--non-interactive", "--parents", "--force"] + files
         exitcode, output, error = run_command(command)
         if exitcode != 0:
             raise IOError("[SVN] Error running SVN command '%s': %s" %
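Two behavioural changes hide in the svn.py hunk: `SCAN_PARENTS = True` makes the metadata search walk up parent directories (needed since Subversion 1.7 keeps a single `.svn` at the working-copy root), and `--force` lets `svn add` tolerate paths that are already versioned instead of aborting the whole batch. The command construction, restated as a standalone helper for illustration (`build_svn_add_command` is a hypothetical name, not part of the module):

```python
def build_svn_add_command(files):
    """Build the argument list used by svn.add() after this change.
    --parents creates intermediate directories; --force skips the error
    for paths that are already under version control."""
    return ["svn", "add", "-q", "--non-interactive",
            "--parents", "--force"] + list(files)
```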
diff --git a/translate/storage/versioncontrol/test_helper.py b/translate/storage/versioncontrol/test_helper.py
index c9b61f0..73d2129 100644
--- a/translate/storage/versioncontrol/test_helper.py
+++ b/translate/storage/versioncontrol/test_helper.py
@@ -3,7 +3,8 @@
 import os.path
 import shutil
 
-from translate.storage.versioncontrol import run_command, get_versioned_object
+from translate.storage.versioncontrol import get_versioned_object, run_command
+
 
 class HelperTest(object):
 
@@ -38,7 +39,7 @@ class HelperTest(object):
     def create_files(self, files_dict):
         """Creates file(s) named after the keys, with contents from the values
         of the dictionary."""
-        for name, content in files_dict.iteritems():
+        for name, content in files_dict.items():
             assert not os.path.isabs(name)
             dirs = os.path.dirname(name)
             if dirs:
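The `iteritems()` to `items()` switch in test_helper.py is part of the release's Python 3 groundwork: `dict.items()` exists on both Python 2 (as a list) and Python 3 (as a view), while `iteritems()` was removed in Python 3. A small illustration of the surviving spelling:

```python
files_dict = {"README": "docs", "po/af.po": "msgid"}

# dict.items() works on Python 2 and 3; dict.iteritems() is Python-2-only.
for name, content in files_dict.items():
    assert isinstance(name, str) and isinstance(content, str)
```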
diff --git a/translate/storage/versioncontrol/test_svn.py b/translate/storage/versioncontrol/test_svn.py
index ec791ed..33090f0 100644
--- a/translate/storage/versioncontrol/test_svn.py
+++ b/translate/storage/versioncontrol/test_svn.py
@@ -2,9 +2,10 @@
 
 import os.path
 
+from translate.storage.versioncontrol import (get_versioned_object, run_command,
+                                              svn)
 from translate.storage.versioncontrol.test_helper import HelperTest
-from translate.storage.versioncontrol import svn
-from translate.storage.versioncontrol import run_command, get_versioned_object
+
 
 class TestSVN(HelperTest):
 
@@ -13,12 +14,11 @@ class TestSVN(HelperTest):
         run_command(["svn", "co", "file:///%s/repo" % self.path, "checkout"], cwd=self.path)
 
     def test_detection(self):
-        print self.co_path
+        print(self.co_path)
         o = get_versioned_object(self.co_path)
         assert isinstance(o, svn.svn)
         assert o.location_abs == self.co_path
 
-
     def test_add(self):
         o = get_versioned_object(self.co_path)
         self.create_files({
diff --git a/translate/storage/wordfast.py b/translate/storage/wordfast.py
index 05a0a92..f65e0ba 100644
--- a/translate/storage/wordfast.py
+++ b/translate/storage/wordfast.py
@@ -64,16 +64,16 @@ Header
 Escaping
     Wordfast TM implements a form of escaping that covers two aspects:
 
-    1. Placeable: bold, formating, etc.  These are left as is and ignored.
-    It is up to the editor and future placeable implementation to manage these.
+    1. Placeable: bold, formating, etc.  These are left as is and ignored.  It
+       is up to the editor and future placeable implementation to manage these.
 
-    2. Escapes: items that may confuse Excel or translators are
-    escaped as &'XX;. These are fully implemented and are converted to and from
-    Unicode.  By observing behaviour and reading documentation we where able
-    to observe all possible escapes. Unfortunately the escaping differs slightly
-    between Windows and Mac version.  This might cause errors in future.
-    Functions allow for ``<_wf_to_char>`` and back to Wordfast escape
-    (``<_char_to_wf>``).
+    2. Escapes: items that may confuse Excel or translators are escaped as
+       ``&'XX;``. These are fully implemented and are converted to and from
+       Unicode.  By observing behaviour and reading documentation we where able
+       to observe all possible escapes. Unfortunately the escaping differs
+       slightly between Windows and Mac version.  This might cause errors in
+       future.  Functions allow for ``<_wf_to_char>`` and back to Wordfast
+       escape (``<_char_to_wf>``).
 
 Extended Attributes
     The last 4 columns allow users to define and manage extended attributes.
@@ -81,11 +81,11 @@ Extended Attributes
 """
 
 import csv
-import sys
 import time
 
 from translate.storage import base
 
+
 WF_TIMEFORMAT = "%Y%m%d~%H%M%S"
 """Time format used by Wordfast"""
 
@@ -99,17 +99,18 @@ WF_FIELDNAMES = ["date", "user", "reuse", "src-lang", "source", "target-lang",
 """Field names for a Wordfast TU"""
 
 WF_FIELDNAMES_HEADER_DEFAULTS = {
-"date": "%19000101~121212",
-"userlist": "%User ID,TT,TT Translate-Toolkit",
-"tucount": "%TU=00000001",
-"src-lang": "%EN-US",
-"version": "%Wordfast TM v.5.51w9/00",
-"target-lang": "",
-"license": "%---00000001",
-"attr1list": "",
-"attr2list": "",
-"attr3list": "",
-"attr4list": ""}
+    "date": "%19000101~121212",
+    "userlist": "%User ID,TT,TT Translate-Toolkit",
+    "tucount": "%TU=00000001",
+    "src-lang": "%EN-US",
+    "version": "%Wordfast TM v.5.51w9/00",
+    "target-lang": "",
+    "license": "%---00000001",
+    "attr1list": "",
+    "attr2list": "",
+    "attr3list": "",
+    "attr4list": "",
+}
 """Default or minimum header entries for a Wordfast file"""
 
 # TODO Needs validation.  The following need to be checked against a WF TM file
@@ -353,7 +354,7 @@ class WordfastUnit(base.TranslationUnit):
 
 class WordfastTMFile(base.TranslationStore):
     """A Wordfast translation memory file"""
-    Name = _("Wordfast Translation Memory")
+    Name = "Wordfast Translation Memory"
     Mimetypes = ["application/x-wordfast"]
     Extensions = ["txt"]
 
diff --git a/translate/storage/workflow.py b/translate/storage/workflow.py
index ab3be5c..cd4e13f 100644
--- a/translate/storage/workflow.py
+++ b/translate/storage/workflow.py
@@ -228,7 +228,8 @@ class Workflow(object):
         if to_state not in self.states:
             raise StateNotInWorkflowError(to_state)
         if (self._current_state, to_state) not in self.edges:
-            raise TransitionError('No edge between edges %s and %s' % (self._current_state, to_state))
+            raise TransitionError('No edge between edges %s and %s' % (
+                                  self._current_state, to_state))
         self._current_state.leave(self._workflow_obj)
         self._current_state = to_state
         self._current_state.enter(self._workflow_obj)
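The workflow.py hunk only rewraps the `TransitionError` raise, but the surrounding logic is easy to miss in diff form: transitions are validated against a set of `(from_state, to_state)` edges before the state actually changes. A minimal sketch (names follow `translate.storage.workflow`, but this is a stripped-down illustration, not the full class):

```python
class TransitionError(Exception):
    pass

class Workflow(object):
    """Sketch of the edge check the hunk above reformats."""
    def __init__(self, states, edges, initial):
        self.states = states            # set of state names
        self.edges = edges              # set of (from_state, to_state) pairs
        self._current_state = initial

    def trans(self, to_state):
        if (self._current_state, to_state) not in self.edges:
            raise TransitionError('No edge between edges %s and %s' % (
                                  self._current_state, to_state))
        self._current_state = to_state
```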
diff --git a/translate/storage/xliff.py b/translate/storage/xliff.py
index 40b7e4f..c012552 100644
--- a/translate/storage/xliff.py
+++ b/translate/storage/xliff.py
@@ -28,9 +28,10 @@ from lxml import etree
 from translate.misc.multistring import multistring
 from translate.storage import base, lisa
 from translate.storage.lisa import getXMLspace
-from translate.storage.placeables.lisa import xml_to_strelem, strelem_to_xml
+from translate.storage.placeables.lisa import strelem_to_xml, xml_to_strelem
 from translate.storage.workflow import StateEnum as state
 
+
 # TODO: handle translation types
 
 ID_SEPARATOR = u"\04"
@@ -52,7 +53,7 @@ class xliffunit(lisa.LISAunit):
 
     _default_xml_space = "default"
 
-    #TODO: id and all the trans-unit level stuff
+    # TODO: id and all the trans-unit level stuff
 
     S_UNTRANSLATED = state.EMPTY
     S_NEEDS_TRANSLATION = state.NEEDS_WORK
@@ -184,8 +185,8 @@ class xliffunit(lisa.LISAunit):
         :param txt: Alternative translation of the source text.
         """
 
-        #TODO: support adding a source tag ad match quality attribute.  At
-        # the source tag is needed to inject fuzzy matches from a TM.
+        # TODO: support adding a source tag ad match quality attribute.  At the
+        # source tag is needed to inject fuzzy matches from a TM.
         if isinstance(txt, str):
             txt = txt.decode("utf-8")
         alttrans = etree.SubElement(self.xmlelement, self.namespaced("alt-trans"))
@@ -226,8 +227,8 @@ class xliffunit(lisa.LISAunit):
                 targetnode = node.iterdescendants(self.namespaced("target"))
                 newunit.target = lisa.getText(targetnode.next(),
                                               getXMLspace(node, self._default_xml_space))
-                #TODO: support multiple targets better
-                #TODO: support notes in alt-trans
+                # TODO: support multiple targets better
+                # TODO: support notes in alt-trans
                 newunit.xmlelement = node
 
                 translist.append(newunit)
@@ -285,7 +286,7 @@ class xliffunit(lisa.LISAunit):
 
     def adderror(self, errorname, errortext):
         """Adds an error message to this unit."""
-        #TODO: consider factoring out: some duplication between XLIFF and TMX
+        # TODO: consider factoring out: some duplication between XLIFF and TMX
         text = errorname
         if errortext:
             text += ': ' + errortext
@@ -293,7 +294,7 @@ class xliffunit(lisa.LISAunit):
 
     def geterrors(self):
         """Get all error messages."""
-        #TODO: consider factoring out: some duplication between XLIFF and TMX
+        # TODO: consider factoring out: some duplication between XLIFF and TMX
         notelist = self._getnotelist(origin="pofilter")
         errordict = {}
         for note in notelist:
@@ -321,7 +322,7 @@ class xliffunit(lisa.LISAunit):
         if not self.isapproved() and state_n > self.S_UNREVIEWED:
             state_n = self.S_UNREVIEWED
 
-        return  state_n
+        return state_n
 
     def set_state_n(self, value):
         if value not in self.statemap_r:
@@ -329,7 +330,7 @@ class xliffunit(lisa.LISAunit):
 
         targetnode = self.getlanguageNode(lang=None, index=1)
 
-        #FIXME: handle state qualifiers
+        # FIXME: handle state qualifiers
         if value == self.S_UNTRANSLATED:
             if targetnode is not None and "state" in targetnode.attrib:
                 del targetnode.attrib["state"]
@@ -368,10 +369,10 @@ class xliffunit(lisa.LISAunit):
             self.set_state_n(self.S_UNREVIEWED)
 
     def isfuzzy(self):
-#        targetnode = self.getlanguageNode(lang=None, index=1)
-#        return not targetnode is None and \
-#                (targetnode.get("state-qualifier") == "fuzzy-match" or \
-#                targetnode.get("state") == "needs-review-translation")
+        # targetnode = self.getlanguageNode(lang=None, index=1)
+        # return not targetnode is None and \
+        #         (targetnode.get("state-qualifier") == "fuzzy-match" or \
+        #         targetnode.get("state") == "needs-review-translation")
         return not self.isapproved() and bool(self.target)
 
     def markfuzzy(self, value=True):
@@ -464,7 +465,7 @@ class xliffunit(lisa.LISAunit):
         """Returns the contexts in the context groups with the specified name"""
         groups = []
         grouptags = self.xmlelement.iterdescendants(self.namespaced("context-group"))
-        #TODO: conbine name in query
+        # TODO: conbine name in query
         for group in grouptags:
             if group.get("name") == name:
                 contexts = group.iterdescendants(self.namespaced("context"))
@@ -479,7 +480,7 @@ class xliffunit(lisa.LISAunit):
         return self.xmlelement.get("restype")
 
     def merge(self, otherunit, overwrite=False, comments=True, authoritative=False):
-        #TODO: consider other attributes like "approved"
+        # TODO: consider other attributes like "approved"
         super(xliffunit, self).merge(otherunit, overwrite, comments)
         if self.target:
             self.marktranslated()
@@ -492,7 +493,7 @@ class xliffunit(lisa.LISAunit):
 
     def correctorigin(self, node, origin):
         """Check against node tag's origin (e.g note or alt-trans)"""
-        if origin == None:
+        if origin is None:
             return True
         elif origin in node.get("from", ""):
             return True
@@ -501,6 +502,7 @@ class xliffunit(lisa.LISAunit):
         else:
             return False
 
+    @classmethod
     def multistring_to_rich(cls, mstr):
         """Override :meth:`TranslationUnit.multistring_to_rich` which is used
         by the ``rich_source`` and ``rich_target`` properties."""
@@ -511,19 +513,18 @@ class xliffunit(lisa.LISAunit):
             strings = [mstr]
 
         return [xml_to_strelem(s) for s in strings]
-    multistring_to_rich = classmethod(multistring_to_rich)
 
+    @classmethod
     def rich_to_multistring(cls, elem_list):
         """Override :meth:`TranslationUnit.rich_to_multistring` which is used
         by the ``rich_source`` and ``rich_target`` properties."""
         return multistring([unicode(elem) for elem in elem_list])
-    rich_to_multistring = classmethod(rich_to_multistring)
 
 
 class xlifffile(lisa.LISAfile):
     """Class representing a XLIFF file store."""
     UnitClass = xliffunit
-    Name = _("XLIFF Translation File")
+    Name = "XLIFF Translation File"
     Mimetypes = ["application/x-xliff", "application/x-xliff+xml"]
     Extensions = ["xlf", "xliff", "sdlxliff"]
     rootNode = "xliff"
@@ -753,11 +754,11 @@ class xlifffile(lisa.LISAfile):
         if self.body is None:
             return False
         self._messagenum = len(list(self.body.iterdescendants(self.namespaced("trans-unit"))))
-        #TODO: was 0 based before - consider
+        # TODO: was 0 based before - consider
     #    messagenum = len(self.units)
-        #TODO: we want to number them consecutively inside a body/file tag
-        #instead of globally in the whole XLIFF file, but using len(self.units)
-        #will be much faster
+        # TODO: we want to number them consecutively inside a body/file tag
+        # instead of globally in the whole XLIFF file, but using
+        # len(self.units) will be much faster
         return True
 
     def creategroup(self, filename="NoName", createifmissing=False, restype=None):
@@ -774,15 +775,15 @@ class xlifffile(lisa.LISAfile):
         self.removedefaultfile()
         return super(xlifffile, self).__str__()
 
+    @classmethod
     def parsestring(cls, storestring):
         """Parses the string to return the correct file object"""
         xliff = super(xlifffile, cls).parsestring(storestring)
         if xliff.units:
             header = xliff.units[0]
-            if ("gettext-domain-header" in (header.getrestype() or "") \
-                    or xliff.getdatatype() == "po") \
-                    and cls.__name__.lower() != "poxlifffile":
+            if (("gettext-domain-header" in (header.getrestype() or "") or
+                 xliff.getdatatype() == "po") and
+                 cls.__name__.lower() != "poxlifffile"):
                 from translate.storage import poxliff
                 xliff = poxliff.PoXliffFile.parsestring(storestring)
         return xliff
-    parsestring = classmethod(parsestring)
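Several xliff.py hunks replace the old `name = classmethod(name)` rebinding with the `@classmethod` decorator. The two spellings are exactly equivalent; the decorator form simply declares the intent before the body instead of after it:

```python
class Old(object):
    def parse(cls, data):
        return cls.__name__, data
    parse = classmethod(parse)   # pre-decorator spelling, as removed above

class New(object):
    @classmethod
    def parse(cls, data):        # identical semantics, declared up front
        return cls.__name__, data
```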
diff --git a/translate/storage/xml_extract/extract.py b/translate/storage/xml_extract/extract.py
index 1f43c6c..a66dae4 100644
--- a/translate/storage/xml_extract/extract.py
+++ b/translate/storage/xml_extract/extract.py
@@ -18,29 +18,19 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 
+from contextlib import contextmanager, nested
+
 from lxml import etree
 
 from translate.storage import base
-from translate.misc.typecheck import accepts, Self, IsCallable, IsOneOf, Any, Class
-from translate.misc.typecheck.typeclasses import Number
-from translate.misc.contextlib import contextmanager, nested
-from translate.misc.context import with_
-from translate.storage.xml_extract import xpath_breadcrumb
-from translate.storage.xml_extract import misc
-from translate.storage.placeables import xliff, StringElem
-
-
-def Nullable(t):
-    return IsOneOf(t, type(None))
-
-TranslatableClass = Class('Translatable')
+from translate.storage.placeables import StringElem, xliff
+from translate.storage.xml_extract import misc, xpath_breadcrumb
 
 
 class Translatable(object):
     """A node corresponds to a translatable element. A node may
        have children, which correspond to placeables."""
 
-    @accepts(Self(), unicode, unicode, etree._Element, [IsOneOf(TranslatableClass, unicode)])
     def __init__(self, placeable_name, xpath, dom_node, source):
         self.placeable_name = placeable_name
         self.source = source
@@ -54,7 +44,6 @@ class Translatable(object):
     placeables = property(_get_placeables)
 
 
-@accepts(IsCallable(), Translatable, state=[Any()])
 def reduce_unit_tree(f, unit_node, *state):
     return misc.reduce_tree(f, unit_node, unit_node, lambda unit_node: unit_node.placeables, *state)
 
@@ -72,7 +61,6 @@ class ParseState(object):
         self.nsmap = nsmap
 
 
-@accepts(etree._Element, ParseState)
 def _process_placeable(dom_node, state):
     """Run find_translatable_dom_nodes on the current dom_node"""
     placeable = find_translatable_dom_nodes(dom_node, state)
@@ -89,7 +77,6 @@ def _process_placeable(dom_node, state):
         raise Exception("BUG: find_translatable_dom_nodes should never return more than a single translatable")
 
 
-@accepts(etree._Element, ParseState)
 def _process_placeables(dom_node, state):
     """Return a list of placeables and list with
     alternating string-placeable objects. The former is
@@ -102,7 +89,6 @@ def _process_placeables(dom_node, state):
     return source
 
 
-@accepts(etree._Element, ParseState)
 def _process_translatable(dom_node, state):
     source = [unicode(dom_node.text or u"")] + _process_placeables(dom_node, state)
     translatable = Translatable(state.placeable_name, state.xpath_breadcrumb.xpath, dom_node, source)
@@ -110,7 +96,6 @@ def _process_translatable(dom_node, state):
     return [translatable]
 
 
-@accepts(etree._Element, ParseState)
 def _process_children(dom_node, state):
     _namespace, tag = misc.parse_tag(dom_node.tag)
     children = [find_translatable_dom_nodes(child, state) for child in dom_node]
@@ -130,7 +115,6 @@ def compact_tag(nsmap, namespace, tag):
         return u'{%s}%s' % (namespace, tag)
 
 
-@accepts(etree._Element, ParseState)
 def find_translatable_dom_nodes(dom_node, state):
     # For now, we only want to deal with XML elements.
     # And we want to avoid processing instructions, which
@@ -164,12 +148,11 @@ def find_translatable_dom_nodes(dom_node, state):
         yield state.is_inline
         state.is_inline = old_inline
 
-    def with_block(xpath_breadcrumb, placeable_name, is_inline):
+    with nested(xpath_set(), placeable_set(), inline_set()):
         if (namespace, tag) not in state.no_translate_content_elements:
             return _process_translatable(dom_node, state)
         else:
             return _process_children(dom_node, state)
-    return with_(nested(xpath_set(), placeable_set(), inline_set()), with_block)
 
 
 class IdMaker(object):
@@ -188,7 +171,6 @@ class IdMaker(object):
         return obj in self._obj_id_map
 
 
-@accepts(Nullable(Translatable), Translatable, IdMaker)
 def _to_placeables(parent_translatable, translatable, id_maker):
     result = []
     for chunk in translatable.source:
@@ -203,7 +185,6 @@ def _to_placeables(parent_translatable, translatable, id_maker):
     return result
 
 
-@accepts(base.TranslationStore, Nullable(Translatable), Translatable, IdMaker)
 def _add_translatable_to_store(store, parent_translatable, translatable, id_maker):
     """Construct a new translation unit, set its source and location
     information and add it to 'store'.
@@ -214,7 +195,6 @@ def _add_translatable_to_store(store, parent_translatable, translatable, id_make
     store.addunit(unit)
 
 
-@accepts(Translatable)
 def _contains_translatable_text(translatable):
     """Checks whether translatable contains any chunks of text which contain
     more than whitespace.
@@ -227,7 +207,6 @@ def _contains_translatable_text(translatable):
     return False
 
 
-@accepts(base.TranslationStore)
 def _make_store_adder(store):
     """Return a function which, when called with a Translatable will add
     a unit to 'store'. The placeables will represented as strings according
@@ -240,7 +219,6 @@ def _make_store_adder(store):
     return add_to_store
 
 
-@accepts([Translatable], IsCallable(), Nullable(Translatable), Number)
 def _walk_translatable_tree(translatables, f, parent_translatable, rid):
     for translatable in translatables:
         if _contains_translatable_text(translatable) and not translatable.is_inline:
@@ -257,7 +235,6 @@ def reverse_map(a_map):
     return dict((value, key) for key, value in a_map.iteritems())
 
 
-@accepts(lambda obj: hasattr(obj, "read"), base.TranslationStore, ParseState, Nullable(IsCallable()))
 def build_store(odf_file, store, parse_state, store_adder=None):
     """Utility function for loading xml_filename"""
     store_adder = store_adder or _make_store_adder(store)
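The extract.py hunk replaces `translate.misc.contextlib`'s `with_(nested(...), with_block)` idiom with a plain `with nested(...)` statement. Note that `contextlib.nested` exists only on Python 2 (it was removed in 3.2); on Python 3 the equivalent state-stacking is done with `ExitStack`. A sketch of the same pattern under that assumption (`push` is a hypothetical helper mimicking the `xpath_set`/`placeable_set`/`inline_set` context managers above):

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def push(trail, value):
    # Save a piece of traversal state on entry, restore it on exit,
    # like the xpath/placeable/inline contextmanagers in extract.py.
    trail.append(value)
    try:
        yield
    finally:
        trail.pop()

trail = []
with ExitStack() as stack:   # Python 3 replacement for contextlib.nested
    stack.enter_context(push(trail, "xpath"))
    stack.enter_context(push(trail, "placeable"))
    inside = list(trail)
# on exit, both contexts unwind in reverse order
```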
diff --git a/translate/storage/xml_extract/generate.py b/translate/storage/xml_extract/generate.py
index 5834145..6dae05e 100644
--- a/translate/storage/xml_extract/generate.py
+++ b/translate/storage/xml_extract/generate.py
@@ -22,16 +22,10 @@
 import lxml.etree as etree
 
 from translate.storage import base
-
-from translate.misc.typecheck import accepts, IsCallable
-from translate.misc.typecheck.typeclasses import Number
-from translate.storage.xml_extract import misc
-from translate.storage.xml_extract import extract
-from translate.storage.xml_extract import unit_tree
+from translate.storage.xml_extract import extract, misc, unit_tree
 from translate.storage.xml_name import XmlNamer
 
 
-@accepts(etree._Element)
 def _get_tag_arrays(dom_node):
     """Return a dictionary indexed by child tag names, where each tag is associated with an array
     of all the child nodes with matching the tag name, in the order in which they appear as children
@@ -50,7 +44,6 @@ def _get_tag_arrays(dom_node):
     return child_dict
 
 
-@accepts(etree._Element, unit_tree.XPathTree, IsCallable())
 def apply_translations(dom_node, unit_node, do_translate):
     tag_array = _get_tag_arrays(dom_node)
     for unit_child_index, unit_child in unit_node.children.iteritems():
@@ -67,18 +60,16 @@ def apply_translations(dom_node, unit_node, do_translate):
         except IndexError:
             pass
     # If there is a translation unit associated with this unit_node...
-    if unit_node.unit != None:
+    if unit_node.unit is not None:
         # The invoke do_translate on the dom_node and the unit; do_translate
         # should replace the text in dom_node with the text in unit_node.
         do_translate(dom_node, unit_node.unit)
 
 
-@accepts(IsCallable(), etree._Element, state=[Number])
 def reduce_dom_tree(f, dom_node, *state):
     return misc.reduce_tree(f, dom_node, dom_node, lambda dom_node: dom_node, *state)
 
 
-@accepts(etree._Element, etree._Element)
 def find_dom_root(parent_dom_node, dom_node):
     """
     .. seealso:: :meth:`find_placeable_dom_tree_roots`
@@ -93,7 +84,6 @@ def find_dom_root(parent_dom_node, dom_node):
         return find_dom_root(parent_dom_node, dom_node.getparent())
 
 
-@accepts(extract.Translatable)
 def find_placeable_dom_tree_roots(unit_node):
     """For an inline placeable, find the root DOM node for the placeable in its
     parent.
@@ -117,7 +107,6 @@ def find_placeable_dom_tree_roots(unit_node):
     return extract.reduce_unit_tree(set_dom_root_for_unit_node, unit_node, {})
 
 
-@accepts(extract.Translatable, etree._Element)
 def _map_source_dom_to_doc_dom(unit_node, source_dom_node):
     """Creating a mapping from the DOM nodes in source_dom_node which correspond to
     placeables, with DOM nodes in the XML document template (this information is obtained
@@ -148,7 +137,6 @@ def _map_source_dom_to_doc_dom(unit_node, source_dom_node):
     return source_dom_to_doc_dom
 
 
-@accepts(etree._Element, etree._Element)
 def _map_target_dom_to_source_dom(source_dom_node, target_dom_node):
     """Associate placeables in source_dom_node and target_dom_node which
     have the same 'id' attributes.
@@ -192,7 +180,6 @@ def _build_target_dom_to_doc_dom(unit_node, source_dom, target_dom):
     return misc.compose_mappings(target_dom_to_source_dom, source_dom_to_doc_dom)
 
 
-@accepts(etree._Element, {etree._Element: etree._Element})
 def _get_translated_node(target_node, target_dom_to_doc_dom):
     """Convenience function to get node corresponding to 'target_node'
     and to assign the tail text of 'target_node' to this node."""
@@ -201,7 +188,6 @@ def _get_translated_node(target_node, target_dom_to_doc_dom):
     return dom_node
 
 
-@accepts(etree._Element, etree._Element, {etree._Element: etree._Element})
 def _build_translated_dom(dom_node, target_node, target_dom_to_doc_dom):
     """Use the "shape" of 'target_node' (which is a DOM tree) to insert nodes
     into the DOM tree rooted at 'dom_node'.
@@ -224,7 +210,6 @@ def _build_translated_dom(dom_node, target_node, target_dom_to_doc_dom):
         _build_translated_dom(dom_child, target_child, target_dom_to_doc_dom)
 
 
-@accepts(IsCallable())
 def replace_dom_text(make_parse_state):
     """Return a function::
 
@@ -235,7 +220,6 @@ def replace_dom_text(make_parse_state):
       positions in unit.source).
     """
 
-    @accepts(etree._Element, base.TranslationUnit)
     def action(dom_node, unit):
         """Use the unit's target (or source in the case where there is no translation)
         to update the text in the dom_node and at the tails of its children."""
diff --git a/translate/storage/xml_extract/misc.py b/translate/storage/xml_extract/misc.py
index 63f64c1..bae5770 100644
--- a/translate/storage/xml_extract/misc.py
+++ b/translate/storage/xml_extract/misc.py
@@ -20,10 +20,14 @@
 
 import re
 
-from translate.misc.typecheck import accepts, IsCallable, Any
+
+# Python 3 compatibility
+try:
+    unicode
+except NameError:
+    unicode = str
 
 
-@accepts(IsCallable(), Any(), Any(), IsCallable(), state=[Any()])
 def reduce_tree(f, parent_unit_node, unit_node, get_children, *state):
     """Enumerate a tree, applying f to in a pre-order fashion to each node.
 
@@ -55,7 +59,7 @@ def compose_mappings(left, right):
     which have corresponding keys in right will have their keys mapped
     to values in right. """
     result_map = {}
-    for left_key, left_val in left.iteritems():
+    for left_key, left_val in left.items():
         try:
             result_map[left_key] = right[left_val]
         except KeyError:
@@ -72,6 +76,13 @@ def parse_tag(full_tag):
     """
     match = tag_pattern.match(full_tag)
     if match is not None:
-        return unicode(match.groupdict()['namespace']), unicode(match.groupdict()['tag'])
+        # Slightly hacky way of supporting 2+3
+        ret = []
+        for k in ("namespace", "tag"):
+            value = match.groupdict()[k] or ""
+            if not isinstance(value, unicode):
+                value = unicode(value, encoding="utf-8")
+            ret.append(value)
+        return ret[0], ret[1]
     else:
         raise Exception('Passed an invalid tag')
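The misc.py hunk makes `parse_tag` return text strings on both Python 2 and 3 via the `unicode = str` shim. The function itself splits lxml's Clark-notation names (`{namespace}tag`) with a regex; a simplified re-implementation for illustration (the actual `tag_pattern` in the module may differ in detail):

```python
import re

# Matches an optional '{namespace}' prefix followed by the bare tag.
tag_pattern = re.compile(r'(\{(?P<namespace>[^}]*)\})?(?P<tag>[^}]+)')

def parse_tag(full_tag):
    """Split an lxml-style '{namespace}tag' name into (namespace, tag);
    a sketch of misc.parse_tag, returning '' when there is no namespace."""
    match = tag_pattern.match(full_tag)
    if match is None:
        raise Exception('Passed an invalid tag')
    return match.group('namespace') or '', match.group('tag')
```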
diff --git a/translate/storage/xml_extract/test_misc.py b/translate/storage/xml_extract/test_misc.py
index 8dc33df..56490dd 100644
--- a/translate/storage/xml_extract/test_misc.py
+++ b/translate/storage/xml_extract/test_misc.py
@@ -21,6 +21,7 @@
 
 from translate.storage.xml_extract import misc
 
+
 # reduce_tree
 
 test_tree_1 = (u'a',
diff --git a/translate/storage/xml_extract/test_unit_tree.py b/translate/storage/xml_extract/test_unit_tree.py
index d22907d..0d89e47 100644
--- a/translate/storage/xml_extract/test_unit_tree.py
+++ b/translate/storage/xml_extract/test_unit_tree.py
@@ -19,8 +19,9 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 #
 
-from translate.storage.xml_extract import unit_tree
 from translate.storage import factory
+from translate.storage.xml_extract import unit_tree
+
 
 # _split_xpath_component
 
diff --git a/translate/storage/xml_extract/unit_tree.py b/translate/storage/xml_extract/unit_tree.py
index eabb270..f7b8ab7 100644
--- a/translate/storage/xml_extract/unit_tree.py
+++ b/translate/storage/xml_extract/unit_tree.py
@@ -22,13 +22,10 @@
 from lxml import etree
 
 from translate.storage import base, xliff
-from translate.misc.typecheck import accepts, Self, IsOneOf
-from translate.misc.typecheck.typeclasses import Number
 
 
 class XPathTree(object):
 
-    @accepts(Self(), IsOneOf(base.TranslationUnit, type(None)))
     def __init__(self, unit=None):
         self.unit = unit
         self.children = {}
@@ -39,7 +36,6 @@ class XPathTree(object):
             self.children == other.children
 
 
- at accepts(unicode)
 def _split_xpath_component(xpath_component):
     """Split an xpath component into a tag-index tuple.
 
@@ -53,7 +49,6 @@ def _split_xpath_component(xpath_component):
     return tag, index
 
 
- at accepts(unicode)
 def _split_xpath(xpath):
     """Split an 'xpath' string separated by / into a reversed list of its components. Thus:
 
@@ -70,7 +65,6 @@ def _split_xpath(xpath):
     return list(reversed(components))
 
 
- at accepts(IsOneOf(etree._Element, XPathTree), [(unicode, Number)], base.TranslationUnit)
 def _add_unit_to_tree(node, xpath_components, unit):
     """Walk down the tree rooted a node, and follow nodes which correspond to the
     components of xpath_components. When reaching the end of xpath_components,
@@ -101,7 +95,6 @@ def _add_unit_to_tree(node, xpath_components, unit):
         node.unit = unit
 
 
- at accepts(base.TranslationStore)
 def build_unit_tree(store):
     """Enumerate a translation store and build a tree with XPath components as nodes
     and where a node contains a unit if a path from the root of the tree to the node
diff --git a/translate/storage/xml_extract/xpath_breadcrumb.py b/translate/storage/xml_extract/xpath_breadcrumb.py
index 55d2af7..94a6765 100644
--- a/translate/storage/xml_extract/xpath_breadcrumb.py
+++ b/translate/storage/xml_extract/xpath_breadcrumb.py
@@ -19,8 +19,6 @@
 # along with this program; if not, see <http://www.gnu.org/licenses/>.
 #
 
-from translate.misc.typecheck import accepts, Self
-
 
 class XPathBreadcrumb(object):
     """A class which is used to build XPath-like paths as a DOM tree is
@@ -59,7 +57,6 @@ class XPathBreadcrumb(object):
         self._xpath = []
         self._tagtally = [{}]
 
-    @accepts(Self(), unicode)
     def start_tag(self, tag):
         tally_dict = self._tagtally[-1]
         tally = tally_dict.get(tag, -1) + 1
diff --git a/translate/storage/zip.py b/translate/storage/zip.py
index 4f03029..5fd12fb 100644
--- a/translate/storage/zip.py
+++ b/translate/storage/zip.py
@@ -23,14 +23,13 @@
 # Perhaps all methods should work with a wildcard to limit searches in some
 # way (examples: *.po, base.xlf, pootle-terminology.tbx)
 
-#TODO: consider also providing directories as we currently provide files
+# TODO: consider also providing directories as we currently provide files
 
 from os import path
 from zipfile import ZipFile
 
-from translate.storage import factory
-from translate.storage import directory
 from translate.misc import wStringIO
+from translate.storage import directory, factory
 
 
 class ZIPFile(directory.Directory):
@@ -46,7 +45,7 @@ class ZIPFile(directory.Directory):
             strfile = wStringIO.StringIO(self.archive.read(path.join(dirname, filename)))
             strfile.filename = filename
             store = factory.getobject(strfile)
-            #TODO: don't regenerate all the storage objects
+            # TODO: don't regenerate all the storage objects
             for unit in store.unit_iter():
                 yield unit
 
diff --git a/translate/tools/build_tmdb.py b/translate/tools/build_tmdb.py
index 6c9cb28..0d907c9 100644
--- a/translate/tools/build_tmdb.py
+++ b/translate/tools/build_tmdb.py
@@ -20,17 +20,16 @@
 
 """Import units from translations files into tmdb."""
 
-import os
-import sys
 import logging
+import os
+from argparse import ArgumentParser
 
-from optparse import OptionParser
+from translate.storage import factory, tmdb
 
-from translate.storage import factory
-from translate.storage import tmdb
 
 logger = logging.getLogger(__name__)
 
+
 class Builder:
 
     def __init__(self, tmdbfile, source_lang, target_lang, filenames):
@@ -51,15 +50,15 @@ class Builder:
     def handlefile(self, filename):
         try:
             store = factory.getobject(filename)
-        except Exception, e:
+        except Exception as e:
             logger.error(str(e))
             return
         # do something useful with the store and db
         try:
             self.tmdb.add_store(store, self.source_lang, self.target_lang, commit=False)
-        except Exception, e:
-            print e
-        print "File added:", filename
+        except Exception as e:
+            print(e)
+        print("File added:", filename)
 
     def handlefiles(self, dirname, filenames):
         for filename in filenames:
@@ -78,27 +77,24 @@ class Builder:
 
 
 def main():
-    parser = OptionParser(usage="%prog [options] <input files>")
-    parser.add_option(
+    parser = ArgumentParser()
+    parser.add_argument(
         "-d", "--tmdb", dest="tmdb_file", default="tm.db",
         help="translation memory database file (default: tm.db)")
-    parser.add_option(
+    parser.add_argument(
         "-s", "--import-source-lang", dest="source_lang", default="en",
         help="source language of translation files (default: en)")
-    parser.add_option(
+    parser.add_argument(
         "-t", "--import-target-lang", dest="target_lang",
-        help="target language of translation files")
-    (options, args) = parser.parse_args()
-
-    if not options.target_lang:
-        parser.error('No target language specified.')
-
-    if len(args) < 1:
-        parser.error('No input file(s) specified.')
+        help="target language of translation files", required=True)
+    parser.add_argument(
+        "files", metavar="input files", nargs="+"
+    )
+    args = parser.parse_args()
 
     logging.basicConfig(format="%(name)s: %(levelname)s: %(message)s")
 
-    Builder(options.tmdb_file, options.source_lang, options.target_lang, args)
+    Builder(args.tmdb_file, args.source_lang, args.target_lang, args.files)
 
 if __name__ == '__main__':
     main()
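The build_tmdb changes above replace optparse with argparse; `required=True` and a positional `files` argument let argparse do the validation the removed `parser.error()` calls did by hand. A minimal sketch of the pattern, with option names copied from the hunk and the parsed command line invented for illustration:

```python
from argparse import ArgumentParser

# required=True replaces the manual "No target language specified." check,
# and nargs="+" replaces the len(args) < 1 check: argparse exits with its
# own error message when either is missing.
parser = ArgumentParser()
parser.add_argument("-d", "--tmdb", dest="tmdb_file", default="tm.db")
parser.add_argument("-s", "--import-source-lang", dest="source_lang",
                    default="en")
parser.add_argument("-t", "--import-target-lang", dest="target_lang",
                    required=True)
parser.add_argument("files", metavar="input files", nargs="+")

# Parse an invented command line for illustration.
args = parser.parse_args(["-t", "af", "af/messages.po"])
```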
diff --git a/translate/tools/phppo2pypo.py b/translate/tools/phppo2pypo.py
index 6c252bf..554c011 100644
--- a/translate/tools/phppo2pypo.py
+++ b/translate/tools/phppo2pypo.py
@@ -23,8 +23,8 @@
 
 import re
 
-from translate.storage import po
 from translate.misc.multistring import multistring
+from translate.storage import po
 
 
 class phppo2pypo:
diff --git a/translate/tools/poclean.py b/translate/tools/poclean.py
index 0d089cf..becb693 100644
--- a/translate/tools/poclean.py
+++ b/translate/tools/poclean.py
@@ -27,8 +27,9 @@ with only the target text in from a text version of the RTF.
 
 import re
 
-from translate.storage import factory
 from translate.misc.multistring import multistring
+from translate.storage import factory
+
 
 tw4winre = re.compile(r"\{0>.*?<\}\d{1,3}\{>(.*?)<0\}", re.M | re.S)
 
diff --git a/translate/tools/pocompile.py b/translate/tools/pocompile.py
index 91f6bd6..4c1e3a2 100644
--- a/translate/tools/pocompile.py
+++ b/translate/tools/pocompile.py
@@ -24,9 +24,8 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import factory
-from translate.storage import mo
 from translate.misc.multistring import multistring
+from translate.storage import factory, mo
 
 
 def _do_msgidcomment(string):
diff --git a/translate/tools/poconflicts.py b/translate/tools/poconflicts.py
index 6e68478..43afada 100644
--- a/translate/tools/poconflicts.py
+++ b/translate/tools/poconflicts.py
@@ -24,12 +24,11 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import sys
 import os
+import sys
 
-from translate.storage import factory
-from translate.storage import po
 from translate.misc import optrecurse
+from translate.storage import factory, po
 
 
 class ConflictOptionParser(optrecurse.RecursiveOptionParser):
@@ -97,7 +96,7 @@ class ConflictOptionParser(optrecurse.RecursiveOptionParser):
             fullinputpath = self.getfullinputpath(options, inputpath)
             try:
                 success = self.processfile(None, options, fullinputpath)
-            except Exception, error:
+            except Exception as error:
                 if isinstance(error, KeyboardInterrupt):
                     raise
                 self.warning("Error processing: input %s" % (fullinputpath), options, sys.exc_info())
@@ -157,7 +156,7 @@ class ConflictOptionParser(optrecurse.RecursiveOptionParser):
 
     def outputconflicts(self, options):
         """saves the result of the conflict match"""
-        print "%d/%d different strings have conflicts" % (len(self.conflictmap), len(self.textmap))
+        print("%d/%d different strings have conflicts" % (len(self.conflictmap), len(self.textmap)))
         reducedmap = {}
 
         def str_len(x):
diff --git a/translate/tools/pocount.py b/translate/tools/pocount.py
index 9e27521..0f9d09e 100644
--- a/translate/tools/pocount.py
+++ b/translate/tools/pocount.py
@@ -26,13 +26,15 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from optparse import OptionParser
+from __future__ import print_function
+
+import logging
 import os
 import sys
-import logging
+from argparse import ArgumentParser
+
+from translate.storage import factory, statsdb
 
-from translate.storage import factory
-from translate.storage import statsdb
 
 logger = logging.getLogger(__name__)
 
@@ -49,7 +51,7 @@ def calcstats_old(filename):
     # ignore totally blank or header units
     try:
         store = factory.getobject(filename)
-    except ValueError, e:
+    except ValueError as e:
         logger.warning(e)
         return {}
     units = filter(lambda unit: unit.istranslatable(), store.units)
@@ -62,7 +64,7 @@ def calcstats_old(filename):
     targetwords = lambda elementlist: sum(map(lambda unit: wordcounts[unit][1], elementlist))
     stats = {}
 
-    #units
+    # units
     stats["translated"] = len(translated)
     stats["fuzzy"] = len(fuzzy)
     stats["untranslated"] = len(untranslated)
@@ -71,7 +73,7 @@ def calcstats_old(filename):
                      stats["fuzzy"] + \
                      stats["untranslated"]
 
-    #words
+    # words
     stats["translatedsourcewords"] = sourcewords(translated)
     stats["translatedtargetwords"] = targetwords(translated)
     stats["fuzzysourcewords"] = sourcewords(fuzzy)
@@ -111,66 +113,68 @@ def summarize(title, stats, style=style_full, indent=8, incomplete_only=False):
         return 1
 
     if (style == style_csv):
-        print "%s, " % title,
-        print "%d, %d, %d," % (stats["translated"],
+        print("%s, " % title, end=' ')
+        print("%d, %d, %d," % (stats["translated"],
                                stats["translatedsourcewords"],
-                               stats["translatedtargetwords"]),
-        print "%d, %d," % (stats["fuzzy"], stats["fuzzysourcewords"]),
-        print "%d, %d," % (stats["untranslated"],
-                           stats["untranslatedsourcewords"]),
-        print "%d, %d" % (stats["total"], stats["totalsourcewords"]),
+                               stats["translatedtargetwords"]), end=' ')
+        print("%d, %d," % (stats["fuzzy"], stats["fuzzysourcewords"]), end=' ')
+        print("%d, %d," % (stats["untranslated"],
+                           stats["untranslatedsourcewords"]), end=' ')
+        print("%d, %d" % (stats["total"], stats["totalsourcewords"]), end=' ')
         if stats["review"] > 0:
-            print ", %d, %d" % (stats["review"], stats["reviewsourdcewords"]),
-        print
+            print(", %d, %d" % (stats["review"], stats["reviewsourdcewords"]), end=' ')
+        print()
     elif (style == style_short_strings):
         spaces = " " * (indent - len(title))
-        print "%s%s strings: total: %d\t| %dt\t%df\t%du\t| %d%%t\t%d%%f\t%d%%u" % (title, spaces, \
-              stats["total"], stats["translated"], stats["fuzzy"], stats["untranslated"], \
-              percent(stats["translated"], stats["total"]), \
-              percent(stats["fuzzy"], stats["total"]), \
-              percent(stats["untranslated"], stats["total"]))
+        print("%s%s strings: total: %d\t| %dt\t%df\t%du\t| %d%%t\t%d%%f\t%d%%u" % (
+              title, spaces,
+              stats["total"], stats["translated"], stats["fuzzy"], stats["untranslated"],
+              percent(stats["translated"], stats["total"]),
+              percent(stats["fuzzy"], stats["total"]),
+              percent(stats["untranslated"], stats["total"])))
     elif (style == style_short_words):
         spaces = " " * (indent - len(title))
-        print "%s%s source words: total: %d\t| %dt\t%df\t%du\t| %d%%t\t%d%%f\t%d%%u" % (title, spaces, \
-              stats["totalsourcewords"], stats["translatedsourcewords"], stats["fuzzysourcewords"], stats["untranslatedsourcewords"], \
-              percent(stats["translatedsourcewords"], stats["totalsourcewords"]), \
-              percent(stats["fuzzysourcewords"], stats["totalsourcewords"]), \
-              percent(stats["untranslatedsourcewords"], stats["totalsourcewords"]))
+        print("%s%s source words: total: %d\t| %dt\t%df\t%du\t| %d%%t\t%d%%f\t%d%%u" % (
+              title, spaces,
+              stats["totalsourcewords"], stats["translatedsourcewords"], stats["fuzzysourcewords"], stats["untranslatedsourcewords"],
+              percent(stats["translatedsourcewords"], stats["totalsourcewords"]),
+              percent(stats["fuzzysourcewords"], stats["totalsourcewords"]),
+              percent(stats["untranslatedsourcewords"], stats["totalsourcewords"])))
     else:  # style == style_full
-        print title
-        print "type              strings      words (source)    words (translation)"
-        print "translated:   %5d (%3d%%) %10d (%3d%%) %15d" % \
-                (stats["translated"], \
-                percent(stats["translated"], stats["total"]), \
-                stats["translatedsourcewords"], \
-                percent(stats["translatedsourcewords"], stats["totalsourcewords"]), \
-                stats["translatedtargetwords"])
-        print "fuzzy:        %5d (%3d%%) %10d (%3d%%)             n/a" % \
-                (stats["fuzzy"], \
-                percent(stats["fuzzy"], stats["total"]), \
-                stats["fuzzysourcewords"], \
-                percent(stats["fuzzysourcewords"], stats["totalsourcewords"]))
-        print "untranslated: %5d (%3d%%) %10d (%3d%%)             n/a" % \
-                (stats["untranslated"], \
-                percent(stats["untranslated"], stats["total"]), \
-                stats["untranslatedsourcewords"], \
-                percent(stats["untranslatedsourcewords"], stats["totalsourcewords"]))
-        print "Total:        %5d %17d %22d" % \
-                (stats["total"], \
-                stats["totalsourcewords"], \
-                stats["translatedtargetwords"])
+        print(title)
+        print("type              strings      words (source)    words (translation)")
+        print("translated:   %5d (%3d%%) %10d (%3d%%) %15d" % (
+              stats["translated"],
+              percent(stats["translated"], stats["total"]),
+              stats["translatedsourcewords"],
+              percent(stats["translatedsourcewords"], stats["totalsourcewords"]),
+              stats["translatedtargetwords"]))
+        print("fuzzy:        %5d (%3d%%) %10d (%3d%%)             n/a" % (
+              stats["fuzzy"],
+              percent(stats["fuzzy"], stats["total"]),
+              stats["fuzzysourcewords"],
+              percent(stats["fuzzysourcewords"], stats["totalsourcewords"])))
+        print("untranslated: %5d (%3d%%) %10d (%3d%%)             n/a" % (
+              stats["untranslated"],
+              percent(stats["untranslated"], stats["total"]),
+              stats["untranslatedsourcewords"],
+              percent(stats["untranslatedsourcewords"], stats["totalsourcewords"])))
+        print("Total:        %5d %17d %22d" % (
+              stats["total"],
+              stats["totalsourcewords"],
+              stats["translatedtargetwords"]))
         if "extended" in stats:
-            print ""
+            print("")
             for state, e_stats in stats["extended"].iteritems():
-                print "%s:    %5d (%3d%%) %10d (%3d%%) %15d" % (
-                    state, e_stats["units"], percent(e_stats["units"], stats["total"]),
-                    e_stats["sourcewords"], percent(e_stats["sourcewords"], stats["totalsourcewords"]),
-                    e_stats["targetwords"])
+                print("%s:    %5d (%3d%%) %10d (%3d%%) %15d" % (
+                      state, e_stats["units"], percent(e_stats["units"], stats["total"]),
+                      e_stats["sourcewords"], percent(e_stats["sourcewords"], stats["totalsourcewords"]),
+                      e_stats["targetwords"]))
 
         if stats["review"] > 0:
-            print "review:       %5d %17d                    n/a" % \
-                    (stats["review"], stats["reviewsourcewords"])
-        print
+            print("review:       %5d %17d                    n/a" % (
+                  stats["review"], stats["reviewsourcewords"]))
+        print()
     return 0
 
 
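The pocount conversions above lean on `from __future__ import print_function`: the Python 2 trailing-comma idiom turns into the `end=' '` keyword. A small sketch, run here against an `io.StringIO` buffer rather than stdout:

```python
from __future__ import print_function

from io import StringIO

# The Python 2 idiom "print x," (suppress the newline, emit a trailing
# space) becomes print(x, end=' ') under the print function.
buf = StringIO()
print("%d, %d," % (1, 2), end=' ', file=buf)
print("%d, %d" % (3, 4), file=buf)
output = buf.getvalue()
```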
@@ -197,10 +201,10 @@ class summarizer:
         self.complete_count = 0
 
         if (self.style == style_csv):
-            print "Filename, Translated Messages, Translated Source Words, Translated \
-Target Words, Fuzzy Messages, Fuzzy Source Words, Untranslated Messages, \
-Untranslated Source Words, Total Message, Total Source Words, \
-Review Messages, Review Source Words"
+            print("""Filename, Translated Messages, Translated Source Words, Translated
+Target Words, Fuzzy Messages, Fuzzy Source Words, Untranslated Messages,
+Untranslated Source Words, Total Message, Total Source Words,
+Review Messages, Review Source Words""")
         if (self.style == style_short_strings or self.style == style_short_words):
             for filename in filenames:  # find longest filename
                 if (len(filename) > self.longestfilename):
@@ -217,17 +221,17 @@ Review Messages, Review Source Words"
             if self.incomplete_only:
                 summarize("TOTAL (incomplete only):", self.totals,
                 incomplete_only=True)
-                print "File count (incomplete):   %5d" % (self.filecount - self.complete_count)
+                print("File count (incomplete):   %5d" % (self.filecount - self.complete_count))
             else:
                 summarize("TOTAL:", self.totals, incomplete_only=False)
-            print "File count:   %5d" % (self.filecount)
-            print
+            print("File count:   %5d" % (self.filecount))
+            print()
 
     def updatetotals(self, stats):
         """Update self.totals with the statistics in stats."""
         for key in stats.keys():
             if key == "extended":
-                #FIXME: calculate extended totals
+                # FIXME: calculate extended totals
                 continue
             if not key in self.totals:
                 self.totals[key] = 0
@@ -241,7 +245,7 @@ Review Messages, Review Source Words"
                                              self.longestfilename,
                                              self.incomplete_only)
             self.filecount += 1
-        except:  # This happens if we have a broken file.
+        except Exception:  # This happens if we have a broken file.
             logger.error(sys.exc_info()[1])
 
     def handlefiles(self, dirname, filenames):
@@ -261,54 +265,42 @@ Review Messages, Review Source Words"
 
 
 def main():
-    parser = OptionParser(usage="usage: %prog [options] po-files")
-    parser.add_option("--incomplete", action="store_const", const=True,
-                      dest="incomplete_only",
-                      help="skip 100% translated files.")
-    # options controlling output format:
-    parser.add_option("--full", action="store_const", const=style_csv,
-                      dest="style_full",
-                      help="(default) statistics in full, verbose format")
-    parser.add_option("--csv", action="store_const", const=style_csv,
-                      dest="style_csv",
-                      help="statistics in CSV format")
-    parser.add_option("--short", action="store_const", const=style_csv,
-                      dest="style_short_strings",
-                      help="same as --short-strings")
-    parser.add_option("--short-strings", action="store_const",
-                      const=style_csv, dest="style_short_strings",
-                      help="statistics of strings in short format - one line per file")
-    parser.add_option("--short-words", action="store_const",
-                      const=style_csv, dest="style_short_words",
-                      help="statistics of words in short format - one line per file")
-
-    (options, args) = parser.parse_args()
-
-    if (options.incomplete_only == None):
-        options.incomplete_only = False
-
-    if (options.style_full and options.style_csv) or \
-       (options.style_full and options.style_short_strings) or \
-       (options.style_full and options.style_short_words) or \
-       (options.style_csv and options.style_short_strings) or \
-       (options.style_csv and options.style_short_words) or \
-       (options.style_short_strings and options.style_short_words):
-        parser.error("options --full, --csv, --short-strings and --short-words are mutually exclusive")
-        sys.exit(2)
-
-    style = default_style   # default output style
-    if options.style_csv:
-        style = style_csv
-    if options.style_full:
-        style = style_full
-    if options.style_short_strings:
-        style = style_short_strings
-    if options.style_short_words:
-        style = style_short_words
+    parser = ArgumentParser()
+    parser.add_argument("--incomplete", action="store_true", default=False,
+                        dest="incomplete_only",
+                        help="skip 100%% translated files.")
+    if sys.version_info[:2] <= (2, 6):
+        # Python 2.6 using argparse from PyPI cannot define a mutually
+        # exclusive group as a child of a group, but it works if it is a child
+        # of the parser.  We lose the group title but the functionality works.
+        # See https://code.google.com/p/argparse/issues/detail?id=90
+        megroup = parser.add_mutually_exclusive_group()
+    else:
+        output_group = parser.add_argument_group("Output format")
+        megroup = output_group.add_mutually_exclusive_group()
+    megroup.add_argument("--full", action="store_const", const=style_full,
+                        dest="style", default=style_full,
+                        help="(default) statistics in full, verbose format")
+    megroup.add_argument("--csv", action="store_const", const=style_csv,
+                        dest="style",
+                        help="statistics in CSV format")
+    megroup.add_argument("--short", action="store_const", const=style_short_strings,
+                        dest="style",
+                        help="same as --short-strings")
+    megroup.add_argument("--short-strings", action="store_const",
+                        const=style_short_strings, dest="style",
+                        help="statistics of strings in short format - one line per file")
+    megroup.add_argument("--short-words", action="store_const",
+                        const=style_short_words, dest="style",
+                        help="statistics of words in short format - one line per file")
+
+    parser.add_argument("files", nargs="+")
+
+    args = parser.parse_args()
 
     logging.basicConfig(format="%(name)s: %(levelname)s: %(message)s")
 
-    summarizer(args, style, options.incomplete_only)
+    summarizer(args.files, args.style, args.incomplete_only)
 
 if __name__ == '__main__':
     main()
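The rewritten pocount `main()` above uses an argparse mutually exclusive group instead of the removed hand-written pairwise checks: every style flag does `store_const` into the same `style` dest, and argparse itself rejects conflicting flags. A reduced sketch with two of the flags:

```python
from argparse import ArgumentParser

# All style flags share one dest, so whichever flag is given wins, and
# argparse refuses combinations like "--csv --full" on its own.
parser = ArgumentParser()
megroup = parser.add_mutually_exclusive_group()
megroup.add_argument("--full", action="store_const", const="full",
                     dest="style", default="full")
megroup.add_argument("--csv", action="store_const", const="csv",
                     dest="style")

args = parser.parse_args(["--csv"])

conflict = False
try:
    parser.parse_args(["--csv", "--full"])  # mutually exclusive
except SystemExit:
    conflict = True
```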
diff --git a/translate/tools/podebug.py b/translate/tools/podebug.py
index 2f7e246..b838b49 100644
--- a/translate/tools/podebug.py
+++ b/translate/tools/podebug.py
@@ -26,12 +26,12 @@ for examples and usage instructions.
 
 import os
 import re
+from hashlib import md5
 
-from translate.misc import hash
-from translate.storage import factory
-from translate.storage.placeables import StringElem, general
-from translate.storage.placeables import parse as rich_parse
 from translate.convert import dtd2po
+from translate.storage import factory
+from translate.storage.placeables import (StringElem, general,
+                                          parse as rich_parse)
 
 
 def add_prefix(prefix, stringelems):
@@ -63,9 +63,9 @@ class podebug:
             lambda e: e.apply_to_strings(func),
             lambda e: e.isleaf() and e.istranslatable)
 
+    @classmethod
     def rewritelist(cls):
         return [rewrite.replace("rewrite_", "") for rewrite in dir(cls) if rewrite.startswith("rewrite_")]
-    rewritelist = classmethod(rewritelist)
 
     def _rewrite_prepend_append(self, string, prepend, append=None):
         if append is None:
@@ -158,15 +158,15 @@ class podebug:
     REWRITE_FLIPPED_MAP = u"¡„#$%⅋,()⁎+´-˙/012Ɛᔭ59Ƚ86:;<=>¿@" + \
             u"∀ԐↃᗡƎℲ⅁HIſӼ⅂WNOԀÒᴚS⊥∩ɅMX⅄Z" + u"[\\]ᵥ_," + \
             u"ɐqɔpǝɟƃɥıɾʞʅɯuodbɹsʇnʌʍxʎz"
-        # Brackets should be swapped if the string will be reversed in memory.
-        # If a right-to-left override is used, the brackets should be
-        # unchanged.
-        #Some alternatives:
-        # D: ᗡ◖
-        # K: Ж⋊Ӽ
-        # @: Ҩ - Seems only related in Dejavu Sans
-        # Q: Ὄ Ό Ὀ Ὃ Ὄ Ṑ Ò Ỏ
-        # _: ‾ - left out for now for the sake of GTK accelerators
+    # Brackets should be swapped if the string will be reversed in memory.
+    # If a right-to-left override is used, the brackets should be
+    # unchanged.
+    # Some alternatives:
+    #  D: ᗡ◖
+    #  K: Ж⋊Ӽ
+    #  @: Ҩ - Seems only related in Dejavu Sans
+    #  Q: Ὄ Ό Ὀ Ὃ Ὄ Ṑ Ò Ỏ
+    #  _: ‾ - left out for now for the sake of GTK accelerators
 
     def rewrite_flipped(self, string):
         """Convert the string to look flipped upside down."""
@@ -186,9 +186,9 @@ class podebug:
         self.apply_to_translatables(string, transformer)
         return string
 
+    @classmethod
     def ignorelist(cls):
         return [ignore.replace("ignore_", "") for ignore in dir(cls) if ignore.startswith("ignore_")]
-    ignorelist = classmethod(ignorelist)
 
     def ignore_openoffice(self, unit):
         for location in unit.getlocations():
@@ -200,6 +200,9 @@ class podebug:
                 return True
         return False
 
+    def ignore_libreoffice(self, unit):
+        return self.ignore_openoffice(unit)
+
     def ignore_mozilla(self, unit):
         locations = unit.getlocations()
         if len(locations) == 1 and locations[0].lower().endswith(".accesskey"):
@@ -232,7 +235,7 @@ class podebug:
                 hashable = unit.getlocations()[0]
             else:
                 hashable = unit.source
-            prefix = prefix.replace("@hash_placeholder@", hash.md5_f(hashable).hexdigest()[:self.hash_len])
+            prefix = prefix.replace("@hash_placeholder@", md5(hashable.encode("utf-8")).hexdigest()[:self.hash_len])
         if unit.istranslated():
             rich_string = unit.rich_target
         else:
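podebug now calls `hashlib.md5` in place of the removed `translate.misc.hash` wrapper. One wrinkle worth noting: on Python 3, `md5()` only accepts bytes, so a text value has to be encoded first (UTF-8 is an assumption here). A sketch of a helper in that spirit; `hash_prefix` is an invented name, not part of the toolkit:

```python
from hashlib import md5

def hash_prefix(hashable, hash_len=4):
    """Short md5 digest prefix, tolerant of text input on Python 3."""
    if not isinstance(hashable, bytes):
        hashable = hashable.encode("utf-8")  # md5() requires bytes on Python 3
    return md5(hashable).hexdigest()[:hash_len]
```

With `hash_len` matching podebug's `self.hash_len`, this mirrors the `@hash_placeholder@` substitution in the hunk above.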
diff --git a/translate/tools/pogrep.py b/translate/tools/pogrep.py
index d12d979..2eab850 100644
--- a/translate/tools/pogrep.py
+++ b/translate/tools/pogrep.py
@@ -28,14 +28,14 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-import re
 import locale
+import re
 
-from translate.storage import factory
-from translate.storage.poheader import poheader
+from translate.lang import data
 from translate.misc import optrecurse
 from translate.misc.multistring import multistring
-from translate.lang import data
+from translate.storage import factory
+from translate.storage.poheader import poheader
 
 
 class GrepMatch(object):
diff --git a/translate/tools/pomerge.py b/translate/tools/pomerge.py
index c8950c9..92a1860 100644
--- a/translate/tools/pomerge.py
+++ b/translate/tools/pomerge.py
@@ -108,12 +108,13 @@ def main():
     pooutput = ("po", mergestore)
     potoutput = ("pot", mergestore)
     xliffoutput = ("xlf", mergestore)
-    formats = {("po", "po"): pooutput, ("po", "pot"): pooutput,
-               ("pot", "po"): pooutput, ("pot", "pot"): potoutput,
-               "po": pooutput, "pot": pooutput,
-               ("xlf", "po"): pooutput, ("xlf", "pot"): pooutput,
-               ("xlf", "xlf"): xliffoutput, ("po", "xlf"): xliffoutput,
-              }
+    formats = {
+        ("po", "po"): pooutput, ("po", "pot"): pooutput,
+        ("pot", "po"): pooutput, ("pot", "pot"): potoutput,
+        "po": pooutput, "pot": pooutput,
+        ("xlf", "po"): pooutput, ("xlf", "pot"): pooutput,
+        ("xlf", "xlf"): xliffoutput, ("po", "xlf"): xliffoutput,
+    }
     mergeblanksoption = convert.optparse.Option("", "--mergeblanks",
         dest="mergeblanks", action="store", default="yes",
         help="whether to overwrite existing translations with blank translations (yes/no). Default is yes.")
diff --git a/translate/tools/porestructure.py b/translate/tools/porestructure.py
index 70051ba..458d9c3 100644
--- a/translate/tools/porestructure.py
+++ b/translate/tools/porestructure.py
@@ -29,8 +29,8 @@ for examples and usage instructions.
 import os
 import sys
 
-from translate.storage import po
 from translate.misc import optrecurse
+from translate.storage import po
 
 
 class SplitOptionParser(optrecurse.RecursiveOptionParser):
@@ -47,7 +47,8 @@ class SplitOptionParser(optrecurse.RecursiveOptionParser):
         """sets the usage string - if usage not given, uses getusagestring for each option"""
         if usage is None:
             self.usage = "%prog " + " ".join([self.getusagestring(option) for option in self.option_list]) + \
-            "\n  input directory is searched for PO files with (poconflicts) comments, all entries are written to files in a directory structure for pomerge"
+                         "\n  " + \
+                         "input directory is searched for PO files with (poconflicts) comments, all entries are written to files in a directory structure for pomerge"
         else:
             super(SplitOptionParser, self).set_usage(usage)
 
@@ -56,7 +57,8 @@ class SplitOptionParser(optrecurse.RecursiveOptionParser):
         if not self.isrecursive(options.output, 'output'):
             try:
                 self.warning("Output directory does not exist. Attempting to create")
-                #TODO: maybe we should only allow it to be created, otherwise we mess up an existing tree...
+                # TODO: maybe we should only allow it to be created, otherwise
+                # we mess up an existing tree.
                 os.mkdir(options.output)
             except:
                 self.error(optrecurse.optparse.OptionValueError("Output directory does not exist, attempt to create failed"))
@@ -77,7 +79,7 @@ class SplitOptionParser(optrecurse.RecursiveOptionParser):
             fullinputpath = self.getfullinputpath(options, inputpath)
             try:
                 success = self.processfile(options, fullinputpath)
-            except Exception, error:
+            except Exception as error:
                 if isinstance(error, KeyboardInterrupt):
                     raise self.warning("Error processing: input %s" % (fullinputpath), options, sys.exc_info())
                 success = False
@@ -95,7 +97,7 @@ class SplitOptionParser(optrecurse.RecursiveOptionParser):
                         if comment.find("# (poconflicts)") == 0:
                             pounit.othercomments.remove(comment)
                             break
-                    #TODO: refactor writing out
+                    # TODO: refactor writing out
                     outputpath = comment[comment.find(")") + 2:].strip()
                     self.checkoutputsubdir(options, os.path.dirname(outputpath))
                     fulloutputpath = os.path.join(options.output, outputpath)
@@ -110,7 +112,8 @@ class SplitOptionParser(optrecurse.RecursiveOptionParser):
 
 
 def main():
-    #outputfile extentions will actually be determined by the comments in the po files
+    # outputfile extensions will actually be determined by the comments in the
+    # po files
     pooutput = ("po", None)
     formats = {(None, None): pooutput, ("po", "po"): pooutput, "po": pooutput}
     parser = SplitOptionParser(formats, description=__doc__)
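
The `except Exception, error` → `except Exception as error` changes in the hunks above and below are the exception syntax that works on both Python 2.6+ and Python 3 (the comma form is a SyntaxError on Python 3). A minimal sketch of the pattern; the `process` function here is hypothetical:

```python
def process(path):
    # Stand-in for a per-file conversion step that can fail.
    raise ValueError("bad input: %s" % path)

try:
    process("example.po")
except Exception as error:  # Python 2.6+/3 compatible; "except Exception, error" is Python 2 only
    message = str(error)

assert message == "bad input: example.po"
```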
diff --git a/translate/tools/posegment.py b/translate/tools/posegment.py
index e3b1cc7..b14b4ab 100644
--- a/translate/tools/posegment.py
+++ b/translate/tools/posegment.py
@@ -24,8 +24,8 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import factory
 from translate.lang import factory as lang_factory
+from translate.storage import factory
 
 
 class segment:
diff --git a/translate/tools/poswap.py b/translate/tools/poswap.py
index e4faa08..7afbd3c 100644
--- a/translate/tools/poswap.py
+++ b/translate/tools/poswap.py
@@ -26,18 +26,18 @@ source language.
 
 To translate Kurdish (ku) through French::
 
-    po2swap -i fr/ -t ku -o fr-ku
+    poswap -i fr/ -t ku -o fr-ku
 
 To convert the fr-ku files back to en-ku::
 
-    po2swap --reverse -i fr/ -t fr-ku -o en-ku
+    poswap --reverse -i fr/ -t fr-ku -o en-ku
 
 See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/poswap.html
 for examples and usage instructions.
 """
 
-from translate.storage import po
 from translate.convert import convert
+from translate.storage import po
 
 
 def swapdir(store):
diff --git a/translate/tools/poterminology.py b/translate/tools/poterminology.py
index d2a54a6..fe0f22e 100644
--- a/translate/tools/poterminology.py
+++ b/translate/tools/poterminology.py
@@ -21,19 +21,19 @@
 See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/poterminology.html
 for examples and usage instructions.
 """
+import logging
 import os
 import re
 import sys
-import logging
 
 from translate.lang import factory as lang_factory
-from translate.misc import optrecurse
-from translate.storage import po
-from translate.storage import factory
-from translate.misc import file_discovery
+from translate.misc import file_discovery, optrecurse
+from translate.storage import factory, po
+
 
 logger = logging.getLogger(__name__)
 
+
 def create_termunit(term, unit, targets, locations, sourcenotes, transnotes, filecounts):
     termunit = po.pounit(term)
     if unit is not None:
@@ -125,7 +125,7 @@ class TerminologyExtractor(object):
                     self.stoprelist.append(re.compile(stopline[1:-1] + '$'))
                 else:
                     self.stopwords[stopline[1:-1]] = actions[stoptype]
-        except KeyError, character:
+        except KeyError as character:
             logger.warning("%s:%d - bad stopword entry starts with '%s'",
                            self.stopfile, line, str(character))
             logger.warning("%s:%d all lines after error ignored",
@@ -196,7 +196,7 @@ class TerminologyExtractor(object):
                         ignore = self.stopwords[stword]
                     else:
                         for stopre in self.stoprelist:
-                            if stopre.match(stword) != None:
+                            if stopre.match(stword) is not None:
                                 ignore = rematchignore
                                 break
                     translation = (source, target, unit, fullinputpath)
@@ -254,7 +254,7 @@ class TerminologyExtractor(object):
             for source, target, unit, filename in translations:
                 sources.add(source)
                 filecounts[filename] = filecounts.setdefault(filename, 0) + 1
-                #FIXME: why reclean source and target?!
+                # FIXME: why reclean source and target?!
                 if term.lower() == self.clean(unit.source).lower():
                     fullmsg = True
                     target = self.clean(unit.target)
@@ -268,8 +268,8 @@ class TerminologyExtractor(object):
                         transnotes.add(unit.getnotes("translator"))
                     unit.source = term
                     bestunit = unit
-                #FIXME: figure out why we did a merge to begin with
-                #termunit.merge(unit, overwrite=False, comments=False)
+                # FIXME: figure out why we did a merge to begin with
+                # termunit.merge(unit, overwrite=False, comments=False)
                 for loc in unit.getlocations():
                     locations.add(locre.sub("", loc))
 
@@ -305,7 +305,7 @@ class TerminologyExtractor(object):
         for term in termlist:
             words = term.split()
             nonstop = [word for word in words if not self.stopword(word)]
-            if len(nonstop) < nonstopmin and  len(nonstop) != len(words):
+            if len(nonstop) < nonstopmin and len(nonstop) != len(words):
                 del terms[term]
                 continue
             if len(words) <= 2:
@@ -365,12 +365,12 @@ class TerminologyOptionParser(optrecurse.RecursiveOptionParser):
             self.error("No input file or directory was specified")
         if isinstance(options.input, list) and len(options.input) == 1:
             options.input = options.input[0]
-            if options.inputmin == None:
+            if options.inputmin is None:
                 options.inputmin = 1
         elif not isinstance(options.input, list) and not os.path.isdir(options.input):
-            if options.inputmin == None:
+            if options.inputmin is None:
                 options.inputmin = 1
-        elif options.inputmin == None:
+        elif options.inputmin is None:
             options.inputmin = 2
         if options.update:
             options.output = options.update
@@ -427,7 +427,7 @@ class TerminologyOptionParser(optrecurse.RecursiveOptionParser):
             success = True
             try:
                 self.processfile(None, options, fullinputpath)
-            except Exception, error:
+            except Exception as error:
                 if isinstance(error, KeyboardInterrupt):
                     raise
                 self.warning("Error processing: input %s" % (fullinputpath), options, sys.exc_info())
diff --git a/translate/tools/pretranslate.py b/translate/tools/pretranslate.py
index af2e835..372eaff 100644
--- a/translate/tools/pretranslate.py
+++ b/translate/tools/pretranslate.py
@@ -25,9 +25,9 @@ See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/command
 for examples and usage instructions.
 """
 
-from translate.storage import factory
-from translate.storage import xliff, po
 from translate.search import match
+from translate.storage import factory, xliff
+
 
 # We don't want to reinitialise the TM each time, so let's store it here.
 tmmatcher = None
@@ -134,20 +134,20 @@ def pretranslate_unit(input_unit, template_store, matchers=None,
         matching_unit = match_source(input_unit, template_store)
 
         if not matching_unit or not matching_unit.gettargetlen():
-            #do fuzzy matching
+            # do fuzzy matching
             matching_unit = match_fuzzy(input_unit, matchers)
 
         if matching_unit and matching_unit.gettargetlen() > 0:
-            #FIXME: should we dispatch here instead of this crude type check
+            # FIXME: should we dispatch here instead of this crude type check
             if isinstance(input_unit, xliff.xliffunit):
-                #FIXME: what about origin, lang and matchquality
+                # FIXME: what about origin, lang and matchquality
                 input_unit.addalttrans(matching_unit.target, origin="fish",
                                        sourcetxt=matching_unit.source)
             else:
                 input_unit.merge(matching_unit, authoritative=True)
 
-    #FIXME: ugly hack required by pot2po to mark old
-    #translations reused for new file. loops over
+    # FIXME: ugly hack required by pot2po to mark old
+    # translations reused for new file. loops over
     if mark_reused and matching_unit and template_store:
         original_unit = template_store.findunit(matching_unit.source)
         if original_unit is not None:
@@ -159,29 +159,29 @@ def pretranslate_unit(input_unit, template_store, matchers=None,
 def pretranslate_store(input_store, template_store, tm=None,
                        min_similarity=75, fuzzymatching=True):
     """Do the actual pretranslation of a whole store."""
-    #preperation
+    # preparation
     matchers = []
-    #prepare template
+    # prepare template
     if template_store is not None:
         template_store.makeindex()
-        #template preparation based on type
+        # template preparation based on type
         prepare_template = "prepare_template_%s" % template_store.__class__.__name__
         if prepare_template in globals():
             globals()[prepare_template](template_store)
 
         if fuzzymatching:
-            #create template matcher
-            #FIXME: max_length hardcoded
+            # create template matcher
+            # FIXME: max_length hardcoded
             matcher = match.matcher(template_store, max_candidates=1,
                                     min_similarity=min_similarity,
                                     max_length=3000, usefuzzy=True)
             matcher.addpercentage = False
             matchers.append(matcher)
 
-    #prepare tm
-    #create tm matcher
+    # prepare tm
+    # create tm matcher
     if tm and fuzzymatching:
-        #FIXME: max_length hardcoded
+        # FIXME: max_length hardcoded
         matcher = memory(tm, max_candidates=1, min_similarity=min_similarity,
                          max_length=1000)
         matcher.addpercentage = False
@@ -189,7 +189,7 @@ def pretranslate_store(input_store, template_store, tm=None,
 
     # Main loop
     for input_unit in input_store.units:
-        if  input_unit.istranslatable():
+        if input_unit.istranslatable():
             input_unit = pretranslate_unit(input_unit, template_store,
                                            matchers,
                                            merge_on=input_store.merge_on)
diff --git a/translate/tools/pydiff.py b/translate/tools/pydiff.py
index 85ffd60..12ceed7 100644
--- a/translate/tools/pydiff.py
+++ b/translate/tools/pydiff.py
@@ -22,64 +22,64 @@
 that are useful in dealing with PO files"""
 
 import difflib
-import optparse
-import time
+import fnmatch
 import os
 import sys
-import fnmatch
+import time
+from argparse import ArgumentParser
+
 
 lineterm = "\n"
 
 
 def main():
     """main program for pydiff"""
-    usage = "usage: %prog [options] fromfile tofile"
-    parser = optparse.OptionParser(usage)
+    parser = ArgumentParser()
     # GNU diff like options
-    parser.add_option("-i", "--ignore-case", default=False, action="store_true",
-                      help='Ignore case differences in file contents.')
-    parser.add_option("-U", "--unified", type="int", metavar="NUM", default=3,
-                      dest="unified_lines",
-                      help='Output NUM (default 3) lines of unified context')
-    parser.add_option("-r", "--recursive", default=False, action="store_true",
-                      help='Recursively compare any subdirectories found.')
-    parser.add_option("-N", "--new-file", default=False, action="store_true",
-                      help='Treat absent files as empty.')
-    parser.add_option("", "--unidirectional-new-file", default=False,
-                      action="store_true",
-                      help='Treat absent first files as empty.')
-    parser.add_option("-s", "--report-identical-files", default=False,
-                      action="store_true",
-                      help='Report when two files are the same.')
-    parser.add_option("-x", "--exclude", default=["CVS", "*.po~"],
-                      action="append", metavar="PAT",
-                      help='Exclude files that match PAT.')
+    parser.add_argument("-i", "--ignore-case", default=False, action="store_true",
+                        help='Ignore case differences in file contents.')
+    parser.add_argument("-U", "--unified", type=int, metavar="NUM", default=3,
+                        dest="unified_lines",
+                        help='Output NUM (default 3) lines of unified context')
+    parser.add_argument("-r", "--recursive", default=False, action="store_true",
+                        help='Recursively compare any subdirectories found.')
+    parser.add_argument("-N", "--new-file", default=False, action="store_true",
+                        help='Treat absent files as empty.')
+    parser.add_argument("--unidirectional-new-file", default=False,
+                        action="store_true",
+                        help='Treat absent first files as empty.')
+    parser.add_argument("-s", "--report-identical-files", default=False,
+                        action="store_true",
+                        help='Report when two files are the same.')
+    parser.add_argument("-x", "--exclude", default=["CVS", "*.po~"],
+                        action="append", metavar="PAT",
+                        help='Exclude files that match PAT.')
     # our own options
-    parser.add_option("", "--fromcontains", type="string", default=None,
-                      metavar="TEXT",
-                      help='Only show changes where fromfile contains TEXT')
-    parser.add_option("", "--tocontains", type="string", default=None,
-                      metavar="TEXT",
-                      help='Only show changes where tofile contains TEXT')
-    parser.add_option("", "--contains", type="string", default=None,
-                      metavar="TEXT",
-                      help='Only show changes where fromfile or tofile contains TEXT')
-    parser.add_option("-I", "--ignore-case-contains", default=False, action="store_true",
-                      help='Ignore case differences when matching any of the changes')
-    parser.add_option("", "--accelerator", dest="accelchars", default="",
-                      metavar="ACCELERATORS",
-                      help="ignores the given accelerator characters when matching")
-    (options, args) = parser.parse_args()
+    parser.add_argument("--fromcontains", type=str, default=None,
+                        metavar="TEXT",
+                        help='Only show changes where fromfile contains TEXT')
+    parser.add_argument("--tocontains", type=str, default=None,
+                        metavar="TEXT",
+                        help='Only show changes where tofile contains TEXT')
+    parser.add_argument("--contains", type=str, default=None,
+                        metavar="TEXT",
+                        help='Only show changes where fromfile or tofile contains TEXT')
+    parser.add_argument("-I", "--ignore-case-contains", default=False, action="store_true",
+                        help='Ignore case differences when matching any of the changes')
+    parser.add_argument("--accelerator", dest="accelchars", default="",
+                        metavar="ACCELERATORS",
+                        help="ignores the given accelerator characters when matching")
+    parser.add_argument("fromfile", nargs=1)
+    parser.add_argument("tofile", nargs=1)
+    args = parser.parse_args()
 
-    if len(args) != 2:
-        parser.error("fromfile and tofile required")
-    fromfile, tofile = args
+    fromfile, tofile = args.fromfile[0], args.tofile[0]
     if fromfile == "-" and tofile == "-":
         parser.error("Only one of fromfile and tofile can be read from stdin")
 
     if os.path.isdir(fromfile):
         if os.path.isdir(tofile):
-            differ = DirDiffer(fromfile, tofile, options)
+            differ = DirDiffer(fromfile, tofile, args)
         else:
             parser.error("File %s is a directory while file %s is a regular file" %
                          (fromfile, tofile))
@@ -88,7 +88,7 @@ def main():
             parser.error("File %s is a regular file while file %s is a directory" %
                          (fromfile, tofile))
         else:
-            differ = FileDiffer(fromfile, tofile, options)
+            differ = FileDiffer(fromfile, tofile, args)
     differ.writediff(sys.stdout)
 
 
diff --git a/translate/tools/pypo2phppo.py b/translate/tools/pypo2phppo.py
index dc3207a..9ce845f 100644
--- a/translate/tools/pypo2phppo.py
+++ b/translate/tools/pypo2phppo.py
@@ -23,8 +23,8 @@
 
 import re
 
-from translate.storage import po
 from translate.misc.multistring import multistring
+from translate.storage import po
 
 
 class pypo2phppo:
diff --git a/translate/tools/test_phppo2pypo.py b/translate/tools/test_phppo2pypo.py
index 6b7d4b0..0a236f2 100644
--- a/translate/tools/test_phppo2pypo.py
+++ b/translate/tools/test_phppo2pypo.py
@@ -5,10 +5,9 @@
 # Author: Wil Clouser <wclouser at mozilla.com>
 # Date: 2009-12-03
 
-from translate.tools import phppo2pypo
-from translate.storage import po
 from translate.convert import test_convert
 from translate.misc import wStringIO
+from translate.tools import phppo2pypo
 
 
 class TestPhpPo2PyPo:
diff --git a/translate/tools/test_pocount.py b/translate/tools/test_pocount.py
index 3890d27..6a88b96 100644
--- a/translate/tools/test_pocount.py
+++ b/translate/tools/test_pocount.py
@@ -1,13 +1,12 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-import StringIO
-from translate.tools import pocount
+from cStringIO import StringIO
 
 from pytest import mark
 
-from translate.storage import po
-from translate.storage import statsdb
+from translate.storage import po, statsdb
+from translate.tools import pocount
 
 
 class TestCount:
@@ -18,10 +17,10 @@ class TestCount:
         if target is not None:
             poelement.target = target
         wordssource, wordstarget = statsdb.wordsinunit(poelement)
-        print 'Source (expected=%d; actual=%d): "%s"' % (expectedsource, wordssource, source)
+        print('Source (expected=%d; actual=%d): "%s"' % (expectedsource, wordssource, source))
         assert wordssource == expectedsource
         if target is not None:
-            print 'Target (expected=%d; actual=%d): "%s"' % (expectedtarget, wordstarget, target)
+            print('Target (expected=%d; actual=%d): "%s"' % (expectedtarget, wordstarget, target))
             assert wordstarget == expectedtarget
 
     def test_simple_count_zero(self):
@@ -49,6 +48,8 @@ class TestCount:
         self.count("A word<br />Another word", 4)
         # \n is a word break
         self.count("<p>A word</p>\n<p>Another word</p>", 4)
+        # Not really an XML tag
+        self.count("<no label>", 2)
 
     def test_newlines(self):
         """test to see that newlines divide words"""
@@ -119,41 +120,41 @@ msgstr ""
 '''
 
     def test_translated(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['translated'] == 1
 
     def test_fuzzy(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['fuzzy'] == 1
 
     def test_untranslated(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['untranslated'] == 1
 
     def test_total(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['total'] == 3
 
     def test_translatedsourcewords(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['translatedsourcewords'] == 2
 
     def test_fuzzysourcewords(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['fuzzysourcewords'] == 2
 
     def test_untranslatedsourcewords(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['untranslatedsourcewords'] == 2
 
     def test_totalsourcewords(self):
-        pofile = StringIO.StringIO(self.inputdata)
+        pofile = StringIO(self.inputdata)
         stats = pocount.calcstats_old(pofile)
         assert stats['totalsourcewords'] == 6
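
The test_pocount hunks above replace the `StringIO.StringIO` module lookup with `cStringIO.StringIO`, which exists only on Python 2; on Python 3 the drop-in equivalent is `io.StringIO`. A minimal sketch of the same usage:

```python
from io import StringIO  # Python 3 replacement for cStringIO.StringIO

inputdata = 'msgid "hello"\nmsgstr "world"\n'
# Wrap PO source text in a file-like object, as the tests above do
# before passing it to pocount.calcstats_old().
pofile = StringIO(inputdata)
contents = pofile.read()
assert contents == inputdata
```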
diff --git a/translate/tools/test_podebug.py b/translate/tools/test_podebug.py
index 50f9cdb..954948c 100644
--- a/translate/tools/test_podebug.py
+++ b/translate/tools/test_podebug.py
@@ -1,7 +1,8 @@
 # -*- coding: utf-8 -*-
 
-from translate.tools import podebug
 from translate.storage import base, po, xliff
+from translate.tools import podebug
+
 
 PO_DOC = """
 msgid "This is a %s test, hooray."
@@ -31,7 +32,7 @@ class TestPODebug:
     def test_ignore_gtk(self):
         """Test operation of GTK message ignoring"""
         unit = base.TranslationUnit("default:LTR")
-        assert self.debug.ignore_gtk(unit) == True
+        assert self.debug.ignore_gtk(unit)
 
     def test_keep_target(self):
         """Test that we use the target for rewriting if it exists."""
@@ -72,7 +73,7 @@ class TestPODebug:
     def test_rewrite_flipped(self):
         """Test the unicode rewrite function"""
         assert unicode(self.debug.rewrite_flipped(u"Test")) == u"\u202e⊥ǝsʇ"
-        #alternative with reversed string and no RTL override:
+        # alternative with reversed string and no RTL override:
         #assert unicode(self.debug.rewrite_flipped("Test")) == u"ʇsǝ⊥"
         # Chars < ! and > z are returned as is
         assert unicode(self.debug.rewrite_flipped(u" ")) == u"\u202e "
@@ -94,8 +95,8 @@ class TestPODebug:
         out_unit = po_out.units[0]
 
         assert in_unit.source == out_unit.source
-        print out_unit.target
-        print str(po_out)
+        print(out_unit.target)
+        print(str(po_out))
         rewrite_func = self.debug.rewrite_unicode
         assert out_unit.target == u"%s%%s%s" % (rewrite_func(u'This is a '), rewrite_func(u' test, hooray.'))
 
@@ -107,8 +108,8 @@ class TestPODebug:
         out_unit = xliff_out.units[0]
 
         assert in_unit.source == out_unit.source
-        print out_unit.target
-        print str(xliff_out)
+        print(out_unit.target)
+        print(str(xliff_out))
         assert out_unit.target == u'xxx%sxxx' % (in_unit.source)
 
     def test_hash(self):
@@ -127,15 +128,13 @@ msgctxt "test context 3"
 msgid "Test msgid 3"
 msgstr "Test msgstr 3"
 """)
-        debugs = (
-            podebug.podebug(format="%h "),
-            podebug.podebug(format="%6h."),
-            podebug.podebug(format="zzz%7h.zzz"),
-            podebug.podebug(format="%f %F %b %B %d %s "),
-            podebug.podebug(format="%3f %4F %5b %6B %7d %8s "),
-            podebug.podebug(format="%cf %cF %cb %cB %cd %cs "),
-            podebug.podebug(format="%3cf %4cF %5cb %6cB %7cd %8cs "),
-            )
+        debugs = (podebug.podebug(format="%h "),
+                  podebug.podebug(format="%6h."),
+                  podebug.podebug(format="zzz%7h.zzz"),
+                  podebug.podebug(format="%f %F %b %B %d %s "),
+                  podebug.podebug(format="%3f %4F %5b %6B %7d %8s "),
+                  podebug.podebug(format="%cf %cF %cb %cB %cd %cs "),
+                  podebug.podebug(format="%3cf %4cF %5cb %6cB %7cd %8cs "),)
         results = ["85a9 Test msgstr 1", "a15d Test msgstr 2", "6398 Test msgstr 3",
                    "85a917.Test msgstr 1", "a15d71.Test msgstr 2", "639898.Test msgstr 3",
                    "zzz85a9170.zzzTest msgstr 1", "zzza15d718.zzzTest msgstr 2", "zzz639898c.zzzTest msgstr 3",
diff --git a/translate/tools/test_pogrep.py b/translate/tools/test_pogrep.py
index 6aa4636..7acfb21 100644
--- a/translate/tools/test_pogrep.py
+++ b/translate/tools/test_pogrep.py
@@ -1,11 +1,10 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from translate.storage import po
-from translate.storage import xliff
+from translate.misc import wStringIO
+from translate.storage import po, xliff
 from translate.storage.test_base import first_translatable, headerless_len
 from translate.tools import pogrep
-from translate.misc import wStringIO
 
 
 class TestPOGrep:
@@ -23,7 +22,7 @@ class TestPOGrep:
         options, args = pogrep.cmdlineparser().parse_args(["xxx.po"] + cmdlineoptions)
         grepfilter = pogrep.GrepFilter(searchstring, options.searchparts, options.ignorecase, options.useregexp, options.invertmatch, options.keeptranslations, options.accelchar)
         tofile = grepfilter.filterfile(self.poparse(posource))
-        print str(tofile)
+        print(str(tofile))
         return str(tofile)
 
     def test_simplegrep_msgid(self):
@@ -60,7 +59,7 @@ class TestPOGrep:
 
     def test_simplegrep_locations_with_comment_enabled(self):
         """grep for a string in "locations", while also "comment" is checked
-        see http://bugs.locamotion.org/show_bug.cgi?id=1036
+        see https://github.com/translate/translate/issues/1036
         """
         posource = '# (review) comment\n#: test.c\nmsgid "test"\nmsgstr "rest"\n'
         poresult = self.pogrep(posource, "test", ["--search=comment", "--search=locations"])
@@ -78,7 +77,7 @@ class TestPOGrep:
                                          (poascii, queryunicode, ''),
                                          (pounicode, queryascii, ''),
                                          (pounicode, queryunicode, pounicode)]:
-            print "Source:\n%s\nSearch: %s\n" % (source, search)
+            print("Source:\n%s\nSearch: %s\n" % (source, search))
             poresult = self.pogrep(source, search)
             assert poresult.index(expected) >= 0
 
@@ -92,7 +91,7 @@ class TestPOGrep:
                                          (poascii, queryunicode, ''),
                                          (pounicode, queryascii, ''),
                                          (pounicode, queryunicode, pounicode)]:
-            print "Source:\n%s\nSearch: %s\n" % (source, search)
+            print("Source:\n%s\nSearch: %s\n" % (source, search))
             poresult = self.pogrep(source, search, ["--regexp"])
             assert poresult.index(expected) >= 0
 
@@ -110,14 +109,16 @@ class TestPOGrep:
         # é, e + '
         # Ḽ, L + ^
         # Ṏ
-        groups = [(u"\u00e9", u"\u0065\u0301"), \
-                  (u"\u1e3c", u"\u004c\u032d"), \
-                  (u"\u1e4e", u"\u004f\u0303\u0308", u"\u00d5\u0308")]
+        groups = [
+            (u"\u00e9", u"\u0065\u0301"),
+            (u"\u1e3c", u"\u004c\u032d"),
+            (u"\u1e4e", u"\u004f\u0303\u0308", u"\u00d5\u0308")
+        ]
         for letters in groups:
             for source_letter in letters:
                 source = source_template % source_letter
                 for search_letter in letters:
-                    print search_letter.encode('utf-8')
+                    print(search_letter.encode('utf-8'))
                     poresult = self.pogrep(source, search_letter)
                     assert poresult.index(source.encode('utf-8')) >= 0
 
diff --git a/translate/tools/test_pomerge.py b/translate/tools/test_pomerge.py
index e5b6aa7..b2cfb5e 100644
--- a/translate/tools/test_pomerge.py
+++ b/translate/tools/test_pomerge.py
@@ -4,11 +4,9 @@
 import pytest
 from pytest import mark
 
-from translate.tools import pomerge
-from translate.storage import factory
-from translate.storage import po
-from translate.storage import xliff
 from translate.misc import wStringIO
+from translate.storage import po, xliff
+from translate.tools import pomerge
 
 
 def test_str2bool():
@@ -43,8 +41,7 @@ class TestPOMerge:
         assert pomerge.mergestore(inputfile, outputfile, templatefile,
                                   mergeblanks=mergeblanks,
                                   mergefuzzy=mergefuzzy,
-                                  mergecomments=mergecomments,
-        )
+                                  mergecomments=mergecomments,)
         outputpostring = outputfile.getvalue()
         outputpofile = po.pofile(outputpostring)
         return outputpofile
@@ -62,8 +59,8 @@ class TestPOMerge:
                                   mergefuzzy=mergefuzzy,
                                   mergecomments=mergecomments)
         outputxliffstring = outputfile.getvalue()
-        print "Generated XML:"
-        print outputxliffstring
+        print("Generated XML:")
+        print(outputxliffstring)
         outputxlifffile = xliff.xlifffile(outputxliffstring)
         return outputxlifffile
 
@@ -145,7 +142,7 @@ msgstr "Dimpled Ring"'''
         pounit = self.singleunit(pofile)
         assert pounit.source == "Simple String"
         assert pounit.target == "Dimpled Ring"
-        assert pounit.isfuzzy() == False
+        assert not pounit.isfuzzy()
 
     def test_merging_locations(self):
         """check that locations on separate lines are output in Gettext form
@@ -154,7 +151,7 @@ msgstr "Dimpled Ring"'''
         inputpo = '''#: location.c:1\n#: location.c:2\nmsgid "Simple String"\nmsgstr "Dimpled Ring"\n'''
         expectedpo = '''#: location.c:1%slocation.c:2\nmsgid "Simple String"\nmsgstr "Dimpled Ring"\n''' % po.lsep
         pofile = self.mergestore(templatepo, inputpo)
-        print pofile
+        print(pofile)
         assert str(pofile) == expectedpo
 
     def test_unit_missing_in_template_with_locations(self):
@@ -174,7 +171,7 @@ msgid "Simple String"
 msgstr "Dimpled Ring"
 '''
         pofile = self.mergestore(templatepo, inputpo)
-        print pofile
+        print(pofile)
         assert str(pofile) == expectedpo
 
     def test_unit_missing_in_template_no_locations(self):
@@ -190,7 +187,7 @@ msgstr "Perplexa ring"'''
 msgstr "Dimpled Ring"
 '''
         pofile = self.mergestore(templatepo, inputpo)
-        print pofile
+        print(pofile)
         assert str(pofile) == expectedpo
 
     def test_reflowed_source_comments(self):
@@ -201,7 +198,7 @@ msgstr "Dimpled Ring"
         expectedpo = '''#: newMenu.label%snewMenu.accesskey\nmsgid "&New"\nmsgstr "&Nuwe"\n''' % po.lsep
         pofile = self.mergestore(templatepo, newpo)
         pounit = self.singleunit(pofile)
-        print pofile
+        print(pofile)
         assert str(pofile) == expectedpo
 
     def test_comments_with_blank_lines(self):
@@ -217,7 +214,7 @@ msgstr "blabla"
         expectedpo = templatepo
         pofile = self.mergestore(templatepo, newpo)
         pounit = self.singleunit(pofile)
-        print pofile
+        print(pofile)
         assert str(pofile) == expectedpo
 
     def test_merge_dont_delete_unassociated_comments(self):
@@ -228,7 +225,7 @@ msgstr "blabla"
         expectedpo = '''# Lonely comment\n# Translation comment\nmsgid "Bob"\nmsgstr "Builder"\n'''
         pofile = self.mergestore(templatepo, mergepo)
 #        pounit = self.singleunit(pofile)
-        print pofile
+        print(pofile)
         assert str(pofile) == expectedpo
 
     def test_preserve_format_trailing_newlines(self):
@@ -237,7 +234,7 @@ msgstr "blabla"
         mergepo = '''msgid "Simple string\\n"\nmsgstr "Dimpled ring\\n"\n'''
         expectedpo = '''msgid "Simple string\\n"\nmsgstr "Dimpled ring\\n"\n'''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
         templatepo = '''msgid ""\n"Simple string\\n"\nmsgstr ""\n'''
@@ -245,7 +242,7 @@ msgstr "blabla"
         expectedpo = '''msgid ""\n"Simple string\\n"\nmsgstr "Dimpled ring\\n"\n'''
         expectedpo2 = '''msgid "Simple string\\n"\nmsgstr "Dimpled ring\\n"\n'''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo or str(pofile) == expectedpo2
 
     def test_preserve_format_minor_start_and_end_of_sentence_changes(self):
@@ -255,21 +252,21 @@ msgstr "blabla"
         mergepo = '''msgid "Target type:"\nmsgstr "Doelsoort:"\n'''
         expectedpo = mergepo
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
         templatepo = '''msgid "&Select"\nmsgstr "Kies"\n\n'''
         mergepo = '''msgid "&Select"\nmsgstr "&Kies"\n'''
         expectedpo = mergepo
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
         templatepo = '''msgid "en-us, en"\nmsgstr "en-us, en"\n'''
         mergepo = '''msgid "en-us, en"\nmsgstr "af-za, af, en-za, en-gb, en-us, en"\n'''
         expectedpo = mergepo
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     def test_preserve_format_last_entry_in_a_file(self):
@@ -279,14 +276,14 @@ msgstr "blabla"
         mergepo = '''msgid "First"\nmsgstr "Eerste"\n\nmsgid "Second"\nmsgstr "Tweede"\n'''
         expectedpo = '''msgid "First"\nmsgstr "Eerste"\n\nmsgid "Second"\nmsgstr "Tweede"\n'''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
         templatepo = '''msgid "First"\nmsgstr ""\n\nmsgid "Second"\nmsgstr ""\n\n'''
         mergepo = '''msgid "First"\nmsgstr "Eerste"\n\nmsgid "Second"\nmsgstr "Tweede"\n'''
         expectedpo = '''msgid "First"\nmsgstr "Eerste"\n\nmsgid "Second"\nmsgstr "Tweede"\n'''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     @mark.xfail(reason="Not Implemented")
@@ -301,7 +298,7 @@ msgstr "blabla"
 msgstr "Eerste\tTweede"
 '''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     def test_preserve_comments_layout(self):
@@ -311,7 +308,7 @@ msgstr "Eerste\tTweede"
         mergepo = '''# (pofilter) unchanged: please translate\n#: filename\nmsgid "Desktop Background.bmp"\nmsgstr "Desktop Background.bmp"\n'''
         expectedpo = mergepo
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     def test_merge_dos2unix(self):
@@ -382,12 +379,12 @@ msgstr "Eerste\tTweede"
         mergepo = '''msgid "_: KDE comment\\n"\n"File"\nmsgstr "_: KDE comment\\n"\n"Ifayile"\n\n'''
         expectedpo = '''msgid ""\n"_: KDE comment\\n"\n"File"\nmsgstr "Ifayile"\n'''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
         # Translated kde comment.
         mergepo = '''msgid "_: KDE comment\\n"\n"File"\nmsgstr "_: KDE kommentaar\\n"\n"Ifayile"\n\n'''
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
         # multiline KDE comment
@@ -395,7 +392,7 @@ msgstr "Eerste\tTweede"
         mergepo = '''msgid "_: KDE "\n"comment\\n"\n"File"\nmsgstr "_: KDE "\n"comment\\n"\n"Ifayile"\n\n'''
         expectedpo = '''msgid ""\n"_: KDE comment\\n"\n"File"\nmsgstr "Ifayile"\n'''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     def test_merging_untranslated_with_kde_disambiguation(self):
@@ -427,7 +424,7 @@ msgstr "Stuur"
 ''' % (po.lsep, po.lsep)
         expectedpo = mergepo
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n---\nMerged:\n%s\n---" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n---\nMerged:\n%s\n---" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     def test_merging_header_entries(self):
@@ -490,7 +487,7 @@ msgid "Simple String"
 msgstr "Dimpled Ring"
 '''
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n---\nMerged:\n%s\n---" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n---\nMerged:\n%s\n---" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo
 
     def test_merging_different_locations(self):
@@ -547,5 +544,5 @@ msgstr "ZERSTÖRE WACHPOSTEN"
 
         expectedpo = mergepo
         pofile = self.mergestore(templatepo, mergepo)
-        print "Expected:\n%s\n---\nMerged:\n%s\n---" % (expectedpo, str(pofile))
+        print("Expected:\n%s\n---\nMerged:\n%s\n---" % (expectedpo, str(pofile)))
         assert str(pofile) == expectedpo or str(pofile) == expectedpo2
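The dominant change across these test modules is the mechanical conversion of Python 2 `print` statements into Python 3 `print()` calls. A minimal sketch of the pattern (example strings hypothetical, not taken from the diff), written with the `__future__` import so it runs unchanged on Python 2.6+ and Python 3:

```python
# Under Python 2, "print x" is a statement; Python 3 makes print a
# builtin function, so every occurrence must gain parentheses.
# Importing print_function makes the new form valid on both interpreters,
# which is why the hunks above are safe for the 2.6/2.7 classifiers
# this release declares.
from __future__ import print_function

expectedpo = 'msgid "First"\nmsgstr "Eerste"\n'
merged = 'msgid "First"\nmsgstr "Eerste"\n'

# Old form (Python 2 only):
#     print "Expected:\n%s\n\nMerged:\n%s" % (expectedpo, merged)
# New form (Python 2 with __future__, and Python 3):
print("Expected:\n%s\n\nMerged:\n%s" % (expectedpo, merged))

assert merged == expectedpo
```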
diff --git a/translate/tools/test_pretranslate.py b/translate/tools/test_pretranslate.py
index 6a101db..8b5f314 100644
--- a/translate/tools/test_pretranslate.py
+++ b/translate/tools/test_pretranslate.py
@@ -5,11 +5,10 @@ import warnings
 
 from pytest import mark
 
-from translate.tools import pretranslate
 from translate.convert import test_convert
 from translate.misc import wStringIO
-from translate.storage import po
-from translate.storage import xliff
+from translate.storage import po, xliff
+from translate.tools import pretranslate
 
 
 class TestPretranslate:
@@ -57,11 +56,11 @@ class TestPretranslate:
     def singleunit(self, pofile):
         """checks that the pofile contains a single non-header unit, and
         returns it"""
-        if len(pofile.units) == 2 and  pofile.units[0].isheader():
-            print pofile.units[1]
+        if len(pofile.units) == 2 and pofile.units[0].isheader():
+            print(pofile.units[1])
             return pofile.units[1]
         else:
-            print pofile.units[0]
+            print(pofile.units[0])
             return pofile.units[0]
 
     def test_pretranslatepo_blank(self):
@@ -122,7 +121,7 @@ msgstr[1] "%d handleidings."
         template_source = '''#: simple.label\n#: simple.accesskey\nmsgid "A &hard coded newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n'''
         poexpected = '''#: simple.label\n#: simple.accesskey\n#, fuzzy\nmsgid "Its &hard coding a newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n'''
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         assert str(newpo) == poexpected
 
     def test_merging_location_change(self):
@@ -132,7 +131,7 @@ msgstr[1] "%d handleidings."
         template_source = '''#: simple.label%ssimple.accesskey\nmsgid "A &hard coded newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n''' % po.lsep
         poexpected = '''#: new_simple.label%snew_simple.accesskey\nmsgid "A &hard coded newline.\\n"\nmsgstr "&Hart gekoeerde nuwe lyne\\n"\n''' % po.lsep
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         assert str(newpo) == poexpected
 
     def test_merging_location_and_whitespace_change(self):
@@ -142,7 +141,7 @@ msgstr[1] "%d handleidings."
         template_source = '''#: doublespace.label%sdoublespace.accesskey\nmsgid "&We  have  spaces"\nmsgstr "&One  het  spasies"\n''' % po.lsep
         poexpected = '''#: singlespace.label%ssinglespace.accesskey\n#, fuzzy\nmsgid "&We have spaces"\nmsgstr "&One  het  spasies"\n''' % po.lsep
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         assert str(newpo) == poexpected
 
     @mark.xfail(reason="Not Implemented")
@@ -153,7 +152,7 @@ msgstr[1] "%d handleidings."
         template_source = '''#: someline.c\nmsgid "&About"\nmsgstr "&Info"\n'''
         poexpected = '''#: someline.c\nmsgid "A&bout"\nmsgstr "&Info"\n'''
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         assert str(newpo) == poexpected
 
     @mark.xfail(reason="Not Implemented")
@@ -209,10 +208,10 @@ msgstr "Sekuriteit"
         poexpected = template_source
         newpo = self.pretranslatepo(input_source, template_source)
         newpounit = self.singleunit(newpo)
-        print "expected"
-        print poexpected
-        print "got:"
-        print str(newpounit)
+        print("expected")
+        print(poexpected)
+        print("got:")
+        print(str(newpounit))
         assert str(newpounit) == poexpected
 
     def test_merging_msgidcomments(self):
@@ -238,7 +237,7 @@ msgstr "36em"
         input_source = '''msgid "One"\nmsgid_plural "Two"\nmsgstr[0] ""\nmsgstr[1] ""\n'''
         template_source = '''msgid "One"\nmsgid_plural "Two"\nmsgstr[0] "Een"\nmsgstr[1] "Twee"\nmsgstr[2] "Drie"\n'''
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         newpounit = self.singleunit(newpo)
         assert str(newpounit) == template_source
 
@@ -249,7 +248,7 @@ msgstr "36em"
         template_source = '''#~ msgid "&About"\n#~ msgstr "&Omtrent"\n'''
         expected = '''#: resurect.c\nmsgid "&About"\nmsgstr "&Omtrent"\n'''
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         assert str(newpo) == expected
 
     def test_merging_comments(self):
@@ -258,7 +257,7 @@ msgstr "36em"
         template_source = '''#. Don't do it!\n#: file.py:2\nmsgid "One"\nmsgstr "Een"\n'''
         poexpected = '''#. Don't do it!\n#: file.py:1\nmsgid "One"\nmsgstr "Een"\n'''
         newpo = self.pretranslatepo(input_source, template_source)
-        print newpo
+        print(newpo)
         newpounit = self.singleunit(newpo)
         assert str(newpounit) == poexpected
 
@@ -269,7 +268,7 @@ msgstr "36em"
         poexpected = '''#: file.c:1\n#, c-format\nmsgid "%d pipes"\nmsgstr "%d pype"\n'''
         newpo = self.pretranslatepo(input_source, template_source)
         newpounit = self.singleunit(newpo)
-        print newpounit
+        print(newpounit)
         assert str(newpounit) == poexpected
 
         input_source = '''#: file.c:1\n#, c-format\nmsgid "%d computers"\nmsgstr ""\n'''
@@ -295,9 +294,9 @@ msgstr "36em"
         template = xliff.xlifffile.parsestring(xlf_template)
         old = xliff.xlifffile.parsestring(xlf_old)
         new = self.pretranslatexliff(template, old)
-        print str(old)
-        print '---'
-        print str(new)
+        print(str(old))
+        print('---')
+        print(str(new))
         assert new.units[0].isapproved()
         # Layout might have changed, so we won't compare the serialised
         # versions
diff --git a/translate/tools/test_pypo2phppo.py b/translate/tools/test_pypo2phppo.py
index b6753be..b2223ec 100644
--- a/translate/tools/test_pypo2phppo.py
+++ b/translate/tools/test_pypo2phppo.py
@@ -5,10 +5,9 @@
 # Author: Wil Clouser <wclouser at mozilla.com>
 # Date: 2009-12-03
 
-from translate.tools import pypo2phppo
-from translate.storage import po
 from translate.convert import test_convert
 from translate.misc import wStringIO
+from translate.tools import pypo2phppo
 
 
 class TestPyPo2PhpPo:
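Besides the print conversion, the import hunks in these files normalize the import blocks: unused imports are dropped (e.g. `from translate.storage import po` above) and the remaining `from translate.* import ...` lines are sorted alphabetically by module path. A small sketch of that ordering rule, using the import lines from the file above:

```python
# The pre-diff import block, in its original order.
imports = [
    "from translate.tools import pypo2phppo",
    "from translate.convert import test_convert",
    "from translate.misc import wStringIO",
]

# Plain lexicographic sorting reproduces the post-diff order:
# convert, then misc, then tools.
normalized = sorted(imports)
for line in normalized:
    print(line)
```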
diff --git a/PKG-INFO b/translate_toolkit.egg-info/PKG-INFO
similarity index 93%
copy from PKG-INFO
copy to translate_toolkit.egg-info/PKG-INFO
index 647b960..22d7624 100644
--- a/PKG-INFO
+++ b/translate_toolkit.egg-info/PKG-INFO
@@ -1,12 +1,12 @@
 Metadata-Version: 1.0
 Name: translate-toolkit
-Version: 1.11.0
+Version: 1.12.0
 Summary: Tools and API for translation and localization engineering.
 Home-page: http://toolkit.translatehouse.org/
 Author: Translate
 Author-email: translate-devel at lists.sourceforge.net
 License: GNU General Public License (GPL)
-Download-URL: http://sourceforge.net/projects/translate/files/Translate Toolkit/1.11.0
+Download-URL: http://sourceforge.net/projects/translate/files/Translate Toolkit/1.12.0
 Description: 
         The `Translate Toolkit <http://toolkit.translatehouse.org/>`_ is created by
         localizers for localizers. It contains several utilities, as well as an API for
@@ -36,9 +36,11 @@ Classifier: Development Status :: 5 - Production/Stable
 Classifier: Environment :: Console
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: GNU General Public License (GPL)
-Classifier: Programming Language :: Python
-Classifier: Topic :: Software Development :: Localization
-Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: Operating System :: OS Independent
 Classifier: Operating System :: Microsoft :: Windows
 Classifier: Operating System :: Unix
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Software Development :: Localization
diff --git a/translate_toolkit.egg-info/SOURCES.txt b/translate_toolkit.egg-info/SOURCES.txt
new file mode 100644
index 0000000..a03aa09
--- /dev/null
+++ b/translate_toolkit.egg-info/SOURCES.txt
@@ -0,0 +1,1150 @@
+COPYING
+MANIFEST.in
+README.rst
+min-required.txt
+requirements.txt
+setup.py
+docs/Makefile
+docs/changelog.rst
+docs/conf.py
+docs/contents.rst.inc
+docs/features.rst
+docs/history.rst
+docs/index.rst
+docs/installation.rst
+docs/license.rst
+docs/make.bat
+docs/_build/doctrees/changelog.doctree
+docs/_build/doctrees/environment.pickle
+docs/_build/doctrees/features.doctree
+docs/_build/doctrees/history.doctree
+docs/_build/doctrees/index.doctree
+docs/_build/doctrees/installation.doctree
+docs/_build/doctrees/license.doctree
+docs/_build/doctrees/api/convert.doctree
+docs/_build/doctrees/api/filters.doctree
+docs/_build/doctrees/api/index.doctree
+docs/_build/doctrees/api/lang.doctree
+docs/_build/doctrees/api/misc.doctree
+docs/_build/doctrees/api/search.doctree
+docs/_build/doctrees/api/services.doctree
+docs/_build/doctrees/api/storage.doctree
+docs/_build/doctrees/api/tools.doctree
+docs/_build/doctrees/commands/csv2po.doctree
+docs/_build/doctrees/commands/csv2tbx.doctree
+docs/_build/doctrees/commands/general_usage.doctree
+docs/_build/doctrees/commands/html2po.doctree
+docs/_build/doctrees/commands/ical2po.doctree
+docs/_build/doctrees/commands/index.doctree
+docs/_build/doctrees/commands/ini2po.doctree
+docs/_build/doctrees/commands/json2po.doctree
+docs/_build/doctrees/commands/junitmsgfmt.doctree
+docs/_build/doctrees/commands/levenshtein_distance.doctree
+docs/_build/doctrees/commands/moz-l10n-builder.doctree
+docs/_build/doctrees/commands/moz2po.doctree
+docs/_build/doctrees/commands/mozilla_l10n_scripts.doctree
+docs/_build/doctrees/commands/odf2xliff.doctree
+docs/_build/doctrees/commands/oo2po.doctree
+docs/_build/doctrees/commands/option_accelerator.doctree
+docs/_build/doctrees/commands/option_duplicates.doctree
+docs/_build/doctrees/commands/option_errorlevel.doctree
+docs/_build/doctrees/commands/option_filteraction.doctree
+docs/_build/doctrees/commands/option_multifile.doctree
+docs/_build/doctrees/commands/option_personality.doctree
+docs/_build/doctrees/commands/option_progress.doctree
+docs/_build/doctrees/commands/option_rewrite.doctree
+docs/_build/doctrees/commands/phase.doctree
+docs/_build/doctrees/commands/php2po.doctree
+docs/_build/doctrees/commands/po2tmx.doctree
+docs/_build/doctrees/commands/po2wordfast.doctree
+docs/_build/doctrees/commands/poclean.doctree
+docs/_build/doctrees/commands/pocommentclean.doctree
+docs/_build/doctrees/commands/pocompendium.doctree
+docs/_build/doctrees/commands/pocompile.doctree
+docs/_build/doctrees/commands/poconflicts.doctree
+docs/_build/doctrees/commands/pocount.doctree
+docs/_build/doctrees/commands/podebug.doctree
+docs/_build/doctrees/commands/pofilter.doctree
+docs/_build/doctrees/commands/pofilter_tests.doctree
+docs/_build/doctrees/commands/pogrep.doctree
+docs/_build/doctrees/commands/pomerge.doctree
+docs/_build/doctrees/commands/pomigrate2.doctree
+docs/_build/doctrees/commands/popuretext.doctree
+docs/_build/doctrees/commands/poreencode.doctree
+docs/_build/doctrees/commands/porestructure.doctree
+docs/_build/doctrees/commands/posegment.doctree
+docs/_build/doctrees/commands/posplit.doctree
+docs/_build/doctrees/commands/poswap.doctree
+docs/_build/doctrees/commands/pot2po.doctree
+docs/_build/doctrees/commands/poterminology.doctree
+docs/_build/doctrees/commands/poterminology_stopword_file.doctree
+docs/_build/doctrees/commands/pretranslate.doctree
+docs/_build/doctrees/commands/prop2po.doctree
+docs/_build/doctrees/commands/rc2po.doctree
+docs/_build/doctrees/commands/sub2po.doctree
+docs/_build/doctrees/commands/symb2po.doctree
+docs/_build/doctrees/commands/tiki2po.doctree
+docs/_build/doctrees/commands/tmserver.doctree
+docs/_build/doctrees/commands/ts2po.doctree
+docs/_build/doctrees/commands/txt2po.doctree
+docs/_build/doctrees/commands/web2py2po.doctree
+docs/_build/doctrees/commands/xliff2po.doctree
+docs/_build/doctrees/developers/building.doctree
+docs/_build/doctrees/developers/contributing.doctree
+docs/_build/doctrees/developers/deprecation.doctree
+docs/_build/doctrees/developers/developers.doctree
+docs/_build/doctrees/developers/releasing.doctree
+docs/_build/doctrees/developers/styleguide.doctree
+docs/_build/doctrees/developers/testing.doctree
+docs/_build/doctrees/formats/android.doctree
+docs/_build/doctrees/formats/base_classes.doctree
+docs/_build/doctrees/formats/catkeys.doctree
+docs/_build/doctrees/formats/conformance.doctree
+docs/_build/doctrees/formats/csv.doctree
+docs/_build/doctrees/formats/dtd.doctree
+docs/_build/doctrees/formats/flex.doctree
+docs/_build/doctrees/formats/gsi.doctree
+docs/_build/doctrees/formats/html.doctree
+docs/_build/doctrees/formats/ical.doctree
+docs/_build/doctrees/formats/index.doctree
+docs/_build/doctrees/formats/ini.doctree
+docs/_build/doctrees/formats/json.doctree
+docs/_build/doctrees/formats/l20n.doctree
+docs/_build/doctrees/formats/mo.doctree
+docs/_build/doctrees/formats/odf.doctree
+docs/_build/doctrees/formats/omegat_glossary.doctree
+docs/_build/doctrees/formats/php.doctree
+docs/_build/doctrees/formats/po.doctree
+docs/_build/doctrees/formats/properties.doctree
+docs/_build/doctrees/formats/qm.doctree
+docs/_build/doctrees/formats/qt_phrase_book.doctree
+docs/_build/doctrees/formats/quoting_and_escaping.doctree
+docs/_build/doctrees/formats/rc.doctree
+docs/_build/doctrees/formats/strings.doctree
+docs/_build/doctrees/formats/subtitles.doctree
+docs/_build/doctrees/formats/tbx.doctree
+docs/_build/doctrees/formats/text.doctree
+docs/_build/doctrees/formats/tmx.doctree
+docs/_build/doctrees/formats/ts.doctree
+docs/_build/doctrees/formats/utx.doctree
+docs/_build/doctrees/formats/wiki.doctree
+docs/_build/doctrees/formats/wml.doctree
+docs/_build/doctrees/formats/wordfast.doctree
+docs/_build/doctrees/formats/xliff.doctree
+docs/_build/doctrees/guides/checking_for_inconsistencies.doctree
+docs/_build/doctrees/guides/cleanup_translator_comments.doctree
+docs/_build/doctrees/guides/creating_a_terminology_list_from_your_existing_translations.doctree
+docs/_build/doctrees/guides/creating_mozilla_pot_files.doctree
+docs/_build/doctrees/guides/document_translation.doctree
+docs/_build/doctrees/guides/index.doctree
+docs/_build/doctrees/guides/migrating_translations.doctree
+docs/_build/doctrees/guides/running_the_tools_on_microsoft_windows.doctree
+docs/_build/doctrees/guides/using_csv2po.doctree
+docs/_build/doctrees/guides/using_oo2po.doctree
+docs/_build/doctrees/guides/using_pofilter.doctree
+docs/_build/doctrees/releases/1.10.0.doctree
+docs/_build/doctrees/releases/1.11.0-rc1.doctree
+docs/_build/doctrees/releases/1.11.0.doctree
+docs/_build/doctrees/releases/1.12.0-rc1.doctree
+docs/_build/doctrees/releases/1.12.0.doctree
+docs/_build/doctrees/releases/1.8.1.doctree
+docs/_build/doctrees/releases/1.9.0.doctree
+docs/_build/doctrees/releases/dev.doctree
+docs/_build/doctrees/releases/index.doctree
+docs/_build/html/.buildinfo
+docs/_build/html/changelog.html
+docs/_build/html/features.html
+docs/_build/html/genindex.html
+docs/_build/html/history.html
+docs/_build/html/index.html
+docs/_build/html/installation.html
+docs/_build/html/license.html
+docs/_build/html/objects.inv
+docs/_build/html/py-modindex.html
+docs/_build/html/search.html
+docs/_build/html/searchindex.js
+docs/_build/html/_sources/changelog.txt
+docs/_build/html/_sources/features.txt
+docs/_build/html/_sources/history.txt
+docs/_build/html/_sources/index.txt
+docs/_build/html/_sources/installation.txt
+docs/_build/html/_sources/license.txt
+docs/_build/html/_sources/api/convert.txt
+docs/_build/html/_sources/api/filters.txt
+docs/_build/html/_sources/api/index.txt
+docs/_build/html/_sources/api/lang.txt
+docs/_build/html/_sources/api/misc.txt
+docs/_build/html/_sources/api/search.txt
+docs/_build/html/_sources/api/services.txt
+docs/_build/html/_sources/api/storage.txt
+docs/_build/html/_sources/api/tools.txt
+docs/_build/html/_sources/commands/csv2po.txt
+docs/_build/html/_sources/commands/csv2tbx.txt
+docs/_build/html/_sources/commands/general_usage.txt
+docs/_build/html/_sources/commands/html2po.txt
+docs/_build/html/_sources/commands/ical2po.txt
+docs/_build/html/_sources/commands/index.txt
+docs/_build/html/_sources/commands/ini2po.txt
+docs/_build/html/_sources/commands/json2po.txt
+docs/_build/html/_sources/commands/junitmsgfmt.txt
+docs/_build/html/_sources/commands/levenshtein_distance.txt
+docs/_build/html/_sources/commands/moz-l10n-builder.txt
+docs/_build/html/_sources/commands/moz2po.txt
+docs/_build/html/_sources/commands/mozilla_l10n_scripts.txt
+docs/_build/html/_sources/commands/odf2xliff.txt
+docs/_build/html/_sources/commands/oo2po.txt
+docs/_build/html/_sources/commands/option_accelerator.txt
+docs/_build/html/_sources/commands/option_duplicates.txt
+docs/_build/html/_sources/commands/option_errorlevel.txt
+docs/_build/html/_sources/commands/option_filteraction.txt
+docs/_build/html/_sources/commands/option_multifile.txt
+docs/_build/html/_sources/commands/option_personality.txt
+docs/_build/html/_sources/commands/option_progress.txt
+docs/_build/html/_sources/commands/option_rewrite.txt
+docs/_build/html/_sources/commands/phase.txt
+docs/_build/html/_sources/commands/php2po.txt
+docs/_build/html/_sources/commands/po2tmx.txt
+docs/_build/html/_sources/commands/po2wordfast.txt
+docs/_build/html/_sources/commands/poclean.txt
+docs/_build/html/_sources/commands/pocommentclean.txt
+docs/_build/html/_sources/commands/pocompendium.txt
+docs/_build/html/_sources/commands/pocompile.txt
+docs/_build/html/_sources/commands/poconflicts.txt
+docs/_build/html/_sources/commands/pocount.txt
+docs/_build/html/_sources/commands/podebug.txt
+docs/_build/html/_sources/commands/pofilter.txt
+docs/_build/html/_sources/commands/pofilter_tests.txt
+docs/_build/html/_sources/commands/pogrep.txt
+docs/_build/html/_sources/commands/pomerge.txt
+docs/_build/html/_sources/commands/pomigrate2.txt
+docs/_build/html/_sources/commands/popuretext.txt
+docs/_build/html/_sources/commands/poreencode.txt
+docs/_build/html/_sources/commands/porestructure.txt
+docs/_build/html/_sources/commands/posegment.txt
+docs/_build/html/_sources/commands/posplit.txt
+docs/_build/html/_sources/commands/poswap.txt
+docs/_build/html/_sources/commands/pot2po.txt
+docs/_build/html/_sources/commands/poterminology.txt
+docs/_build/html/_sources/commands/poterminology_stopword_file.txt
+docs/_build/html/_sources/commands/pretranslate.txt
+docs/_build/html/_sources/commands/prop2po.txt
+docs/_build/html/_sources/commands/rc2po.txt
+docs/_build/html/_sources/commands/sub2po.txt
+docs/_build/html/_sources/commands/symb2po.txt
+docs/_build/html/_sources/commands/tiki2po.txt
+docs/_build/html/_sources/commands/tmserver.txt
+docs/_build/html/_sources/commands/ts2po.txt
+docs/_build/html/_sources/commands/txt2po.txt
+docs/_build/html/_sources/commands/web2py2po.txt
+docs/_build/html/_sources/commands/xliff2po.txt
+docs/_build/html/_sources/developers/building.txt
+docs/_build/html/_sources/developers/contributing.txt
+docs/_build/html/_sources/developers/deprecation.txt
+docs/_build/html/_sources/developers/developers.txt
+docs/_build/html/_sources/developers/releasing.txt
+docs/_build/html/_sources/developers/styleguide.txt
+docs/_build/html/_sources/developers/testing.txt
+docs/_build/html/_sources/formats/android.txt
+docs/_build/html/_sources/formats/base_classes.txt
+docs/_build/html/_sources/formats/catkeys.txt
+docs/_build/html/_sources/formats/conformance.txt
+docs/_build/html/_sources/formats/csv.txt
+docs/_build/html/_sources/formats/dtd.txt
+docs/_build/html/_sources/formats/flex.txt
+docs/_build/html/_sources/formats/gsi.txt
+docs/_build/html/_sources/formats/html.txt
+docs/_build/html/_sources/formats/ical.txt
+docs/_build/html/_sources/formats/index.txt
+docs/_build/html/_sources/formats/ini.txt
+docs/_build/html/_sources/formats/json.txt
+docs/_build/html/_sources/formats/l20n.txt
+docs/_build/html/_sources/formats/mo.txt
+docs/_build/html/_sources/formats/odf.txt
+docs/_build/html/_sources/formats/omegat_glossary.txt
+docs/_build/html/_sources/formats/php.txt
+docs/_build/html/_sources/formats/po.txt
+docs/_build/html/_sources/formats/properties.txt
+docs/_build/html/_sources/formats/qm.txt
+docs/_build/html/_sources/formats/qt_phrase_book.txt
+docs/_build/html/_sources/formats/quoting_and_escaping.txt
+docs/_build/html/_sources/formats/rc.txt
+docs/_build/html/_sources/formats/strings.txt
+docs/_build/html/_sources/formats/subtitles.txt
+docs/_build/html/_sources/formats/tbx.txt
+docs/_build/html/_sources/formats/text.txt
+docs/_build/html/_sources/formats/tmx.txt
+docs/_build/html/_sources/formats/ts.txt
+docs/_build/html/_sources/formats/utx.txt
+docs/_build/html/_sources/formats/wiki.txt
+docs/_build/html/_sources/formats/wml.txt
+docs/_build/html/_sources/formats/wordfast.txt
+docs/_build/html/_sources/formats/xliff.txt
+docs/_build/html/_sources/guides/checking_for_inconsistencies.txt
+docs/_build/html/_sources/guides/cleanup_translator_comments.txt
+docs/_build/html/_sources/guides/creating_a_terminology_list_from_your_existing_translations.txt
+docs/_build/html/_sources/guides/creating_mozilla_pot_files.txt
+docs/_build/html/_sources/guides/document_translation.txt
+docs/_build/html/_sources/guides/index.txt
+docs/_build/html/_sources/guides/migrating_translations.txt
+docs/_build/html/_sources/guides/running_the_tools_on_microsoft_windows.txt
+docs/_build/html/_sources/guides/using_csv2po.txt
+docs/_build/html/_sources/guides/using_oo2po.txt
+docs/_build/html/_sources/guides/using_pofilter.txt
+docs/_build/html/_sources/releases/1.10.0.txt
+docs/_build/html/_sources/releases/1.11.0-rc1.txt
+docs/_build/html/_sources/releases/1.11.0.txt
+docs/_build/html/_sources/releases/1.12.0-rc1.txt
+docs/_build/html/_sources/releases/1.12.0.txt
+docs/_build/html/_sources/releases/1.8.1.txt
+docs/_build/html/_sources/releases/1.9.0.txt
+docs/_build/html/_sources/releases/dev.txt
+docs/_build/html/_sources/releases/index.txt
+docs/_build/html/_static/README.txt
+docs/_build/html/_static/ajax-loader.gif
+docs/_build/html/_static/basic.css
+docs/_build/html/_static/bootstrap-responsive.css
+docs/_build/html/_static/bootstrap-sphinx.css
+docs/_build/html/_static/bootstrap-sphinx.js
+docs/_build/html/_static/bootstrap.css
+docs/_build/html/_static/bootstrap.js
+docs/_build/html/_static/comment-bright.png
+docs/_build/html/_static/comment-close.png
+docs/_build/html/_static/comment.png
+docs/_build/html/_static/doctools.js
+docs/_build/html/_static/down-pressed.png
+docs/_build/html/_static/down.png
+docs/_build/html/_static/file.png
+docs/_build/html/_static/jquery.js
+docs/_build/html/_static/minus.png
+docs/_build/html/_static/plus.png
+docs/_build/html/_static/pygments.css
+docs/_build/html/_static/searchtools.js
+docs/_build/html/_static/underscore.js
+docs/_build/html/_static/up-pressed.png
+docs/_build/html/_static/up.png
+docs/_build/html/_static/websupport.js
+docs/_build/html/_static/font/fontawesome-webfont.eot
+docs/_build/html/_static/font/fontawesome-webfont.svg
+docs/_build/html/_static/font/fontawesome-webfont.ttf
+docs/_build/html/_static/font/fontawesome-webfont.woff
+docs/_build/html/_static/less/font-awesome.less
+docs/_build/html/_static/less/theme.less
+docs/_build/html/_static/less/variables.less
+docs/_build/html/api/convert.html
+docs/_build/html/api/filters.html
+docs/_build/html/api/index.html
+docs/_build/html/api/lang.html
+docs/_build/html/api/misc.html
+docs/_build/html/api/search.html
+docs/_build/html/api/services.html
+docs/_build/html/api/storage.html
+docs/_build/html/api/tools.html
+docs/_build/html/commands/csv2po.html
+docs/_build/html/commands/csv2tbx.html
+docs/_build/html/commands/general_usage.html
+docs/_build/html/commands/html2po.html
+docs/_build/html/commands/ical2po.html
+docs/_build/html/commands/index.html
+docs/_build/html/commands/ini2po.html
+docs/_build/html/commands/json2po.html
+docs/_build/html/commands/junitmsgfmt.html
+docs/_build/html/commands/levenshtein_distance.html
+docs/_build/html/commands/moz-l10n-builder.html
+docs/_build/html/commands/moz2po.html
+docs/_build/html/commands/mozilla_l10n_scripts.html
+docs/_build/html/commands/odf2xliff.html
+docs/_build/html/commands/oo2po.html
+docs/_build/html/commands/option_accelerator.html
+docs/_build/html/commands/option_duplicates.html
+docs/_build/html/commands/option_errorlevel.html
+docs/_build/html/commands/option_filteraction.html
+docs/_build/html/commands/option_multifile.html
+docs/_build/html/commands/option_personality.html
+docs/_build/html/commands/option_progress.html
+docs/_build/html/commands/option_rewrite.html
+docs/_build/html/commands/phase.html
+docs/_build/html/commands/php2po.html
+docs/_build/html/commands/po2tmx.html
+docs/_build/html/commands/po2wordfast.html
+docs/_build/html/commands/poclean.html
+docs/_build/html/commands/pocommentclean.html
+docs/_build/html/commands/pocompendium.html
+docs/_build/html/commands/pocompile.html
+docs/_build/html/commands/poconflicts.html
+docs/_build/html/commands/pocount.html
+docs/_build/html/commands/podebug.html
+docs/_build/html/commands/pofilter.html
+docs/_build/html/commands/pofilter_tests.html
+docs/_build/html/commands/pogrep.html
+docs/_build/html/commands/pomerge.html
+docs/_build/html/commands/pomigrate2.html
+docs/_build/html/commands/popuretext.html
+docs/_build/html/commands/poreencode.html
+docs/_build/html/commands/porestructure.html
+docs/_build/html/commands/posegment.html
+docs/_build/html/commands/posplit.html
+docs/_build/html/commands/poswap.html
+docs/_build/html/commands/pot2po.html
+docs/_build/html/commands/poterminology.html
+docs/_build/html/commands/poterminology_stopword_file.html
+docs/_build/html/commands/pretranslate.html
+docs/_build/html/commands/prop2po.html
+docs/_build/html/commands/rc2po.html
+docs/_build/html/commands/sub2po.html
+docs/_build/html/commands/symb2po.html
+docs/_build/html/commands/tiki2po.html
+docs/_build/html/commands/tmserver.html
+docs/_build/html/commands/ts2po.html
+docs/_build/html/commands/txt2po.html
+docs/_build/html/commands/web2py2po.html
+docs/_build/html/commands/xliff2po.html
+docs/_build/html/developers/building.html
+docs/_build/html/developers/contributing.html
+docs/_build/html/developers/deprecation.html
+docs/_build/html/developers/developers.html
+docs/_build/html/developers/releasing.html
+docs/_build/html/developers/styleguide.html
+docs/_build/html/developers/testing.html
+docs/_build/html/formats/android.html
+docs/_build/html/formats/base_classes.html
+docs/_build/html/formats/catkeys.html
+docs/_build/html/formats/conformance.html
+docs/_build/html/formats/csv.html
+docs/_build/html/formats/dtd.html
+docs/_build/html/formats/flex.html
+docs/_build/html/formats/gsi.html
+docs/_build/html/formats/html.html
+docs/_build/html/formats/ical.html
+docs/_build/html/formats/index.html
+docs/_build/html/formats/ini.html
+docs/_build/html/formats/json.html
+docs/_build/html/formats/l20n.html
+docs/_build/html/formats/mo.html
+docs/_build/html/formats/odf.html
+docs/_build/html/formats/omegat_glossary.html
+docs/_build/html/formats/php.html
+docs/_build/html/formats/po.html
+docs/_build/html/formats/properties.html
+docs/_build/html/formats/qm.html
+docs/_build/html/formats/qt_phrase_book.html
+docs/_build/html/formats/quoting_and_escaping.html
+docs/_build/html/formats/rc.html
+docs/_build/html/formats/strings.html
+docs/_build/html/formats/subtitles.html
+docs/_build/html/formats/tbx.html
+docs/_build/html/formats/text.html
+docs/_build/html/formats/tmx.html
+docs/_build/html/formats/ts.html
+docs/_build/html/formats/utx.html
+docs/_build/html/formats/wiki.html
+docs/_build/html/formats/wml.html
+docs/_build/html/formats/wordfast.html
+docs/_build/html/formats/xliff.html
+docs/_build/html/guides/checking_for_inconsistencies.html
+docs/_build/html/guides/cleanup_translator_comments.html
+docs/_build/html/guides/creating_a_terminology_list_from_your_existing_translations.html
+docs/_build/html/guides/creating_mozilla_pot_files.html
+docs/_build/html/guides/document_translation.html
+docs/_build/html/guides/index.html
+docs/_build/html/guides/migrating_translations.html
+docs/_build/html/guides/running_the_tools_on_microsoft_windows.html
+docs/_build/html/guides/using_csv2po.html
+docs/_build/html/guides/using_oo2po.html
+docs/_build/html/guides/using_pofilter.html
+docs/_build/html/releases/1.10.0.html
+docs/_build/html/releases/1.11.0-rc1.html
+docs/_build/html/releases/1.11.0.html
+docs/_build/html/releases/1.12.0-rc1.html
+docs/_build/html/releases/1.12.0.html
+docs/_build/html/releases/1.8.1.html
+docs/_build/html/releases/1.9.0.html
+docs/_build/html/releases/dev.html
+docs/_build/html/releases/index.html
+docs/_ext/translate_docs.py
+docs/_ext/translate_docs.pyc
+docs/_static/README.txt
+docs/_themes/.git
+docs/_themes/.gitignore
+docs/_themes/README.rst
+docs/_themes/sphinx-bootstrap/globaltoc.html
+docs/_themes/sphinx-bootstrap/layout.html
+docs/_themes/sphinx-bootstrap/localtoc.html
+docs/_themes/sphinx-bootstrap/relations.html
+docs/_themes/sphinx-bootstrap/search.html
+docs/_themes/sphinx-bootstrap/searchbox.html
+docs/_themes/sphinx-bootstrap/sourcelink.html
+docs/_themes/sphinx-bootstrap/theme.conf
+docs/_themes/sphinx-bootstrap/static/bootstrap-responsive.css
+docs/_themes/sphinx-bootstrap/static/bootstrap-sphinx.css_t
+docs/_themes/sphinx-bootstrap/static/bootstrap-sphinx.js
+docs/_themes/sphinx-bootstrap/static/bootstrap.css
+docs/_themes/sphinx-bootstrap/static/bootstrap.js
+docs/_themes/sphinx-bootstrap/static/jquery.js
+docs/_themes/sphinx-bootstrap/static/font/fontawesome-webfont.eot
+docs/_themes/sphinx-bootstrap/static/font/fontawesome-webfont.svg
+docs/_themes/sphinx-bootstrap/static/font/fontawesome-webfont.ttf
+docs/_themes/sphinx-bootstrap/static/font/fontawesome-webfont.woff
+docs/_themes/sphinx-bootstrap/static/less/font-awesome.less
+docs/_themes/sphinx-bootstrap/static/less/theme.less
+docs/_themes/sphinx-bootstrap/static/less/variables.less
+docs/api/convert.rst
+docs/api/filters.rst
+docs/api/index.rst
+docs/api/lang.rst
+docs/api/misc.rst
+docs/api/search.rst
+docs/api/services.rst
+docs/api/storage.rst
+docs/api/tools.rst
+docs/commands/csv2po.rst
+docs/commands/csv2tbx.rst
+docs/commands/general_usage.rst
+docs/commands/html2po.rst
+docs/commands/ical2po.rst
+docs/commands/index.rst
+docs/commands/ini2po.rst
+docs/commands/json2po.rst
+docs/commands/junitmsgfmt.rst
+docs/commands/levenshtein_distance.rst
+docs/commands/moz-l10n-builder.rst
+docs/commands/moz2po.rst
+docs/commands/mozilla_l10n_scripts.rst
+docs/commands/odf2xliff.rst
+docs/commands/oo2po.rst
+docs/commands/option_accelerator.rst
+docs/commands/option_duplicates.rst
+docs/commands/option_errorlevel.rst
+docs/commands/option_filteraction.rst
+docs/commands/option_multifile.rst
+docs/commands/option_personality.rst
+docs/commands/option_progress.rst
+docs/commands/option_rewrite.rst
+docs/commands/phase.rst
+docs/commands/php2po.rst
+docs/commands/po2tmx.rst
+docs/commands/po2wordfast.rst
+docs/commands/poclean.rst
+docs/commands/pocommentclean.rst
+docs/commands/pocompendium.rst
+docs/commands/pocompile.rst
+docs/commands/poconflicts.rst
+docs/commands/pocount.rst
+docs/commands/podebug.rst
+docs/commands/pofilter.rst
+docs/commands/pofilter_tests.rst
+docs/commands/pogrep.rst
+docs/commands/pomerge.rst
+docs/commands/pomigrate2.rst
+docs/commands/popuretext.rst
+docs/commands/poreencode.rst
+docs/commands/porestructure.rst
+docs/commands/posegment.rst
+docs/commands/posplit.rst
+docs/commands/poswap.rst
+docs/commands/pot2po.rst
+docs/commands/poterminology.rst
+docs/commands/poterminology_stopword_file.rst
+docs/commands/pretranslate.rst
+docs/commands/prop2po.rst
+docs/commands/rc2po.rst
+docs/commands/sub2po.rst
+docs/commands/symb2po.rst
+docs/commands/tiki2po.rst
+docs/commands/tmserver.rst
+docs/commands/ts2po.rst
+docs/commands/txt2po.rst
+docs/commands/web2py2po.rst
+docs/commands/xliff2po.rst
+docs/developers/building.rst
+docs/developers/contributing.rst
+docs/developers/deprecation.rst
+docs/developers/developers.rst
+docs/developers/releasing.rst
+docs/developers/styleguide.rst
+docs/developers/testing.rst
+docs/formats/android.rst
+docs/formats/base_classes.rst
+docs/formats/catkeys.rst
+docs/formats/conformance.rst
+docs/formats/csv.rst
+docs/formats/dtd.rst
+docs/formats/flex.rst
+docs/formats/gsi.rst
+docs/formats/html.rst
+docs/formats/ical.rst
+docs/formats/index.rst
+docs/formats/ini.rst
+docs/formats/json.rst
+docs/formats/l20n.rst
+docs/formats/mo.rst
+docs/formats/odf.rst
+docs/formats/omegat_glossary.rst
+docs/formats/php.rst
+docs/formats/po.rst
+docs/formats/properties.rst
+docs/formats/qm.rst
+docs/formats/qt_phrase_book.rst
+docs/formats/quoting_and_escaping.rst
+docs/formats/rc.rst
+docs/formats/strings.rst
+docs/formats/subtitles.rst
+docs/formats/tbx.rst
+docs/formats/text.rst
+docs/formats/tmx.rst
+docs/formats/ts.rst
+docs/formats/utx.rst
+docs/formats/wiki.rst
+docs/formats/wml.rst
+docs/formats/wordfast.rst
+docs/formats/xliff.rst
+docs/guides/checking_for_inconsistencies.rst
+docs/guides/cleanup_translator_comments.rst
+docs/guides/creating_a_terminology_list_from_your_existing_translations.rst
+docs/guides/creating_mozilla_pot_files.rst
+docs/guides/document_translation.rst
+docs/guides/index.rst
+docs/guides/migrating_translations.rst
+docs/guides/running_the_tools_on_microsoft_windows.rst
+docs/guides/using_csv2po.rst
+docs/guides/using_oo2po.rst
+docs/guides/using_pofilter.rst
+docs/releases/1.10.0.rst
+docs/releases/1.11.0-rc1.rst
+docs/releases/1.11.0.rst
+docs/releases/1.12.0-rc1.rst
+docs/releases/1.12.0.rst
+docs/releases/1.8.1.rst
+docs/releases/1.9.0.rst
+docs/releases/README.rst
+docs/releases/dev.rst
+docs/releases/index.rst
+requirements/dev.txt
+requirements/optional.txt
+requirements/recommended.txt
+requirements/required.txt
+share/stoplist-en
+share/langmodels/Ndebele.lm
+share/langmodels/NorthernSotho.lm
+share/langmodels/README
+share/langmodels/Sotho.lm
+share/langmodels/Swati.lm
+share/langmodels/Tsonga.lm
+share/langmodels/Tswana.lm
+share/langmodels/Venda.lm
+share/langmodels/Xhosa.lm
+share/langmodels/Zulu.lm
+share/langmodels/afrikaans.lm
+share/langmodels/albanian.lm
+share/langmodels/arabic.lm
+share/langmodels/basque.lm
+share/langmodels/belarus.lm
+share/langmodels/bosnian.lm
+share/langmodels/breton.lm
+share/langmodels/catalan.lm
+share/langmodels/chinese_simplified.lm
+share/langmodels/chinese_traditional.lm
+share/langmodels/croatian.lm
+share/langmodels/czech.lm
+share/langmodels/danish.lm
+share/langmodels/dutch.lm
+share/langmodels/english.lm
+share/langmodels/esperanto.lm
+share/langmodels/estonian.lm
+share/langmodels/finnish.lm
+share/langmodels/fpdb.conf
+share/langmodels/french.lm
+share/langmodels/frisian.lm
+share/langmodels/german.lm
+share/langmodels/greek.lm
+share/langmodels/hebrew.lm
+share/langmodels/hungarian.lm
+share/langmodels/icelandic.lm
+share/langmodels/indonesian.lm
+share/langmodels/irish_gaelic.lm
+share/langmodels/italian.lm
+share/langmodels/japanese.lm
+share/langmodels/latin.lm
+share/langmodels/latvian.lm
+share/langmodels/lithuanian.lm
+share/langmodels/malay.lm
+share/langmodels/manx_gaelic.lm
+share/langmodels/norwegian.lm
+share/langmodels/polish.lm
+share/langmodels/portuguese.lm
+share/langmodels/quechua.lm
+share/langmodels/romanian.lm
+share/langmodels/romansh.lm
+share/langmodels/russian.lm
+share/langmodels/scots.lm
+share/langmodels/scots_gaelic.lm
+share/langmodels/serbian_ascii.lm
+share/langmodels/slovak_ascii.lm
+share/langmodels/slovenian.lm
+share/langmodels/spanish.lm
+share/langmodels/swahili.lm
+share/langmodels/swedish.lm
+share/langmodels/tagalog.lm
+share/langmodels/turkish.lm
+share/langmodels/ukrainian.lm
+share/langmodels/vietnamese.lm
+share/langmodels/welsh.lm
+tests/cli/data/test_pocount/stderr.txt
+tests/cli/data/test_pocount_help/stdout.txt
+tests/cli/data/test_pocount_mutually_exclusive/stderr.txt
+tests/cli/data/test_pocount_nonexistant/stderr.txt
+tests/cli/data/test_pocount_po_file/stdout.txt
+tests/cli/data/test_pofilter_listfilters/stdout.txt
+tests/cli/data/test_pofilter_manpage/stdout.txt
+tests/cli/data/test_prop2po/stderr.txt
+tests/cli/data/test_prop2po_dirs/stderr.txt
+tools/junitmsgfmt
+tools/pocommentclean
+tools/pocompendium
+tools/pomigrate2
+tools/popuretext
+tools/poreencode
+tools/posplit
+tools/mozilla/build_firefox.sh
+tools/mozilla/buildxpi.py
+tools/mozilla/get_moz_enUS.py
+translate/__init__.py
+translate/__version__.py
+translate/convert/__init__.py
+translate/convert/accesskey.py
+translate/convert/convert.py
+translate/convert/csv2po
+translate/convert/csv2po.py
+translate/convert/csv2tbx
+translate/convert/csv2tbx.py
+translate/convert/dtd2po.py
+translate/convert/factory.py
+translate/convert/html2po
+translate/convert/html2po.py
+translate/convert/ical2po
+translate/convert/ical2po.py
+translate/convert/ini2po
+translate/convert/ini2po.py
+translate/convert/json2po
+translate/convert/json2po.py
+translate/convert/moz2po
+translate/convert/moz2po.py
+translate/convert/mozfunny2prop.py
+translate/convert/mozlang2po.py
+translate/convert/odf2xliff
+translate/convert/odf2xliff.py
+translate/convert/oo2po
+translate/convert/oo2po.py
+translate/convert/oo2xliff
+translate/convert/oo2xliff.py
+translate/convert/php2po
+translate/convert/php2po.py
+translate/convert/po2csv
+translate/convert/po2csv.py
+translate/convert/po2dtd.py
+translate/convert/po2html
+translate/convert/po2html.py
+translate/convert/po2ical
+translate/convert/po2ical.py
+translate/convert/po2ini
+translate/convert/po2ini.py
+translate/convert/po2json
+translate/convert/po2json.py
+translate/convert/po2moz
+translate/convert/po2moz.py
+translate/convert/po2mozlang.py
+translate/convert/po2oo
+translate/convert/po2oo.py
+translate/convert/po2php
+translate/convert/po2php.py
+translate/convert/po2prop
+translate/convert/po2prop.py
+translate/convert/po2rc
+translate/convert/po2rc.py
+translate/convert/po2sub
+translate/convert/po2sub.py
+translate/convert/po2symb
+translate/convert/po2symb.py
+translate/convert/po2tiki
+translate/convert/po2tiki.py
+translate/convert/po2tmx
+translate/convert/po2tmx.py
+translate/convert/po2ts
+translate/convert/po2ts.py
+translate/convert/po2txt
+translate/convert/po2txt.py
+translate/convert/po2web2py
+translate/convert/po2web2py.py
+translate/convert/po2wordfast
+translate/convert/po2wordfast.py
+translate/convert/po2xliff
+translate/convert/po2xliff.py
+translate/convert/poreplace.py
+translate/convert/pot2po
+translate/convert/pot2po.py
+translate/convert/prop2mozfunny.py
+translate/convert/prop2po
+translate/convert/prop2po.py
+translate/convert/rc2po
+translate/convert/rc2po.py
+translate/convert/sub2po
+translate/convert/sub2po.py
+translate/convert/symb2po
+translate/convert/symb2po.py
+translate/convert/test_accesskey.py
+translate/convert/test_convert.py
+translate/convert/test_csv2po.py
+translate/convert/test_dtd2po.py
+translate/convert/test_html2po.py
+translate/convert/test_json2po.py
+translate/convert/test_moz2po.py
+translate/convert/test_mozfunny2prop.py
+translate/convert/test_mozlang2po.py
+translate/convert/test_oo2po.py
+translate/convert/test_oo2xliff.py
+translate/convert/test_php2po.py
+translate/convert/test_po2csv.py
+translate/convert/test_po2dtd.py
+translate/convert/test_po2html.py
+translate/convert/test_po2ical.py
+translate/convert/test_po2ini.py
+translate/convert/test_po2moz.py
+translate/convert/test_po2mozlang.py
+translate/convert/test_po2oo.py
+translate/convert/test_po2php.py
+translate/convert/test_po2prop.py
+translate/convert/test_po2sub.py
+translate/convert/test_po2tiki.py
+translate/convert/test_po2tmx.py
+translate/convert/test_po2ts.py
+translate/convert/test_po2txt.py
+translate/convert/test_po2xliff.py
+translate/convert/test_pot2po.py
+translate/convert/test_prop2mozfunny.py
+translate/convert/test_prop2po.py
+translate/convert/test_tiki2po.py
+translate/convert/test_ts2po.py
+translate/convert/test_txt2po.py
+translate/convert/test_xliff2po.py
+translate/convert/tiki2po
+translate/convert/tiki2po.py
+translate/convert/ts2po
+translate/convert/ts2po.py
+translate/convert/txt2po
+translate/convert/txt2po.py
+translate/convert/web2py2po
+translate/convert/web2py2po.py
+translate/convert/xliff2odf
+translate/convert/xliff2odf.py
+translate/convert/xliff2oo
+translate/convert/xliff2oo.py
+translate/convert/xliff2po
+translate/convert/xliff2po.py
+translate/filters/__init__.py
+translate/filters/autocorrect.py
+translate/filters/checks.py
+translate/filters/decoration.py
+translate/filters/decorators.py
+translate/filters/helpers.py
+translate/filters/pofilter
+translate/filters/pofilter.py
+translate/filters/prefilters.py
+translate/filters/spelling.py
+translate/filters/test_autocorrect.py
+translate/filters/test_checks.py
+translate/filters/test_decoration.py
+translate/filters/test_pofilter.py
+translate/filters/test_prefilters.py
+translate/lang/__init__.py
+translate/lang/af.py
+translate/lang/ak.py
+translate/lang/am.py
+translate/lang/ar.py
+translate/lang/az.py
+translate/lang/bn.py
+translate/lang/code_or.py
+translate/lang/common.py
+translate/lang/data.py
+translate/lang/de.py
+translate/lang/dz.py
+translate/lang/el.py
+translate/lang/es.py
+translate/lang/fa.py
+translate/lang/factory.py
+translate/lang/fi.py
+translate/lang/fr.py
+translate/lang/gd.py
+translate/lang/gu.py
+translate/lang/he.py
+translate/lang/hi.py
+translate/lang/hy.py
+translate/lang/identify.py
+translate/lang/ja.py
+translate/lang/km.py
+translate/lang/kn.py
+translate/lang/ko.py
+translate/lang/kw.py
+translate/lang/lo.py
+translate/lang/ml.py
+translate/lang/mr.py
+translate/lang/ms.py
+translate/lang/my.py
+translate/lang/ne.py
+translate/lang/ngram.py
+translate/lang/nqo.py
+translate/lang/nso.py
+translate/lang/pa.py
+translate/lang/poedit.py
+translate/lang/si.py
+translate/lang/son.py
+translate/lang/st.py
+translate/lang/su.py
+translate/lang/sv.py
+translate/lang/ta.py
+translate/lang/te.py
+translate/lang/team.py
+translate/lang/test_af.py
+translate/lang/test_am.py
+translate/lang/test_ar.py
+translate/lang/test_common.py
+translate/lang/test_data.py
+translate/lang/test_el.py
+translate/lang/test_es.py
+translate/lang/test_fa.py
+translate/lang/test_factory.py
+translate/lang/test_fr.py
+translate/lang/test_hy.py
+translate/lang/test_identify.py
+translate/lang/test_km.py
+translate/lang/test_ko.py
+translate/lang/test_ne.py
+translate/lang/test_nqo.py
+translate/lang/test_or.py
+translate/lang/test_poedit.py
+translate/lang/test_team.py
+translate/lang/test_th.py
+translate/lang/test_tr.py
+translate/lang/test_uk.py
+translate/lang/test_vi.py
+translate/lang/test_zh.py
+translate/lang/th.py
+translate/lang/tr.py
+translate/lang/ug.py
+translate/lang/ur.py
+translate/lang/ve.py
+translate/lang/vi.py
+translate/lang/wo.py
+translate/lang/zh.py
+translate/lang/zh_cn.py
+translate/lang/zh_hk.py
+translate/lang/zh_tw.py
+translate/misc/__init__.py
+translate/misc/autoencode.py
+translate/misc/deprecation.py
+translate/misc/dictutils.py
+translate/misc/diff_match_patch.py
+translate/misc/file_discovery.py
+translate/misc/lru.py
+translate/misc/multistring.py
+translate/misc/optrecurse.py
+translate/misc/ourdom.py
+translate/misc/progressbar.py
+translate/misc/quote.py
+translate/misc/selector.py
+translate/misc/sparse.py
+translate/misc/stdiotell.py
+translate/misc/test_autoencode.py
+translate/misc/test_dictutils.py
+translate/misc/test_multistring.py
+translate/misc/test_optrecurse.py
+translate/misc/test_progressbar.py
+translate/misc/test_quote.py
+translate/misc/wStringIO.py
+translate/misc/wsgi.py
+translate/misc/xml_helpers.py
+translate/misc/wsgiserver/LICENSE.txt
+translate/misc/wsgiserver/__init__.py
+translate/misc/wsgiserver/ssl_builtin.py
+translate/misc/wsgiserver/ssl_pyopenssl.py
+translate/misc/wsgiserver/wsgiserver2.py
+translate/misc/wsgiserver/wsgiserver3.py
+translate/search/__init__.py
+translate/search/lshtein.py
+translate/search/match.py
+translate/search/segment.py
+translate/search/terminology.py
+translate/search/test_lshtein.py
+translate/search/test_match.py
+translate/search/test_terminology.py
+translate/search/indexing/CommonIndexer.py
+translate/search/indexing/PyLuceneIndexer.py
+translate/search/indexing/PyLuceneIndexer1.py
+translate/search/indexing/XapianIndexer.py
+translate/search/indexing/__init__.py
+translate/search/indexing/test_indexers.py
+translate/services/__init__.py
+translate/services/tmserver
+translate/services/tmserver.py
+translate/storage/__init__.py
+translate/storage/_factory_classes.py
+translate/storage/aresource.py
+translate/storage/base.py
+translate/storage/benchmark.py
+translate/storage/bundleprojstore.py
+translate/storage/catkeys.py
+translate/storage/cpo.py
+translate/storage/csvl10n.py
+translate/storage/directory.py
+translate/storage/dtd.py
+translate/storage/factory.py
+translate/storage/fpo.py
+translate/storage/html.py
+translate/storage/ical.py
+translate/storage/ini.py
+translate/storage/jsonl10n.py
+translate/storage/lisa.py
+translate/storage/mo.py
+translate/storage/mozilla_lang.py
+translate/storage/odf_io.py
+translate/storage/odf_shared.py
+translate/storage/omegat.py
+translate/storage/oo.py
+translate/storage/php.py
+translate/storage/po.py
+translate/storage/pocommon.py
+translate/storage/poheader.py
+translate/storage/poparser.py
+translate/storage/poxliff.py
+translate/storage/project.py
+translate/storage/projstore.py
+translate/storage/properties.py
+translate/storage/pypo.py
+translate/storage/qm.py
+translate/storage/qph.py
+translate/storage/rc.py
+translate/storage/statistics.py
+translate/storage/statsdb.py
+translate/storage/subtitles.py
+translate/storage/symbian.py
+translate/storage/tbx.py
+translate/storage/test_aresource.py
+translate/storage/test_base.py
+translate/storage/test_catkeys.py
+translate/storage/test_cpo.py
+translate/storage/test_csvl10n.py
+translate/storage/test_directory.py
+translate/storage/test_dtd.py
+translate/storage/test_factory.py
+translate/storage/test_html.py
+translate/storage/test_mo.py
+translate/storage/test_monolingual.py
+translate/storage/test_mozilla_lang.py
+translate/storage/test_omegat.py
+translate/storage/test_oo.py
+translate/storage/test_php.py
+translate/storage/test_po.py
+translate/storage/test_pocommon.py
+translate/storage/test_poheader.py
+translate/storage/test_poxliff.py
+translate/storage/test_properties.py
+translate/storage/test_pypo.py
+translate/storage/test_qm.py
+translate/storage/test_qph.py
+translate/storage/test_rc.py
+translate/storage/test_statsdb.py
+translate/storage/test_tbx.py
+translate/storage/test_tiki.py
+translate/storage/test_tmx.py
+translate/storage/test_trados.py
+translate/storage/test_ts.py
+translate/storage/test_ts2.py
+translate/storage/test_txt.py
+translate/storage/test_utx.py
+translate/storage/test_wordfast.py
+translate/storage/test_xliff.py
+translate/storage/test_zip.py
+translate/storage/tiki.py
+translate/storage/tmdb.py
+translate/storage/tmx.py
+translate/storage/trados.py
+translate/storage/ts.py
+translate/storage/ts2.py
+translate/storage/txt.py
+translate/storage/utx.py
+translate/storage/wordfast.py
+translate/storage/workflow.py
+translate/storage/xliff.py
+translate/storage/xml_name.py
+translate/storage/zip.py
+translate/storage/placeables/__init__.py
+translate/storage/placeables/base.py
+translate/storage/placeables/general.py
+translate/storage/placeables/interfaces.py
+translate/storage/placeables/lisa.py
+translate/storage/placeables/parse.py
+translate/storage/placeables/strelem.py
+translate/storage/placeables/terminology.py
+translate/storage/placeables/test_base.py
+translate/storage/placeables/test_general.py
+translate/storage/placeables/test_lisa.py
+translate/storage/placeables/test_terminology.py
+translate/storage/placeables/xliff.py
+translate/storage/versioncontrol/__init__.py
+translate/storage/versioncontrol/bzr.py
+translate/storage/versioncontrol/cvs.py
+translate/storage/versioncontrol/darcs.py
+translate/storage/versioncontrol/git.py
+translate/storage/versioncontrol/hg.py
+translate/storage/versioncontrol/svn.py
+translate/storage/versioncontrol/test_helper.py
+translate/storage/versioncontrol/test_svn.py
+translate/storage/xml_extract/__init__.py
+translate/storage/xml_extract/extract.py
+translate/storage/xml_extract/generate.py
+translate/storage/xml_extract/misc.py
+translate/storage/xml_extract/test_misc.py
+translate/storage/xml_extract/test_unit_tree.py
+translate/storage/xml_extract/test_xpath_breadcrumb.py
+translate/storage/xml_extract/unit_tree.py
+translate/storage/xml_extract/xpath_breadcrumb.py
+translate/tools/__init__.py
+translate/tools/build_tmdb
+translate/tools/build_tmdb.py
+translate/tools/phppo2pypo.py
+translate/tools/poclean
+translate/tools/poclean.py
+translate/tools/pocompile
+translate/tools/pocompile.py
+translate/tools/poconflicts
+translate/tools/poconflicts.py
+translate/tools/pocount
+translate/tools/pocount.py
+translate/tools/podebug
+translate/tools/podebug.py
+translate/tools/pogrep
+translate/tools/pogrep.py
+translate/tools/pomerge
+translate/tools/pomerge.py
+translate/tools/porestructure
+translate/tools/porestructure.py
+translate/tools/posegment
+translate/tools/posegment.py
+translate/tools/poswap
+translate/tools/poswap.py
+translate/tools/poterminology
+translate/tools/poterminology.py
+translate/tools/pretranslate
+translate/tools/pretranslate.py
+translate/tools/pydiff.py
+translate/tools/pypo2phppo.py
+translate/tools/test_phppo2pypo.py
+translate/tools/test_pocount.py
+translate/tools/test_podebug.py
+translate/tools/test_pogrep.py
+translate/tools/test_pomerge.py
+translate/tools/test_pretranslate.py
+translate/tools/test_pypo2phppo.py
+translate_toolkit.egg-info/PKG-INFO
+translate_toolkit.egg-info/SOURCES.txt
+translate_toolkit.egg-info/dependency_links.txt
+translate_toolkit.egg-info/requires.txt
+translate_toolkit.egg-info/top_level.txt
\ No newline at end of file
diff --git a/translate_toolkit.egg-info/dependency_links.txt b/translate_toolkit.egg-info/dependency_links.txt
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/translate_toolkit.egg-info/dependency_links.txt
@@ -0,0 +1 @@
+
diff --git a/translate_toolkit.egg-info/requires.txt b/translate_toolkit.egg-info/requires.txt
new file mode 100644
index 0000000..b68c603
--- /dev/null
+++ b/translate_toolkit.egg-info/requires.txt
@@ -0,0 +1,3 @@
+argparse
+six
+diff-match-patch
\ No newline at end of file
diff --git a/translate_toolkit.egg-info/top_level.txt b/translate_toolkit.egg-info/top_level.txt
new file mode 100644
index 0000000..0671813
--- /dev/null
+++ b/translate_toolkit.egg-info/top_level.txt
@@ -0,0 +1 @@
+translate

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-l10n/translate-toolkit.git