[Debian-l10n-commits] translate-toolkit branch upstream updated. upstream/2.0.0_b6

Stuart Prescott stuart@moszumanska.debian.org
Thu Nov 3 06:50:00 UTC 2016


This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "translate-toolkit".

The branch, upstream has been updated
       via  632e63bef63c9be2fb6edb2542bcb356f83e01d6 (commit)
      from  93671ffe84cf7c4128cebfb423578284c8390712 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
-----------------------------------------------------------------------

Summary of changes:
 docs/conf.py                                    |   2 +-
 docs/releases/2.0.0b5.rst                       |   5 +-
 docs/releases/{2.0.0b5.rst => 2.0.0b6.rst}      |  28 ++---
 docs/releases/index.rst                         |   1 +
 requirements/dev.txt                            |   2 +-
 tests/cli/data/test_pofilter_manpage/stdout.txt |   2 +-
 translate/__version__.py                        |   4 +-
 translate/filters/checks.py                     |  65 ++++++++++-
 translate/filters/test_checks.py                |  56 ++++++++++
 translate/lang/bo.by                            |  32 ------
 translate/lang/bo.py                            |   3 +
 translate/lang/data.py                          |   2 +
 translate/storage/mozilla_lang.py               |  95 +++++++++++-----
 translate/storage/test_mozilla_lang.py          | 141 +++++++++++++++++++++++-
 14 files changed, 350 insertions(+), 88 deletions(-)
 copy docs/releases/{2.0.0b5.rst => 2.0.0b6.rst} (92%)
 delete mode 100644 translate/lang/bo.by

diff --git a/docs/conf.py b/docs/conf.py
index 16891e8..af08330 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -67,7 +67,7 @@ copyright = u'2002-2016, Translate'
 # The short X.Y version.
 version = '2.0.0'
 # The full version, including alpha/beta/rc tags.
-release = '2.0.0b5'
+release = '2.0.0b6'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
diff --git a/docs/releases/2.0.0b5.rst b/docs/releases/2.0.0b5.rst
index 6ee836e..27ae408 100644
--- a/docs/releases/2.0.0b5.rst
+++ b/docs/releases/2.0.0b5.rst
@@ -1,7 +1,7 @@
 Translate Toolkit 2.0.0b5
 *************************
 
-*Released on 29 August 2016*
+*Released on 30 August 2016*
 
 This release contains many improvements and bug fixes. While it contains many
 general improvements, it also specifically contains needed changes and
@@ -27,6 +27,7 @@ Major changes
 
 - Python 3 compatibility thanks to Claude Paroz
 - Dropped support for Python 2.6
+- Support for new l20n format
 - Translate Toolkit can now easily be installed on Windows
 - Changes in storage API to expose a more standardized API
 
@@ -84,7 +85,7 @@ Formats and Converters
 
 - Mozilla's l20n
 
-  - Added this new format storage classe
+  - Added this new format storage class
   - Added new converters :doc:`l20n2po </commands/l20n2po>` and
     :ref:`po2l20n <po2l20n>`
 
diff --git a/docs/releases/2.0.0b5.rst b/docs/releases/2.0.0b6.rst
similarity index 92%
copy from docs/releases/2.0.0b5.rst
copy to docs/releases/2.0.0b6.rst
index 6ee836e..df6fb22 100644
--- a/docs/releases/2.0.0b5.rst
+++ b/docs/releases/2.0.0b6.rst
@@ -1,7 +1,7 @@
-Translate Toolkit 2.0.0b5
+Translate Toolkit 2.0.0b6
 *************************
 
-*Released on 29 August 2016*
+*Released on 30 September 2016*
 
 This release contains many improvements and bug fixes. While it contains many
 general improvements, it also specifically contains needed changes and
@@ -9,17 +9,13 @@ optimizations for the upcoming `Pootle <http://pootle.translatehouse.org/>`_
 2.8.0 and `Virtaal <http://virtaal.translatehouse.org>`_ releases.
 
 
-2.0.0b5 vs 2.0.0b4
+2.0.0b6 vs 2.0.0b5
 ==================
 
-- Added new Mozilla's l20n format, including converters
-  :doc:`l20n2po </commands/l20n2po>` and :ref:`po2l20n <po2l20n>`
-- Added ical format support for Python 3
-- Mozilla's lang format always outputs last unit followed by trailing newlines
-- :ref:`po2prop <po2prop>` skips first entry in PO file (:issue:`3463`)
-- Added Danish valid accelerators characters (:issue:`3487`)
-- Updated and pinned requirements
-- Removed misleading extra requirements files
+- Added ``l20nChecker`` to do custom checking for Mozilla's new l20n format.
+- Mozilla .lang now has support for headers, tag comments and remembers line
+  endings
+- Added Silesian plurals
 
 
 Major changes
@@ -27,6 +23,7 @@ Major changes
 
 - Python 3 compatibility thanks to Claude Paroz
 - Dropped support for Python 2.6
+- Support for new l20n format
 - Translate Toolkit can now easily be installed on Windows
 - Changes in storage API to expose a more standardized API
 
@@ -81,10 +78,12 @@ Formats and Converters
 
   - ``{ok}`` marker is now more cleanly removed
   - Always output last unit followed by trailing newlines
+  - Added support for headers and tag comments
+  - File line endings are now remembered, defaulting to Unix LF
 
 - Mozilla's l20n
 
-  - Added this new format storage classe
+  - Added this new format storage class
   - Added new converters :doc:`l20n2po </commands/l20n2po>` and
     :ref:`po2l20n <po2l20n>`
 
@@ -131,6 +130,7 @@ Filters and Checks
   - ``accelerators`` check is now skipped by several Indic languages in
     ``MozillaChecker`` checker.
 
+- Added ``l20nChecker`` to do custom checking for Mozilla's new l20n format.
 - LibreOffice checker no longer checks for Python brace format (:issue:`3303`).
 - LibreOffice validxml check correctly matches self-closing tags.
 - Numbers check now handles non latin numbers. Support for non latin numbers
@@ -154,8 +154,8 @@ Languages
 ---------
 
 - Fixed plural form for Slovenian and Turkish.
-- Added plural forms for Bengali (Bangladesh), Konkani, Kashmiri, Sanskrit and
-  Yue (Cantonese).
+- Added plural forms for Bengali (Bangladesh), Konkani, Kashmiri, Sanskrit,
+  Silesian and Yue (Cantonese).
 - Renamed Oriya to Odia.
 - Altered Manipuri name to include its most common name Meithei.
 - Added language settings for Brazilian Portuguese.
diff --git a/docs/releases/index.rst b/docs/releases/index.rst
index 52cee1d..c7e3646 100644
--- a/docs/releases/index.rst
+++ b/docs/releases/index.rst
@@ -8,6 +8,7 @@ The following are release notes for Translate Toolkit:
 .. toctree::
    :maxdepth: 1
 
+   2.0.0b6 <2.0.0b6>
    2.0.0b5 <2.0.0b5>
    2.0.0b4 <2.0.0b4>
    2.0.0b3 <2.0.0b3>
diff --git a/requirements/dev.txt b/requirements/dev.txt
index 1a34f2c..0e1cbc2 100644
--- a/requirements/dev.txt
+++ b/requirements/dev.txt
@@ -3,7 +3,7 @@
 isort>=4.2.3
 pep257
 pep8
-pytest==2.9.2
+pytest==3.0.2
 pytest-cov
 pytest-xdist
 Sphinx>=1.2.2
diff --git a/tests/cli/data/test_pofilter_manpage/stdout.txt b/tests/cli/data/test_pofilter_manpage/stdout.txt
index b13044a..ddfe877 100644
--- a/tests/cli/data/test_pofilter_manpage/stdout.txt
+++ b/tests/cli/data/test_pofilter_manpage/stdout.txt
@@ -1,5 +1,5 @@
 .\" Autogenerated manpage
-.TH pofilter 1 "Translate Toolkit 2.0.0b5" "" "Translate Toolkit 2.0.0b5"
+.TH pofilter 1 "Translate Toolkit 2.0.0b6" "" "Translate Toolkit 2.0.0b6"
 .SH NAME
 pofilter \- Perform quality checks on Gettext PO, XLIFF and TMX localization files.
 .SH SYNOPSIS
diff --git a/translate/__version__.py b/translate/__version__.py
index d696310..d9ac7e6 100644
--- a/translate/__version__.py
+++ b/translate/__version__.py
@@ -19,14 +19,14 @@
 
 """This file contains the version of the Translate Toolkit."""
 
-build = 20005
+build = 20006
 """The build number is used by external users of the Translate Toolkit to
 trigger refreshes.  Thus increase the build number whenever changes are made to
 code touching stats or quality checks.  An increased build number will force a
 toolkit user, like Pootle, to regenerate it's stored stats and check
 results."""
 
-sver = "2.0.0b5"
+sver = "2.0.0b6"
 """Human readable version number. Used for version number display."""
 
 ver = (2, 0, 0)
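
For illustration (not part of this commit), a consumer that caches stats or check results can compare its saved build number against the current one; the bump above from 20005 to 20006 is what signals such a consumer to refresh. A minimal sketch, assuming nothing beyond the module shown above:

    from translate import __version__

    def refresh_needed(stored_build):
        """Return True when saved stats/check results predate the current toolkit build."""
        return stored_build != __version__.build

    print(refresh_needed(20005))  # True once this commit bumps build to 20006
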
diff --git a/translate/filters/checks.py b/translate/filters/checks.py
index fd2665a..7354fe4 100644
--- a/translate/filters/checks.py
+++ b/translate/filters/checks.py
@@ -413,12 +413,15 @@ class UnitChecker(object):
         """
         return test(unit)
 
+    @property
+    def checker_name(self):
+        """Extract checker name, for example 'mozilla' from MozillaChecker."""
+        return str(self.__class__.__name__).lower()[:-len("checker")]
+
     def get_ignored_filters(self):
-        """Return checker's ignored filters for current language."""
-        # Extract checker name, for example 'mozilla' from MozillaChecker.
-        checker_name = str(self.__class__.__name__).lower()[:-len("checker")]
-        return list(set(self.config.lang.ignoretests.get(checker_name, []) +
-                        self.config.lang.ignoretests.get('all', [])))
+        """Return checker's additional filters for current language."""
+        return list(set(self.config.lang.ignoretests.get(self.checker_name, [])
+                        + self.config.lang.ignoretests.get('all', [])))
 
     def run_filters(self, unit, categorised=False):
         """Run all the tests in this suite.
@@ -2271,6 +2274,57 @@ class TermChecker(StandardChecker):
         StandardChecker.__init__(self, **kwargs)
 
 
+l20nconfig = CheckerConfig(
+    varmatches=[("$", None), ],
+)
+
+
+class L20nChecker(MozillaChecker):
+    excluded_filters_for_complex_units = [
+        "escapes",
+        "newlines",
+        "tabs",
+        "singlequoting",
+        "doublequoting",
+        "doublespacing",
+        "brackets",
+        "pythonbraceformat",
+        "sentencecount",
+        "variables",
+    ]
+    complex_unit_pattern = "->"
+
+    def __init__(self, **kwargs):
+        checkerconfig = kwargs.get("checkerconfig", None)
+
+        if checkerconfig is None:
+            checkerconfig = CheckerConfig()
+            kwargs["checkerconfig"] = checkerconfig
+
+        checkerconfig.update(l20nconfig)
+        MozillaChecker.__init__(self, **kwargs)
+
+    def run_filters(self, unit, categorised=False):
+        is_unit_complex = (self.complex_unit_pattern in unit.source
+                           or self.complex_unit_pattern in unit.target)
+
+        saved_default_filters = {}
+        if is_unit_complex:
+            saved_default_filters = self.defaultfilters
+            self.defaultfilters = {
+                key: value for (key, value) in self.defaultfilters.items()
+                if key not in self.excluded_filters_for_complex_units
+            }
+
+        result = MozillaChecker.run_filters(self, unit,
+                                            categorised=categorised)
+
+        if is_unit_complex:
+            self.defaultfilters = saved_default_filters
+
+        return result
+
+
 projectcheckers = {
     "standard": StandardChecker,
     "openoffice": OpenOfficeChecker,
@@ -2282,6 +2336,7 @@ projectcheckers = {
     "creativecommons": CCLicenseChecker,
     "drupal": DrupalChecker,
     "terminology": TermChecker,
+    "l20n": L20nChecker,
 }
 
 
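
For illustration, a minimal sketch of running the new l20n checker over a single unit; the strings mirror the tests added in translate/filters/test_checks.py below, and the checker is also reachable via projectcheckers["l20n"]:

    from translate.filters import checks
    from translate.storage import base

    unit = base.TranslationUnit(u"Welcome { $user }")
    unit.target = u"Welkom { $gebruiker }"

    checker = checks.L20nChecker()
    failures = checker.run_filters(unit)
    print('variables' in failures.keys())  # True: the target dropped $user

Units whose source or target contains the complex-unit marker "->" additionally skip the filters listed in excluded_filters_for_complex_units (variables, brackets, newlines, and so on), as exercised by the new tests below.
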
diff --git a/translate/filters/test_checks.py b/translate/filters/test_checks.py
index 00d063f..2d3ed1b 100644
--- a/translate/filters/test_checks.py
+++ b/translate/filters/test_checks.py
@@ -1005,6 +1005,14 @@ def test_variables_gnome():
     assert fails_serious(gnomechecker.variables, "Save $(file)", "Stoor $(leer)")
 
 
+def test_variables_l20n():
+    """tests variables in L20n translations"""
+    # L20n variables
+    l20nchecker = checks.L20nChecker()
+    assert passes(l20nchecker.variables, "Welcome { $user }", "Welkom { $user }")
+    assert fails_serious(l20nchecker.variables, "Welcome { $user }", "Welkom { $gebruiker }")
+
+
 def test_variables_mozilla():
     """tests variables in Mozilla translations"""
     # Mozilla variables
@@ -1374,3 +1382,51 @@ def test_skip_checks_per_language_in_some_checkers():
     failures = stdchecker.run_filters(unit)
     # But it is not in StandardChecker.
     assert 'accelerators' in failures.keys()
+
+
+def test_skip_checks_for_l20n_complex_units():
+    """Test some checks are skipped for some languages in L20n checker."""
+    from translate.storage import base
+
+    str1, str2, __ = strprep(
+        u"""{ PLURAL($num) ->
+          [one] { $num } fish.
+         *[other] { $num } fishes.
+        }""",
+        u"""{ PLURAL($num) ->
+         *[other] { $num } balık.
+        }"""
+    )
+    unit = base.TranslationUnit(str1)
+    unit.target = str2
+
+    l20nchecker = checks.L20nChecker()
+    failures = l20nchecker.run_filters(unit)
+    assert 'variables' not in failures.keys()
+    assert 'brackets' not in failures.keys()
+    assert 'newlines' not in failures.keys()
+
+    mozillachecker = checks.MozillaChecker()
+    failures = mozillachecker.run_filters(unit)
+    # But it is not in MozillaChecker.
+    assert 'variables' in failures.keys()
+    assert 'brackets' in failures.keys()
+    assert 'newlines' in failures.keys()
+
+    # Nothing skipped if unit is not complex.
+    str1, str2, __ = strprep(
+        u"""Foo { $num }
+          Bar { $num }
+        """,
+        u"""ooF { $num }
+          raB { $num }
+          Bot { $another }
+        }"""
+    )
+    unit = base.TranslationUnit(str1)
+    unit.target = str2
+
+    failures = l20nchecker.run_filters(unit)
+    assert 'variables' in failures.keys()
+    assert 'brackets' in failures.keys()
+    assert 'newlines' in failures.keys()
diff --git a/translate/lang/bo.by b/translate/lang/bo.by
deleted file mode 100644
index 77445be..0000000
--- a/translate/lang/bo.by
+++ /dev/null
@@ -1,32 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright 2013 Zuza Software Foundation
-#
-# This file is part of translate.
-#
-# translate is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, see <http://www.gnu.org/licenses/>.
-
-"""This module represents the Tibetan language.
-
-.. seealso:: http://en.wikipedia.org/wiki/Tibetan_language
-"""
-
-from translate.lang import common
-
-
-class bo(common.Common):
-    """This class represents Tibetan."""
-
-    mozilla_nplurals = 2
-    mozilla_pluralequation = "n!=1 ? 1 : 0"
diff --git a/translate/lang/bo.py b/translate/lang/bo.py
index b2e6242..d20dda7 100644
--- a/translate/lang/bo.py
+++ b/translate/lang/bo.py
@@ -31,3 +31,6 @@ class bo(common.Common):
     ignoretests = {
         'mozilla': ["accelerators"],
     }
+
+    mozilla_nplurals = 2
+    mozilla_pluralequation = "n!=1 ? 1 : 0"
diff --git a/translate/lang/data.py b/translate/lang/data.py
index 853cda7..95e502e 100644
--- a/translate/lang/data.py
+++ b/translate/lang/data.py
@@ -170,6 +170,8 @@ languages = {
     'su': (u'Sundanese', 1, '0'),
     'sv': (u'Swedish', 2, '(n != 1)'),
     'sw': (u'Swahili', 2, '(n != 1)'),
+    'szl': (u'Silesian', 3,
+            '(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
     'ta': (u'Tamil', 2, '(n != 1)'),
     'te': (u'Telugu', 2, '(n != 1)'),
     'tg': (u'Tajik', 2, '(n != 1)'),
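
For reference, the new Silesian entry declares three plural forms; a small Python transcription of its C-style plural equation (illustrative only):

    def silesian_plural(n):
        # (n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)
        if n == 1:
            return 0
        if 2 <= n % 10 <= 4 and (n % 100 < 10 or n % 100 >= 20):
            return 1
        return 2

    print([silesian_plural(n) for n in (1, 2, 5, 13, 22)])  # [0, 1, 2, 2, 1]
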
diff --git a/translate/storage/mozilla_lang.py b/translate/storage/mozilla_lang.py
index 85568a4..31a9b24 100644
--- a/translate/storage/mozilla_lang.py
+++ b/translate/storage/mozilla_lang.py
@@ -31,27 +31,43 @@ import six
 from translate.storage import base, txt
 
 
+def strip_ok(string):
+    tmpstring = string.rstrip()
+    if tmpstring.endswith("{ok}") or tmpstring.endswith("{OK}"):
+        return tmpstring[:-4].rstrip()
+    return string
+
+
 @six.python_2_unicode_compatible
 class LangUnit(base.TranslationUnit):
     """This is just a normal unit with a weird string output"""
 
     def __init__(self, source=None):
         self.locations = []
+        self.eol = "\n"
+        self.rawtarget = None
         base.TranslationUnit.__init__(self, source)
 
     def __str__(self):
-        if self.source == self.target:
-            unchanged = " {ok}"
+        if self.istranslated():
+            target = self.target
         else:
-            unchanged = ""
-        if not self.istranslated():
             target = self.source
-        else:
-            target = self.target
+        if self.source == self.target:
+            target = self.target + " {ok}"
+        if (self.rawtarget is not None
+            and self.target == strip_ok(self.rawtarget)):
+            target = self.rawtarget
         if self.getnotes():
-            notes = ('\n').join(["# %s" % note for note in self.getnotes('developer').split("\n")])
-            return u"%s\n;%s\n%s%s" % (notes, self.source, target, unchanged)
-        return u";%s\n%s%s" % (self.source, target, unchanged)
+            notes = (self.eol).join(
+                [("#%s" % note
+                  if note.startswith("#")
+                  else "# %s" % note)
+                 for note
+                 in self.getnotes('developer').split("\n")])
+            return u"%s%s;%s%s%s" % (
+                notes, self.eol, self.source, self.eol, target)
+        return u";%s%s%s" % (self.source, self.eol, target)
 
     def getlocations(self):
         return self.locations
@@ -71,52 +87,73 @@ class LangStore(txt.TxtFile):
     def __init__(self, inputfile=None, mark_active=False, **kwargs):
         self.is_active = False
         self.mark_active = mark_active
+        self._headers = []
+        self.eol = "\n"
         super(LangStore, self).__init__(inputfile, **kwargs)
 
     def parse(self, lines):
-        # Have we just seen a ';' line, and so are ready for a translation
-        readyTrans = False
+        source_unit = None
         comment = ""
         if not isinstance(lines, list):
             lines = lines.split(b"\n")
-        unit = None
 
         for lineoffset, line in enumerate(lines):
+            if line.endswith(b"\r"):
+                self.eol = "\r\n"
             line = line.decode(self.encoding).rstrip("\n").rstrip("\r")
 
             if lineoffset == 0 and line == "## active ##":
                 self.is_active = True
                 continue
 
-            if len(line) == 0 and not readyTrans:  # Skip blank lines
+            if line.startswith("## ") and not line.startswith('## TAG'):
+                self._headers.append(line)
                 continue
 
-            if readyTrans:  # If we are expecting a translation, set the target
-                if line != unit.source:
-                    if line.rstrip().endswith("{ok}"):
-                        unit.target = line.rstrip()[:-4].rstrip()
-                    else:
-                        unit.target = line
+            if len(line) == 0 and not source_unit:
+                if len(self.units) == 0:
+                    self._headers.append(line)  # Append blank lines to header
+                # else skip blank lines
+                continue
+
+            if source_unit:
+                # If we have a source_unit get the target
+                source_unit.rawtarget = line
+                if line != source_unit.source:
+                    source_unit.target = strip_ok(line)
                 else:
-                    unit.target = ""
-                readyTrans = False  # We already have our translation
+                    source_unit.target = ""
+                source_unit = None
                 continue
 
-            if line.startswith('#') and not line.startswith('##'):
-                # Read comments, but not meta tags (e.g. '## TAG')
+            is_comment = (
+                line.startswith('#')
+                and (not line.startswith("##")
+                     or line.startswith('## TAG')))
+            if is_comment:
+                # Read comments, *including* meta tags (i.e. '## TAG')
                 comment += line[1:].strip() + "\n"
 
             if line.startswith(';'):
-                unit = self.addsourceunit(line[1:])
-                readyTrans = True  # Now expecting a translation on the next line
-                unit.addlocation("%s:%d" % (self.filename, lineoffset + 1))
+                source_unit = self.addsourceunit(line[1:])
+                source_unit.eol = self.eol
+                source_unit.addlocation(
+                    "%s:%d" % (self.filename, lineoffset + 1))
                 if comment is not None:
-                    unit.addnote(comment[:-1], 'developer')
+                    source_unit.addnote(comment[:-1], 'developer')
                     comment = ""
 
     def serialize(self, out):
+        eol = self.eol.encode('utf-8')
         if self.is_active or self.mark_active:
-            out.write(b"## active ##\n")
+            out.write(b"## active ##")
+            out.write(eol)
+        for header in self._headers:
+            out.write(six.text_type(header).encode('utf-8'))
+            out.write(eol)
         for unit in self.units:
             out.write(six.text_type(unit).encode('utf-8'))
-            out.write(b"\n\n\n")
+            out.write(eol * 3)
+
+    def getlangheaders(self):
+        return self._headers
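
For illustration, a short round-trip sketch of the updated .lang storage; the header name "some_tag" is a made-up example, and the behaviour mirrors the tests added in translate/storage/test_mozilla_lang.py below:

    from translate.storage import mozilla_lang

    data = ("## active ##\n"
            "## some_tag ##\n"
            "# A comment\n"
            ";Source\n"
            "Source {ok}\n"
            "\n\n")
    store = mozilla_lang.LangStore.parsestring(data)
    print(store.getlangheaders())                  # ['## some_tag ##']
    unit = store.units[0]
    print(unit.source, unit.target)                # Source Source  ({ok} stripped from the target)
    print(bytes(store).decode('utf-8') == data)    # True: headers, {ok} and line endings survive
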
diff --git a/translate/storage/test_mozilla_lang.py b/translate/storage/test_mozilla_lang.py
index 178f127..1b3c98a 100644
--- a/translate/storage/test_mozilla_lang.py
+++ b/translate/storage/test_mozilla_lang.py
@@ -1,10 +1,31 @@
 # -*- coding: utf-8 -*-
 
+import io
+
+import six
+
 import pytest
 
 from translate.storage import mozilla_lang, test_base
 
 
+@pytest.mark.parametrize(
+    "orig, stripped", [
+        ("", ""),
+        ("String", "String"),        # No {ok}
+        ("String {ok}", "String"),   # correct form
+        ("String {OK}", "String"),   # capitals
+        ("Şŧřīƞɠ {ok}", "Şŧřīƞɠ"),   # Unicode
+        ("String{ok}", "String"),    # No leading space
+        ("String{OK}", "String"),    # Caps no leading space
+        ("String  {ok}", "String"),  # multispace leading
+        ("String {ok} ", "String"),  # trailing space
+    ])
+def test_strip_ok(orig, stripped):
+    """Test various permutations of {ok} stripping"""
+    assert mozilla_lang.strip_ok(orig) == stripped
+
+
 class TestMozLangUnit(test_base.TestTranslationUnit):
     UnitClass = mozilla_lang.LangUnit
 
@@ -33,11 +54,15 @@ class TestMozLangUnit(test_base.TestTranslationUnit):
         assert not str(unit).endswith(" {ok}")
 
     def test_comments(self):
-        """Comments start with #."""
+        """Comments start with #, tags start with ## TAG:."""
         unit = self.UnitClass("One")
         unit.addnote("Hello")
         assert str(unit).find("Hello") == 2
         assert str(unit).find("# Hello") == 0
+        unit.addnote("# TAG: goodbye")
+        assert (
+            "# TAG: goodbye"
+            in unit.getnotes(origin="developer").split("\n"))
 
 
 class TestMozLangFile(test_base.TestTranslationStore):
@@ -62,6 +87,20 @@ class TestMozLangFile(test_base.TestTranslationStore):
         assert "Comment" in unit.getnotes()
         assert bytes(store).decode('utf-8') == lang
 
+    def test_crlf(self):
+        """While \n is preferred \r\n is allowed"""
+        lang = ("# Comment\r\n"
+                ";Source\r\n"
+                "Target\r\n"
+                "\r\n\r\n")
+        store = self.StoreClass.parsestring(lang)
+        store.mark_active = False
+        unit = store.units[0]
+        assert unit.source == "Source"
+        assert unit.target == "Target"
+        assert "Comment" in unit.getnotes()
+        assert bytes(store).decode('utf-8') == lang
+
     def test_active_flag(self):
         """Test the ## active ## flag"""
         lang = ("## active ##\n"
@@ -121,3 +160,103 @@ class TestMozLangFile(test_base.TestTranslationStore):
         assert unit.source == "Source"
         assert unit.target == target
         assert unit.istranslated() == istranslated
+
+    def test_headers(self):
+        """Ensure we can handle and preserve file headers"""
+        lang = ("## active ##\n"
+                "## some_tag ##\n"
+                "## another_tag ##\n"
+                "## NOTE: foo\n"
+                "\n\n"
+                ";Source\n"
+                "Target\n"
+                "\n\n")
+        store = self.StoreClass.parsestring(lang)
+        assert (
+            store.getlangheaders()
+            == [u'## some_tag ##',
+                u'## another_tag ##',
+                u'## NOTE: foo',
+                u'', u''])
+        out = io.BytesIO()
+        store.serialize(out)
+        out.seek(0)
+        assert (
+            out.read()
+            == six.text_type(
+                "## active ##\n"
+                "## some_tag ##\n"
+                "## another_tag ##\n"
+                "## NOTE: foo\n"
+                "\n\n"
+                ";Source\n"
+                "Target\n"
+                "\n\n").encode('utf-8'))
+
+    def test_not_headers(self):
+        """Ensure we dont treat a tag immediately after headers as header"""
+        lang = ("## active ##\n"
+                "## some_tag ##\n"
+                "## another_tag ##\n"
+                "## NOTE: foo\n"
+                "## TAG: fooled_you ##\n"
+                ";Source\n"
+                "Target\n"
+                "\n\n")
+        store = self.StoreClass.parsestring(lang)
+        assert "## TAG: fooled_you ##" not in store.getlangheaders()
+
+    @pytest.mark.parametrize("nl", [0, 1, 2, 3])
+    def test_header_blanklines(self, nl):
+        """Ensure that blank lines following a header are recorded"""
+        lang_header = ("## active ##\n"
+                       "## some_tag ##\n")
+        lang_unit1 = ("# Comment\n"
+                      ";Source\n"
+                      "Target\n"
+                      "\n\n")
+        lang = lang_header + '\n' * nl + lang_unit1
+        store = self.StoreClass.parsestring(lang)
+        assert bytes(store).decode('utf-8') == lang
+
+    def test_tag_comments(self):
+        """Ensure we can handle comments and distinguish from headers"""
+        lang = ("## active ##\n"
+                "# First comment\n"
+                "## TAG: important_tag\n"
+                "# Second comment\n"
+                "# Third comment\n"
+                "## TAG: another_important_tag\n"
+                ";Source\n"
+                "Target\n"
+                "\n\n")
+        store = self.StoreClass.parsestring(lang)
+        assert not store.getlangheaders()
+        assert bytes(store).decode('utf-8') == lang
+        assert (
+            "# TAG: important_tag"
+            in store.units[0].getnotes(origin="developer").split("\n"))
+        lang = ("## active ##\n"
+                "# First comment\n"
+                "## TAG: important_tag\n"
+                "# Second comment\n"
+                "# Third comment\n"
+                "## TAG: another_important_tag\n"
+                "# Another comment\n"
+                ";Source\n"
+                "Target\n"
+                "\n\n")
+        store = self.StoreClass.parsestring(lang)
+        assert not store.getlangheaders()
+        assert (
+            "First comment"
+            in store.units[0].getnotes(origin="developer").split("\n"))
+        assert (
+            "Second comment"
+            in store.units[0].getnotes(origin="developer").split("\n"))
+        assert (
+            "Another comment"
+            in store.units[0].getnotes(origin="developer").split("\n"))
+        assert (
+            "# TAG: another_important_tag"
+            in store.units[0].getnotes(origin="developer").split("\n"))
-----------------------------------------------------------------------


hooks/post-receive
-- 
translate-toolkit


