[Debian-l10n-commits] r1570 - in /translate-toolkit/branches/upstream/current: ./ tools/ translate/ translate/convert/ translate/doc/ translate/doc/user/ translate/filters/ translate/lang/ translate/misc/ translate/search/ translate/services/ translate/storage/ translate/storage/xml_extract/ translate/tools/
nekral-guest at users.alioth.debian.org
Sun Feb 8 16:49:33 UTC 2009
Author: nekral-guest
Date: Sun Feb 8 16:49:31 2009
New Revision: 1570
URL: http://svn.debian.org/wsvn/?sc=1&rev=1570
Log:
[svn-upgrade] Integrating new upstream version, translate-toolkit (1.3.0)
Added:
translate-toolkit/branches/upstream/current/translate/convert/accesskey.py
translate-toolkit/branches/upstream/current/translate/convert/po2symb (with props)
translate-toolkit/branches/upstream/current/translate/convert/po2symb.py
translate-toolkit/branches/upstream/current/translate/convert/po2tiki
translate-toolkit/branches/upstream/current/translate/convert/po2tiki.py
translate-toolkit/branches/upstream/current/translate/convert/symb2po (with props)
translate-toolkit/branches/upstream/current/translate/convert/symb2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_accesskey.py
translate-toolkit/branches/upstream/current/translate/convert/test_po2tiki.py
translate-toolkit/branches/upstream/current/translate/convert/test_tiki2po.py
translate-toolkit/branches/upstream/current/translate/convert/tiki2po
translate-toolkit/branches/upstream/current/translate/convert/tiki2po.py
translate-toolkit/branches/upstream/current/translate/doc/epydoc-config.ini
translate-toolkit/branches/upstream/current/translate/doc/gen_api_docs.sh (with props)
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-symb2po.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tiki2po.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tmserver.html
translate-toolkit/branches/upstream/current/translate/i18n.py
translate-toolkit/branches/upstream/current/translate/lang/es.py
translate-toolkit/branches/upstream/current/translate/lang/poedit.py
translate-toolkit/branches/upstream/current/translate/lang/test_poedit.py
translate-toolkit/branches/upstream/current/translate/misc/hash.py
translate-toolkit/branches/upstream/current/translate/misc/selector.py
translate-toolkit/branches/upstream/current/translate/misc/test_optrecurse.py
translate-toolkit/branches/upstream/current/translate/services/tmserver (with props)
translate-toolkit/branches/upstream/current/translate/services/tmserver.py (with props)
translate-toolkit/branches/upstream/current/translate/storage/poparser.py
translate-toolkit/branches/upstream/current/translate/storage/symbian.py
translate-toolkit/branches/upstream/current/translate/storage/test_rc.py
translate-toolkit/branches/upstream/current/translate/storage/test_tiki.py
translate-toolkit/branches/upstream/current/translate/storage/tiki.py
translate-toolkit/branches/upstream/current/translate/storage/tmdb.py
translate-toolkit/branches/upstream/current/translate/tools/build_tmdb (with props)
translate-toolkit/branches/upstream/current/translate/tools/build_tmdb.py (with props)
translate-toolkit/branches/upstream/current/translate/tools/test_podebug.py
Removed:
translate-toolkit/branches/upstream/current/translate/convert/odf2po
translate-toolkit/branches/upstream/current/translate/convert/odf2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_odf2po.py
translate-toolkit/branches/upstream/current/translate/misc/setup.py
translate-toolkit/branches/upstream/current/translate/storage/odf.py
translate-toolkit/branches/upstream/current/translate/storage/test_odf.py
Modified:
translate-toolkit/branches/upstream/current/PKG-INFO
translate-toolkit/branches/upstream/current/setup.py
translate-toolkit/branches/upstream/current/tools/pocommentclean
translate-toolkit/branches/upstream/current/translate/ChangeLog
translate-toolkit/branches/upstream/current/translate/README
translate-toolkit/branches/upstream/current/translate/__init__.py
translate-toolkit/branches/upstream/current/translate/__version__.py
translate-toolkit/branches/upstream/current/translate/convert/__init__.py
translate-toolkit/branches/upstream/current/translate/convert/convert.py
translate-toolkit/branches/upstream/current/translate/convert/csv2po
translate-toolkit/branches/upstream/current/translate/convert/dtd2po.py
translate-toolkit/branches/upstream/current/translate/convert/html2po
translate-toolkit/branches/upstream/current/translate/convert/html2po.py
translate-toolkit/branches/upstream/current/translate/convert/ical2po
translate-toolkit/branches/upstream/current/translate/convert/ical2po.py
translate-toolkit/branches/upstream/current/translate/convert/ini2po
translate-toolkit/branches/upstream/current/translate/convert/ini2po.py
translate-toolkit/branches/upstream/current/translate/convert/moz2po
translate-toolkit/branches/upstream/current/translate/convert/mozfunny2prop.py
translate-toolkit/branches/upstream/current/translate/convert/odf2xliff.py
translate-toolkit/branches/upstream/current/translate/convert/oo2po
translate-toolkit/branches/upstream/current/translate/convert/oo2po.py
translate-toolkit/branches/upstream/current/translate/convert/oo2xliff.py
translate-toolkit/branches/upstream/current/translate/convert/php2po
translate-toolkit/branches/upstream/current/translate/convert/php2po.py
translate-toolkit/branches/upstream/current/translate/convert/po2csv
translate-toolkit/branches/upstream/current/translate/convert/po2dtd.py
translate-toolkit/branches/upstream/current/translate/convert/po2html
translate-toolkit/branches/upstream/current/translate/convert/po2ical
translate-toolkit/branches/upstream/current/translate/convert/po2ini
translate-toolkit/branches/upstream/current/translate/convert/po2ini.py
translate-toolkit/branches/upstream/current/translate/convert/po2moz
translate-toolkit/branches/upstream/current/translate/convert/po2oo
translate-toolkit/branches/upstream/current/translate/convert/po2oo.py
translate-toolkit/branches/upstream/current/translate/convert/po2php
translate-toolkit/branches/upstream/current/translate/convert/po2php.py
translate-toolkit/branches/upstream/current/translate/convert/po2prop
translate-toolkit/branches/upstream/current/translate/convert/po2prop.py
translate-toolkit/branches/upstream/current/translate/convert/po2rc
translate-toolkit/branches/upstream/current/translate/convert/po2tmx
translate-toolkit/branches/upstream/current/translate/convert/po2ts
translate-toolkit/branches/upstream/current/translate/convert/po2txt
translate-toolkit/branches/upstream/current/translate/convert/po2xliff
translate-toolkit/branches/upstream/current/translate/convert/pot2po
translate-toolkit/branches/upstream/current/translate/convert/pot2po.py
translate-toolkit/branches/upstream/current/translate/convert/prop2po
translate-toolkit/branches/upstream/current/translate/convert/prop2po.py
translate-toolkit/branches/upstream/current/translate/convert/rc2po
translate-toolkit/branches/upstream/current/translate/convert/rc2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_convert.py
translate-toolkit/branches/upstream/current/translate/convert/test_dtd2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_html2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_oo2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_php2po.py
translate-toolkit/branches/upstream/current/translate/convert/test_po2dtd.py
translate-toolkit/branches/upstream/current/translate/convert/test_po2html.py
translate-toolkit/branches/upstream/current/translate/convert/test_po2php.py
translate-toolkit/branches/upstream/current/translate/convert/test_pot2po.py
translate-toolkit/branches/upstream/current/translate/convert/ts2po
translate-toolkit/branches/upstream/current/translate/convert/txt2po
translate-toolkit/branches/upstream/current/translate/convert/xliff2odf.py
translate-toolkit/branches/upstream/current/translate/convert/xliff2oo.py
translate-toolkit/branches/upstream/current/translate/convert/xliff2po
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-formats.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-index.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-ini.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-odf2xliff.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php2po.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-podebug.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter_tests.html
translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tbx.html
translate-toolkit/branches/upstream/current/translate/filters/checks.py
translate-toolkit/branches/upstream/current/translate/filters/pofilter.py
translate-toolkit/branches/upstream/current/translate/filters/test_checks.py
translate-toolkit/branches/upstream/current/translate/filters/test_pofilter.py
translate-toolkit/branches/upstream/current/translate/lang/__init__.py
translate-toolkit/branches/upstream/current/translate/lang/ar.py
translate-toolkit/branches/upstream/current/translate/lang/bn.py
translate-toolkit/branches/upstream/current/translate/lang/code_or.py
translate-toolkit/branches/upstream/current/translate/lang/common.py
translate-toolkit/branches/upstream/current/translate/lang/data.py
translate-toolkit/branches/upstream/current/translate/lang/fr.py
translate-toolkit/branches/upstream/current/translate/lang/test_fr.py
translate-toolkit/branches/upstream/current/translate/lang/test_or.py
translate-toolkit/branches/upstream/current/translate/lang/zh.py
translate-toolkit/branches/upstream/current/translate/misc/contextlib.py
translate-toolkit/branches/upstream/current/translate/misc/file_discovery.py
translate-toolkit/branches/upstream/current/translate/misc/optrecurse.py
translate-toolkit/branches/upstream/current/translate/misc/textwrap.py
translate-toolkit/branches/upstream/current/translate/misc/wStringIO.py
translate-toolkit/branches/upstream/current/translate/search/match.py
translate-toolkit/branches/upstream/current/translate/services/__init__.py
translate-toolkit/branches/upstream/current/translate/services/lookupclient.py
translate-toolkit/branches/upstream/current/translate/services/lookupservice.py
translate-toolkit/branches/upstream/current/translate/storage/__init__.py
translate-toolkit/branches/upstream/current/translate/storage/base.py
translate-toolkit/branches/upstream/current/translate/storage/cpo.py
translate-toolkit/branches/upstream/current/translate/storage/csvl10n.py
translate-toolkit/branches/upstream/current/translate/storage/dtd.py
translate-toolkit/branches/upstream/current/translate/storage/factory.py
translate-toolkit/branches/upstream/current/translate/storage/html.py
translate-toolkit/branches/upstream/current/translate/storage/ini.py
translate-toolkit/branches/upstream/current/translate/storage/lisa.py
translate-toolkit/branches/upstream/current/translate/storage/mo.py
translate-toolkit/branches/upstream/current/translate/storage/odf_io.py
translate-toolkit/branches/upstream/current/translate/storage/oo.py
translate-toolkit/branches/upstream/current/translate/storage/php.py
translate-toolkit/branches/upstream/current/translate/storage/pocommon.py
translate-toolkit/branches/upstream/current/translate/storage/poheader.py
translate-toolkit/branches/upstream/current/translate/storage/poxliff.py
translate-toolkit/branches/upstream/current/translate/storage/properties.py
translate-toolkit/branches/upstream/current/translate/storage/pypo.py
translate-toolkit/branches/upstream/current/translate/storage/qm.py
translate-toolkit/branches/upstream/current/translate/storage/qph.py
translate-toolkit/branches/upstream/current/translate/storage/tbx.py
translate-toolkit/branches/upstream/current/translate/storage/test_base.py
translate-toolkit/branches/upstream/current/translate/storage/test_dtd.py
translate-toolkit/branches/upstream/current/translate/storage/test_oo.py
translate-toolkit/branches/upstream/current/translate/storage/test_php.py
translate-toolkit/branches/upstream/current/translate/storage/test_po.py
translate-toolkit/branches/upstream/current/translate/storage/test_poheader.py
translate-toolkit/branches/upstream/current/translate/storage/test_pypo.py
translate-toolkit/branches/upstream/current/translate/storage/test_wordfast.py
translate-toolkit/branches/upstream/current/translate/storage/tmx.py
translate-toolkit/branches/upstream/current/translate/storage/ts2.py
translate-toolkit/branches/upstream/current/translate/storage/wordfast.py
translate-toolkit/branches/upstream/current/translate/storage/xliff.py
translate-toolkit/branches/upstream/current/translate/storage/xml_extract/generate.py
translate-toolkit/branches/upstream/current/translate/storage/xml_extract/unit_tree.py
translate-toolkit/branches/upstream/current/translate/storage/xml_name.py
translate-toolkit/branches/upstream/current/translate/storage/xpi.py
translate-toolkit/branches/upstream/current/translate/tools/podebug.py
translate-toolkit/branches/upstream/current/translate/tools/pogrep.py
translate-toolkit/branches/upstream/current/translate/tools/pretranslate.py
translate-toolkit/branches/upstream/current/translate/tools/test_pomerge.py
Modified: translate-toolkit/branches/upstream/current/PKG-INFO
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/PKG-INFO?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/PKG-INFO (original)
+++ translate-toolkit/branches/upstream/current/PKG-INFO Sun Feb 8 16:49:31 2009
@@ -1,6 +1,6 @@
Metadata-Version: 1.0
Name: translate-toolkit
-Version: 1.2.1
+Version: 1.3.0
Summary: The Translate Toolkit is a Python package that assists in localization of software.
Home-page: http://translate.sourceforge.net/wiki/toolkit/index
Author: Translate.org.za
@@ -12,8 +12,17 @@
See U{http://translate.sourceforge.net/wiki/toolkit/index} or U{http://translate.org.za} for more information.
@organization: Zuza Software Foundation
- @copyright: 2002-2008 Zuza Software Foundation
+ @copyright: 2002-2009 Zuza Software Foundation
@license: U{GPL <http://www.fsf.org/licensing/licenses/gpl.html>}
+
+ @group Localization and Localizable File Formats: storage
+ @group Format Converters: convert
+ @group Localisation File Checker: filters
+ @group Localization File Manipulation Tools: tools
+ @group Language Specifications: lang
+ @group Search and String Matching: search
+ @group Services: services
+ @group Miscellaneous: misc source_tree_infrastructure __version__
Platform: any
Modified: translate-toolkit/branches/upstream/current/setup.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/setup.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/setup.py (original)
+++ translate-toolkit/branches/upstream/current/setup.py Sun Feb 8 16:49:31 2009
@@ -20,7 +20,7 @@
join = os.path.join
PRETTY_NAME = 'Translate Toolkit'
-translateversion = __version__.ver
+translateversion = __version__.sver
packagesdir = distutils.sysconfig.get_python_lib()
sitepackages = packagesdir.replace(sys.prefix + os.sep, '')
@@ -30,8 +30,9 @@
initfiles = [(join(sitepackages,'translate'),[join('translate','__init__.py')])]
subpackages = ["convert", "misc", "storage", join("storage", "versioncontrol"),
- join("storage", "placeables"), join("storage", "xml_extract"), join("misc", "typecheck"),
- "filters", "tools", "services", "search", join("search", "indexing"), "lang"]
+ join("storage", "xml_extract"), join("storage", "placeables"),
+ "filters", "tools", "services", "search", join("search", "indexing"),
+ "lang", join("misc", "typecheck")]
# TODO: elementtree doesn't work in sdist, fix this
packages = ["translate"]
@@ -47,10 +48,11 @@
('convert', 'html2po'), ('convert', 'po2html'),
('convert', 'ical2po'), ('convert', 'po2ical'),
('convert', 'ini2po'), ('convert', 'po2ini'),
+ ('convert', 'tiki2po'), ('convert', 'po2tiki'),
('convert', 'php2po'), ('convert', 'po2php'),
('convert', 'rc2po'), ('convert', 'po2rc'),
('convert', 'xliff2po'), ('convert', 'po2xliff'),
- ('convert', 'odf2po'),
+ ('convert', 'symb2po'), ('convert', 'po2symb'),
('convert', 'po2tmx'),
('convert', 'po2wordfast'),
('convert', 'csv2tbx'),
@@ -67,9 +69,11 @@
('tools', 'poswap'),
('tools', 'poclean'),
('tools', 'poterminology'),
- ('tools', 'pretranslate'),
+ ('tools', 'pretranslate'),
('services', 'lookupclient.py'),
- ('services', 'lookupservice')]
+ ('services', 'lookupservice'),
+ ('services', 'tmserver'),
+ ('tools', 'build_tmdb')]
translatebashscripts = [apply(join, ('tools', ) + (script, )) for script in [
'pomigrate2', 'pocompendium',
Modified: translate-toolkit/branches/upstream/current/tools/pocommentclean
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/tools/pocommentclean?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/tools/pocommentclean (original)
+++ translate-toolkit/branches/upstream/current/tools/pocommentclean Sun Feb 8 16:49:31 2009
@@ -29,7 +29,7 @@
;;
esac
-if [ $# -ne 2 ]; then
+if [ $# -ne 1 ]; then
echo "Usage: pocommmentclean [--backup] po-dir"
exit 1
fi
Modified: translate-toolkit/branches/upstream/current/translate/ChangeLog
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/ChangeLog?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/ChangeLog (original)
+++ translate-toolkit/branches/upstream/current/translate/ChangeLog Sun Feb 8 16:49:31 2009
@@ -1,3 +1,3187 @@
+2009-02-04 14:46 friedelwolff
+
+ * README: Mention more recent version of lxml that should be better
+
+2009-02-04 13:01 friedelwolff
+
+ * __version__.py: Version 1.3.0
+
+2009-02-04 11:29 friedelwolff
+
+ * lang/bn.py: Data for proper sentence endings, sentencere and
+ puncdict. We should now do punctranslate correctly for Bengali.
+
+2009-02-04 11:28 friedelwolff
+
+ * lang/code_or.py: Remove unnecessary data members that are filled
+ in by the factory
+
+2009-02-04 11:21 friedelwolff
+
+ * lang/code_or.py, lang/test_or.py: Only replace full stop followed
+ by space or newline with the DEVANAGARI DANDA. This should avoid
+ problems with numbers, code, etc. Needs more testing, though.
+
+2009-02-04 11:20 friedelwolff
+
+ * lang/common.py: Small cleanup to ensure we are using unicode
+ strings, and strip space at the end if we are replacing
+ punctuation at the end of strings and our transformation might
+ have added a space
+
+2009-02-03 15:41 friedelwolff
+
+ * lang/fr.py, lang/test_fr.py: Support fancy (curly) quotes when
+ translating punctuation for French
+
+2009-02-02 13:30 friedelwolff
+
+ * storage/ts2.py: Rewrite settarget to take loads of things into
+ account. This mostly fixes bug 774.
+
+2009-01-31 18:30 friedelwolff
+
+ * lang/data.py: Make the gettext_lang() function use the system
+ locale if none is given as parameter
+
+2009-01-30 18:13 alaaosh
+
+ * storage/tmdb.py: how many corners does this thing have
+
+2009-01-30 17:41 alaaosh
+
+ * storage/tmdb.py: handle the weird situation where a db
+ initialized with fts3 support is opened without fts3 support
+
+2009-01-30 17:23 friedelwolff
+
+ * __version__.py: Version 1.3-rc1
+
+2009-01-30 16:48 alaaosh
+
+ * lang/data.py: simpler, more correct simplercode
+ simplify_to_common takes an optional language list/dict now
+
+2009-01-30 11:04 friedelwolff
+
+ * storage/poheader.py: Only add the current year if the contributor
+ is already listed for earlier years
+
+2009-01-30 10:55 friedelwolff
+
+ * storage/poheader.py: In .updatecontributor(), also test if we have
+ the current year listed in the contributor comments
+
+2009-01-29 18:35 dwaynebailey
+
+ * filters/checks.py: Gconf test. Gconf settings should not be
+ translated. Test is enabled only with --gnome.
+ Fixed bug 781
+
+2009-01-29 16:48 winterstream
+
+ * storage/poparser.py, storage/pypo.py, storage/test_pypo.py: Added
+ the functionality to deal with previous msgid info (i.e.
+ everything appearing in #| comments).
+
+ There are three new member variables in the pypo unit class:
+ prev_msgctxt, prev_msgid and prev_msgid_plural.
+
+ The previous msgid and msgid_plural can be set via prev_source.
+
+ The prev_source and source properties use the same code which
+ was factored out from getsource and setsource (used by the
+ source property).
+
+2009-01-29 16:32 dwaynebailey
+
+ * storage/ts2.py: Add target language support. Fixes bug 789
+
+2009-01-29 15:08 alaaosh
+
+ * services/tmserver.py: add psyco support
+
+2009-01-29 15:01 friedelwolff
+
+ * storage/mo.py: The .mo class also implements poheader. This
+ ensures that we can read language and other headers correctly
+ from mo files.
+
+2009-01-29 14:29 alaaosh
+
+ * services/opentranclient.py, services/restclient.py,
+ services/tmclient.py: move gtk-dependent client code to virtaal;
+ need to create a simple example tmclient in toolkit
+
+2009-01-29 13:52 alaaosh
+
+ * services/tmserver.py: add --debug command line option
+
+2009-01-29 09:33 friedelwolff
+
+ * filters/test_checks.py: Test that startpunc handles the inverted
+ Spanish question mark correctly
+
+2009-01-29 08:42 dupuy
+
+ * lang/es.py: language class for Spanish suppresses only bogus
+ pofilter startpunc gripes
+
+2009-01-29 08:07 friedelwolff
+
+ * filters/checks.py, filters/test_checks.py: Adapt to the new API
+ where forcing to unicode doesn't normalize. Call
+ data.normalized_unicode() instead
+
+2009-01-29 08:05 friedelwolff
+
+ * lang/data.py: Don't automatically normalize when coercing to
+ unicode. Provide the normalized_unicode() function for that
+ purpose.
+
+2009-01-28 22:08 friedelwolff
+
+ * filters/checks.py: Only strip on the right for the endpunc test
+
+2009-01-28 21:48 friedelwolff
+
+ * filters/checks.py: Also test for colon (:) in puncend, and strip
+ spaces to ensure that we test punctuation even if the message is
+ spaced
+
+2009-01-28 21:11 friedelwolff
+
+ * filters/pofilter.py: Remove debug statement
+
+2009-01-28 20:30 friedelwolff
+
+ * storage/ts2.py: Rewrite .getlocations() to avoid lxml warning of
+ deprecated API
+
+2009-01-28 14:20 alaaosh
+
+ * storage/tmdb.py, tools/build_tmdb.py: cleanup
+
+2009-01-28 13:34 dwaynebailey
+
+ * tools/podebug.py: ignore_kde function appears twice.
+
+2009-01-28 04:34 dwaynebailey
+
+ * convert/test_oo2po.py, storage/oo.py, storage/test_oo.py: Convert
+ OOo help escaping, resolves bug 694
+
+ Previously we escaped all of OOo help tags except tags that we
+ identified as unescaped. This caused problems where translators
+ introduced new 'tags' in their transations.
+
+ We now don't escape anything unless it is on our list of tags
+ that should be escaped. A translator who uses one of these tags,
+ which could be a valid word in their language, will still suffer.
+ But that is unlikely to happen and can be worked around.
+
+2009-01-27 14:42 friedelwolff
+
+ * storage/lisa.py: In .settarget(), try at all costs to not replace
+ the whole targetNode. This keeps extra XML stuff around that we
+ might not be aware of. This fixes bug 751.
+
+2009-01-27 14:33 friedelwolff
+
+ * lang/ar.py: Don't translate the Arabic percentage sign as part of
+ punctuation translation - it is likely to cause too many problems
+ with variables
+
+2009-01-26 10:11 winterstream
+
+ * misc/contextlib.py: contextlib.py still contained a 'yield'
+ inside a try: except: clause, which is invalid in Python 2.4.
+ The yield was moved out of this clause.
+
+2009-01-26 09:49 winterstream
+
+ * storage/xliff.py: When merging XLIFF files, comments weren't
+ merged. This patch fixes the problem.
+
+2009-01-26 09:48 alaaosh
+
+ * services/tmserver.py: log to logging.info instead of stderr
+
+2009-01-26 09:11 alaaosh
+
+ * services/tmserver.py: disable reverse dns lookup on every request
+
+2009-01-26 08:38 winterstream
+
+ * convert/oo2xliff.py: When converting from OpenOffice.org SDF
+ files to XLIFF files we should not indiscriminately mark the
+ units in the generated XLIFF files as "approved".
+
+ This simple fix will mark a unit as approved (or "non-fuzzy" in
+ the terminology of the toolkit) if the target string is not
+ empty. Otherwise it is marked as non-approved (or "fuzzy" in the
+ terminology of the toolkit).
+
+2009-01-26 07:56 friedelwolff
+
+ * lang/common.py: Remove debug print statements
+
+2009-01-25 17:20 dwaynebailey
+
+ * lang/poedit.py, lang/test_poedit.py, storage/poheader.py: Add
+ support for Poedit X-Poedit-Language markers. This will look up
+ the correct ISO code based on the Poedit language name. The code
+ will also clean up PO headers to use the correct 'Language' entry
+ and drop the old X-Poedit-Language and X-Poedit-Country tags.
+
+ This fixes bug 737.
+
+2009-01-24 17:37 friedelwolff
+
+ * filters/pofilter.py, filters/test_pofilter.py: Add the --nonotes
+ parameter to pofilter so that the addition of notes can be
+ suppressed. This fixes bug 745
+
+2009-01-24 17:36 friedelwolff
+
+ * storage/xliff.py: Add an API to remove comments from a specific
+ origin
+
+2009-01-24 07:52 friedelwolff
+
+ * lang/data.py: Change several language names to align well with
+ iso-codes. This gives slightly clumsy names when it is
+ untranslated, but gives reasonable coverage for those languages
+ with translations. There are still some where we are not aligned,
+ and probably won't be since the untranslated forms are ugly (fy,
+ el, km, nb, nn, nso)
+
+2009-01-23 10:45 alaaosh
+
+ * services/restclient.py: we were being too smart for our own
+ good. restclient is unable to make efficiency decisions about
+ duplicate requests since it doesn't know anything about who's
+ connected to its own signals.
+
+ queue management should be done in virtaal.
+
+2009-01-22 15:45 friedelwolff
+
+ * __version__.py: Version 1.3beta1
+
+2009-01-22 11:13 alaaosh
+
+ * misc/selector.py: don't fail when resolver module is missing, not
+ required for tmserver
+
+2009-01-22 10:20 alaaosh
+
+ * services/tmserver, services/tmserver.py, tools/build_tmdb,
+ tools/build_tmdb.py: when in windows ...
+
+2009-01-22 08:29 friedelwolff
+
+ * storage/tmdb.py: Fall back to the older sqlite module in case
+ we're running on Python 2.4
+
+2009-01-21 14:50 alaaosh
+
+ * services/tmclient.py, services/tmserver.py: implemented add store
+ functionality (POST and PUT methods for /store urls)
+
+2009-01-21 14:48 alaaosh
+
+ * services/restclient.py: One does not simply walk into cURL. Its
+ black gates are guarded by more than just orcs. There is evil
+ there that does not sleep.
+
+ fixed POST and PUT requests.
+
+2009-01-21 14:44 alaaosh
+
+ * storage/tmdb.py: added add_list method to add a list of
+ dictionary units to tmdb. add_list and add_store now return a
+ count of translated units (should replace that with an actual
+ count of newly added units)
+
+2009-01-21 14:05 friedelwolff
+
+ * tools/pogrep.py: Refactor common parts of the matches() code.
+ Provide correct start and end indexes in the original string,
+ considering that it was never normalised.
+
+2009-01-20 10:53 alaaosh
+
+ * lang/data.py: added simplify_to_common function, useful for
+ stripping extra information from language codes when not needed.
+
+2009-01-20 09:05 alaaosh
+
+ * services/tmclient.py, storage/tmdb.py: do language code
+ normalization as late as possible
+
+2009-01-20 09:02 alaaosh
+
+ * services/restclient.py: make sure url is in utf-8, fixes bug #706
+
+2009-01-19 21:20 friedelwolff
+
+ * lang/data.py, services/tmclient.py, storage/tmdb.py: Rename new
+ .normalize() to .normalize_code() to avoid name clash and rename
+ all users. Tests should now be restored.
+
+2009-01-19 19:18 dwaynebailey
+
+ * lang/data.py: United Kingdom iso3166 code is GB not UK
+
+2009-01-19 16:38 alaaosh
+
+ * services/tmclient.py: normalize language codes before creating
+ requests
+
+2009-01-19 16:34 alaaosh
+
+ * storage/tmdb.py: now we can save units represented as
+ dictionaries.
+ also normalize language before insertion
+
+2009-01-19 16:30 alaaosh
+
+ * lang/data.py: separate normalization of language code into its
+ own function
+
+2009-01-19 15:18 friedelwolff
+
+ * storage/pypo.py: pounit.isheader() should not return false just
+ because we have locations. This fixes bug 629.
+
+2009-01-19 15:08 friedelwolff
+
+ * convert/convert.py, convert/html2po.py, convert/mozfunny2prop.py,
+ convert/oo2xliff.py, convert/test_html2po.py: Remove --duplicates
+ styles from the converters: msgid_comment, keep,
+ msgid_comment_all. This is part of fixing 663.
+
+2009-01-19 13:48 friedelwolff
+
+ * convert/oo2po.py: Only set the source and target language after
+ we have a valid header, otherwise we might add a second
+
+2009-01-19 13:07 dwaynebailey
+
+ * lang/data.py: Expand forceunicode docstring
+
+2009-01-19 08:51 alaaosh
+
+ * search/match.py, services/tmserver.py, storage/tmdb.py: committed
+ debug code by mistake
+
+2009-01-17 17:39 dwaynebailey
+
+ * lang/data.py: Document language and country codes dictionaries.
+
+2009-01-17 17:31 dwaynebailey
+
+ * lang/data.py: Add variable comment
+
+2009-01-17 17:28 dwaynebailey
+
+ * lang/data.py: Move comment to an epydoc description for the
+ dictionary.
+
+2009-01-17 17:23 dwaynebailey
+
+ * lang/data.py: Fix docstring. I think maybe I need to give some
+ epydoc lessons.
+
+2009-01-17 10:49 dwaynebailey
+
+ * convert/csv2po, convert/html2po, convert/ini2po, convert/moz2po,
+ convert/oo2po, convert/php2po, convert/po2csv, convert/po2html,
+ convert/po2ini, convert/po2moz, convert/po2oo, convert/po2php,
+ convert/po2prop, convert/po2rc, convert/po2symb, convert/po2tiki,
+ convert/po2tmx, convert/po2ts, convert/po2txt, convert/po2xliff,
+ convert/pot2po, convert/prop2po, convert/rc2po, convert/symb2po,
+ convert/tiki2po, convert/ts2po, convert/txt2po, convert/xliff2po:
+ Fix indents, remove unneeded import
+
+2009-01-17 10:28 dwaynebailey
+
+ * convert/ical2po, convert/po2ical: Fix indentation
+
+2009-01-17 10:00 dwaynebailey
+
+ * tools/podebug.py: Ignore 'LTR' config for KDE files. Even though
+ we don't check the actual filename (it's only present in
+ kdelibs(4).po, so it should be OK).
+
+2009-01-16 15:51 alaaosh
+
+ * search/match.py: use getnotes() instead of .othercomments, fixes
+ #660
+
+2009-01-16 14:24 alaaosh
+
+ * services/opentranclient.py: make it possible to change languages
+ in the middle of a session; clean up language negotiation
+
+2009-01-16 14:23 alaaosh
+
+ * services/tmclient.py: tmserver is language aware now
+
+2009-01-16 14:19 alaaosh
+
+ * services/restclient.py: make curl verbose when logging level is
+ DEBUG
+
+2009-01-16 14:08 friedelwolff
+
+ * misc/file_discovery.py: Add the RESOURCEPATH from the mac app
+ bundle to the BASE_DIRS where we search for data
+
+2009-01-16 09:55 dwaynebailey
+
+ * convert/ini2po.py, convert/po2ini.py, storage/ini.py: Add
+ Innosetup support
+
+2009-01-15 19:27 walter_l
+
+ * storage/pocommon.py: Swap the parent classes of pofile to make
+ sure that the right settargetlanguage() is used.
+
+2009-01-15 19:26 walter_l
+
+ * storage/poheader.py: "basestr" -> "basestring"
+
+2009-01-15 16:32 alaaosh
+
+ * tools/build_tmdb.py: clean up, nothing is hardcoded now
+
+2009-01-15 15:36 friedelwolff
+
+ * lang/data.py: Handle hyphen (-), underscore (_) and at-sign (@) as
+ delimiters in .simplercode()
+
+2009-01-14 16:49 alaaosh
+
+ * storage/tmdb.py: don't do full text matching on small strings
+
+2009-01-14 14:59 friedelwolff
+
+ * storage/tmx.py: Replace lxml.etree calls with faster parts of the
+ API
+
+2009-01-14 14:45 alaaosh
+
+ * services/tmserver.py: migrate server to tmdb backend
+
+2009-01-14 14:43 alaaosh
+
+ * lang/ar.py: acronyms are transliterated in Arabic
+
+2009-01-14 14:36 friedelwolff
+
+ * storage/poxliff.py: Implement .istranslatable() to ensure that
+ headers are not considered translatable by our tools. The general
+ XLIFF implementation doesn't work since the PO representation
+ guide prescribes that PO headers should be translatable in the
+ XLIFF file. Now at least Virtaal won't give it to users.
+
+2009-01-14 14:30 friedelwolff
+
+ * storage/poxliff.py: Use the proper inherited .addunit() so that
+ ._store is correct. This fixes bug 696.
+
+2009-01-14 12:31 friedelwolff
+
+ * storage/poxliff.py: Replace lxml.etree calls with faster parts of
+ the API
+
+2009-01-14 08:39 alaaosh
+
+ * tools/build_tmdb.py: build tmdb out of translation files
+
+2009-01-14 08:15 alaaosh
+
+ * storage/tm_db.py, storage/tmdb.py: tmdb.py is a better name
+
+2009-01-13 16:08 alaaosh
+
+ * storage/tm_db.py: full text indexing support
+
+2009-01-13 15:28 walter_l
+
+ * tools/pogrep.py: Also return empty list for indexes if there
+ is no search to perform.
+
+2009-01-13 15:09 walter_l
+
+ * tools/pogrep.py: Check if a unit has plurals before assuming its
+ source(s) and target(s) have a "strings" attribute.
+ This is part of the fix for bug 693.
+
+2009-01-13 13:48 walter_l
+
+ * tools/pogrep.py: virtaal.modes.searchmode.SearchMatch ->
+ translate.tools.pogrep.GrepMatch
+ virtaal.modes.searchmode.SearchMode.get_matches() ->
+ translate.tools.pogrep.GrepFilter.getmatches()
+
+2009-01-13 08:49 walter_l
+
+ * misc/file_discovery.py: Prefer $XDG_DATA_HOME if available.
+
+2009-01-12 16:03 winterstream
+
+ * storage/poparser.py: Oops. With no space after "msgid" in the
+   startswith function, we'd also get x.startswith('msgid') == True
+   if x == 'msgid_plural', which is not what we want.
+
+   The fix is just to add spaces after "msgid".
+
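A minimal illustration (not from the toolkit itself) of why the trailing space matters in that startswith check:

```python
# 'msgid_plural' begins with 'msgid', so a bare startswith("msgid")
# misclassifies plural entries; the trailing space fixes it.
line = 'msgid_plural "apples"'
print(line.startswith("msgid"))   # -> True (overly permissive match)
print(line.startswith("msgid "))  # -> False (the corrected check)
```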
+2009-01-12 15:56 winterstream
+
+ * storage/poparser.py: The new PO parser broke if it saw CHARSET as
+ the encoding of POT
+ files. It now assumes that such files are UTF-8 encoded.
+
+2009-01-12 15:49 walter_l
+
+ * misc/optrecurse.py, storage/poheader.py, storage/tmx.py,
+ storage/xpi.py: Use __version__.sver instead of __version__.ver.
+
+2009-01-12 15:00 walter_l
+
+ * __init__.py, __version__.py: __version__.ver -> __version__.sver
+ + __version__.ver = (1, 2, 1)
+ + License header for __version__.py
+ Updated header for __init__.py
+
+2009-01-12 11:56 alaaosh
+
+ * storage/tm_db.py: new much simpler implementation
+
+2009-01-12 06:00 dwaynebailey
+
+ * i18n.py: gettext.install defines _ so we don't need to redefine it.
+
+2009-01-10 15:28 dwaynebailey
+
+ * i18n.py, po/POTFILES.in, storage/base.py, storage/csvl10n.py,
+ storage/mo.py, storage/pocommon.py, storage/qm.py,
+ storage/qph.py, storage/tbx.py, storage/tmx.py, storage/ts2.py,
+ storage/wordfast.py, storage/xliff.py: Introduce some
+ localisation framework. Make all bilingual format description
+ localisable.
+
+2009-01-10 14:10 dwaynebailey
+
+ * po, po/Makevars, po/POTFILES.in, po/POTFILES.skip: Put structure
+ in place for localisation of toolkit files.
+
+2009-01-10 14:09 friedelwolff
+
+ * storage/xliff.py: Use better lxml APIs for speedup
+
+2009-01-10 12:18 friedelwolff
+
+ * misc/setup.py: Remove the installer for the old C CSV library
+ that was removed earlier
+
+2009-01-09 14:40 winterstream
+
+ * convert/test_pot2po.py, storage/poparser.py: After we've read the
+ optional msgctxt and the msgid when doing
+ obsolete parsing, then we should terminate the parsing of the
+ current obsolete unit when we see either a msgctxt or a msgid,
+ since either means that we've hit a new obsolete unit.
+
+ Also fixed the unit test case that tests this functionality.
+
+2009-01-09 14:30 winterstream
+
+ * storage/poparser.py: Our parser didn't deal with msgctxt in
+   obsolete units properly, because we bailed if we saw a msgid
+   when parsing an obsolete unit; of course, one would see a msgid
+   directly after a msgctxt...
+
+2009-01-09 14:16 walter_l
+
+ * convert/test_pot2po.py: +Test for preservation of msgctxt values
+ in obsolete units.
+
+2009-01-09 10:06 walter_l
+
+ * storage/poparser.py: Ensure that all header fields are decoded.
+
+2009-01-09 09:58 winterstream
+
+ * storage/test_po.py: Turns out that our new parser forgets to
+ decode header comments.
+ So if you're calling getnotes() on the header, you get a
+ UnicodeDecodeError.
+
+ A fix is in the pipeline, but for now we have a test to trigger
+ the error.
+
+2009-01-08 21:54 friedelwolff
+
+ * storage/lisa.py: Massive optimisation of lisa class by using
+ better lxml methods. Some operations are running at less than 40%
+ of previous time. XLIFF and possibly other child classes still to
+ be done for possible small extra gain.
+
+2009-01-08 10:38 walter_l
+
+ * misc/file_discovery.py: Updated get_abs_data_filename() to allow
+ directories to be specified as well as using XDG_DATA_DIRS.
+
+2009-01-07 14:17 walter_l
+
+ * services/__init__.py, services/lookupclient.py,
+ services/lookupservice.py, services/restclient.py,
+ services/tmclient.py: Removed unnecessary trailing whitespace.
+
+2009-01-06 20:57 friedelwolff
+
+ * lang/zh.py: Provide a slightly different length estimation
+ heuristic for Chinese
+
+2009-01-06 20:55 friedelwolff
+
+ * lang/common.py, lang/data.py: Provide a method for a language
+ object .alter_length() to return a string optionally made longer
+ or shorter to use in length estimations when leaving space in the
+ GUI. This uses a basic heuristic using constants defined in
+ data::expansion_factors, although the length difference can be
+ redefined per language to use a different heuristic.
+
+2009-01-06 14:56 alaaosh
+
+ * services/opentranclient.py: don't query opentran till we
+ negotiate a supported language, fixes #635
+
+2009-01-06 07:49 winterstream
+
+ * misc/quote.py: Reverted misc/quote.py. This was supposed to be
+ experimental code, but
+ it seems to have slipped in anyway.
+
+2009-01-05 15:10 alaaosh
+
+ * services/restclient.py: hack to fix 636: doesn't really fix it,
+   but logs instead of barfing. We need to know when and why certain
+   requests get deleted twice.
+
+2009-01-05 07:43 friedelwolff
+
+ * services/opentranclient.py: Check for fuzzyness at the 'flag'
+ value - we really don't want to suggest fuzzy translations
+
+2009-01-02 18:15 friedelwolff
+
+ * storage/qm.py: Raise an exception when somebody tries to write
+ out a QM file. This sort of fixes bug 516.
+
+2008-12-30 14:04 friedelwolff
+
+ * storage/tm_db.py: rough first version of a tm db optimised for
+ reading
+
+2008-12-27 19:41 friedelwolff
+
+ * services/tmserver.py: Remove unused imports and clean up
+ whitespace
+
+2008-12-27 19:36 friedelwolff
+
+ * search/match.py, storage/base.py: Remove some attributes that
+   weren't being used anywhere
+
+2008-12-27 13:26 friedelwolff
+
+ * storage/pypo.py: Privatise .msgidlen() and .msgstrlen() - can't
+ we get rid of these?
+
+2008-12-26 14:24 friedelwolff
+
+ * misc/hash.py, storage/html.py, tools/podebug.py: [Contributed by
+ Leonardo Ferreira Fontenelle] Provide a wrapper for the md5
+ library that is located in different places in different versions
+   of Python. Use this instead to avoid deprecation warnings. This
+ fixes bug 634.
+
+2008-12-19 14:55 alaaosh
+
+ * services/opentranclient.py: commented out debugging output
+
+2008-12-19 14:53 alaaosh
+
+ * search/match.py, services/tmserver.py: move unit2dict to
+ search.match
+
+2008-12-19 12:53 walter_l
+
+ * services/opentranclient.py: Removed prints and trailing
+ whitespace.
+
+2008-12-19 09:03 alaaosh
+
+ * services/opentranclient.py: something stinks in the state of
+ unicode
+
+2008-12-19 08:21 alaaosh
+
+ * services/opentranclient.py: move quality calculation and result
+ filtering code from virtaal
+
+2008-12-18 23:53 clouserw
+
+ * convert/po2tiki.py, convert/test_po2tiki.py,
+ convert/test_tiki2po.py, convert/tiki2po.py,
+ storage/test_tiki.py, storage/tiki.py: po's addlocations()
+ appears to split on spaces. Fixing tiki2po to use location names
+ with no spaces.
+
+2008-12-18 21:35 friedelwolff
+
+ * services/opentranclient.py: Don't assign a bogus quality - let
+ the consumer decide what to do with it
+
+2008-12-18 17:19 alaaosh
+
+ * services/tmserver.py: getting closer to update tm features
+
+2008-12-18 17:18 alaaosh
+
+ * services/opentranclient.py: negotiate target language
+
+2008-12-18 17:15 alaaosh
+
+ * services/restclient.py: better handling of running state to allow
+ recursive non blocking requests
+
+2008-12-17 16:40 alaaosh
+
+ * services/opentranclient.py: first attempts at opentran client
+ used by virtaal
+
+2008-12-17 14:22 alaaosh
+
+ * services/tmserver.py: switch to own copy of selector
+
+2008-12-17 14:21 alaaosh
+
+ * misc/selector.py: match multiline urls
+
+2008-12-17 14:20 alaaosh
+
+ * misc/selector.py: for use by tmserver
+
+2008-12-17 11:13 friedelwolff
+
+ * tests/odf_xliff/test_odf_xliff.py: Factor out file name constants
+   and do proper module cleanup. Minor whitespace cleanup.
+
+2008-12-17 11:09 alaaosh
+
+ * services/restclient.py, services/tmclient.py: move json code to
+ tmclient, make restclient more generic
+
+2008-12-17 10:15 friedelwolff
+
+ * convert/po2symb, convert/po2symb.py, convert/symb2po,
+ convert/symb2po.py: Clean up license headers, copyright dates,
+ whitespace and some docstrings
+
+2008-12-17 09:12 alaaosh
+
+ * services/restclient.py, services/tmserver.py: urllib barfs on
+ unicode, fixes #631
+
+2008-12-17 08:55 winterstream
+
+ * convert/symb2po.py: This broke when calling symb2po without a
+ template, since
+ template_dict['r_string_languagegroup_name'] would raise
+   a KeyError. This is now fixed.
+
+2008-12-17 08:53 friedelwolff
+
+ * convert/symb_common.py: Remove moved module symb_common
+
+2008-12-17 08:52 friedelwolff
+
+ * convert/po2symb.py, convert/symb2po.py, storage/symbian.py: Move
+ the symb_common module to storage/symbian, since it mostly deals
+ with the format
+
+2008-12-17 08:43 dwaynebailey
+
+ * convert/test_php2po.py: Fix newline test to follow new escaping
+ rules.
+
+2008-12-17 08:35 winterstream
+
+ * convert/po2symb, convert/po2symb.py, convert/symb2po,
+ convert/symb2po.py, convert/symb_common.py: Added basic
+ converters to convert from Symbian-like translation formats
+ to PO and vice versa. The format is heavily biased towards the
+ way that
+ the Buddycloud translation files look.
+
+2008-12-17 08:33 winterstream
+
+ * misc/quote.py: Replace the old horrid extractwithoutquotes with a
+ more maintainable version.
+
+2008-12-16 16:15 friedelwolff
+
+ * services/tmserver.py: +Docstring. -unused variables. Whitespace
+ cleanup.
+
+2008-12-15 18:29 clouserw
+
+ * convert/po2tiki, convert/po2tiki.py, convert/test_po2tiki.py,
+ convert/test_tiki2po.py, convert/tiki2po, convert/tiki2po.py,
+ storage/test_tiki.py, storage/tiki.py: Import tiki2po
+
+2008-12-15 17:59 dwaynebailey
+
+ * tools/podebug.py: Drop default format string.
+
+2008-12-15 15:32 dwaynebailey
+
+ * tools/podebug.py: Add ignore rules for KDE
+
+2008-12-15 14:55 friedelwolff
+
+ * convert/test_odf2po.py: Another removal of obsolete ODF support.
+ (bug 608)
+
+2008-12-15 14:54 friedelwolff
+
+ * convert/odf2po, convert/odf2po.py, storage/odf.py,
+ storage/test_odf.py: Remove obsolete ODF support. This closes bug
+ 608.
+
+2008-12-15 14:44 dwaynebailey
+
+ * convert/po2php.py, convert/test_po2php.py: Preserve inline
+ comments in the PHP file. Fixes bug 590.
+
+2008-12-15 14:37 dwaynebailey
+
+ * convert/po2php.py: Remove unused variable
+
+2008-12-15 10:45 alaaosh
+
+ * services/restclient.py, services/tmserver.py: handle http errors
+
+2008-12-15 10:25 dwaynebailey
+
+ * tools/podebug.py: s/rewrite/ignore/
+
+2008-12-15 10:11 alaaosh
+
+ * services/restclient.py: lost commit
+
+2008-12-15 09:40 dwaynebailey
+
+ * storage/php.py, storage/test_php.py: Don't back convert real \n
+ to \\n
+ This allows us to, as best we can, preserve layout of multiline
+ entries.
+
+2008-12-15 09:21 dwaynebailey
+
+ * storage/php.py, storage/test_php.py: Fix bug 589 and escaped \'
+ character in a double quote string.
+
+2008-12-15 08:51 dwaynebailey
+
+ * storage/php.py, storage/test_php.py: Implement full escaping
+ functionality for PHP. We now treat single and double
+ quote escaping differently as PHP does. The tests all pass but
+ this has not
+ been widely tested on files in the field.
+
+ This solves most, if not all, of bug 593
+
+2008-12-15 08:46 alaaosh
+
+ * services/tmserver.py: no more 404s on punctuation
+
+2008-12-14 18:46 winterstream
+
+ * storage/pypo.py: Making allcomments a property saves us yet
+ another few cycles.
+
+2008-12-14 18:45 winterstream
+
+ * storage/pypo.py, storage/test_pypo.py: First, having
+
+ if target == self.target:
+
+ in gettarget slows things down enormously, since self.target is
+ not that cheap.
+
+ When I removed this, I uncovered a bug in a test. That's fixed
+ now.
+
+ This also pointed to a bit of incompleteness in setsource, which
+ is now fixed.
+
+ Finally, we don't have to call settarget in the constructor,
+ since the base
+ class does it.
+
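The performance point above can be sketched schematically; this is an illustrative class, not the toolkit's actual pypo code:

```python
# Schematic of the setter guard described above. The old form called
# the getter on every assignment; if the getter is expensive, the
# "no-op check" costs more than the redundant write it tries to avoid.
class Unit:
    def __init__(self):
        self._target = ""

    @property
    def target(self):
        # stand-in for a non-trivial computed getter
        return self._target

    @target.setter
    def target(self, value):
        # old, slow form:
        #     if value == self.target:  # invokes the expensive getter
        #         return
        self._target = value
```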
+2008-12-14 18:44 winterstream
+
+ * storage/base.py, storage/pypo.py: Surprisingly, not using super
+   makes a tangible speed difference.
+
+2008-12-14 18:43 winterstream
+
+ * storage/poparser.py: Optimized a few if clauses so that common
+ cases are in the if part.
+
+2008-12-14 13:31 winterstream
+
+ * storage/poparser.py: Make the decode process more C-like, since
+ we want to use this
+ with Cython.
+
+2008-12-14 13:30 winterstream
+
+ * storage/poparser.py: Avoid looking up methods dynamically every
+ time.
+
+ The string and list methods we constantly use are stored
+ at the top of the file in variables. This speeds up
+ processing somewhat.
+
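A sketch of that micro-optimisation pattern; the names here are illustrative, not poparser's actual code:

```python
# Module-level caching of a hot method, as described above: binding
# str.startswith to a name once avoids a per-call attribute lookup
# inside the parsing loop.
startswith = str.startswith  # cached at import time

def is_msgid_line(line):
    # equivalent to line.startswith("msgid "), minus the lookup
    return startswith(line, "msgid ")

print(is_msgid_line('msgid "hello"'))  # -> True
print(is_msgid_line('msgstr "x"'))     # -> False
```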
+2008-12-14 13:29 winterstream
+
+ * storage/poparser.py: The readcallback added a lot of unnecessary
+ overhead and
+ we only used it to build a buffer when reading the first
+ unit, so that we could reparse the buffer. But why don't
+ we just read the first unit as a str and then decode all
+ fields in the unit when we get to know the encoding?
+
+2008-12-13 12:07 friedelwolff
+
+ * storage/poparser.py, storage/pypo.py: Replace the old PO parser
+ with a cleaner one by Wynand. This should make maintenance and
+ optimisation easier.
+
+2008-12-13 11:57 friedelwolff
+
+ * storage/test_po.py, storage/test_pypo.py, tools/test_pomerge.py:
+ Some more tests for PO parsing. We don't want to keep lonely
+ comments disassociated anymore. They are joined with the
+ following unit like gettext does.
+
+2008-12-13 07:33 friedelwolff
+
+ * storage/lisa.py: Ask lxml not to convert CDATA to raw XML (only
+ available from lxml 2.1.0). This fixes bug 458.
+
+2008-12-13 04:50 friedelwolff
+
+ * misc/textwrap.py: Let % be a wrapping character as well. This
+ brings our PO wrapping closer to gettext and closes bug 622.
+
+2008-12-12 19:17 winterstream
+
+ * convert/xliff2odf.py, storage/odf_io.py: Modified xliff2odf to
+ embed the XLIFF file in the output ODF container.
+
+2008-12-12 17:11 clouserw
+
+ * convert/test_convert.py: Fix bug 624; tests fail if psyco library
+ doesn't exist
+
+2008-12-12 15:58 friedelwolff
+
+ * tests/odf_xliff/test_odf_xliff.py: Printing a unified diff when
+ comparing files
+
+2008-12-12 15:17 alaaosh
+
+ * services/tmserver.py: random delay was only needed for testing
+
+2008-12-12 15:15 alaaosh
+
+ * services/restclient.py, services/tmclient.py,
+ services/tmserver.py: CRUD REST and all that jazz
+
+2008-12-12 10:56 friedelwolff
+
+ * storage/pypo.py, storage/test_po.py: Only allow KDE style (msgid)
+ comments right at the start of a line next to the quote
+ character. This fixes bug 625.
+
+2008-12-12 10:46 friedelwolff
+
+ * storage/cpo.py: Handle addunit more carefully to ensure we call
+ the base class' .addunit
+
+2008-12-11 15:33 dwaynebailey
+
+ * lang/data.py: Minor: simple layout fix in list
+
+2008-12-11 15:31 dwaynebailey
+
+ * lang/data.py: Add a number of plural forms from
+ http://translate.sourceforge.net/wiki/l10n/pluralforms
+
+2008-12-06 11:41 dupuy
+
+ * misc/file_discovery.py, tools/poterminology.py: propagate
+ winterstream's r9062 changes to poterminology from 1.2 branch to
+ trunk
+
+ winterstream * r9062 /src/branches/Pootle-toolkit-1.2/translate/
+ (misc/file_discovery.py tools/poterminology.py):
+ find_installed_file doesn't work under all conditions. So I moved
+ get_abs_data_filename from virtaal into the toolkit.
+ get_abs_data_filename works correctly for all our current cases.
+
+2008-12-04 06:50 clouserw
+
+ * misc/wStringIO.py: Fix TypeError; bug 623
+
+2008-12-01 13:23 winterstream
+
+ * misc/contextlib.py: Modified contextlib.py so that a yield won't
+   appear inside a try/finally clause.
+
+   This is a limitation of Python 2.4, and we had to make similar
+   changes to contextlib.py before to accommodate Python 2.4.
+
+   For this reason there are also some limits on what can be done
+   with context blocks in Python 2.4.
+
+2008-12-01 12:39 friedelwolff
+
+ * lang/te.py: Add basic module for Telugu (te) to disable
+ capitalisation checks
+
+2008-12-01 10:39 friedelwolff
+
+ * lang/bn.py: Add basic module for Bengali (bn) to disable
+ capitalisation checks
+
+2008-12-01 10:06 winterstream
+
+ * storage/xml_extract/extract.py: Oops, forgot to update the code
+ in extract.py, so that it won't
+ use unit.placeable_id.
+
+ Now xid assignments should work.
+
+2008-12-01 09:52 winterstream
+
+ * storage/base.py, storage/lisa.py, storage/placeables/base.py: The
+   idea of having a separate PlaceableId structure for units, which
+   would contain an xid and rid, was not going to work well.
+
+ Now, xid and rid are properties of the unit classes which
+ do nothing unless overridden.
+
+2008-11-28 14:24 winterstream
+
+ * misc/rich.py, tools/podebug.py: Modified podebug to work with
+ rich sources and targets.
+
+ It will now respect placeables in XLIFF files.
+
+2008-11-28 13:12 winterstream
+
+ * storage/base.py, storage/lisa.py, storage/test_po.py,
+ storage/test_xliff.py, storage/xml_extract/extract.py: Most
+   importantly, moved the rich source and target functionality
+   into the unit base class.
+
+   This prompted the need for plurals, something which wasn't taken
+   into account before. Now the rich sources and targets are lists
+   of lists of chunks.
+
+ Thus
+ [['a', X('42')] <- First string
+ ['foo', G('43', 'baz')]] <- Second string
+
+2008-11-28 13:11 winterstream
+
+ * storage/xml_extract/extract.py: Added a bit of ad-hoc guard code
+ to find_translatable_dom_nodes
+ to ensure that we avoid processing things like XML processing
+ instructions.
+
+2008-11-28 13:10 winterstream
+
+ * storage/placeables/base.py: Fixed the __repr__ and __unicode__
+ conversions for placeables and
+ added comments.
+
+2008-11-28 07:20 friedelwolff
+
+ * convert/po2html.py: add an option to optionally not use tidy if
+ installed
+
+2008-11-27 09:45 friedelwolff
+
+ * convert/prop2po.py: Properly handle 'discard' units (with
+ DONT_TRANSLATE comments). This fixes bug 619.
+
+2008-11-26 14:31 friedelwolff
+
+ * storage/xml_name.py: Fix a few typos
+
+2008-11-26 13:54 winterstream
+
+ * storage/xml_extract/generate.py, storage/xml_extract/misc.py,
+ storage/xml_extract/test_misc.py, storage/xml_name.py: Replaced
+ full_xml_name with XmlNamer (which does the same task).
+
+2008-11-26 13:53 winterstream
+
+ * convert/xliff2odf.py: The XPathTree produced by
+ unit_tree.build_unit_tree contains
+ XML names using shortcut names (such as
+ 'office:document-content').
+ XmlNamer returns fully qualified XML names, so obviously it
+ won't work where we're trying to reference the short names.
+
+2008-11-26 13:52 winterstream
+
+ * storage/xml_extract/generate.py: Python warned me about
+
+ unit.target_dom or unit.source_dom
+
+ And rightfully so, since it's bad style. Replaced this with an
+ if block.
+
+2008-11-26 10:22 winterstream
+
+ * storage/xml_extract/misc.py: XML namespaces can include URLs,
+ which means that the regular
+ expression for parsing them must accept the "/" character.
+
+2008-11-26 09:57 winterstream
+
+ * convert/xliff2odf.py: Added other ODF filetypes to xliff2odf.
+
+2008-11-26 09:52 winterstream
+
+ * convert/odf2xliff.py, convert/xliff2odf.py, storage/odf_io.py:
+ Modified odf2xliff to use meta.xml and styles.xml in
+ addition to content.xml when extracting translatable
+ strings.
+
+ Moved some ODF routines into odf_io.py.
+
+2008-11-25 17:13 winterstream
+
+ * convert/xliff2odf.py, storage/xml_name.py: Modified xliff2odf to
+ work not only on content.xml, but also
+ meta.xml and styles.xml.
+
+ Added the XmlNamer class to make working with fully qualified
+ XML names a bit less of a pain.
+
+2008-11-25 08:38 alaaosh
+
+ * storage/test_dtd.py: test for bug #610
+
+2008-11-25 08:04 friedelwolff
+
+ * storage/factory.py: If we are creating a new store, keep the
+ filename so that store.save() will work correctly later on
+
+2008-11-24 16:38 alaaosh
+
+ * storage/dtd.py: fixed bug #610 HACKISH
+
+2008-11-24 08:46 friedelwolff
+
+ * misc/optrecurse.py: Play it safe to ensure we test correctly for
+ psyco's presence even if the module doesn't exist
+
+2008-11-21 22:24 dwaynebailey
+
+ * convert/po2php.py: Refactor: 1/0 -> True/False
+
+2008-11-21 22:24 dwaynebailey
+
+ * convert/php2po.py: Remove remnant of header preservation code.
+
+2008-11-21 22:21 dwaynebailey
+
+ * convert/po2prop.py, convert/prop2po.py: Refactor: 1/0 ->
+ True/False
+
+2008-11-21 22:15 dwaynebailey
+
+ * storage/pypo.py: Refactor: 1/0 -> True/False
+
+2008-11-21 21:53 dwaynebailey
+
+ * storage/dtd.py: Refactor: 1/0 -> True/False
+
+2008-11-21 21:43 dwaynebailey
+
+ * storage/dtd.py: Refactor: change 1/0 to True/False
+
+2008-11-21 21:40 dwaynebailey
+
+ * storage/dtd.py: Refactor: remove continual redefining of
+   self.units which is already defined in base.py
+
+2008-11-21 15:20 friedelwolff
+
+ * storage/qph.py: Ensure we output QPH files like Qt Linguist does
+ it, always with a <!DOCTYPE QPH>. This is a similar workaround
+ used in the new ts class to work around a bug in lxml that was
+ fixed in lxml 2.1.3
+
+2008-11-21 13:41 friedelwolff
+
+ * misc/optrecurse.py: Don't add psyco options if psyco is not
+ installed. This closes bug 606.
+
+2008-11-21 12:16 dwaynebailey
+
+ * tools/podebug.py: Spelling fix
+
+2008-11-20 15:25 alaaosh
+
+ * tools/pretranslate.py: fill origin attribute when adding
+ alt-trans
+
+2008-11-20 10:18 alaaosh
+
+ * tools/pretranslate.py: when in xliff do as the xliffians do and
+   add fuzzy matches to alt-trans
+
+2008-11-20 10:12 alaaosh
+
+ * storage/xliff.py: addalttrans should support adding source tags
+ and match-quality attributes
+
+2008-11-19 20:52 dwaynebailey
+
+ * storage/dtd.py: Refactor, place functions on separate lines.
+
+2008-11-19 20:48 dwaynebailey
+
+ * storage/dtd.py: Refactor DTD validation into its own method.
+
+2008-11-19 16:56 winterstream
+
+ * storage/xml_extract/extract.py, storage/xml_extract/generate.py,
+ storage/xml_extract/misc.py, storage/xml_extract/test_misc.py:
+ Use the XML namespace table to create shorter X-Paths for XLIFF
+ ids.
+
+2008-11-19 16:55 winterstream
+
+ * storage/base.py, storage/placeables/base.py,
+ storage/xml_extract/extract.py, storage/xml_extract/generate.py,
+ storage/xml_extract/misc.py: Sorry for the intertwined changes,
+ but it was hard to separate them.
+
+ 1. Fixed type declarations
+ 2. Removed unnecessary bookkeeping code, like the 'level' member
+ in
+ the ParseState structure.
+ 3. Moved reduce_unit_tree to extract.py (so it can be close to
+ Translatable class)
+ 4. Removed make_translatable (since it is unnecessary)
+ 5. ID values for placeables are now computed while walking the
+ Translatable tree, thanks to the IdMaker class.
+
+2008-11-19 16:54 winterstream
+
+ * storage/xml_extract/extract.py: If we see a tag which is not in
+ our inline namespace,
+ then we should assume that it is not inline (this sounds
+ obvious here, but you'll have to trust me that it's a
+ bit more subtle than that).
+
+2008-11-18 18:44 dwaynebailey
+
+ * storage/dtd.py: Remove unused rewrap function
+
+2008-11-18 18:27 dwaynebailey
+
+ * storage/csvl10n.py, storage/dtd.py, storage/oo.py,
+ storage/php.py, storage/properties.py, storage/pypo.py: Remove
+   various __main__. They were originally intended to allow quick
+   testing of the storage format. But nobody uses them and they're
+   not maintained, so let's rather drop them.
+
+2008-11-18 17:54 dwaynebailey
+
+ * misc/optrecurse.py, misc/test_optrecurse.py: Document fn
+   splitext. Add optrecurse test file and a test for fn splitext.
+
+2008-11-18 17:34 dwaynebailey
+
+ * doc/epydoc-config.ini, doc/gen_api_docs.sh: Add scripts and
+ config to generate epydoc documentation
+
+2008-11-18 07:57 friedelwolff
+
+ * storage/ts2.py: Update comment about lxml versions having a
+ problem outputting the doctype
+
+2008-11-17 19:47 dwaynebailey
+
+ * misc/optrecurse.py: Refactor, move multiple functions on a line
+   to separate lines.
+
+2008-11-17 19:40 dwaynebailey
+
+ * convert/po2dtd.py: Consolidate calls to removeinvalidamps
+
+2008-11-17 19:39 dwaynebailey
+
+ * storage/dtd.py: Refactor removeinvalidamp
+
+2008-11-17 19:37 dwaynebailey
+
+ * storage/test_dtd.py: Flesh out tests for removeinvalidamp
+
+2008-11-17 19:36 dwaynebailey
+
+ * convert/po2dtd.py, convert/test_po2dtd.py, storage/dtd.py,
+ storage/test_dtd.py: Move fn removeinvalidamps from po2dtd into
+ dtd storage class.
+
+2008-11-17 19:27 dwaynebailey
+
+ * storage/test_dtd.py: Test to see that we raise a warning when we
+ have broken DTD entries.
+
+2008-11-17 19:26 dwaynebailey
+
+ * storage/test_base.py: Enable exception testing through warnings
+ to be reset during teardown and setup of storage tests.
+
+2008-11-17 19:18 dwaynebailey
+
+ * lang/__init__.py, lang/common.py: Cleanup lang module
+ documentation
+
+2008-11-17 13:07 alaaosh
+
+ * storage/xliff.py: merged units with matching source should be
+ marked unfuzzy
+
+2008-11-17 12:58 alaaosh
+
+ * convert/pot2po.py: never commit before testing (mistyped function
+ names)
+
+2008-11-17 11:02 alaaosh
+
+ * convert/pot2po.py, convert/test_pot2po.py: refactored pot2po, now
+ supports multiple formats
+
+2008-11-17 10:38 alaaosh
+
+ * storage/lisa.py, storage/pocommon.py, storage/poheader.py,
+ storage/poxliff.py: makeheader should live in poheader instead of
+ pocommon to work with poxliff
+
+2008-11-17 10:32 alaaosh
+
+ * filters/checks.py, filters/pofilter.py: add drupal support
+
+2008-11-17 07:41 dwaynebailey
+
+ * storage/wordfast.py: There are more embarrassing things I am
+ sure.
+
+2008-11-13 15:05 winterstream
+
+ * storage/xml_extract/extract.py: placeable_name was changed from
+ an array to a string, but this one
+ site still assumed it was an array. This should be fixed now.
+
+2008-11-13 15:04 winterstream
+
+ * misc/typecheck/mixins.py, misc/typecheck/sets.py: Converted
+ relative imports in the typecheck package to
+ fully qualified imports.
+
+2008-11-13 13:51 dwaynebailey
+
+ * storage/wordfast.py: Um... logic crock - back to programmer
+ school.
+
+2008-11-13 06:48 friedelwolff
+
+ * lang/kn.py: New class for Kannada (kn) to disable capitalisation
+ checks
+
+2008-11-12 17:48 dwaynebailey
+
+ * storage/wordfast.py: Adapt dialect description to correct
+ problems with csv pre Python 2.5. We retain
+ 2.5 behaviour and only adjust on older version, so that we can
+ protect most
+ users from potential brokenness.
+
+2008-11-12 15:34 alaaosh
+
+ * tools/pretranslate.py: now works with xliff
+
+2008-11-12 14:37 dwaynebailey
+
+ * storage/wordfast.py: Default to using Latin1 instead of UTF-16.
+ We had the problem that in po2wordfast we always produce UTF-16
+ files.
+   This was because the wordfast files were initiated to be utf-16
+ (we only did the right thing if we parsed an
+ existing wordfast file).
+
+ We do proper detection of the need for UTF-16 in __str__, so
+ rather use Latin1 as the default.
+
+2008-11-12 08:10 alaaosh
+
+ * storage/xliff.py: make xliffunit.merge() match pounit.merge()
+
+2008-11-12 07:54 alaaosh
+
+ * tools/pretranslate.py: fixes #602 still ugly coupling between
+ pot2po and pretranslate
+
+2008-11-12 06:21 dwaynebailey
+
+ * convert/roundtrip-OOo: Remove all traces of wget
+
+2008-11-11 10:29 alaaosh
+
+ * convert/pot2po.py, tools/pretranslate, tools/pretranslate.py,
+ tools/test_pretranslate.py: split pretranslation code from pot2po
+
+2008-11-11 04:08 dwaynebailey
+
+ * storage/test_wordfast.py: Fix epydoc by using raw string
+
+2008-11-10 17:26 dwaynebailey
+
+ * storage/qph.py: Provide a reference to the Qt Linguist
+   implementation of .qph; it's the best thing short of a valid DTD.
+
+2008-11-10 14:09 friedelwolff
+
+ * storage/test_poheader.py: Remove failing timezone tests (DST
+ settings probably changed)
+
+2008-11-10 07:57 dwaynebailey
+
+ * convert/__init__.py: Add epydoc groups for clarity
+
+2008-11-10 07:55 dwaynebailey
+
+ * storage/__init__.py: Add epydoc groups for clarity.
+
+2008-11-10 07:53 dwaynebailey
+
+ * __init__.py: Add __version__ to the Misc. group
+
+2008-11-10 07:49 dwaynebailey
+
+ * __init__.py: Add epydoc groups to make it easier to distinguish
+ between various modules.
+
+2008-11-09 14:18 friedelwolff
+
+ * storage/ts2.py: Privatise several methods, remove dead code and
+ add some comments
+
+2008-11-09 13:19 friedelwolff
+
+ * storage/ts2.py: Properly create the name tag of a new context
+
+2008-11-09 13:17 friedelwolff
+
+ * storage/ts2.py: temporarily fix a non-unicode assignment test
+
+2008-11-09 13:12 friedelwolff
+
+ * storage/qph.py: Somewhat simplify qph - a simple implementation
+ for a simple format
+
+2008-11-08 10:14 friedelwolff
+
+ * storage/cpo.py, storage/pypo.py: Align behaviour of addnote for
+ both PO implementations: we don't add comments unless there are
+ non-spacing characters.
+
+2008-11-08 10:04 friedelwolff
+
+ * storage/test_po.py: Rewrite posource of obsolete units to the way
+ that it should actually be output (pypo maintains it either way,
+ but cpo doesn't)
+
+2008-11-08 09:55 friedelwolff
+
+ * storage/cpo.py: Don't pass obsolete parameter to unquotefrompo
+ (removed in r7417)
+
+2008-11-07 15:45 dwaynebailey
+
+ * convert/dtd2po.py, convert/po2dtd.py, storage/dtd.py: Move
+ labelsuffixes and accesskeysuffixes to dtd.py
+
+2008-11-07 09:11 dwaynebailey
+
+ * convert/test_php2po.py: Don't do fancy comment manipulation. This
+ aligns the test with the
+ changes to allow us to take multiline comments from PHP files.
+
+2008-11-06 15:00 dwaynebailey
+
+ * convert/roundtrip-OOo: Lets do XLIFF also
+
+2008-11-06 14:30 dwaynebailey
+
+ * convert/roundtrip-OOo: Lots of cleanups and make it use curl not
+ wget
+
+2008-11-06 10:45 dwaynebailey
+
+ * storage/placeables, storage/xml_extract: Ignore *.pyc
+
+2008-11-06 09:38 dwaynebailey
+
+ * tools/test_podebug.py: Add a Swedish chef rewrite test. Mostly to
+ ensure it
+ continues to work.
+
+2008-11-06 09:34 dwaynebailey
+
+ * tools/test_podebug.py: Add test for unicode rewrite function
+
+2008-11-06 09:32 dwaynebailey
+
+ * tools/test_podebug.py: Add tests for blank and en rewrite rules
+
+2008-11-06 09:27 dwaynebailey
+
+ * tools/test_podebug.py: Add comments to ignore_gtk test
+
+2008-11-06 09:26 dwaynebailey
+
+ * tools/test_podebug.py: Add tests for xxx rewrite style
+
+2008-11-06 09:14 dwaynebailey
+
+ * tools/test_podebug.py: Add a test for ignoring certain GTK
+ messages.
+
+2008-11-06 07:00 dwaynebailey
+
+ * tools/podebug.py: Add gtk as a type of application. Pass whole
+ units to
+ the application ignore function.
+
+2008-11-06 06:51 dwaynebailey
+
+ * tools/podebug.py: Protect line endings. Could probably be better
+ abstracted but good enough for now.
+
+2008-11-03 14:03 walter_l
+
+ * storage/test_dtd.py: Augmented
+ TestDTD.test_entitityreference_in_source() with a test for bug
+ 597.
+
+2008-11-03 14:03 walter_l
+
+ * storage/dtd.py: Fixed a bug where multi-line external parameter
+ entities cause the rest of the .dtd file to be parsed
+ incorrectly.
+
+2008-10-31 14:37 dwaynebailey
+
+ * storage/php.py: We can handle block comments now
+
+2008-10-31 14:36 dwaynebailey
+
+ * storage/php.py, storage/test_php.py: Correctly ignore block
+ comments; fixes bug 587
+
+2008-10-31 14:27 dwaynebailey
+
+ * storage/test_php.py: Split the escaping tests into single and
+ double quote versions
+
+2008-10-31 14:26 dwaynebailey
+
+ * storage/php.py: Store the escape type in the phpunit
+
+2008-10-31 13:48 dwaynebailey
+
+ * storage/php.py: Add documentation for the escaping rule reference
+ in the PHP encode and decode functions.
+
+2008-10-31 12:52 dwaynebailey
+
+ * storage/php.py: Fix typo
+
+2008-10-31 12:50 dwaynebailey
+
+ * storage/php.py: Improve PHP documentation
+
+2008-10-30 13:29 winterstream
+
+ * storage/xml_extract/generate.py: Modified xliff2odf to make use
+ of the newly exposed source_dom
+ and target_dom properties of LISA units.
+
+ This means that we're quite far along in supporting placeables
+ properly.
+
+2008-10-29 16:52 winterstream
+
+ * storage/xml_extract/extract.py: Added proper placeable support in
+ the ODF->XLIFF direction
+ (i.e. for odf2xliff).
+
+2008-10-29 16:51 winterstream
+
+ * storage/lisa.py: Refactored source and target accessors in
+ lisa.py so that common
+ functionality is maintained in get_source_dom, set_source_dom,
+ get_target_dom and set_target_dom.
+
+ This was done, since we are going to use these accessors to
+ restructure
+ a template (such as an ODF file) in accordance with how
+ placeables
+ differ between the source and target.
+
+2008-10-29 06:55 dwaynebailey
+
+ * convert/po2php.py: Remove debug output.
+
+2008-10-29 06:53 dwaynebailey
+
+ * storage/php.py: Improve documentation after research for bug 589
+
+2008-10-28 15:58 winterstream
+
+ * storage/placeables/misc.py: Forgot to add this file earlier.
+ Sorry!
+
+2008-10-28 15:36 winterstream
+
+ * convert/test_po2html.py: Fixed another test. The expected output
+ was incorrectly specified.
+
+2008-10-28 15:01 winterstream
+
+ * convert/test_dtd2po.py: Fixed a test which was taking the wrong
+ code path because both
+ not ''
+ and
+ not None
+ evaluate to True.
+
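The fix above guards against a classic Python truthiness pitfall, sketched here in a minimal illustration (the names are mine, not the toolkit's):

```python
def describe(value):
    # "not value" is True for both None and '', so a check written
    # this way takes the same branch for two different inputs.
    if not value:
        return "missing-or-empty"
    return "present"

def describe_strict(value):
    # Checking identity with None distinguishes the two cases.
    if value is None:
        return "missing"
    if value == "":
        return "empty"
    return "present"
```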
+2008-10-28 14:42 winterstream
+
+ * storage/test_xml_extract.py: Removed a dead file which was
+ related to very old xml extraction code
+ (which has long since moved into the sub-package "xml_extract").
+
+2008-10-28 14:39 winterstream
+
+ * storage/lisa.py: Forgot to add the rich_source and rich_target
+ accessors for the LISA
+ store types.
+
+2008-10-28 14:39 winterstream
+
+ * storage/placeables/lisa.py, storage/placeables/test_lisa.py:
+ Fixed some str/unicode interaction bugs and added a utility
+ function
+ to convert strings to unicode correctly.
+
+2008-10-28 14:38 winterstream
+
+ * storage/base.py, storage/lisa.py,
+ storage/placeables/baseplaceables.py,
+ storage/placeables/chunk.py,
+ storage/placeables/lisaplaceables.py,
+ storage/placeables/test_baseplaceables.py,
+ storage/placeables/test_chunk.py,
+ storage/placeables/test_lisaplaceables.py: Removed the majority
+ of Enrique's placeables support, since it
+ seems likely that we'll approach this problem slightly
+ differently.
+
+2008-10-28 10:28 winterstream
+
+ * storage/placeables/__init__.py, storage/placeables/base.py,
+ storage/placeables/lisa.py, storage/placeables/test_lisa.py,
+ storage/test_xliff.py: In the process of replacing placeables
+ support for the toolkit.
+
+ The toolkit's placeable support is based on XLIFF's placeables.
+ There are structures defined for all of the XLIFF placeables.
+
+ The support is currently limited. We don't properly support
+ marked content yet. We also don't enforce constraints such as the
+ rule that <sub> tags cannot be children of <g> tags. This will
+ all follow in the future.
+
+ This patch includes placeable support for the LISA store types.
+ The accessors get_rich_source and set_rich_source are used to
+ set the source of a unit with placeables. Likewise the
+ accessors get_rich_target and set_rich_target are used to
+ access the target of a unit.
+
+ See the unit tests for how these should be used.
+
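The rich-accessor idea described above can be sketched as follows. This is a toy illustration of the concept only; the class names and internals here are hypothetical, not the toolkit's actual LISA classes:

```python
class G:
    """Stand-in for an XLIFF <g> inline placeable; illustrative only."""
    def __init__(self, id, content):
        self.id = id
        self.content = content

class Unit:
    """Toy unit showing the rich-accessor idea."""
    def __init__(self):
        self._rich_source = []

    def set_rich_source(self, chunks):
        # The rich source is a list mixing plain strings and placeables.
        self._rich_source = list(chunks)

    def get_rich_source(self):
        return self._rich_source

    @property
    def source(self):
        # Flatten placeables back to plain text for legacy callers.
        return "".join(c.content if isinstance(c, G) else c
                       for c in self._rich_source)

unit = Unit()
unit.set_rich_source(["Click ", G("1", "OK"), " to continue"])
```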
+2008-10-28 10:27 winterstream
+
+ * storage/xml_extract/misc.py: Fixed the tag regex to deal with
+ cases where no namespace is
+ specified.
+
+2008-10-28 10:26 winterstream
+
+ * storage/placeables/baseplaceables.py,
+ storage/placeables/chunk.py,
+ storage/placeables/lisaplaceables.py,
+ storage/placeables/test_chunk.py: 1. Added the class attribute
+ 'type' which is used to distinguish
+ between placeables. We use the XLIFF specification's
+ characterization
+ of different types of placeables (see the comments in the code).
+ 2. Added a chunk list type which will be used to set placeables
+ for units.
+ Units should also return chunk types through the attributes
+ marked_source and marked_target.
+ 3. Added some tests for the chunk type.
+
+2008-10-28 10:18 winterstream
+
+ * storage/placeables/baseplaceables.py: Fixed source errors in the
+ placeables base class.
+
+2008-10-23 17:00 friedelwolff
+
+ * lang/zh.py: Reword to make it clearer that we don't yet support
+ translating commas
+
+2008-10-21 11:38 winterstream
+
+ * storage/base.py, storage/pypo.py: Let ParseError take an inner
+ exception, so that we'll be able to
+ know what caused the ParseError.
+
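Wrapping the causing exception, as described above, might look like this. A minimal sketch of the idea, not necessarily the toolkit's exact signature; parse_count is a hypothetical parser used only to demonstrate the wrapping:

```python
class ParseError(Exception):
    # Carry the inner exception so callers can inspect what
    # actually caused the parse failure.
    def __init__(self, inner_exc):
        super().__init__(str(inner_exc))
        self.inner_exc = inner_exc

def parse_count(text):
    try:
        return int(text)
    except ValueError as exc:
        raise ParseError(exc)
```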
+2008-10-20 20:09 dwaynebailey
+
+ * storage/xml_extract/generate.py: Fix escaped backslash in
+ docstring
+
+2008-10-20 20:08 dwaynebailey
+
+ * storage/xml_extract/generate.py: Fix docstrings
+
+2008-10-20 19:57 dwaynebailey
+
+ * storage/xml_extract/unit_tree.py: Fix docstrings
+
+2008-10-20 19:51 dwaynebailey
+
+ * storage/oo.py: Fix docstrings
+
+2008-10-20 15:33 winterstream
+
+ * convert/xliff2odf.py, tests/odf_xliff/test_odf_xliff.py: Fixed
+ the round-trip ODF-XLIFF test.
+
+ The ODF type encapsulates an ODF file. It defines equality on
+ such files
+ for the purposes of unit testing.
+
+2008-10-19 23:38 dwaynebailey
+
+ * storage/test_rc.py: Add basic test for escaping.
+
+2008-10-16 12:40 winterstream
+
+ * storage/statsdb.py, tools/pocount.py: Fixed possible stats
+ database inconsistency problems. Also modified
+ statsdb never to catch any exceptions. The user code must handle
+ exceptions.
+
+ In the previous code, if an error occurred before a database
+ commit
+ was issued, then a next database commit would pull in possibly
+ inconsistent changes from the previous failed call.
+
+ To ensure consistency, the database MUST be rolled back if ANY
+ exception whatsoever is raised in the database code. Why?
+ Because
+ it's impossible to know whether the database state is consistent
+ at the point when an exception is thrown.
+
+ The transaction decorator will ensure a database commit if
+ a decorated function executes without problems. Otherwise (if
+ an exception occurred), it will roll back the database and
+ reraise the exception.
+
+ Also note that pocount now handles exceptions from statsdb.
+
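The commit-or-rollback pattern described above can be sketched with sqlite3; this is my own minimal version of such a transaction decorator, not the statsdb code itself:

```python
import functools
import sqlite3

def transaction(f):
    # Commit only if the wrapped function succeeds; on ANY exception,
    # roll the database back and re-raise, so a later commit cannot
    # pick up half-applied changes from a failed call.
    @functools.wraps(f)
    def wrapper(con, *args, **kwargs):
        try:
            result = f(con, *args, **kwargs)
            con.commit()
            return result
        except Exception:
            con.rollback()
            raise
    return wrapper

@transaction
def add_row(con, value):
    con.execute("INSERT INTO stats (n) VALUES (?)", (value,))
    if value < 0:
        raise ValueError("bad value")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stats (n INTEGER)")
con.commit()
```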
+2008-10-16 12:39 winterstream
+
+ * tests/odf_xliff/test_2-test_odf2xliff-reference.xlf,
+ tests/odf_xliff/test_odf_xliff.py: Added some functional tests to
+ test odf2xliff and xliff2odf using
+ both the translate toolkit and itools as their engines.
+
+2008-10-16 12:37 winterstream
+
+ * convert/odf2xliff.py, misc/contextlib.py: Integrated a patch from
+ David Versmisse (from Itaapy) to use itools
+ as the ODF extraction engine.
+
+ Now a user can convert a document from ODF to XLIFF using either
+ itools or the translate toolkit using the flag --engine=itools
+ or --engine=toolkit.
+
+2008-10-16 08:22 dupuy
+
+ * .: svn:ignore of various links and directories for cleaner svn
+ status output
+
+2008-10-16 08:15 dupuy
+
+ * storage/mo.py: fix for bug 575 on 64-bit systems
+
+2008-10-15 17:20 dwaynebailey
+
+ * tools/podebug.py: Allow .pot files as input and drop the -P
+ option since we won't ever want .pot output. [Friedel Wolff's
+ patch]
+ Closes bug #573
+
+2008-10-15 17:17 dwaynebailey
+
+ * tools/pogrep.py: Add .mo files for grepping and sort file types
+
+2008-10-14 15:54 winterstream
+
+ * storage/xml_extract/extract.py, storage/xml_extract/test_misc.py,
+ storage/xml_extract/test_unit_tree.py,
+ storage/xml_extract/test_xpath_breadcrumb.py,
+ storage/xml_extract/unit_tree.py: Added quite a few unit tests
+ for the XML extraction code.
+
+2008-10-14 12:24 winterstream
+
+ * convert/odf2xliff.py, convert/xliff2odf.py,
+ storage/odf_shared.py, storage/xml_extract/extract.py,
+ storage/xml_extract/misc.py, tests/odf_xliff/test_2.odt: Modified
+ storage/odf_shared.py to attempt first to import itools and to
+ use
+ its ODF information. Failing that, it falls back to a copy of the
+ itools
+ information in storage/odf_shared.py (which may be out of date).
+
+ The important change is that we initially listed the tags in
+ which we
+ were interested, whereas itools lists tags that should be
+ ignored.
+
+ Due to integration of the code with itools, the specification
+ mechanism
+ has also been simplified. We only have a table of tags we reject
+ and a
+ table of inline placeables.
+
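The fallback described above is the usual optional-dependency import pattern. The module and attribute names below are illustrative placeholders, not itools' real API; a from-import that fails for any reason raises ImportError, which triggers the local copy:

```python
# Prefer the external library's data; fall back to a bundled
# (possibly out-of-date) copy when it is not installed.
try:
    from itools.odf import INLINE_TAGS  # hypothetical name
except ImportError:
    INLINE_TAGS = frozenset(["text:span", "text:a"])
```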
+2008-10-14 12:21 winterstream
+
+ * storage/xml_extract/extract.py: Added additional comments to
+ extract.py.
+
+2008-10-13 08:48 winterstream
+
+ * storage/xml_extract/generate.py: Added more comments to the code.
+ More to follow.
+
+2008-10-11 15:18 friedelwolff
+
+ * storage/placeables/lisaplaceables.py: Omit optional parameter
+ for compatibility with Python 2.3 and 2.4
+
+2008-10-11 06:16 dwaynebailey
+
+ * convert/odf2xliff.py: Add all OpenDocument filetypes for
+ conversion to XLIFF
+
+2008-10-10 17:11 winterstream
+
+ * storage/xml_extract/extract.py: Added a missing function call
+ parameter.
+
+2008-10-10 17:10 winterstream
+
+ * misc/contextlib.py: Further modify contextlib for Python 2.4. If
+ an exception occurs in
+ body(), then we first finish off the generator (which is our
+ context
+ manager) and then raise the exception again.
+
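The workaround described above, driving the generator to completion with next() because Python 2.4 generators lacked throw(), can be sketched like this in modern syntax (run_with_context and managed are my own illustrative names):

```python
def run_with_context(gen, body):
    next(gen)  # run setup code up to the yield
    try:
        result = body()
    except Exception:
        # No throw() available: just advance the generator so its
        # cleanup code after the yield still runs, then re-raise.
        try:
            next(gen)
        except StopIteration:
            pass
        raise
    # Normal exit: finish the generator so cleanup runs.
    try:
        next(gen)
    except StopIteration:
        pass
    return result

events = []
def managed():
    events.append("enter")
    yield
    events.append("exit")
```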
+2008-10-10 08:51 winterstream
+
+ * convert/record.py: Removed unused module.
+
+2008-10-10 08:50 winterstream
+
+ * convert/odf2xliff, convert/odf2xliff.py, convert/record.py,
+ convert/xliff2odf, convert/xliff2odf.py, misc/context.py,
+ misc/contextlib.py, misc/typecheck, misc/typecheck/__init__.py,
+ misc/typecheck/doctest_support.py, misc/typecheck/mixins.py,
+ misc/typecheck/sets.py, misc/typecheck/typeclasses.py,
+ storage/base.py, storage/lisa.py, storage/odf_shared.py,
+ storage/placeables, storage/placeables/__init__.py,
+ storage/placeables/baseplaceables.py,
+ storage/placeables/lisaplaceables.py,
+ storage/placeables/test_baseplaceables.py,
+ storage/placeables/test_lisaplaceables.py,
+ storage/test_xml_extract.py, storage/xml_extract,
+ storage/xml_extract/__init__.py, storage/xml_extract/extract.py,
+ storage/xml_extract/generate.py, storage/xml_extract/misc.py,
+ storage/xml_extract/unit_tree.py,
+ storage/xml_extract/xpath_breadcrumb.py, tests/odf_xliff,
+ tests/odf_xliff/test_1.odt, tests/odf_xliff/test_2.odt: Merged in
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8734
+
+ Squashed commit of the following:
+
+ commit a20def7ba7b82e5d71318f4c95604bed6526470b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 10 08:40:11 2008 +0000
+
+ Merged in
+ https://translate.svn.sourceforge.net/svnroot/translate/src/trunk/translate@8722
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8734
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit c8ec4ef169fda66e446dbad86228e67ac8b612cb
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 9 15:26:39 2008 +0000
+
+ Fixed an incorrect type annotation.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8733
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 50f57ec89effac0e5fd23ce59a89dac39809c695
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 9 15:23:21 2008 +0000
+
+ A big reorganization of the xml_extract functionality into a
+ package called xml_extract.
+
+ This should help to reduce the mental overload that was induced
+ by the previous file.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8732
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 2427db87a62239dc2bca3e3bb024a27d6a206dda
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 9 15:22:05 2008 +0000
+
+ This is a rather massive commit.
+
+ The code includes cleanups, as well as a mechanism to reorder
+ placeables in an arbitrary fashion.
+
+ The next step is to break this into a package and to add
+ comments,
+ since the code is very dense.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8731
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit e3cb6153c1d3b8c95ecf00dd27e7e126c37c7909
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 9 15:20:59 2008 +0000
+
+ Made the behaviour of apply_translations post-fix. This is to
+ ensure that child nodes are processed before parent nodes.
+
+ Why?
+
+ Because we might re-order the child nodes (depending on whether
+ the
+ translator re-ordered placeables) and therefore we must FIRST
+ deal with children, since we use XPath-like identifiers to find
+ children.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8730
+ 54714841-351b-0410-a198-e36a94b762f5
+
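The children-before-parents processing described in this commit is an ordinary post-order traversal; a minimal sketch on a nested-tuple tree (not the actual apply_translations code):

```python
def postorder(node, visit):
    # Each node is (name, children); visit children first, then
    # the node itself, so parents see already-processed children.
    name, children = node
    for child in children:
        postorder(child, visit)
    visit(name)

order = []
tree = ("root", [("a", [("a1", [])]), ("b", [])])
postorder(tree, order.append)
```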
+ commit 7c435527f8dfcce309f75e06e856936c7d42010e
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 9 15:20:09 2008 +0000
+
+ Moved more ODF specific code out of xml_extract.py
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8729
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 134e118ee4fba73496a4b082ee58cbc85a1d6979
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 9 15:19:19 2008 +0000
+
+ Moved code from xliff2odf to xml_extract.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8728
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 6208bc6b4a69febf2a1de0389c0e4d72543eefd5
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Oct 7 07:42:05 2008 +0000
+
+ Generators in Python 2.4 don't have the "throw" method, which
+ makes
+ contextlib break. We just naively call next() to ask the
+ generator
+ to finish its work.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8691
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit a6bbb8a25aa6bd1fbf3cc7fc4d59d950f73cb168
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Oct 7 07:41:10 2008 +0000
+
+ Initial support for inline translatables.
+
+ Removed the placeables member from Translatable.
+ This is derived from self.source via _get_placeables.
+
+ Sprinkled code with references to things like
+ inline_placeable_table which contains info on which
+ tags are inline.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8690
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 91eaff16ebb084680bde2a1d1dc0d567267cc20b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 13:37:04 2008 +0000
+
+ Fixed a silly logic error (used a "not" where I should not
+ have).
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8663
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit a76ed0144a002290eb29d23933d01199650031e0
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 13:36:04 2008 +0000
+
+ Added type annotations and updated string constants to unicode,
+ so
+ that they wouldn't trigger type errors.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8662
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 224fd4250a32357c5b4af80877ee416fc6c73bf1
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 13:35:03 2008 +0000
+
+ Fixed an import which broke due to the integration of the type
+ checker with
+ the toolkit.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8661
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit ca0b38566c9900f768fdf125070226af3bd6d86b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 13:34:05 2008 +0000
+
+ Merged
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/typecheck@8656.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8660
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 1c52e8dfc06cd21597d358f4d4eca365ba7f5a55
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 08:43:22 2008 +0000
+
+ Merged in
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/typecheck@8651.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8654
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 2b0a87b7edf67dc1ca719dbdd63c1f0033917cc3
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 08:41:41 2008 +0000
+
+ Merged in r8648 from
+ https://translate.svn.sourceforge.net/svnroot/translate/src/trunk/translate/
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8653
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 91680dad131806541b925bdb6a54491685388e45
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Oct 3 08:40:12 2008 +0000
+
+ Applied Enrique's latest patch from
+ http://bugs.locamotion.org/attachment.cgi?id=211 for
+ placeables support.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8652
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 48265495f577912ca8b06ab9ad0c3e4a0c1d7756
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 2 12:50:14 2008 +0000
+
+ A wee bit of refactoring to make the code clearer :).
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8624
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 751c98ce090221b8e152c3eddcd08a8ceeb8cc2a
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 2 11:44:16 2008 +0000
+
+ Pilfered Python 2.5's contextlib, which simplifies the context
+ management quite nicely and should make it easier to upgrade our
+ code to Python 2.5+ in the future.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8620
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit b826557ac29b2449b6c278f754b087d9f2fb18c2
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 2 11:03:45 2008 +0000
+
+ Moved the context manager to a sensible place. Also fixed the
+ broken
+ import in xml_extract.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8619
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 38276c2cb128d0d659be4e1993d34ed5164e0cf2
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 2 08:23:02 2008 +0000
+
+ Merged in r8580 from
+ https://translate.svn.sourceforge.net/svnroot/translate/src/trunk/translate/
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8609
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit a21e3bffa5efb52f569d2e134c64aea098b5bdae
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Thu Oct 2 07:54:35 2008 +0000
+
+ Created scripts to call odf2xliff and xliff2odf.
+
+ Moved the ODF-XLIFF machinery to the storage package. Updated
+ import statements to reflect this.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8605
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 8d8f5ccb981b84cd89fc4ed6b9c8ce2551412a80
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Wed Oct 1 16:58:09 2008 +0000
+
+ Use deflate compression for the generated zip file. OpenOffice
+ expects this.
+
+ Also, the first child of the unit tree matches the root of the
+ XML DOM tree.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8597
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit b74578085639fe2ab66d71b66753693127b01c76
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Wed Oct 1 14:19:56 2008 +0000
+
+ This is a first pass at xliff2odf. It produces incorrect output
+ for translated XML files.
+
+ It also lacks comments. These are scheduled for the next commit.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8592
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 2636d3aede6dca604d8de83af762a3a111f5e3dc
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 29 17:22:03 2008 +0000
+
+ Added a test file for use in the ODF-XLIFF functional tests.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8538
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 4df78b8c7ecd2ec746d3d1a7c28aa88eb5ecf300
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 29 17:20:21 2008 +0000
+
+ Added the first utility for converting from ODF to XLIFF. It
+ follows a similar pattern to the other conversion utilities.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8535
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit c0f920d924894cdbe77c365127e3999d1588ad82
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:11:01 2008 +0000
+
+ A lot of comments.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8455
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 8889232349c475e4274bc4053f10693f1f3c808f
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:09:58 2008 +0000
+
+ replace_dom_text will take apart the translated text in a unit,
+ discover which parts are placeables and which are not, and modify
+ the text in the dom node and the tail text of the children of the
+ dom node.
+
+ In other words, this is what pulls a translation from a unit and
+ updates the DOM accordingly.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8453
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit bb2ea84bf876dbc7733d79ec6bef378f6fb764b5
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:08:55 2008 +0000
+
+ Cosmetic.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8452
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 6156c9a042a98778d946e04a0bed5c86c0420012
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:07:52 2008 +0000
+
+ Simplified the units test functions to use the convenience
+ function xml_extract.build_store.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8451
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 5d336df0322a1d67c4c2c02ec25e9e66ab287f5c
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:06:46 2008 +0000
+
+ Added a utility method to load odf files into stores.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8450
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 314a3d3e58f4b3e1d229abd65fd00fd584a6d21a
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:05:46 2008 +0000
+
+ Comments + neatification of apply_translations.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8449
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 20e993849a11cd257d9bdc8591d085e58e638dc6
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 14:04:41 2008 +0000
+
+ Some comments.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8448
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 112c3b0ac31124f57c59bb2ea479bdd6c6138cd8
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:23:26 2008 +0000
+
+ Test that a country code doesn't mix up the factory in the case
+ of special codes (python reserved words)
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8438
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 63429ed8255f0d3aa5e2969e6baeab9bb6de3e61
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:22:03 2008 +0000
+
+ adding version dependencies for the "author" attribute:
+ * svn: since v1.5
+ * bzr: since v0.91
+ * cvs: not supported
+ * darcs: at least since v1.09 according to changelog (this
+ version is in debian
+ stable - thus a check does not seem to be important)
+ * git: since v1.4.3 (this is way older than the package in debian
+ stable, thus
+ a check should not be necessary)
+ * hg: since v1.0
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8437
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit ea11747f9539e68614b55a90a39ec684dfca9ded
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:20:16 2008 +0000
+
+ Return an empty string if the unit is untranslated
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8436
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit c53b32a2313acaffa3554387d77a90ab09cc8846
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:18:54 2008 +0000
+
+ Return an empty string if the translation is empty
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8435
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit debe43a174e0155a5bdb3a4ccf0d1776aac181bd
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:17:30 2008 +0000
+
+ Manage empty context name
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8434
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit dcb657a3252b9a65f66a7ed04ee47c12d42e49a0
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:16:10 2008 +0000
+
+ Add .qph - Qt Phrase Book support based on ts2
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8433
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 2f03af4fafd03e68a9273fb5dbea87f8f7ee58ea
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:14:48 2008 +0000
+
+ Fixes to get plural entries working in virtaal:
+ * Add list of languages and plural forms, include reference to
+ source. This might be better placed
+ in lang/data.py but since it is hard coded for all of Qt this is
+ probably a better spot.
+ * Implement getsource: this allows us to force the source into a
+ multistring, in .ts the source
+ will always be a single entry never multiple as in PO. With this
+ the generic hasplural will work
+ * Add decorators for source and target, seems we get the parent
+ ones if not added
+ * Add nplural function to find the language and return the number
+ of plural forms
+ * Retrieve the nplural value if we are editing a .ts store.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8432
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 5a4efd3775fc38ab8faa2bb16b8f7c4d423c2c29
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:13:30 2008 +0000
+
+ Add format support for detecting .ts content
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8431
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 5a063bfdc6412d0cb97cbb63c95baeb9aec96f2e
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:12:13 2008 +0000
+
+ Add support for ts2 as ts
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8430
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 58c39f0bcfb7b83c0ac1365a8395aff8820830e3
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:11:03 2008 +0000
+
+ Initial support for new Qt linguist (.ts) files
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8429
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 2373f2b9dab3877616742155ac8e13ed2adcc12b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:09:59 2008 +0000
+
+ Update copyright dates
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8428
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit e76a60f9c4dd9107f5bb0bd0d1cb23d1d3530775
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:08:53 2008 +0000
+
+ Remove unused imports
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8427
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 42723f2014b398df159050478fa25ed6b50c660b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:07:54 2008 +0000
+
+ s/profile/cProfile/
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8426
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 5da8f4e1e5b96f959d7585eac6cc8449cd017f02
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:06:58 2008 +0000
+
+ Ensure that we return unicode strings when using
+ xpath("string()")
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8425
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 042e2165b9cb6c02a4756c4fead15b593695eb1b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:05:51 2008 +0000
+
+ Bring in something non-ASCII for better testing
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8424
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 45f75dc753885708ce4934d48c80984d242bb633
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:04:53 2008 +0000
+
+ Version 1.2.0-rc1
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8423
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit b53b87aba7ec354391b7fb597c0c5935443430ce
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:04:04 2008 +0000
+
+ quote.extractstr is called very often. The underlying function is
+ quite heavyweight and adds quite a bit of runtime overhead.
+
+ And yet, all we need is to find the left " (the opening quote) and
+ the right " in a string and to return the substring with the
+ quotes intact. The new little extractstr implementation in
+ pypo.py does this much faster.
+
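The fast path described here can be sketched roughly like this (a hypothetical simplification; the actual pypo.py implementation may differ):

```python
def extractstr(string):
    # Find the opening and closing double quotes and return the
    # span between them with the quotes intact.
    left = string.find('"')
    right = string.rfind('"')
    if left == -1 or right == left:
        return ""
    return string[left:right + 1]
```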
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8422
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit bfb4098f6561c69330ec21b6151c26411ad8fd84
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:03:04 2008 +0000
+
+ 1. Change the members 'keys' and 'values' in Record to
+ 'record_keys' and 'record_values', so as to avoid confusion
+ with the methods named 'keys' and 'values'.
+
+ 2. Added the callback compute_derived_values to Record, so that
+ it can compute values which are derived from its other values
+ and keep these up to date. The class FileTotals makes use of
+ this; the method FileTotals._compute_derived_values computes
+ values for "total", "totalsourcewords" and "review" from the
+ values retrieved from the database.
+
+ 3. Renamed OTHER to UNTRANSLATED.
+
+ 4. Renamed Record.db_repr to Record.as_string_for_db.
+
+ 5. Updated get_unit_stats to retrieve targetwords from the
+ database, since FileTotals now also requires this value.
+
+ 6. Changed the values stored inside the database.
+
+ 7. Bumped up the toolkit build number.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8421
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 8f6193a67174681b92b5acc59ed98c8193866426
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:01:52 2008 +0000
+
+ If we bump up the toolkit's version number, we delete the
+ current stats cache database, if it has an old version number.
+
+ This is not ideal; in the future, we'll probably name stats
+ cache database files differently as we change the database
+ layout, so that multiple versions of the toolkit software will
+ be able to coexist.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8420
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 6c99736e34916c367473a3fa2293dd043865e6b4
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 07:00:45 2008 +0000
+
+ Link to wiki in poterminology's docstring (and therefore also
+ --help text)
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8419
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit cda1ca7719142bef81f967b45a7e9205b2f7beb7
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 23 06:59:39 2008 +0000
+
+ Benchmark the creation of files
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8418
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit e4edca4056435de282154ce4d81686c38813a8a8
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Sep 19 10:53:52 2008 +0000
+
+ Added code that traverses a store and finds all elements in a
+ DOM tree which correspond to the units in the store. It then
+ calls the given function on a dom_node and unit which match
+ (presumably so that the text in the DOM node can be replaced
+ with the translated text in the unit).
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8368
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 5771dae8666d0715cf486798a2090a7f899a872b
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Sep 19 10:51:19 2008 +0000
+
+ Added a context manager which mimics the with statement found
+ in the newest Python versions. This ensures that finalization
+ code is executed, even if something goes wrong within a
+ with_block.
+
+ Thus, it adds a bit of transactional semantics to the code.
+
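A minimal sketch of such a helper (the name with_ and the call shape are hypothetical; the toolkit's actual context manager may differ). The point is that __exit__ always runs, giving the transaction-like guarantee described above:

```python
import sys

def with_(manager, body):
    # Call body(value) between __enter__ and __exit__, making sure
    # __exit__ runs even if body raises.
    value = manager.__enter__()
    try:
        result = body(value)
    except Exception:
        # Give the manager a chance to suppress the exception.
        if not manager.__exit__(*sys.exc_info()):
            raise
    else:
        manager.__exit__(None, None, None)
        return result
```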
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8367
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit fb04e1c760d25661771e21e9db7af0e776cd2654
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Sep 19 10:48:32 2008 +0000
+
+ Only add a translatable to a store if it contains content.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8366
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit d7dfc9ba696e1be123541b99004973def61a1ec7
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Fri Sep 19 10:46:27 2008 +0000
+
+ XLIFF only supports a single location source. We're already using
+ the location to store the XPath of the DOM node from which we got
+ the translatable element.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8365
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit d72b99a54f67962a88b0b04635984c2af918afaa
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Wed Sep 17 08:59:56 2008 +0000
+
+ Added a function to add a translatable unit to a store and to
+ fill in
+ its location (which is just the XPath of the corresponding
+ translatable).
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8354
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 4c43d05ce42abf34a2189b691cf58571b071bd6e
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Wed Sep 17 08:58:59 2008 +0000
+
+ Set the placeable ID to -1 for top-level translatable elements.
+ This is to be able to tell whether an element is top-level or
+ not.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8353
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 7d708a25afd01f8e82c535e55a46b02c74741c32
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Wed Sep 17 08:58:07 2008 +0000
+
+ The text in .text or .tail of a DOM node can be None. If that is
+ the case, we want empty strings.
+
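The None-to-empty-string normalisation amounts to something like this (a hypothetical helper name, shown with the standard library's ElementTree for illustration; the toolkit uses lxml, whose .text/.tail behave the same way):

```python
import xml.etree.ElementTree as ET

def node_text_and_tail(dom_node):
    # .text and .tail are None when absent; map them to empty
    # strings so string concatenation never fails.
    return dom_node.text or "", dom_node.tail or ""
```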
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8352
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 474ac24534726fca9e57dc14c5c50fcfeda4ba1d
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Wed Sep 17 08:57:16 2008 +0000
+
+ Fixed a typo in a member name.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8351
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 2889b2471213130200e21abbe008f60fdcbb5d72
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 16 15:13:21 2008 +0000
+
+ Resurrected XPathBreadcrumb and put it to work in the parse
+ state.
+
+ It replaces the explicit xpath stack. Instead we call start_tag
+ and end_tag. This class takes care of keeping track of the number
+ of occurrences of a given tag (so that it can give an index to a
+ tag).
+
+ We now also store the full XPath of a translatable as a string,
+ which we get from the breadcrumb.
+
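The breadcrumb idea can be sketched as follows (a hypothetical reconstruction: method names follow the commit message, but the indexing scheme shown here is zero-based and may not match the real XPathBreadcrumb):

```python
class XPathBreadcrumb:
    """Track the current XPath while walking an XML tree, giving
    each repeated tag at a level its own index."""

    def __init__(self):
        self._stack = []     # (tag, occurrence index) pairs
        self._counts = [{}]  # per-level tag occurrence counters

    def start_tag(self, tag):
        counts = self._counts[-1]
        counts[tag] = counts.get(tag, 0) + 1
        self._stack.append((tag, counts[tag] - 1))
        self._counts.append({})

    def end_tag(self):
        self._counts.pop()
        self._stack.pop()

    @property
    def xpath(self):
        return "/".join("%s[%d]" % (tag, i) for tag, i in self._stack)
```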
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8343
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 76b93e24536e84b3da3a339a604ecd456dbcf0b9
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 16 14:57:16 2008 +0000
+
+ In our code, we always pass a placeable_id and placeable_name. So
+ simplify this code.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8342
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 111f35bc4fb10903eb760b01285e2197baf9988c
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 16 14:56:21 2008 +0000
+
+ If an XML node contains placeables, then the text appearing after
+ the placeable is contained in the .tail member of the XML node
+ representing the root of the placeable. We need to add this text
+ to build the text for a translatable.
+
+ For the second hunk, we must pass the top of the placeable_name
+ stack, so that the translatable will know what it is called :).
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8341
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit a69109a7cb1f65d8d09252ccd6d5517c2d82c3ea
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 16 14:55:13 2008 +0000
+
+ Maintain a stack of placeable names in the parse state.
+
+ Recall that a placeable might be a whole nested XML structure. We
+ might want to use a tag somewhere in the middle of this structure
+ to name the placeable.
+
+ Thus, when we hit a tag, we check whether it appears in
+ the placeable_table. If so, we push a name onto the placeable
+ stack.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8340
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 05c53722817a7cf6fcf58a4141c8875d4a1df21f
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 16 14:53:58 2008 +0000
+
+ Placeables should also be indexed by fully qualified XML tags.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8339
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit fc445b267f45309669dde89f5fe319e8afaed9c2
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Tue Sep 16 12:40:39 2008 +0000
+
+ An attempt to write more imperative code, since Python can be
+ quite
+ hostile to functional style programming sometimes.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8338
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit d67fbd553b7f76d4ffe013c7f4a3be2d8cbbfe71
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 15 14:03:26 2008 +0000
+
+ This file is a hangover from a previous effort to integrate our
+ software directly with itools for XML extraction. I am keeping it
+ as a reference.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8322
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit abb342dcbabbc6e543ce84cbdb22d08c0a082712
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 15 14:01:56 2008 +0000
+
+ Added a test file to test the xml_extract code. The test file
+ contains an embedded XML file which comes from an OpenOffice.org
+ file. This file is fed to the XML extraction code along with the
+ ODF-specific XML namespace information and placeable information.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8321
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit ed34166d946ec32ef7f69bb918042984a46e8f6c
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 15 14:00:25 2008 +0000
+
+ This is some fairly dense code to extract XML from an arbitrary
+ document, given tables of
+ 1. XML namespaces which should be converted, and
+ 2. XML namespaces which should appear as placeables within other
+ translatable hunks.
+
+ The code works by searching through the DOM tree (the function
+ apply) until it hits a translatable tag. Then it calls
+ process_translatable_tag.
+
+ process_translatable_tag sees whether there are any
+ sub-translatable tags in the current translatable tag. An
+ example of this is footnotes in OpenOffice.org documents. The
+ XML code for a footnote appears within the paragraph tag with
+ which the footnote is associated. Or in XML:
+
+ <text:p text:style-name="Standard">First. This should
+ <text:note text:id="ftn0" text:note-class="footnote">
+ <text:note-citation>1</text:note-citation>
+ <text:note-body>
+ <text:p text:style-name="Footnote">Footnote 1</text:p>
+ </text:note-body>
+ </text:note>not be segmented. Even with etc. and so.
+ </text:p>
+
+ We need to treat tags like <text:note> as placeables, which
+ means that the above should be presented to the translator as
+ something like:
+
+ First. This should&footnote_1; not be segmented. Even with etc.
+ and so.
+
+ Note that the entire XML block related to the footnote is
+ represented
+ by:
+
+ &footnote_1;
+
+ Thus, process_translatable_tag is responsible for finding any
+ placeables in the current translatable_tag. If there are any
+ placeables, it should create placeable tags for them (such as
+ &footnote_1;) and construct a translatable string containing
+ these placeable tags. Then it proceeds to deal with the children
+ (that is, the placeables) by invoking apply on them.
+
+ Note that the current implementation uses the Record type, which
+ provides
+ immutable records (for stateless programming). Because it is
+ immutable,
+ every modification creates a new record.
+
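The marker-substitution step described above can be sketched like this (a heavily simplified, hypothetical version: the real code works on a Record-based parse state and lxml nodes, and its naming scheme for placeables may differ):

```python
def process_translatable_tag(node, placeable_names):
    # Build the translatable text for 'node', replacing each child
    # whose tag is a known placeable with an &name_N; marker and
    # keeping the .tail text that follows each child.
    parts = [node.text or ""]
    for index, child in enumerate(node):
        tag = child.tag.split("}")[-1]  # strip any XML namespace
        if tag in placeable_names:
            parts.append("&%s_%d;" % (placeable_names[tag], index + 1))
        parts.append(child.tail or "")
    return "".join(parts)
```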
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8320
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 38e8b341e82e738169002c03c01212e8ea0d6fbe
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 15 13:59:05 2008 +0000
+
+ Immutable record type from
+ http://www.valuedlessons.com/2007/12/immutable-data-in-python-record-or.html
+ (author's name not found).
+
+ This is useful for stateless code.
+
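The flavour of such an immutable record can be sketched briefly (a hypothetical miniature, not the implementation from the linked blog post): every "modification" returns a new record, so shared state is never mutated in place.

```python
class Record(tuple):
    """A tiny immutable record: field access by name, plus an update
    method that returns a fresh record instead of mutating."""
    _fields = ()

    def __new__(cls, *values):
        assert len(values) == len(cls._fields)
        return tuple.__new__(cls, values)

    def __getattr__(self, name):
        try:
            return self[self._fields.index(name)]
        except ValueError:
            raise AttributeError(name)

    def replace(self, **updates):
        # Build a new record, taking updated fields from 'updates'
        # and everything else from this record.
        values = [updates.get(f, getattr(self, f)) for f in self._fields]
        return type(self)(*values)


class Point(Record):
    _fields = ("x", "y")
```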
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8319
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 496cf7f386a9d241c68aaab1b536272bda1b221c
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 15 13:57:38 2008 +0000
+
+ odf_shared.py contains the information needed by the XML parser
+ to extract translatables and placeables from ODF documents.
+
+ The information is derived from itools, as indicated in the
+ comments.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8318
+ 54714841-351b-0410-a198-e36a94b762f5
+
+ commit 898f5b2af87c52e66b544c0803ff20f2d84f64da
+ Author: winterstream
+ <winterstream at 54714841-351b-0410-a198-e36a94b762f5>
+ Date: Mon Sep 15 12:15:41 2008 +0000
+
+ Finally branched the toolkit for the ODF-XLIFF stuff.
+
+ git-svn-id:
+ https://translate.svn.sourceforge.net/svnroot/translate/src/branches/translate/odf-xliff-first-try@8313
+ 54714841-351b-0410-a198-e36a94b762f5
+
+2008-10-08 19:40 dwaynebailey
+
+ * storage/csvl10n.py, storage/qm.py: Add filetype names for .qm and
+ our brand of CSV.
+
+2008-10-07 22:47 dwaynebailey
+
+ * convert/ical2po.py, convert/ini2po.py, convert/rc2po.py: Revert
+ r8712: broke the conversion even though it isn't used. Refactor
+ at
+ a later stage.
+
+2008-10-07 22:26 dwaynebailey
+
+ * convert/ical2po.py, convert/ini2po.py: Remove unused xliff import
+
+2008-10-07 22:21 dwaynebailey
+
+ * convert/ical2po.py, convert/ini2po.py, convert/rc2po.py:
+ commenttype is never used
+
+2008-10-07 21:31 dwaynebailey
+
+ * convert/rc2po.py: CAPITALISE a constant
+
+2008-10-07 21:19 dwaynebailey
+
+ * convert/rc2po.py: Align terminology
+
+2008-10-07 20:36 dwaynebailey
+
+ * convert/ical2po.py, convert/ini2po.py: Align naming
+
+2008-10-07 16:31 dwaynebailey
+
+ * convert/accesskey.py, convert/dtd2po.py, convert/po2dtd.py,
+ convert/test_accesskey.py: Change accesskey function names to
+ something more readable:
+ get_label_and_accesskey => extract
+ combine_label_accesskey => combine
+
+2008-10-07 16:28 dwaynebailey
+
+ * convert/accesskey.py, convert/dtd2po.py,
+ convert/test_accesskey.py: Move the accesskey+label combining
    functionality out of dtd2po and into
+ the generic accesskey module. Adjust dtd2po to use this function.
+ Include tests for the combining.
+
+2008-10-07 16:22 dwaynebailey
+
+ * convert/accesskey.py, convert/po2dtd.py,
    convert/test_accesskey.py: Remove the getlabel and getaccesskey
+ functions. We'd rather use the combined function.
    po2dtd.py is adapted with some unused variables; these will
+ probably disappear as we
+ refactor that code.
+
+2008-10-07 16:19 dwaynebailey
+
+ * convert/accesskey.py, convert/test_accesskey.py: Deal with the
+ empty string
+
+2008-10-07 16:18 dwaynebailey
+
+ * convert/accesskey.py, convert/test_accesskey.py: Make everything
+ Unicode. There are some asserts which should probably be removed
+ in the future.
+
+2008-10-07 16:16 dwaynebailey
+
+ * convert/accesskey.py: Merge functions from getlabel and
+ getaccesskey into get_label_and_accesskey. They were mostly
+ identical anyway.
+
+2008-10-07 16:14 dwaynebailey
+
+ * convert/accesskey.py, convert/test_accesskey.py: Create a
    combined function that returns both the label and accesskey
+
+2008-10-07 16:12 dwaynebailey
+
+ * convert/accesskey.py, convert/test_accesskey.py: Allow the
    accesskey to be specified; also set the default to '&'
+
+2008-10-07 16:10 dwaynebailey
+
+ * convert/accesskey.py, convert/po2dtd.py,
+ convert/test_accesskey.py: Move getlabel and getaccesskey
    functions out into their own module. Provide tests. Adjust
+ po2dtd.py to use the new module.
+
+2008-10-07 16:04 dwaynebailey
+
+ * convert/oo2po.py, convert/oo2xliff.py, convert/po2oo.py,
+ convert/xliff2oo.py, storage/oo.py, storage/test_oo.py: Move the
+ 4 makekey functions into the storage class. Create a test to
+ validate that it works.
+
+2008-10-07 15:36 dupuy
+
+ * .cvsignore: ignore generated files
+
+2008-10-07 12:23 friedelwolff
+
+ * convert/pot2po.py: Massive renaming to clarify store vs file.
+ Also got rid of several PO references.
+
+2008-10-07 12:00 friedelwolff
+
+ * filters/checks.py: Use unicode literals - this provides a small
+ speedup, and is just good in general
+
+2008-10-06 13:03 friedelwolff
+
+ * ChangeLog: Update ChangeLog before the release of 1.2.0
+
+2008-10-06 12:58 friedelwolff
+
+ * __version__.py: Version 1.2.0
+
2008-10-06 12:37 friedelwolff
* CREDITS: Credit Miklos for work on version control
Modified: translate-toolkit/branches/upstream/current/translate/README
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/README?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/README (original)
+++ translate-toolkit/branches/upstream/current/translate/README Sun Feb 8 16:49:31 2009
@@ -76,9 +76,8 @@
reporting it as a bug.
The package lxml is needed for XML file processing. Version 1.3.4 and upwards
-should work in most cases, although the odf2xliff and xliff2odf tools
-require at least version 2.0.0 to function correctly.
-from http://codespeak.net/lxml/
+should work, but lxml 2.1.0 or later is recommended.
+http://codespeak.net/lxml/
The package lxml has dependencies on libxml2 and libxslt. Please check the lxml
site for the recommended versions of these libraries.
@@ -91,11 +90,6 @@
use libgettextpo from the gettext-tools package (it might have a slightly
different name on your distribution). This can greatly speed up access to PO
files, but has not yet been tested as extensively. Feedback is most welcome.
-
-When the environment variable PYTHONTYPECHECK is defined, the toolkit will
-enable dynamic type checking for certain functions in the toolkit (mostly
-those belonging the ODF-XLIFF code). This adds quite a bit of overhead and
-is only of use to programmers.
Psyco can help to speed up several of the programs in the toolkit. It is
optional, but highly recommended.
@@ -211,8 +205,6 @@
csv2tbx - Create TBX (TermBase eXchange) files from Comma Separated Value (CSV) files
ini2po - convert .ini files to PO
ical2po - Convert iCalendar files (*.ics) to PO
-odf2xliff - Extract translatable strings from an ODF file into an XLIFF file
-xliff2odf - Combine an XLIFF file with an ODF template to generate a translated ODF file
* Tools (Quality Assurance)
pofilter - run any of the 40+ checks on your PO files
Modified: translate-toolkit/branches/upstream/current/translate/__init__.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/__init__.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/__init__.py (original)
+++ translate-toolkit/branches/upstream/current/translate/__init__.py Sun Feb 8 16:49:31 2009
@@ -1,29 +1,38 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-#
-# This file is part of translate.
#
-# translate is free software; you can redistribute it and/or modify
+# Copyright 2008-2009 Zuza Software Foundation
+#
+# This file is part of the Translate Toolkit.
+#
+# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
-#
-# translate is distributed in the hope that it will be useful,
+#
+# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
-# along with translate; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
"""The Translate Toolkit is a Python package that assists in localization of software.
See U{http://translate.sourceforge.net/wiki/toolkit/index} or U{http://translate.org.za} for more information.
@organization: Zuza Software Foundation
-@copyright: 2002-2008 Zuza Software Foundation
+@copyright: 2002-2009 Zuza Software Foundation
@license: U{GPL <http://www.fsf.org/licensing/licenses/gpl.html>}
+@group Localization and Localizable File Formats: storage
+@group Format Converters: convert
+@group Localisation File Checker: filters
+@group Localization File Manipulation Tools: tools
+@group Language Specifications: lang
+@group Search and String Matching: search
+@group Services: services
+@group Miscellaneous: misc source_tree_infrastructure __version__
+
"""
-
Modified: translate-toolkit/branches/upstream/current/translate/__version__.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/__version__.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/__version__.py (original)
+++ translate-toolkit/branches/upstream/current/translate/__version__.py Sun Feb 8 16:49:31 2009
@@ -1,5 +1,25 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-"""this file contains the version of translate"""
-ver = "1.2.1"
+#
+# Copyright 2008-2009 Zuza Software Foundation
+#
+# This file is part of the Translate Toolkit.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""This file contains the version of the Translate Toolkit."""
+
build = 12001
+sver = "1.3.0"
+ver = (1, 3, 0)
Modified: translate-toolkit/branches/upstream/current/translate/convert/__init__.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/__init__.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/__init__.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/__init__.py Sun Feb 8 16:49:31 2009
@@ -20,5 +20,12 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""translate.convert is part of the translate package
-It contains code to convert between different storage formats for localizations"""
+It contains code to convert between different storage formats for localizations
+@group XLIFF: *xliff*
+@group Bilingual: pot2po po2tmx oo2po po2oo csv2tbx *wordfast* *ts*
+@group Monolingual: *prop* *dtd* csv2po po2csv *html* *ical* *ini* odf2po po2odf *rc* *txt* moz2po po2moz *php*
+@group Support: accesskey convert
+@group Other: poreplace
+"""
+
Added: translate-toolkit/branches/upstream/current/translate/convert/accesskey.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/accesskey.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/accesskey.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/accesskey.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,109 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2002-2008 Zuza Software Foundation
+#
+# This file is part of The Translate Toolkit.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""functions used to manipulate access keys in strings"""
+
+DEFAULT_ACCESSKEY_MARKER = u"&"
+
+def extract(string, accesskey_marker=DEFAULT_ACCESSKEY_MARKER):
+ """Extract the label and accesskey form a label+accesskey string
+
+ The function will also try to ignore &entities; which would obviously not
+ contain accesskeys.
+
+ @type string: Unicode
+ @param string: A string that might contain a label with accesskey marker
+ @type accesskey_marker: Char
+ @param accesskey_marker: The character that is used to prefix an access key
+ """
+ assert isinstance(string, unicode)
+ assert isinstance(accesskey_marker, unicode)
+ assert len(accesskey_marker) == 1
+ if string == u"":
+ return u"", u""
+ accesskey = u""
+ label = string
+ marker_pos = 0
+ while marker_pos >= 0:
+ marker_pos = string.find(accesskey_marker, marker_pos)
+ if marker_pos != -1:
+ marker_pos += 1
+ semicolon_pos = string.find(";", marker_pos)
+ if semicolon_pos != -1:
+ if string[marker_pos:semicolon_pos].isalnum():
+ continue
+ label = string[:marker_pos-1] + string[marker_pos:]
+ accesskey = string[marker_pos]
+ break
+ return label, accesskey
+
+def combine(label, accesskey,
+ accesskey_marker=DEFAULT_ACCESSKEY_MARKER):
+ """Combine a label and and accesskey to form a label+accesskey string
+
+ We place an accesskey marker before the accesskey in the label and this create a string
+ with the two combined e.g. "File" + "F" = "&File"
+
+ @type label: unicode
+ @param label: a label
+ @type accesskey: unicode char
+ @param accesskey: The accesskey
+ @rtype: unicode or None
+ @return: label+accesskey string or None if uncombineable
+ """
+ assert isinstance(label, unicode)
+ assert isinstance(accesskey, unicode)
+ if len(accesskey) == 0:
+ return None
+ searchpos = 0
+ accesskeypos = -1
+ in_entity = False
+ accesskeyaltcasepos = -1
+ while (accesskeypos < 0) and searchpos < len(label):
+ searchchar = label[searchpos]
+ if searchchar == '&':
+ in_entity = True
+ elif searchchar == ';':
+ in_entity = False
+ else:
+ if not in_entity:
+ if searchchar == accesskey.upper():
+ # always prefer uppercase
+ accesskeypos = searchpos
+ if searchchar == accesskey.lower():
+ # take lower case otherwise...
+ if accesskeyaltcasepos == -1:
+ # only want to remember first altcasepos
+ accesskeyaltcasepos = searchpos
+ # note: we keep on looping through in hope
+ # of exact match
+ searchpos += 1
+ # if we didn't find an exact case match, use an alternate one if available
+ if accesskeypos == -1:
+ accesskeypos = accesskeyaltcasepos
+ # now we want to handle whatever we found...
+ if accesskeypos >= 0:
+ string = label[:accesskeypos] + accesskey_marker + label[accesskeypos:]
+ string = string.encode("UTF-8", "replace")
+ return string
+ else:
+ # can't currently mix accesskey if it's not in label
+ return None
Modified: translate-toolkit/branches/upstream/current/translate/convert/convert.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/convert.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/convert.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/convert.py Sun Feb 8 16:49:31 2009
@@ -55,8 +55,8 @@
def add_duplicates_option(self, default="msgctxt"):
"""adds an option to say what to do with duplicate strings"""
self.add_option("", "--duplicates", dest="duplicatestyle", default=default,
- type="choice", choices=["msgid_comment", "msgctxt", "merge", "keep", "msgid_comment_all"],
- help="what to do with duplicate strings (identical source text): merge, msgid_comment, msgctxt, keep, msgid_comment_all (default: '%s')" % default, metavar="DUPLICATESTYLE")
+ type="choice", choices=["msgctxt", "merge"],
+ help="what to do with duplicate strings (identical source text): merge, msgctxt (default: '%s')" % default, metavar="DUPLICATESTYLE")
self.passthrough.append("duplicatestyle")
def add_multifile_option(self, default="single"):
Modified: translate-toolkit/branches/upstream/current/translate/convert/csv2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/csv2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/csv2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/csv2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import csv2po
if __name__ == '__main__':
- csv2po.main()
+ csv2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/dtd2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/dtd2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/dtd2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/dtd2po.py Sun Feb 8 16:49:31 2009
@@ -27,6 +27,7 @@
from translate.storage import po
from translate.storage import dtd
from translate.misc import quote
+from translate.convert import accesskey as accesskeyfn
class dtd2po:
def __init__(self, blankmsgstr=False, duplicatestyle="msgctxt"):
@@ -124,10 +125,6 @@
else:
return thepo
- # labelsuffixes and accesskeysuffixes are combined to accelerator notation
- labelsuffixes = (".label", ".title")
- accesskeysuffixes = (".accesskey", ".accessKey", ".akey")
-
def convertmixedunit(self, labeldtd, accesskeydtd):
labelpo = self.convertunit(labeldtd)
accesskeypo = self.convertunit(accesskeydtd)
@@ -147,41 +144,8 @@
# redo the strings from original dtd...
label = dtd.unquotefromdtd(labeldtd.definition).decode('UTF-8')
accesskey = dtd.unquotefromdtd(accesskeydtd.definition).decode('UTF-8')
- if len(accesskey) == 0:
- return None
- # try and put the & in front of the accesskey in the label...
- # make sure to avoid muddling up &-type strings
- searchpos = 0
- accesskeypos = -1
- inentity = 0
- accesskeyaltcasepos = -1
- while (accesskeypos < 0) and searchpos < len(label):
- searchchar = label[searchpos]
- if searchchar == '&':
- inentity = 1
- elif searchchar == ';':
- inentity = 0
- else:
- if not inentity:
- if searchchar == accesskey.upper():
- # always prefer uppercase
- accesskeypos = searchpos
- if searchchar == accesskey.lower():
- # take lower case otherwise...
- if accesskeyaltcasepos == -1:
- # only want to remember first altcasepos
- accesskeyaltcasepos = searchpos
- # note: we keep on looping through in hope of exact match
- searchpos += 1
- # if we didn't find an exact case match, use an alternate one if available
- if accesskeypos == -1:
- accesskeypos = accesskeyaltcasepos
- # now we want to handle whatever we found...
- if accesskeypos >= 0:
- label = label[:accesskeypos] + '&' + label[accesskeypos:]
- label = label.encode("UTF-8", "replace")
- else:
- # can't currently mix accesskey if it's not in label
+ label = accesskeyfn.combine(label, accesskey)
+ if label is None:
return None
thepo.source = label
thepo.target = ""
@@ -191,12 +155,12 @@
"""creates self.mixedentities from the dtd file..."""
self.mixedentities = {} # those entities which have a .label/.title and .accesskey combined
for entity in thedtdfile.index.keys():
- for labelsuffix in self.labelsuffixes:
+ for labelsuffix in dtd.labelsuffixes:
if entity.endswith(labelsuffix):
entitybase = entity[:entity.rfind(labelsuffix)]
# see if there is a matching accesskey in this line, making this a
# mixed entity
- for akeytype in self.accesskeysuffixes:
+ for akeytype in dtd.accesskeysuffixes:
if thedtdfile.index.has_key(entitybase + akeytype):
# add both versions to the list of mixed entities
self.mixedentities[entity] = {}
@@ -217,20 +181,20 @@
# depending on what we come across first, work out the label and the accesskey
labeldtd, accesskeydtd = None, None
labelentity, accesskeyentity = None, None
- for labelsuffix in self.labelsuffixes:
+ for labelsuffix in dtd.labelsuffixes:
if thedtd.entity.endswith(labelsuffix):
entitybase = thedtd.entity[:thedtd.entity.rfind(labelsuffix)]
- for akeytype in self.accesskeysuffixes:
+ for akeytype in dtd.accesskeysuffixes:
if thedtdfile.index.has_key(entitybase + akeytype):
labelentity, labeldtd = thedtd.entity, thedtd
accesskeyentity = labelentity[:labelentity.rfind(labelsuffix)]+akeytype
accesskeydtd = thedtdfile.index[accesskeyentity]
break
else:
- for akeytype in self.accesskeysuffixes:
+ for akeytype in dtd.accesskeysuffixes:
if thedtd.entity.endswith(akeytype):
accesskeyentity, accesskeydtd = thedtd.entity, thedtd
- for labelsuffix in self.labelsuffixes:
+ for labelsuffix in dtd.labelsuffixes:
labelentity = accesskeyentity[:accesskeyentity.rfind(akeytype)]+labelsuffix
if thedtdfile.index.has_key(labelentity):
labeldtd = thedtdfile.index[labelentity]
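
The suffix-pairing rule in the hunks above (now driven by dtd.labelsuffixes and dtd.accesskeysuffixes instead of per-class copies) can be sketched as follows. The tuple values mirror the ones the old dtd2po attributes defined; `find_mixed_entities` is a hypothetical helper name, not toolkit API:

```python
# Suffix tables as the old dtd2po class attributes defined them;
# this commit moves them into translate.storage.dtd.
labelsuffixes = (".label", ".title")
accesskeysuffixes = (".accesskey", ".accessKey", ".akey")

def find_mixed_entities(entity_names):
    """Pair each label/title entity with its matching accesskey entity.

    Returns {label_entity: accesskey_entity} for every pair sharing a
    common base name, e.g. fileSave.label / fileSave.accesskey.
    """
    index = set(entity_names)
    mixed = {}
    for entity in entity_names:
        for labelsuffix in labelsuffixes:
            if entity.endswith(labelsuffix):
                base = entity[:entity.rfind(labelsuffix)]
                for akeysuffix in accesskeysuffixes:
                    if base + akeysuffix in index:
                        mixed[entity] = base + akeysuffix
                        break
    return mixed
```

A label entity with no sibling accesskey (or vice versa) is simply left out of the map, matching how dtd2po only treats entities as "mixed" when both halves exist.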
Modified: translate-toolkit/branches/upstream/current/translate/convert/html2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/html2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/html2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/html2po Sun Feb 8 16:49:31 2009
@@ -23,8 +23,7 @@
You can merge translated strings back using po2html"""
from translate.convert import html2po
-from translate.convert import convert
if __name__ == '__main__':
- html2po.main()
+ html2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/html2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/html2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/html2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/html2po.py Sun Feb 8 16:49:31 2009
@@ -30,7 +30,7 @@
from translate.storage import html
class html2po:
- def convertfile(self, inputfile, filename, includeheader, includeuntagged=False, duplicatestyle="msgid_comment"):
+ def convertfile(self, inputfile, filename, includeheader, includeuntagged=False, duplicatestyle="msgctxt"):
"""converts a html file to .po format"""
thetargetfile = po.pofile()
htmlparser = html.htmlfile(includeuntaggeddata=includeuntagged, inputfile=inputfile)
Modified: translate-toolkit/branches/upstream/current/translate/convert/ical2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/ical2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/ical2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/ical2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import ical2po
if __name__ == '__main__':
- ical2po.main()
+ ical2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/ical2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/ical2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/ical2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/ical2po.py Sun Feb 8 16:49:31 2009
@@ -23,75 +23,74 @@
import sys
from translate.storage import po
-from translate.storage import xliff
from translate.storage import ical
class ical2po:
"""convert a iCal file to a .po file for handling the translation..."""
- def convertstore(self, theinifile, duplicatestyle="msgctxt"):
+ def convert_store(self, input_store, duplicatestyle="msgctxt"):
"""converts a iCal file to a .po file..."""
- thetargetfile = po.pofile()
- targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
- targetheader.addnote("extracted from %s" % theinifile.filename, "developer")
- thetargetfile.addunit(targetheader)
- for iniunit in theinifile.units:
- pounit = self.convertunit(iniunit, "developer")
- if pounit is not None:
- thetargetfile.addunit(pounit)
- thetargetfile.removeduplicates(duplicatestyle)
- return thetargetfile
+ output_store = po.pofile()
+ output_header = output_store.makeheader(charset="UTF-8", encoding="8bit")
+ output_header.addnote("extracted from %s" % input_store.filename, "developer")
+ output_store.addunit(output_header)
+ for input_unit in input_store.units:
+ output_unit = self.convert_unit(input_unit, "developer")
+ if output_unit is not None:
+ output_store.addunit(output_unit)
+ output_store.removeduplicates(duplicatestyle)
+ return output_store
- def mergestore(self, originifile, translatedinifile, blankmsgstr=False, duplicatestyle="msgctxt"):
+ def merge_store(self, template_store, input_store, blankmsgstr=False, duplicatestyle="msgctxt"):
"""converts two iCal files to a .po file..."""
- thetargetfile = po.pofile()
- targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
- targetheader.addnote("extracted from %s, %s" % (originifile.filename, translatedinifile.filename), "developer")
- thetargetfile.addunit(targetheader)
- translatedinifile.makeindex()
- for origini in originifile.units:
- origpo = self.convertunit(origini, "developer")
+ output_store = po.pofile()
+ output_header = output_store.makeheader(charset="UTF-8", encoding="8bit")
+ output_header.addnote("extracted from %s, %s" % (template_store.filename, input_store.filename), "developer")
+ output_store.addunit(output_header)
+ input_store.makeindex()
+ for template_unit in template_store.units:
+ origpo = self.convert_unit(template_unit, "developer")
# try and find a translation of the same name...
- origininame = "".join(origini.getlocations())
- if origininame in translatedinifile.locationindex:
- translatedini = translatedinifile.locationindex[origininame]
- translatedpo = self.convertunit(translatedini, "translator")
+ template_unit_name = "".join(template_unit.getlocations())
+ if template_unit_name in input_store.locationindex:
+ translatedini = input_store.locationindex[template_unit_name]
+ translatedpo = self.convert_unit(translatedini, "translator")
else:
translatedpo = None
# if we have a valid po unit, get the translation and add it...
if origpo is not None:
if translatedpo is not None and not blankmsgstr:
origpo.target = translatedpo.source
- thetargetfile.addunit(origpo)
+ output_store.addunit(origpo)
elif translatedpo is not None:
print >> sys.stderr, "error converting original iCal definition %s" % origini.name
- thetargetfile.removeduplicates(duplicatestyle)
- return thetargetfile
+ output_store.removeduplicates(duplicatestyle)
+ return output_store
- def convertunit(self, inputunit, commenttype):
+ def convert_unit(self, input_unit, commenttype):
"""Converts a .ini unit to a .po unit. Returns None if empty
or not for translation."""
- if inputunit is None:
+ if input_unit is None:
return None
# escape unicode
- pounit = po.pounit(encoding="UTF-8")
- pounit.addlocation("".join(inputunit.getlocations()))
- pounit.addnote(inputunit.getnotes("developer"), "developer")
- pounit.source = inputunit.source
- pounit.target = ""
- return pounit
+ output_unit = po.pounit(encoding="UTF-8")
+ output_unit.addlocation("".join(input_unit.getlocations()))
+ output_unit.addnote(input_unit.getnotes("developer"), "developer")
+ output_unit.source = input_unit.source
+ output_unit.target = ""
+ return output_unit
-def convertical(inputfile, outputfile, templatefile, pot=False, duplicatestyle="msgctxt"):
- """reads in inputfile using iCal, converts using ical2po, writes to outputfile"""
- inputstore = ical.icalfile(inputfile)
+def convertical(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt"):
+ """Reads in L{input_file} using iCal, converts using L{ical2po}, writes to L{output_file}"""
+ input_store = ical.icalfile(input_file)
convertor = ical2po()
- if templatefile is None:
- outputstore = convertor.convertstore(inputstore, duplicatestyle=duplicatestyle)
+ if template_file is None:
+ output_store = convertor.convert_store(input_store, duplicatestyle=duplicatestyle)
else:
- templatestore = ical.icalfile(templatefile)
- outputstore = convertor.mergestore(templatestore, inputstore, blankmsgstr=pot, duplicatestyle=duplicatestyle)
- if outputstore.isempty():
+ template_store = ical.icalfile(template_file)
+ output_store = convertor.merge_store(template_store, input_store, blankmsgstr=pot, duplicatestyle=duplicatestyle)
+ if output_store.isempty():
return 0
- outputfile.write(str(outputstore))
+ output_file.write(str(output_store))
return 1
def main(argv=None):
Modified: translate-toolkit/branches/upstream/current/translate/convert/ini2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/ini2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/ini2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/ini2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import ini2po
if __name__ == '__main__':
- ini2po.main()
+ ini2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/ini2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/ini2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/ini2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/ini2po.py Sun Feb 8 16:49:31 2009
@@ -23,79 +23,84 @@
import sys
from translate.storage import po
-from translate.storage import xliff
from translate.storage import ini
class ini2po:
"""convert a .ini file to a .po file for handling the translation..."""
- def convertstore(self, theinifile, duplicatestyle="msgctxt"):
+ def convert_store(self, input_store, duplicatestyle="msgctxt"):
"""converts a .ini file to a .po file..."""
- thetargetfile = po.pofile()
- targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
- targetheader.addnote("extracted from %s" % theinifile.filename, "developer")
- thetargetfile.addunit(targetheader)
- for iniunit in theinifile.units:
- pounit = self.convertunit(iniunit, "developer")
- if pounit is not None:
- thetargetfile.addunit(pounit)
- thetargetfile.removeduplicates(duplicatestyle)
- return thetargetfile
+ output_store = po.pofile()
+ output_header = output_store.makeheader(charset="UTF-8", encoding="8bit")
+ output_header.addnote("extracted from %s" % input_store.filename, "developer")
+ output_store.addunit(output_header)
+ for input_unit in input_store.units:
+ output_unit = self.convert_unit(input_unit, "developer")
+ if output_unit is not None:
+ output_store.addunit(output_unit)
+ output_store.removeduplicates(duplicatestyle)
+ return output_store
- def mergestore(self, originifile, translatedinifile, blankmsgstr=False, duplicatestyle="msgctxt"):
+ def merge_store(self, template_store, input_store, blankmsgstr=False, duplicatestyle="msgctxt"):
"""converts two .ini files to a .po file..."""
- thetargetfile = po.pofile()
- targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
- targetheader.addnote("extracted from %s, %s" % (originifile.filename, translatedinifile.filename), "developer")
- thetargetfile.addunit(targetheader)
- translatedinifile.makeindex()
- for origini in originifile.units:
- origpo = self.convertunit(origini, "developer")
+ output_store = po.pofile()
+ output_header = output_store.makeheader(charset="UTF-8", encoding="8bit")
+ output_header.addnote("extracted from %s, %s" % (template_store.filename, input_store.filename), "developer")
+ output_store.addunit(output_header)
+ input_store.makeindex()
+ for template_unit in template_store.units:
+ origpo = self.convert_unit(template_unit, "developer")
# try and find a translation of the same name...
- origininame = "".join(origini.getlocations())
- if origininame in translatedinifile.locationindex:
- translatedini = translatedinifile.locationindex[origininame]
- translatedpo = self.convertunit(translatedini, "translator")
+ template_unit_name = "".join(template_unit.getlocations())
+ if template_unit_name in input_store.locationindex:
+ translatedini = input_store.locationindex[template_unit_name]
+ translatedpo = self.convert_unit(translatedini, "translator")
else:
translatedpo = None
# if we have a valid po unit, get the translation and add it...
if origpo is not None:
if translatedpo is not None and not blankmsgstr:
origpo.target = translatedpo.source
- thetargetfile.addunit(origpo)
+ output_store.addunit(origpo)
elif translatedpo is not None:
print >> sys.stderr, "error converting original ini definition %s" % origini.name
- thetargetfile.removeduplicates(duplicatestyle)
- return thetargetfile
+ output_store.removeduplicates(duplicatestyle)
+ return output_store
- def convertunit(self, iniunit, commenttype):
+ def convert_unit(self, input_unit, commenttype):
"""Converts a .ini unit to a .po unit. Returns None if empty
or not for translation."""
- if iniunit is None:
+ if input_unit is None:
return None
# escape unicode
- pounit = po.pounit(encoding="UTF-8")
- pounit.addlocation("".join(iniunit.getlocations()))
- pounit.source = iniunit.source
- pounit.target = ""
- return pounit
+ output_unit = po.pounit(encoding="UTF-8")
+ output_unit.addlocation("".join(input_unit.getlocations()))
+ output_unit.source = input_unit.source
+ output_unit.target = ""
+ return output_unit
-def convertini(inputfile, outputfile, templatefile, pot=False, duplicatestyle="msgctxt"):
- """reads in inputfile using ini, converts using ini2po, writes to outputfile"""
- inputstore = ini.inifile(inputfile)
+def convertini(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt", dialect="default"):
+ """Reads in L{input_file} using ini, converts using L{ini2po}, writes to L{output_file}"""
+ input_store = ini.inifile(input_file, dialect=dialect)
convertor = ini2po()
- if templatefile is None:
- outputstore = convertor.convertstore(inputstore, duplicatestyle=duplicatestyle)
+ if template_file is None:
+ output_store = convertor.convert_store(input_store, duplicatestyle=duplicatestyle)
else:
- templatestore = ini.inifile(templatefile)
- outputstore = convertor.mergestore(templatestore, inputstore, blankmsgstr=pot, duplicatestyle=duplicatestyle)
- if outputstore.isempty():
+ template_store = ini.inifile(template_file, dialect=dialect)
+ output_store = convertor.merge_store(template_store, input_store, blankmsgstr=pot, duplicatestyle=duplicatestyle)
+ if output_store.isempty():
return 0
- outputfile.write(str(outputstore))
+ output_file.write(str(output_store))
return 1
+
+def convertisl(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt", dialect="inno"):
+ return convertini(input_file, output_file, template_file, pot=pot, duplicatestyle=duplicatestyle, dialect=dialect)
def main(argv=None):
from translate.convert import convert
- formats = {"ini": ("po", convertini), ("ini", "ini"): ("po", convertini)}
+ formats = {
+ "ini": ("po", convertini), ("ini", "ini"): ("po", convertini),
+ "isl": ("po", convertisl), ("isl", "isl"): ("po", convertisl),
+ }
parser = convert.ConvertOptionParser(formats, usetemplates=True, usepots=True, description=__doc__)
parser.add_duplicates_option()
parser.passthrough.append("pot")
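
The formats dict added above keys converters two ways: a bare input extension when no template is used, and an (input, template) extension pair when one is. A small sketch of that lookup convention, with string placeholders standing in for the real converter functions and `pick_converter` as an illustrative helper only:

```python
# Converter registry keyed the way ConvertOptionParser expects:
# a bare extension for template-less runs, an (input, template)
# pair otherwise.
formats = {
    "ini": ("po", "convertini"),
    ("ini", "ini"): ("po", "convertini"),
    "isl": ("po", "convertisl"),
    ("isl", "isl"): ("po", "convertisl"),
}

def pick_converter(input_ext, template_ext=None):
    """Return the (output_ext, converter) entry for the given extensions."""
    key = (input_ext, template_ext) if template_ext else input_ext
    return formats.get(key)
```

This is why the commit registers "isl" under both key shapes: Inno Setup files convert with or without a template.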
Modified: translate-toolkit/branches/upstream/current/translate/convert/moz2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/moz2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/moz2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/moz2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import moz2po
if __name__ == '__main__':
- moz2po.main()
+ moz2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/mozfunny2prop.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/mozfunny2prop.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/mozfunny2prop.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/mozfunny2prop.py Sun Feb 8 16:49:31 2009
@@ -91,7 +91,7 @@
for line in it2prop(lines, encoding=itencoding):
yield encodepropline(line)
-def inc2po(inputfile, outputfile, templatefile, encoding=None, pot=False, duplicatestyle="msgid_comment"):
+def inc2po(inputfile, outputfile, templatefile, encoding=None, pot=False, duplicatestyle="msgctxt"):
"""wraps prop2po but converts input/template files to properties first"""
inputlines = inputfile.readlines()
inputproplines = [encodepropline(line) for line in inc2prop(inputlines)]
@@ -104,7 +104,7 @@
templatepropfile = None
return prop2po.convertprop(inputpropfile, outputfile, templatepropfile, pot=pot, duplicatestyle=duplicatestyle)
-def it2po(inputfile, outputfile, templatefile, encoding="cp1252", pot=False, duplicatestyle="msgid_comment"):
+def it2po(inputfile, outputfile, templatefile, encoding="cp1252", pot=False, duplicatestyle="msgctxt"):
"""wraps prop2po but converts input/template files to properties first"""
inputlines = inputfile.readlines()
inputproplines = [encodepropline(line) for line in it2prop(inputlines, encoding=encoding)]
@@ -117,7 +117,7 @@
templatepropfile = None
return prop2po.convertprop(inputpropfile, outputfile, templatepropfile, pot=pot, duplicatestyle=duplicatestyle)
-def ini2po(inputfile, outputfile, templatefile, encoding="UTF-8", pot=False, duplicatestyle="msgid_comment"):
+def ini2po(inputfile, outputfile, templatefile, encoding="UTF-8", pot=False, duplicatestyle="msgctxt"):
return it2po(inputfile=inputfile, outputfile=outputfile, templatefile=templatefile, encoding=encoding, pot=pot, duplicatestyle=duplicatestyle)
def main(argv=None):
Modified: translate-toolkit/branches/upstream/current/translate/convert/odf2xliff.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/odf2xliff.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/odf2xliff.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/odf2xliff.py Sun Feb 8 16:49:31 2009
@@ -20,11 +20,7 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
-"""convert OpenDocument (ODF) files to XLIFF localization files
-
-see: http://translate.sourceforge.net/wiki/toolkit/odf2xliff for examples and
-usage instructions.
-"""
+"""convert OpenDocument (ODF) files to XLIFF localization files"""
# Import from ttk
from translate.storage import factory
@@ -37,13 +33,6 @@
"""reads in stdin using fromfileclass, converts using convertorclass,
writes to stdout
"""
-
- # Temporary hack.
- # inputfile is a Zip file, and needs to be
- # read and written as a binary file under Windows, but
- # they isn't initially in binary mode (under Windows);
- # thus, we have to reopen it as such.
- inputfile = open(inputfile.name, 'rb')
def translate_toolkit_implementation(store):
import cStringIO
@@ -104,10 +93,25 @@
return parser
from translate.convert import convert
+ # For formats see OpenDocument 1.2 draft 7 Appendix C
formats = {"sxw": ("xlf", convertodf),
- "odt": ("xlf", convertodf),
- "ods": ("xlf", convertodf),
- "odp": ("xlf", convertodf)}
+ "odt": ("xlf", convertodf), # Text
+ "ods": ("xlf", convertodf), # Spreadsheet
+ "odp": ("xlf", convertodf), # Presentation
+ "odg": ("xlf", convertodf), # Drawing
+ "odc": ("xlf", convertodf), # Chart
+ "odf": ("xlf", convertodf), # Formula
+ "odi": ("xlf", convertodf), # Image
+ "odm": ("xlf", convertodf), # Master Document
+ "ott": ("xlf", convertodf), # Text template
+ "ots": ("xlf", convertodf), # Spreadsheet template
+ "otp": ("xlf", convertodf), # Presentation template
+ "otg": ("xlf", convertodf), # Drawing template
+ "otc": ("xlf", convertodf), # Chart template
+ "otf": ("xlf", convertodf), # Formula template
+ "oti": ("xlf", convertodf), # Image template
+ "oth": ("xlf", convertodf), # Web page template
+ }
parser = convert.ConvertOptionParser(formats, description=__doc__)
add_options(parser)
parser.run(argv)
Modified: translate-toolkit/branches/upstream/current/translate/convert/oo2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/oo2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/oo2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/oo2po Sun Feb 8 16:49:31 2009
@@ -24,5 +24,5 @@
from translate.convert import oo2po
if __name__ == '__main__':
- oo2po.main()
+ oo2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/oo2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/oo2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/oo2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/oo2po.py Sun Feb 8 16:49:31 2009
@@ -56,24 +56,6 @@
unit.addnote(getattr(translators_comment, subkey), origin="developer")
return unit
- def makekey(self, ookey):
- """converts an oo key tuple into a key identifier for the base class file (.po or XLIFF)"""
- project, sourcefile, resourcetype, groupid, localid, platform = ookey
- sourcefile = sourcefile.replace('\\','/')
- if self.long_keys:
- sourcebase = os.path.join(project, sourcefile)
- else:
- sourceparts = sourcefile.split('/')
- sourcebase = "".join(sourceparts[-1:])
- if (groupid) == 0 or len(localid) == 0:
- ooid = groupid + localid
- else:
- ooid = groupid + "." + localid
- if resourcetype:
- ooid = ooid + "." + resourcetype
- key = "%s#%s" % (sourcebase, ooid)
- return oo.normalizefilename(key)
-
def convertelement(self, theoo):
"""convert an oo element into a list of base units (.po or XLIFF)"""
if self.sourcelanguage in theoo.languages:
@@ -94,7 +76,7 @@
translators_comment = theoo.languages["x-comment"]
else:
translators_comment = oo.ooline()
- key = self.makekey(part1.getkey())
+ key = oo.makekey(part1.getkey(), self.long_keys)
unitlist = []
for subkey in ("text", "quickhelptext", "title"):
unit = self.maketargetunit(part1, part2, translators_comment, key, subkey)
@@ -105,13 +87,13 @@
def convertstore(self, theoofile, duplicatestyle="msgctxt"):
"""converts an entire oo file to a base class format (.po or XLIFF)"""
thetargetfile = po.pofile()
- thetargetfile.setsourcelanguage(self.sourcelanguage)
- thetargetfile.settargetlanguage(self.targetlanguage)
# create a header for the file
bug_url = 'http://qa.openoffice.org/issues/enter_bug.cgi' + ('''?subcomponent=ui&comment=&short_desc=Localization issue in file: %(filename)s&component=l10n&form_name=enter_issue''' % {"filename": theoofile.filename}).replace(" ", "%20").replace(":", "%3A")
targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit", x_accelerator_marker="~", report_msgid_bugs_to=bug_url)
targetheader.addnote("extracted from %s" % theoofile.filename, "developer")
thetargetfile.addunit(targetheader)
+ thetargetfile.setsourcelanguage(self.sourcelanguage)
+ thetargetfile.settargetlanguage(self.targetlanguage)
# go through the oo and convert each element
for theoo in theoofile.units:
unitlist = self.convertelement(theoo)
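
Both oo2po here and oo2xliff below drop their private makekey in favour of a shared oo.makekey. The removed logic reads roughly as follows. Two caveats: the removed code's `if (groupid) == 0` compares a string against 0 and is therefore always false, so this sketch uses the presumably intended `len()` check; it also omits the final `oo.normalizefilename()` escaping step:

```python
def makekey(ookey, long_keys=False):
    """Turn an OpenOffice.org key tuple into a "file#id" identifier."""
    project, sourcefile, resourcetype, groupid, localid, platform = ookey
    sourcefile = sourcefile.replace('\\', '/')
    if long_keys:
        sourcebase = project + '/' + sourcefile
    else:
        sourcebase = sourcefile.split('/')[-1]   # basename only
    if len(groupid) == 0 or len(localid) == 0:
        ooid = groupid + localid                 # at most one part present
    else:
        ooid = groupid + "." + localid
    if resourcetype:
        ooid = ooid + "." + resourcetype
    return "%s#%s" % (sourcebase, ooid)
```

So a key tuple like ("sw", "source\\ui\\app.src", "string", "GRP", "LOC", "") yields "app.src#GRP.LOC.string" in short-key mode.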
Modified: translate-toolkit/branches/upstream/current/translate/convert/oo2xliff.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/oo2xliff.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/oo2xliff.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/oo2xliff.py Sun Feb 8 16:49:31 2009
@@ -50,29 +50,14 @@
unit = xliff.xliffunit(text1)
unit.target = text2
- unit.markfuzzy(False)
+ if unit.target:
+ unit.markfuzzy(False)
+ else:
+ unit.markfuzzy(True)
unit.addlocation(key + "." + subkey)
if getattr(translators_comment, subkey).strip() != "":
unit.addnote(getattr(translators_comment, subkey), origin="developer")
return unit
-
- def makekey(self, ookey):
- """converts an oo key tuple into a key identifier for the base class file (.po or XLIFF)"""
- project, sourcefile, resourcetype, groupid, localid, platform = ookey
- sourcefile = sourcefile.replace('\\','/')
- if self.long_keys:
- sourcebase = os.path.join(project, sourcefile)
- else:
- sourceparts = sourcefile.split('/')
- sourcebase = "".join(sourceparts[-1:])
- if (groupid) == 0 or len(localid) == 0:
- ooid = groupid + localid
- else:
- ooid = groupid + "." + localid
- if resourcetype:
- ooid = ooid + "." + resourcetype
- key = "%s#%s" % (sourcebase, ooid)
- return oo.normalizefilename(key)
def convertelement(self, theoo):
"""convert an oo element into a list of base units (.po or XLIFF)"""
@@ -94,7 +79,7 @@
translators_comment = theoo.languages["x-comment"]
else:
translators_comment = oo.ooline()
- key = self.makekey(part1.getkey())
+ key = oo.makekey(part1.getkey(), self.long_keys)
unitlist = []
for subkey in ("text", "quickhelptext", "title"):
unit = self.maketargetunit(part1, part2, translators_comment, key, subkey)
@@ -121,7 +106,7 @@
if not options.targetlanguage:
raise ValueError("You must specify the target language.")
-def convertoo(inputfile, outputfile, templates, pot=False, sourcelanguage=None, targetlanguage=None, duplicatestyle="msgid_comment", multifilestyle="single"):
+def convertoo(inputfile, outputfile, templates, pot=False, sourcelanguage=None, targetlanguage=None, duplicatestyle="msgctxt", multifilestyle="single"):
"""reads in stdin using inputstore class, converts using convertorclass, writes to stdout"""
inputstore = oo.oofile()
if hasattr(inputfile, "filename"):
Modified: translate-toolkit/branches/upstream/current/translate/convert/php2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/php2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/php2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/php2po Sun Feb 8 16:49:31 2009
@@ -23,8 +23,7 @@
You can merge translated strings back using po2php"""
from translate.convert import php2po
-from translate.convert import convert
if __name__ == '__main__':
- php2po.main()
+ php2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/php2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/php2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/php2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/php2po.py Sun Feb 8 16:49:31 2009
@@ -51,8 +51,6 @@
outputheader.addnote("extracted from %s, %s" % (templatestore.filename, inputstore.filename), "developer")
outputstore.addunit(outputheader)
inputstore.makeindex()
- # we try and merge the header po with any comments at the start of the properties file
- appendedheader = 0
# loop through the original file, looking at units one by one
for templateunit in templatestore.units:
outputunit = self.convertunit(templateunit, "developer")
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2csv
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2csv?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2csv (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2csv Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import po2csv
if __name__ == '__main__':
- po2csv.main()
+ po2csv.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2dtd.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2dtd.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2dtd.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2dtd.py Sun Feb 8 16:49:31 2009
@@ -25,93 +25,19 @@
from translate.storage import dtd
from translate.storage import po
from translate.misc import quote
+from translate.convert import accesskey
import warnings
-
-# labelsuffixes and accesskeysuffixes are combined to accelerator notation
-labelsuffixes = (".label", ".title")
-accesskeysuffixes = (".accesskey", ".accessKey", ".akey")
-
-def getlabel(unquotedstr):
- """retrieve the label from a mixed label+accesskey entity"""
- if isinstance(unquotedstr, str):
- unquotedstr = unquotedstr.decode("UTF-8")
- # mixed labels just need the & taken out
- # except that &entity; needs to be avoided...
- amppos = 0
- while amppos >= 0:
- amppos = unquotedstr.find("&", amppos)
- if amppos != -1:
- amppos += 1
- semipos = unquotedstr.find(";", amppos)
- if semipos != -1:
- if unquotedstr[amppos:semipos].isalnum():
- continue
- # otherwise, cut it out... only the first one need be changed
- # (see below to see how the accesskey is done)
- unquotedstr = unquotedstr[:amppos-1] + unquotedstr[amppos:]
- break
- return unquotedstr.encode("UTF-8")
-
-def getaccesskey(unquotedstr):
- """retrieve the access key from a mixed label+accesskey entity"""
- if isinstance(unquotedstr, str):
- unquotedstr = unquotedstr.decode("UTF-8")
- # mixed access keys need the key extracted from after the &
- # but we must avoid proper entities i.e. > etc...
- amppos = 0
- while amppos >= 0:
- amppos = unquotedstr.find("&", amppos)
- if amppos != -1:
- amppos += 1
- semipos = unquotedstr.find(";", amppos)
- if semipos != -1:
- if unquotedstr[amppos:semipos].isalnum():
- # what we have found is an entity, not a shortcut key...
- continue
- # otherwise, we found the shortcut key
- return unquotedstr[amppos].encode("UTF-8")
- # if we didn't find the shortcut key, return an empty string rather than the original string
- # this will come out as "don't have a translation for this" because the string is not changed...
- # so the string from the original dtd will be used instead
- return ""
-
-def removeinvalidamps(entity, unquotedstr):
- """find ampersands that aren't part of an entity definition..."""
- amppos = 0
- invalidamps = []
- while amppos >= 0:
- amppos = unquotedstr.find("&", amppos)
- if amppos != -1:
- amppos += 1
- semipos = unquotedstr.find(";", amppos)
- if semipos != -1:
- checkentity = unquotedstr[amppos:semipos]
- if checkentity.replace('.', '').isalnum():
- # what we have found is an entity, not a problem...
- continue
- elif checkentity[0] == '#' and checkentity[1:].isalnum():
- # what we have found is an entity, not a problem...
- continue
- # otherwise, we found a problem
- invalidamps.append(amppos-1)
- if len(invalidamps) > 0:
- warnings.warn("invalid ampersands in dtd entity %s" % (entity))
- comp = 0
- for amppos in invalidamps:
- unquotedstr = unquotedstr[:amppos-comp] + unquotedstr[amppos-comp+1:]
- comp += 1
- return unquotedstr
def getmixedentities(entities):
"""returns a list of mixed .label and .accesskey entities from a list of entities"""
mixedentities = [] # those entities which have a .label and .accesskey combined
# search for mixed entities...
for entity in entities:
- for labelsuffix in labelsuffixes:
+ for labelsuffix in dtd.labelsuffixes:
if entity.endswith(labelsuffix):
entitybase = entity[:entity.rfind(labelsuffix)]
# see if there is a matching accesskey, making this a mixed entity
- for akeytype in accesskeysuffixes:
+ for akeytype in dtd.accesskeysuffixes:
if entitybase + akeytype in entities:
# add both versions to the list of mixed entities
mixedentities += [entity, entitybase+akeytype]
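The suffix tuples now live in `translate.storage.dtd`; the pairing logic itself is unchanged. Reproduced here as a self-contained sketch (suffixes hardcoded rather than imported) to show how label/accesskey pairs are detected:

```python
# Hardcoded stand-ins for dtd.labelsuffixes / dtd.accesskeysuffixes.
labelsuffixes = (".label", ".title")
accesskeysuffixes = (".accesskey", ".accessKey", ".akey")

def getmixedentities(entities):
    """Return the entities that form a label+accesskey pair."""
    mixed = []
    for entity in entities:
        for labelsuffix in labelsuffixes:
            if entity.endswith(labelsuffix):
                base = entity[:entity.rfind(labelsuffix)]
                # a matching accesskey entity makes this a mixed pair
                for akeytype in accesskeysuffixes:
                    if base + akeytype in entities:
                        mixed += [entity, base + akeytype]
    return mixed
```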
@@ -125,16 +51,16 @@
if len(unquotedstr.strip()) == 0:
return
# handle mixed entities
- for labelsuffix in labelsuffixes:
+ for labelsuffix in dtd.labelsuffixes:
if entity.endswith(labelsuffix):
if entity in mixedentities:
- unquotedstr = getlabel(unquotedstr)
+ unquotedstr, akey = accesskey.extract(unquotedstr)
break
else:
- for akeytype in accesskeysuffixes:
+ for akeytype in dtd.accesskeysuffixes:
if entity.endswith(akeytype):
if entity in mixedentities:
- unquotedstr = getaccesskey(unquotedstr)
+ label, unquotedstr = accesskey.extract(unquotedstr)
if not unquotedstr:
warnings.warn("Could not find accesskey for %s" % entity)
else:
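The removed `getlabel`/`getaccesskey` pair is replaced by a single `accesskey.extract()` that returns both parts at once. A sketch of the presumed behaviour, reconstructed from the deleted helpers — the real `translate.convert.accesskey.extract` may differ in details:

```python
def extract(text, accesskey_marker="&"):
    """Split a combined label+accesskey string like "&File" into
    ("File", "F").  XML entities such as "&amp;" are kept verbatim.
    Sketch only; reconstructed from the removed helpers."""
    label, akey = "", ""
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == accesskey_marker and i + 1 < len(text):
            semi = text.find(";", i + 1)
            if semi != -1 and text[i + 1:semi].isalnum():
                # a real entity, e.g. &amp; -- copy it through untouched
                label += text[i:semi + 1]
                i = semi + 1
                continue
            if not akey:
                # first marked character is the access key
                akey = text[i + 1]
            label += text[i + 1]
            i += 2
            continue
        label += ch
        i += 1
    return label, akey
```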
@@ -143,11 +69,8 @@
unquotedstr = unquotedstr.upper()
elif original.islower() and unquotedstr.isupper():
unquotedstr = unquotedstr.lower()
- # handle invalid left-over ampersands (usually unneeded access key shortcuts)
- unquotedstr = removeinvalidamps(entity, unquotedstr)
- # finally set the new definition in the dtd, but not if its empty
if len(unquotedstr) > 0:
- dtdunit.definition = dtd.quotefordtd(unquotedstr)
+ dtdunit.definition = dtd.quotefordtd(dtd.removeinvalidamps(entity, unquotedstr))
class redtd:
"""this is a convertor class that creates a new dtd based on a template using translations in a po"""
@@ -205,8 +128,7 @@
unquoted = inputunit.target
else:
unquoted = inputunit.source
- unquoted = removeinvalidamps(dtdunit.entity, unquoted)
- dtdunit.definition = dtd.quotefordtd(unquoted)
+ dtdunit.definition = dtd.quotefordtd(dtd.removeinvalidamps(dtdunit.entity, unquoted))
def convertunit(self, inputunit):
dtdunit = dtd.dtdunit()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2html (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2html Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import po2html
if __name__ == '__main__':
- po2html.main()
+ po2html.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2ical
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2ical?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2ical (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2ical Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import po2ical
if __name__ == '__main__':
- po2ical.main()
+ po2ical.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2ini
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2ini?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2ini (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2ini Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import po2ini
if __name__ == '__main__':
- po2ini.main()
+ po2ini.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2ini.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2ini.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2ini.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2ini.py Sun Feb 8 16:49:31 2009
@@ -26,9 +26,9 @@
from translate.storage import ini
class reini:
- def __init__(self, templatefile):
+ def __init__(self, templatefile, dialect):
self.templatefile = templatefile
- self.templatestore = ini.inifile(templatefile)
+ self.templatestore = ini.inifile(templatefile, dialect=dialect)
self.inputdict = {}
def convertstore(self, inputstore, includefuzzy=False):
@@ -49,20 +49,26 @@
inistring = unit.source
self.inputdict[location] = inistring
-def convertini(inputfile, outputfile, templatefile, includefuzzy=False):
+def convertini(inputfile, outputfile, templatefile, includefuzzy=False, dialect="default"):
inputstore = factory.getobject(inputfile)
if templatefile is None:
raise ValueError("must have template file for ini files")
else:
- convertor = reini(templatefile)
+ convertor = reini(templatefile, dialect)
outputstring = convertor.convertstore(inputstore, includefuzzy)
outputfile.write(outputstring)
return 1
+def convertisl(inputfile, outputfile, templatefile, includefuzzy=False, dialect="inno"):
+ convertini(inputfile, outputfile, templatefile, includefuzzy=False, dialect=dialect)
+
def main(argv=None):
# handle command line options
from translate.convert import convert
- formats = {("po", "ini"): ("ini", convertini)}
+ formats = {
+ ("po", "ini"): ("ini", convertini),
+ ("po", "isl"): ("isl", convertisl),
+ }
parser = convert.ConvertOptionParser(formats, usetemplates=True, description=__doc__)
parser.add_fuzzy_option()
parser.run(argv)
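po2ini now also registers Inno Setup `.isl` output by threading a `dialect` argument down to the ini parser. A sketch of why a dialect matters when rendering entries — the dialect table and the `%%` escaping rule are assumptions for illustration, not the `translate.storage.ini` API:

```python
# Hypothetical dialect table; the real translate.storage.ini dialects
# may carry different options.
DIALECTS = {
    "default": {"escape_percent": False},
    "inno":    {"escape_percent": True},
}

def render_entry(key, value, dialect="default"):
    """Render one key=value line for the given ini dialect.  Inno Setup
    message files use %% for a literal percent sign (an assumption in
    this sketch)."""
    if DIALECTS[dialect]["escape_percent"]:
        value = value.replace("%", "%%")
    return "%s=%s" % (key, value)
```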
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2moz
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2moz?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2moz (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2moz Sun Feb 8 16:49:31 2009
@@ -24,5 +24,5 @@
from translate.convert import po2moz
if __name__ == '__main__':
- po2moz.main()
+ po2moz.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2oo
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2oo?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2oo (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2oo Sun Feb 8 16:49:31 2009
@@ -27,5 +27,5 @@
from translate.convert import po2oo
if __name__ == '__main__':
- po2oo.main()
+ po2oo.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2oo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2oo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2oo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2oo.py Sun Feb 8 16:49:31 2009
@@ -53,29 +53,11 @@
self.timestamp_str = None
self.includefuzzy = includefuzzy
- def makekey(self, ookey):
- """converts an oo key tuple into a key identifier for the source file"""
- project, sourcefile, resourcetype, groupid, localid, platform = ookey
- sourcefile = sourcefile.replace('\\','/')
- if self.long_keys:
- sourcebase = os.path.join(project, sourcefile)
- else:
- sourceparts = sourcefile.split('/')
- sourcebase = "".join(sourceparts[-1:])
- if len(groupid) == 0 or len(localid) == 0:
- fullid = groupid + localid
- else:
- fullid = groupid + "." + localid
- if resourcetype:
- fullid = fullid + "." + resourcetype
- key = "%s#%s" % (sourcebase, fullid)
- return oo.normalizefilename(key)
-
def makeindex(self):
"""makes an index of the oo keys that are used in the source file"""
self.index = {}
for ookey, theoo in self.o.ookeys.iteritems():
- sourcekey = self.makekey(ookey)
+ sourcekey = oo.makekey(ookey, self.long_keys)
self.index[sourcekey] = theoo
def readoo(self, of):
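`makekey` moves off the convertor class into the `oo` module. The removed method, reproduced as a self-contained sketch (without the final `oo.normalizefilename` call), shows how an OpenOffice.org key tuple becomes a source-file identifier:

```python
import os

def makekey(ookey, long_keys=False):
    """Build a key identifier from an oo key tuple, as the relocated
    oo.makekey does.  Sketch of the removed method, minus
    oo.normalizefilename."""
    project, sourcefile, resourcetype, groupid, localid, platform = ookey
    sourcefile = sourcefile.replace('\\', '/')
    if long_keys:
        sourcebase = os.path.join(project, sourcefile)
    else:
        # short keys use only the file name
        sourcebase = sourcefile.split('/')[-1]
    if not groupid or not localid:
        fullid = groupid + localid
    else:
        fullid = groupid + "." + localid
    if resourcetype:
        fullid = fullid + "." + resourcetype
    return "%s#%s" % (sourcebase, fullid)
```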
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2php
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2php?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2php (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2php Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import po2php
if __name__ == '__main__':
- po2php.main()
+ po2php.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2php.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2php.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2php.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2php.py Sun Feb 8 16:49:31 2009
@@ -39,7 +39,7 @@
def convertstore(self, inputstore, includefuzzy=False):
self.inmultilinemsgid = False
- self.inecho = 0
+ self.inecho = False
self.makestoredict(inputstore, includefuzzy)
outputlines = []
for line in self.templatefile.readlines():
@@ -61,7 +61,6 @@
returnline = ""
# handle multiline msgid if we're in one
if self.inmultilinemsgid:
- msgid = quote.rstripeol(line).strip()
# see if there's more
endpos = line.rfind("%s;" % self.quotechar)
# if there was no '; or the quote is escaped, we have to continue
@@ -90,15 +89,15 @@
postspaceend = len(line[equalspos+1:].lstrip())
postspace = line[equalspos+1:equalspos+(postspacestart-postspaceend)+1]
self.quotechar = line[equalspos+(postspacestart-postspaceend)+1]
- print key
+ inlinecomment = line[line.rfind("%s;" % self.quotechar)+2:]
if self.inputdict.has_key(lookupkey):
- self.inecho = 0
+ self.inecho = False
value = php.phpencode(self.inputdict[lookupkey], self.quotechar)
if isinstance(value, str):
value = value.decode('utf8')
- returnline = key + prespace + "=" + postspace + self.quotechar + value + self.quotechar + ';' + eol
+ returnline = key + prespace + "=" + postspace + self.quotechar + value + self.quotechar + ';' + inlinecomment + eol
else:
- self.inecho = 1
+ self.inecho = True
returnline = line+eol
# no string termination means carry string on to next line
endpos = line.rfind("%s;" % self.quotechar)
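The po2php hunk's new `inlinecomment` slice keeps anything after the closing quote — typically a trailing `// comment` — instead of dropping it. A simplified sketch of that preservation (assumes single-line values with no escaped quote characters):

```python
def replace_value(line, quotechar, newvalue):
    """Replace the quoted value in a PHP assignment line while keeping
    any trailing inline comment, as the rewritten po2php does.
    Simplified sketch: one line, no escaped quotes."""
    equalspos = line.find("=")
    key = line[:equalspos].rstrip()
    # everything after the closing quote + semicolon is the comment
    end = line.rfind("%s;" % quotechar)
    inlinecomment = line[end + 2:]
    return "%s = %s%s%s;%s" % (key, quotechar, newvalue, quotechar, inlinecomment)
```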
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2prop
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2prop?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2prop (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2prop Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import po2prop
if __name__ == '__main__':
- po2prop.main()
+ po2prop.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2prop.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2prop.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2prop.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2prop.py Sun Feb 8 16:49:31 2009
@@ -38,8 +38,8 @@
def convertstore(self, inputstore, personality, includefuzzy=False):
self.personality = personality
- self.inmultilinemsgid = 0
- self.inecho = 0
+ self.inmultilinemsgid = False
+ self.inecho = False
self.makestoredict(inputstore, includefuzzy)
outputlines = []
for line in self.templatefile.readlines():
@@ -83,7 +83,7 @@
else:
# backslash at end means carry string on to next line
if quote.rstripeol(line)[-1:] == '\\':
- self.inmultilinemsgid = 1
+ self.inmultilinemsgid = True
# now deal with the current string...
key = line[:equalspos].strip()
# Calculate space around the equal sign
@@ -92,7 +92,7 @@
postspaceend = len(line[equalspos+1:].lstrip())
postspace = line[equalspos+1:equalspos+(postspacestart-postspaceend)+1]
if self.inputdict.has_key(key):
- self.inecho = 0
+ self.inecho = False
value = self.inputdict[key]
if isinstance(value, str):
value = value.decode('utf8')
@@ -101,7 +101,7 @@
else:
returnline = key+prespace+"="+postspace+quote.javapropertiesencode(value)+eol
else:
- self.inecho = 1
+ self.inecho = True
returnline = line+eol
if isinstance(returnline, unicode):
returnline = returnline.encode('utf-8')
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2rc
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2rc?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2rc (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2rc Sun Feb 8 16:49:31 2009
@@ -24,5 +24,5 @@
from translate.convert import po2rc
if __name__ == '__main__':
- po2rc.main()
+ po2rc.main()
Added: translate-toolkit/branches/upstream/current/translate/convert/po2symb
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2symb?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2symb (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2symb Sun Feb 8 16:49:31 2009
@@ -1,0 +1,27 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of Virtaal.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""Convert a gettext .po localization file to a Symbian localisation file"""
+
+from translate.convert import po2symb
+
+if __name__ == '__main__':
+ po2symb.main()
+
Propchange: translate-toolkit/branches/upstream/current/translate/convert/po2symb
------------------------------------------------------------------------------
svn:executable = *
Added: translate-toolkit/branches/upstream/current/translate/convert/po2symb.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2symb.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2symb.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2symb.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,104 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of Virtaal.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""convert Gettext PO localization files to Symbian translation files."""
+
+import sys
+from translate.storage import factory
+from translate.storage.pypo import po_escape_map
+from translate.storage.symbian import *
+
+def escape(text):
+ for key, val in po_escape_map.iteritems():
+ text = text.replace(key, val)
+ return '"%s"' % text
+
+def replace_header_items(ps, replacments):
+ match = read_while(ps, header_item_or_end_re.match, lambda match: match is None)
+ while not ps.current_line.startswith('*/'):
+ match = header_item_re.match(ps.current_line)
+ if match is not None:
+ key = match.groupdict()['key']
+ if key in replacments:
+ ps.current_line = match.expand('\g<key>\g<space>%s\n' % replacments[key])
+ ps.read_line()
+
+def parse(ps, header_replacements, body_replacements):
+ replace_header_items(ps, header_replacements)
+ try:
+ while True:
+ eat_whitespace(ps)
+ skip_no_translate(ps)
+ match = string_entry_re.match(ps.current_line)
+ if match is not None:
+ key = match.groupdict()['id']
+ if key in body_replacements:
+ value = body_replacements[key].target or body_replacements[key].source
+ ps.current_line = match.expand(u'\g<start>\g<id>\g<space>%s\n' % escape(value))
+ ps.read_line()
+ except StopIteration:
+ pass
+
+def line_saver(charset):
+ result = []
+ def save_line(line):
+ result.append(line.encode(charset))
+ return result, save_line
+
+def write_symbian(f, header_replacements, body_replacements):
+ lines = list(f)
+ charset = read_charset(lines)
+ result, save_line = line_saver(charset)
+ parse(ParseState(iter(lines), charset, save_line), header_replacements, body_replacements)
+ return result
+
+def build_location_index(store):
+ po_header = store.parseheader()
+ index = {}
+ for unit in store.units:
+ for location in unit.getlocations():
+ index[location] = unit
+ index['r_string_languagegroup_name'] = store.UnitClass(po_header['Language-Team'])
+ return index
+
+def build_header_index(store):
+ po_header = store.parseheader()
+ return {'Author': po_header['Last-Translator']}
+
+def convert_symbian(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt"):
+ store = factory.getobject(input_file)
+ location_index = build_location_index(store)
+ header_index = build_header_index(store)
+ output = write_symbian(template_file, header_index, location_index)
+ for line in output:
+ output_file.write(line)
+ return 1
+
+def main(argv=None):
+ from translate.convert import convert
+ formats = {"po": ("r0", convert_symbian)}
+ parser = convert.ConvertOptionParser(formats, usetemplates=True, usepots=True, description=__doc__)
+ parser.add_duplicates_option()
+ parser.passthrough.append("pot")
+ parser.run(argv)
+
+if __name__ == '__main__':
+ main()
+
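po2symb's `escape()` reuses the pypo escape table to quote strings for Symbian resource files. A self-contained sketch with a minimal stand-in for `po_escape_map` (the real map in `translate.storage.pypo` covers more characters), written for Python 3's `items()` rather than the original `iteritems()`:

```python
# Minimal stand-in for translate.storage.pypo.po_escape_map: raw
# characters mapped to their PO escape sequences.  Backslash first,
# so later replacements don't get re-escaped.
po_escape_map = {'\\': '\\\\', '"': '\\"', '\n': '\\n', '\t': '\\t'}

def escape(text):
    """Escape text and wrap it in double quotes, as po2symb.escape
    does when writing Symbian localisation strings."""
    for key, val in po_escape_map.items():
        text = text.replace(key, val)
    return '"%s"' % text
```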
Added: translate-toolkit/branches/upstream/current/translate/convert/po2tiki
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2tiki?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2tiki (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2tiki Sun Feb 8 16:49:31 2009
@@ -1,0 +1,26 @@
+#!/usr/bin/env python
+#
+# Copyright 2008 Mozilla Corporation, Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""simple script to convert a gettext .po localization file to a TikiWiki style language.php file"""
+
+from translate.convert import po2tiki
+
+if __name__ == '__main__':
+ po2tiki.main()
Added: translate-toolkit/branches/upstream/current/translate/convert/po2tiki.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2tiki.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2tiki.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2tiki.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,79 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Mozilla Corporation, Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+""" Convert .po files to TikiWiki's language.php files. """
+
+import sys
+from translate.storage import tiki
+from translate.storage import po
+
+class po2tiki:
+ def convertstore(self, thepofile):
+ """Converts a given (parsed) po file to a tiki file.
+
+ @param thepofile: a pofile pre-loaded with input data
+ """
+ thetargetfile = tiki.TikiStore()
+ for unit in thepofile.units:
+ if not (unit.isblank() or unit.isheader()):
+ newunit = tiki.TikiUnit(unit.source)
+ newunit.settarget(unit.target)
+ locations = unit.getlocations()
+ if locations:
+ newunit.addlocations(locations)
+ # If a word is "untranslated" but the target isn't empty and isn't the same as the source
+ # it's been translated and we switch it. This is an assumption but should remain true as long
+ # as these scripts are used.
+ if newunit.getlocations() == ["untranslated"] and unit.source != unit.target and unit.target != "":
+ newunit.location = []
+ newunit.addlocation("translated")
+
+ thetargetfile.addunit(newunit)
+ return thetargetfile
+
+def convertpo(inputfile, outputfile, template=None):
+ """Converts from po file format to tiki.
+
+ @param inputfile: file handle of the source
+ @param outputfile: file handle to write to
+ @param template: unused
+ """
+ inputstore = po.pofile(inputfile)
+ if inputstore.isempty():
+ return False
+ convertor = po2tiki()
+ outputstore = convertor.convertstore(inputstore)
+ outputfile.write(str(outputstore))
+ return True
+
+def main(argv=None):
+ """Will convert from .po to tiki style .php"""
+ from translate.convert import convert
+ from translate.misc import stdiotell
+ sys.stdout = stdiotell.StdIOWrapper(sys.stdout)
+
+ formats = {"po":("tiki",convertpo)}
+
+ parser = convert.ConvertOptionParser(formats, description=__doc__)
+ parser.run(argv)
+
+if __name__ == '__main__':
+ main()
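The interesting part of po2tiki is the location switch: a unit filed under "untranslated" whose target is non-empty and differs from the source has evidently been translated. The heuristic, isolated as a sketch:

```python
def classify(location, source, target):
    """Re-file a unit the way po2tiki.convertstore does: an
    "untranslated" unit with a real, differing target is moved to
    "translated"; everything else keeps its location."""
    if location == ["untranslated"] and target != "" and target != source:
        return ["translated"]
    return location
```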
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2tmx
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2tmx?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2tmx (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2tmx Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import po2tmx
if __name__ == '__main__':
- po2tmx.main()
+ po2tmx.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2ts
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2ts?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2ts (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2ts Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import po2ts
if __name__ == '__main__':
- po2ts.main()
+ po2ts.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2txt
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2txt?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2txt (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2txt Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import po2txt
if __name__ == '__main__':
- po2txt.main()
+ po2txt.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/po2xliff
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/po2xliff?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/po2xliff (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/po2xliff Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import po2xliff
if __name__ == '__main__':
- po2xliff.main()
+ po2xliff.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/pot2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/pot2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/pot2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/pot2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import pot2po
if __name__ == '__main__':
- pot2po.main()
+ pot2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/pot2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/pot2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/pot2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/pot2po.py Sun Feb 8 16:49:31 2009
@@ -25,39 +25,145 @@
usage instructions
"""
-from translate.storage import po
from translate.storage import factory
from translate.search import match
from translate.misc.multistring import multistring
-
-# We don't want to reinitialise the TM each time, so let's store it here.
-tmmatcher = None
-
-def memory(tmfiles, max_candidates=1, min_similarity=75, max_length=1000):
- """Returns the TM store to use. Only initialises on first call."""
- global tmmatcher
- # Only initialise first time
- if tmmatcher is None:
- if isinstance(tmfiles, list):
- tmstore = [factory.getobject(tmfile) for tmfile in tmfiles]
- else:
- tmstore = factory.getobject(tmfiles)
- tmmatcher = match.matcher(tmstore, max_candidates=max_candidates, min_similarity=min_similarity, max_length=max_length)
- return tmmatcher
-
-def convertpot(inputpotfile, outputpofile, templatepofile, tm=None, min_similarity=75, fuzzymatching=True, **kwargs):
- inputpot = po.pofile(inputpotfile)
- templatepo = None
- if templatepofile is not None:
- templatepo = po.pofile(templatepofile)
- outputpo = convertpot_stores(inputpot, templatepo, tm, min_similarity, fuzzymatching, **kwargs)
- outputpofile.write(str(outputpo))
+from translate.tools import pretranslate
+from translate.storage import poheader
+
+
+def convertpot(input_file, output_file, template_file, tm=None, min_similarity=75, fuzzymatching=True, **kwargs):
+ """Main conversion function"""
+
+ input_store = factory.getobject(input_file)
+ template_store = None
+ if template_file is not None:
+ template_store = factory.getobject(template_file)
+ output_store = convert_stores(input_store, template_store, tm, min_similarity, fuzzymatching, **kwargs)
+ output_file.write(str(output_store))
return 1
-def convertpot_stores(inputpot, templatepo, tm=None, min_similarity=75, fuzzymatching=True, **kwargs):
- """reads in inputpotfile, adjusts header, writes to outputpofile. if templatepofile exists, merge translations from it into outputpofile"""
- inputpot.makeindex()
- thetargetfile = po.pofile()
+def convert_stores(input_store, template_store, tm=None, min_similarity=75, fuzzymatching=True, **kwargs):
+ """Actual conversion function. Works on stores, not files: returns
+ a properly initialized, pretranslated output store with structure
+ based on input_store and metadata based on template_store, migrating
+ old translations from template_store and pretranslating from tm"""
+
+ #prepare for merging
+ output_store = type(input_store)()
+ #create fuzzy matchers to be used by pretranslate.pretranslate_unit
+ matchers = []
+ if fuzzymatching:
+ if template_store:
+ matcher = match.matcher(template_store, max_candidates=1, min_similarity=min_similarity, max_length=3000, usefuzzy=True)
+ matcher.addpercentage = False
+ matchers.append(matcher)
+ if tm:
+ matcher = pretranslate.memory(tm, max_candidates=1, min_similarity=min_similarity, max_length=1000)
+ matcher.addpercentage = False
+ matchers.append(matcher)
+ _prepare_merge(input_store, output_store, template_store)
+
+ #initialize store
+ _store_pre_merge(input_store, output_store, template_store)
+
+ # Do matching
+ for input_unit in input_store.units:
+ if input_unit.istranslatable():
+ input_unit = pretranslate.pretranslate_unit(input_unit, template_store, matchers, mark_reused=True)
+ _unit_post_merge(input_unit, input_store, output_store, template_store)
+ output_store.addunit(input_unit)
+
+ #finalize store
+ _store_post_merge(input_store, output_store, template_store)
+
+ return output_store
+
+
+##dispatchers
+def _prepare_merge(input_store, output_store, template_store, **kwargs):
+ """prepare stores & TM matchers before merging"""
+ #dispatch to format specific functions
+ prepare_merge_hook = "_prepare_merge_%s" % input_store.__class__.__name__
+ if globals().has_key(prepare_merge_hook):
+ globals()[prepare_merge_hook](input_store, output_store, template_store, **kwargs)
+
+ #generate an index so we can search by source string and location later on
+ input_store.makeindex()
+ if template_store:
+ template_store.makeindex()
+
+
+def _store_pre_merge(input_store, output_store, template_store, **kwargs):
+ """initialize the new file with things like headers and metadata"""
+ #formats that implement poheader interface are a special case
+ if isinstance(input_store, poheader.poheader):
+ _do_poheaders(input_store, output_store, template_store)
+
+ #dispatch to format specific functions
+ store_pre_merge_hook = "_store_pre_merge_%s" % input_store.__class__.__name__
+ if globals().has_key(store_pre_merge_hook):
+ globals()[store_pre_merge_hook](input_store, output_store, template_store, **kwargs)
+
+
+def _store_post_merge(input_store, output_store, template_store, **kwargs):
+ """close file after merging all translations, used for adding
+ statistics, obsolete messages and similar wrap-up tasks"""
+ #dispatch to format specific functions
+ store_post_merge_hook = "_store_post_merge_%s" % input_store.__class__.__name__
+ if globals().has_key(store_post_merge_hook):
+ globals()[store_post_merge_hook](input_store, output_store, template_store, **kwargs)
+
+def _unit_post_merge(input_unit, input_store, output_store, template_store, **kwargs):
+ """handle any unit level cleanup and situations not handled by the merge()
+ function"""
+ #dispatch to format specific functions
+ unit_post_merge_hook = "_unit_post_merge_%s" % input_unit.__class__.__name__
+ if globals().has_key(unit_post_merge_hook):
+ globals()[unit_post_merge_hook](input_unit, input_store, output_store, template_store, **kwargs)
+
+
+##format specific functions
+def _prepare_merge_pofile(input_store, output_store, template_store):
+ """po format specific template preparation logic"""
+ #we need to revive obsolete units to be able to consider
+ #their translation when matching
+ if template_store:
+ for unit in template_store.units:
+ if unit.isobsolete():
+ unit.resurrect()
+
+
+def _unit_post_merge_pounit(input_unit, input_store, output_store, template_store):
+ """po format specific plural string initialization logic"""
+ #FIXME: do we want to do that for poxliff also?
+ if input_unit.hasplural() and len(input_unit.target) == 0:
+ # untranslated plural unit; Let's ensure that we have the correct number of plural forms:
+ nplurals, plural = output_store.getheaderplural()
+ if nplurals and nplurals.isdigit() and nplurals != '2':
+ input_unit.target = multistring([""]*int(nplurals))
+
+
+def _store_post_merge_pofile(input_store, output_store, template_store):
+ """po format specific, adds newly obsoleted messages to end of store"""
+ #Let's take care of obsoleted messages
+ if template_store:
+ newlyobsoleted = []
+ for unit in template_store.units:
+ if unit.isheader():
+ continue
+ if unit.target and not (input_store.findunit(unit.source) or hasattr(unit, "reused")):
+ #not in .pot, make it obsolete
+ unit.makeobsolete()
+ newlyobsoleted.append(unit)
+ elif unit.isobsolete():
+ output_store.addunit(unit)
+ for unit in newlyobsoleted:
+ output_store.addunit(unit)
+
+
+def _do_poheaders(input_store, output_store, template_store):
+ """adds initialized po headers to output store"""
# header values
charset = "UTF-8"
encoding = "8bit"
@@ -69,21 +175,9 @@
mime_version = None
plural_forms = None
kwargs = {}
- if templatepo is not None:
- fuzzyfilematcher = None
- if fuzzymatching:
- for unit in templatepo.units:
- if unit.isobsolete():
- unit.resurrect()
- try:
- fuzzyfilematcher = match.matcher(templatepo, max_candidates=1, min_similarity=min_similarity, max_length=3000, usefuzzy=True)
- fuzzyfilematcher.addpercentage = False
- except ValueError:
- # Probably no usable units
- pass
-
- templatepo.makeindex()
- templateheadervalues = templatepo.parseheader()
+
+ if template_store is not None:
+ templateheadervalues = template_store.parseheader()
for key, value in templateheadervalues.iteritems():
if key == "Project-Id-Version":
project_id_version = value
@@ -104,11 +198,8 @@
plural_forms = value
else:
kwargs[key] = value
- fuzzyglobalmatcher = None
- if fuzzymatching and tm:
- fuzzyglobalmatcher = memory(tm, max_candidates=1, min_similarity=min_similarity, max_length=1000)
- fuzzyglobalmatcher.addpercentage = False
- inputheadervalues = inputpot.parseheader()
+
+ inputheadervalues = input_store.parseheader()
for key, value in inputheadervalues.iteritems():
if key in ("Project-Id-Version", "Last-Translator", "Language-Team", "PO-Revision-Date", "Content-Type", "Content-Transfer-Encoding", "Plural-Forms"):
# want to carry these from the template so we ignore them
@@ -119,81 +210,30 @@
mime_version = value
else:
kwargs[key] = value
- targetheader = thetargetfile.makeheader(charset=charset, encoding=encoding, project_id_version=project_id_version,
+
+ output_header = output_store.makeheader(charset=charset, encoding=encoding, project_id_version=project_id_version,
pot_creation_date=pot_creation_date, po_revision_date=po_revision_date, last_translator=last_translator,
language_team=language_team, mime_version=mime_version, plural_forms=plural_forms, **kwargs)
+
# Get the header comments and fuzziness state
- if templatepo is not None and len(templatepo.units) > 0:
- if templatepo.units[0].isheader():
- if templatepo.units[0].getnotes("translator"):
- targetheader.addnote(templatepo.units[0].getnotes("translator"), "translator")
- if inputpot.units[0].getnotes("developer"):
- targetheader.addnote(inputpot.units[0].getnotes("developer"), "developer")
- targetheader.markfuzzy(templatepo.units[0].isfuzzy())
- elif len(inputpot.units) > 0 and inputpot.units[0].isheader():
- targetheader.addnote(inputpot.units[0].getnotes())
- thetargetfile.addunit(targetheader)
- # Do matching
- for inputpotunit in inputpot.units:
- if not (inputpotunit.isheader() or inputpotunit.isobsolete()):
- if templatepo:
- possiblematches = []
- for location in inputpotunit.getlocations():
- templatepounit = templatepo.locationindex.get(location, None)
- if templatepounit is not None:
- possiblematches.append(templatepounit)
- if len(inputpotunit.getlocations()) == 0:
- templatepounit = templatepo.findunit(inputpotunit.source)
- if templatepounit:
- possiblematches.append(templatepounit)
- for templatepounit in possiblematches:
- if inputpotunit.source == templatepounit.source and templatepounit.target:
- inputpotunit.merge(templatepounit, authoritative=True)
- break
- else:
- fuzzycandidates = []
- if fuzzyfilematcher:
- fuzzycandidates = fuzzyfilematcher.matches(inputpotunit.source)
- if fuzzycandidates:
- inputpotunit.merge(fuzzycandidates[0])
- original = templatepo.findunit(fuzzycandidates[0].source)
- if original:
- original.reused = True
- if fuzzyglobalmatcher and not fuzzycandidates:
- fuzzycandidates = fuzzyglobalmatcher.matches(inputpotunit.source)
- if fuzzycandidates:
- inputpotunit.merge(fuzzycandidates[0])
- else:
- if fuzzyglobalmatcher:
- fuzzycandidates = fuzzyglobalmatcher.matches(inputpotunit.source)
- if fuzzycandidates:
- inputpotunit.merge(fuzzycandidates[0])
- if inputpotunit.hasplural() and len(inputpotunit.target) == 0:
- # Let's ensure that we have the correct number of plural forms:
- nplurals, plural = thetargetfile.getheaderplural()
- if nplurals and nplurals.isdigit() and nplurals != '2':
- inputpotunit.target = multistring([""]*int(nplurals))
- thetargetfile.addunit(inputpotunit)
-
- #Let's take care of obsoleted messages
- if templatepo:
- newlyobsoleted = []
- for unit in templatepo.units:
- if unit.isheader():
- continue
- if unit.target and not (inputpot.findunit(unit.source) or hasattr(unit, "reused")):
- #not in .pot, make it obsolete
- unit.makeobsolete()
- newlyobsoleted.append(unit)
- elif unit.isobsolete():
- thetargetfile.addunit(unit)
- for unit in newlyobsoleted:
- thetargetfile.addunit(unit)
- return thetargetfile
+ if template_store is not None and len(template_store.units) > 0:
+ if template_store.units[0].isheader():
+ if template_store.units[0].getnotes("translator"):
+ output_header.addnote(template_store.units[0].getnotes("translator"), "translator")
+ if input_store.units[0].getnotes("developer"):
+ output_header.addnote(input_store.units[0].getnotes("developer"), "developer")
+ output_header.markfuzzy(template_store.units[0].isfuzzy())
+ elif len(input_store.units) > 0 and input_store.units[0].isheader():
+ output_header.addnote(input_store.units[0].getnotes())
+
+ output_store.addunit(output_header)
+
def main(argv=None):
from translate.convert import convert
- formats = {"pot": ("po", convertpot), ("pot", "po"): ("po", convertpot)}
+ formats = {"pot": ("po", convertpot), ("pot", "po"): ("po", convertpot),
+ "xlf": ("xlf", convertpot), ("xlf", "xlf"): ("xlf", convertpot),
+ }
parser = convert.ConvertOptionParser(formats, usepots=True, usetemplates=True,
allowmissingtemplate=True, description=__doc__)
parser.add_option("", "--tm", dest="tm", default=None,
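The rewritten pot2po.py above routes format-specific behaviour through functions looked up by name in `globals()` (`_prepare_merge_pofile`, `_unit_post_merge_pounit`, and so on). A minimal standalone sketch of that dispatch pattern, with an illustrative `pofile` stand-in class rather than the toolkit's real store classes:

```python
# Name-based hook dispatch as used by _prepare_merge and friends:
# build "_prepare_merge_<ClassName>" and call it if such a function
# exists in the module namespace.

class pofile(object):
    """Illustrative stand-in for a translation store class."""
    def __init__(self):
        self.prepared = False

def _prepare_merge_pofile(input_store, output_store, template_store):
    # format-specific preparation hook for "pofile" stores
    input_store.prepared = True

def _prepare_merge(input_store, output_store, template_store, **kwargs):
    hook = "_prepare_merge_%s" % input_store.__class__.__name__
    if hook in globals():  # the Python 2 original uses has_key()
        globals()[hook](input_store, output_store, template_store, **kwargs)

store = pofile()
_prepare_merge(store, None, None)
```

Adding support for a new store class then only requires defining a function with the matching name; no dispatcher code changes.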
Modified: translate-toolkit/branches/upstream/current/translate/convert/prop2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/prop2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/prop2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/prop2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import prop2po
if __name__ == '__main__':
- prop2po.main()
+ prop2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/prop2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/prop2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/prop2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/prop2po.py Sun Feb 8 16:49:31 2009
@@ -37,7 +37,7 @@
targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit", x_accelerator_marker="&")
targetheader.addnote("extracted from %s" % thepropfile.filename, "developer")
# we try and merge the header po with any comments at the start of the properties file
- appendedheader = 0
+ appendedheader = False
waitingcomments = []
for propunit in thepropfile.units:
pounit = self.convertunit(propunit, "developer")
@@ -51,7 +51,7 @@
pounit = targetheader
else:
thetargetfile.addunit(targetheader)
- appendedheader = 1
+ appendedheader = True
if pounit is not None:
pounit.addnote("".join(waitingcomments).rstrip(), "developer", position="prepend")
waitingcomments = []
@@ -66,7 +66,7 @@
targetheader.addnote("extracted from %s, %s" % (origpropfile.filename, translatedpropfile.filename), "developer")
translatedpropfile.makeindex()
# we try and merge the header po with any comments at the start of the properties file
- appendedheader = 0
+ appendedheader = False
waitingcomments = []
# loop through the original file, looking at units one by one
for origprop in origpropfile.units:
@@ -82,7 +82,7 @@
origpo = targetheader
else:
thetargetfile.addunit(targetheader)
- appendedheader = 1
+ appendedheader = True
# try and find a translation of the same name...
if origprop.name in translatedpropfile.locationindex:
translatedprop = translatedpropfile.locationindex[origprop.name]
Modified: translate-toolkit/branches/upstream/current/translate/convert/rc2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/rc2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/rc2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/rc2po Sun Feb 8 16:49:31 2009
@@ -23,5 +23,5 @@
from translate.convert import rc2po
if __name__ == '__main__':
- rc2po.main()
+ rc2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/rc2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/rc2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/rc2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/rc2po.py Sun Feb 8 16:49:31 2009
@@ -30,69 +30,69 @@
def __init__(self, charset=None):
self.charset = charset
- def convertstore(self, thercfile, duplicatestyle="msgctxt"):
+ def convert_store(self, input_store, duplicatestyle="msgctxt"):
"""converts a .rc file to a .po file..."""
- thetargetfile = po.pofile()
- targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
- targetheader.addnote("extracted from %s" % thercfile.filename, "developer")
- thetargetfile.addunit(targetheader)
- for rcunit in thercfile.units:
- pounit = self.convertunit(rcunit, "developer")
- if pounit is not None:
- thetargetfile.addunit(pounit)
- thetargetfile.removeduplicates(duplicatestyle)
- return thetargetfile
+ output_store = po.pofile()
+ output_header = output_store.makeheader(charset="UTF-8", encoding="8bit")
+ output_header.addnote("extracted from %s" % input_store.filename, "developer")
+ output_store.addunit(output_header)
+ for input_unit in input_store.units:
+ output_unit = self.convert_unit(input_unit, "developer")
+ if output_unit is not None:
+ output_store.addunit(output_unit)
+ output_store.removeduplicates(duplicatestyle)
+ return output_store
- def mergestore(self, origrcfile, translatedrcfile, blankmsgstr=False, duplicatestyle="msgctxt"):
+ def merge_store(self, template_store, input_store, blankmsgstr=False, duplicatestyle="msgctxt"):
"""converts two .rc files to a .po file..."""
- thetargetfile = po.pofile()
- targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
- targetheader.addnote("extracted from %s, %s" % (origrcfile.filename, translatedrcfile.filename), "developer")
- thetargetfile.addunit(targetheader)
- translatedrcfile.makeindex()
- for origrc in origrcfile.units:
- origpo = self.convertunit(origrc, "developer")
+ output_store = po.pofile()
+ output_header = output_store.makeheader(charset="UTF-8", encoding="8bit")
+ output_header.addnote("extracted from %s, %s" % (template_store.filename, input_store.filename), "developer")
+ output_store.addunit(output_header)
+ input_store.makeindex()
+ for template_unit in template_store.units:
+ origpo = self.convert_unit(template_unit, "developer")
# try and find a translation of the same name...
- origrcname = "".join(origrc.getlocations())
- if origrcname in translatedrcfile.locationindex:
- translatedrc = translatedrcfile.locationindex[origrcname]
- translatedpo = self.convertunit(translatedrc, "translator")
+ template_unit_name = "".join(template_unit.getlocations())
+ if template_unit_name in input_store.locationindex:
+ translatedrc = input_store.locationindex[template_unit_name]
+ translatedpo = self.convert_unit(translatedrc, "translator")
else:
translatedpo = None
# if we have a valid po unit, get the translation and add it...
if origpo is not None:
if translatedpo is not None and not blankmsgstr:
origpo.target = translatedpo.source
- thetargetfile.addunit(origpo)
+ output_store.addunit(origpo)
elif translatedpo is not None:
print >> sys.stderr, "error converting original rc definition %s" % template_unit.name
- thetargetfile.removeduplicates(duplicatestyle)
- return thetargetfile
+ output_store.removeduplicates(duplicatestyle)
+ return output_store
- def convertunit(self, rcunit, commenttype):
+ def convert_unit(self, input_unit, commenttype):
"""Converts a .rc unit to a .po unit. Returns None if empty
or not for translation."""
- if rcunit is None:
+ if input_unit is None:
return None
# escape unicode
- pounit = po.pounit(encoding="UTF-8")
- pounit.addlocation("".join(rcunit.getlocations()))
- pounit.source = rcunit.source.decode(self.charset)
- pounit.target = ""
- return pounit
+ output_unit = po.pounit(encoding="UTF-8")
+ output_unit.addlocation("".join(input_unit.getlocations()))
+ output_unit.source = input_unit.source.decode(self.charset)
+ output_unit.target = ""
+ return output_unit
-def convertrc(inputfile, outputfile, templatefile, pot=False, duplicatestyle="msgctxt", charset=None):
- """reads in inputfile using rc, converts using rc2po, writes to outputfile"""
- inputstore = rc.rcfile(inputfile)
+def convertrc(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt", charset=None):
+ """reads in input_file using rc, converts using rc2po, writes to output_file"""
+ input_store = rc.rcfile(input_file)
convertor = rc2po(charset=charset)
- if templatefile is None:
- outputstore = convertor.convertstore(inputstore, duplicatestyle=duplicatestyle)
+ if template_file is None:
+ output_store = convertor.convert_store(input_store, duplicatestyle=duplicatestyle)
else:
- templatestore = rc.rcfile(templatefile)
- outputstore = convertor.mergestore(templatestore, inputstore, blankmsgstr=pot, duplicatestyle=duplicatestyle)
- if outputstore.isempty():
+ template_store = rc.rcfile(template_file)
+ output_store = convertor.merge_store(template_store, input_store, blankmsgstr=pot, duplicatestyle=duplicatestyle)
+ if output_store.isempty():
return 0
- outputfile.write(str(outputstore))
+ output_file.write(str(output_store))
return 1
def main(argv=None):
@@ -100,9 +100,9 @@
formats = {"rc": ("po", convertrc), ("rc", "rc"): ("po", convertrc),
"nls": ("po", convertrc), ("nls", "nls"): ("po", convertrc)}
parser = convert.ConvertOptionParser(formats, usetemplates=True, usepots=True, description=__doc__)
- defaultcharset="cp1252"
- parser.add_option("", "--charset", dest="charset", default=defaultcharset,
- help="charset to use to decode the RC files (default: %s)" % defaultcharset, metavar="CHARSET")
+ DEFAULTCHARSET = "cp1252"
+ parser.add_option("", "--charset", dest="charset", default=DEFAULTCHARSET,
+ help="charset to use to decode the RC files (default: %s)" % DEFAULTCHARSET, metavar="CHARSET")
parser.add_duplicates_option()
parser.passthrough.append("pot")
parser.passthrough.append("charset")
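The renamed `merge_store` above pairs template units with translated units through a location index built by `makeindex()`. The same lookup in isolation, using plain `(location, text)` tuples instead of real rc units (names here are illustrative, not the toolkit's API):

```python
def merge_by_location(template_units, translated_units, blankmsgstr=False):
    # Index translations by location, mirroring input_store.makeindex(),
    # then walk the template and attach any translation found there.
    locationindex = dict(translated_units)
    merged = []
    for location, source in template_units:
        target = u"" if blankmsgstr else locationindex.get(location, u"")
        merged.append((location, source, target))
    return merged

pairs = merge_by_location(
    [(u"IDS_OPEN", u"Open"), (u"IDS_CLOSE", u"Close")],
    [(u"IDS_OPEN", u"Ouvrir")])
```

Units without a counterpart in the translated file simply get an empty target, which is what produces untranslated entries in the resulting PO file.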
Added: translate-toolkit/branches/upstream/current/translate/convert/symb2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/symb2po?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/symb2po (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/symb2po Sun Feb 8 16:49:31 2009
@@ -1,0 +1,27 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of Virtaal.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""Convert a Symbian translation file to a gettext .po localization file."""
+
+from translate.convert import symb2po
+
+if __name__ == '__main__':
+ symb2po.main()
+
Propchange: translate-toolkit/branches/upstream/current/translate/convert/symb2po
------------------------------------------------------------------------------
svn:executable = *
Added: translate-toolkit/branches/upstream/current/translate/convert/symb2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/symb2po.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/symb2po.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/symb2po.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,111 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of Virtaal.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""Convert Symbian localisation files to Gettext PO localization files."""
+
+import sys
+import re
+from translate.storage import factory
+from translate.storage.pypo import extractpoline
+from translate.storage.symbian import *
+
+def read_header_items(ps):
+ match = read_while(ps, header_item_or_end_re.match, lambda match: match is None)
+ if match.groupdict()['end_comment'] is not None:
+ return {}
+
+ results = {}
+ while match:
+ match_chunks = match.groupdict()
+ ps.read_line()
+ results[match_chunks['key']] = match_chunks['value']
+ match = header_item_re.match(ps.current_line)
+
+ match = read_while(ps, identity, lambda line: not line.startswith('*/'))
+ ps.read_line()
+ return results
+
+def parse(ps):
+ header = read_header_items(ps)
+ units = []
+ try:
+ while True:
+ eat_whitespace(ps)
+ skip_no_translate(ps)
+ match = string_entry_re.match(ps.current_line)
+ if match is not None:
+ units.append((match.groupdict()['id'], extractpoline(match.groupdict()['str'])))
+ ps.read_line()
+ except StopIteration:
+ pass
+ return header, units
+
+def read_symbian(f):
+ lines = list(f)
+ charset = read_charset(lines)
+ return parse(ParseState(iter(lines), charset))
+
+def get_template_dict(template_file):
+ if template_file is not None:
+ template_header, template_units = read_symbian(template_file)
+ return template_header, dict(template_units)
+ else:
+ return {}, {}
+
+def build_output(units, template_header, template_dict):
+ output_store = factory.classes['po']()
+ ignore = set(['r_string_languagegroup_name'])
+ header_entries = {
+ 'Last-Translator': template_header.get('Author', ''),
+ 'Language-Team': template_dict.get('r_string_languagegroup_name', '')
+ }
+ output_store.updateheader(add=True, **header_entries)
+ output_store.changeencoding('UTF-8')
+ for id, source in units:
+ if id in ignore:
+ continue
+ unit = output_store.UnitClass(source)
+ unit.target = template_dict.get(id, '')
+ unit.addlocation(id)
+ output_store.addunit(unit)
+ return output_store
+
+def convert_symbian(input_file, output_file, template_file, pot=False, duplicatestyle="msgctxt"):
+ header, units = read_symbian(input_file)
+ template_header, template_dict = get_template_dict(template_file)
+ output_store = build_output(units, template_header, template_dict)
+
+ if output_store.isempty():
+ return 0
+ else:
+ output_file.write(str(output_store))
+ return 1
+
+def main(argv=None):
+ from translate.convert import convert
+ formats = {"r01": ("po", convert_symbian)}
+ parser = convert.ConvertOptionParser(formats, usetemplates=True, usepots=True, description=__doc__)
+ parser.add_duplicates_option()
+ parser.passthrough.append("pot")
+ parser.run(argv)
+
+if __name__ == '__main__':
+ main()
+
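`read_header_items` above scans the Symbian file's leading comment block for key/value pairs until the closing `*/`. A self-contained sketch of that loop, assuming header lines of the form `* Key : value` (the regex is illustrative; the real `header_item_re` lives in translate.storage.symbian):

```python
import re

# Illustrative pattern for "* Key : value" lines in the /* ... */ header.
header_item_re = re.compile(r"\*\s*(?P<key>\w+)\s*:\s*(?P<value>.*?)\s*$")

def read_header_items(lines):
    """Collect key/value pairs until the closing '*/' of the comment."""
    results = {}
    for line in lines:
        if line.lstrip().startswith("*/"):
            break
        match = header_item_re.match(line.lstrip())
        if match:
            results[match.group("key")] = match.group("value")
    return results

header = read_header_items([
    "* Author : Jane Doe",
    "* Name : example_app",
    "*/",
    "resource text; // not reached",
])
```

symb2po then maps such header fields onto PO header entries, e.g. `Author` to `Last-Translator`.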
Added: translate-toolkit/branches/upstream/current/translate/convert/test_accesskey.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_accesskey.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_accesskey.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_accesskey.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,66 @@
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of The Translate Toolkit.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Test the various functions for combining and extracting accesskeys and
+labels"""
+
+from translate.convert import accesskey
+
+def test_get_label_and_accesskey():
+ """test that we can extract the label and accesskey components from an
+ accesskey+label string"""
+ assert accesskey.extract(u"&File") == (u"File", u"F")
+ assert accesskey.extract(u"~File", u"~") == (u"File", u"F")
+
+def test_ignore_entities():
+ """test that we don't confuse the '&' starting an entity with an '&'
+ accesskey marker"""
+ assert accesskey.extract(u"Set &browserName; as &Default") != (u"Set &browserName; as &Default", u"b")
+ assert accesskey.extract(u"Set &browserName; as &Default") == (u"Set &browserName; as Default", u"D")
+
+def test_alternate_accesskey_marker():
+ """check that we can identify the accesskey if the marker is different"""
+ assert accesskey.extract(u"~File", u"~") == (u"File", u"F")
+ assert accesskey.extract(u"&File", u"~") == (u"&File", u"")
+
+def test_unicode():
+ """test that we can do the same with unicode strings"""
+ assert accesskey.extract(u"Eḏiṱ") == (u"Eḏiṱ", u"")
+ assert accesskey.extract(u"E&ḏiṱ") == (u"Eḏiṱ", u"ḏ")
+ assert accesskey.extract(u"E_ḏiṱ", u"_") == (u"Eḏiṱ", u"ḏ")
+ label, akey = accesskey.extract(u"E&ḏiṱ")
+ assert (label, akey) == (u"Eḏiṱ", u"ḏ")
+ assert isinstance(label, unicode) and isinstance(akey, unicode)
+
+def test_empty_string():
+ """test that we can handle an empty label+accesskey string"""
+ assert accesskey.extract(u"") == (u"", u"")
+ assert accesskey.extract(u"", u"~") == (u"", u"")
+
+def test_combine_label_accesskey():
+ """test that we can combine accesskey and label to create a label+accesskey
+ string"""
+ assert accesskey.combine(u"File", u"F") == u"&File"
+ assert accesskey.combine(u"File", u"F", u"~") == u"~File"
+
+def test_uncombinable():
+ """test our behaviour when we cannot combine label and accesskey"""
+ assert accesskey.combine(u"File", u"D") is None
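The tests above pin down the expected behaviour of `accesskey.extract` and `accesskey.combine`. A minimal re-implementation consistent with those assertions (a sketch for illustration, not the toolkit's actual accesskey module):

```python
import re

def extract(label, marker=u"&"):
    """Split an accesskey+label string like '&File' into ('File', 'F')."""
    if not label:
        return label, u""
    i = 0
    while True:
        i = label.find(marker, i)
        if i == -1 or i + 1 >= len(label):
            return label, u""          # no usable marker found
        rest = label[i + 1:]
        if marker == u"&" and re.match(r"\w+;", rest):
            i += 1                     # '&browserName;' style entity: skip it
            continue
        return label[:i] + rest, rest[0]

def combine(label, accesskey, marker=u"&"):
    """Insert the marker before the accesskey; None if the key is absent."""
    pos = label.find(accesskey)
    if pos == -1:
        return None
    return label[:pos] + marker + label[pos:]
```

The entity check is what makes `extract(u"Set &browserName; as &Default")` skip the first `&` and pick `D` as the accesskey.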
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_convert.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_convert.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_convert.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_convert.py Sun Feb 8 16:49:31 2009
@@ -117,7 +117,11 @@
options = self.help_check(options, "-h, --help")
options = self.help_check(options, "--manpage")
options = self.help_check(options, "--errorlevel=ERRORLEVEL")
- options = self.help_check(options, "--psyco=MODE")
+ try:
+ import psyco
+ options = self.help_check(options, "--psyco=MODE")
+ except Exception:
+ pass
options = self.help_check(options, "-i INPUT, --input=INPUT")
options = self.help_check(options, "-x EXCLUDE, --exclude=EXCLUDE")
options = self.help_check(options, "-o OUTPUT, --output=OUTPUT")
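The change above only checks for `--psyco=MODE` in the help output when the optional psyco module is importable. The same feature-detection idiom in isolation (module names below are purely illustrative):

```python
def has_module(name):
    """Return True when an optional dependency can be imported."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False

# json ships with the standard library; the second name should not exist
json_available = has_module("json")
bogus_available = has_module("no_such_module_xyzzy")
```

Guarding optional-feature assertions this way keeps the test suite green on systems where the accelerator is absent.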
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_dtd2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_dtd2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_dtd2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_dtd2po.py Sun Feb 8 16:49:31 2009
@@ -13,7 +13,7 @@
inputfile = wStringIO.StringIO(dtdsource)
inputdtd = dtd.dtdfile(inputfile)
convertor = dtd2po.dtd2po()
- if not dtdtemplate:
+ if dtdtemplate is None:
outputpo = convertor.convertstore(inputdtd)
else:
templatefile = wStringIO.StringIO(dtdtemplate)
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_html2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_html2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_html2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_html2po.py Sun Feb 8 16:49:31 2009
@@ -254,11 +254,11 @@
self.compareunit(pofile, 4, "Ordered Two")
def test_duplicates(self):
- """check that we use the default style of msgid_comments to disambiguate duplicate messages"""
+ """check that we use the default style of msgctxt to disambiguate duplicate messages"""
markup = "<html><head></head><body><p>Duplicate</p><p>Duplicate</p></body></html>"
pofile = self.html2po(markup)
self.countunits(pofile, 2)
- # FIXME change this so that we check that the KDE comment is correctly added
+ # FIXME change this so that we check that the msgctxt is correctly added
self.compareunit(pofile, 1, "Duplicate")
self.compareunit(pofile, 2, "Duplicate")
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_oo2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_oo2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_oo2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_oo2po.py Sun Feb 8 16:49:31 2009
@@ -82,11 +82,11 @@
self.check_roundtrip('strings.src', r'The given command is not a SELECT statement.\nOnly queries are allowed.')
self.check_roundtrip('source\ui\dlg\AutoControls_tmpl.hrc', r';\t59\t,\t44\t:\t58\t{Tab}\t9\t{Space}\t32')
self.check_roundtrip('inc_openoffice\windows\msi_languages\Nsis.ulf', r'The installation files must be unpacked and copied to your hard disk in preparation for the installation. After that, the %PRODUCTNAME installation will start automatically.\r\n\r\nClick \'Next\' to continue.')
- self.check_roundtrip('file.xhp', r'\<asdf\>')
- self.check_roundtrip('file.xhp', r'\<asdf prop=\"value\"\>')
- self.check_roundtrip('file.xhp', r'\<asdf prop=\"value\"\>marked up text\</asdf\>')
- self.check_roundtrip('file.xhp', r'\<asdf prop=\"value>>\"\>')
- self.check_roundtrip('file.xhp', r'''\<asdf prop=\"value>>\"\>'Next'>> or "<<Previous"\</asdf\>''')
+ self.check_roundtrip('file.xhp', r'\<ahelp\>')
+ self.check_roundtrip('file.xhp', r'\<ahelp prop=\"value\"\>')
+ self.check_roundtrip('file.xhp', r'\<ahelp prop=\"value\"\>marked up text\</ahelp\>')
+ self.check_roundtrip('file.xhp', r'\<ahelp prop=\"value>>\"\>')
+ self.check_roundtrip('file.xhp', r'''\<ahelp prop=\"value>>\"\>'Next'>> or "<<Previous"\</ahelp\>''')
self.check_roundtrip('address_auto.xhp', r'''example, \<item type=\"literal\"\>'Harry\\'s Bar'.\</item\>''')
def xtest_roundtrip_whitespaceonly(self):
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_php2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_php2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_php2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_php2po.py Sun Feb 8 16:49:31 2009
@@ -84,7 +84,7 @@
$lang['prefPanel-smime'] = 'Security';'''
pofile = self.php2po(phpsource)
pounit = self.singleelement(pofile)
- assert pounit.getnotes("developer") == "Comment"
+ assert pounit.getnotes("developer") == "/* Comment"
# TODO write test for inline comments and check for // comments that precede an entry
def test_emptyentry(self):
@@ -94,7 +94,7 @@
pounit = self.singleelement(pofile)
assert pounit.getlocations() == ["$lang['credit']"]
assert pounit.getcontext() == "$lang['credit']"
- assert "#. comment" in str(pofile)
+ assert "#. /* comment" in str(pofile)
assert pounit.source == ""
def test_emptyentry_translated(self):
@@ -109,7 +109,13 @@
def test_newlines_in_value(self):
"""check that we can carry newlines that appear in the entry value into the PO"""
+ # Single quotes - \n is not a newline
phpsource = r'''$lang['name'] = 'value1\nvalue2';'''
+ pofile = self.php2po(phpsource)
+ unit = self.singleelement(pofile)
+ assert unit.source == r"value1\nvalue2"
+ # Double quotes - \n is a newline
+ phpsource = r'''$lang['name'] = "value1\nvalue2";'''
pofile = self.php2po(phpsource)
unit = self.singleelement(pofile)
assert unit.source == "value1\nvalue2"
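The distinction this test relies on comes from PHP itself: `\n` is a literal backslash-n inside single quotes but a real newline inside double quotes. A minimal sketch of that decoding rule (`php_decode` is an illustrative helper, not the converter's actual API, and it omits the full double-quote escape table):

```python
def php_decode(raw, quote):
    r"""Decode a PHP string body according to its quoting style.

    In single quotes only \' and \\ are escape sequences, so \n stays
    a literal backslash-n; in double quotes \n becomes a real newline.
    """
    if quote == "'":
        return raw.replace("\\'", "'").replace("\\\\", "\\")
    return raw.replace("\\n", "\n")

print(repr(php_decode(r"value1\nvalue2", "'")))  # literal backslash-n survives
print(repr(php_decode(r"value1\nvalue2", '"')))  # decoded to a real newline
```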
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_po2dtd.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_po2dtd.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_po2dtd.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_po2dtd.py Sun Feb 8 16:49:31 2009
@@ -83,12 +83,6 @@
dtdsource = str(dtdfile)
assert "Good day\nAll" in dtdsource
- def test_ampersandwarning(self):
- """tests that proper warnings are given if invalid ampersands occur"""
- simplestring = '''#: simple.warningtest\nmsgid "Simple String"\nmsgstr "Dimpled &Ring"\n'''
- warnings.simplefilter("error")
- assert test.raises(Warning, po2dtd.removeinvalidamps, "simple.warningtest", "Dimpled &Ring")
-
def test_missingaccesskey(self):
"""tests that proper warnings are given if access key is missing"""
simplepo = '''#: simple.label\n#: simple.accesskey\nmsgid "Simple &String"\nmsgstr "Dimpled Ring"\n'''
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_po2html.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_po2html.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_po2html.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_po2html.py Sun Feb 8 16:49:31 2009
@@ -47,10 +47,12 @@
"dit bestaan uit ten minste een sin."
'''
htmlexpected = '''<body>
-<div>'n Paragraaf is 'n afdeling in 'n geskrewe stuk wat gewoonlik
+<div>
+'n Paragraaf is 'n afdeling in 'n geskrewe stuk wat gewoonlik
'n spesifieke punt uitlig. Dit begin altyd op 'n nuwe lyn
(gewoonlik met indentasie) en dit bestaan uit ten minste een
-sin.</div>
+sin.
+</div>
</body>'''
assert htmlexpected.replace("\n", " ") in self.converthtml(posource, htmlsource).replace("\n", " ")
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_po2php.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_po2php.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_po2php.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_po2php.py Sun Feb 8 16:49:31 2009
@@ -75,6 +75,15 @@
print phpfile
assert phpfile == [phpexpected]
+ def test_inline_comments(self):
+ """check that we include inline comments from the template. Bug 590"""
+ posource = '''#: $lang['name']\nmsgid "value"\nmsgstr "waarde"\n'''
+ phptemplate = '''$lang[ 'name' ] = 'value'; //inline comment\n'''
+ phpexpected = '''$lang[ 'name' ] = 'waarde'; //inline comment\n'''
+ phpfile = self.merge2php(phptemplate, posource)
+ print phpfile
+ assert phpfile == [phpexpected]
+
# def test_merging_propertyless_template(self):
# """check that when merging with a template with no property values that we copy the template"""
# posource = ""
Added: translate-toolkit/branches/upstream/current/translate/convert/test_po2tiki.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_po2tiki.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_po2tiki.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_po2tiki.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,40 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# po2tiki unit tests
+# Author: Wil Clouser <wclouser at mozilla.com>
+# Date: 2008-12-01
+
+from translate.convert import po2tiki
+from translate.storage import tiki
+from translate.convert import test_convert
+from translate.misc import wStringIO
+
+class TestPo2Tiki:
+ def test_convertpo(self):
+ inputfile = """
+#: translated
+msgid "zero_source"
+msgstr "zero_target"
+
+#: unused
+msgid "one_source"
+msgstr "one_target"
+ """
+ outputfile = wStringIO.StringIO()
+ po2tiki.convertpo(inputfile, outputfile)
+
+ output = outputfile.getvalue()
+
+ assert '"one_source" => "one_target",' in output
+ assert '"zero_source" => "zero_target",' in output
+
+
+class TestPo2TikiCommand(test_convert.TestConvertCommand, TestPo2Tiki):
+ """Tests running actual po2tiki commands on files"""
+ convertmodule = po2tiki
+ defaultoptions = {}
+
+ def test_help(self):
+ """tests getting help"""
+ options = test_convert.TestConvertCommand.test_help(self)
Modified: translate-toolkit/branches/upstream/current/translate/convert/test_pot2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_pot2po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_pot2po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_pot2po.py Sun Feb 8 16:49:31 2009
@@ -270,7 +270,8 @@
def test_merging_obsoleting_messages(self):
"""check that we obsolete messages no longer present in the new file"""
- potsource = ''
+ #add empty msgid line to help the factory identify the format
+ potsource = 'msgid ""\nmsgstr ""\n'
posource = '# Some comment\n#. Extracted comment\n#: obsoleteme:10\nmsgid "One"\nmsgstr "Een"\n'
expected = '# Some comment\n#~ msgid "One"\n#~ msgstr "Een"\n'
newpo = self.convertpot(potsource, posource)
@@ -280,7 +281,8 @@
def test_not_obsoleting_empty_messages(self):
"""check that we don't obsolete (and keep) untranslated messages"""
- potsource = ''
+ #add empty msgid line to help the factory identify the format
+ potsource = 'msgid ""\nmsgstr ""\n'
posource = '#: obsoleteme:10\nmsgid "One"\nmsgstr ""\n'
newpo = self.convertpot(potsource, posource)
print str(newpo)
@@ -411,6 +413,34 @@
assert newpounit.isfuzzy()
assert newpounit.hastypecomment("c-format")
+ def test_obsolete_msgctxt(self):
+ """Test that obsolete units' msgctxt is preserved."""
+ potsource = 'msgctxt "newContext"\nmsgid "First unit"\nmsgstr ""'
+ posource = """
+msgctxt "newContext"
+msgid "First unit"
+msgstr "Eerste eenheid"
+
+#~ msgctxt "context"
+#~ msgid "Old unit"
+#~ msgstr "Ou eenheid1"
+
+#~ msgctxt "context2"
+#~ msgid "Old unit"
+#~ msgstr "Ou eenheid2"
+
+#~ msgid "Old unit"
+#~ msgstr "Ou eenheid3"
+"""
+ newpo = self.convertpot(potsource, posource)
+ print newpo
+ assert len(newpo.units) == 5
+ assert newpo.units[1].getcontext() == 'newContext'
+ # Search in unit string, because obsolete units can't return a context
+ assert 'msgctxt "context"' in str(newpo.units[2])
+ assert 'msgctxt "context2"' in str(newpo.units[3])
+
+
class TestPOT2POCommand(test_convert.TestConvertCommand, TestPOT2PO):
"""Tests running actual pot2po commands on files"""
convertmodule = pot2po
Added: translate-toolkit/branches/upstream/current/translate/convert/test_tiki2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/test_tiki2po.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/test_tiki2po.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/test_tiki2po.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,56 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# tiki2po unit tests
+# Author: Wil Clouser <wclouser at mozilla.com>
+# Date: 2008-12-01
+
+from translate.convert import tiki2po
+from translate.storage import tiki
+from translate.convert import test_convert
+from translate.misc import wStringIO
+
+class TestTiki2Po:
+ def test_converttiki_defaults(self):
+ inputfile = """
+"zero_source" => "zero_target",
+// ### Start of unused words
+"one_source" => "one_target",
+// ### end of unused words
+ """
+ outputfile = wStringIO.StringIO()
+ tiki2po.converttiki(inputfile, outputfile)
+
+ output = outputfile.getvalue()
+
+ assert '#: translated' in output
+ assert 'msgid "zero_source"' in output
+ assert "one_source" not in output
+
+ def test_converttiki_includeunused(self):
+ inputfile = """
+"zero_source" => "zero_target",
+// ### Start of unused words
+"one_source" => "one_target",
+// ### end of unused words
+ """
+ outputfile = wStringIO.StringIO()
+ tiki2po.converttiki(inputfile, outputfile, includeunused=True)
+
+ output = outputfile.getvalue()
+
+ assert '#: translated' in output
+ assert 'msgid "zero_source"' in output
+ assert '#: unused' in output
+ assert 'msgid "one_source"' in output
+
+
+class TestTiki2PoCommand(test_convert.TestConvertCommand, TestTiki2Po):
+ """Tests running actual tiki2po commands on files"""
+ convertmodule = tiki2po
+ defaultoptions = {}
+
+ def test_help(self):
+ """tests getting help"""
+ options = test_convert.TestConvertCommand.test_help(self)
+ options = self.help_check(options, "--include-unused")
Added: translate-toolkit/branches/upstream/current/translate/convert/tiki2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/tiki2po?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/tiki2po (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/tiki2po Sun Feb 8 16:49:31 2009
@@ -1,0 +1,27 @@
+#!/usr/bin/env python
+#
+# Copyright 2008 Mozilla Corporation, Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""simple script to convert a TikiWiki style language.php file to a gettext .po localization file"""
+
+from translate.convert import tiki2po
+
+if __name__ == '__main__':
+ tiki2po.main()
+
Added: translate-toolkit/branches/upstream/current/translate/convert/tiki2po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/tiki2po.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/tiki2po.py (added)
+++ translate-toolkit/branches/upstream/current/translate/convert/tiki2po.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,90 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Mozilla Corporation, Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+""" Convert TikiWiki's language.php files to GetText PO files. """
+
+import re
+import sys
+from translate.storage import tiki
+from translate.storage import po
+
+class tiki2po:
+ def __init__(self, includeunused=False):
+ """
+ @param includeunused: On conversion, should the "unused" section be preserved? Default: False
+ """
+ self.includeunused = includeunused
+
+ def convertstore(self, thetikifile):
+ """Converts a given (parsed) tiki file to a po file.
+
+ @param thetikifile: a tikifile pre-loaded with input data
+ """
+ thetargetfile = po.pofile()
+
+ # Set up the header
+ targetheader = thetargetfile.makeheader(charset="UTF-8", encoding="8bit")
+ thetargetfile.addunit(targetheader)
+
+ # For each lang unit, make the new po unit accordingly
+ for unit in thetikifile.units:
+ if not self.includeunused and "unused" in unit.getlocations():
+ continue
+ newunit = po.pounit()
+ newunit.source = unit.source
+ newunit.settarget(unit.target)
+ locations = unit.getlocations()
+ if locations:
+ newunit.addlocations(locations)
+ thetargetfile.addunit(newunit)
+ return thetargetfile
+
+def converttiki(inputfile, outputfile, template=None, includeunused=False):
+ """Converts from tiki file format to po.
+
+ @param inputfile: file handle of the source
+ @param outputfile: file handle to write to
+ @param template: unused
+ @param includeunused: Include the "unused" section of the tiki file? Default: False
+ """
+ convertor = tiki2po(includeunused=includeunused)
+ inputstore = tiki.TikiStore(inputfile)
+ outputstore = convertor.convertstore(inputstore)
+ if outputstore.isempty():
+ return False
+ outputfile.write(str(outputstore))
+ return True
+
+def main(argv=None):
+ """Converts tiki .php files to .po."""
+ from translate.convert import convert
+ from translate.misc import stdiotell
+ sys.stdout = stdiotell.StdIOWrapper(sys.stdout)
+
+ formats = {"php":("po",converttiki)}
+
+ parser = convert.ConvertOptionParser(formats, description=__doc__)
+ parser.add_option("", "--include-unused", dest="includeunused", action="store_true", default=False, help="Include strings in the unused section")
+ parser.passthrough.append("includeunused")
+ parser.run(argv)
+
+if __name__ == '__main__':
+ main()
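The language.php layout the converter consumes is visible in the tests above: `"source" => "target",` pairs, with an "unused" block fenced by `// ### Start of unused words` comments. A standalone sketch of that parse step, independent of the toolkit's actual `TikiStore` implementation (which distinguishes more section types than the two shown here):

```python
import re

PAIR = re.compile(r'^"(.+?)"\s*=>\s*"(.+?)",')

def parse_language_php(text, includeunused=False):
    """Yield (source, target, location) tuples from tiki language.php text.

    Tracks whether the current line sits inside the
    '### Start of unused words' block and skips those entries
    unless includeunused is set.
    """
    unused = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("//"):
            if "Start of unused words" in line:
                unused = True
            elif "end of unused words" in line:
                unused = False
            continue
        m = PAIR.match(line)
        if m and (includeunused or not unused):
            yield m.group(1), m.group(2), "unused" if unused else "translated"

sample = '''
"zero_source" => "zero_target",
// ### Start of unused words
"one_source" => "one_target",
// ### end of unused words
'''
print(list(parse_language_php(sample)))  # only the used entry by default
```

With `includeunused=True` the second pair is emitted too, tagged `"unused"`, mirroring the `--include-unused` option added above.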
Modified: translate-toolkit/branches/upstream/current/translate/convert/ts2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/ts2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/ts2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/ts2po Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import ts2po
if __name__ == '__main__':
- ts2po.main()
+ ts2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/txt2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/txt2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/txt2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/txt2po Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import txt2po
if __name__ == '__main__':
- txt2po.main()
+ txt2po.main()
Modified: translate-toolkit/branches/upstream/current/translate/convert/xliff2odf.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/xliff2odf.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/xliff2odf.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/xliff2odf.py Sun Feb 8 16:49:31 2009
@@ -20,11 +20,7 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
-"""convert OpenDocument (ODF) files to Gettext PO localization files
-
-see: http://translate.sourceforge.net/wiki/toolkit/odf2xliff for examples and
-usage instructions.
-"""
+"""convert OpenDocument (ODF) files to Gettext PO localization files"""
import cStringIO
import zipfile
@@ -74,28 +70,22 @@
unit_trees = load_unit_tree(input_file, dom_trees)
return translate_dom_trees(unit_trees, dom_trees)
-def write_odf(template, output_file, dom_trees):
-
+def write_odf(xlf_data, template, output_file, dom_trees):
def write_content_to_odf(output_zip, dom_trees):
for filename, dom_tree in dom_trees.iteritems():
output_zip.writestr(filename, etree.tostring(dom_tree, encoding='UTF-8', xml_declaration=True))
- output_zip = odf_io.copy_odf(template, output_file, dom_trees.keys())
+ template_zip = zipfile.ZipFile(template, 'r')
+ output_zip = zipfile.ZipFile(output_file, 'w', compression=zipfile.ZIP_DEFLATED)
+ output_zip = odf_io.copy_odf(template_zip, output_zip, dom_trees.keys() + ['META-INF/manifest.xml'])
+ output_zip = odf_io.add_file(output_zip, template_zip.read('META-INF/manifest.xml'), 'translation.xlf', xlf_data)
write_content_to_odf(output_zip, dom_trees)
def convertxliff(input_file, output_file, template):
"""reads in stdin using fromfileclass, converts using convertorclass, writes to stdout"""
-
- # Temporary hack.
- # template and output_file are Zip files, and need to be
- # read and written as binary files under Windows, but
- # they aren't initially in binary mode (under Windows);
- # thus, we have to reopen them as such.
- template = open(template.name, 'rb')
- output_file = open(output_file.name, 'wb')
-
- dom_trees = translate_odf(template, input_file)
- write_odf(template, output_file, dom_trees)
+ xlf_data = input_file.read()
+ dom_trees = translate_odf(template, cStringIO.StringIO(xlf_data))
+ write_odf(xlf_data, template, output_file, dom_trees)
return True
def main(argv=None):
Modified: translate-toolkit/branches/upstream/current/translate/convert/xliff2oo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/xliff2oo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/xliff2oo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/xliff2oo.py Sun Feb 8 16:49:31 2009
@@ -49,29 +49,11 @@
self.timestamp_str = None
self.includefuzzy = includefuzzy
- def makekey(self, ookey):
- """converts an oo key tuple into a key identifier for the source file"""
- project, sourcefile, resourcetype, groupid, localid, platform = ookey
- sourcefile = sourcefile.replace('\\','/')
- if self.long_keys:
- sourcebase = os.path.join(project, sourcefile)
- else:
- sourceparts = sourcefile.split('/')
- sourcebase = "".join(sourceparts[-1:])
- if len(groupid) == 0 or len(localid) == 0:
- fullid = groupid + localid
- else:
- fullid = groupid + "." + localid
- if resourcetype:
- fullid = fullid + "." + resourcetype
- key = "%s#%s" % (sourcebase, fullid)
- return oo.normalizefilename(key)
-
def makeindex(self):
"""makes an index of the oo keys that are used in the source file"""
self.index = {}
for ookey, theoo in self.o.ookeys.iteritems():
- sourcekey = self.makekey(ookey)
+ sourcekey = oo.makekey(ookey, self.long_keys)
self.index[sourcekey] = theoo
def readoo(self, of):
Modified: translate-toolkit/branches/upstream/current/translate/convert/xliff2po
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/convert/xliff2po?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/convert/xliff2po (original)
+++ translate-toolkit/branches/upstream/current/translate/convert/xliff2po Sun Feb 8 16:49:31 2009
@@ -25,5 +25,5 @@
from translate.convert import xliff2po
if __name__ == '__main__':
- xliff2po.main()
+ xliff2po.main()
Added: translate-toolkit/branches/upstream/current/translate/doc/epydoc-config.ini
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/epydoc-config.ini?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/epydoc-config.ini (added)
+++ translate-toolkit/branches/upstream/current/translate/doc/epydoc-config.ini Sun Feb 8 16:49:31 2009
@@ -1,0 +1,111 @@
+[epydoc] # Epydoc section marker (required by ConfigParser)
+
+# modules
+# The list of objects to document. Objects can be named using
+# dotted names, module filenames, or package directory names.
+# Aliases for this option include "objects" and "values".
+modules: translate
+
+# output
+# The type of output that should be generated. Should be one
+# of: html, text, latex, dvi, ps, pdf.
+output: html
+
+# target
+# The path to the output directory. May be relative or absolute.
+target: apidocs/
+
+# docformat
+# The default markup language for docstrings, for modules that do
+# not define __docformat__. Defaults to epytext.
+docformat: epytext
+
+# css
+# The CSS stylesheet for HTML output. Can be the name of a builtin
+# stylesheet, or the name of a file.
+css: white
+
+# name
+# The documented project's name.
+name: Translate Toolkit
+
+# url
+# The documented project's URL.
+url: http://translate.sourceforge.net/wiki/toolkit/index
+
+# link
+# HTML code for the project link in the navigation bar. If left
+# unspecified, the project link will be generated based on the
+# project's name and URL.
+# link: <a href="somewhere">My Cool Project</a>
+
+# top
+# The "top" page for the documentation. Can be a URL, the name
+# of a module or class, or one of the special names "trees.html",
+# "indices.html", or "help.html"
+# top: translate.storage
+
+# help
+# An alternative help file. The named file should contain the
+# body of an HTML file; navigation bars will be added to it.
+# help: my_helpfile.html
+
+# frames
+# Whether or not to include a frames-based table of contents.
+frames: yes
+
+# private
+# Whether or not to include private variables. (Even if included,
+# private variables will be hidden by default.)
+private: yes
+
+# imports
+# Whether or not to list each module's imports.
+imports: yes
+
+# verbosity
+# An integer indicating how verbose epydoc should be. The default
+# value is 0; negative values will suppress warnings and errors;
+# positive values will give more verbose output.
+verbosity: 1
+
+# parse
+# Whether or not parsing should be used to examine objects.
+parse: yes
+
+# introspect
+# Whether or not introspection should be used to examine objects.
+introspect: yes
+
+# graph
+# The list of graph types that should be automatically included
+# in the output. Graphs are generated using the Graphviz "dot"
+# executable. Graph types include: "classtree", "callgraph",
+# "umlclass". Use "all" to include all graph types
+graph: all
+
+# dotpath
+# The path to the Graphviz "dot" executable, used to generate
+# graphs.
+# dotpath: /usr/bin/
+
+# sourcecode
+# Whether or not to include syntax highlighted source code in
+# the output (HTML only).
+sourcecode: yes
+
+# pstat
+# The name of one or more pstat files (generated by the profile
+# or hotshot module). These are used to generate call graphs.
+# pstat: profile.out
+
+# separate-classes
+# Whether each class should be listed in its own section when
+# generating LaTeX or PDF output.
+separate-classes: no
+
+# The format for showing inheritance objects, should be one
+# of: grouped, listed, included.
+inheritance: listed
+exclude: test_*
+
Added: translate-toolkit/branches/upstream/current/translate/doc/gen_api_docs.sh
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/gen_api_docs.sh?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/gen_api_docs.sh (added)
+++ translate-toolkit/branches/upstream/current/translate/doc/gen_api_docs.sh Sun Feb 8 16:49:31 2009
@@ -1,0 +1,18 @@
+#!/bin/sh
+
+# The translate toolkit must be in your PYTHONPATH when you
+# build these documents. Either install them or run:
+# . setpath
+#
+# The script will then find them, build docs and export them
+# to sourceforge.
+#
+# You should also have a setup in .ssh/config that defines
+# sftranslate as your sourceforge account for the translate
+# project.
+
+outputdir=apidocs/
+
+rm -rf $outputdir
+epydoc --config=epydoc-config.ini
+rsync -az -e ssh --delete $outputdir sftranslate:htdocs/doc/api
Propchange: translate-toolkit/branches/upstream/current/translate/doc/gen_api_docs.sh
------------------------------------------------------------------------------
svn:executable = *
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-formats.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-formats.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-formats.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-formats.html Sun Feb 8 16:49:31 2009
@@ -44,7 +44,7 @@
<ul>
<li class="level1"><div class="li"> <a href="toolkit-csv.html" class="wikilink1" title="toolkit-csv.html">CSV</a></div>
</li>
-<li class="level1"><div class="li"> <a href="toolkit-ini.html" class="wikilink1" title="toolkit-ini.html">.ini</a></div>
+<li class="level1"><div class="li"> <a href="toolkit-ini.html" class="wikilink1" title="toolkit-ini.html">.ini</a> (including the Inno Setup .isl dialect, from > v1.2.1)</div>
</li>
<li class="level1"><div class="li"> Java <a href="toolkit-properties.html" class="wikilink1" title="toolkit-properties.html">properties</a> (also Mozilla derived properties files)</div>
</li>
@@ -56,12 +56,14 @@
</li>
<li class="level1"><div class="li"> Qt Linguist <a href="toolkit-ts.html" class="wikilink1" title="toolkit-ts.html">.ts</a> (both 1.0 and 1.1 supported, 1.0 has a converter)</div>
</li>
+<li class="level1"><div class="li"> <a href="toolkit-symbian.html" class="wikilink2" title="toolkit-symbian.html">Symbian</a> localization file (from > v1.2.1)</div>
+</li>
<li class="level1"><div class="li"> Windows <a href="toolkit-rc.html" class="wikilink1" title="toolkit-rc.html">RC</a> files (from v1.2)</div>
</li>
</ul>
</div>
-<!-- SECTION "Other translation formats" [423-757] -->
+<!-- SECTION "Other translation formats" [423-858] -->
<h2><a name="translation_memory_formats" id="translation_memory_formats">Translation Memory formats</a></h2>
<div class="level2">
<ul>
@@ -72,7 +74,7 @@
</ul>
</div>
-<!-- SECTION "Translation Memory formats" [758-832] -->
+<!-- SECTION "Translation Memory formats" [859-933] -->
<h2><a name="glossary_formats" id="glossary_formats">Glossary formats</a></h2>
<div class="level2">
<ul>
@@ -83,7 +85,7 @@
</ul>
</div>
-<!-- SECTION "Glossary formats" [833-910] -->
+<!-- SECTION "Glossary formats" [934-1011] -->
<h2><a name="formats_of_translatable_documents" id="formats_of_translatable_documents">Formats of translatable documents</a></h2>
<div class="level2">
<ul>
@@ -91,6 +93,8 @@
</li>
<li class="level1"><div class="li"> <a href="toolkit-ical.html" class="wikilink1" title="toolkit-ical.html">iCal</a> (from v1.2)</div>
</li>
+<li class="level1"><div class="li"> <a href="http://en.wikipedia.org/wiki/OpenDocument" class="interwiki iw_wp" title="http://en.wikipedia.org/wiki/OpenDocument">OpenDocument</a> - all ODF file types (from v1.2.1)</div>
+</li>
<li class="level1"><div class="li"> <a href="toolkit-text.html" class="wikilink1" title="toolkit-text.html">Text</a> - plain text with blocks separated by whitespace</div>
</li>
<li class="level1"><div class="li"> <a href="toolkit-wiki.html" class="wikilink1" title="toolkit-wiki.html">Wiki</a> - <a href="http://en.wikipedia.org/wiki/dokuwiki" class="interwiki iw_wp" title="http://en.wikipedia.org/wiki/dokuwiki">dokuwiki</a> and <a href="http://en.wikipedia.org/wiki/MediaWiki" class="interwiki iw_wp" title="http://en.wikipedia.org/wiki/MediaWiki">MediaWiki</a> supported</div>
@@ -98,7 +102,7 @@
</ul>
</div>
-<!-- SECTION "Formats of translatable documents" [911-1119] -->
+<!-- SECTION "Formats of translatable documents" [1012-1279] -->
<h2><a name="machine_readable_formats" id="machine_readable_formats">Machine readable formats</a></h2>
<div class="level2">
<ul>
@@ -109,18 +113,18 @@
</ul>
</div>
-<!-- SECTION "Machine readable formats" [1120-1204] -->
+<!-- SECTION "Machine readable formats" [1280-1364] -->
<h2><a name="in_development" id="in_development">In development</a></h2>
<div class="level2">
<ul>
-<li class="level1"><div class="li"> <a href="http://en.wikipedia.org/wiki/OpenDocument" class="interwiki iw_wp" title="http://en.wikipedia.org/wiki/OpenDocument">OpenDocument</a> Format: ODT, ODS and ODP (We have an old sxw converter)</div>
-</li>
<li class="level1"><div class="li"> <a href="toolkit-wml.html" class="wikilink1" title="toolkit-wml.html">WML</a></div>
</li>
-</ul>
-
-</div>
-<!-- SECTION "In development" [1205-1325] -->
+<li class="level1"><div class="li"> <a href="http://bugs.locamotion.org/show_bug.cgi?id=703" class="interwiki iw_bug" title="http://bugs.locamotion.org/show_bug.cgi?id=703">Subtitles</a></div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "In development" [1365-1431] -->
<h2><a name="unsupported_formats" id="unsupported_formats">Unsupported formats</a></h2>
<div class="level2">
@@ -198,10 +202,12 @@
</li>
<li class="level1"><div class="li"> Tcl: .msg files. <a href="http://www.google.com/codesearch?hl=en&q=show:XvsRBDCljVk:M2kzUbm70Ts:D5EHICz0aaQ&sa=N&ct=rd&cs_p=http://www.scilab.org/download/4.0/scilab-4.0-src.tar.gz&cs_f=scilab-4.0/tcl/scipadsources/msg_files/AddingTranslations.txt" class="urlextern" title="http://www.google.com/codesearch?hl=en&q=show:XvsRBDCljVk:M2kzUbm70Ts:D5EHICz0aaQ&sa=N&ct=rd&cs_p=http://www.scilab.org/download/4.0/scilab-4.0-src.tar.gz&cs_f=scilab-4.0/tcl/scipadsources/msg_files/AddingTranslations.txt">Good documentation</a></div>
</li>
-</ul>
-
-</div>
-<!-- SECTION "Unsupported formats" [1326-4136] -->
+<li class="level1"><div class="li"> NSIS installer: <a href="http://trac.vidalia-project.net/browser/vidalia/trunk/src/tools" class="urlextern" title="http://trac.vidalia-project.net/browser/vidalia/trunk/src/tools">Existing C++ implementation</a></div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "Unsupported formats" [1432-4358] -->
<h2><a name="unlikely_to_be_supported" id="unlikely_to_be_supported">Unlikely to be supported</a></h2>
<div class="level2">
@@ -216,5 +222,5 @@
</ul>
</div>
-<!-- SECTION "Unlikely to be supported" [4137-] --></body>
+<!-- SECTION "Unlikely to be supported" [4359-] --></body>
</html>
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-index.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-index.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-index.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-index.html Sun Feb 8 16:49:31 2009
@@ -131,26 +131,20 @@
</li>
<li class="level1"><div class="li"> <a href="toolkit-rc2po.html" class="wikilink1" title="toolkit-rc2po.html">rc2po</a> - Windows Resource .rc (C++ Resource Compiler) converter (v1.2)</div>
</li>
+<li class="level1"><div class="li"> <a href="toolkit-tiki2po.html" class="wikilink1" title="toolkit-tiki2po.html">tiki2po</a> - <a href="http://tikiwiki.org/" class="urlextern" title="http://tikiwiki.org/">TikiWiki</a> language.php converter</div>
+</li>
<li class="level1"><div class="li"> <a href="toolkit-ts2po.html" class="wikilink1" title="toolkit-ts2po.html">ts2po</a> - Qt Linguist .ts converter</div>
</li>
<li class="level1"><div class="li"> <a href="toolkit-txt2po.html" class="wikilink1" title="toolkit-txt2po.html">txt2po</a> - Plain text to <acronym title="Gettext Portable Object">PO</acronym> converter</div>
</li>
<li class="level1"><div class="li"> <a href="toolkit-xliff2po.html" class="wikilink1" title="toolkit-xliff2po.html">xliff2po</a> - <acronym title="XML Localization Interchange File Format">XLIFF</acronym> (<acronym title="Extensible Markup Language">XML</acronym> Localisation Interchange File Format) converter</div>
</li>
-</ul>
-
-<p>
-
-These use the toolkit but are not part of the toolkit, please refer to the site for usage instructions
-
-</p>
-<ul>
-<li class="level1"><div class="li"> <a href="http://tikiwiki.org/tiki-index.php?page=PO%20Convertor%20for%20TikiWiki" class="urlextern" title="http://tikiwiki.org/tiki-index.php?page=PO%20Convertor%20for%20TikiWiki">tiki2po</a> - convert TikiWiki translation files to po</div>
-</li>
-</ul>
-
-</div>
-<!-- SECTION "Converters" [2618-4155] -->
+<li class="level1"><div class="li"> <a href="toolkit-symb2po.html" class="wikilink1" title="toolkit-symb2po.html">symb2po</a> - Symbian-style translation to <acronym title="Gettext Portable Object">PO</acronym> converter (not in an official release yet)</div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "Converters" [2618-4087] -->
<h1><a name="tools" id="tools">Tools</a></h1>
<div class="level1">
@@ -160,7 +154,7 @@
</p>
</div>
-<!-- SECTION "Tools" [4156-4238] -->
+<!-- SECTION "Tools" [4088-4170] -->
<h3><a name="quality_assurance" id="quality_assurance">Quality Assurance</a></h3>
<div class="level3">
@@ -182,7 +176,7 @@
</ul>
</div>
-<!-- SECTION "Quality Assurance" [4239-4740] -->
+<!-- SECTION "Quality Assurance" [4171-4672] -->
<h3><a name="other_tools" id="other_tools">Other tools</a></h3>
<div class="level3">
<ul>
@@ -202,10 +196,12 @@
</li>
<li class="level1"><div class="li"> <a href="toolkit-poterminology.html" class="wikilink1" title="toolkit-poterminology.html">poterminology</a> - extracts potential terminology from your translation files.</div>
</li>
-</ul>
-
-</div>
-<!-- SECTION "Other tools" [4741-5576] -->
+<li class="level1"><div class="li"> <a href="toolkit-tmserver.html" class="wikilink1" title="toolkit-tmserver.html">tmserver</a> - a Translation Memory server, can be queried over <acronym title="Hyper Text Transfer Protocol">HTTP</acronym> using JSON</div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "Other tools" [4673-5592] -->
<h1><a name="scripts" id="scripts">Scripts</a></h1>
<div class="level1">
@@ -239,5 +235,5 @@
</ul>
</div>
-<!-- SECTION "Scripts" [5577-] --></body>
+<!-- SECTION "Scripts" [5593-] --></body>
</html>
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-ini.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-ini.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-ini.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-ini.html Sun Feb 8 16:49:31 2009
@@ -34,6 +34,23 @@
</div>
<!-- SECTION "Conformance" [116-313] -->
+<h3><a name="dialects" id="dialects">Dialects</a></h3>
+<div class="level3">
+
+<p>
+
+The format supports two dialects:
+
+</p>
+<ul>
+<li class="level1"><div class="li"> default: standard iniparse handling of INI files</div>
+</li>
+<li class="level1"><div class="li"> inno: follows <a href="http://www.innosetup.com/files/istrans/" class="urlextern" title="http://www.innosetup.com/files/istrans/">Inno</a> escaping conventions</div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "Dialects" [314-510] -->
<h2><a name="references" id="references">References</a></h2>
<div class="level2">
@@ -50,5 +67,5 @@
</ul>
</div>
-<!-- SECTION "References" [314-] --></body>
+<!-- SECTION "References" [511-] --></body>
</html>
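The new Dialects section above distinguishes the default and Inno handling without showing what differs. A hypothetical fragment illustrating the contrast — the `%n` (line break) and `%%` (literal percent) sequences come from Inno Setup's own messages-file conventions, not from the toolkit's documentation:

```ini
; inno dialect (.isl): %n is a line break, %% a literal percent sign,
; so the value below contains a newline and "100%" once unescaped
[Messages]
FinishedLabel=Setup has finished installing.%nProgress: 100%%

; default dialect: standard iniparse handling, values are taken literally
[messages]
greeting=Hello, world
```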
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-odf2xliff.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-odf2xliff.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-odf2xliff.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-odf2xliff.html Sun Feb 8 16:49:31 2009
@@ -29,8 +29,12 @@
tool.
</p>
+<p>
+If you are more used to software translation or l10n, you might want to read a bit about <a href="guide-document_translation.html" class="wikilink1" title="guide-document_translation.html">document translation</a>. This should help you to get the most out of translating ODF with <acronym title="XML Localization Interchange File Format">XLIFF</acronym>.
+</p>
+
</div>
-<!-- SECTION "odf2xliff and xliff2odf" [1-525] -->
+<!-- SECTION "odf2xliff and xliff2odf" [1-719] -->
<h2><a name="usage" id="usage">Usage</a></h2>
<div class="level2">
<pre class="code">odf2xliff [options] <original_odf> <xliff>
@@ -120,7 +124,7 @@
</table>
</div>
-<!-- SECTION "Usage" [526-2259] -->
+<!-- SECTION "Usage" [720-2453] -->
<h2><a name="examples" id="examples">Examples</a></h2>
<div class="level2">
<pre class="code">odf2xliff english.odt english_français.xlf</pre>
@@ -138,16 +142,16 @@
</p>
</div>
-<!-- SECTION "Examples" [2260-2662] -->
+<!-- SECTION "Examples" [2454-2856] -->
<h2><a name="bugs" id="bugs">Bugs</a></h2>
<div class="level2">
<p>
-This filter is not yet extensively used… expect bugs. See <a href="toolkit-xliff.html" class="wikilink1" title="toolkit-xliff.html">xliff</a> to see how well our implementation conforms to the standard.
+This filter is not yet extensively used - we appreciate your feedback. See <a href="toolkit-xliff.html" class="wikilink1" title="toolkit-xliff.html">xliff</a> to see how well our implementation conforms to the <acronym title="XML Localization Interchange File Format">XLIFF</acronym> standard. Possible issues were listed during <a href="odf-testing.html" class="wikilink1" title="odf-testing.html">testing</a>.
</p>
</div>
-<!-- SECTION "Bugs" [2663-] --></body>
+<!-- SECTION "Bugs" [2857-] --></body>
</html>
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php.html Sun Feb 8 16:49:31 2009
@@ -19,15 +19,11 @@
<p>
-Only in v1.1
-</p>
-
-<p>
Many <a href="http://en.wikipedia.org/wiki/PHP" class="interwiki iw_wp" title="http://en.wikipedia.org/wiki/PHP">PHP</a> programs make use of a localisable string array. The toolkit support the full localisation of such files.
</p>
</div>
-<!-- SECTION "PHP" [1-157] -->
+<!-- SECTION "PHP" [1-143] -->
<h2><a name="example" id="example">Example</a></h2>
<div class="level2">
@@ -38,7 +34,7 @@
<pre class="code php"><pre class="code php"><span class="kw2"><?php</span>
<span class="re0">$string</span><span class="br0">[</span><span class="st0">'name'</span><span class="br0">]</span> = <span class="st0">'value'</span></pre></pre>
</div>
-<!-- SECTION "Example" [158-279] -->
+<!-- SECTION "Example" [144-265] -->
<h2><a name="conformance" id="conformance">Conformance</a></h2>
<div class="level2">
@@ -47,24 +43,37 @@
Our format support allows:
</p>
<ul>
-<li class="level1"><div class="li"> <acronym title="Hypertext Preprocessor">PHP</acronym> escaping</div>
-</li>
-<li class="level1"><div class="li"> Different types of surrounding quotes: " or '</div>
+<li class="level1"><div class="li"> <acronym title="Hypertext Preprocessor">PHP</acronym> escaping (both for <a href="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.single" class="urlextern" title="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.single">single</a> and <a href="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.double" class="urlextern" title="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.double">double</a> quoted strings)</div>
</li>
<li class="level1"><div class="li"> Multiline entries</div>
</li>
<li class="level1"><div class="li"> Various layouts of the id: </div>
-<ul>
-<li class="level2"><div class="li"> $string['name']</div>
-</li>
-<li class="level2"><div class="li"> $string[name]</div>
-</li>
-<li class="level2"><div class="li"> $string[ 'name' ]</div>
</li>
</ul>
+<pre class="code php"><span class="re1">$string</span><span class="br0">[</span><span class="st0">'name'</span><span class="br0">]</span>
+<span class="re1">$string</span><span class="br0">[</span>name<span class="br0">]</span>
+<span class="re1">$string</span><span class="br0">[</span> <span class="st0">'name'</span> <span class="br0">]</span></pre>
+</div>
+<!-- SECTION "Conformance" [266-690] -->
+<h2><a name="non-conformance" id="non-conformance">Non-Conformance</a></h2>
+<div class="level2">
+
+<p>
+
+The following are not yet supported:
+</p>
+<ul>
+<li class="level1"><div class="li"> <acronym title="Hypertext Preprocessor">PHP</acronym> array syntax for localisation</div>
+</li>
+</ul>
+<pre class="code php"><span class="re1">$lang</span> <span class="sy0">=</span> <a href="http://www.php.net/array"><span class="kw3">array</span></a><span class="br0">(</span>
+ <span class="st0">'name'</span> <span class="sy0">=></span> <span class="st0">'value'</span><span class="sy0">,</span>
+ <span class="st0">'name2'</span> <span class="sy0">=></span> <span class="st0">'value2'</span><span class="sy0">,</span>
+<span class="br0">)</span><span class="sy0">;</span></pre><ul>
+<li class="level1"><div class="li"> <a href="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.heredoc" class="urlextern" title="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.heredoc">heredoc</a> and <a href="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.nowdoc" class="urlextern" title="http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.nowdoc">nowdoc</a> are not supported</div>
</li>
</ul>
</div>
-<!-- SECTION "Conformance" [280-] --></body>
+<!-- SECTION "Non-Conformance" [691-] --></body>
</html>
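The Conformance section above lists the three id layouts the PHP format accepts. A rough sketch of how such entries could be matched — this is an illustrative helper, not the Translate Toolkit's actual parser, and it only handles single-quoted values:

```python
import re

# Matches the three documented id layouts:
#   $string['name']   $string[name]   $string[ 'name' ]
ENTRY = re.compile(
    r"""\$string\[\s*
        (?:'(?P<qid>[^']+)'      # quoted id ...
        |(?P<id>\w+))            # ... or a bare id
        \s*\]\s*=\s*
        '(?P<value>[^']*)'       # single-quoted value only, for brevity
        \s*;?""",
    re.VERBOSE,
)

def parse_php_strings(source):
    """Extract id/value pairs from a PHP localisable string array."""
    entries = {}
    for match in ENTRY.finditer(source):
        name = match.group("qid") or match.group("id")
        entries[name] = match.group("value")
    return entries

php = """<?php
$string['name'] = 'value';
$string[name2] = 'value2';
$string[ 'name3' ] = 'value3';
"""
print(parse_php_strings(php))
# {'name': 'value', 'name2': 'value2', 'name3': 'value3'}
```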
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php2po.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php2po.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php2po.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-php2po.html Sun Feb 8 16:49:31 2009
@@ -22,13 +22,8 @@
Converts <acronym title="Hypertext Preprocessor">PHP</acronym> localisable string arrays to Gettext <acronym title="Gettext Portable Object">PO</acronym> format.
</p>
-<p>
-<p><div class="noteclassic">New in v1.1
-</div></p>
-</p>
-
-</div>
-<!-- SECTION "php2po" [1-110] -->
+</div>
+<!-- SECTION "php2po" [1-84] -->
<h2><a name="usage" id="usage">Usage</a></h2>
<div class="level2">
<pre class="code">php2po [options] <php> <po>
@@ -134,7 +129,7 @@
</table>
</div>
-<!-- SECTION "Usage" [111-2295] -->
+<!-- SECTION "Usage" [85-2269] -->
<h2><a name="formats_supported" id="formats_supported">Formats Supported</a></h2>
<div class="level2">
@@ -151,7 +146,7 @@
</p>
</div>
-<!-- SECTION "Formats Supported" [2296-2557] -->
+<!-- SECTION "Formats Supported" [2270-2531] -->
<h3><a name="unsupported" id="unsupported">Unsupported</a></h3>
<div class="level3">
@@ -167,7 +162,7 @@
</p>
</div>
-<!-- SECTION "Unsupported" [2558-2774] -->
+<!-- SECTION "Unsupported" [2532-2748] -->
<h2><a name="examples" id="examples">Examples</a></h2>
<div class="level2">
@@ -217,16 +212,14 @@
</p>
</div>
-<!-- SECTION "Examples" [2775-4117] -->
+<!-- SECTION "Examples" [2749-4091] -->
<h2><a name="issues" id="issues">Issues</a></h2>
<div class="level2">
<ul>
<li class="level1"><div class="li"> Support localisation variables using <code>array</code> is missing</div>
</li>
-<li class="level1"><div class="li"> Proper escaping of single vs double quotes is missing. See <a href="http://bugs.locamotion.org/show_bug.cgi?id=593" class="interwiki iw_bug" title="http://bugs.locamotion.org/show_bug.cgi?id=593">593</a></div>
-</li>
</ul>
</div>
-<!-- SECTION "Issues" [4118-] --></body>
+<!-- SECTION "Issues" [4092-] --></body>
</html>
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-podebug.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-podebug.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-podebug.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-podebug.html Sun Feb 8 16:49:31 2009
@@ -52,7 +52,7 @@
</p>
<table class="inline">
<tr class="row0">
- <td class="col0 leftalign"> -f </td><td class="col1 leftalign"> is an optional format strings. The default format is "[%s]" </td>
+ <td class="col0 leftalign"> -f </td><td class="col1 leftalign"> is an optional format string </td>
</tr>
<tr class="row1">
<td class="col0 leftalign"> <in> </td><td class="col1 leftalign"> is an input directory or localisation file file </td>
@@ -105,7 +105,7 @@
<td class="col0 leftalign"> <a href="toolkit-rewrite_style.html" class="wikilink1" title="toolkit-rewrite_style.html">--rewrite=STYLE</a> </td><td class="col1"> the translation rewrite style: xxx, en, blank, chef (v1.2), unicode (v1.2) </td>
</tr>
<tr class="row12">
- <td class="col0 leftalign"> --ignore=APPLICATION </td><td class="col1 leftalign"> apply tagging ignore rules for the given application: openoffice, mozilla (v1.1.1) </td>
+ <td class="col0 leftalign"> --ignore=APPLICATION </td><td class="col1 leftalign"> apply tagging ignore rules for the given application: kde, gtk, openoffice, mozilla (v1.1.1) </td>
</tr>
<tr class="row13">
<td class="col0 leftalign"> --hash=LENGTH </td><td class="col1"> add an md5 hash to translations (v1.1) </td>
@@ -113,7 +113,7 @@
</table>
</div>
-<!-- SECTION "Usage" [840-2295] -->
+<!-- SECTION "Usage" [840-2274] -->
<h2><a name="formats" id="formats">Formats</a></h2>
<div class="level2">
@@ -169,7 +169,7 @@
</p>
</div>
-<!-- SECTION "Formats" [2296-3273] -->
+<!-- SECTION "Formats" [2275-3252] -->
<h2><a name="rewriting_style" id="rewriting_style">Rewriting (style)</a></h2>
<div class="level2">
@@ -183,7 +183,7 @@
</p>
</div>
-<!-- SECTION "Rewriting (style)" [3274-3913] -->
+<!-- SECTION "Rewriting (style)" [3253-3892] -->
<h2><a name="ignoring_messages" id="ignoring_messages">Ignoring messages</a></h2>
<div class="level2">
@@ -201,7 +201,7 @@
</p>
</div>
-<!-- SECTION "Ignoring messages" [3914-4468] -->
+<!-- SECTION "Ignoring messages" [3893-4447] -->
<h2><a name="hashing" id="hashing">Hashing</a></h2>
<div class="level2">
@@ -214,7 +214,7 @@
</p>
</div>
-<!-- SECTION "Hashing" [4469-4959] -->
+<!-- SECTION "Hashing" [4448-4938] -->
<h2><a name="bugs" id="bugs">Bugs</a></h2>
<div class="level2">
@@ -225,5 +225,5 @@
</p>
</div>
-<!-- SECTION "Bugs" [4960-] --></body>
+<!-- SECTION "Bugs" [4939-] --></body>
</html>
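The podebug table above describes `-f` as an optional format string (the pre-1.3 documentation quoted `"[%s]"` as the default). A minimal sketch, not the toolkit's implementation, of what applying such a format to a catalogue of translations looks like — wrapped strings then stand out in the running application, making hard-coded or untranslated text easy to spot:

```python
def debug_markup(translations, fmt="[%s]"):
    """Apply a podebug-style marker format to every translation.

    `translations` maps msgid -> msgstr; each msgstr is substituted
    into `fmt` at the %s placeholder.
    """
    return {msgid: fmt % msgstr for msgid, msgstr in translations.items()}

catalog = {"File": "Lêer", "Edit": "Redigeer"}
print(debug_markup(catalog))
# {'File': '[Lêer]', 'Edit': '[Redigeer]'}
```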
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter.html Sun Feb 8 16:49:31 2009
@@ -104,45 +104,48 @@
<td class="col0 leftalign"> --header </td><td class="col1 leftalign"> include a <acronym title="Gettext Portable Object">PO</acronym> header in the output </td>
</tr>
<tr class="row15">
+ <td class="col0 leftalign"> --nonotes </td><td class="col1 leftalign"> don't add notes about the errors (since version 1.3) </td>
+ </tr>
+ <tr class="row16">
<td class="col0 leftalign"> --autocorrect </td><td class="col1 leftalign"> output automatic corrections where possible rather than describing issues </td>
</tr>
- <tr class="row16">
+ <tr class="row17">
<td class="col0 leftalign"> --language=LANG </td><td class="col1"> set target language code (e.g. af-ZA) [required for spell check]. This will help to make pofilter aware of the conventions of your language </td>
</tr>
- <tr class="row17">
+ <tr class="row18">
<td class="col0 leftalign"> --openoffice </td><td class="col1 leftalign"> use the standard checks for OpenOffice translations </td>
</tr>
- <tr class="row18">
+ <tr class="row19">
<td class="col0 leftalign"> --mozilla </td><td class="col1 leftalign"> use the standard checks for Mozilla translations </td>
</tr>
- <tr class="row19">
+ <tr class="row20">
<td class="col0 leftalign"> --gnome </td><td class="col1 leftalign"> use the standard checks for Gnome translations </td>
</tr>
- <tr class="row20">
+ <tr class="row21">
<td class="col0 leftalign"> --kde </td><td class="col1 leftalign"> use the standard checks for KDE translations </td>
</tr>
- <tr class="row21">
+ <tr class="row22">
<td class="col0 leftalign"> --wx </td><td class="col1"> use the standard checks for wxWidgets translations (since version 1.1) - identical to --kde </td>
</tr>
- <tr class="row22">
+ <tr class="row23">
<td class="col0 leftalign"> --excludefilter=FILTER </td><td class="col1 leftalign"> don't use FILTER when filtering </td>
</tr>
- <tr class="row23">
+ <tr class="row24">
<td class="col0 leftalign"> -tFILTER, --test=FILTER </td><td class="col1 leftalign"> only use test FILTERs specified with this option when filtering </td>
</tr>
- <tr class="row24">
+ <tr class="row25">
<td class="col0 leftalign"> --notranslatefile=FILE </td><td class="col1 leftalign"> read list of untranslatable words from FILE (must not be translated) </td>
</tr>
- <tr class="row25">
+ <tr class="row26">
<td class="col0 leftalign"> --musttranslatefile=FILE </td><td class="col1 leftalign"> read list of translatable words from FILE (must be translated) </td>
</tr>
- <tr class="row26">
+ <tr class="row27">
<td class="col0 leftalign"> --validcharsfile=FILE </td><td class="col1 leftalign"> read list of all valid characters from FILE (must be in UTF-8) </td>
</tr>
</table>
</div>
-<!-- SECTION "Usage" [501-2999] -->
+<!-- SECTION "Usage" [501-3080] -->
<h2><a name="example" id="example">Example</a></h2>
<div class="level2">
@@ -191,7 +194,7 @@
</p>
</div>
-<!-- SECTION "Example" [3000-4411] -->
+<!-- SECTION "Example" [3081-4492] -->
<h2><a name="bugs" id="bugs">Bugs</a></h2>
<div class="level2">
@@ -202,5 +205,5 @@
</p>
</div>
-<!-- SECTION "Bugs" [4412-] --></body>
+<!-- SECTION "Bugs" [4493-] --></body>
</html>
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter_tests.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter_tests.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter_tests.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-pofilter_tests.html Sun Feb 8 16:49:31 2009
@@ -19,17 +19,17 @@
<p>
-The following are descriptions of the tests available in pofilter and Pootle with some
+The following are descriptions of the tests available in <a href="toolkit-pofilter.html" class="wikilink1" title="toolkit-pofilter.html">pofilter</a> and Pootle with some
details about what type of errors they are useful to test for and the
limitations of each test.
</p>
<p>
-Keep in mind that the software might point to errors which are not necessarily wrong.
-</p>
-
-<p>
-Currently there are 45 tests. If you are using pofilter, you can always run:
+Keep in mind that the software might point to errors which are not necessarily wrong (false positives).
+</p>
+
+<p>
+Currently there are 47 tests. You can always get a list of the currently available tests by running:
</p>
<p>
@@ -37,17 +37,25 @@
</p>
<p>
-to get a list of the current tests available in your installation.
-</p>
-
-<p>
-If you have an idea for a new test or want to add target language adaptations for your
-language then please help us with information about your test idea and the specifics
-of your language. Better yet why not try coding the change yourself.
-</p>
-
-</div>
-<!-- SECTION "Descriptions of all pofilter tests" [1-730] -->
+To see the tests specific to a particular targeted application or group of applications, run:
+</p>
+
+<p>
+<code>pofilter --gnome -l</code>
+</p>
+
+</div>
+<!-- SECTION "Descriptions of all pofilter tests" [1-579] -->
+<h2><a name="adding_new_tests_and_new_language_adaptations" id="adding_new_tests_and_new_language_adaptations">Adding new tests and new language adaptations</a></h2>
+<div class="level2">
+
+<p>
+
+If you have an idea for a new test or want to add target language adaptations for your language then please help us with information about your test idea and the specifics of your language.
+</p>
+
+</div>
+<!-- SECTION "Adding new tests and new language adaptations" [580-829] -->
<h2><a name="test_classification" id="test_classification">Test Classification</a></h2>
<div class="level2">
@@ -68,7 +76,7 @@
<ul>
<li class="level1"><div class="li"> Functional -- may confuse the user</div>
<ul>
-<li class="level2"><div class="li"> <a href="toolkit-pofilter_tests.html#acronyms" class="wikilink1" title="toolkit-pofilter_tests.html">acronyms</a>, <a href="toolkit-pofilter_tests.html#blank" class="wikilink1" title="toolkit-pofilter_tests.html">blank</a>, <a href="toolkit-pofilter_tests.html#emails" class="wikilink1" title="toolkit-pofilter_tests.html">emails</a>, <a href="toolkit-pofilter_tests.html#filepaths" class="wikilink1" title="toolkit-pofilter_tests.html">filepaths</a>, <a href="toolkit-pofilter_tests.html#functions" class="wikilink1" title="toolkit-pofilter_tests.html">functions</a>, <a href="toolkit-pofilter_tests.html#kdecomments" class="wikilink1" title="toolkit-pofilter_tests.html">kdecomments</a>, <a href="toolkit-pofilter_tests.html#long" class="wikilink1" title="toolkit-pofilter_tests.html">long</a>, <a href="toolkit-pofilter_tests.html#musttranslatewords" class="wikilink1" title="toolkit-pofilter_tests.html">musttranslatewords</a>, <a href="toolkit-pofilter_tests.html#notranslatewords" class="wikilink1" title="toolkit-pofilter_tests.html">notranslatewords</a>, <a href="toolkit-pofilter_tests.html#numbers" class="wikilink1" title="toolkit-pofilter_tests.html">numbers</a>, <a href="toolkit-pofilter_tests.html#options" class="wikilink1" title="toolkit-pofilter_tests.html">options</a> (v1.1), <a href="toolkit-pofilter_tests.html#purepunc" class="wikilink1" title="toolkit-pofilter_tests.html">purepunc</a>, <a href="toolkit-pofilter_tests.html#sentencecount" class="wikilink1" title="toolkit-pofilter_tests.html">sentencecount</a>, <a href="toolkit-pofilter_tests.html#short" class="wikilink1" title="toolkit-pofilter_tests.html">short</a>, <a href="toolkit-pofilter_tests.html#spellcheck" class="wikilink1" title="toolkit-pofilter_tests.html">spellcheck</a>, <a href="toolkit-pofilter_tests.html#urls" class="wikilink1" title="toolkit-pofilter_tests.html">urls</a>, <a href="toolkit-pofilter_tests.html#unchanged" class="wikilink1" 
title="toolkit-pofilter_tests.html">unchanged</a></div>
+<li class="level2"><div class="li"> <a href="toolkit-pofilter_tests.html#acronyms" class="wikilink1" title="toolkit-pofilter_tests.html">acronyms</a>, <a href="toolkit-pofilter_tests.html#blank" class="wikilink1" title="toolkit-pofilter_tests.html">blank</a>, <a href="toolkit-pofilter_tests.html#emails" class="wikilink1" title="toolkit-pofilter_tests.html">emails</a>, <a href="toolkit-pofilter_tests.html#filepaths" class="wikilink1" title="toolkit-pofilter_tests.html">filepaths</a>, <a href="toolkit-pofilter_tests.html#functions" class="wikilink1" title="toolkit-pofilter_tests.html">functions</a>, <a href="toolkit-pofilter_tests.html#gconf" class="wikilink1" title="toolkit-pofilter_tests.html">gconf</a>, <a href="toolkit-pofilter_tests.html#kdecomments" class="wikilink1" title="toolkit-pofilter_tests.html">kdecomments</a>, <a href="toolkit-pofilter_tests.html#long" class="wikilink1" title="toolkit-pofilter_tests.html">long</a>, <a href="toolkit-pofilter_tests.html#musttranslatewords" class="wikilink1" title="toolkit-pofilter_tests.html">musttranslatewords</a>, <a href="toolkit-pofilter_tests.html#notranslatewords" class="wikilink1" title="toolkit-pofilter_tests.html">notranslatewords</a>, <a href="toolkit-pofilter_tests.html#numbers" class="wikilink1" title="toolkit-pofilter_tests.html">numbers</a>, <a href="toolkit-pofilter_tests.html#options" class="wikilink1" title="toolkit-pofilter_tests.html">options</a> (v1.1), <a href="toolkit-pofilter_tests.html#purepunc" class="wikilink1" title="toolkit-pofilter_tests.html">purepunc</a>, <a href="toolkit-pofilter_tests.html#sentencecount" class="wikilink1" title="toolkit-pofilter_tests.html">sentencecount</a>, <a href="toolkit-pofilter_tests.html#short" class="wikilink1" title="toolkit-pofilter_tests.html">short</a>, <a href="toolkit-pofilter_tests.html#spellcheck" class="wikilink1" title="toolkit-pofilter_tests.html">spellcheck</a>, <a href="toolkit-pofilter_tests.html#urls" class="wikilink1" 
title="toolkit-pofilter_tests.html">urls</a>, <a href="toolkit-pofilter_tests.html#unchanged" class="wikilink1" title="toolkit-pofilter_tests.html">unchanged</a></div>
</li>
</ul>
</li>
@@ -91,12 +99,12 @@
</ul>
</div>
-<!-- SECTION "Test Classification" [731-2478] -->
+<!-- SECTION "Test Classification" [830-2603] -->
<h2><a name="test_description" id="test_description">Test Description</a></h2>
<div class="level2">
</div>
-<!-- SECTION "Test Description" [2479-2508] -->
+<!-- SECTION "Test Description" [2604-2633] -->
<h3><a name="accelerators" id="accelerators">accelerators</a></h3>
<div class="level3">
@@ -110,7 +118,7 @@
</p>
</div>
-<!-- SECTION "accelerators" [2509-2809] -->
+<!-- SECTION "accelerators" [2634-2934] -->
<h3><a name="acronyms" id="acronyms">acronyms</a></h3>
<div class="level3">
@@ -124,7 +132,7 @@
</p>
</div>
-<!-- SECTION "acronyms" [2810-3163] -->
+<!-- SECTION "acronyms" [2935-3288] -->
<h3><a name="blank" id="blank">blank</a></h3>
<div class="level3">
@@ -138,7 +146,7 @@
</p>
</div>
-<!-- SECTION "blank" [3164-3507] -->
+<!-- SECTION "blank" [3289-3632] -->
<h3><a name="brackets" id="brackets">brackets</a></h3>
<div class="level3">
@@ -152,7 +160,7 @@
</p>
</div>
-<!-- SECTION "brackets" [3508-3687] -->
+<!-- SECTION "brackets" [3633-3812] -->
<h3><a name="compendiumconflicts" id="compendiumconflicts">compendiumconflicts</a></h3>
<div class="level3">
@@ -167,7 +175,7 @@
</p>
</div>
-<!-- SECTION "compendiumconflicts" [3688-4038] -->
+<!-- SECTION "compendiumconflicts" [3813-4163] -->
<h3><a name="credits" id="credits">credits</a></h3>
<div class="level3">
@@ -181,7 +189,7 @@
</p>
</div>
-<!-- SECTION "credits" [4039-4513] -->
+<!-- SECTION "credits" [4164-4638] -->
<h3><a name="doublequoting" id="doublequoting">doublequoting</a></h3>
<div class="level3">
@@ -195,7 +203,7 @@
</p>
</div>
-<!-- SECTION "doublequoting" [4514-4721] -->
+<!-- SECTION "doublequoting" [4639-4846] -->
<h3><a name="doublespacing" id="doublespacing">doublespacing</a></h3>
<div class="level3">
@@ -209,7 +217,7 @@
</p>
</div>
-<!-- SECTION "doublespacing" [4722-5046] -->
+<!-- SECTION "doublespacing" [4847-5171] -->
<h3><a name="doublewords" id="doublewords">doublewords</a></h3>
<div class="level3">
@@ -223,7 +231,7 @@
</p>
</div>
-<!-- SECTION "doublewords" [5047-5439] -->
+<!-- SECTION "doublewords" [5172-5564] -->
<h3><a name="emails" id="emails">emails</a></h3>
<div class="level3">
@@ -237,7 +245,7 @@
</p>
</div>
-<!-- SECTION "emails" [5440-5740] -->
+<!-- SECTION "emails" [5565-5865] -->
<h3><a name="endpunc" id="endpunc">endpunc</a></h3>
<div class="level3">
@@ -255,7 +263,7 @@
</p>
</div>
-<!-- SECTION "endpunc" [5741-6826] -->
+<!-- SECTION "endpunc" [5866-6951] -->
<h3><a name="endwhitespace" id="endwhitespace">endwhitespace</a></h3>
<div class="level3">
@@ -269,7 +277,7 @@
</p>
</div>
-<!-- SECTION "endwhitespace" [6827-7283] -->
+<!-- SECTION "endwhitespace" [6952-7408] -->
<h3><a name="escapes" id="escapes">escapes</a></h3>
<div class="level3">
@@ -283,7 +291,7 @@
</p>
</div>
-<!-- SECTION "escapes" [7284-7491] -->
+<!-- SECTION "escapes" [7409-7616] -->
<h3><a name="filepaths" id="filepaths">filepaths</a></h3>
<div class="level3">
@@ -297,7 +305,7 @@
</p>
</div>
-<!-- SECTION "filepaths" [7492-7759] -->
+<!-- SECTION "filepaths" [7617-7884] -->
<h3><a name="functions" id="functions">functions</a></h3>
<div class="level3">
@@ -311,7 +319,21 @@
</p>
</div>
-<!-- SECTION "functions" [7760-7921] -->
+<!-- SECTION "functions" [7885-8045] -->
+<h3><a name="gconf" id="gconf">gconf</a></h3>
+<div class="level3">
+
+<p>
+
+Checks if we have any gconf config settings translated
+</p>
+
+<p>
+Gconf settings should not be translated, so this check ensures that gconf settings such as "name" or "modification_date" are not translated in the translation. It allows you to change the surrounding quotes but will ensure that the setting values themselves remain untranslated.
+</p>
+
+</div>
+<!-- SECTION "gconf" [8046-8386] -->
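The idea behind the check can be sketched in a few lines of Python. This is an illustrative approximation, not the toolkit's actual implementation; the real check has its own notion of what counts as a gconf key:

```python
import re

def untranslated_gconf_keys(source, target):
    """Return gconf-style keys from the source that are missing in the target.

    A key is any quoted lowercase identifier such as "name" or
    "modification_date". The surrounding quotes may change in the
    translation, but the key itself must survive verbatim.
    """
    keys = re.findall(r'["\']([a-z][a-z0-9_]*)["\']', source)
    return [key for key in keys if key not in target]

# "modification_date" kept verbatim: the check passes (empty list)
print(untranslated_gconf_keys('Set "modification_date" here',
                              'Zet "modification_date" hier'))
# "name" was translated away: the check flags it
print(untranslated_gconf_keys('The "name" setting',
                              'De "naam" instelling'))
```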
<h3><a name="hassuggestion" id="hassuggestion">hassuggestion</a></h3>
<div class="level3">
@@ -325,7 +347,7 @@
</p>
</div>
-<!-- SECTION "hassuggestion" [7922-8251] -->
+<!-- SECTION "hassuggestion" [8387-8716] -->
<h3><a name="isfuzzy" id="isfuzzy">isfuzzy</a></h3>
<div class="level3">
@@ -339,7 +361,7 @@
</p>
</div>
-<!-- SECTION "isfuzzy" [8252-8521] -->
+<!-- SECTION "isfuzzy" [8717-8986] -->
<h3><a name="isreview" id="isreview">isreview</a></h3>
<div class="level3">
@@ -361,7 +383,7 @@
</p>
</div>
-<!-- SECTION "isreview" [8522-8973] -->
+<!-- SECTION "isreview" [8987-9438] -->
<h3><a name="kdecomments" id="kdecomments">kdecomments</a></h3>
<div class="level3">
@@ -375,7 +397,7 @@
</p>
</div>
-<!-- SECTION "kdecomments" [8974-9259] -->
+<!-- SECTION "kdecomments" [9439-9724] -->
<h3><a name="long" id="long">long</a></h3>
<div class="level3">
@@ -391,7 +413,7 @@
</p>
</div>
-<!-- SECTION "long" [9260-9623] -->
+<!-- SECTION "long" [9725-10088] -->
<h3><a name="musttranslatewords" id="musttranslatewords">musttranslatewords</a></h3>
<div class="level3">
@@ -407,7 +429,7 @@
</p>
</div>
-<!-- SECTION "musttranslatewords" [9624-10024] -->
+<!-- SECTION "musttranslatewords" [10089-10489] -->
<h3><a name="newlines" id="newlines">newlines</a></h3>
<div class="level3">
@@ -421,7 +443,7 @@
</p>
</div>
-<!-- SECTION "newlines" [10025-10208] -->
+<!-- SECTION "newlines" [10490-10673] -->
<h3><a name="nplurals" id="nplurals">nplurals</a></h3>
<div class="level3">
@@ -435,7 +457,7 @@
</p>
</div>
-<!-- SECTION "nplurals" [10209-10526] -->
+<!-- SECTION "nplurals" [10674-10991] -->
<h3><a name="notranslatewords" id="notranslatewords">notranslatewords</a></h3>
<div class="level3">
@@ -450,7 +472,7 @@
</p>
</div>
-<!-- SECTION "notranslatewords" [10527-10887] -->
+<!-- SECTION "notranslatewords" [10992-11352] -->
<h3><a name="numbers" id="numbers">numbers</a></h3>
<div class="level3">
@@ -464,7 +486,7 @@
</p>
</div>
-<!-- SECTION "numbers" [10888-11160] -->
+<!-- SECTION "numbers" [11353-11625] -->
<h3><a name="options" id="options">options</a></h3>
<div class="level3">
@@ -478,7 +500,7 @@
</p>
</div>
-<!-- SECTION "options" [11161-11587] -->
+<!-- SECTION "options" [11626-12052] -->
<h3><a name="printf" id="printf">printf</a></h3>
<div class="level3">
@@ -492,7 +514,7 @@
</p>
</div>
-<!-- SECTION "printf" [11588-12264] -->
+<!-- SECTION "printf" [12053-12729] -->
<h3><a name="puncspacing" id="puncspacing">puncspacing</a></h3>
<div class="level3">
@@ -506,7 +528,7 @@
</p>
</div>
-<!-- SECTION "puncspacing" [12265-12490] -->
+<!-- SECTION "puncspacing" [12730-12955] -->
<h3><a name="purepunc" id="purepunc">purepunc</a></h3>
<div class="level3">
@@ -520,7 +542,7 @@
</p>
</div>
-<!-- SECTION "purepunc" [12491-12654] -->
+<!-- SECTION "purepunc" [12956-13119] -->
<h3><a name="sentencecount" id="sentencecount">sentencecount</a></h3>
<div class="level3">
@@ -534,7 +556,7 @@
</p>
</div>
-<!-- SECTION "sentencecount" [12655-13175] -->
+<!-- SECTION "sentencecount" [13120-13640] -->
<h3><a name="short" id="short">short</a></h3>
<div class="level3">
@@ -550,7 +572,7 @@
</p>
</div>
-<!-- SECTION "short" [13176-13537] -->
+<!-- SECTION "short" [13641-14002] -->
<h3><a name="simplecaps" id="simplecaps">simplecaps</a></h3>
<div class="level3">
@@ -564,7 +586,7 @@
</p>
</div>
-<!-- SECTION "simplecaps" [13538-14061] -->
+<!-- SECTION "simplecaps" [14003-14526] -->
<h3><a name="simpleplurals" id="simpleplurals">simpleplurals</a></h3>
<div class="level3">
@@ -581,7 +603,7 @@
</p>
</div>
-<!-- SECTION "simpleplurals" [14062-14663] -->
+<!-- SECTION "simpleplurals" [14527-15128] -->
<h3><a name="singlequoting" id="singlequoting">singlequoting</a></h3>
<div class="level3">
@@ -595,7 +617,7 @@
</p>
</div>
-<!-- SECTION "singlequoting" [14664-15112] -->
+<!-- SECTION "singlequoting" [15129-15577] -->
<h3><a name="spellcheck" id="spellcheck">spellcheck</a></h3>
<div class="level3">
@@ -620,7 +642,7 @@
</p>
</div>
-<!-- SECTION "spellcheck" [15113-16186] -->
+<!-- SECTION "spellcheck" [15578-16651] -->
<h3><a name="startcaps" id="startcaps">startcaps</a></h3>
<div class="level3">
@@ -634,7 +656,7 @@
</p>
</div>
-<!-- SECTION "startcaps" [16187-16659] -->
+<!-- SECTION "startcaps" [16652-17124] -->
<h3><a name="startpunc" id="startpunc">startpunc</a></h3>
<div class="level3">
@@ -648,7 +670,7 @@
</p>
</div>
-<!-- SECTION "startpunc" [16660-16807] -->
+<!-- SECTION "startpunc" [17125-17272] -->
<h3><a name="startwhitespace" id="startwhitespace">startwhitespace</a></h3>
<div class="level3">
@@ -662,7 +684,7 @@
</p>
</div>
-<!-- SECTION "startwhitespace" [16808-16953] -->
+<!-- SECTION "startwhitespace" [17273-17418] -->
<h3><a name="tabs" id="tabs">tabs</a></h3>
<div class="level3">
@@ -676,7 +698,7 @@
</p>
</div>
-<!-- SECTION "tabs" [16954-17103] -->
+<!-- SECTION "tabs" [17419-17568] -->
<h3><a name="unchanged" id="unchanged">unchanged</a></h3>
<div class="level3">
@@ -690,7 +712,7 @@
</p>
</div>
-<!-- SECTION "unchanged" [17104-17389] -->
+<!-- SECTION "unchanged" [17569-17854] -->
<h3><a name="untranslated" id="untranslated">untranslated</a></h3>
<div class="level3">
@@ -704,7 +726,7 @@
</p>
</div>
-<!-- SECTION "untranslated" [17390-17606] -->
+<!-- SECTION "untranslated" [17855-18071] -->
<h3><a name="urls" id="urls">urls</a></h3>
<div class="level3">
@@ -718,7 +740,7 @@
</p>
</div>
-<!-- SECTION "urls" [17607-18121] -->
+<!-- SECTION "urls" [18072-18586] -->
<h3><a name="validchars" id="validchars">validchars</a></h3>
<div class="level3">
@@ -741,7 +763,7 @@
</p>
</div>
-<!-- SECTION "validchars" [18122-18770] -->
+<!-- SECTION "validchars" [18587-19235] -->
<h3><a name="variables" id="variables">variables</a></h3>
<div class="level3">
@@ -755,7 +777,7 @@
</p>
</div>
-<!-- SECTION "variables" [18771-19181] -->
+<!-- SECTION "variables" [19236-19646] -->
<h3><a name="xmltags" id="xmltags">xmltags</a></h3>
<div class="level3">
@@ -774,5 +796,5 @@
</p>
</div>
-<!-- SECTION "xmltags" [19182-] --></body>
+<!-- SECTION "xmltags" [19647-] --></body>
</html>
Added: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-symb2po.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-symb2po.html?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-symb2po.html (added)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-symb2po.html Sun Feb 8 16:49:31 2009
@@ -1,0 +1,238 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html>
+<head>
+ <title></title>
+ <link rel="stylesheet" media="screen" type="text/css" href="./style.css" />
+ <link rel="stylesheet" media="screen" type="text/css" href="./design.css" />
+ <link rel="stylesheet" media="print" type="text/css" href="./print.css" />
+
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+</head>
+<body>
+<a href=".">start</a><br />
+
+
+
+<h1><a name="symb2po" id="symb2po">symb2po</a></h1>
+<div class="level1">
+
+<p>
+
+Converts Symbian-style translation files to <acronym title="Gettext Portable Object">PO</acronym> files and vice versa. The Symbian translation files currently have a strong Buddycloud flavour, but the tools will be made more general as the need arises.
+</p>
+
+<p>
+<p><div class="noteclassic">These tools currently only appear in the development version of the toolkit. You can get it by <a href="toolkit-installation.html" class="wikilink1" title="toolkit-installation.html">checking out the code from Subversion</a>.
+</div></p>
+</p>
+
+</div>
+<!-- SECTION "symb2po" [1-398] -->
+<h2><a name="usage" id="usage">Usage</a></h2>
+<div class="level2">
+<pre class="code">symb2po [options] [-t <target_lang_symb>] <source_lang_symb> <po>
+po2symb [options] -t <target_lang_symb> <po> <target_lang_symb></pre>
+
+<p>
+
+Where:
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0 leftalign"> <target_lang_symb> </td><td class="col1 leftalign"> is a valid Symbian translation file or directory of those files </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0 leftalign"> <source_lang_symb> </td><td class="col1 leftalign"> is a valid Symbian translation file or directory of those files </td>
+ </tr>
+ <tr class="row2">
+ <td class="col0 leftalign"> <po> </td><td class="col1 leftalign"> is a <acronym title="Gettext Portable Object">PO</acronym> or <acronym title="Gettext Portable Object Template">POT</acronym> file or a directory of <acronym title="Gettext Portable Object">PO</acronym> or <acronym title="Gettext Portable Object Template">POT</acronym> files </td>
+ </tr>
+</table>
+
+<p>
+
+Options (symb2po):
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0 leftalign"> --version </td><td class="col1 leftalign"> show program's version number and exit </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0 leftalign"> -h, --help </td><td class="col1 leftalign"> show this help message and exit </td>
+ </tr>
+ <tr class="row2">
+ <td class="col0 leftalign"> --manpage </td><td class="col1 leftalign"> output a manpage based on the help </td>
+ </tr>
+ <tr class="row3">
+ <td class="col0 leftalign"> <a href="toolkit-progress_progress.html" class="wikilink1" title="toolkit-progress_progress.html">--progress=PROGRESS</a> </td><td class="col1 leftalign"> show progress as: dots, none, bar, names, verbose </td>
+ </tr>
+ <tr class="row4">
+ <td class="col0 leftalign"> <a href="toolkit-errorlevel_errorlevel.html" class="wikilink1" title="toolkit-errorlevel_errorlevel.html">--errorlevel=ERRORLEVEL</a> </td><td class="col1 leftalign"> show errorlevel as: none, message, exception, traceback </td>
+ </tr>
+ <tr class="row5">
+ <td class="col0 leftalign"> -i INPUT, --input=INPUT </td><td class="col1 leftalign"> read from INPUT in php format </td>
+ </tr>
+ <tr class="row6">
+ <td class="col0 leftalign"> -x EXCLUDE, --exclude=EXCLUDE </td><td class="col1 leftalign"> exclude names matching EXCLUDE from input paths </td>
+ </tr>
+ <tr class="row7">
+ <td class="col0 leftalign"> -o OUTPUT, --output=OUTPUT </td><td class="col1 leftalign"> write to OUTPUT in po, pot formats </td>
+ </tr>
+ <tr class="row8">
+ <td class="col0 leftalign"> -t TEMPLATE, --template=TEMPLATE </td><td class="col1 leftalign"> read from TEMPLATE in the Symbian translation format </td>
+ </tr>
+ <tr class="row9">
+ <td class="col0 leftalign"> <a href="toolkit-psyco_mode.html" class="wikilink1" title="toolkit-psyco_mode.html">--psyco=MODE</a> </td><td class="col1 leftalign"> use psyco to speed up the operation, modes: none, full, profile </td>
+ </tr>
+ <tr class="row10">
+ <td class="col0 leftalign"> -P, --pot </td><td class="col1 leftalign"> output <acronym title="Gettext Portable Object">PO</acronym> Templates (.pot) rather than <acronym title="Gettext Portable Object">PO</acronym> files (.po) </td>
+ </tr>
+</table>
+
+<p>
+
+Options (po2symb):
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0 leftalign"> --version </td><td class="col1 leftalign"> show program's version number and exit </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0 leftalign"> -h, --help </td><td class="col1 leftalign"> show this help message and exit </td>
+ </tr>
+ <tr class="row2">
+ <td class="col0 leftalign"> --manpage </td><td class="col1 leftalign"> output a manpage based on the help </td>
+ </tr>
+ <tr class="row3">
+ <td class="col0 leftalign"> <a href="toolkit-progress_progress.html" class="wikilink1" title="toolkit-progress_progress.html">--progress=PROGRESS</a> </td><td class="col1 leftalign"> show progress as: dots, none, bar, names, verbose </td>
+ </tr>
+ <tr class="row4">
+ <td class="col0 leftalign"> <a href="toolkit-errorlevel_errorlevel.html" class="wikilink1" title="toolkit-errorlevel_errorlevel.html">--errorlevel=ERRORLEVEL</a> </td><td class="col1 leftalign"> show errorlevel as: none, message, exception, traceback </td>
+ </tr>
+ <tr class="row5">
+ <td class="col0 leftalign"> -i INPUT, --input=INPUT </td><td class="col1 leftalign"> read from INPUT in po, pot formats </td>
+ </tr>
+ <tr class="row6">
+ <td class="col0 leftalign"> -x EXCLUDE, --exclude=EXCLUDE </td><td class="col1 leftalign"> exclude names matching EXCLUDE from input paths </td>
+ </tr>
+ <tr class="row7">
+ <td class="col0 leftalign"> -o OUTPUT, --output=OUTPUT </td><td class="col1 leftalign"> write to OUTPUT in php format </td>
+ </tr>
+ <tr class="row8">
+ <td class="col0 leftalign"> -t TEMPLATE, --template=TEMPLATE </td><td class="col1 leftalign"> read from TEMPLATE in the Symbian translation format </td>
+ </tr>
+ <tr class="row9">
+ <td class="col0 leftalign"> <a href="toolkit-psyco_mode.html" class="wikilink1" title="toolkit-psyco_mode.html">--psyco=MODE</a> </td><td class="col1 leftalign"> use psyco to speed up the operation, modes: none, full, profile </td>
+ </tr>
+</table>
+
+</div>
+<!-- SECTION "Usage" [399-2499] -->
+<h2><a name="examples" id="examples">Examples</a></h2>
+<div class="level2">
+
+</div>
+<!-- SECTION "Examples" [2500-2521] -->
+<h3><a name="symb2po1" id="symb2po1">symb2po</a></h3>
+<div class="level3">
+
+<p>
+
+The most common use of symb2po is to generate a <acronym title="Gettext Portable Object Template">POT</acronym> (<acronym title="Gettext Portable Object">PO</acronym> template) file from the English translation (note that the tool currently expects the Symbian translation file to end with the extension .r01, which is the code for English translation files). This file then serves as the source document from which all translations will be derived.
+</p>
+
+<p>
+To create a <acronym title="Gettext Portable Object Template">POT</acronym> file called <code>my_project.pot</code> from the source Symbian translation file <code>my_project.r01</code>, the following is executed:
+
+</p>
+<pre class="code">symb2po my_project.r01 my_project.pot</pre>
+
+<p>
+
+In order to re-use existing translations in the Symbian translation format, symb2po can merge that translation into the source Symbian translation to produce a translated <acronym title="Gettext Portable Object">PO</acronym> file. The existing Symbian translation file is specified with the <code>-t</code> flag.
+</p>
+
+<p>
+To create a file called <code>my_project-en-fr.po</code> (this is not the recommended <acronym title="Gettext Portable Object">PO</acronym> naming convention) from the source Symbian translation file <code>my_project.r01</code> and its French translation <code>my_project.r02</code>, execute:
+
+</p>
+<pre class="code">symb2po -t my_project.r02 my_project.r01 my_project-en-fr.po</pre>
+
+<p>
+
+<p><div class="noteclassic">Ensure that the English and French files are well aligned; in other words, no changes to the source text should have occurred since the translation was done.
+</div></p>
+</p>
+
+</div>
+<!-- SECTION "symb2po" [2522-3764] -->
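The merge performed with <code>-t</code> can be pictured as pairing entries by identifier across the two language files. A rough Python sketch follows, assuming a simplified line format of <code>rls_string &lt;identifier&gt; "&lt;text&gt;"</code>; the real Symbian files are more involved, so treat this purely as an illustration:

```python
import re

LINE = re.compile(r'rls_string\s+(\w+)\s+"(.*)"')

def parse(lines):
    """Map identifier -> text for lines like: rls_string r_string_open "Open"."""
    return dict(LINE.match(line).groups() for line in lines if LINE.match(line))

def merge(english_lines, french_lines):
    """Pair source and target text by shared identifier, as symb2po -t does conceptually."""
    en, fr = parse(english_lines), parse(french_lines)
    return {ident: (text, fr.get(ident, "")) for ident, text in en.items()}

pairs = merge(['rls_string r_string_open "Open"'],
              ['rls_string r_string_open "Ouvrir"'])
print(pairs)  # {'r_string_open': ('Open', 'Ouvrir')}
```

Identifiers present only in the source file come through with an empty target, which is exactly what an untranslated unit looks like in the resulting PO file.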
+<h3><a name="po2symb" id="po2symb">po2symb</a></h3>
+<div class="level3">
+
+<p>
+
+The po2symb tool is used to extract the translations in a <acronym title="Gettext Portable Object">PO</acronym> file into a template Symbian translation file. The template Symbian translation file supplies the "shape" of the generated file (formatting and comments).
+</p>
+
+<p>
+In order to produce a French Symbian translation file using the English Symbian translation file <code>my_project.r01</code> as a template and the <acronym title="Gettext Portable Object">PO</acronym> file <code>my_project-en-fr.po</code> (this is not the recommended <acronym title="Gettext Portable Object">PO</acronym> naming convention) as the source document, execute:
+
+</p>
+<pre class="code">po2symb -t my_project.r01 my_project-en-fr.po my_project.r02</pre>
+
+</div>
+<!-- SECTION "po2symb" [3765-4314] -->
+<h2><a name="notes" id="notes">Notes</a></h2>
+<div class="level2">
+
+<p>
+
+The tools won't touch anything appearing between lines marked as
+
+</p>
+<pre class="code">// DO NOT TRANSLATE</pre>
+
+<p>
+
+The string <code>r_string_languagegroup_name</code> is used to set the <code>Language-Team</code> <acronym title="Gettext Portable Object">PO</acronym> header field.
+</p>
+
+<p>
+The Symbian translation header field <code>Author</code> is used to set the <code>Last-Translator</code> <acronym title="Gettext Portable Object">PO</acronym> header field.
+</p>
+
+</div>
+<!-- SECTION "Notes" [4315-4625] -->
+<h2><a name="issues" id="issues">Issues</a></h2>
+<div class="level2">
+
+<p>
+
+The file format is heavily tilted towards the Buddycloud implementation.
+</p>
+
+<p>
+The tools do nothing with the <code>Name</code> and <code>Description</code> Symbian header fields. This means that <code>po2symb</code> will just copy the values in the supplied template. So you might see something such as
+
+</p>
+<pre class="code">Description : Localisation File : English</pre>
+
+<p>
+
+in a generated French translation file.
+</p>
+
+</div>
+<!-- SECTION "Issues" [4626-5003] -->
+<h2><a name="bugs" id="bugs">Bugs</a></h2>
+<div class="level2">
+
+<p>
+
+Probably many, since this software hasn't been tested much yet.
+</p>
+
+</div>
+<!-- SECTION "Bugs" [5004-] --></body>
+</html>
Modified: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tbx.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tbx.html?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tbx.html (original)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tbx.html Sun Feb 8 16:49:31 2009
@@ -18,7 +18,7 @@
<div class="level1">
<p>
-<acronym title="TermBase eXchange">TBX</acronym> is the LISA standard for terminology and term exchange. See <a href="http://www.lisa.org/standards/tbx/" class="urlextern" title="http://www.lisa.org/standards/tbx/">http://www.lisa.org/standards/tbx/</a>.
+<acronym title="TermBase eXchange">TBX</acronym> is the LISA standard for terminology and term exchange.
</p>
<p>
@@ -26,12 +26,25 @@
</p>
</div>
-<!-- SECTION "TBX" [1-183] -->
+<!-- SECTION "TBX" [1-139] -->
+<h2><a name="references" id="references">References</a></h2>
+<div class="level2">
+<ul>
+<li class="level1"><div class="li"> <a href="http://www.lisa.org/standards/tbx/" class="urlextern" title="http://www.lisa.org/standards/tbx/">Standard home page</a></div>
+</li>
+<li class="level1"><div class="li"> <a href="http://www.lisa.org/TBX-Specification.33.0.html" class="urlextern" title="http://www.lisa.org/TBX-Specification.33.0.html">Specification</a></div>
+</li>
+<li class="level1"><div class="li"> <a href="http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=45797" class="urlextern" title="http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=45797">ISO 30042</a> - <acronym title="TermBase eXchange">TBX</acronym> is an approved <acronym title="International Organization for Standardization">ISO</acronym> standard</div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "References" [140-434] -->
<h2><a name="standard_conformance" id="standard_conformance">Standard conformance</a></h2>
<div class="level2">
</div>
-<!-- SECTION "Standard conformance" [184-216] -->
+<!-- SECTION "Standard conformance" [435-467] -->
<h3><a name="done" id="done">Done</a></h3>
<div class="level3">
<ul>
@@ -44,7 +57,7 @@
</ul>
</div>
-<!-- SECTION "Done" [217-343] -->
+<!-- SECTION "Done" [468-594] -->
<h3><a name="todo" id="todo">Todo</a></h3>
<div class="level3">
<ul>
@@ -69,7 +82,7 @@
</ul>
</div>
-<!-- SECTION "Todo" [344-544] -->
+<!-- SECTION "Todo" [595-795] -->
<h2><a name="implementation_notes_for_missing_features" id="implementation_notes_for_missing_features">Implementation notes for missing features</a></h2>
<div class="level2">
@@ -84,7 +97,7 @@
</ul>
</div>
-<!-- SECTION "Implementation notes for missing features" [545-695] -->
+<!-- SECTION "Implementation notes for missing features" [796-946] -->
<h3><a name="synonyms" id="synonyms">Synonyms</a></h3>
<div class="level3">
@@ -104,7 +117,7 @@
</p>
</div>
-<!-- SECTION "Synonyms" [696-964] -->
+<!-- SECTION "Synonyms" [947-1215] -->
<h3><a name="definition" id="definition">Definition</a></h3>
<div class="level3">
@@ -123,7 +136,7 @@
</p>
</div>
-<!-- SECTION "Definition" [965-1230] -->
+<!-- SECTION "Definition" [1216-1481] -->
<h3><a name="context" id="context">Context</a></h3>
<div class="level3">
@@ -139,7 +152,7 @@
</p>
</div>
-<!-- SECTION "Context" [1231-1429] -->
+<!-- SECTION "Context" [1482-1680] -->
<h3><a name="parts_of_speech" id="parts_of_speech">Parts of speech</a></h3>
<div class="level3">
@@ -155,7 +168,7 @@
</p>
</div>
-<!-- SECTION "Parts of speech" [1430-1577] -->
+<!-- SECTION "Parts of speech" [1681-1828] -->
<h3><a name="cross_reference" id="cross_reference">Cross reference</a></h3>
<div class="level3">
@@ -166,7 +179,7 @@
</p>
</div>
-<!-- SECTION "Cross reference" [1578-1656] -->
+<!-- SECTION "Cross reference" [1829-1907] -->
<h3><a name="abbreviations" id="abbreviations">Abbreviations</a></h3>
<div class="level3">
@@ -178,5 +191,5 @@
</p>
</div>
-<!-- SECTION "Abbreviations" [1657-] --></body>
+<!-- SECTION "Abbreviations" [1908-] --></body>
</html>
Added: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tiki2po.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tiki2po.html?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tiki2po.html (added)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tiki2po.html Sun Feb 8 16:49:31 2009
@@ -1,0 +1,156 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html>
+<head>
+ <title></title>
+ <link rel="stylesheet" media="screen" type="text/css" href="./style.css" />
+ <link rel="stylesheet" media="screen" type="text/css" href="./design.css" />
+ <link rel="stylesheet" media="print" type="text/css" href="./print.css" />
+
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+</head>
+<body>
+<a href=".">start</a><br />
+
+
+
+<h1><a name="tiki2po" id="tiki2po">tiki2po</a></h1>
+<div class="level1">
+
+<p>
+
+Converts <a href="http://tikiwiki.org" class="urlextern" title="http://tikiwiki.org">TikiWiki</a> language.php files to Gettext <acronym title="Gettext Portable Object">PO</acronym> format.
+</p>
+
+</div>
+<!-- SECTION "tiki2po" [1-107] -->
+<h2><a name="usage" id="usage">Usage</a></h2>
+<div class="level2">
+<pre class="code">tiki2po [options] <tiki> <po>
+po2tiki [options] <po> <tiki></pre>
+
+<p>
+
+Where:
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0 leftalign"> <tiki> </td><td class="col1 leftalign"> is a valid language.php file for TikiWiki </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0 leftalign"> <po> </td><td class="col1 leftalign"> is a <acronym title="Gettext Portable Object">PO</acronym> file </td>
+ </tr>
+</table>
+
+<p>
+
+Options (tiki2po):
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0 leftalign"> --version </td><td class="col1 leftalign"> show program's version number and exit </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0 leftalign"> -h, --help </td><td class="col1 leftalign"> show this help message and exit </td>
+ </tr>
+ <tr class="row2">
+ <td class="col0 leftalign"> --manpage </td><td class="col1 leftalign"> output a manpage based on the help </td>
+ </tr>
+ <tr class="row3">
+ <td class="col0 leftalign"> <a href="toolkit-progress_progress.html" class="wikilink1" title="toolkit-progress_progress.html">--progress=PROGRESS</a> </td><td class="col1 leftalign"> show progress as: dots, none, bar, names, verbose </td>
+ </tr>
+ <tr class="row4">
+ <td class="col0 leftalign"> <a href="toolkit-errorlevel_errorlevel.html" class="wikilink1" title="toolkit-errorlevel_errorlevel.html">--errorlevel=ERRORLEVEL</a> </td><td class="col1 leftalign"> show errorlevel as: none, message, exception, traceback </td>
+ </tr>
+ <tr class="row5">
+ <td class="col0 leftalign"> -i INPUT, --input=INPUT </td><td class="col1 leftalign"> read from INPUT in php format </td>
+ </tr>
+ <tr class="row6">
+ <td class="col0 leftalign"> -x EXCLUDE, --exclude=EXCLUDE </td><td class="col1 leftalign"> exclude names matching EXCLUDE from input paths </td>
+ </tr>
+ <tr class="row7">
+ <td class="col0 leftalign"> -o OUTPUT, --output=OUTPUT </td><td class="col1 leftalign"> write to OUTPUT in po, pot formats </td>
+ </tr>
+ <tr class="row8">
+ <td class="col0"> --include-unused </td><td class="col1"> When converting, include strings in the "unused" section </td>
+ </tr>
+</table>
+
+<p>
+
+Options (po2tiki):
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0 leftalign"> --version </td><td class="col1 leftalign"> show program's version number and exit </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0 leftalign"> -h, --help </td><td class="col1 leftalign"> show this help message and exit </td>
+ </tr>
+ <tr class="row2">
+ <td class="col0 leftalign"> --manpage </td><td class="col1 leftalign"> output a manpage based on the help </td>
+ </tr>
+ <tr class="row3">
+ <td class="col0 leftalign"> <a href="toolkit-progress_progress.html" class="wikilink1" title="toolkit-progress_progress.html">--progress=PROGRESS</a> </td><td class="col1 leftalign"> show progress as: dots, none, bar, names, verbose </td>
+ </tr>
+ <tr class="row4">
+ <td class="col0 leftalign"> <a href="toolkit-errorlevel_errorlevel.html" class="wikilink1" title="toolkit-errorlevel_errorlevel.html">--errorlevel=ERRORLEVEL</a> </td><td class="col1 leftalign"> show errorlevel as: none, message, exception, traceback </td>
+ </tr>
+ <tr class="row5">
+ <td class="col0 leftalign"> -i INPUT, --input=INPUT </td><td class="col1 leftalign"> read from INPUT in po, pot formats </td>
+ </tr>
+ <tr class="row6">
+ <td class="col0 leftalign"> -x EXCLUDE, --exclude=EXCLUDE </td><td class="col1 leftalign"> exclude names matching EXCLUDE from input paths </td>
+ </tr>
+ <tr class="row7">
+ <td class="col0 leftalign"> -o OUTPUT, --output=OUTPUT </td><td class="col1 leftalign"> write to OUTPUT in php format </td>
+ </tr>
+</table>
+
+</div>
+<!-- SECTION "Usage" [108-1582] -->
+<h2><a name="examples" id="examples">Examples</a></h2>
+<div class="level2">
+
+<p>
+
+These examples demonstrate the use of tiki2po:
+
+</p>
+<pre class="code">tiki2po language.php language.po</pre>
+
+<p>
+
+Convert the tiki language.php file to .po
+
+</p>
+<pre class="code">po2tiki language.po language.php</pre>
+
+<p>
+
+Convert a .po file to a tiki language.php file
+</p>
+
+</div>
+<!-- SECTION "Examples" [1583-1816] -->
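TikiWiki language.php files essentially hold a PHP array of <code>"source" =&gt; "translation"</code> pairs. The extraction tiki2po performs can be pictured roughly as below; this is an illustrative sketch, not the converter's actual code:

```python
import re

# Matches a quoted source string, '=>', and a quoted translation,
# tolerating backslash-escaped characters inside the quotes.
ENTRY = re.compile(r'"((?:[^"\\]|\\.)*)"\s*=>\s*"((?:[^"\\]|\\.)*)"')

def extract_pairs(php_source):
    """Pull (source, translation) pairs out of language.php-style array text."""
    return ENTRY.findall(php_source)

php = '''
$lang = array(
"Login" => "Connexion",
"Search" => "Rechercher",
);
'''
for source, target in extract_pairs(php):
    print(source, '->', target)
```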
+<h2><a name="notes" id="notes">Notes</a></h2>
+<div class="level2">
+<ul>
+<li class="level1"><div class="li"> Templates are not currently supported.</div>
+</li>
+</ul>
+
+</div>
+<!-- SECTION "Notes" [1817-1879] -->
+<h2><a name="bugs" id="bugs">Bugs</a></h2>
+<div class="level2">
+
+<p>
+
+None known
+
+</p>
+
+</div>
+<!-- SECTION "Bugs" [1880-] --></body>
+</html>
Added: translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tmserver.html
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tmserver.html?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tmserver.html (added)
+++ translate-toolkit/branches/upstream/current/translate/doc/user/toolkit-tmserver.html Sun Feb 8 16:49:31 2009
@@ -1,0 +1,100 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html>
+<head>
+ <title></title>
+ <link rel="stylesheet" media="screen" type="text/css" href="./style.css" />
+ <link rel="stylesheet" media="screen" type="text/css" href="./design.css" />
+ <link rel="stylesheet" media="print" type="text/css" href="./print.css" />
+
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+</head>
+<body>
+<a href=".">start</a><br />
+
+
+
+<h1><a name="tmserver" id="tmserver">tmserver</a></h1>
+<div class="level1">
+
+<p>
+tmserver is a Translation Memory service that can be queried over <acronym title="Hyper Text Transfer Protocol">HTTP</acronym> using simple REST-like <acronym title="Uniform Resource Locator">URL</acronym>s, with data exchanged between server and client encoded in JSON.
+</p>
+
+</div>
+<!-- SECTION "tmserver" [1-192] -->
+<h2><a name="usage" id="usage">Usage</a></h2>
+<div class="level2">
+
+<p>
+tmserver.py --bind=HOSTNAME --port=PORT [--tmdb=TMDBFILE] [--import-translation-file=TMFILE [--import-source-lang=SOURCE_LANG] [--import-target-lang=TARGET_LANG]]
+</p>
+
+<p>
+TMDBFILE is the sqlite database file containing <a href="toolkit-tmdb.html" class="wikilink2" title="toolkit-tmdb.html">tmdb</a> translation memory data; if not specified, a new temporary database is created.
+</p>
+
+<p>
+TMFILE is a translation file (po, xliff, etc.) that should be imported into the database (mostly useful when no tmdb file is specified).
+</p>
+
+<p>
+Options:
+</p>
+<table class="inline">
+ <tr class="row0">
+ <td class="col0"> -h, --help </td><td class="col1"> show this help message and exit </td>
+ </tr>
+ <tr class="row1">
+ <td class="col0"> -d TMDBFILE, --tmdb=TMDBFILE </td><td class="col1"> translation memory database </td>
+ </tr>
+ <tr class="row2">
+ <td class="col0"> -f TMFILES, --import-translation-file=TMFILE </td><td class="col1"> translation file to import into the database </td>
+ </tr>
+ <tr class="row3">
+ <td class="col0"> -t TARGET_LANG, --import-target-lang=TARGET_LANG </td><td class="col1"> target language of translation files </td>
+ </tr>
+ <tr class="row4">
+ <td class="col0"> -s SOURCE_LANG, --import-source-lang=SOURCE_LANG </td><td class="col1"> source language of translation files </td>
+ </tr>
+ <tr class="row5">
+ <td class="col0"> -b BIND, --bind=HOSTNAME </td><td class="col1"> adress to bind server to </td>
+ </tr>
+ <tr class="row6">
+ <td class="col0"> -p PORT, --port=PORT </td><td class="col1"> port to listen on </td>
+ </tr>
+</table>
+
+</div>
+<!-- SECTION "Usage" [193-1151] -->
+<h2><a name="testing" id="testing">Testing</a></h2>
+<div class="level2">
+
+<p>
+The easiest way to run the server for testing is to pass it a large translation file (maybe generated by <a href="toolkit-pocompendium.html" class="wikilink1" title="toolkit-pocompendium.html">pocompendium</a>) to create a tmdb database on the fly.
+
+</p>
+<pre class="code"> tmserver -b localhost -p 8080 -f compendium.po -s en_US -t ar</pre>
+
+<p>
+
+The server can be queried from a web browser. The URL takes the form <a href="http://HOST:PORT/tmserver/SOURCE_LANG/TARGET_LANG/unit/STRING" class="urlextern" title="http://HOST:PORT/tmserver/SOURCE_LANG/TARGET_LANG/unit/STRING">http://HOST:PORT/tmserver/SOURCE_LANG/TARGET_LANG/unit/STRING</a>
+</p>
+
+<p>
+So to see suggestions for “open file”, try the URL <a href="http://localhost:8080/tmserver/en_US/ar/unit/open+file" class="urlextern" title="http://localhost:8080/tmserver/en_US/ar/unit/open+file">http://localhost:8080/tmserver/en_US/ar/unit/open+file</a>
+</p>
+
+</div>
+<!-- SECTION "Testing" [1152-1627] -->
+<h2><a name="api" id="api">API</a></h2>
+<div class="level2">
+
+</div>
+<!-- SECTION "API" [1628-1643] -->
+<h2><a name="example_client" id="example_client">Example Client</a></h2>
+<div class="level2">
+
+</div>
+<!-- SECTION "Example Client" [1644-] --></body>
+</html>
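The Example Client section of the tmserver page above is still empty; here is a minimal sketch of what such a client could look like. Only the URL layout comes from the Testing section; the helper names are invented for illustration, and no assumption is made about the schema of the JSON reply beyond it being valid JSON.

```python
# Hypothetical tmserver client sketch (names made up for illustration).
import json
try:
    from urllib.parse import quote_plus      # Python 3
    from urllib.request import urlopen
except ImportError:                          # Python 2, current at the time
    from urllib import quote_plus, urlopen

def make_query_url(host, port, source_lang, target_lang, text):
    """Build the REST-style lookup URL described in the Testing section."""
    return "http://%s:%s/tmserver/%s/%s/unit/%s" % (
        host, port, source_lang, target_lang, quote_plus(text))

def suggestions(host, port, source_lang, target_lang, text):
    """Fetch translation suggestions and decode the JSON payload."""
    reply = urlopen(make_query_url(host, port, source_lang,
                                   target_lang, text)).read()
    return json.loads(reply.decode("utf-8"))
```

For the “open file” example from the Testing section, `make_query_url("localhost", 8080, "en_US", "ar", "open file")` produces exactly the URL shown there.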
Modified: translate-toolkit/branches/upstream/current/translate/filters/checks.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/filters/checks.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/filters/checks.py (original)
+++ translate-toolkit/branches/upstream/current/translate/filters/checks.py Sun Feb 8 16:49:31 2009
@@ -66,6 +66,8 @@
# The whole tag
tag_re = re.compile("<[^>]+>")
+gconf_attribute_re = re.compile('"[a-z_]+?"')
+
def tagname(string):
"""Returns the name of the XML/HTML tag in string"""
return tagname_re.match(string).groups(1)[0]
@@ -153,15 +155,15 @@
self.updatetargetlanguage(targetlanguage)
self.sourcelang = factory.getlanguage('en')
# Inits with default values
- self.punctuation = self._init_default(data.forceunicode(punctuation), self.lang.punctuation)
- self.endpunctuation = self._init_default(data.forceunicode(endpunctuation), self.lang.sentenceend)
+ self.punctuation = self._init_default(data.normalized_unicode(punctuation), self.lang.punctuation)
+ self.endpunctuation = self._init_default(data.normalized_unicode(endpunctuation), self.lang.sentenceend)
self.ignoretags = self._init_default(ignoretags, common_ignoretags)
self.canchangetags = self._init_default(canchangetags, common_canchangetags)
# Other data
# TODO: allow user configuration of untranslatable words
- self.notranslatewords = dict.fromkeys([data.forceunicode(key) for key in self._init_list(notranslatewords)])
- self.musttranslatewords = dict.fromkeys([data.forceunicode(key) for key in self._init_list(musttranslatewords)])
- validchars = data.forceunicode(validchars)
+ self.notranslatewords = dict.fromkeys([data.normalized_unicode(key) for key in self._init_list(notranslatewords)])
+ self.musttranslatewords = dict.fromkeys([data.normalized_unicode(key) for key in self._init_list(musttranslatewords)])
+ validchars = data.normalized_unicode(validchars)
self.validcharsmap = {}
self.updatevalidchars(validchars)
@@ -208,7 +210,7 @@
"""updates the map that eliminates valid characters"""
if validchars is None:
return True
- validcharsmap = dict([(ord(validchar), None) for validchar in data.forceunicode(validchars)])
+ validcharsmap = dict([(ord(validchar), None) for validchar in data.normalized_unicode(validchars)])
self.validcharsmap.update(validcharsmap)
def updatetargetlanguage(self, langcode):
@@ -383,9 +385,10 @@
def run_filters(self, unit):
"""Do some optimisation by caching some data of the unit for the benefit
of run_test()."""
- self.str1 = data.forceunicode(unit.source)
- self.str2 = data.forceunicode(unit.target)
+ self.str1 = data.normalized_unicode(unit.source)
+ self.str2 = data.normalized_unicode(unit.target)
self.hasplural = unit.hasplural()
+ self.locations = unit.getlocations()
return super(TranslationChecker, self).run_filters(unit)
class TeeChecker:
@@ -484,16 +487,16 @@
def escapes(self, str1, str2):
"""checks whether escaping is consistent between the two strings"""
- if not helpers.countsmatch(str1, str2, ("\\", "\\\\")):
- escapes1 = u", ".join([u"'%s'" % word for word in str1.split() if "\\" in word])
- escapes2 = u", ".join([u"'%s'" % word for word in str2.split() if "\\" in word])
+ if not helpers.countsmatch(str1, str2, (u"\\", u"\\\\")):
+ escapes1 = u", ".join([u"'%s'" % word for word in str1.split() if u"\\" in word])
+ escapes2 = u", ".join([u"'%s'" % word for word in str2.split() if u"\\" in word])
raise SeriousFilterFailure(u"escapes in original (%s) don't match escapes in translation (%s)" % (escapes1, escapes2))
else:
return True
def newlines(self, str1, str2):
"""checks whether newlines are consistent between the two strings"""
- if not helpers.countsmatch(str1, str2, ("\n", "\r")):
+ if not helpers.countsmatch(str1, str2, (u"\n", u"\r")):
raise FilterFailure(u"line endings in original don't match line endings in translation")
else:
return True
@@ -510,7 +513,7 @@
"""checks whether singlequoting is consistent between the two strings"""
str1 = self.filterwordswithpunctuation(self.filteraccelerators(self.filtervariables(str1)))
str2 = self.filterwordswithpunctuation(self.filteraccelerators(self.filtervariables(str2)))
- return helpers.countsmatch(str1, str2, ("'", "''", "\\'"))
+ return helpers.countsmatch(str1, str2, (u"'", u"''", u"\\'"))
def doublequoting(self, str1, str2):
"""checks whether doublequoting is consistent between the two strings"""
@@ -519,13 +522,13 @@
str1 = self.config.lang.punctranslate(str1)
str2 = self.filteraccelerators(self.filtervariables(str2))
str2 = self.filterxml(str2)
- return helpers.countsmatch(str1, str2, ('"', '""', '\\"', u"«", u"»"))
+ return helpers.countsmatch(str1, str2, (u'"', u'""', u'\\"', u"«", u"»"))
def doublespacing(self, str1, str2):
"""checks for bad double-spaces by comparing to original"""
str1 = self.filteraccelerators(str1)
str2 = self.filteraccelerators(str2)
- return helpers.countmatch(str1, str2, " ")
+ return helpers.countmatch(str1, str2, u" ")
def puncspacing(self, str1, str2):
"""checks for bad spacing after punctuation"""
@@ -539,8 +542,8 @@
plaincount2 = str2.count(puncchar)
if not plaincount1 or plaincount1 != plaincount2:
continue
- spacecount1 = str1.count(puncchar+" ")
- spacecount2 = str2.count(puncchar+" ")
+ spacecount1 = str1.count(puncchar + u" ")
+ spacecount2 = str2.count(puncchar + u" ")
if spacecount1 != spacecount2:
# handle extra spaces that are because of transposed punctuation
if str1.endswith(puncchar) != str2.endswith(puncchar) and abs(spacecount1-spacecount2) == 1:
@@ -669,9 +672,9 @@
mismatch1.extend(vars1)
mismatch2.extend(vars2)
if mismatch1:
- messages.append(u"do not translate: %s" % ", ".join(mismatch1))
+ messages.append(u"do not translate: %s" % u", ".join(mismatch1))
elif mismatch2:
- messages.append(u"translation contains variables not in original: %s" % ", ".join(mismatch2))
+ messages.append(u"translation contains variables not in original: %s" % u", ".join(mismatch2))
if messages and mismatch1:
raise SeriousFilterFailure(messages)
elif messages:
@@ -718,7 +721,9 @@
str1 = self.filteraccelerators(self.filtervariables(self.filterwordswithpunctuation(str1)))
str1 = self.config.lang.punctranslate(str1)
str2 = self.filteraccelerators(self.filtervariables(self.filterwordswithpunctuation(str2)))
- return helpers.funcmatch(str1, str2, decoration.puncend, self.config.endpunctuation)
+ str1 = str1.rstrip()
+ str2 = str2.rstrip()
+ return helpers.funcmatch(str1, str2, decoration.puncend, self.config.endpunctuation + u":")
def purepunc(self, str1, str2):
"""checks that strings that are purely punctuation are not changed"""
@@ -735,17 +740,17 @@
messages = []
missing = []
extra = []
- for bracket in ("[", "]", "{", "}", "(", ")"):
+ for bracket in (u"[", u"]", u"{", u"}", u"(", u")"):
count1 = str1.count(bracket)
count2 = str2.count(bracket)
if count2 < count1:
- missing.append("'%s'" % bracket)
+ missing.append(u"'%s'" % bracket)
elif count2 > count1:
- extra.append("'%s'" % bracket)
+ extra.append(u"'%s'" % bracket)
if missing:
- messages.append(u"translation is missing %s" % ", ".join(missing))
+ messages.append(u"translation is missing %s" % u", ".join(missing))
if extra:
- messages.append(u"translation has extra %s" % ", ".join(extra))
+ messages.append(u"translation has extra %s" % u", ".join(extra))
if messages:
raise FilterFailure(messages)
return True
@@ -762,8 +767,8 @@
"""checks that options are not translated"""
str1 = self.filtervariables(str1)
for word1 in str1.split():
- if word1 != "--" and word1.startswith("--") and word1[-1].isalnum():
- parts = word1.split("=")
+ if word1 != u"--" and word1.startswith(u"--") and word1[-1].isalnum():
+ parts = word1.split(u"=")
if not parts[0] in str2:
raise FilterFailure(u"The option %s does not occur or is translated in the translation." % parts[0])
if len(parts) > 1 and parts[1] in str2:
@@ -788,7 +793,7 @@
str2 = self.removevariables(str2)
# TODO: review this. The 'I' is specific to English, so it probably serves
# no purpose to get sourcelang.sentenceend
- str1 = re.sub(u"[^%s]( I )" % self.config.sourcelang.sentenceend, " i ", str1)
+ str1 = re.sub(u"[^%s]( I )" % self.config.sourcelang.sentenceend, u" i ", str1)
capitals1 = helpers.filtercount(str1, unicode.isupper)
capitals2 = helpers.filtercount(str2, unicode.isupper)
alpha1 = helpers.filtercount(str1, unicode.isalpha)
@@ -827,17 +832,17 @@
if str2.find(word) == -1:
acronyms.append(word)
if acronyms:
- raise FilterFailure("acronyms should not be translated: " + ", ".join(acronyms))
+ raise FilterFailure(u"acronyms should not be translated: " + u", ".join(acronyms))
return True
def doublewords(self, str1, str2):
"""checks for repeated words in the translation"""
lastword = ""
without_newlines = "\n".join(str2.split("\n"))
- words = self.filteraccelerators(self.removevariables(without_newlines)).replace(".", "").lower().split()
+ words = self.filteraccelerators(self.removevariables(without_newlines)).replace(u".", u"").lower().split()
for word in words:
if word == lastword:
- raise FilterFailure("The word '%s' is repeated" % word)
+ raise FilterFailure(u"The word '%s' is repeated" % word)
lastword = word
return True
@@ -856,7 +861,7 @@
words2 = self.filteraccelerators(str2).split()
stopwords = [word for word in words1 if word in self.config.notranslatewords and word not in words2]
if stopwords:
- raise FilterFailure("do not translate: %s" % (", ".join(stopwords)))
+ raise FilterFailure(u"do not translate: %s" % (u", ".join(stopwords)))
return True
def musttranslatewords(self, str1, str2):
@@ -869,13 +874,13 @@
#The above is full of strange quotes and things in utf-8 encoding.
#single apostrophe perhaps problematic in words like "doesn't"
for seperator in self.config.punctuation:
- str1 = str1.replace(seperator, " ")
- str2 = str2.replace(seperator, " ")
+ str1 = str1.replace(seperator, u" ")
+ str2 = str2.replace(seperator, u" ")
words1 = self.filteraccelerators(str1).split()
words2 = self.filteraccelerators(str2).split()
stopwords = [word for word in words1 if word in self.config.musttranslatewords and word in words2]
if stopwords:
- raise FilterFailure("please translate: %s" % (", ".join(stopwords)))
+ raise FilterFailure(u"please translate: %s" % (u", ".join(stopwords)))
return True
def validchars(self, str1, str2):
@@ -892,7 +897,7 @@
def filepaths(self, str1, str2):
"""checks that file paths have not been translated"""
for word1 in self.filteraccelerators(str1).split():
- if word1.startswith("/"):
+ if word1.startswith(u"/"):
if not helpers.countsmatch(str1, str2, (word1,)):
return False
return True
@@ -901,7 +906,7 @@
"""checks that XML/HTML tags have not been translated"""
tags1 = tag_re.findall(str1)
if len(tags1) > 0:
- if (len(tags1[0]) == len(str1)) and not "=" in tags1[0]:
+ if (len(tags1[0]) == len(str1)) and not u"=" in tags1[0]:
return True
tags2 = tag_re.findall(str2)
properties1 = tagproperties(tags1, self.config.ignoretags)
@@ -926,11 +931,11 @@
def kdecomments(self, str1, str2):
"""checks to ensure that no KDE style comments appear in the translation"""
- return str2.find("\n_:") == -1 and not str2.startswith("_:")
+ return str2.find(u"\n_:") == -1 and not str2.startswith(u"_:")
def compendiumconflicts(self, str1, str2):
"""checks for Gettext compendium conflicts (#-#-#-#-#)"""
- return str2.find("#-#-#-#-#") == -1
+ return str2.find(u"#-#-#-#-#") == -1
def simpleplurals(self, str1, str2):
"""checks for English style plural(s) for you to review"""
@@ -954,6 +959,7 @@
return True
if not spelling.available:
return True
+ # TODO: filterxml?
str1 = self.filteraccelerators_by_list(self.filtervariables(str1), self.config.sourcelang.validaccel)
str2 = self.filteraccelerators_by_list(self.filtervariables(str2), self.config.lang.validaccel)
ignore1 = []
@@ -988,7 +994,7 @@
"isreview", "notranslatewords", "musttranslatewords",
"emails", "simpleplurals", "urls", "printf",
"tabs", "newlines", "functions", "options",
- "blank", "nplurals"),
+ "blank", "nplurals", "gconf"),
"blank": ("simplecaps", "variables", "startcaps",
"accelerators", "brackets", "endpunc",
"acronyms", "xmltags", "startpunc",
@@ -998,7 +1004,8 @@
"sentencecount", "numbers", "isfuzzy",
"isreview", "notranslatewords", "musttranslatewords",
"emails", "simpleplurals", "urls", "printf",
- "tabs", "newlines", "functions", "options"),
+ "tabs", "newlines", "functions", "options",
+ "gconf"),
"credits": ("simplecaps", "variables", "startcaps",
"accelerators", "brackets", "endpunc",
"acronyms", "xmltags", "startpunc",
@@ -1053,6 +1060,19 @@
checkerconfig.update(mozillaconfig)
StandardChecker.__init__(self, **kwargs)
+drupalconfig = CheckerConfig(
+ varmatches = [("%", None), ("@", None)],
+ )
+
+class DrupalChecker(StandardChecker):
+ def __init__(self, **kwargs):
+ checkerconfig = kwargs.get("checkerconfig", None)
+ if checkerconfig is None:
+ checkerconfig = CheckerConfig()
+ kwargs["checkerconfig"] = checkerconfig
+ checkerconfig.update(drupalconfig)
+ StandardChecker.__init__(self, **kwargs)
+
gnomeconfig = CheckerConfig(
accelmarkers = ["_"],
varmatches = [("%", 1), ("$(", ")")],
@@ -1067,6 +1087,17 @@
kwargs["checkerconfig"] = checkerconfig
checkerconfig.update(gnomeconfig)
StandardChecker.__init__(self, **kwargs)
+
+ def gconf(self, str1, str2):
+ """Checks if we have any gconf config settings translated."""
+ for location in self.locations:
+ if location.find('schemas.in') != -1:
+ gconf_attributes = gconf_attribute_re.findall(str1)
+ #stopwords = [word for word in words1 if word in self.config.notranslatewords and word not in words2]
+ stopwords = [word for word in gconf_attributes if word[1:-1] not in str2]
+ if stopwords:
+ raise FilterFailure(u"do not translate gconf attribute: %s" % (u", ".join(stopwords)))
+ return True
kdeconfig = CheckerConfig(
accelmarkers = ["&"],
@@ -1101,7 +1132,8 @@
"kde": KdeChecker,
"wx": KdeChecker,
"gnome": GnomeChecker,
- "creativecommons": CCLicenseChecker
+ "creativecommons": CCLicenseChecker,
+ "drupal": DrupalChecker,
}
@@ -1140,8 +1172,8 @@
def runtests(str1, str2, ignorelist=()):
"""verifies that the tests pass for a pair of strings"""
from translate.storage import base
- str1 = data.forceunicode(str1)
- str2 = data.forceunicode(str2)
+ str1 = data.normalized_unicode(str1)
+ str2 = data.normalized_unicode(str2)
unit = base.TranslationUnit(str1)
unit.target = str2
checker = StandardChecker(excludefilters=ignorelist)
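The new `gconf` check added to checks.py above only fires for units whose location contains `schemas.in`; its core logic can be exercised standalone (the function name here is ours, the regex and list comprehension are the ones from the diff):

```python
import re

# The pattern compiled at the top of checks.py: a double-quoted token of
# lower-case letters and underscores, e.g. "toolbar_visible".
gconf_attribute_re = re.compile('"[a-z_]+?"')

def untranslated_gconf_attributes(source, target):
    """Return the quoted gconf keys from the source string whose bare name
    no longer occurs in the target -- they were probably translated."""
    attributes = gconf_attribute_re.findall(source)
    # word[1:-1] strips the surrounding double quotes, as in the real check
    return [word for word in attributes if word[1:-1] not in target]
```

A translation that keeps the key verbatim passes; one that localises it is reported, which is exactly when the checker raises its `FilterFailure`.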
Modified: translate-toolkit/branches/upstream/current/translate/filters/pofilter.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/filters/pofilter.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/filters/pofilter.py (original)
+++ translate-toolkit/branches/upstream/current/translate/filters/pofilter.py Sun Feb 8 16:49:31 2009
@@ -86,7 +86,8 @@
if filterresult:
if filterresult != autocorrect:
                for filtername, filtermessage in filterresult.iteritems():
- unit.adderror(filtername, filtermessage)
+ if self.options.addnotes:
+ unit.adderror(filtername, filtermessage)
if isinstance(filtermessage, checks.SeriousFilterFailure):
unit.markfuzzy()
newtransfile.addunit(unit)
@@ -180,6 +181,9 @@
parser.add_option("", "--header", dest="includeheader",
action="store_true", default=False,
help="include a PO header in the output")
+ parser.add_option("", "--nonotes", dest="addnotes",
+ action="store_false", default=True,
+ help="don't add notes about the errors")
parser.add_option("", "--autocorrect", dest="autocorrect",
action="store_true", default=False,
help="output automatic corrections where possible rather than describing issues")
@@ -191,6 +195,9 @@
parser.add_option("", "--mozilla", dest="filterclass",
action="store_const", default=None, const=checks.MozillaChecker,
help="use the standard checks for Mozilla translations")
+ parser.add_option("", "--drupal", dest="filterclass",
+ action="store_const", default=None, const=checks.DrupalChecker,
+ help="use the standard checks for Drupal translations")
parser.add_option("", "--gnome", dest="filterclass",
action="store_const", default=None, const=checks.GnomeChecker,
help="use the standard checks for Gnome translations")
Modified: translate-toolkit/branches/upstream/current/translate/filters/test_checks.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/filters/test_checks.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/filters/test_checks.py (original)
+++ translate-toolkit/branches/upstream/current/translate/filters/test_checks.py Sun Feb 8 16:49:31 2009
@@ -4,7 +4,7 @@
from translate.storage import po
def strprep(str1, str2, message=None):
- return data.forceunicode(str1), data.forceunicode(str2), data.forceunicode(message)
+ return data.normalized_unicode(str1), data.normalized_unicode(str2), data.normalized_unicode(message)
def passes(filterfunction, str1, str2):
"""returns whether the given strings pass on the given test, handling FilterFailures"""
@@ -596,6 +596,10 @@
assert passes(stdchecker.startpunc, "<< Previous", "<< Correct")
assert fails(stdchecker.startpunc, " << Previous", "Wrong")
assert fails(stdchecker.startpunc, "Question", u"\u2026Wrong")
+
+ # The inverted Spanish question mark should be accepted
+ stdchecker = checks.StandardChecker(checks.CheckerConfig(targetlanguage='es'))
+ assert passes(stdchecker.startpunc, "Do you want to reload the file?", u"¿Quiere recargar el archivo?")
def test_startwhitespace():
"""tests startwhitespace"""
Modified: translate-toolkit/branches/upstream/current/translate/filters/test_pofilter.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/filters/test_pofilter.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/filters/test_pofilter.py (original)
+++ translate-toolkit/branches/upstream/current/translate/filters/test_pofilter.py Sun Feb 8 16:49:31 2009
@@ -2,6 +2,7 @@
# -*- coding: utf-8 -*-
from translate.storage import factory
+from translate.storage import xliff
from translate.filters import pofilter
from translate.filters import checks
from translate.misc import wStringIO
@@ -130,6 +131,24 @@
self.unit.markreviewneeded()
filter_result = self.filter(self.translationstore, cmdlineoptions=["--test=isreview"])
assert filter_result.units[0].isreview()
+
+ def test_notes(self):
+ """tests the optional adding of notes"""
+ # let's make sure we trigger the 'long' and/or 'doubleword' test
+ self.unit.target = u"asdf asdf asdf asdf asdf asdf asdf"
+ filter_result = self.filter(self.translationstore)
+ assert len(filter_result.units) == 1
+ assert filter_result.units[0].geterrors()
+
+ # now we remove the existing error. self.unit is changed since we copy
+ # units - very naughty
+ if isinstance(self.unit, xliff.xliffunit):
+ self.unit.removenotes(origin='pofilter')
+ else:
+ self.unit.removenotes()
+ filter_result = self.filter(self.translationstore, cmdlineoptions=["--nonotes"])
+ assert len(filter_result.units) == 1
+ assert len(filter_result.units[0].geterrors()) == 0
def test_unicode(self):
"""tests that we can handle UTF-8 encoded characters when there is no known header specified encoding"""
Added: translate-toolkit/branches/upstream/current/translate/i18n.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/i18n.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/i18n.py (added)
+++ translate-toolkit/branches/upstream/current/translate/i18n.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,26 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2009 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Internationalization functions and functionality
+"""
+
+import gettext
+gettext.install("translate-toolkit", unicode=1)
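The effect of the new i18n.py module can be shown in a few lines: `gettext.install()` binds `_()` into the builtins namespace for the whole process, so every translate module can call it without an import. (The `unicode=1` keyword used in i18n.py is Python 2 specific; under Python 3 all strings are already unicode and the argument no longer exists.)

```python
import gettext

# Install _() as a builtin for the "translate-toolkit" text domain.
gettext.install("translate-toolkit")

# With no compiled .mo catalogue on the search path, _() falls back to
# returning the message id unchanged.
message = _("Checking translations...")
```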
Modified: translate-toolkit/branches/upstream/current/translate/lang/__init__.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/__init__.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/__init__.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/__init__.py Sun Feb 8 16:49:31 2009
@@ -19,9 +19,18 @@
# along with translate; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-"""translate.lang is part of the translate package.
+"""lang contains classes that represent languages and provides language specific
+information.
-It contains classes that represent languages and provides language specific
-information. All classes inherit from the parent class called common.
+All classes inherit from the parent class called L{common}. The type of data
+includes:
+ - language codes
+ - language name
+ - plurals
+ - punctuation transformation
+ - etc
+
+ at group Common Language Functionality: common data
+ at group Languages: *
"""
Modified: translate-toolkit/branches/upstream/current/translate/lang/ar.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/ar.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/ar.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/ar.py Sun Feb 8 16:49:31 2009
@@ -35,7 +35,8 @@
u",": u"Ø",
u";": u"Ø",
u"?": u"Ø",
- u"%": u"Ùª",
+ #This causes problems with variables, so commented out for now:
+ #u"%": u"Ùª",
}
- ignoretests = ["startcaps", "simplecaps"]
+ ignoretests = ["startcaps", "simplecaps", "acronyms"]
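The ar.py change above drops the percent sign from the Arabic punctuation map because it collides with printf-style variables such as `%s`. As a much-simplified sketch of how such a `puncdict` is consumed (the real `Common.punctranslate` in lang/common.py is position-aware; this only shows the character mapping):

```python
# Simplified illustration only -- not the real punctranslate algorithm.
puncdict = {
    u",": u"\u060c",   # Arabic comma
    u";": u"\u061b",   # Arabic semicolon
    u"?": u"\u061f",   # Arabic question mark
    # u"%": u"\u066a"  # Arabic percent sign: disabled above because it
    #                  # would corrupt printf-style variables like %s
}

def punctranslate(text):
    """Blindly swap each Western mark for its Arabic counterpart."""
    for western, arabic in puncdict.items():
        text = text.replace(western, arabic)
    return text
```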
Modified: translate-toolkit/branches/upstream/current/translate/lang/bn.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/bn.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/bn.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/bn.py Sun Feb 8 16:49:31 2009
@@ -24,9 +24,25 @@
For more information, see U{http://en.wikipedia.org/wiki/Bengali_language}
"""
+import re
+
from translate.lang import common
class bn(common.Common):
"""This class represents Bengali."""
+ sentenceend = u"।!?â¦"
+
+ sentencere = re.compile(r"""(?s) #make . also match newlines
+ .*? #anything, but match non-greedy
+ [%s] #the punctuation for sentence ending
+ \s+ #the spacing after the punctuation
+ (?=[^a-z\d])#lookahead that next part starts with caps
+ """ % sentenceend, re.VERBOSE)
+
+ puncdict = {
+ u". ": u"। ",
+ u".\n": u"।\n",
+ }
+
ignoretests = ["startcaps", "simplecaps"]
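The bn.py diff above teaches the toolkit that Bengali sentences end in the danda (।) as well as `!`, `?` and the ellipsis. The regex can be exercised standalone; this is the same pattern as in the diff, reconstructed with the mis-encoded ellipsis repaired, and the sample text is romanised placeholder words:

```python
import re

# Sentence-ending characters from bn.py: danda, '!', '?', ellipsis.
sentenceend = u"\u0964!?\u2026"

sentencere = re.compile(r"""(?s)    #make . also match newlines
    .*?             #anything, but match non-greedy
    [%s]            #the punctuation for sentence ending
    \s+             #the spacing after the punctuation
    (?=[^a-z\d])    #lookahead: next sentence must not start lower-case
    """ % sentenceend, re.VERBOSE)

# Each match is one sentence including its terminator and trailing space.
sentences = sentencere.findall(u"Prothom bakko\u0964 Ditio bakko! Sesh")
```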
Modified: translate-toolkit/branches/upstream/current/translate/lang/code_or.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/code_or.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/code_or.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/code_or.py Sun Feb 8 16:49:31 2009
@@ -30,10 +30,6 @@
class code_or(common.Common):
"""This class represents Oriya."""
- code = "or"
- fullname = "Oriya"
- nplurals = 2
- pluralequation = "(n != 1)"
sentenceend = u"।!?â¦"
@@ -45,7 +41,8 @@
""" % sentenceend, re.VERBOSE)
puncdict = {
- u".": u"।",
+ u". ": u"। ",
+ u".\n": u"।\n",
}
ignoretests = ["startcaps", "simplecaps"]
Modified: translate-toolkit/branches/upstream/current/translate/lang/common.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/common.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/common.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/common.py Sun Feb 8 16:49:31 2009
@@ -87,15 +87,15 @@
0 is not a valid value - it must be overridden.
Any positive integer is valid (it should probably be between 1 and 6)
- Also see data.py
+ @see: L{data}
"""
pluralequation = "0"
"""The plural equation for selection of plural forms.
This is used for PO files to fill into the header.
- See U{http://www.gnu.org/software/gettext/manual/html_node/gettext_150.html}.
- Also see data.py
+ @see: U{Gettext manual<http://www.gnu.org/software/gettext/manual/html_node/gettext_150.html#Plural-forms>}
+ @see: L{data}
"""
# Don't change these defaults of nplurals or pluralequation willy-nilly:
# some code probably depends on these for unrecognised languages
@@ -217,10 +217,47 @@
# replaced, but the space won't exist at the end of a message.
# As a simple improvement for messages ending in ellipses (...), we
# test that the last character is different from the second last
- if (text[-1] + " " in cls.puncdict) and (text[-2] != text[-1]):
- text = text[:-1] + cls.puncdict[text[-1] + " "]
+ if (text[-1] + u" " in cls.puncdict) and (text[-2] != text[-1]):
+ text = text[:-1] + cls.puncdict[text[-1] + u" "].rstrip()
return text
punctranslate = classmethod(punctranslate)
+
+ def length_difference(cls, len):
+ """Returns an estimate to a likely change in length relative to an
+ English string of length len."""
+ # This is just a rudimentary heuristic guessing that most translations
+ # will be somewhat longer than the source language
+ expansion_factor = 0
+ code = cls.code
+ while code:
+ expansion_factor = data.expansion_factors.get(cls.code, 0)
+ if expansion_factor:
+ break
+ code = data.simplercode(code)
+ else:
+ expansion_factor = 0.1 # default
+ constant = max(5, int(40*expansion_factor))
+ # The default: return 5 + len/10
+ return constant + int(expansion_factor * len)
+
+ def alter_length(cls, text):
+ """Converts the given string by adding or removing characters as an
+ estimation of translation length (with English assumed as source
+ language)."""
+ def alter_it(text):
+ l = len(text)
+ if l > 9:
+ extra = cls.length_difference(l)
+ if extra > 0:
+ text = text[:extra].replace(u'\n', u'') + text
+ else:
+ text = text[-extra:]
+ return text
+ expanded = []
+ for subtext in text.split("\n\n"):
+ expanded.append(alter_it(subtext))
+ text = "\n\n".join(expanded)
+ return text
def character_iter(cls, text):
"""Returns an iterator over the characters in text."""
Modified: translate-toolkit/branches/upstream/current/translate/lang/data.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/data.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/data.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/data.py Sun Feb 8 16:49:31 2009
@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
-# Copyright 2007 Zuza Software Foundation
+# Copyright 2007-2009 Zuza Software Foundation
#
# This file is part of translate.
#
@@ -23,35 +23,36 @@
import unicodedata
-# The key is the language code, which may contain country codes and modifiers.
-# The value is a tuple: (Full name in English, nplurals, plural equation)
-
languages = {
'af': ('Afrikaans', 2, '(n != 1)'),
'ak': ('Akan', 2, 'n > 1'),
+'am': ('Amharic', 2, 'n > 1'),
'ar': ('Arabic', 6, 'n==0 ? 0 : n==1 ? 1 : n==2 ? 2 : n>=3 && n<=10 ? 3 : n>=11 && n<=99 ? 4 : 5'),
+'arn': ('Mapudungun; Mapuche', 2, 'n > 1'),
'az': ('Azerbaijani', 2, '(n != 1)'),
'be': ('Belarusian', 3, 'n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
'bg': ('Bulgarian', 2, '(n != 1)'),
'bn': ('Bengali', 2, '(n != 1)'),
'bo': ('Tibetan', 1, '0'),
'bs': ('Bosnian', 3, 'n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
-'ca': ('Catalan', 2, '(n != 1)'),
+'ca': ('Catalan; Valencian', 2, '(n != 1)'),
'cs': ('Czech', 3, '(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2'),
+'csb': ('Kashubian', 3, 'n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2'),
'cy': ('Welsh', 2, '(n==2) ? 1 : 0'),
'da': ('Danish', 2, '(n != 1)'),
'de': ('German', 2, '(n != 1)'),
'dz': ('Dzongkha', 1, '0'),
'el': ('Greek', 2, '(n != 1)'),
'en': ('English', 2, '(n != 1)'),
-'en_UK': ('English (United Kingdom)', 2, '(n != 1)'),
+'en_GB': ('English (United Kingdom)', 2, '(n != 1)'),
'en_ZA': ('English (South Africa)', 2, '(n != 1)'),
'eo': ('Esperanto', 2, '(n != 1)'),
-'es': ('Spanish', 2, '(n != 1)'),
+'es': ('Spanish; Castilian', 2, '(n != 1)'),
'et': ('Estonian', 2, '(n != 1)'),
'eu': ('Basque', 2, '(n != 1)'),
'fa': ('Persian', 1, '0'),
'fi': ('Finnish', 2, '(n != 1)'),
+'fil': ('Filipino; Pilipino', 2, '(n > 1)'),
'fo': ('Faroese', 2, '(n != 1)'),
'fr': ('French', 2, '(n > 1)'),
'fur': ('Friulian', 2, '(n != 1)'),
@@ -59,53 +60,68 @@
'ga': ('Irish', 3, 'n==1 ? 0 : n==2 ? 1 : 2'),
'gl': ('Galician', 2, '(n != 1)'),
'gu': ('Gujarati', 2, '(n != 1)'),
+'gun': ('Gun', 2, '(n > 1)'),
+'ha': ('Hausa', 2, '(n != 1)'),
'he': ('Hebrew', 2, '(n != 1)'),
'hi': ('Hindi', 2, '(n != 1)'),
+'hy': ('Armenian', 1, '0'),
'hr': ('Croatian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
'hu': ('Hungarian', 2, '(n != 1)'),
'id': ('Indonesian', 1, '0'),
'is': ('Icelandic', 2, '(n != 1)'),
'it': ('Italian', 2, '(n != 1)'),
'ja': ('Japanese', 1, '0'),
+'jv': ('Javanese', 2, '(n != 1)'),
'ka': ('Georgian', 1, '0'),
'km': ('Khmer', 1, '0'),
+'kn': ('Kannada', 2, '(n != 1)'),
'ko': ('Korean', 1, '0'),
'ku': ('Kurdish', 2, '(n != 1)'),
-'ky': ('Kyrgyz', 1, '0'),
-'lb': ('Letzeburgesch', 2, '(n != 1)'),
+'kw': ('Cornish', 4, '(n==1) ? 0 : (n==2) ? 1 : (n == 3) ? 2 : 3'),
+'ky': ('Kirghiz; Kyrgyz', 1, '0'),
+'lb': ('Luxembourgish; Letzeburgesch', 2, '(n != 1)'),
'ln': ('Lingala', 2, '(n > 1)'),
'lt': ('Lithuanian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && (n%100<10 || n%100>=20) ? 1 : 2)'),
'lv': ('Latvian', 3, '(n%10==1 && n%100!=11 ? 0 : n != 0 ? 1 : 2)'),
'mg': ('Malagasy', 2, '(n > 1)'),
+'mi': ('Maori', 2, '(n > 1)'),
+'mk': ('Macedonian', 2, 'n==1 || n%10==1 ? 0 : 1'),
+'ml': ('Malayalam', 2, '(n != 1)'),
'mn': ('Mongolian', 2, '(n != 1)'),
'mr': ('Marathi', 2, '(n != 1)'),
'ms': ('Malay', 1, '0'),
'mt': ('Maltese', 4, '(n==1 ? 0 : n==0 || ( n%100>1 && n%100<11) ? 1 : (n%100>10 && n%100<20 ) ? 2 : 3)'),
-'nah': ('Nahuatl', 2, '(n != 1)'),
+'nah': ('Nahuatl languages', 2, '(n != 1)'),
'nb': ('Norwegian Bokmal', 2, '(n != 1)'),
'ne': ('Nepali', 2, '(n != 1)'),
-'nl': ('Dutch', 2, '(n != 1)'),
+'nl': ('Dutch; Flemish', 2, '(n != 1)'),
'nn': ('Norwegian Nynorsk', 2, '(n != 1)'),
'nso': ('Northern Sotho', 2, '(n > 1)'),
'or': ('Oriya', 2, '(n != 1)'),
-'pa': ('Punjabi', 2, '(n != 1)'),
+'pa': ('Panjabi; Punjabi', 2, '(n != 1)'),
'pap': ('Papiamento', 2, '(n != 1)'),
'pl': ('Polish', 3, '(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'pt': ('Portugese', 2, '(n != 1)'),
-'pt_BR': ('Portugese (Brazil)', 2, '(n > 1)'),
+'pt': ('Portuguese', 2, '(n != 1)'),
+'pt_BR': ('Portuguese (Brazil)', 2, '(n > 1)'),
'ro': ('Romanian', 3, '(n==1 ? 0 : (n==0 || (n%100 > 0 && n%100 < 20)) ? 1 : 2);'),
'ru': ('Russian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+'sco': ('Scots', 2, '(n != 1)'),
'sk': ('Slovak', 3, '(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2'),
'sl': ('Slovenian', 4, '(n%100==1 ? 0 : n%100==2 ? 1 : n%100==3 || n%100==4 ? 2 : 3)'),
+'so': ('Somali', 2, '(n != 1)'),
'sq': ('Albanian', 2, '(n != 1)'),
'sr': ('Serbian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
+'su': ('Sundanese', 1, '0'),
'sv': ('Swedish', 2, '(n != 1)'),
'ta': ('Tamil', 2, '(n != 1)'),
+'te': ('Telugu', 2, '(n != 1)'),
+'tg': ('Tajik', 2, '(n != 1)'),
+'ti': ('Tigrinya', 2, '(n > 1)'),
'th': ('Thai', 1, '0'),
'tk': ('Turkmen', 2, '(n != 1)'),
'tr': ('Turkish', 1, '0'),
'uk': ('Ukrainian', 3, '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)'),
-'vi': ('Vietnamese',1 , '0'),
+'vi': ('Vietnamese', 1, '0'),
'wa': ('Walloon', 2, '(n > 1)'),
# Chinese is difficult because the main divide is on script, not really
# country. Simplified Chinese is used mostly in China, Singapore and Malaysia.
@@ -114,32 +130,47 @@
'zh_HK': ('Chinese (Hong Kong)', 1, '0'),
'zh_TW': ('Chinese (Taiwan)', 1, '0'),
}
+"""Dictionary of language data.
+The language code is the dictionary key (which may contain country codes and modifiers).
+The value is a tuple: (Full name in English, nplurals, plural equation)"""
def simplercode(code):
"""This attempts to simplify the given language code by ignoring country
- codes, for example."""
- # Check http://www.rfc-editor.org/rfc/bcp/bcp47.txt for possible extra issues
- # http://www.rfc-editor.org/rfc/rfc4646.txt
- # http://www.w3.org/International/articles/language-tags/
+ codes, for example.
+
+ @see:
+ - U{http://www.rfc-editor.org/rfc/bcp/bcp47.txt}
+ - U{http://www.rfc-editor.org/rfc/rfc4646.txt}
+ - U{http://www.rfc-editor.org/rfc/rfc4647.txt}
+ - U{http://www.w3.org/International/articles/language-tags/}
+ """
if not code:
return code
- # The @ modifier is used for script variants of the same language, like
- # sr at Latn or gez_ER at abegede
- modifier = code.rfind("@")
- if modifier >= 0:
- return code[:modifier]
-
- underscore = code.rfind("_")
- if underscore >= 0:
- return code[:underscore]
-
+ normalized = normalize_code(code)
+ separator = normalized.rfind('-')
+ if separator >= 0:
+ return code[:separator]
+ else:
+ return ""
+
+
+expansion_factors = {
+ 'af': 0.1,
+ 'ar': -0.09,
+ 'es': 0.21,
+ 'fr': 0.28,
+ 'it': 0.2,
+}
+"""Source to target string length expansion factors."""
import gettext
import re
iso639 = {}
+"""ISO 639 language codes"""
iso3166 = {}
+"""ISO 3166 country codes"""
langcode_re = re.compile("^[a-z]{2,3}([_-][A-Z]{2,3}|)(@[a-zA-Z0-9]+|)$")
variant_re = re.compile("^[_-][A-Z]{2,3}(@[a-zA-Z0-9]+|)$")
@@ -153,10 +184,11 @@
dialect_name_re = re.compile(r"([^(\s]+)\s*\(([^)]+)\)")
-def tr_lang(langcode):
+def tr_lang(langcode=None):
"""Gives a function that can translate a language name, even in the form::
"language (country)"
- into the language with iso code langcode."""
+ into the language with iso code langcode, or the system language if no
+ language is specified."""
langfunc = gettext_lang(langcode)
countryfunc = gettext_country(langcode)
@@ -170,25 +202,33 @@
return handlelanguage
-def gettext_lang(langcode):
- """Returns a gettext function to translate language names into the given
- language."""
+def gettext_lang(langcode=None):
+ """Returns a gettext function to translate language names into the given
+ language, or the system language if no language is specified."""
if not langcode in iso639:
- t = gettext.translation('iso_639', languages=[langcode], fallback=True)
+ if not langcode:
+ langcode = ""
+ t = gettext.translation('iso_639', fallback=True)
+ else:
+ t = gettext.translation('iso_639', languages=[langcode], fallback=True)
iso639[langcode] = t.ugettext
return iso639[langcode]
-def gettext_country(langcode):
- """Returns a gettext function to translate country names into the given
- language."""
+def gettext_country(langcode=None):
+ """Returns a gettext function to translate country names into the given
+ language, or the system language if no language is specified."""
if not langcode in iso3166:
- t = gettext.translation('iso_3166', languages=[langcode], fallback=True)
+ if not langcode:
+ langcode = ""
+ t = gettext.translation('iso_3166', fallback=True)
+ else:
+ t = gettext.translation('iso_3166', languages=[langcode], fallback=True)
iso3166[langcode] = t.ugettext
return iso3166[langcode]
def normalize(string, normal_form="NFC"):
"""Return a unicode string in its normalized form
-
+
@param string: The string to be normalized
@param normal_form: NFC (default), NFD, NFCK, NFDK
@return: Normalized string
@@ -199,12 +239,35 @@
return unicodedata.normalize(normal_form, string)
def forceunicode(string):
- """Helper method to ensure that the parameter becomes unicode if not yet"""
+ """Ensures that the string is in unicode.
+
+ @param string: A text string
+ @type string: Unicode, String
+ @return: String converted to Unicode and normalized as needed.
+ @rtype: Unicode
+ """
if string is None:
return None
if isinstance(string, str):
encoding = getattr(string, "encoding", "utf-8")
string = string.decode(encoding)
- string = normalize(string)
return string
+
+def normalized_unicode(string):
+ """Forces the string to unicode and does normalization."""
+ return normalize(forceunicode(string))
+
+def normalize_code(code):
+ return code.replace("_", "-").replace("@", "-").lower()
+
+
+def simplify_to_common(language_code, languages=languages):
+ """Simplify language code to the most commonly used form for the
+ language, stripping country information for languages that tend
+ not to be localized differently for different countries"""
+ simpler = simplercode(language_code)
+ if normalize_code(language_code) in [normalize_code(key) for key in languages.keys()] or simpler =="":
+ return language_code
+ else:
+ return simplify_to_common(simpler)
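Editor's note: the rewritten simplercode() in this hunk now routes every code through normalize_code(), so underscores and @ modifiers are stripped uniformly. A standalone sketch of the new pair, trimmed of the module's other machinery (behaviour copied from the hunk above, not a definitive reimplementation):

```python
def normalize_code(code):
    """Lower-case a language code and unify '_'/'@' separators to '-'."""
    return code.replace("_", "-").replace("@", "-").lower()

def simplercode(code):
    """Strip the last subtag (country code or script modifier), e.g.
    'sr@Latn' -> 'sr', 'pt_BR' -> 'pt'; a code with no subtag yields ''."""
    if not code:
        return code
    separator = normalize_code(code).rfind('-')
    if separator >= 0:
        return code[:separator]
    return ""
```

Note that unlike the pre-patch version, which returned the code unchanged when no separator was found, the new version returns an empty string, which is what lets simplify_to_common() use `simpler == ""` as its recursion terminator.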
Added: translate-toolkit/branches/upstream/current/translate/lang/es.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/es.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/es.py (added)
+++ translate-toolkit/branches/upstream/current/translate/lang/es.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,51 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2007 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""This module represents Spanish language.
+However, as it only has special case code for initial inverted punctuation,
+it could also be used for Asturian, Galician, or Catalan.
+"""
+
+from translate.lang import common
+import re
+
+class es(common.Common):
+ """This class represents Spanish."""
+
+ def punctranslate(cls, text):
+ """Implement some extra features for inverted punctuation.
+ """
+ text = super(cls, cls).punctranslate(text)
+ # If the first sentence ends with ? or !, prepend inverted ¿ or ¡
+ firstmatch = cls.sentencere.match(text)
+ if firstmatch == None:
+ # only one sentence (if any) - use entire string
+ first = text
+ else:
+ first = firstmatch.group()
+ # remove trailing whitespace
+ first = first.strip()
+ if first[-1] == '?':
+ text = u"¿" + text
+ elif first[-1] == '!':
+ text = u"¡" + text
+ return text
+ punctranslate = classmethod(punctranslate)
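Editor's note: the inverted-punctuation logic in the new es module is small enough to sketch standalone. The regex below is a simplified stand-in for common.Common.sentencere (an assumption; the real expression in translate.lang.common handles more sentence terminators), and an empty-input guard is added that the class itself gets from its superclass handling:

```python
import re

# Simplified stand-in for common.Common.sentencere: lazily match up to the
# first sentence terminator followed by whitespace.
sentencere = re.compile(r".*?[.!?]\s+", re.UNICODE)

def add_inverted_punctuation(text):
    """Prepend an inverted mark when the first sentence ends in ? or !."""
    firstmatch = sentencere.match(text)
    # Only one sentence (if any): use the entire string.
    first = text if firstmatch is None else firstmatch.group()
    first = first.strip()
    if not first:
        return text
    if first[-1] == '?':
        return u"\u00bf" + text   # prepend ¿
    if first[-1] == '!':
        return u"\u00a1" + text   # prepend ¡
    return text
```

As the added module docstring says, the same rule would serve Asturian, Galician, or Catalan, since only the initial inverted punctuation is special-cased.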
Modified: translate-toolkit/branches/upstream/current/translate/lang/fr.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/fr.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/fr.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/fr.py Sun Feb 8 16:49:31 2009
@@ -46,6 +46,7 @@
text = re.sub("(.|^)'([^']+)'", convertquotation, text)
if singlecount == text.count(u'`'):
text = re.sub("(.|^)`([^']+)'", convertquotation, text)
+ text = re.sub(u'(.|^)“([^”]+)”', convertquotation, text)
return text
class fr(common.Common):
Added: translate-toolkit/branches/upstream/current/translate/lang/poedit.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/poedit.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/poedit.py (added)
+++ translate-toolkit/branches/upstream/current/translate/lang/poedit.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,231 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2009 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Functions to manage Poedit's language features.
+
+ ISO 639 maps are from Poedit's U{isocode.cpp 1.4.2<http://poedit.svn.sourceforge.net/viewvc/poedit/poedit/tags/release-1.4.2/src/isocodes.cpp?revision=1452&view=markup>}
+ to ensure that we match currently released versions of Poedit.
+"""
+
+lang_codes = {
+ "aa": "Afar",
+ "ab": "Abkhazian",
+ "ae": "Avestan",
+ "af": "Afrikaans",
+ "am": "Amharic",
+ "ar": "Arabic",
+ "as": "Assamese",
+ "ay": "Aymara",
+ "az": "Azerbaijani",
+ "ba": "Bashkir",
+ "be": "Belarusian",
+ "bg": "Bulgarian",
+ "bh": "Bihari",
+ "bi": "Bislama",
+ "bn": "Bengali",
+ "bo": "Tibetan",
+ "br": "Breton",
+ "bs": "Bosnian",
+ "ca": "Catalan",
+ "ce": "Chechen",
+ "ch": "Chamorro",
+ "co": "Corsican",
+ "cs": "Czech",
+ "cu": "Church Slavic",
+ "cv": "Chuvash",
+ "cy": "Welsh",
+ "da": "Danish",
+ "de": "German",
+ "dz": "Dzongkha",
+ "el": "Greek",
+ "en": "English",
+ "eo": "Esperanto",
+ "es": "Spanish",
+ "et": "Estonian",
+ "eu": "Basque",
+ "fa": "Persian",
+ "fi": "Finnish",
+ "fj": "Fijian",
+ "fo": "Faroese",
+ "fr": "French",
+ "fur": "Friulian",
+ "fy": "Frisian",
+ "ga": "Irish",
+ "gd": "Gaelic",
+ "gl": "Galician",
+ "gn": "Guarani",
+ "gu": "Gujarati",
+ "ha": "Hausa",
+ "he": "Hebrew",
+ "hi": "Hindi",
+ "ho": "Hiri Motu",
+ "hr": "Croatian",
+ "hu": "Hungarian",
+ "hy": "Armenian",
+ "hz": "Herero",
+ "ia": "Interlingua",
+ "id": "Indonesian",
+ "ie": "Interlingue",
+ "ik": "Inupiaq",
+ "is": "Icelandic",
+ "it": "Italian",
+ "iu": "Inuktitut",
+ "ja": "Japanese",
+ "jw": "Javanese",
+ "ka": "Georgian",
+ "ki": "Kikuyu",
+ "kj": "Kuanyama",
+ "kk": "Kazakh",
+ "kl": "Kalaallisut",
+ "km": "Khmer",
+ "kn": "Kannada",
+ "ko": "Korean",
+ "ks": "Kashmiri",
+ "ku": "Kurdish",
+ "kv": "Komi",
+ "kw": "Cornish",
+ "ky": "Kyrgyz",
+ "la": "Latin",
+ "lb": "Letzeburgesch",
+ "ln": "Lingala",
+ "lo": "Lao",
+ "lt": "Lithuanian",
+ "lv": "Latvian",
+ "mg": "Malagasy",
+ "mh": "Marshall",
+ "mi": "Maori",
+ "mk": "Macedonian",
+ "ml": "Malayalam",
+ "mn": "Mongolian",
+ "mo": "Moldavian",
+ "mr": "Marathi",
+ "ms": "Malay",
+ "mt": "Maltese",
+ "my": "Burmese",
+ "na": "Nauru",
+ "ne": "Nepali",
+ "ng": "Ndonga",
+ "nl": "Dutch",
+ "nn": "Norwegian Nynorsk",
+ "nb": "Norwegian Bokmal",
+ "nr": "Ndebele, South",
+ "nv": "Navajo",
+ "ny": "Chichewa; Nyanja",
+ "oc": "Occitan",
+ "om": "(Afan) Oromo",
+ "or": "Oriya",
+ "os": "Ossetian; Ossetic",
+ "pa": "Panjabi",
+ "pi": "Pali",
+ "pl": "Polish",
+ "ps": "Pashto, Pushto",
+ "pt": "Portuguese",
+ "qu": "Quechua",
+ "rm": "Rhaeto-Romance",
+ "rn": "Rundi",
+ "ro": "Romanian",
+ "ru": "Russian",
+ "rw": "Kinyarwanda",
+ "sa": "Sanskrit",
+ "sc": "Sardinian",
+ "sd": "Sindhi",
+ "se": "Northern Sami",
+ "sg": "Sangro",
+ "sh": "Serbo-Croatian",
+ "si": "Sinhalese",
+ "sk": "Slovak",
+ "sl": "Slovenian",
+ "sm": "Samoan",
+ "sn": "Shona",
+ "so": "Somali",
+ "sq": "Albanian",
+ "sr": "Serbian",
+ "ss": "Siswati",
+ "st": "Sesotho",
+ "su": "Sundanese",
+ "sv": "Swedish",
+ "sw": "Swahili",
+ "ta": "Tamil",
+ "te": "Telugu",
+ "tg": "Tajik",
+ "th": "Thai",
+ "ti": "Tigrinya",
+ "tk": "Turkmen",
+ "tl": "Tagalog",
+ "tn": "Setswana",
+ "to": "Tonga",
+ "tr": "Turkish",
+ "ts": "Tsonga",
+ "tt": "Tatar",
+ "tw": "Twi",
+ "ty": "Tahitian",
+ "ug": "Uighur",
+ "uk": "Ukrainian",
+ "ur": "Urdu",
+ "uz": "Uzbek",
+ "vi": "Vietnamese",
+ "vo": "Volapuk",
+ "wa": "Walloon",
+ "wo": "Wolof",
+ "xh": "Xhosa",
+ "yi": "Yiddish",
+ "yo": "Yoruba",
+ "za": "Zhuang",
+ "zh": "Chinese",
+ "zu": "Zulu",
+}
+"""ISO369 codes and names as used by Poedit.
+Mostly these are identical to ISO 639, but there are some differences."""
+
+lang_names = dict([(value, key) for (key, value) in lang_codes.items()])
+"""Reversed L{lang_codes}"""
+
+dialects = {
+ "Portuguese": {"PORTUGAL": "pt", "BRAZIL": "pt_BR", "None": "pt"},
+ # We choose not to subtype en_US
+ "English": {"UNITED KINGDOM": "en_GB", "SOUTH AFRICA": "en_ZA", "None": "en"},
+ # zh_CN = Simplified, zh_TW = Traditional
+ "Chinese": {"CHINA": "zh_CN", "TAIWAN": "zh_TW", "None": "zh_CN"},
+}
+"""Language dialects based on ISO 3166 country names, 'None' is the default fallback"""
+
+def isocode(language, country=None):
+ """Returns a language code for the given Poedit language name.
+
+ Poedit uses language and country names in the PO header entries:
+ - X-Poedit-Language
+ - X-Poedit-Country
+
+ This function converts the supplied language name into the required ISO 639
+ code. If needed, in the case of L{dialects}, the country name is used
+ to create an xx_YY style dialect code.
+
+ @param language: Language name
+ @type language: String
+ @param country: Country name
+ @type country: String
+ @return: ISO 639 language code
+ @rtype: String
+ """
+ dialect = dialects.get(language, None)
+ if dialect:
+ return dialect.get(country, dialect["None"])
+ return lang_names.get(language, None)
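Editor's note: the country-to-dialect fallback in isocode() above is easy to misread, because a missing or unrecognised country silently falls through to the dialect table's "None" entry. A trimmed, self-contained sketch (lang_names is cut down to three entries purely for illustration):

```python
dialects = {
    "Portuguese": {"PORTUGAL": "pt", "BRAZIL": "pt_BR", "None": "pt"},
    "English": {"UNITED KINGDOM": "en_GB", "SOUTH AFRICA": "en_ZA", "None": "en"},
}

# Reversed name -> code map; the real module builds this from lang_codes.
lang_names = {"French": "fr", "Portuguese": "pt", "English": "en"}

def isocode(language, country=None):
    """Map a Poedit language (and optional country) name to an ISO 639 code."""
    dialect = dialects.get(language)
    if dialect:
        # Unknown countries (including None) fall back to the "None" entry.
        return dialect.get(country, dialect["None"])
    return lang_names.get(language)
```

This mirrors the behaviour exercised by the new test_poedit.py further down: "MOZAMBIQUE" and "UNITED STATES" are not dialect keys, so they resolve to the defaults "pt" and "en".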
Modified: translate-toolkit/branches/upstream/current/translate/lang/test_fr.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/test_fr.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/test_fr.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/test_fr.py Sun Feb 8 16:49:31 2009
@@ -23,6 +23,7 @@
assert language.punctranslate(u'Watch the " mark') == u'Watch the " mark'
assert language.punctranslate(u"Watch the ' mark") == u"Watch the ' mark"
assert language.punctranslate(u"Watch the ` mark") == u"Watch the ` mark"
+ assert language.punctranslate(u'Watch the “mark”') == u"Watch the « mark »"
assert language.punctranslate(u'The <a href="info">user</a> "root"?') == u'The <a href="info">user</a> « root » ?'
assert language.punctranslate(u"The <a href='info'>user</a> 'root'?") == u"The <a href='info'>user</a> « root » ?"
#Broken because we test for equal number of ` and ' in the string
Modified: translate-toolkit/branches/upstream/current/translate/lang/test_or.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/test_or.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/test_or.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/test_or.py Sun Feb 8 16:49:31 2009
@@ -8,6 +8,8 @@
language = factory.getlanguage('or')
assert language.punctranslate(u"Document loaded") == u"Document loaded"
assert language.punctranslate(u"Document loaded.") == u"Document loaded।"
+ assert language.punctranslate(u"Document loaded.\n") == u"Document loaded।\n"
+ assert language.punctranslate(u"Document loaded...") == u"Document loaded..."
def test_country_code():
"""Tests that we get the correct one even if a country code is attached to
Added: translate-toolkit/branches/upstream/current/translate/lang/test_poedit.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/test_poedit.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/test_poedit.py (added)
+++ translate-toolkit/branches/upstream/current/translate/lang/test_poedit.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,15 @@
+from translate.lang.poedit import isocode
+
+def test_isocode():
+ """Test the isocode function"""
+ # Standard lookup
+ assert isocode("French") == "fr"
+ # Dialect lookups: Portuguese
+ assert isocode("Portuguese") == "pt" # No country we default to 'None'
+ assert isocode("Portuguese", "BRAZIL") == "pt_BR" # Country with a valid dialect
+ assert isocode("Portuguese", "PORTUGAL") == "pt"
+ assert isocode("Portuguese", "MOZAMBIQUE") == "pt" # Country is not a dialect so use default
+ # Dialect lookups: English
+ assert isocode("English") == "en"
+ assert isocode("English", "UNITED KINGDOM") == "en_GB"
+ assert isocode("English", "UNITED STATES") == "en"
Modified: translate-toolkit/branches/upstream/current/translate/lang/zh.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/lang/zh.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/lang/zh.py (original)
+++ translate-toolkit/branches/upstream/current/translate/lang/zh.py Sun Feb 8 16:49:31 2009
@@ -45,7 +45,7 @@
# The following transformation rules should be mostly useful for all types
# of Chinese. The comma (,) is not handled here, since it maps to two
# different characters, depending on context.
- # If comma is used as seperation of sentence, then it is converted to a
+ # If comma is used as seperation of sentence, it should be converted to a
# fullwidth comma ("ï¼"). If comma is used as seperation of list items like
# "apple, orange, grape, .....", "ã" is used.
puncdict = {
@@ -62,4 +62,6 @@
u"% ": u"%",
}
+ length_difference = lambda cls, x: 10 - x/2
+
ignoretests = ["startcaps", "simplecaps"]
Modified: translate-toolkit/branches/upstream/current/translate/misc/contextlib.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/contextlib.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/contextlib.py (original)
+++ translate-toolkit/branches/upstream/current/translate/misc/contextlib.py Sun Feb 8 16:49:31 2009
@@ -135,6 +135,10 @@
exits = []
vars = []
exc = (None, None, None)
+ # Lambdas are an easy way to create unique objects. We don't want
+ # this to be None, since our answer might actually be None
+ undefined = lambda: 42
+ result = undefined
try:
for mgr in managers:
@@ -142,9 +146,14 @@
enter = mgr.__enter__
vars.append(enter())
exits.append(exit)
- yield vars
+ result = vars
except:
exc = sys.exc_info()
+
+ # If nothing has gone wrong, then result contains our return value
+ # and thus it is not equal to 'undefined'. Thus, yield the value.
+ if result != undefined:
+ yield result
while exits:
exit = exits.pop()
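Editor's note: the contextlib change above guards the yield with a unique sentinel (a lambda) so that a legitimately-None result is not mistaken for "nothing was produced". The same pattern is more commonly written with a bare object() and an identity check; a minimal standalone sketch of the idea:

```python
_UNSET = object()  # fresh object: compares equal to nothing a caller can pass, not even None

def call_or_default(func, default=_UNSET):
    """Return func()'s result; if it raises, return default (when given)."""
    result = _UNSET
    try:
        result = func()
    except Exception:
        pass
    if result is not _UNSET:
        return result           # func succeeded, even if it returned None
    if default is not _UNSET:
        return default          # func failed but a fallback was supplied
    raise ValueError("func failed and no default was supplied")
```

The patch compares with `!=` rather than `is not`; identity comparison is the safer idiom, since a caller value with an unusual `__eq__` cannot spoof the sentinel.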
Modified: translate-toolkit/branches/upstream/current/translate/misc/file_discovery.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/file_discovery.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/file_discovery.py (original)
+++ translate-toolkit/branches/upstream/current/translate/misc/file_discovery.py Sun Feb 8 16:49:31 2009
@@ -25,21 +25,31 @@
import sys
import os
-def get_abs_data_filename(path_parts):
+def get_abs_data_filename(path_parts, basedirs=[]):
"""Get the absolute path to the given file- or directory name in the current
- running application's data directory.
+ running application's data directory.
@type path_parts: list
@param path_parts: The path parts that can be joined by os.path.join().
- """
+ """
if isinstance(path_parts, str):
path_parts = [path_parts]
- BASE_DIRS = [
+ BASE_DIRS = basedirs + [
os.path.dirname(unicode(__file__, sys.getfilesystemencoding())),
os.path.dirname(unicode(sys.executable, sys.getfilesystemencoding()))
]
+
+ # Freedesktop standard
+ if 'XDG_DATA_HOME' in os.environ:
+ BASE_DIRS += [os.environ['XDG_DATA_HOME']]
+ if 'XDG_DATA_DIRS' in os.environ:
+ BASE_DIRS += os.environ['XDG_DATA_DIRS'].split(os.path.pathsep)
+
+ # Mac OSX app bundles
+ if 'RESOURCEPATH' in os.environ:
+ BASE_DIRS += os.environ['RESOURCEPATH'].split(os.path.pathsep)
DATA_DIRS = [
["..", "share"],
Added: translate-toolkit/branches/upstream/current/translate/misc/hash.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/hash.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/hash.py (added)
+++ translate-toolkit/branches/upstream/current/translate/misc/hash.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,30 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""This module contains some temporary glue to make us work with md5 hashes on
+old and new versions of Python. The function md5_f() wraps whatever is
+available."""
+
+try:
+ import hashlib
+ md5_f = hashlib.md5
+except ImportError:
+ import md5
+ md5_f = md5.new
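Editor's note: the new hash.py shim picks whichever md5 implementation the running Python offers; callers use md5_f identically either way. A usage sketch (on Python 3 only the hashlib branch exists, and the input must be bytes):

```python
try:
    import hashlib
    md5_f = hashlib.md5      # Python >= 2.5
except ImportError:
    import md5               # pre-2.5 fallback module (removed in Python 3)
    md5_f = md5.new

# hashlib requires bytes, so encode text before hashing.
checksum = md5_f("some text".encode("utf-8")).hexdigest()
```
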
Modified: translate-toolkit/branches/upstream/current/translate/misc/optrecurse.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/optrecurse.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/optrecurse.py (original)
+++ translate-toolkit/branches/upstream/current/translate/misc/optrecurse.py Sun Feb 8 16:49:31 2009
@@ -79,7 +79,7 @@
"""
- optparse.OptionParser.__init__(self, version="%prog "+__version__.ver, description=description)
+ optparse.OptionParser.__init__(self, version="%prog "+__version__.sver, description=description)
self.setmanpageoption()
self.setprogressoptions()
self.seterrorleveloptions()
@@ -135,6 +135,10 @@
file.write(self.format_manpage())
def setpsycooption(self):
+ try:
+ import psyco
+ except Exception:
+ return
psycomodes = ["none", "full", "profile"]
psycooption = optparse.Option(None, "--psyco", dest="psyco", default=None,
choices=psycomodes, metavar="MODE",
@@ -144,7 +148,7 @@
def usepsyco(self, options):
# options.psyco == None means the default, which is "full", but don't give a warning...
# options.psyco == "none" means don't use psyco at all...
- if options.psyco == "none":
+ if getattr(options, "psyco", "none") == "none":
return
try:
import psyco
@@ -248,9 +252,12 @@
if not isinstance(outputoptions, tuple) or len(outputoptions) != 2:
raise ValueError("output options must be tuples of length 2")
outputformat, processor = outputoptions
- if not inputformat in inputformats: inputformats.append(inputformat)
- if not outputformat in outputformats: outputformats.append(outputformat)
- if not templateformat in templateformats: templateformats.append(templateformat)
+ if not inputformat in inputformats:
+ inputformats.append(inputformat)
+ if not outputformat in outputformats:
+ outputformats.append(outputformat)
+ if not templateformat in templateformats:
+ templateformats.append(templateformat)
self.outputoptions[(inputformat, templateformat)] = (outputformat, processor)
self.inputformats = inputformats
inputformathelp = self.getformathelp(inputformats)
@@ -624,7 +631,13 @@
return inputfiles
def splitext(self, pathname):
- """splits into name and ext, and removes the extsep"""
+ """Splits L{pathname} into name and ext, and removes the extsep
+
+ @param pathname: A file path
+ @type pathname: string
+ @return: root, ext
+ @rtype: tuple
+ """
root, ext = os.path.splitext(pathname)
ext = ext.replace(os.extsep, "", 1)
return (root, ext)
@@ -644,8 +657,10 @@
def gettemplatename(self, options, inputname):
"""gets an output filename based on the input filename"""
- if not self.usetemplates: return None
- if not inputname or not options.recursivetemplate: return options.template
+ if not self.usetemplates:
+ return None
+ if not inputname or not options.recursivetemplate:
+ return options.template
inputbase, inputext = self.splitinputext(inputname)
if options.template:
for inputext1, templateext1 in options.outputoptions:
@@ -669,7 +684,8 @@
def getoutputname(self, options, inputname, outputformat):
"""gets an output filename based on the input filename"""
- if not inputname or not options.recursiveoutput: return options.output
+ if not inputname or not options.recursiveoutput:
+ return options.output
inputbase, inputext = self.splitinputext(inputname)
outputname = inputbase
if outputformat:
Added: translate-toolkit/branches/upstream/current/translate/misc/selector.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/selector.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/selector.py (added)
+++ translate-toolkit/branches/upstream/current/translate/misc/selector.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,533 @@
+# -*- coding: latin-1 -*-
+"""selector - WSGI delegation based on URL path and method.
+
+(See the docstring of selector.Selector.)
+
+Copyright (C) 2006 Luke Arno - http://lukearno.com/
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to
+the Free Software Foundation, Inc., 51 Franklin Street,
+Fifth Floor, Boston, MA 02110-1301 USA
+
+Luke Arno can be found at http://lukearno.com/
+
+"""
+
+import re
+from itertools import starmap
+from wsgiref.util import shift_path_info
+
+
+try:
+ from resolver import resolve
+except ImportError:
+ # resolver not essential for basic features
+ #FIXME: this library is overkill, simplify
+ pass
+
+class MappingFileError(Exception): pass
+
+
+class PathExpressionParserError(Exception): pass
+
+
+def method_not_allowed(environ, start_response):
+ """Respond with a 405 and appropriate Allow header."""
+ start_response("405 Method Not Allowed",
+ [('Allow', ', '.join(environ['selector.methods'])),
+ ('Content-Type', 'text/plain')])
+ return ["405 Method Not Allowed\n\n"
+ "The method specified in the Request-Line is not allowed "
+ "for the resource identified by the Request-URI."]
+
+
+def not_found(environ, start_response):
+ """Respond with a 404."""
+ start_response("404 Not Found", [('Content-Type', 'text/plain')])
+ return ["404 Not Found\n\n"
+ "The server has not found anything matching the Request-URI."]
+
+
+class Selector(object):
+ """WSGI middleware for URL paths and HTTP method based delegation.
+
+ see http://lukearno.com/projects/selector/
+
+    mappings are given as an iterable that yields tuples like this:
+
+ (path_expression, http_methods_dict, optional_prefix)
+ """
+
+ status405 = staticmethod(method_not_allowed)
+ status404 = staticmethod(not_found)
+
+ def __init__(self,
+ mappings=None,
+ prefix="",
+ parser=None,
+ wrap=None,
+ mapfile=None,
+ consume_path=True):
+ """Initialize selector."""
+ self.mappings = []
+ self.prefix = prefix
+ if parser is None:
+ self.parser = SimpleParser()
+ else:
+ self.parser = parser
+ self.wrap = wrap
+ if mapfile is not None:
+ self.slurp_file(mapfile)
+ if mappings is not None:
+ self.slurp(mappings)
+ self.consume_path = consume_path
+
+ def slurp(self, mappings, prefix=None, parser=None, wrap=None):
+ """Slurp in a whole list (or iterable) of mappings.
+
+        The prefix, parser and wrap args will override self.prefix,
+        self.parser and self.wrap for the given mappings.
+ """
+ if prefix is not None:
+ oldprefix = self.prefix
+ self.prefix = prefix
+ if parser is not None:
+ oldparser = self.parser
+ self.parser = parser
+ if wrap is not None:
+ oldwrap = self.wrap
+ self.wrap = wrap
+ list(starmap(self.add, mappings))
+ if wrap is not None:
+ self.wrap = oldwrap
+ if parser is not None:
+ self.parser = oldparser
+ if prefix is not None:
+ self.prefix = oldprefix
+
+ def add(self, path, method_dict=None, prefix=None, **http_methods):
+ """Add a mapping.
+
+ HTTP methods can be specified in a dict or using kwargs,
+ but kwargs will override if both are given.
+
+ Prefix will override self.prefix for this mapping.
+ """
+ # Thanks to Sébastien Pierre
+ # for suggesting that this accept keyword args.
+ if method_dict is None:
+ method_dict = {}
+ if prefix is None:
+ prefix = self.prefix
+ method_dict = dict(method_dict)
+ method_dict.update(http_methods)
+ if self.wrap is not None:
+ for meth, cbl in method_dict.items():
+ method_dict[meth] = self.wrap(cbl)
+ regex = self.parser(self.prefix + path)
+ compiled_regex = re.compile(regex, re.DOTALL | re.MULTILINE)
+ self.mappings.append((compiled_regex, method_dict))
+
+ def __call__(self, environ, start_response):
+ """Delegate request to the appropriate WSGI app."""
+ app, svars, methods, matched = \
+ self.select(environ['PATH_INFO'], environ['REQUEST_METHOD'])
+ unnamed, named = [], {}
+ for k, v in svars.iteritems():
+ if k.startswith('__pos'):
+ k = k[5:]
+ named[k] = v
+ environ['selector.vars'] = dict(named)
+ for k in named.keys():
+ if k.isdigit():
+ unnamed.append((k, named.pop(k)))
+ unnamed.sort(); unnamed = [v for k, v in unnamed]
+ cur_unnamed, cur_named = environ.get('wsgiorg.routing_args', ([], {}))
+ unnamed = cur_unnamed + unnamed
+ named.update(cur_named)
+ environ['wsgiorg.routing_args'] = unnamed, named
+ environ['selector.methods'] = methods
+ environ.setdefault('selector.matches', []).append(matched)
+ if self.consume_path:
+ environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + matched
+ environ['PATH_INFO'] = environ['PATH_INFO'][len(matched):]
+ return app(environ, start_response)
+
+ def select(self, path, method):
+ """Figure out which app to delegate to or send 404 or 405."""
+ for regex, method_dict in self.mappings:
+ match = regex.search(path)
+ if match:
+ methods = method_dict.keys()
+ if method_dict.has_key(method):
+ return (method_dict[method],
+ match.groupdict(),
+ methods,
+ match.group(0))
+ elif method_dict.has_key('_ANY_'):
+ return (method_dict['_ANY_'],
+ match.groupdict(),
+ methods,
+ match.group(0))
+ else:
+ return self.status405, {}, methods, ''
+ return self.status404, {}, [], ''
+
+ def slurp_file(self, the_file, prefix=None, parser=None, wrap=None):
+ """Read mappings from a simple text file.
+
+ == Format looks like this: ==
+
+ {{{
+
+ # Comments if first non-whitespace char on line is '#'
+ # Blank lines are ignored
+
+ /foo/{id}[/]
+ GET somemodule:some_wsgi_app
+ POST pak.subpak.mod:other_wsgi_app
+
+ @prefix /myapp
+ /path[/]
+ GET module:app
+ POST package.module:get_app('foo')
+ PUT package.module:FooApp('hello', resolve('module.setting'))
+
+ @parser :lambda x: x
+ @prefix
+ ^/spam/eggs[/]$
+ GET mod:regex_mapped_app
+
+ }}}
+
+ @prefix and @parser directives take effect
+ until the end of the file or until changed
+ """
+ if isinstance(the_file, str):
+ the_file = open(the_file)
+ oldprefix = self.prefix
+ if prefix is not None:
+ self.prefix = prefix
+ oldparser = self.parser
+ if parser is not None:
+ self.parser = parser
+ oldwrap = self.wrap
+        if wrap is not None:
+ self.wrap = wrap
+ path = methods = None
+ lineno = 0
+ try:
+ #try:
+ # accumulate methods (notice add in 2 places)
+ for line in the_file:
+ lineno += 1
+ path, methods = self._parse_line(line, path, methods)
+ if path and methods:
+ self.add(path, methods)
+ #except Exception, e:
+ # raise MappingFileError("Mapping line %s: %s" % (lineno, e))
+ finally:
+ the_file.close()
+ self.wrap = oldwrap
+ self.parser = oldparser
+ self.prefix = oldprefix
+
+ def _parse_line(self, line, path, methods):
+ """Parse one line of a mapping file.
+
+ This method is for the use of selector.slurp_file.
+ """
+ if not line.strip() or line.strip()[0] == '#':
+ pass
+        elif line.strip()[0] == '@':
+            if path and methods:
+                self.add(path, methods)
+            path = None
+            methods = {}
+ parts = line.strip()[1:].split(' ', 1)
+ if len(parts) == 2:
+ directive, rest = parts
+ else:
+ directive = parts[0]
+ rest = ''
+ if directive == 'prefix':
+ self.prefix = rest.strip()
+ if directive == 'parser':
+ self.parser = resolve(rest.strip())
+ if directive == 'wrap':
+ self.wrap = resolve(rest.strip())
+ elif line and line[0] not in ' \t':
+ if path and methods:
+ self.add(path, methods)
+ path = line.strip()
+ methods = {}
+ else:
+ meth, app = line.strip().split(' ', 1)
+ methods[meth.strip()] = resolve(app)
+ return path, methods
+
+
+class SimpleParser(object):
+ """Callable to turn path expressions into regexes with named groups.
+
+    For instance "/hello/{name}" becomes r"^\/hello\/(?P<name>[^/^.]+)$"
+
+    For /hello/{name:pattern}
+    you get whatever is in self.patterns['pattern'] instead of "[^/^.]+"
+
+ Optional portions of path expression can be expressed [like this]
+
+ /hello/{name}[/] (can have trailing slash or not)
+
+ Example:
+
+ /blog/archive/{year:digits}/{month:digits}[/[{article}[/]]]
+
+ This would catch any of these:
+
+ /blog/archive/2005/09
+ /blog/archive/2005/09/
+ /blog/archive/2005/09/1
+ /blog/archive/2005/09/1/
+
+ (I am not suggesting that this example is a best practice.
+ I would probably have a separate mapping for listing the month
+ and retrieving an individual entry. It depends, though.)
+ """
+
+ start, end = '{}'
+ ostart, oend = '[]'
+ _patterns = {'word': r'\w+',
+ 'alpha': r'[a-zA-Z]+',
+ 'digits': r'\d+',
+                 'number': r'\d*\.?\d+',
+ 'chunk': r'[^/^.]+',
+ 'segment': r'[^/]+',
+ 'any': r'.+'}
+ default_pattern = 'chunk'
+
+ def __init__(self, patterns=None):
+ """Initialize with character class mappings."""
+ self.patterns = dict(self._patterns)
+ if patterns is not None:
+ self.patterns.update(patterns)
+
+ def lookup(self, name):
+ """Return the replacement for the name found."""
+ if ':' in name:
+ name, pattern = name.split(':')
+ pattern = self.patterns[pattern]
+ else:
+ pattern = self.patterns[self.default_pattern]
+ if name == '':
+ name = '__pos%s' % self._pos
+ self._pos += 1
+ return '(?P<%s>%s)' % (name, pattern)
+
+ def lastly(self, regex):
+ """Process the result of __call__ right before it returns.
+
+ Adds the ^ and the $ to the beginning and the end, respectively.
+ """
+ return "^%s$" % regex
+
+ def openended(self, regex):
+ """Process the result of __call__ right before it returns.
+
+ Adds the ^ to the beginning but no $ to the end.
+ Called as a special alternative to lastly.
+ """
+ return "^%s" % regex
+
+ def outermost_optionals_split(self, text):
+ """Split out optional portions by outermost matching delims."""
+ parts = []
+ buffer = ""
+ starts = ends = 0
+ for c in text:
+ if c == self.ostart:
+ if starts == 0:
+ parts.append(buffer)
+ buffer = ""
+ else:
+ buffer += c
+                starts += 1
+            elif c == self.oend:
+                ends += 1
+ if starts == ends:
+ parts.append(buffer)
+ buffer = ""
+ starts = ends = 0
+ else:
+ buffer += c
+ else:
+ buffer += c
+ if not starts == ends == 0:
+ raise PathExpressionParserError(
+ "Mismatch of optional portion delimiters."
+ )
+ parts.append(buffer)
+ return parts
+
+ def parse(self, text):
+ """Turn a path expression into regex."""
+ if self.ostart in text:
+ parts = self.outermost_optionals_split(text)
+ parts = map(self.parse, parts)
+ parts[1::2] = ["(%s)?" % p for p in parts[1::2]]
+ else:
+ parts = [part.split(self.end)
+ for part in text.split(self.start)]
+ parts = [y for x in parts for y in x]
+ parts[::2] = map(re.escape, parts[::2])
+ parts[1::2] = map(self.lookup, parts[1::2])
+ return ''.join(parts)
+
+ def __call__(self, url_pattern):
+ """Turn a path expression into regex via parse and lastly."""
+ self._pos = 0
+ if url_pattern.endswith('|'):
+ return self.openended(self.parse(url_pattern[:-1]))
+ else:
+ return self.lastly(self.parse(url_pattern))
+
+
+class EnvironDispatcher(object):
+ """Dispatch based on list of rules."""
+
+ def __init__(self, rules):
+ """Instantiate with a list of (predicate, wsgiapp) rules."""
+ self.rules = rules
+
+ def __call__(self, environ, start_response):
+ """Call the first app whose predicate is true.
+
+        Each predicate is passed the environ to evaluate.
+ """
+ for predicate, app in self.rules:
+ if predicate(environ):
+ return app(environ, start_response)
+
+
+class MiddlewareComposer(object):
+ """Compose middleware based on list of rules."""
+
+ def __init__(self, app, rules):
+ """Instantiate with an app and a list of rules."""
+ self.app = app
+ self.rules = rules
+
+ def __call__(self, environ, start_response):
+ """Apply each middleware whose predicate is true.
+
+        Each predicate is passed the environ to evaluate.
+
+ Given this set of rules:
+
+ t = lambda x: True; f = lambda x: False
+ [(t, a), (f, b), (t, c), (f, d), (t, e)]
+
+ The app composed would be equivalent to this:
+
+ a(c(e(app)))
+ """
+ app = self.app
+ for predicate, middleware in reversed(self.rules):
+ if predicate(environ):
+ app = middleware(app)
+ return app(environ, start_response)
+
+
+def expose(obj):
+ """Set obj._exposed = True and return obj."""
+ obj._exposed = True
+ return obj
+
+
+class Naked(object):
+ """Naked object style dispatch base class."""
+
+ _not_found = staticmethod(not_found)
+ _expose_all = True
+ _exposed = True
+
+ def _is_exposed(self, obj):
+ """Determine if obj should be exposed.
+
+ If self._expose_all is True, always return True.
+ Otherwise, look at obj._exposed.
+ """
+ return self._expose_all or getattr(obj, '_exposed', False)
+
+ def __call__(self, environ, start_response):
+ """Dispatch to the method named by the next bit of PATH_INFO."""
+ name = shift_path_info(dict(SCRIPT_NAME=environ['SCRIPT_NAME'],
+ PATH_INFO=environ['PATH_INFO']))
+ callable = getattr(self, name or 'index', None)
+ if callable is not None and self._is_exposed(callable):
+ shift_path_info(environ)
+ return callable(environ, start_response)
+ else:
+ return self._not_found(environ, start_response)
+
+
+class ByMethod(object):
+ """Base class for dispatching to method named by REQUEST_METHOD."""
+
+ _method_not_allowed = staticmethod(method_not_allowed)
+
+ def __call__(self, environ, start_response):
+ """Dispatch based on REQUEST_METHOD."""
+ environ['selector.methods'] = \
+ [m for m in dir(self) if not m.startswith('_')]
+ return getattr(self,
+ environ['REQUEST_METHOD'],
+ self._method_not_allowed)(environ, start_response)
+
+
+def pliant(func):
+ """Decorate an unbound wsgi callable taking args from wsgiorg.routing_args.
+
+ @pliant
+ def app(environ, start_response, arg1, arg2, foo='bar'):
+ ...
+ """
+ def wsgi_func(environ, start_response):
+ args, kwargs = environ.get('wsgiorg.routing_args', ([], {}))
+ args = list(args)
+ args.insert(0, start_response)
+ args.insert(0, environ)
+ return apply(func, args, dict(kwargs))
+ return wsgi_func
+
+
+def opliant(meth):
+ """Decorate a bound wsgi callable taking args from wsgiorg.routing_args.
+
+ class App(object):
+ @opliant
+ def __call__(self, environ, start_response, arg1, arg2, foo='bar'):
+ ...
+ """
+ def wsgi_meth(self, environ, start_response):
+ args, kwargs = environ.get('wsgiorg.routing_args', ([], {}))
+ args = list(args)
+ args.insert(0, start_response)
+ args.insert(0, environ)
+ args.insert(0, self)
+ return apply(meth, args, dict(kwargs))
+ return wsgi_meth
+
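As a quick illustration of the new selector module (not part of the commit itself), the following standalone sketch mirrors what SimpleParser produces for a path expression like "/hello/{name}": literal segments are re.escape()d and each {name} becomes a named group using the default 'chunk' pattern.

```python
import re

# Standalone sketch mirroring SimpleParser's default behaviour:
# literal path segments are re.escape()d and "{name}" becomes a
# named group using the default 'chunk' pattern r'[^/^.]+'.
chunk = r'[^/^.]+'
regex = '^%s$' % (re.escape('/hello/') + '(?P<name>%s)' % chunk)

match = re.compile(regex).search('/hello/world')
print(match.groupdict())  # {'name': 'world'}
```

Because 'chunk' excludes '/', a path like '/hello/a/b' does not match; optional portions and the other named patterns ('digits', 'segment', ...) build on the same mechanism.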
Added: translate-toolkit/branches/upstream/current/translate/misc/test_optrecurse.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/test_optrecurse.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/test_optrecurse.py (added)
+++ translate-toolkit/branches/upstream/current/translate/misc/test_optrecurse.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,20 @@
+#!/usr/bin/env python
+
+from translate.misc import optrecurse
+import os
+
+class TestRecursiveOptionParser():
+
+ def __init__(self):
+ self.parser = optrecurse.RecursiveOptionParser({"txt":("po", None)})
+
+ def test_splitext(self):
+ """test the L{optrecurse.splitext} function"""
+ name = "name"
+ extension = "ext"
+ filename = name + os.extsep + extension
+ dirname = os.path.join("some", "path", "to")
+ fullpath = os.path.join(dirname, filename)
+ root = os.path.join(dirname, name)
+ print fullpath
+ assert self.parser.splitext(fullpath) == (root, extension)
Modified: translate-toolkit/branches/upstream/current/translate/misc/textwrap.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/textwrap.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/textwrap.py (original)
+++ translate-toolkit/branches/upstream/current/translate/misc/textwrap.py Sun Feb 8 16:49:31 2009
@@ -6,7 +6,7 @@
# Copyright (C) 2002, 2003 Python Software Foundation.
# Written by Greg Ward <gward@python.net>
-__revision__ = "$Id: textwrap.py 4103 2006-10-20 07:35:02Z dwaynebailey $"
+__revision__ = "$Id: textwrap.py 9228 2008-12-13 04:50:49Z friedelwolff $"
import string, re
@@ -83,6 +83,7 @@
# (after stripping out empty strings).
wordsep_re = re.compile(
r'(\s+|' # any whitespace
+ r'%|' # gettext handles % like whitespace
r'[^\s\w]*\w+[a-zA-Z]-(?=\w+[a-zA-Z])|' # hyphenated words
r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))') # em-dash
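The effect of the added r'%|' branch can be seen in a small standalone sketch (the regex below is reproduced from the diff; the sample string is invented):

```python
import re

# The word-splitting regex after this change: the new r'%|' branch
# makes '%' a split point, since gettext treats it like whitespace.
wordsep_re = re.compile(
    r'(\s+|'                                  # any whitespace
    r'%|'                                     # gettext handles % like whitespace
    r'[^\s\w]*\w+[a-zA-Z]-(?=\w+[a-zA-Z])|'   # hyphenated words
    r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))')   # em-dash

# re.split on a capturing group keeps the separators in the result.
chunks = [c for c in wordsep_re.split('100%coverage') if c]
print(chunks)  # ['100', '%', 'coverage']
```

Without the extra branch, '100%coverage' would stay a single unbreakable chunk when wrapping.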
Modified: translate-toolkit/branches/upstream/current/translate/misc/wStringIO.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/misc/wStringIO.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/misc/wStringIO.py (original)
+++ translate-toolkit/branches/upstream/current/translate/misc/wStringIO.py Sun Feb 8 16:49:31 2009
@@ -86,7 +86,7 @@
if length is not None:
r = self.buf.readline(length)
else:
- r = self.buf.readline(length)
+ r = self.buf.readline()
self.pos = self.buf.tell()
return r
Modified: translate-toolkit/branches/upstream/current/translate/search/match.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/search/match.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/search/match.py (original)
+++ translate-toolkit/branches/upstream/current/translate/search/match.py Sun Feb 8 16:49:31 2009
@@ -20,6 +20,8 @@
#
"""Class to perform translation memory matching from a store of translation units"""
+
+import re
from translate.search import lshtein
from translate.search import terminology
@@ -112,10 +114,6 @@
# in the new po file
simpleunit.addnote(candidate.getnotes(origin="translator"))
simpleunit.fuzzy = candidate.isfuzzy()
- if store:
- simpleunit.filepath = store.filepath
- simpleunit.translator = store.translator
- simpleunit.date = store.date
self.candidates.units.append(simpleunit)
if sort:
self.candidates.units.sort(sourcelencmp)
@@ -209,9 +207,6 @@
newunit = po.pounit(candidate.source)
newunit.target = candidate.target
newunit.markfuzzy(candidate.fuzzy)
- newunit.filepath = candidate.filepath
- newunit.translator = candidate.translator
- newunit.date = candidate.date
candidatenotes = candidate.getnotes().strip()
if candidatenotes:
newunit.addnote(candidatenotes)
@@ -251,3 +246,15 @@
matches = matcher.matches(self, text)
return matches
+
+# utility functions used by virtaal and tmserver to convert matching units into easily marshallable dictionaries
+def unit2dict(unit):
+ """converts a pounit to a simple dict structure for use over the web"""
+ return {"source": unit.source, "target": unit.target,
+ "quality": _parse_quality(unit.getnotes()), "context": unit.getcontext()}
+
+def _parse_quality(comment):
+ """extracts match quality from po comments"""
+ quality = re.search('([0-9]+)%', comment)
+ if quality:
+ return quality.group(1)
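The new quality helper can be exercised in isolation; this is a sketch using the same regex as _parse_quality above (the sample comment string is invented):

```python
import re

# Sketch of the quality extraction used by unit2dict: a match quality
# such as "75%" is embedded in the unit's PO comments, and the first
# percentage found is returned (as a string), or None if absent.
def parse_quality(comment):
    quality = re.search('([0-9]+)%', comment)
    if quality:
        return quality.group(1)

print(parse_quality('75% match from translation memory'))  # 75
```

Note the implicit None return when no percentage is present, which callers must be prepared to handle.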
Modified: translate-toolkit/branches/upstream/current/translate/services/__init__.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/services/__init__.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/services/__init__.py (original)
+++ translate-toolkit/branches/upstream/current/translate/services/__init__.py Sun Feb 8 16:49:31 2009
@@ -1,15 +1,15 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-#
+#
# Copyright 2006 Zuza Software Foundation
-#
+#
# This file is part of translate.
#
# translate is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
-#
+#
# translate is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
Modified: translate-toolkit/branches/upstream/current/translate/services/lookupclient.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/services/lookupclient.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/services/lookupclient.py (original)
+++ translate-toolkit/branches/upstream/current/translate/services/lookupclient.py Sun Feb 8 16:49:31 2009
@@ -2,14 +2,14 @@
# -*- coding: utf-8 -*-
#
# Copyright 2006 Zuza Software Foundation
-#
+#
# This file is part of translate.
#
# translate is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
-#
+#
# translate is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
@@ -32,7 +32,7 @@
UnitClass = tbx.tbxunit
text = sys.stdin.readline()
-while text:
+while text:
text = text.strip().decode("utf-8")
if text != "":
source = server.lookup(text)
Modified: translate-toolkit/branches/upstream/current/translate/services/lookupservice.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/services/lookupservice.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/services/lookupservice.py (original)
+++ translate-toolkit/branches/upstream/current/translate/services/lookupservice.py Sun Feb 8 16:49:31 2009
@@ -2,14 +2,14 @@
# -*- coding: utf-8 -*-
#
# Copyright 2006 Zuza Software Foundation
-#
+#
# This file is part of translate.
#
# translate is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
-#
+#
# translate is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
@@ -42,7 +42,7 @@
"""Sets up the requested file for parsing"""
#TODO: Parse request to see if tbx/tmx is requested,
# or perhaps the url can specify the file to be queried
-
+
class lookupServer(SimpleXMLRPCServer):
def __init__(self, addr, storage):
"""Loads the initial tbx file from the given filename"""
@@ -66,8 +66,8 @@
except Exception, e:
print str(e)
return ""
-
- def internal_lookup(self, message):
+
+ def internal_lookup(self, message):
"""Could perhaps include some intelligence in future, like case trying with different casing, etc."""
message = message.strip()
if message == "":
@@ -79,7 +79,7 @@
except Exception:
return None
return unit
-
+
def public_lookup(self, message):
"""Returns the source string of whatever was found. Keep in mind that this might not be what you want."""
unit = self.internal_lookup(message)
@@ -107,7 +107,7 @@
score = unit.getnotes()
original = unit.source
translation = unit.target
-
+
# We might have gotten multistrings, so just convert them for now
if isinstance(original, multistring):
original = unicode(original)
@@ -151,9 +151,9 @@
help="the host to bind to")
parser.add_option("-p", "--port", dest="port", default=1234,
help="the port to listen on")
- parser.add_option("-l", "--language", dest="targetlanguage", default=None,
+ parser.add_option("-l", "--language", dest="targetlanguage", default=None,
help="set target language code", metavar="LANG")
- parser.add_option("", "--source-language", dest="sourcelanguage", default='en',
+ parser.add_option("", "--source-language", dest="sourcelanguage", default='en',
help="set source language code", metavar="LANG")
parser.remove_option("--output")
parser.remove_option("--exclude")
Added: translate-toolkit/branches/upstream/current/translate/services/tmserver
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/services/tmserver?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/services/tmserver (added)
+++ translate-toolkit/branches/upstream/current/translate/services/tmserver Sun Feb 8 16:49:31 2009
@@ -1,0 +1,28 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""A translation memory server using tmdb for storage; it communicates
+with clients using JSON over HTTP."""
+
+from translate.services import tmserver
+
+if __name__ == '__main__':
+ tmserver.main()
+
Propchange: translate-toolkit/branches/upstream/current/translate/services/tmserver
------------------------------------------------------------------------------
svn:executable = *
Added: translate-toolkit/branches/upstream/current/translate/services/tmserver.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/services/tmserver.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/services/tmserver.py (added)
+++ translate-toolkit/branches/upstream/current/translate/services/tmserver.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,194 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""A translation memory server using tmdb for storage; it communicates
+with clients using JSON over HTTP."""
+
+import urllib
+import StringIO
+import logging
+import sys
+from optparse import OptionParser
+import simplejson as json
+from wsgiref import simple_server
+
+from translate.misc import selector
+from translate.search import match
+from translate.storage import factory
+from translate.storage import base
+from translate.storage import tmdb
+
+class TMServer(object):
+ class RequestHandler(simple_server.WSGIRequestHandler):
+ """custom request handler, disables some inefficient defaults"""
+ def address_string(self):
+ """disable client reverse dns lookup"""
+ return self.client_address[0]
+
+ def log_message(self, format, *args):
+ """log requests using logging instead of printing to
+        stderr"""
+ logging.info("%s - - [%s] %s" %
+ (self.address_string(),
+ self.log_date_time_string(),
+ format%args))
+
+ """a RESTful JSON TM server"""
+ def __init__(self, tmdbfile, tmfiles, max_candidates=3, min_similarity=75, max_length=1000, prefix="", source_lang=None, target_lang=None):
+
+ self.tmdb = tmdb.TMDB(tmdbfile, max_candidates, min_similarity, max_length)
+
+ #load files into db
+ if isinstance(tmfiles, list):
+ [self.tmdb.add_store(factory.getobject(tmfile), source_lang, target_lang) for tmfile in tmfiles]
+ elif tmfiles:
+ self.tmdb.add_store(factory.getobject(tmfiles), source_lang, target_lang)
+
+ #initialize url dispatcher
+ self.rest = selector.Selector(prefix=prefix)
+ self.rest.add("/{slang}/{tlang}/unit/{uid:any}",
+ GET=self.translate_unit,
+ POST=self.update_unit,
+ PUT=self.add_unit,
+ DELETE=self.forget_unit
+ )
+
+ self.rest.add("/{slang}/{tlang}/store/{sid:any}",
+ GET=self.get_store_stats,
+ PUT=self.upload_store,
+ POST=self.add_store,
+ DELETE=self.forget_store)
+
+ @selector.opliant
+ def translate_unit(self, environ, start_response, uid, slang, tlang):
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ uid = unicode(urllib.unquote_plus(uid),"utf-8")
+ candidates = self.tmdb.translate_unit(uid, slang, tlang)
+ response = json.dumps(candidates, indent=4)
+ return [response]
+
+ @selector.opliant
+ def add_unit(self, environ, start_response, uid, slang, tlang):
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ uid = unicode(urllib.unquote_plus(uid),"utf-8")
+ data = json.loads(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
+ unit = base.TranslationUnit(data['source'])
+ unit.target = data['target']
+ self.tmdb.add_unit(unit, slang, tlang)
+ return [""]
+
+ @selector.opliant
+ def update_unit(self, environ, start_response, uid, slang, tlang):
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ uid = unicode(urllib.unquote_plus(uid),"utf-8")
+ data = json.loads(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
+ unit = base.TranslationUnit(data['source'])
+ unit.target = data['target']
+ self.tmdb.add_unit(unit, slang, tlang)
+ return [""]
+
+ @selector.opliant
+ def forget_unit(self, environ, start_response, uid):
+ #FIXME: implement me
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ uid = unicode(urllib.unquote_plus(uid),"utf-8")
+
+        return [""]
+
+ @selector.opliant
+ def get_store_stats(self, environ, start_response, sid):
+ #FIXME: implement me
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ sid = unicode(urllib.unquote_plus(sid),"utf-8")
+
+        return [""]
+
+ @selector.opliant
+ def upload_store(self, environ, start_response, sid, slang, tlang):
+ """add units from uploaded file to tmdb"""
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ data = StringIO.StringIO(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
+ data.name = sid
+ store = factory.getobject(data)
+ count = self.tmdb.add_store(store, slang, tlang)
+ response = "added %d units from %s" % (count, sid)
+ return [response]
+
+ @selector.opliant
+ def add_store(self, environ, start_response, sid, slang, tlang):
+ """add unit from POST data to tmdb"""
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ units = json.loads(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
+ count = self.tmdb.add_list(units, slang, tlang)
+ response = "added %d units from %s" % (count, sid)
+ return [response]
+
+ @selector.opliant
+ def forget_store(self, environ, start_response, sid):
+ #FIXME: implement me
+ start_response("200 OK", [('Content-type', 'text/plain')])
+ sid = unicode(urllib.unquote_plus(sid),"utf-8")
+
+        return [""]
+
+
+def main():
+ parser = OptionParser()
+ parser.add_option("-d", "--tmdb", dest="tmdbfile", default=":memory:",
+ help="translation memory database file")
+ parser.add_option("-f", "--import-translation-file", dest="tmfiles", action="append",
+ help="translation file to import into the database")
+ parser.add_option("-t", "--import-target-lang", dest="target_lang",
+ help="target language of translation files")
+ parser.add_option("-s", "--import-source-lang", dest="source_lang",
+ help="source language of translation files")
+ parser.add_option("-b", "--bind", dest="bind",
+                      help="address to bind server to")
+ parser.add_option("-p", "--port", dest="port", type="int",
+ help="port to listen on")
+ parser.add_option("--debug", action="store_true", dest="debug", default=False,
+ help="enable debugging features")
+
+ (options, args) = parser.parse_args()
+
+ #setup debugging
+ format = '%(asctime)s %(levelname)s %(message)s'
+ level = options.debug and logging.DEBUG or logging.INFO
+ if options.debug:
+ format = '%(levelname)7s %(module)s.%(funcName)s:%(lineno)d: %(message)s'
+ if sys.version_info[:2] < (2, 5):
+ format = '%(levelname)7s %(module)s [%(filename)s:%(lineno)d]: %(message)s'
+ else:
+ try:
+ import psyco
+ psyco.full()
+ except Exception:
+ pass
+
+ logging.basicConfig(level=level, format=format)
+
+ application = TMServer(options.tmdbfile, options.tmfiles, prefix="/tmserver", source_lang=options.source_lang, target_lang=options.target_lang)
+ httpd = simple_server.make_server(options.bind, options.port, application.rest, handler_class=TMServer.RequestHandler)
+ httpd.serve_forever()
+
+
+if __name__ == '__main__':
+ main()
+
Propchange: translate-toolkit/branches/upstream/current/translate/services/tmserver.py
------------------------------------------------------------------------------
svn:executable = *
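A hedged sketch of how a client would address the REST interface defined above (GET on /{slang}/{tlang}/unit/{uid} returns JSON match candidates, PUT with a JSON body adds a unit); the host, port and language pair are illustrative, and no request is actually sent here:

```python
import json
try:
    from urllib import quote_plus        # Python 2, as used by tmserver.py
except ImportError:
    from urllib.parse import quote_plus  # Python 3 fallback

# Build the URL for a unit lookup under the "/tmserver" prefix;
# the unit id is the URL-quoted source text.
uid = quote_plus('Open file')
url = 'http://localhost:8888/tmserver/en/fr/unit/%s' % uid

# Body for a PUT on the same URL, matching what add_unit() reads
# from wsgi.input: a JSON object with "source" and "target" keys.
payload = json.dumps({'source': 'Open file', 'target': 'Ouvrir le fichier'})

print(url)  # http://localhost:8888/tmserver/en/fr/unit/Open+file
```

The response to the GET is the json.dumps() of tmdb's candidate list, as produced by translate_unit().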
Modified: translate-toolkit/branches/upstream/current/translate/storage/__init__.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/__init__.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/__init__.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/__init__.py Sun Feb 8 16:49:31 2009
@@ -20,5 +20,16 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""translate.storage is part of the translate package
-It contains classes that represent various storage formats for localization"""
+It contains classes that represent various storage formats for localization
+ at group Primary Localisation: xliff pypo cpo pocommon po poheader base factory
+ at group Bilingual: ts2 ts oo lisa tmx tbx wordfast qph poxliff
+ at group Monolingual: dtd properties ini rc ical csvl10n html php txt subtitles
+ at group OpenDocument Format: xml_extract odf*
+ at group Binary: qm mo
+ at group Version Control: versioncontrol
+ at group Placeables: placeables
+ at group Other file processing: directory xpi zip statsdb statistics
+ at group Other: benchmark
+"""
+
Modified: translate-toolkit/branches/upstream/current/translate/storage/base.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/base.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/base.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/base.py Sun Feb 8 16:49:31 2009
@@ -31,6 +31,7 @@
except:
import pickle
from exceptions import NotImplementedError
+import translate.i18n
from translate.storage.placeables.base import Placeable, as_string
from translate.misc.typecheck import accepts, Self, IsOneOf
from translate.misc.multistring import multistring
@@ -47,7 +48,11 @@
raise NotImplementedError("%s does not reimplement %s as required by %s" % (actualclass.__name__, method.__name__, baseclass.__name__))
class ParseError(Exception):
- pass
+ def __init__(self, inner_exc):
+ self.inner_exc = inner_exc
+
+ def __str__(self):
+ return repr(self.inner_exc)
class TranslationUnit(object):
"""Base class for translation units.
@@ -82,7 +87,6 @@
self.source = source
self.target = None
self.notes = ""
- super(TranslationUnit, self).__init__()
def __eq__(self, other):
"""Compares two TranslationUnits.
@@ -377,9 +381,6 @@
"""Constructs a blank TranslationStore."""
self.units = []
- self.filepath = None
- self.translator = ""
- self.date = ""
self.sourcelanguage = None
self.targetlanguage = None
if unitclass:
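The reworked `ParseError` above keeps the lower-level exception instead of discarding it; the pattern, sketched standalone (class name reused for illustration):

```python
class ParseError(Exception):
    """Wraps the lower-level exception that caused parsing to fail,
    so callers can report it without losing the original detail."""
    def __init__(self, inner_exc):
        self.inner_exc = inner_exc

    def __str__(self):
        return repr(self.inner_exc)

# A parser would catch e.g. a ValueError and re-raise it wrapped:
try:
    raise ParseError(ValueError("bad header"))
except ParseError as e:
    message = str(e)
```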
Modified: translate-toolkit/branches/upstream/current/translate/storage/cpo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/cpo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/cpo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/cpo.py Sun Feb 8 16:49:31 2009
@@ -152,8 +152,8 @@
def quoteforpo(text):
return pypo.quoteforpo(text)
-def unquotefrompo(postr, joinwithlinebreak=False):
- return pypo.unquotefrompo(postr, joinwithlinebreak)
+def unquotefrompo(postr):
+ return pypo.unquotefrompo(postr)
def encodingToUse(encoding):
return pypo.encodingToUse(encoding)
@@ -331,7 +331,7 @@
def addnote(self, text, origin=None, position="append"):
# ignore empty strings and strings without non-space characters
- if (not text) or (not text.strip()):
+ if not (text and text.strip()):
return
text = data.forceunicode(text)
oldnotes = self.getnotes(origin)
@@ -527,9 +527,10 @@
self._gpo_memory_file = gpo.po_file_create()
self._gpo_message_iterator = gpo.po_message_iterator(self._gpo_memory_file, None)
- def addunit(self, unit):
- gpo.po_message_insert(self._gpo_message_iterator, unit._gpo_message)
- self.units.append(unit)
+ def addunit(self, unit, new=True):
+ if new:
+ gpo.po_message_insert(self._gpo_message_iterator, unit._gpo_message)
+ super(pofile, self).addunit(unit)
def removeduplicates(self, duplicatestyle="merge"):
"""make sure each msgid is unique ; merge comments etc from duplicates into original"""
@@ -662,7 +663,7 @@
newmessage = gpo.po_next_message(self._gpo_message_iterator)
while newmessage:
newunit = pounit(gpo_message=newmessage)
- self.units.append(newunit)
+ self.addunit(newunit, new=False)
newmessage = gpo.po_next_message(self._gpo_message_iterator)
self._free_iterator()
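The `addunit(unit, new=True)` change in cpo.py above avoids double-inserting units that the libgettextpo iterator already holds; the idea, reduced to a toy store (names invented for the sketch):

```python
class Store(object):
    """Toy sketch of cpo.pofile.addunit(new=...): units parsed out of
    the native backing structure are already present there, so only
    genuinely new units get inserted natively; every unit is still
    tracked in self.units."""
    def __init__(self):
        self.units = []
        self.native = []          # stands in for the gpo message iterator

    def addunit(self, unit, new=True):
        if new:
            self.native.append(unit)
        self.units.append(unit)

store = Store()
store.addunit('parsed', new=False)   # came out of the native parser
store.addunit('added')               # created by the caller
```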
Modified: translate-toolkit/branches/upstream/current/translate/storage/csvl10n.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/csvl10n.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/csvl10n.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/csvl10n.py Sun Feb 8 16:49:31 2009
@@ -138,6 +138,7 @@
"""This class represents a .csv file with various lines.
The default format contains three columns: comments, source, target"""
UnitClass = csvunit
+ Name = _("Comma Separated Value")
Mimetypes = ['text/comma-separated-values', 'text/csv']
Extensions = ["csv"]
def __init__(self, inputfile=None, fieldnames=None):
@@ -179,11 +180,3 @@
csvfile.reset()
return "".join(csvfile.readlines())
-
-if __name__ == '__main__':
- import sys
- cf = csvfile()
- cf.parse(sys.stdin.read())
- sys.stdout.write(str(cf))
-
-
Modified: translate-toolkit/branches/upstream/current/translate/storage/dtd.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/dtd.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/dtd.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/dtd.py Sun Feb 8 16:49:31 2009
@@ -26,13 +26,19 @@
from translate.misc import quote
import re
-import sys
import warnings
try:
from lxml import etree
import StringIO
except ImportError:
etree = None
+
+labelsuffixes = (".label", ".title")
+"""Label suffixes: entries with these suffixes can be combined with accesskeys
+found in entries ending with L{accesskeysuffixes}"""
+accesskeysuffixes = (".accesskey", ".accessKey", ".akey")
+"""Accesskey Suffixes: entries with this suffix may be combined with labels
+ending in L{labelsuffixes} into accelerator notation"""
def quotefordtd(source):
if '"' in source:
@@ -46,7 +52,8 @@
def unquotefromdtd(source):
"""unquotes a quoted dtd definition"""
# extract the string, get rid of quoting
- if len(source) == 0: source = '""'
+ if len(source) == 0:
+ source = '""'
quotechar = source[0]
extracted, quotefinished = quote.extractwithoutquotes(source, quotechar, quotechar, allowreentry=False)
if quotechar == "'" and "'" in extracted:
@@ -55,6 +62,49 @@
# of course there could also be quote characters within the string; not handled here
return extracted
+def removeinvalidamps(name, value):
+ """Find and remove ampersands that are not part of an entity definition.
+
+ A stray & in a DTD file can break an application's ability to parse the file. In Mozilla
+ localisation this matters a great deal: stray ampersands break the parsing of files used
+ in XUL and thus break interface rendering. Tracking down the problem is very difficult,
+ so by removing potentially broken ampersands and warning the user we ensure that the
+ output DTD will always be parsable.
+
+ @type name: String
+ @param name: Entity name
+ @type value: String
+ @param value: Entity text value
+ @rtype: String
+ @return: Entity value without bad ampersands
+ """
+ def is_valid_entity_name(name):
+ """Check that supplied L{name} is a valid entity name"""
+ if name.replace('.', '').isalnum():
+ return True
+ elif name[0] == '#' and name[1:].isalnum():
+ return True
+ return False
+
+ amppos = 0
+ invalid_amps = []
+ while amppos >= 0:
+ amppos = value.find("&", amppos)
+ if amppos != -1:
+ amppos += 1
+ semipos = value.find(";", amppos)
+ if semipos != -1:
+ if is_valid_entity_name(value[amppos:semipos]):
+ continue
+ invalid_amps.append(amppos-1)
+ if len(invalid_amps) > 0:
+ warnings.warn("invalid ampersands in dtd entity %s" % (name))
+ adjustment = 0
+ for amppos in invalid_amps:
+ value = value[:amppos-adjustment] + value[amppos-adjustment+1:]
+ adjustment += 1
+ return value
+
class dtdunit(base.TranslationUnit):
"""this class represents an entity definition from a dtd file (and possibly associated comments)"""
def __init__(self, source=""):
@@ -62,8 +112,8 @@
super(dtdunit, self).__init__(source)
self.comments = []
self.unparsedlines = []
- self.incomment = 0
- self.inentity = 0
+ self.incomment = False
+ self.inentity = False
self.entity = "FakeEntityOnlyForInitialisationAndTesting"
self.source = source
@@ -120,13 +170,14 @@
# print "line(%d,%d): " % (self.incomment,self.inentity),line[:-1]
if not self.incomment:
if (line.find('<!--') != -1):
- self.incomment = 1
- self.continuecomment = 0
+ self.incomment = True
+ self.continuecomment = False
# now work out the type of comment, and save it (remember we're not in the comment yet)
(comment, dummy) = quote.extract(line, "<!--", "-->", None, 0)
if comment.find('LOCALIZATION NOTE') != -1:
l = quote.findend(comment,'LOCALIZATION NOTE')
- while (comment[l] == ' '): l += 1
+ while (comment[l] == ' '):
+ l += 1
if comment.find('FILE', l) == l:
self.commenttype = "locfile"
elif comment.find('BEGIN', l) == l:
@@ -181,7 +232,7 @@
if not self.inentity and not self.incomment:
entitypos = line.find('<!ENTITY')
if entitypos != -1:
- self.inentity = 1
+ self.inentity = True
beforeentity = line[:entitypos].strip()
if beforeentity.startswith("#"):
self.hashprefix = beforeentity
@@ -198,17 +249,20 @@
self.entitytype = "internal"
if self.entitypart == "name":
e = 0
- while (e < len(line) and line[e].isspace()): e += 1
+ while (e < len(line) and line[e].isspace()):
+ e += 1
self.entity = ''
if (e < len(line) and line[e] == '%'):
self.entitytype = "external"
self.entityparameter = ""
e += 1
- while (e < len(line) and line[e].isspace()): e += 1
+ while (e < len(line) and line[e].isspace()):
+ e += 1
while (e < len(line) and not line[e].isspace()):
self.entity += line[e]
e += 1
- while (e < len(line) and line[e].isspace()): e += 1
+ while (e < len(line) and line[e].isspace()):
+ e += 1
if self.entity:
if self.entitytype == "external":
self.entitypart = "parameter"
@@ -217,15 +271,19 @@
# remember the start position and the quote character
if e == len(line):
self.entityhelp = None
+ e = 0
continue
elif self.entitypart == "definition":
self.entityhelp = (e, line[e])
- self.instring = 0
+ self.instring = False
if self.entitypart == "parameter":
+ while (e < len(line) and line[e].isspace()): e += 1
paramstart = e
- while (e < len(line) and line[e].isalnum()): e += 1
+ while (e < len(line) and line[e].isalnum()):
+ e += 1
self.entityparameter += line[paramstart:e]
- while (e < len(line) and line[e].isspace()): e += 1
+ while (e < len(line) and line[e].isspace()):
+ e += 1
line = line[e:]
e = 0
if not line:
@@ -233,15 +291,16 @@
if line[0] in ('"', "'"):
self.entitypart = "definition"
self.entityhelp = (e, line[e])
- self.instring = 0
+ self.instring = False
if self.entitypart == "definition":
if self.entityhelp is None:
e = 0
- while (e < len(line) and line[e].isspace()): e += 1
+ while (e < len(line) and line[e].isspace()):
+ e += 1
if e == len(line):
continue
self.entityhelp = (e, line[e])
- self.instring = 0
+ self.instring = False
# actually the lines below should remember instring, rather than using it as dummy
e = self.entityhelp[0]
if (self.entityhelp[1] == "'"):
@@ -254,14 +313,15 @@
self.entityhelp = (0, self.entityhelp[1])
self.definition += defpart
if not self.instring:
- self.inentity = 0
+ self.inentity = False
break
# uncomment this line to debug processing
if 0:
for attr in dir(self):
r = repr(getattr(self, attr))
- if len(r) > 60: r = r[:57]+"..."
+ if len(r) > 60:
+ r = r[:57]+"..."
self.comments.append(("comment", "self.%s = %s" % (attr, r) ))
return linesprocessed
@@ -302,7 +362,6 @@
def __init__(self, inputfile=None):
"""construct a dtdfile, optionally reading in from inputfile"""
base.TranslationStore.__init__(self, unitclass = self.UnitClass)
- self.units = []
self.filename = getattr(inputfile, 'name', '')
if inputfile is not None:
dtdsrc = inputfile.read()
@@ -310,19 +369,19 @@
self.makeindex()
def parse(self, dtdsrc):
- """read the source code of a dtd file in and include them as dtdunits in self.units (any existing units are lost)"""
- self.units = []
+ """read the source code of a dtd file in and include them as dtdunits in self.units"""
start = 0
end = 0
lines = dtdsrc.split("\n")
while end < len(lines):
- if (start == end): end += 1
- foundentity = 0
+ if (start == end):
+ end += 1
+ foundentity = False
while end < len(lines):
if end >= len(lines):
break
if lines[end].find('<!ENTITY') > -1:
- foundentity = 1
+ foundentity = True
if foundentity and re.match("[\"']\s*>", lines[end]):
end += 1
break
@@ -343,12 +402,9 @@
def __str__(self):
"""convert to a string. double check that unicode is handled somehow here"""
source = self.getoutput()
- if etree is not None:
- try:
- dtd = etree.DTD(StringIO.StringIO(re.sub("#expand", "", source)))
- except etree.DTDParseError:
- warnings.warn("DTD file '%s' does not validate" % self.filename)
- return None
+ if not self._valid_store():
+ warnings.warn("DTD file '%s' does not validate" % self.filename)
+ return None
if isinstance(source, unicode):
return source.encode(getattr(self, "encoding", "UTF-8"))
return source
@@ -365,21 +421,19 @@
if not dtd.isnull():
self.index[dtd.entity] = dtd
- def rewrap(self):
- for dtd in self.units:
- lines = dtd.definition.split("\n")
- if len(lines) > 1:
- definition = lines[0]
- for line in lines[1:]:
- if definition[-1:].isspace() or line[:1].isspace():
- definition += line
- else:
- definition += " " + line
- dtd.definition = definition
-
-if __name__ == "__main__":
- import sys
- d = dtdfile(sys.stdin)
- d.rewrap()
- sys.stdout.write(str(d))
-
+ def _valid_store(self):
+ """Check whether the store's output parses as a valid DTD
+
+ This uses lxml's etree.DTD to parse the output
+
+ @return: If the store passes validation
+ @rtype: Boolean
+ """
+ if etree is not None:
+ try:
+ # #expand is a Mozilla hack; it is removed as it is not valid in DTDs
+ dtd = etree.DTD(StringIO.StringIO(re.sub("#expand", "", self.getoutput())))
+ except etree.DTDParseError:
+ return False
+ return True
+
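The `removeinvalidamps()` logic added to dtd.py above can be re-stated in a simplified, self-contained form (this sketch rebuilds the string character-by-character rather than tracking offsets, but drops the same stray ampersands):

```python
import warnings

def remove_invalid_amps(name, value):
    """Drop '&' characters that do not start a well-formed entity
    reference (&name; or &#NNN;), warning per entity value, as
    dtd.removeinvalidamps() above does."""
    def is_valid_entity_name(ref):
        return (bool(ref) and ref.replace('.', '').isalnum()
                or (ref.startswith('#') and ref[1:].isalnum()))

    result = []
    for pos, char in enumerate(value):
        if char == "&":
            semi = value.find(";", pos + 1)
            if semi != -1 and is_valid_entity_name(value[pos + 1:semi]):
                result.append(char)       # valid reference, keep it
            else:
                warnings.warn("invalid ampersand in dtd entity %s" % name)
        else:
            result.append(char)
    return "".join(result)
```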
Modified: translate-toolkit/branches/upstream/current/translate/storage/factory.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/factory.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/factory.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/factory.py Sun Feb 8 16:49:31 2009
@@ -188,6 +188,7 @@
store = storeclass.parsefile(storefile)
else:
store = storeclass()
+ store.filename = storefilename
return store
def supported_files():
Modified: translate-toolkit/branches/upstream/current/translate/storage/html.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/html.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/html.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/html.py Sun Feb 8 16:49:31 2009
@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
-# Copyright 2004-2006 Zuza Software Foundation
+# Copyright 2004-2006,2008 Zuza Software Foundation
#
# This file is part of translate.
#
@@ -69,7 +69,7 @@
def guess_encoding(self, htmlsrc):
"""Returns the encoding of the html text.
-
+
We look for 'charset=' within a meta tag to do this.
"""
@@ -104,17 +104,17 @@
strings to help our regexes out.
"""
-
- import md5
-
+
+ from translate.misc import hash
+
self.phpdict = {}
result = re.findall('(?s)<\?(.*?)\?>', text)
for cmd in result:
- h = md5.new(cmd).hexdigest()
+ h = hash.md5_f(cmd).hexdigest()
self.phpdict[h] = cmd
text = text.replace(cmd,h)
return text
-
+
def reintrophp(self, text):
"""Replaces the PHP placeholders in text with the real code"""
for hash, code in self.phpdict.items():
@@ -136,7 +136,7 @@
def strip_html(self, text):
"""Strip unnecessary html from the text.
-
+
HTML tags are deemed unnecessary if it fully encloses the translatable
text, eg. '<a href="index.html">Home Page</a>'.
@@ -252,10 +252,10 @@
def handle_comment(self, data):
# we don't do anything with comments
pass
-
+
def handle_pi(self, data):
self.handle_data("<?%s>" % data)
class POHTMLParser(htmlfile):
pass
-
+
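The html.py change above swaps `md5.new` for `translate.misc.hash.md5_f` (a wrapper that presumably papers over the md5/hashlib module split); the placeholder technique itself — hiding embedded `<?...?>` blocks behind digests so later regex passes see only plain markup — looks like this with `hashlib` directly:

```python
import hashlib
import re

def strip_php(text, phpdict):
    """Replace the body of each <?...?> block with its md5 hex digest,
    remembering the original so it can be restored later."""
    for cmd in re.findall(r'(?s)<\?(.*?)\?>', text):
        digest = hashlib.md5(cmd.encode('utf-8')).hexdigest()
        phpdict[digest] = cmd
        text = text.replace(cmd, digest)
    return text

def reintro_php(text, phpdict):
    """Swap the placeholders back for the original PHP code."""
    for digest, cmd in phpdict.items():
        text = text.replace(digest, cmd)
    return text

phpdict = {}
stripped = strip_php('<p><?php echo $x; ?></p>', phpdict)
restored = reintro_php(stripped, phpdict)
```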
Modified: translate-toolkit/branches/upstream/current/translate/storage/ini.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/ini.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/ini.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/ini.py Sun Feb 8 16:49:31 2009
@@ -36,6 +36,32 @@
from StringIO import StringIO
import re
+_dialects = {}
+
+def register_dialect(name, dialect):
+ """Register the dialect"""
+ _dialects[name] = dialect
+
+class Dialect(object):
+ """Base class for differentiating dialect options and functions"""
+ pass
+
+class DialectDefault(Dialect):
+ def unescape(self, text):
+ return text
+
+ def escape(self, text):
+ return text
+register_dialect("default", DialectDefault)
+
+class DialectInno(DialectDefault):
+ def unescape(self, text):
+ return text.replace("%n", "\n").replace("%t", "\t")
+
+ def escape(self, text):
+ return text.replace("\t", "%t").replace("\n", "%n")
+register_dialect("inno", DialectInno)
+
class iniunit(base.TranslationUnit):
"""An INI file entry"""
@@ -54,9 +80,10 @@
class inifile(base.TranslationStore):
"""An INI file"""
UnitClass = iniunit
- def __init__(self, inputfile=None, unitclass=iniunit):
+ def __init__(self, inputfile=None, unitclass=iniunit, dialect="default"):
"""construct an INI file, optionally reading in from inputfile."""
self.UnitClass = unitclass
+ self._dialect = _dialects.get(dialect, DialectDefault)() # TODO: fail correctly / consider using getattr
base.TranslationStore.__init__(self, unitclass=unitclass)
self.units = []
self.filename = ''
@@ -69,7 +96,7 @@
for unit in self.units:
for location in unit.getlocations():
match = re.match('\\[(?P<section>.+)\\](?P<entry>.+)', location)
- _outinifile[match.groupdict()['section']][match.groupdict()['entry']] = unit.target
+ _outinifile[match.groupdict()['section']][match.groupdict()['entry']] = self._dialect.escape(unit.target)
if _outinifile:
return str(_outinifile)
else:
@@ -92,5 +119,5 @@
self._inifile = INIConfig(file(input), optionxformvalue=None)
for section in self._inifile:
for entry in self._inifile[section]:
- newunit = self.addsourceunit(self._inifile[section][entry])
+ newunit = self.addsourceunit(self._dialect.unescape(self._inifile[section][entry]))
newunit.addlocation("[%s]%s" % (section, entry))
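The dialect registry added to ini.py lets the store swap escaping rules per INI flavour; trimmed to its essentials, it behaves like this (mirroring the classes in the diff):

```python
_dialects = {}

def register_dialect(name, dialect):
    """Register a dialect class under a lookup name."""
    _dialects[name] = dialect

class DialectDefault(object):
    """Pass-through escaping: plain INI files need no translation."""
    def unescape(self, text):
        return text
    def escape(self, text):
        return text
register_dialect("default", DialectDefault)

class DialectInno(DialectDefault):
    """Inno Setup files encode newline and tab as %n and %t."""
    def unescape(self, text):
        return text.replace("%n", "\n").replace("%t", "\t")
    def escape(self, text):
        return text.replace("\t", "%t").replace("\n", "%n")
register_dialect("inno", DialectInno)

# The store instantiates the dialect once, then escapes on output and
# unescapes on input:
dialect = _dialects.get("inno", DialectDefault)()
raw = dialect.unescape("line one%nline two")
```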
Modified: translate-toolkit/branches/upstream/current/translate/storage/lisa.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/lisa.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/lisa.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/lisa.py Sun Feb 8 16:49:31 2009
@@ -32,17 +32,11 @@
except ImportError, e:
raise ImportError("lxml is not installed. It might be possible to continue without support for XML formats.")
+string_xpath = etree.XPath("string()")
+
def getText(node):
"""joins together the text from all the text nodes in the nodelist and their children"""
- # node.xpath is very slow, so we only use it if there are children
- # TODO: consider rewriting by iterating over children
- if node is not None: # The etree way of testing for children
- # Only non-ASCII strings are returned as unicode, so we have to force
- # the ASCII-only ones to be unicode as well
- return unicode(node.xpath("string()")) # specific to lxml.etree
- else:
- return data.forceunicode(node.text) or u""
- # if node.text is none, we want to return "" since the tag is there
+ return unicode(string_xpath(node)) # specific to lxml.etree
def _findAllMatches(text, re_obj):
"""generate match objects for all L{re_obj} matches in L{text}."""
@@ -105,13 +99,12 @@
namespace = None
- def __init__(self, source, empty=False):
+ def __init__(self, source, empty=False, **kwargs):
"""Constructs a unit containing the given source string"""
if empty:
return
self.xmlelement = etree.Element(self.rootNode)
#add descrip, note, etc.
-
super(LISAunit, self).__init__(source)
def __eq__(self, other):
@@ -203,16 +196,28 @@
rich_target = property(get_rich_target, set_rich_target)
def settarget(self, text, lang='xx', append=False):
- #XXX: we really need the language - can't really be optional
+ #XXX: we really need the language - can't really be optional, and we
+ # need to propagate it
"""Sets the "target" string (second language), or alternatively appends to the list"""
text = data.forceunicode(text)
#Firstly deal with reinitialising to None or setting to identical string
if self.gettarget() == text:
return
- languageNode = None
+ languageNode = self.get_target_dom(None)
if not text is None:
- languageNode = self.createlanguageNode(lang, text, "target")
- self.set_target_dom(languageNode, append)
+ if languageNode is None:
+ languageNode = self.createlanguageNode(lang, text, "target")
+ self.set_target_dom(languageNode, append)
+ else:
+ if self.textNode:
+ terms = languageNode.iter(self.namespaced(self.textNode))
+ try:
+ languageNode = terms.next()
+ except StopIteration, e:
+ pass
+ languageNode.text = text
+ else:
+ self.set_target_dom(None, False)
def gettarget(self, lang=None):
"""retrieves the "target" text (second entry), or the entry in the
@@ -256,7 +261,7 @@
def getlanguageNodes(self):
"""Returns a list of all nodes that contain per language information."""
- return self.xmlelement.findall(self.namespaced(self.languageNode))
+ return list(self.xmlelement.iterchildren(self.namespaced(self.languageNode)))
def getlanguageNode(self, lang=None, index=None):
"""Retrieves a languageNode either by language or by index"""
@@ -279,10 +284,11 @@
if languageNode is None:
return None
if self.textNode:
- terms = languageNode.findall('.//%s' % self.namespaced(self.textNode))
- if len(terms) == 0:
+ terms = languageNode.iterdescendants(self.namespaced(self.textNode))
+ if terms is None:
return None
- return getText(terms[0])
+ else:
+ return getText(terms.next())
else:
return getText(languageNode)
@@ -328,6 +334,7 @@
# interfere with the the pretty printing of lxml
self.parse(self.XMLskeleton.replace("\n", ""))
self.addheader()
+ self._encoding = "UTF-8"
def addheader(self):
"""Method to be overridden to initialise headers, etc."""
@@ -372,14 +379,17 @@
xml.seek(0)
posrc = xml.read()
xml = posrc
- self.document = etree.fromstring(xml).getroottree()
- self.encoding = self.document.docinfo.encoding
+ if etree.LXML_VERSION > (2, 1, 0):
+ #Since version 2.1.0 we can pass the strip_cdata parameter to
+ #indicate that we don't want cdata to be converted to raw XML
+ parser = etree.XMLParser(strip_cdata=False)
+ else:
+ parser = etree.XMLParser()
+ self.document = etree.fromstring(xml, parser).getroottree()
+ self._encoding = self.document.docinfo.encoding
self.initbody()
assert self.document.getroot().tag == self.namespaced(self.rootNode)
- termEntries = self.body.findall('.//%s' % self.namespaced(self.UnitClass.rootNode))
- if termEntries is None:
- return
- for entry in termEntries:
+ for entry in self.body.iterdescendants(self.namespaced(self.UnitClass.rootNode)):
term = self.UnitClass.createfromxmlElement(entry)
self.addunit(term, new=False)
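The `getText()` rewrite above replaces per-node `node.xpath("string()")` calls with a single precompiled `etree.XPath("string()")` object, which is considerably cheaper in lxml. The stdlib's `xml.etree` has no `string()` XPath, but the equivalent text join can be sketched with `itertext()`:

```python
import xml.etree.ElementTree as ET

def get_text(node):
    """Concatenate the text of a node and all its descendants —
    roughly what lxml's string() XPath returns for an element."""
    return "".join(node.itertext())

root = ET.fromstring('<seg>Hello <b>brave</b> world</seg>')
text = get_text(root)
```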
Modified: translate-toolkit/branches/upstream/current/translate/storage/mo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/mo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/mo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/mo.py Sun Feb 8 16:49:31 2009
@@ -43,6 +43,7 @@
from translate.storage import base
from translate.storage import po
+from translate.storage import poheader
from translate.misc.multistring import multistring
import struct
import array
@@ -121,10 +122,10 @@
"""Is this message translateable?"""
return bool(self.source)
-class mofile(base.TranslationStore):
+class mofile(base.TranslationStore, poheader.poheader):
"""A class representing a .mo file."""
UnitClass = mounit
- Name = "Gettext MO file"
+ Name = _("Gettext MO file")
Mimetypes = ["application/x-gettext-catalog", "application/x-mo"]
Extensions = ["mo", "gmo"]
_binary = True
Modified: translate-toolkit/branches/upstream/current/translate/storage/odf_io.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/odf_io.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/odf_io.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/odf_io.py Sun Feb 8 16:49:31 2009
@@ -21,6 +21,8 @@
#
import zipfile
+from lxml import etree
+from translate.storage.xml_name import XmlNamer
def open_odf(filename):
z = zipfile.ZipFile(filename, 'r')
@@ -28,10 +30,24 @@
'meta.xml': z.read("meta.xml"),
'styles.xml': z.read("styles.xml")}
-def copy_odf(input_file, output_file, exclusion_list):
- input_zip = zipfile.ZipFile(input_file, 'r')
- output_zip = zipfile.ZipFile(output_file, 'w', compression=zipfile.ZIP_DEFLATED)
+def copy_odf(input_zip, output_zip, exclusion_list):
for name in [name for name in input_zip.namelist() if name not in exclusion_list]:
output_zip.writestr(name, input_zip.read(name))
return output_zip
+def namespaced(nsmap, short_namespace, tag):
+ return '{%s}%s' % (nsmap[short_namespace], tag)
+
+def add_file(output_zip, manifest_data, new_filename, new_data):
+ root = etree.fromstring(manifest_data)
+ namer = XmlNamer(root)
+ namespacer = namer.namespace('manifest')
+ file_entry_tag = namespacer.name('file-entry')
+ media_type_attr = namespacer.name('media-type')
+ full_path_attr = namespacer.name('full-path')
+
+ root.append(etree.Element(file_entry_tag, {media_type_attr: 'application/x-xliff+xml',
+ full_path_attr: new_filename}))
+ output_zip.writestr(new_filename, new_data)
+ output_zip.writestr('META-INF/manifest.xml', etree.tostring(root, xml_declaration=True, encoding="UTF-8"))
+ return output_zip
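`copy_odf()` now takes already-open `ZipFile` objects rather than filenames; the copying itself is plain `zipfile` work and can be exercised standalone (member names below are invented for the demo):

```python
import io
import zipfile

def copy_zip(input_zip, output_zip, exclusion_list):
    """Copy every member of input_zip into output_zip except the names
    in exclusion_list, as copy_odf() above does."""
    for name in input_zip.namelist():
        if name not in exclusion_list:
            output_zip.writestr(name, input_zip.read(name))
    return output_zip

# Build a small in-memory "ODF" archive and copy it minus content.xml.
src_buf = io.BytesIO()
with zipfile.ZipFile(src_buf, 'w') as z:
    z.writestr('content.xml', b'<office/>')
    z.writestr('meta.xml', b'<meta/>')
src = zipfile.ZipFile(src_buf, 'r')
dst_buf = io.BytesIO()
dst = zipfile.ZipFile(dst_buf, 'w', compression=zipfile.ZIP_DEFLATED)
copy_zip(src, dst, ['content.xml'])
dst.close()
names = zipfile.ZipFile(dst_buf, 'r').namelist()
```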
Modified: translate-toolkit/branches/upstream/current/translate/storage/oo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/oo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/oo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/oo.py Sun Feb 8 16:49:31 2009
@@ -24,7 +24,7 @@
These are specific .oo files for localisation exported by OpenOffice.org - SDF
format (previously known as GSI files). For an overview of the format, see
-http://l10n.openoffice.org/L10N_Framework/Intermediate_file_format.html
+U{http://l10n.openoffice.org/L10N_Framework/Intermediate_file_format.html}
The behaviour in terms of escaping is explained in detail in the programming
comments.
@@ -33,7 +33,6 @@
import os
import re
-import sys
from translate.misc import quote
from translate.misc import wStringIO
import warnings
@@ -64,6 +63,32 @@
return filename.translate(normalizetable)
else:
return filename.translate(unormalizetable)
+
+def makekey(ookey, long_keys):
+ """converts an oo key tuple into a unique identifier
+
+ @param ookey: an oo key
+ @type ookey: tuple
+ @param long_keys: Use long keys
+ @type long_keys: Boolean
+ @rtype: str
+ @return: unique ascii identifier
+ """
+ project, sourcefile, resourcetype, groupid, localid, platform = ookey
+ sourcefile = sourcefile.replace('\\','/')
+ if long_keys:
+ sourcebase = os.path.join(project, sourcefile)
+ else:
+ sourceparts = sourcefile.split('/')
+ sourcebase = "".join(sourceparts[-1:])
+ if len(groupid) == 0 or len(localid) == 0:
+ fullid = groupid + localid
+ else:
+ fullid = groupid + "." + localid
+ if resourcetype:
+ fullid = fullid + "." + resourcetype
+ key = "%s#%s" % (sourcebase, fullid)
+ return normalizefilename(key)
# These are functions that deal with escaping and unescaping of the text fields
# of the SDF file. These should only be applied to the text column.
@@ -105,12 +130,12 @@
"""
text = text.replace("\\", "\\\\")
for tag in helptagre.findall(text):
- escapethistag = True
- if tag in ["<br>", "<h1>", "</h1>", "<img ...>", "<->", "<empty>", "<ref>", "<references>"]:
- escapethistag = False
- for skip in ["<font", "<node", "<help_section"]:
- if tag.startswith(skip):
- escapethistag = False
+ escapethistag = False
+ for escape_tag in ["ahelp", "link", "item", "emph", "defaultinline", "switchinline", "caseinline", "variable", "bookmark_value", "image", "embedvar", "alt"]:
+ if tag.startswith("<%s" % escape_tag) or tag == "</%s>" % escape_tag:
+ escapethistag = True
+ if tag in ["<br/>", "<help-id-missing/>"]:
+ escapethistag = True
if escapethistag:
escaped_tag = ("\\<" + tag[1:-1] + "\\>").replace('"', '\\"')
text = text.replace(tag, escaped_tag)
@@ -363,7 +388,3 @@
oosubfile.parse(subfilesrc)
return oosubfile
-if __name__ == '__main__':
- of = oofile()
- of.parse(sys.stdin.read())
- sys.stdout.write(str(of))
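The new `makekey()` in oo.py builds a stable identifier from an SDF key tuple. A simplified re-implementation for illustration — `normalizefilename()` is approximated here as "replace anything non-alphanumeric (other than `_` and `#`) with `_`", which is an assumption; the real translation table lives elsewhere in oo.py:

```python
import os
import re

def make_key(ookey, long_keys=False):
    """Turn an SDF key tuple into 'sourcebase#group.local.type',
    mirroring oo.makekey() above; normalisation is approximated."""
    project, sourcefile, resourcetype, groupid, localid, platform = ookey
    sourcefile = sourcefile.replace('\\', '/')
    if long_keys:
        sourcebase = os.path.join(project, sourcefile)
    else:
        sourcebase = sourcefile.split('/')[-1]
    if groupid and localid:
        fullid = groupid + "." + localid
    else:
        fullid = groupid + localid
    if resourcetype:
        fullid = fullid + "." + resourcetype
    key = "%s#%s" % (sourcebase, fullid)
    # Assumed stand-in for normalizefilename(): keep the key filesystem-safe.
    return re.sub(r'[^A-Za-z0-9_#]', '_', key)

key = make_key(('helpcontent2', r'source\text\shared.src', 'string',
                'GRP', 'LOC', ''))
```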
Modified: translate-toolkit/branches/upstream/current/translate/storage/php.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/php.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/php.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/php.py Sun Feb 8 16:49:31 2009
@@ -19,9 +19,22 @@
# along with translate; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-"""classes that hold units of php localisation files (phpunit) or entire files
-(phpfile) these files are used in translating many PHP based applications
-
+"""classes that hold units of PHP localisation files (L{phpunit}) or entire files
+ (L{phpfile}); these files are used in translating many PHP-based applications.
+
+ Only PHP files written with these conventions are supported::
+ $lang['item'] = "value"; # Array of values
+ $some_entity = "value"; # Named variables
+
+ The parser does not support other array conventions such as::
+ $lang = array(
+ 'item1' => 'value1',
+ 'item2' => 'value2',
+ );
+
+ The working of PHP strings and specifically the escaping conventions which
+ differ between single quote (') and double quote (") characters are outlined
+ in the PHP documentation for the U{String type<http://www.php.net/language.types.string>}
"""
from translate.storage import base
@@ -29,22 +42,57 @@
import re
def phpencode(text, quotechar="'"):
- """convert Python string to PHP escaping"""
+ """convert Python string to PHP escaping
+
+ The encoding is implemented for
+ U{'single quote'<http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.single>}
+ and U{"double quote"<http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.double>}
+ syntax.
+
+ heredoc and nowdoc are not implemented; it is not certain whether they would
+ ever be needed for PHP localisation.
+ """
if not text:
return text
- return text.replace("%s" % quotechar, "\\%s" % quotechar).replace("\n", "\\n")
-
-def phpdecode(text):
+ if quotechar == '"':
+        # \n could be converted to \\n, but we don't: this preserves the pretty layout of multiline entries.
+        # We might lose some "blah\nblah" layouts, but that's probably not the most frequent use case. See bug 588.
+ escapes = (("\\", "\\\\"), ("\r", "\\r"), ("\t", "\\t"), ("\v", "\\v"), ("\f", "\\f"), ("\\\\$", "\\$"), ('"', '\\"'), ("\\\\", "\\"))
+ for a, b in escapes:
+ text = text.replace(a, b)
+ return text
+ else:
+ return text.replace("%s" % quotechar, "\\%s" % quotechar)
+
+def phpdecode(text, quotechar="'"):
"""convert PHP escaped string to a Python string"""
+ def decode_octal_hex(match):
+ """decode Octal \NNN and Hex values"""
+ if match.groupdict().has_key("octal"):
+ return match.groupdict()['octal'].decode("string_escape")
+ elif match.groupdict().has_key("hex"):
+ return match.groupdict()['hex'].decode("string_escape")
+ else:
+            return match.group()
+
if not text:
return text
- return text.replace("\\'", "'").replace('\\"', '"').replace("\\n", "\n")
+ if quotechar == '"':
+ # We do not escape \$ as it is used by variables and we can't roundtrip that item.
+ text = text.replace('\\"', '"').replace("\\\\", "\\")
+ text = text.replace("\\n", "\n").replace("\\r", "\r").replace("\\t", "\t").replace("\\v", "\v").replace("\\f", "\f")
+ text = re.sub(r"(?P<octal>\\[0-7]{1,3})", decode_octal_hex, text)
+ text = re.sub(r"(?P<hex>\\x[0-9A-Fa-f]{1,2})", decode_octal_hex, text)
+ else:
+ text = text.replace("\\'", "'").replace("\\\\", "\\")
+ return text
class phpunit(base.TranslationUnit):
"""a unit of a PHP file i.e. a name and value, and any comments
associated"""
def __init__(self, source=""):
"""construct a blank phpunit"""
+ self.escape_type = None
super(phpunit, self).__init__(source)
self.name = ""
self.value = ""
@@ -53,10 +101,10 @@
def setsource(self, source):
"""Sets the source AND the target to be equal"""
- self.value = phpencode(source)
+ self.value = phpencode(source, self.escape_type)
def getsource(self):
- return phpdecode(self.value)
+ return phpdecode(self.value, self.escape_type)
source = property(getsource, setsource)
def settarget(self, target):
@@ -121,21 +169,21 @@
incomment = False
valuequote = "" # either ' or "
for line in phpsrc.decode(self._encoding).split("\n"):
- # Assuming /* comments */ are started and stopped on lines
commentstartpos = line.find("/*")
commentendpos = line.rfind("*/")
if commentstartpos != -1:
incomment = True
if commentendpos != -1:
- newunit.addnote(line[commentstartpos+2:commentendpos].strip(), "developer")
+ newunit.addnote(line[commentstartpos:commentendpos].strip(), "developer")
incomment = False
- if incomment:
- newunit.addnote(line[commentstartpos+2:].strip(), "developer")
+ else:
+ newunit.addnote(line[commentstartpos:].strip(), "developer")
if commentendpos != -1 and incomment:
- newunit.addnote(line[:commentendpos].strip(), "developer")
+ newunit.addnote(line[:commentendpos+2].strip(), "developer")
incomment = False
- if commentstartpos == -1 and incomment:
+ if incomment and commentstartpos == -1:
newunit.addnote(line.strip(), "developer")
+ continue
equalpos = line.find("=")
if equalpos != -1 and not invalue:
newunit.addlocation(line[:equalpos].strip().replace(" ", ""))
@@ -150,6 +198,7 @@
while colonpos != -1:
if value[colonpos-1] == valuequote:
newunit.value = lastvalue + value[:colonpos-1]
+ newunit.escape_type = valuequote
lastvalue = ""
invalue = False
if not invalue and colonpos != len(value)-1:
@@ -162,7 +211,7 @@
newunit = phpunit()
colonpos = value.rfind(";", 0, colonpos)
if invalue:
- lastvalue = lastvalue + value
+ lastvalue = lastvalue + value + "\n"
def __str__(self):
"""convert the units back to lines"""
@@ -171,8 +220,3 @@
lines.append(str(unit))
return "".join(lines)
-if __name__ == '__main__':
- import sys
- pf = phpfile(sys.stdin)
- sys.stdout.write(str(pf))
-
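To make the single- vs double-quote escaping rules described in the docstrings above concrete, here is a minimal self-contained sketch (plain Python, independent of the toolkit; the function names are invented for illustration, and handling of the bare `\\` sequence is deliberately omitted to keep it short):

```python
import re

def php_escape_single(text):
    # Single-quoted PHP strings: only the quote character itself is escaped.
    return text.replace("'", "\\'")

def php_unescape_double(text):
    # Double-quoted PHP strings: undo the common backslash sequences, then
    # decode \NNN octal and \xNN hex escapes, much as phpdecode does above.
    for escaped, raw in (('\\"', '"'), ("\\n", "\n"), ("\\t", "\t"), ("\\r", "\r")):
        text = text.replace(escaped, raw)
    text = re.sub(r"\\([0-7]{1,3})", lambda m: chr(int(m.group(1), 8)), text)
    return re.sub(r"\\x([0-9A-Fa-f]{1,2})", lambda m: chr(int(m.group(1), 16)), text)
```

For example, `php_escape_single("it's")` produces `it\'s`, while `php_unescape_double(r"a\tb\x41")` turns the `\t` into a real tab and `\x41` into the letter A.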
Modified: translate-toolkit/branches/upstream/current/translate/storage/pocommon.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/pocommon.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/pocommon.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/pocommon.py Sun Feb 8 16:49:31 2009
@@ -61,22 +61,8 @@
self.removenotes()
self.addnote(newnotes, origin="translator")
-class pofile(base.TranslationStore, poheader.poheader):
- Name = "Gettext PO file"
+class pofile(poheader.poheader, base.TranslationStore):
+ Name = _("Gettext PO file")
Mimetypes = ["text/x-gettext-catalog", "text/x-gettext-translation", "text/x-po", "text/x-pot"]
Extensions = ["po", "pot"]
- def makeheader(self, **kwargs):
- """create a header for the given filename. arguments are specially handled, kwargs added as key: value
- pot_creation_date can be None (current date) or a value (datetime or string)
- po_revision_date can be None (form), False (=pot_creation_date), True (=now), or a value (datetime or string)"""
-
- headerpo = self.UnitClass(encoding=self._encoding)
- headerpo.markfuzzy()
- headerpo.source = ""
- headeritems = self.makeheaderdict(**kwargs)
- headervalue = ""
- for (key, value) in headeritems.items():
- headervalue += "%s: %s\n" % (key, value)
- headerpo.target = headervalue
- return headerpo
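The base-class reordering above (`pofile(poheader.poheader, base.TranslationStore)` instead of the reverse) matters because of Python's method resolution order: the leftmost base wins attribute lookup, so a mixin that overrides store behaviour must be listed first. A generic Python sketch (invented class names, not toolkit code):

```python
class Store:
    def describe(self):
        return "store"

class HeaderMixin:
    # A mixin override only takes effect if the mixin precedes the
    # concrete base in the class statement's base list.
    def describe(self):
        return "header+" + super().describe()

class MixinLast(Store, HeaderMixin):
    pass

class MixinFirst(HeaderMixin, Store):
    pass

assert MixinLast().describe() == "store"          # mixin override is shadowed
assert MixinFirst().describe() == "header+store"  # mixin listed first: it wins
```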
Modified: translate-toolkit/branches/upstream/current/translate/storage/poheader.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/poheader.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/poheader.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/poheader.py Sun Feb 8 16:49:31 2009
@@ -92,7 +92,7 @@
This class is a mix-in class and useless on its own. It must be used from all
classes which represent a po file"""
- x_generator = "Translate Toolkit %s" % __version__.ver
+ x_generator = "Translate Toolkit %s" % __version__.sver
header_order = [
"Project-Id-Version",
@@ -195,7 +195,8 @@
headeritems["Content-Transfer-Encoding"] = "8bit"
headerString = ""
for key, value in headeritems.items():
- headerString += "%s: %s\n" % (key, value)
+ if value is not None:
+ headerString += "%s: %s\n" % (key, value)
header.target = headerString
header.markfuzzy(False) # TODO: check why we do this?
return header
@@ -226,11 +227,16 @@
def gettargetlanguage(self):
header = self.parseheader()
+ if 'X-Poedit-Language' in header:
+ from translate.lang import poedit
+ language = header.get('X-Poedit-Language')
+ country = header.get('X-Poedit-Country')
+ return poedit.isocode(language, country)
return header.get('Language')
def settargetlanguage(self, lang):
- if isinstance(lang, basestr) and len(lang) > 1:
- self.updateheader(add=True, Language=lang)
+ if isinstance(lang, basestring) and len(lang) > 1:
+ self.updateheader(add=True, Language=lang, X_Poedit_Language=None, X_Poedit_Country=None)
def mergeheaders(self, otherstore):
"""Merges another header with this header.
@@ -255,8 +261,7 @@
self.updateheader(**retain)
def updatecontributor(self, name, email=None):
- """Add contribution comments
- """
+ """Add contribution comments if necessary."""
header = self.header()
if not header:
return
@@ -287,10 +292,16 @@
year = time.strftime("%Y")
contribexists = False
- for line in contriblines:
+ for i in range(len(contriblines)):
+ line = contriblines[i]
if name in line and (email is None or email in line):
- contribexists = True
- break
+ if year in line:
+ contribexists = True
+ break
+ else:
+                    # The contributor is there, but not for this year
+                    # The contributor is there, but not for this year
+ contriblines[i] = "%s,%s" % (line, year)
+
if not contribexists:
# Add a new contributor
if email:
@@ -302,3 +313,20 @@
header.addnote("\n".join(prelines))
header.addnote("\n".join(contriblines))
header.addnote("\n".join(postlines))
+
+
+ def makeheader(self, **kwargs):
+ """create a header for the given filename. arguments are specially handled, kwargs added as key: value
+ pot_creation_date can be None (current date) or a value (datetime or string)
+ po_revision_date can be None (form), False (=pot_creation_date), True (=now), or a value (datetime or string)"""
+
+ headerpo = self.UnitClass(encoding=self._encoding)
+
+ headerpo.markfuzzy()
+ headerpo.source = ""
+ headeritems = self.makeheaderdict(**kwargs)
+ headervalue = ""
+ for (key, value) in headeritems.items():
+ headervalue += "%s: %s\n" % (key, value)
+ headerpo.target = headervalue
+ return headerpo
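The header-serialising loops above now skip `None` values, which is how the `X-Poedit-Language`/`X-Poedit-Country` fields get dropped when the plain `Language` field is set. A hypothetical standalone version of that loop:

```python
def format_header(items):
    # Entries whose value is None are dropped rather than emitted as
    # "Key: None", so passing None for a field effectively removes it.
    return "".join("%s: %s\n" % (key, value)
                   for key, value in items.items()
                   if value is not None)
```

For example, `format_header({"Language": "af", "X-Poedit-Language": None})` returns just `"Language: af\n"`.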
Added: translate-toolkit/branches/upstream/current/translate/storage/poparser.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/poparser.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/poparser.py (added)
+++ translate-toolkit/branches/upstream/current/translate/storage/poparser.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,321 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2002-2007 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import re
+
+"""
+From the GNU gettext manual:
+ WHITE-SPACE
+ # TRANSLATOR-COMMENTS
+ #. AUTOMATIC-COMMENTS
+ #| PREVIOUS MSGID (Gettext 0.16 - check if this is the correct position - not yet implemented)
+ #: REFERENCE...
+ #, FLAG...
+ msgctxt CONTEXT (Gettext 0.15)
+ msgid UNTRANSLATED-STRING
+ msgstr TRANSLATED-STRING
+"""
+
+isspace = str.isspace
+find = str.find
+rfind = str.rfind
+startswith = str.startswith
+append = list.append
+decode = str.decode
+
+class ParseState(object):
+ def __init__(self, input_iterator, UnitClass, encoding = None):
+ self._input_iterator = input_iterator
+ self.next_line = ''
+ self.eof = False
+ self.encoding = encoding
+ self.read_line()
+ self.UnitClass = UnitClass
+
+ def decode(self, string):
+ if self.encoding is not None:
+ return decode(string, self.encoding)
+ else:
+ return string
+
+ def read_line(self):
+ current = self.next_line
+ if self.eof:
+ return current
+ try:
+ self.next_line = self._input_iterator.next()
+ while not self.eof and isspace(self.next_line):
+ self.next_line = self._input_iterator.next()
+ except StopIteration:
+ self.next_line = ''
+ self.eof = True
+ return current
+
+ def new_input(self, _input):
+ return ParseState(_input, self.UnitClass, self.encoding)
+
+def read_prevmsgid_lines(parse_state):
+ """Read all the lines belonging starting with #|. These lines contain
+ the previous msgid and msgctxt info. We strip away the leading '#| '
+ and read until we stop seeing #|."""
+ prevmsgid_lines = []
+ next_line = parse_state.next_line
+ while startswith(next_line, '#| '):
+ append(prevmsgid_lines, parse_state.read_line()[3:])
+ next_line = parse_state.next_line
+ return prevmsgid_lines
+
+def parse_prev_msgctxt(parse_state, unit):
+ parse_message(parse_state, 'msgctxt', 7, unit.prev_msgctxt)
+ return len(unit.prev_msgctxt) > 0
+
+def parse_prev_msgid(parse_state, unit):
+ parse_message(parse_state, 'msgid', 5, unit.prev_msgid)
+ return len(unit.prev_msgid) > 0
+
+def parse_prev_msgid_plural(parse_state, unit):
+ parse_message(parse_state, 'msgid_plural', 12, unit.prev_msgid_plural)
+ return len(unit.prev_msgid_plural) > 0
+
+def parse_comment(parse_state, unit):
+ next_line = parse_state.next_line
+ if len(next_line) > 0 and next_line[0] == '#':
+ next_char = next_line[1]
+ if isspace(next_char):
+ append(unit.othercomments, parse_state.decode(next_line))
+ elif next_char == '.':
+ append(unit.automaticcomments, parse_state.decode(next_line))
+ elif next_char == '|':
+ # Read all the lines starting with #|
+ prevmsgid_lines = read_prevmsgid_lines(parse_state)
+ # Create a parse state object that holds these lines
+ ps = parse_state.new_input(iter(prevmsgid_lines))
+ # Parse the msgctxt if any
+ parse_prev_msgctxt(ps, unit)
+ # Parse the msgid if any
+ parse_prev_msgid(ps, unit)
+ # Parse the msgid_plural if any
+ parse_prev_msgid_plural(ps, unit)
+ return parse_state.next_line
+ elif next_char == ':':
+ append(unit.sourcecomments, parse_state.decode(next_line))
+ elif next_char == ',':
+ append(unit.typecomments, parse_state.decode(next_line))
+ elif next_char == '~':
+ # Special case: we refuse to parse obsoletes: they are done
+ # elsewhere to ensure we reuse the normal unit parsing code
+ return None
+ else:
+ return None
+ return parse_state.read_line()
+ else:
+ return None
+
+def parse_comments(parse_state, unit):
+ if not parse_comment(parse_state, unit):
+ return None
+ else:
+ while parse_comment(parse_state, unit):
+ pass
+ return True
+
+def read_obsolete_lines(parse_state):
+ """Read all the lines belonging to the current unit if obsolete."""
+ obsolete_lines = []
+ if startswith(parse_state.next_line, '#~ '):
+ append(obsolete_lines, parse_state.read_line()[3:])
+ else:
+ return obsolete_lines
+ # Be extra careful that we don't start reading into a new unit. We detect
+ # that with #~ msgid followed by a space (to ensure msgid_plural works)
+ next_line = parse_state.next_line
+ if startswith(next_line, '#~ msgid ') and obsolete_lines[-1].startswith('msgctxt'):
+ append(obsolete_lines, parse_state.read_line()[3:])
+ next_line = parse_state.next_line
+ while startswith(next_line, '#~ ') and not (startswith(next_line, '#~ msgid ') or startswith(next_line, '#~ msgctxt')):
+ append(obsolete_lines, parse_state.read_line()[3:])
+ next_line = parse_state.next_line
+ return obsolete_lines
+
+def parse_obsolete(parse_state, unit):
+ obsolete_lines = read_obsolete_lines(parse_state)
+ if obsolete_lines == []:
+ return None
+ unit = parse_unit(parse_state.new_input(iter(obsolete_lines)), unit)
+ if unit is not None:
+ unit.makeobsolete()
+ return unit
+
+def parse_quoted(parse_state, start_pos = 0):
+ line = parse_state.next_line
+ left = find(line, '"', start_pos)
+ if left == start_pos or isspace(line[start_pos:left]):
+ right = rfind(line, '"')
+ if left != right and line[right - 1] != '\\': # If we found a terminating quote
+ return parse_state.read_line()[left:right+1]
+ else: # If there is no terminating quote
+ return parse_state.read_line()[left:] + '"'
+ return None
+
+def parse_msg_comment(parse_state, msg_comment_list, string):
+ while string is not None:
+ append(msg_comment_list, parse_state.decode(string))
+ if find(string, '\\n') > -1:
+ return parse_quoted(parse_state)
+ string = parse_quoted(parse_state)
+ return None
+
+def parse_multiple_quoted(parse_state, msg_list, msg_comment_list, first_start_pos=0):
+ string = parse_quoted(parse_state, first_start_pos)
+ while string is not None:
+ if not startswith(string, '"_:'):
+ append(msg_list, parse_state.decode(string))
+ string = parse_quoted(parse_state)
+ else:
+ string = parse_msg_comment(parse_state, msg_comment_list, string)
+
+def parse_message(parse_state, start_of_string, start_of_string_len, msg_list, msg_comment_list = []):
+ if startswith(parse_state.next_line, start_of_string):
+ return parse_multiple_quoted(parse_state, msg_list, msg_comment_list, start_of_string_len)
+
+def parse_msgctxt(parse_state, unit):
+ parse_message(parse_state, 'msgctxt', 7, unit.msgctxt)
+ return len(unit.msgctxt) > 0
+
+def parse_msgid(parse_state, unit):
+ parse_message(parse_state, 'msgid', 5, unit.msgid, unit.msgidcomments)
+ return len(unit.msgid) > 0 or len(unit.msgidcomments) > 0
+
+def parse_msgstr(parse_state, unit):
+ parse_message(parse_state, 'msgstr', 6, unit.msgstr)
+ return len(unit.msgstr) > 0
+
+def parse_msgid_plural(parse_state, unit):
+ parse_message(parse_state, 'msgid_plural', 12, unit.msgid_plural, unit.msgid_pluralcomments)
+ return len(unit.msgid_plural) > 0 or len(unit.msgid_pluralcomments) > 0
+
+MSGSTR_ARRAY_ENTRY_LEN = len('msgstr[')
+
+def add_to_dict(msgstr_dict, line, right_bracket_pos, entry):
+ index = int(line[MSGSTR_ARRAY_ENTRY_LEN:right_bracket_pos])
+ if index not in msgstr_dict:
+ msgstr_dict[index] = []
+ msgstr_dict[index].extend(entry)
+
+def get_entry(parse_state, right_bracket_pos):
+ entry = []
+ parse_message(parse_state, 'msgstr[', right_bracket_pos + 1, entry)
+ return entry
+
+def parse_msgstr_array_entry(parse_state, msgstr_dict):
+ line = parse_state.next_line
+ right_bracket_pos = find(line, ']', MSGSTR_ARRAY_ENTRY_LEN)
+ if right_bracket_pos >= 0:
+ entry = get_entry(parse_state, right_bracket_pos)
+ if len(entry) > 0:
+ add_to_dict(msgstr_dict, line, right_bracket_pos, entry)
+ return True
+ else:
+ return False
+ else:
+ return False
+
+def parse_msgstr_array(parse_state, unit):
+ msgstr_dict = {}
+ result = parse_msgstr_array_entry(parse_state, msgstr_dict)
+ if not result: # We require at least one result
+ return False
+ while parse_msgstr_array_entry(parse_state, msgstr_dict):
+ pass
+ unit.msgstr = msgstr_dict
+ return True
+
+def parse_plural(parse_state, unit):
+ if parse_msgid_plural(parse_state, unit) and \
+ (parse_msgstr_array(parse_state, unit) or parse_msgstr(parse_state, unit)):
+ return True
+ else:
+ return False
+
+def parse_msg_entries(parse_state, unit):
+ parse_msgctxt(parse_state, unit)
+ if parse_msgid(parse_state, unit) and \
+ (parse_msgstr(parse_state, unit) or parse_plural(parse_state, unit)):
+ return True
+ else:
+ return False
+
+def parse_unit(parse_state, unit=None):
+ unit = unit or parse_state.UnitClass()
+ parsed_comments = parse_comments(parse_state, unit)
+ obsolete_unit = parse_obsolete(parse_state, unit)
+ if obsolete_unit is not None:
+ return obsolete_unit
+ parsed_msg_entries = parse_msg_entries(parse_state, unit)
+ if parsed_comments or parsed_msg_entries:
+ return unit
+ else:
+ return None
+
+def set_encoding(parse_state, store, unit):
+ charset = None
+ if isinstance(unit.msgstr, list) and len(unit.msgstr) > 0 and isinstance(unit.msgstr[0], str):
+ charset = re.search("charset=([^\\s\\\\n]+)", "".join(unit.msgstr))
+ if charset:
+ encoding = charset.group(1)
+ if encoding != 'CHARSET':
+ store._encoding = encoding
+ else:
+ store._encoding = 'utf-8'
+ else:
+ store._encoding = 'utf-8'
+ parse_state.encoding = store._encoding
+
+def decode_list(lst, decode):
+ return [decode(item) for item in lst]
+
+def decode_header(unit, decode):
+ for attr in ('msgctxt', 'msgid', 'msgid_pluralcomments',
+ 'msgid_plural', 'msgstr', 'obsoletemsgctxt',
+ 'obsoletemsgid', 'obsoletemsgid_pluralcomments',
+ 'obsoletemsgid_plural', 'obsoletemsgstr',
+ 'othercomments', 'automaticcomments', 'sourcecomments',
+ 'typecomments', 'msgidcomments', 'obsoletemsgidcomments'):
+ element = getattr(unit, attr)
+ if isinstance(element, list):
+ setattr(unit, attr, decode_list(element, decode))
+ else:
+ setattr(unit, attr, dict([(key, decode_list(value, decode)) for key, value in element.items()]))
+
+def parse_header(parse_state, store):
+ first_unit = parse_unit(parse_state)
+ if first_unit is None:
+ return None
+ set_encoding(parse_state, store, first_unit)
+ decode_header(first_unit, parse_state.decode)
+ return first_unit
+
+def parse_units(parse_state, store):
+ unit = parse_header(parse_state, store)
+ while unit:
+ store.addunit(unit)
+ unit = parse_unit(parse_state)
+ return parse_state.eof
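The plural-handling code above pulls the index N out of `msgstr[N] "..."` lines via the bracket positions. A hypothetical standalone version of the index extraction performed by `parse_msgstr_array_entry`:

```python
MSGSTR_PREFIX_LEN = len('msgstr[')

def msgstr_index(line):
    # Pull N out of a 'msgstr[N] "..."' line; return None for anything
    # that is not a plural msgstr entry (no prefix or no closing bracket).
    if not line.startswith('msgstr['):
        return None
    right_bracket = line.find(']', MSGSTR_PREFIX_LEN)
    if right_bracket < 0:
        return None
    return int(line[MSGSTR_PREFIX_LEN:right_bracket])
```

Here `msgstr_index('msgstr[0] "een"')` gives `0`, while a plain `msgstr "..."` line gives `None`, which is what lets `parse_msgstr_array` fall back to `parse_msgstr` for non-plural units.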
Modified: translate-toolkit/branches/upstream/current/translate/storage/poxliff.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/poxliff.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/poxliff.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/poxliff.py Sun Feb 8 16:49:31 2009
@@ -37,7 +37,7 @@
class PoXliffUnit(xliff.xliffunit):
"""A class to specifically handle the plural units created from a po file."""
- def __init__(self, source, empty=False):
+ def __init__(self, source=None, empty=False, encoding="UTF-8"):
self.units = []
if empty:
@@ -230,6 +230,9 @@
def isheader(self):
return "gettext-domain-header" in (self.getrestype() or "")
+ def istranslatable(self):
+ return super(PoXliffUnit, self).istranslatable() and not self.isheader()
+
def createfromxmlElement(cls, element, namespace=None):
if element.tag.endswith("trans-unit"):
object = cls(None, empty=True)
@@ -240,7 +243,7 @@
group = cls(None, empty=True)
group.xmlelement = element
group.namespace = namespace
- units = element.findall('.//%s' % group.namespaced('trans-unit'))
+ units = list(element.iterdescendants(group.namespaced('trans-unit')))
for unit in units:
subunit = xliff.xliffunit.createfromxmlElement(unit)
subunit.namespace = namespace
@@ -334,14 +337,15 @@
xml = xmlsrc
self.document = etree.fromstring(xml).getroottree()
self.initbody()
- assert self.document.getroot().tag == self.namespaced(self.rootNode)
- groups = self.document.findall(".//%s" % self.namespaced("group"))
+ root_node = self.document.getroot()
+ assert root_node.tag == self.namespaced(self.rootNode)
+ groups = root_node.iterdescendants(self.namespaced("group"))
pluralgroups = filter(ispluralgroup, groups)
- termEntries = self.body.findall('.//%s' % self.namespaced(self.UnitClass.rootNode))
- if termEntries is None:
- return
+ termEntries = root_node.iterdescendants(self.namespaced(self.UnitClass.rootNode))
singularunits = filter(isnonpluralunit, termEntries)
+ if len(singularunits) == 0:
+ return
pluralunit_iter = pluralunits(pluralgroups)
try:
nextplural = pluralunit_iter.next()
@@ -351,11 +355,11 @@
for entry in singularunits:
term = self.UnitClass.createfromxmlElement(entry, namespace=self.namespace)
if nextplural and unicode(term.source) in nextplural.source.strings:
- self.units.append(nextplural)
+ self.addunit(nextplural, new=False)
try:
nextplural = pluralunit_iter.next()
except StopIteration, i:
nextplural = None
else:
- self.units.append(term)
-
+ self.addunit(term, new=False)
+
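The diff above replaces `findall('.//trans-unit')` with lxml's lazy `iterdescendants()`. The closest stdlib analogue is `Element.iter(tag)`, sketched here (note: unlike `iterdescendants`, `iter()` would also yield the element itself if its own tag matched):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    '<body><group><trans-unit id="1"/></group><trans-unit id="2"/></body>')
eager = root.findall('.//trans-unit')   # list built up front
lazy = root.iter('trans-unit')          # iterator, walks descendants on demand
assert [u.get('id') for u in eager] == ['1', '2']
assert [u.get('id') for u in lazy] == ['1', '2']
```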
Modified: translate-toolkit/branches/upstream/current/translate/storage/properties.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/properties.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/properties.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/properties.py Sun Feb 8 16:49:31 2009
@@ -189,8 +189,3 @@
lines.append(str(unit))
return "".join(lines)
-if __name__ == '__main__':
- import sys
- pf = propfile(sys.stdin)
- sys.stdout.write(str(pf))
-
Modified: translate-toolkit/branches/upstream/current/translate/storage/pypo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/pypo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/pypo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/pypo.py Sun Feb 8 16:49:31 2009
@@ -29,6 +29,9 @@
from translate.lang import data
from translate.storage import pocommon, base
import re
+import copy
+import cStringIO
+import poparser
lsep = "\n#: "
"""Seperator for #: entries"""
@@ -86,6 +89,7 @@
if len(lines) != 2 or lines[1]:
polines.extend(['""'])
for line in lines[:-1]:
+ #TODO: We should only wrap after escaping
lns = wrapline(line)
if len(lns) > 0:
for ln in lns[:-1]:
@@ -122,18 +126,8 @@
# return False
# return True
-"""
-From the GNU gettext manual:
- WHITE-SPACE
- # TRANSLATOR-COMMENTS
- #. AUTOMATIC-COMMENTS
- #| PREVIOUS MSGID (Gettext 0.16 - check if this is the correct position - not yet implemented)
- #: REFERENCE...
- #, FLAG...
- msgctxt CONTEXT (Gettext 0.15)
- msgid UNTRANSLATED-STRING
- msgstr TRANSLATED-STRING
-"""
+def is_null(lst):
+ return lst == [] or len(lst) == 1 and lst[0] == '""'
def extractstr(string):
left = string.find('"')
@@ -147,6 +141,9 @@
# othercomments = [] # # this is another comment
# automaticcomments = [] # #. comment extracted from the source code
# sourcecomments = [] # #: sourcefile.xxx:35
+ # prev_msgctxt = [] # #| The previous values that msgctxt and msgid held
+ # prev_msgid = [] #
+ # prev_msgid_plural = [] #
# typecomments = [] # #, fuzzy
# msgidcomments = [] # _: within msgid
# msgctxt
@@ -157,6 +154,9 @@
self._encoding = encodingToUse(encoding)
self.obsolete = False
self._initallcomments(blankall=True)
+ self.prev_msgctxt = []
+ self.prev_msgid = []
+ self.prev_msgid_plural = []
self.msgctxt = []
self.msgid = []
self.msgid_pluralcomments = []
@@ -167,9 +167,7 @@
self.obsoletemsgid_pluralcomments = []
self.obsoletemsgid_plural = []
self.obsoletemsgstr = []
- if source:
- self.setsource(source)
- super(pounit, self).__init__(source)
+ pocommon.pounit.__init__(self, source)
def _initallcomments(self, blankall=False):
"""Initialises allcomments"""
@@ -180,39 +178,67 @@
self.typecomments = []
self.msgidcomments = []
self.obsoletemsgidcomments = []
- self.allcomments = [self.othercomments,
- self.automaticcomments,
- self.sourcecomments,
- self.typecomments,
- self.msgidcomments,
- self.obsoletemsgidcomments]
-
- def getsource(self):
- """Returns the unescaped msgid"""
- multi = multistring(unquotefrompo(self.msgid), self._encoding)
+
+ def _get_all_comments(self):
+ return [self.othercomments,
+ self.automaticcomments,
+ self.sourcecomments,
+ self.typecomments,
+ self.msgidcomments,
+ self.obsoletemsgidcomments]
+
+ allcomments = property(_get_all_comments)
+
+ def _get_source_vars(self, msgid, msgid_plural):
+ multi = multistring(unquotefrompo(msgid), self._encoding)
if self.hasplural():
- pluralform = unquotefrompo(self.msgid_plural)
+ pluralform = unquotefrompo(msgid_plural)
if isinstance(pluralform, str):
pluralform = pluralform.decode(self._encoding)
multi.strings.append(pluralform)
return multi
+
+ def _set_source_vars(self, source):
+ msgid = None
+ msgid_plural = None
+ if isinstance(source, str):
+ source = source.decode(self._encoding)
+ if isinstance(source, multistring):
+ source = source.strings
+ if isinstance(source, list):
+ msgid = quoteforpo(source[0])
+ if len(source) > 1:
+ msgid_plural = quoteforpo(source[1])
+ else:
+ msgid_plural = []
+ else:
+ msgid = quoteforpo(source)
+ msgid_plural = []
+ return msgid, msgid_plural
+
+ def getsource(self):
+ """Returns the unescaped msgid"""
+ return self._get_source_vars(self.msgid, self.msgid_plural)
def setsource(self, source):
"""Sets the msgid to the given (unescaped) value.
@param source: an unescaped source string.
"""
- if isinstance(source, str):
- source = source.decode(self._encoding)
- if isinstance(source, multistring):
- source = source.strings
- if isinstance(source, list):
- self.msgid = quoteforpo(source[0])
- if len(source) > 1:
- self.msgid_plural = quoteforpo(source[1])
- else:
- self.msgid = quoteforpo(source)
+ self.msgid, self.msgid_plural = self._set_source_vars(source)
source = property(getsource, setsource)
+
+ def _get_prev_source(self):
+ """Returns the unescaped msgid"""
+ return self._get_source_vars(self.prev_msgid, self.prev_msgid_plural)
+
+ def _set_prev_source(self, source):
+ """Sets the msgid to the given (unescaped) value.
+
+ @param source: an unescaped source string.
+ """
+ self.prev_msgid, self.prev_msgid_plural = self._set_source_vars(source)
+ prev_source = property(_get_prev_source, _set_prev_source)
def gettarget(self):
"""Returns the unescaped msgstr"""
@@ -226,8 +252,6 @@
"""Sets the msgstr to the given (unescaped) value"""
if isinstance(target, str):
target = target.decode(self._encoding)
- if target == self.target:
- return
if self.hasplural():
if isinstance(target, multistring):
target = target.strings
@@ -265,8 +289,8 @@
def addnote(self, text, origin=None, position="append"):
"""This is modeled on the XLIFF method. See xliff.py::xliffunit.addnote"""
- # We don't want to put in an empty '#' without a real comment:
- if not text:
+ # ignore empty strings and strings without non-space characters
+ if not (text and text.strip()):
return
text = data.forceunicode(text)
commentlist = self.othercomments
@@ -291,40 +315,15 @@
self.othercomments = []
def copy(self):
- newpo = self.__class__()
- newpo.othercomments = self.othercomments[:]
- newpo.automaticcomments = self.automaticcomments[:]
- newpo.sourcecomments = self.sourcecomments[:]
- newpo.typecomments = self.typecomments[:]
- newpo.obsolete = self.obsolete
- newpo.msgidcomments = self.msgidcomments[:]
- newpo._initallcomments()
- newpo.msgctxt = self.msgctxt[:]
- newpo.msgid = self.msgid[:]
- newpo.msgid_pluralcomments = self.msgid_pluralcomments[:]
- newpo.msgid_plural = self.msgid_plural[:]
- if isinstance(self.msgstr, dict):
- newpo.msgstr = self.msgstr.copy()
- else:
- newpo.msgstr = self.msgstr[:]
-
- newpo.obsoletemsgctxt = self.obsoletemsgctxt[:]
- newpo.obsoletemsgid = self.obsoletemsgid[:]
- newpo.obsoletemsgid_pluralcomments = self.obsoletemsgid_pluralcomments[:]
- newpo.obsoletemsgid_plural = self.obsoletemsgid_plural[:]
- if isinstance(self.obsoletemsgstr, dict):
- newpo.obsoletemsgstr = self.obsoletemsgstr.copy()
- else:
- newpo.obsoletemsgstr = self.obsoletemsgstr[:]
- return newpo
-
- def msgidlen(self):
+ return copy.deepcopy(self)
+
+ def _msgidlen(self):
if self.hasplural():
return len(unquotefrompo(self.msgid).strip()) + len(unquotefrompo(self.msgid_plural).strip())
else:
return len(unquotefrompo(self.msgid).strip())
- def msgstrlen(self):
+ def _msgstrlen(self):
if isinstance(self.msgstr, dict):
combinedstr = "\n".join([unquotefrompo(msgstr).strip() for msgstr in self.msgstr.itervalues()])
return len(combinedstr.strip())
@@ -410,18 +409,18 @@
self.markfuzzy()
def isheader(self):
- #return (self.msgidlen() == 0) and (self.msgstrlen() > 0) and (len(self.msgidcomments) == 0)
- #rewritten here for performance:
- return ((self.msgid == [] or self.msgid == ['""']) and
- not (self.msgstr == [] or self.msgstr == ['""'])
+ #return (self._msgidlen() == 0) and (self._msgstrlen() > 0) and (len(self.msgidcomments) == 0)
+ #rewritten here for performance:
+ return (is_null(self.msgid)
+ and not is_null(self.msgstr)
and self.msgidcomments == []
- and (self.msgctxt == [] or self.msgctxt == ['""'])
- and (self.sourcecomments == [] or self.sourcecomments == [""]))
+ and is_null(self.msgctxt)
+ )
def isblank(self):
if self.isheader() or len(self.msgidcomments):
return False
- if (self.msgidlen() == 0) and (self.msgstrlen() == 0):
+ if (self._msgidlen() == 0) and (self._msgstrlen() == 0):
return True
return False
# TODO: remove:
@@ -512,112 +511,8 @@
"""returns whether this pounit contains plural strings..."""
return len(self.msgid_plural) > 0
- def parselines(self, lines):
- inmsgctxt = 0
- inmsgid = 0
- inmsgid_comment = 0
- inmsgid_plural = 0
- inmsgstr = 0
- msgstr_pluralid = None
- linesprocessed = 0
- for line in lines:
- line = line + "\n"
- linesprocessed += 1
- if len(line) == 0:
- continue
- elif line[0] == '#':
- if inmsgstr and not line[1] == '~':
- # if we're already in the message string, this is from the next element
- break
- if line[1] == '.':
- self.automaticcomments.append(line)
- elif line[1] == ':':
- self.sourcecomments.append(line)
- elif line[1] == ',':
- self.typecomments.append(line)
- elif line[1] == '~':
- line = line[3:]
- self.obsolete = True
- else:
- self.othercomments.append(line)
- if line.startswith('msgid_plural'):
- inmsgctxt = 0
- inmsgid = 0
- inmsgid_plural = 1
- inmsgstr = 0
- inmsgid_comment = 0
- elif line.startswith('msgctxt'):
- inmsgctxt = 1
- inmsgid = 0
- inmsgid_plural = 0
- inmsgstr = 0
- inmsgid_comment = 0
- elif line.startswith('msgid'):
- # if we just finished a msgstr or msgid_plural, there is probably an
- # empty line missing between the units, so let's stop the parsing now.
- if inmsgstr or inmsgid_plural:
- break
- inmsgctxt = 0
- inmsgid = 1
- inmsgid_plural = 0
- inmsgstr = 0
- inmsgid_comment = 0
- elif line.startswith('msgstr'):
- inmsgctxt = 0
- inmsgid = 0
- inmsgid_plural = 0
- inmsgstr = 1
- if line.startswith('msgstr['):
- msgstr_pluralid = int(line[len('msgstr['):line.find(']')].strip())
- else:
- msgstr_pluralid = None
- extracted = extractstr(line)
- if not extracted is None:
- if inmsgctxt:
- self.msgctxt.append(extracted)
- elif inmsgid:
- # TODO: improve kde comment detection
- if extracted.find("_:") != -1:
- inmsgid_comment = 1
- if inmsgid_comment:
- self.msgidcomments.append(extracted)
- else:
- self.msgid.append(extracted)
- if inmsgid_comment and extracted.find("\\n") != -1:
- inmsgid_comment = 0
- elif inmsgid_plural:
- if extracted.find("_:") != -1:
- inmsgid_comment = 1
- if inmsgid_comment:
- self.msgid_pluralcomments.append(extracted)
- else:
- self.msgid_plural.append(extracted)
- if inmsgid_comment and extracted.find("\\n") != -1:
- inmsgid_comment = 0
- elif inmsgstr:
- if msgstr_pluralid is None:
- self.msgstr.append(extracted)
- else:
- if type(self.msgstr) == list:
- self.msgstr = {0: self.msgstr}
- if msgstr_pluralid not in self.msgstr:
- self.msgstr[msgstr_pluralid] = []
- self.msgstr[msgstr_pluralid].append(extracted)
- if self.obsolete:
- self.makeobsolete()
- # If this unit is the header, we have to get the encoding to ensure that no
- # methods are called that need the encoding before we obtained it.
- if self.isheader():
- charset = re.search("charset=([^\\s]+)", unquotefrompo(self.msgstr))
- if charset:
- self._encoding = encodingToUse(charset.group(1))
- return linesprocessed
-
def parse(self, src):
- if isinstance(src, str):
- # This has not been decoded yet, so we need to make a plan
- src = src.decode(self._encoding)
- return self.parselines(src.split("\n"))
+ return poparser.parse_unit(poparser.ParseState(cStringIO.StringIO(src), pounit), self)
def _getmsgpartstr(self, partname, partlines, partcomments=""):
if isinstance(partlines, dict):
@@ -676,6 +571,16 @@
def _getoutput(self):
"""return this po element as a string"""
+ def add_prev_msgid_lines(lines, header, var):
+ if len(var) > 0:
+ lines.append("#| %s %s\n" % (header, var[0]))
+ lines.extend("#| %s\n" % line for line in var[1:])
+
+ def add_prev_msgid_info(lines):
+ add_prev_msgid_lines(lines, 'msgctxt', self.prev_msgctxt)
+ add_prev_msgid_lines(lines, 'msgid', self.prev_msgid)
+ add_prev_msgid_lines(lines, 'msgid_plural', self.prev_msgid_plural)
+
lines = []
lines.extend(self.othercomments)
if self.isobsolete():
@@ -695,11 +600,12 @@
return "".join(lines)
# if there's no msgid don't do msgid and string, unless we're the header
# this will also discard any comments other than plain othercomments...
- if (len(self.msgid) == 0) or ((len(self.msgid) == 1) and (self.msgid[0] == '""')):
+ if is_null(self.msgid):
if not (self.isheader() or self.msgidcomments or self.sourcecomments):
return "".join(lines)
lines.extend(self.automaticcomments)
lines.extend(self.sourcecomments)
+ add_prev_msgid_info(lines)
lines.extend(self.typecomments)
if self.msgctxt:
lines.append(self._getmsgpartstr("msgctxt", self.msgctxt))
@@ -814,45 +720,11 @@
self.filename = input.name
elif not getattr(self, 'filename', ''):
self.filename = ''
- if hasattr(input, "read"):
- posrc = input.read()
- input.close()
- input = posrc
- # TODO: change this to a proper parser that doesn't do line-by-line madness
- lines = input.split("\n")
- start = 0
- end = 0
- # make only the first one the header
- linesprocessed = 0
- is_decoded = False
- while end <= len(lines):
- if (end == len(lines)) or (not lines[end].strip()): # end of lines or blank line
- newpe = self.UnitClass(encoding=self._encoding)
- unit_lines = lines[start:end]
- # We need to work carefully if we haven't decoded properly yet.
- # So let's solve this temporarily until we actually get the
- # encoding from the header.
- if not is_decoded:
- unit_lines = [line.decode('ascii', 'ignore') for line in unit_lines]
- linesprocessed = newpe.parselines(unit_lines)
- start += linesprocessed
- # TODO: find a better way of working out if we actually read anything
- if linesprocessed >= 1 and newpe._getoutput():
- self.units.append(newpe)
- if not is_decoded:
- if newpe.isheader(): # If there is a header...
- if "Content-Type" in self.parseheader(): # and a Content-Type...
- if self._encoding.lower() != 'charset': # with a valid charset...
- self._encoding = newpe._encoding # then change the encoding
- # otherwise we'll decode using UTF-8
- lines = self.decode(lines)
- self.units = []
- start = 0
- end = 0
- is_decoded = True
- end = end+1
+ if isinstance(input, str):
+ input = cStringIO.StringIO(input)
+ poparser.parse_units(poparser.ParseState(input, pounit), self)
except Exception, e:
- raise base.ParseError()
+ raise base.ParseError(e)
def removeduplicates(self, duplicatestyle="merge"):
"""make sure each msgid is unique ; merge comments etc from duplicates into original"""
@@ -869,7 +741,7 @@
msgid = unquotefrompo(thepo.msgidcomments) + unquotefrompo(thepo.msgid)
else:
msgid = unquotefrompo(thepo.msgid)
- if thepo.isheader():
+ if thepo.isheader() and not thepo.getlocations():
# header msgids shouldn't be merged...
uniqueunits.append(thepo)
elif duplicatestyle == "msgid_comment_all":
@@ -951,8 +823,3 @@
if not (unit.isheader() or unit.isobsolete()):
yield unit
-if __name__ == '__main__':
- import sys
- pf = pofile(sys.stdin)
- sys.stdout.write(str(pf))
-
Modified: translate-toolkit/branches/upstream/current/translate/storage/qm.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/qm.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/qm.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/qm.py Sun Feb 8 16:49:31 2009
@@ -62,6 +62,7 @@
class qmfile(base.TranslationStore):
"""A class representing a .qm file."""
UnitClass = qmunit
+ Name = _("Qt .qm file")
Mimetypes = ["application/x-qm"]
Extensions = ["qm"]
_binary = True
@@ -167,3 +168,6 @@
subsection_name = "Unknown"
print >> sys.stderr, "Unimplemented: %s %s" % (subsection, subsection_name)
return
+
+ def savefile(self, storefile):
+ raise Exception("Writing of .qm files is not supported yet")
Modified: translate-toolkit/branches/upstream/current/translate/storage/qph.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/qph.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/qph.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/qph.py Sun Feb 8 16:49:31 2009
@@ -27,6 +27,10 @@
.qph Qt Phrase Book Files are human-readable XML files containing standard
phrases and their translations. These files are created and updated by Qt
Linguist and may be used by any number of projects and applications.
+
+A DTD to define the format does not seem to exist, but the following U{code
+<http://www.google.com/codesearch?hl=en&q=show:gtsFsbhpVeE:KeGnQG0wDCQ:xOXsNYqccyE&sa=N&ct=rd&cs_p=ftp://ftp.trolltech.com/qt/source/qt-x11-opensource-4.0.0-b1.tar.gz&cs_f=qt-x11-opensource-4.0.0-b1/tools/linguist/linguist/phrase.cpp>}
+provides the reference implementation for the Qt Linguist product.
"""
from translate.storage import lisa
@@ -77,95 +81,43 @@
def removenotes(self):
"""Remove all the translator notes."""
- note = self.xmlelement.find(self.namespaced("comment"))
+ note = self.xmlelement.find(self.namespaced("definition"))
if not note is None:
self.xmlelement.remove(note)
-
- def getid(self):
- return self.source
-
- def merge(self, otherunit, overwrite=False, comments=True):
- super(QphUnit, self).merge(otherunit, overwrite, comments)
class QphFile(lisa.LISAfile):
"""Class representing a QPH file store."""
UnitClass = QphUnit
- Name = "Qt Phrase Book File"
+ Name = _("Qt Phrase Book File")
Mimetypes = ["application/x-qph"]
Extensions = ["qph"]
rootNode = "QPH"
- # We will switch out .body to fit with the context we are working on
- bodyNode = "context"
+ bodyNode = "QPH"
XMLskeleton = '''<!DOCTYPE QPH>
<QPH>
</QPH>
'''
namespace = ''
- def __init__(self, *args, **kwargs):
- self._contextname = None
- lisa.LISAfile.__init__(self, *args, **kwargs)
+ def initbody(self):
+ """Initialises self.body so it never needs to be retrieved from the XML again."""
+ self.namespace = self.document.getroot().nsmap.get(None, None)
+ self.body = self.document.getroot() # The root node contains the units
- def initbody(self):
- """Initialises self.body."""
- self.namespace = self.document.getroot().nsmap.get(None, None)
- if self._contextname:
- self.body = self.getcontextnode(self._contextname)
- else:
- self.body = self.document.getroot()
-
- def createcontext(self, contextname, comment=None):
- """Creates a context node with an optional comment"""
- context = etree.SubElement(self.document.getroot(), self.namespaced(self.bodyNode))
- if comment:
- comment_node = context.SubElement(context, "comment")
- comment_node.text = comment
- return context
-
- def getcontextname(self, contextnode):
- """Returns the name of the given context."""
- return filenode.find(self.namespaced("name")).text
-
- def getcontextnames(self):
- """Returns all contextnames in this TS file."""
- contextnodes = self.document.findall(self.namespaced("context"))
- contextnames = [self.getcontextname(contextnode) for contextnode in contextnodes]
- contextnames = filter(None, contextnames)
- if len(contextnames) == 1 and contextnames[0] == '':
- contextnames = []
- return contextnames
-
- def getcontextnode(self, contextname):
- """Finds the contextnode with the given name."""
- contextnodes = self.document.findall(self.namespaced("context"))
- for contextnode in contextnodes:
- if self.getcontextname(contextnode) == contextname:
- return contextnode
- return None
-
- def addunit(self, unit, new=True, contextname=None, createifmissing=False):
- """adds the given trans-unit to the last used body node if the contextname has changed it uses the slow method instead (will create the nodes required if asked). Returns success"""
- if self._contextname != contextname:
- if not self.switchcontext(contextname, createifmissing):
- return None
- super(QphFile, self).addunit(unit, new)
-# unit._context_node = self.getcontextnode(self._contextname)
-# lisa.setXMLspace(unit.xmlelement, "preserve")
- return unit
-
- def switchcontext(self, contextname, createifmissing=False):
- """Switch the current context to the one named contextname, optionally
- creating it if it doesn't exist."""
- self._context_name = contextname
- contextnode = self.getcontextnode(contextname)
- if contextnode is None:
- if not createifmissing:
- return False
- contextnode = self.createcontextnode(contextname)
- self.document.getroot().append(contextnode)
-
- self.body = contextnode
- if self.body is None:
- return False
- return True
+ def __str__(self):
+ """Converts to a string containing the file's XML.
+
+ We have to override this to mimic the Qt convention:
+ - no XML declaration
+ - plain DOCTYPE that lxml seems to ignore
+ """
+ # A bug in lxml means we have to output the doctype ourselves. For
+ # more information, see:
+ # http://codespeak.net/pipermail/lxml-dev/2008-October/004112.html
+ # The problem was fixed in lxml 2.1.3
+ output = etree.tostring(self.document, pretty_print=True,
+ xml_declaration=False, encoding='utf-8')
+ if not "<!DOCTYPE QPH>" in output[:30]:
+ output = "<!DOCTYPE QPH>" + output
+ return output
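The same prepend-the-DOCTYPE workaround can be sketched with the standard library serializer, which likewise omits the DOCTYPE; this is illustrative only, since the real code above uses lxml:

```python
import xml.etree.ElementTree as ET

def serialize_qph(root):
    """Serialize a QPH tree, prepending the DOCTYPE the serializer omits.

    Mirrors the workaround in QphFile.__str__ above, but with the standard
    library instead of lxml.
    """
    output = ET.tostring(root, encoding="unicode")
    if "<!DOCTYPE QPH>" not in output[:30]:
        output = "<!DOCTYPE QPH>\n" + output
    return output

doc = ET.Element("QPH")
doc.append(ET.Element("phrase"))
text = serialize_qph(doc)
```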
Added: translate-toolkit/branches/upstream/current/translate/storage/symbian.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/symbian.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/symbian.py (added)
+++ translate-toolkit/branches/upstream/current/translate/storage/symbian.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,68 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import re
+
+charset_re = re.compile('CHARACTER_SET[ ]+(?P<charset>.*)')
+header_item_or_end_re = re.compile('(((?P<key>[^ ]+)(?P<space>[ ]*:[ ]*)(?P<value>.*))|(?P<end_comment>[*]/))')
+header_item_re = re.compile('(?P<key>[^ ]+)(?P<space>[ ]*:[ ]*)(?P<value>.*)')
+string_entry_re = re.compile('(?P<start>rls_string[ ]+)(?P<id>[^ ]+)(?P<space>[ ]+)(?P<str>.*)')
+
+def identity(x):
+ return x
+
+class ParseState(object):
+ def __init__(self, f, charset, read_hook=identity):
+ self.f = f
+ self.charset = charset
+ self.current_line = u''
+ self.read_hook = read_hook
+ self.read_line()
+
+ def read_line(self):
+ current_line = self.current_line
+ self.read_hook(current_line)
+ self.current_line = self.f.next().decode(self.charset)
+ return current_line
+
+def read_while(ps, f, test):
+ result = f(ps.current_line)
+ while test(result):
+ ps.read_line()
+ result = f(ps.current_line)
+ return result
+
+def eat_whitespace(ps):
+ read_while(ps, identity, lambda line: line.strip() == '')
+
+def skip_no_translate(ps):
+ if ps.current_line.startswith('// DO NOT TRANSLATE'):
+ ps.read_line()
+ read_while(ps, identity, lambda line: not line.startswith('// DO NOT TRANSLATE'))
+ ps.read_line()
+ eat_whitespace(ps)
+
+def read_charset(lines):
+ for line in lines:
+ match = charset_re.match(line)
+ if match is not None:
+ return match.groupdict()['charset']
+ return 'UTF-8'
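The regular expressions above drive the Symbian .rls parsing. A small demonstration of how they pick apart a CHARACTER_SET declaration and an rls_string entry; the regexes are copied from the hunk above, while the sample input lines are invented:

```python
import re

# Regexes copied from symbian.py above; the sample input lines are invented.
charset_re = re.compile('CHARACTER_SET[ ]+(?P<charset>.*)')
string_entry_re = re.compile('(?P<start>rls_string[ ]+)(?P<id>[^ ]+)(?P<space>[ ]+)(?P<str>.*)')

def sniff_charset(lines):
    """Mirror read_charset(): return the declared charset, or UTF-8 by default."""
    for line in lines:
        match = charset_re.match(line)
        if match is not None:
            return match.groupdict()['charset']
    return 'UTF-8'

charset = sniff_charset(['// localisation file', 'CHARACTER_SET UTF8'])
entry = string_entry_re.match('rls_string STRING_r_example "Hello"')
```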
Modified: translate-toolkit/branches/upstream/current/translate/storage/tbx.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/tbx.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/tbx.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/tbx.py Sun Feb 8 16:49:31 2009
@@ -47,7 +47,7 @@
class tbxfile(lisa.LISAfile):
"""Class representing a TBX file store."""
UnitClass = tbxunit
- Name = "TBX file"
+ Name = _("TBX file")
Mimetypes = ["application/x-tbx"]
Extensions = ["tbx"]
rootNode = "martif"
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_base.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_base.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_base.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_base.py Sun Feb 8 16:49:31 2009
@@ -6,6 +6,7 @@
from translate.storage import base
from py import test
import os
+import warnings
def test_force_override():
"""Tests that derived classes are not allowed to call certain functions"""
@@ -126,7 +127,7 @@
actual_notes = unit.getnotes()
assert actual_notes == expected_notes
-class TestTranslationStore:
+class TestTranslationStore(object):
"""Tests a TranslationStore.
Derived classes can reuse these tests by pointing StoreClass to a derived Store"""
StoreClass = base.TranslationStore
@@ -136,11 +137,13 @@
self.filename = "%s_%s.test" % (self.__class__.__name__, method.__name__)
if os.path.exists(self.filename):
os.remove(self.filename)
+ warnings.resetwarnings()
def teardown_method(self, method):
"""Makes sure that if self.filename was created by the method, it is cleaned up"""
if os.path.exists(self.filename):
os.remove(self.filename)
+ warnings.resetwarnings()
def test_create_blank(self):
"""Tests creating a new blank store"""
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_dtd.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_dtd.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_dtd.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_dtd.py Sun Feb 8 16:49:31 2009
@@ -3,6 +3,8 @@
from translate.storage import dtd
from translate.storage import test_monolingual
from translate.misc import wStringIO
+import warnings
+from py import test
def test_roundtrip_quoting():
specials = ['Fish & chips', 'five < six', 'six > five',
@@ -16,6 +18,18 @@
unquoted_special = dtd.unquotefromdtd(quoted_special)
print "special: %r\nquoted: %r\nunquoted: %r\n" % (special, quoted_special, unquoted_special)
assert special == unquoted_special
+
+def test_removeinvalidamp():
+ """tests the the removeinvalidamps function"""
+ def tester(actual, expected):
+ assert dtd.removeinvalidamps("test.name", actual) == expected
+ tester("Valid &entity; included", "Valid &entity; included")
+ tester("Valid &entity.name; included", "Valid &entity.name; included")
+ tester("Valid Ӓ included", "Valid Ӓ included")
+ tester("This & is broken", "This amp is broken")
+ tester("Mad & & &", "Mad amp &")
+ warnings.simplefilter("error")
+ assert test.raises(Warning, dtd.removeinvalidamps, "simple.warningtest", "Dimpled &Ring")
class TestDTDUnit(test_monolingual.TestMonolingualUnit):
UnitClass = dtd.dtdunit
@@ -110,6 +124,25 @@
"""checks that an &entity; in the source is retained"""
dtdsource = '<!ENTITY % realBrandDTD SYSTEM "chrome://branding/locale/brand.dtd">\n%realBrandDTD;\n'
dtdregen = self.dtdregen(dtdsource)
+ assert dtdsource == dtdregen
+
+ #test for bug #610
+ def test_entitityreference_order_in_source(self):
+ """checks that an &entity; in the source is retained"""
+ dtdsource = '<!ENTITY % realBrandDTD SYSTEM "chrome://branding/locale/brand.dtd">\n%realBrandDTD;\n<!-- some comment -->\n'
+ dtdregen = self.dtdregen(dtdsource)
+ assert dtdsource == dtdregen
+
+ # The following test is identical to the one above, except that the entity is split over two lines.
+ # This is to ensure that a recent bug fixed in dtdunit.parse() is at least partly documented.
+ # The essence of the bug was that after it had read "realBrandDTD", the line index was not reset
+ # before starting to parse the next line. It would then read the next available word (a sequence of
+ # alphanumeric characters) instead of SYSTEM and then get very confused by not finding an opening ' or
+ # " in the entity, breaking the parsing for the rest of the file.
+ dtdsource = '<!ENTITY % realBrandDTD\n SYSTEM "chrome://branding/locale/brand.dtd">\n%realBrandDTD;\n'
+ # FIXME: The following line is necessary, because of dtdfile's inability to remember the spacing of
+ # the source DTD file when converting back to DTD.
+ dtdregen = self.dtdregen(dtdsource).replace('realBrandDTD SYSTEM', 'realBrandDTD\n SYSTEM')
print dtdsource
print dtdregen
assert dtdsource == dtdregen
@@ -140,7 +173,8 @@
def test_missing_quotes(self):
"""test that we fail graacefully when a message without quotes is found (bug #161)"""
dtdsource = '<!ENTITY bad no quotes">\n<!ENTITY good "correct quotes">\n'
+ warnings.simplefilter("error")
+ assert test.raises(Warning, self.dtdparse, dtdsource)
+ warnings.resetwarnings()
dtdfile = self.dtdparse(dtdsource)
- # Check that we raise a correct warning
assert len(dtdfile.units) == 1
-
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_oo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_oo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_oo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_oo.py Sun Feb 8 16:49:31 2009
@@ -1,8 +1,28 @@
#!/usr/bin/env python
+# -*- coding: utf-8 -*-
from translate.storage import oo
from translate.misc import wStringIO
import warnings
+
+def test_makekey():
+ """checks the makekey function for consistency"""
+ assert oo.makekey(('project', r'path\to\the\sourcefile.src', 'resourcetype', 'GROUP_ID', 'LOCAL_ID', 'platform'), False) == "sourcefile.src#GROUP_ID.LOCAL_ID.resourcetype"
+ # Test with long_key, i.e. as used in multifile options
+ assert oo.makekey(('project', r'path\to\the\sourcefile.src', 'resourcetype', 'GROUP_ID', 'LOCAL_ID', 'platform'), True) == "project/path/to/the/sourcefile.src#GROUP_ID.LOCAL_ID.resourcetype"
+ assert oo.makekey(('project', r'path\to\the\sourcefile.src', 'resourcetype', 'GROUP_ID', '', 'platform'), False) == "sourcefile.src#GROUP_ID.resourcetype"
+ assert oo.makekey(('project', r'path\to\the\sourcefile.src', 'resourcetype', '', 'LOCAL_ID', 'platform'), False) == "sourcefile.src#LOCAL_ID.resourcetype"
+ assert oo.makekey(('project', r'path\to\the\sourcefile.src', '', 'GROUP_ID', 'LOCAL_ID', 'platform'), False) == "sourcefile.src#GROUP_ID.LOCAL_ID"
+ assert oo.makekey(('project', r'path\to\the\sourcefile.src', '', 'GROUP_ID', '', 'platform'), False) == "sourcefile.src#GROUP_ID"
+
+def test_escape_help_text():
+ """Check the help text escape function"""
+ assert oo.escape_help_text("If we don't know <tag> we don't <br> escape it") == "If we don't know &lt;tag&gt; we don't <br> escape it"
+ # Bug 694
+ assert oo.escape_help_text("A szó: <nyelv>") == "A szó: &lt;nyelv&gt;"
+ assert oo.escape_help_text("""...következő: "<kiszolgáló> <témakör> <elem>", ahol...""") == """...következő: "&lt;kiszolgáló&gt; &lt;témakör&gt; &lt;elem&gt;", ahol..."""
+ # See bug 694 comments 8-10 not fully resolved.
+ assert oo.escape_help_text(r"...törtjel (\) létrehozásához...") == r"...törtjel (\\) létrehozásához..."
class TestOO:
def setup_method(self, method):
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_php.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_php.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_php.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_php.py Sun Feb 8 16:49:31 2009
@@ -5,16 +5,78 @@
from translate.storage import test_monolingual
from translate.misc import wStringIO
-def test_php_escaping():
- """Test the helper escaping funtions"""
- # Encoding
- assert php.phpencode("'") == "\\'"
- assert php.phpencode('"', quotechar='"') == '\\"'
- assert php.phpencode("\n") == "\\n"
- # Decoding
- assert php.phpdecode("\\'") == "'"
- assert php.phpdecode('\\"') == '"'
- assert php.phpdecode("\\n") == "\n"
+def test_php_escaping_single_quote():
+ """Test the helper escaping funtions for 'single quotes'
+
+ The tests are built mostly from examples from the PHP
+ U{string type definition<http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.single>}.
+ """
+ # Decoding - PHP -> Python
+ assert php.phpdecode(r"\'") == r"'" # To specify a literal single quote, escape it with a backslash (\).
+ assert php.phpdecode(r'"') == r'"'
+ assert php.phpdecode(r"\\'") == r"\'" # To specify a literal backslash before a single quote, or at the end of the string, double it (\\)
+ assert php.phpdecode(r"\x") == r"\x" # Note that attempting to escape any other character will print the backslash too.
+ assert php.phpdecode(r'\t') == r'\t'
+ assert php.phpdecode(r'\n') == r'\n'
+ assert php.phpdecode(r"this is a simple string") == r"this is a simple string"
+ assert php.phpdecode("""You can also have embedded newlines in
+strings this way as it is
+okay to do""") == """You can also have embedded newlines in
+strings this way as it is
+okay to do"""
+ assert php.phpdecode(r"This will not expand: \n a newline") == r"This will not expand: \n a newline"
+ assert php.phpdecode(r'Arnold once said: "I\'ll be back"') == r'''Arnold once said: "I'll be back"'''
+ assert php.phpdecode(r'You deleted C:\\*.*?') == r"You deleted C:\*.*?"
+ assert php.phpdecode(r'You deleted C:\*.*?') == r"You deleted C:\*.*?"
+ assert php.phpdecode(r'\117\143\164\141\154') == r'\117\143\164\141\154' # We don't handle Octal like " does
+ assert php.phpdecode(r'\x48\x65\x78') == r'\x48\x65\x78' # Don't handle Hex either
+ # Should implement for false interpretation of double quoted data.
+ # Encoding - Python -> PHP
+ assert php.phpencode(r"'") == r"\'" # To specify a literal single quote, escape it with a backslash (\).
+ assert php.phpencode(r"\'") == r"\\'" # To specify a literal backslash before a single quote, or at the end of the string, double it (\\)
+ assert php.phpencode(r'"') == r'"'
+ assert php.phpencode(r"\x") == r"\x" # Note that attempting to escape any other character will print the backslash too.
+ assert php.phpencode(r"\t") == r"\t"
+ assert php.phpencode(r"\n") == r"\n"
+ assert php.phpencode(r"""String with
+newline""") == r"""String with
+newline"""
+ assert php.phpencode(r"This will not expand: \n a newline") == r"This will not expand: \n a newline"
+ assert php.phpencode(r'''Arnold once said: "I'll be back"''') == r'''Arnold once said: "I\'ll be back"'''
+ assert php.phpencode(r'You deleted C:\*.*?') == r"You deleted C:\*.*?"
+
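Taken together, the single-quote assertions above define just two escape sequences, \' and \\; every other backslash passes through literally. A sketch of a decoder satisfying them (not the toolkit's actual phpdecode implementation):

```python
def sketch_phpdecode_single(text):
    """Decode a PHP single-quoted string body.

    Illustrative sketch, not the toolkit's phpdecode: only \\' and \\\\ are
    escape sequences; any other backslash passes through literally.
    """
    out = []
    i = 0
    while i < len(text):
        if text[i] == "\\" and i + 1 < len(text) and text[i + 1] in ("'", "\\"):
            out.append(text[i + 1])  # collapse the escape to its literal
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)
```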
+def test_php_escaping_double_quote():
+ """Test the helper escaping funtions for 'double quotes'"""
+ # Decoding - PHP -> Python
+ assert php.phpdecode("'", quotechar='"') == "'" # we do nothing with single quotes
+ assert php.phpdecode(r"\n", quotechar='"') == "\n" # See table of escaped characters
+ assert php.phpdecode(r"\r", quotechar='"') == "\r" # See table of escaped characters
+ assert php.phpdecode(r"\t", quotechar='"') == "\t" # See table of escaped characters
+ assert php.phpdecode(r"\v", quotechar='"') == "\v" # See table of escaped characters
+ assert php.phpdecode(r"\f", quotechar='"') == "\f" # See table of escaped characters
+ assert php.phpdecode(r"\\", quotechar='"') == "\\" # See table of escaped characters
+ #assert php.phpdecode(r"\$", quotechar='"') == "$" # See table of escaped characters - this may cause confusion with actual variables in roundtripping
+ assert php.phpdecode(r"\$", quotechar='"') == "\\$" # Just to check that we don't unescape this
+ assert php.phpdecode(r'\"', quotechar='"') == '"' # See table of escaped characters
+ assert php.phpdecode(r'\117\143\164\141\154', quotechar='"') == 'Octal' # Octal: \[0-7]{1,3}
+ assert php.phpdecode(r'\x48\x65\x78', quotechar='"') == 'Hex' # Hex: \x[0-9A-Fa-f]{1,2}
+ assert php.phpdecode(r'\117\\c\164\141\154', quotechar='"') == 'O\ctal' # Mixed
+ # Decoding - special examples
+ assert php.phpdecode(r"Don't escape me here\'s", quotechar='"') == r"Don't escape me here\'s" # See bug #589
+ assert php.phpdecode("Line1\nLine2") == "Line1\nLine2" # Preserve newlines in multiline messages
+ assert php.phpdecode("Line1\r\nLine2") == "Line1\r\nLine2" # DOS PHP files
+ # Encoding - Python -> PHP
+ assert php.phpencode("'", quotechar='"') == "'"
+ assert php.phpencode("\n", quotechar='"') == "\n" # See table of escaped characters - we leave newlines unescaped so that we can try best to preserve pretty printing. See bug 588
+ assert php.phpencode("\r", quotechar='"') == r"\r" # See table of escaped characters
+ assert php.phpencode("\t", quotechar='"') == r"\t" # See table of escaped characters
+ assert php.phpencode("\v", quotechar='"') == r"\v" # See table of escaped characters
+ assert php.phpencode("\f", quotechar='"') == r"\f" # See table of escaped characters
+ assert php.phpencode(r"\\", quotechar='"') == r"\\" # See table of escaped characters
+ #assert php.phpencode("\$", quotechar='"') == "$" # See table of escaped characters - this may cause confusion with actual variables in roundtripping
+ assert php.phpencode("\$", quotechar='"') == r"\$" # Just to check that we don't unescape this
+ assert php.phpencode('"', quotechar='"') == r'\"'
+ assert php.phpencode(r"Don't escape me here\'s", quotechar='"') == r"Don't escape me here\'s" # See bug #589
class TestPhpUnit(test_monolingual.TestMonolingualUnit):
UnitClass = php.phpunit
@@ -58,3 +120,28 @@
phpunit = phpfile.units[0]
assert phpunit.name == "$lang['mediaselect']"
assert phpunit.source == "Bestand selectie"
+
+ def test_comment_blocks(self):
+ """check that we don't process name value pairs in comment blocks"""
+ phpsource = """/*
+ * $lang[0] = "Blah";
+ * $lang[1] = "Bluh";
+ */
+$lang[2] = "Yeah";
+"""
+ phpfile = self.phpparse(phpsource)
+ assert len(phpfile.units) == 1
+ phpunit = phpfile.units[0]
+ assert phpunit.name == "$lang[2]"
+ assert phpunit.source == "Yeah"
+
+ def test_multiline(self):
+ """check that we preserve newlines in a multiline message"""
+ phpsource = """$lang['multiline'] = "Line1%sLine2";"""
+ # Try DOS and Unix and make sure the output has the same
+ for lineending in ("\n", "\r\n"):
+ phpfile = self.phpparse(phpsource % lineending)
+ assert len(phpfile.units) == 1
+ phpunit = phpfile.units[0]
+ assert phpunit.name == "$lang['multiline']"
+ assert phpunit.source == "Line1%sLine2" % lineending
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_po.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_po.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_po.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_po.py Sun Feb 8 16:49:31 2009
@@ -364,6 +364,25 @@
assert unit.isobsolete()
assert str(pofile) == posource
+ posource = '''msgid "one"
+msgstr "een"
+
+#, fuzzy
+#~ msgid "File not found."
+#~ msgid_plural "Files not found."
+#~ msgstr[0] "Leer(s) nie gevind nie."
+#~ msgstr[1] "Leer(s) nie gevind nie."
+'''
+ pofile = self.poparse(posource)
+ assert len(pofile.units) == 2
+ unit = pofile.units[1]
+ assert unit.isobsolete()
+
+ assert str(pofile) == posource
+ unit.resurrect()
+ assert unit.hasplural()
+
+
def test_header_escapes(self):
pofile = self.StoreClass()
header = pofile.makeheader(**{"Report-Msgid-Bugs-To": r"http://qa.openoffice.org/issues/enter_bug.cgi?subcomponent=ui&comment=&short_desc=Localization%20issue%20in%20file%3A%20dbaccess\source\core\resource.oo&component=l10n&form_name=enter_issue"})
@@ -439,7 +458,7 @@
def test_multiline_obsolete(self):
"""Tests for correct output of mulitline obsolete messages"""
- posource = '#~ msgid "Old thing\\n"\n#~ "Second old thing"\n#~ msgstr "Ou ding\\n"\n#~ "Tweede ou ding"\n'
+ posource = '#~ msgid ""\n#~ "Old thing\\n"\n#~ "Second old thing"\n#~ msgstr ""\n#~ "Ou ding\\n"\n#~ "Tweede ou ding"\n'
pofile = self.poparse(posource)
assert pofile.isempty()
assert len(pofile.units) == 1
@@ -564,6 +583,15 @@
unit = pofile.units[1]
assert unit.getcontext() == 'Verb. _: The action of changing.'
assert unit.getnotes() == 'Test comment 2'
+
+ def test_broken_kde_context(self):
+ posource = '''msgid "Broken _: here"
+msgstr "Broken _: here"
+'''
+ pofile = self.poparse(posource)
+ unit = pofile.units[0]
+ assert unit.source == "Broken _: here"
+ assert unit.target == "Broken _: here"
def test_id(self):
"""checks that ids work correctly"""
@@ -603,3 +631,22 @@
# commented out for conformance to gettext.
# assert pofile.units[4].getid() == "tree\0trees"
+ def test_non_ascii_header_comments(self):
+ posource = r'''
+# TëÅt þis.
+# Hé Há Hó.
+#. Lêkkør.
+msgid ""
+msgstr ""
+"PO-Revision-Date: 2006-02-09 23:33+0200\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8-bit\n"
+
+msgid "a"
+msgstr "b"
+'''
+ pofile = self.poparse(posource)
+ for line in pofile.units[0].getnotes():
+ assert isinstance(line, unicode)
+
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_poheader.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_poheader.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_poheader.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_poheader.py Sun Feb 8 16:49:31 2009
@@ -107,24 +107,6 @@
# Typically "+0430"
assert poheader.tzstring() == time.strftime("%z")
- os.environ['TZ'] = 'Asia/Tehran'
- time.tzset()
- assert time.timezone == -12600
- # Typically "+0330"
- assert poheader.tzstring() == time.strftime("%z")
-
- os.environ['TZ'] = 'Canada/Newfoundland'
- time.tzset()
- assert time.timezone == 12600
- # Typically "-0230"
- assert poheader.tzstring() == time.strftime("%z")
-
- os.environ['TZ'] = 'US/Eastern'
- time.tzset()
- assert time.timezone == 18000
- # Typically "-0400"
- assert poheader.tzstring() == time.strftime("%z")
-
os.environ['TZ'] = 'Asia/Seoul'
time.tzset()
assert time.timezone == -32400
@@ -143,12 +125,6 @@
# Typically "+0100"
# For some reason python's %z doesn't know about Windhoek DST
#assert poheader.tzstring() == time.strftime("%z")
-
- os.environ['TZ'] = 'Egypt'
- time.tzset()
- assert time.timezone == -7200
- # Typically "+0300"
- assert poheader.tzstring() == time.strftime("%z")
os.environ['TZ'] = 'UTC'
time.tzset()
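The timezone cases removed above all exercise the same conversion: turning `time.timezone`'s seconds-west-of-UTC value into a `+HHMM`/`-HHMM` string. A minimal sketch of that conversion follows; the helper name is hypothetical, and it deliberately ignores DST, which the real `poheader.tzstring()` presumably accounts for (e.g. via `time.altzone`) -- hence test comments like "Typically -0230" for Newfoundland.

```python
def tzstring_from_offset(seconds_west):
    """Format a time.timezone-style offset (seconds west of UTC) as +HHMM/-HHMM."""
    total = -seconds_west  # flip sign: positive now means east of UTC
    sign = "+" if total >= 0 else "-"
    total = abs(total)
    return "%s%02d%02d" % (sign, total // 3600, (total % 3600) // 60)

# Asia/Tehran: time.timezone == -12600  ->  "+0330"
# US/Eastern:  time.timezone ==  18000  ->  "-0500" (standard time)
```

Half-hour zones like Tehran (+0330) and Newfoundland (-0330 standard) are exactly why the modulo arithmetic, not just integer hours, is needed.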
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_pypo.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_pypo.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_pypo.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_pypo.py Sun Feb 8 16:49:31 2009
@@ -48,7 +48,7 @@
unit.target = "Boom"
# FIXME: currently assigning the target to the same as the first string won't change anything
# we need to verify that this is the desired behaviour...
- assert unit.target.strings == ["Boom", "Bome"]
+ assert unit.target.strings == ["Boom"]
unit.target = "Een Boom"
assert unit.target.strings == ["Een Boom"]
@@ -236,7 +236,57 @@
"""tests behaviour of unassociated comments."""
oldsource = '# old lonesome comment\n\nmsgid "one"\nmsgstr "een"\n'
oldfile = self.poparse(oldsource)
- print "__str__", str(oldfile)
- assert len(oldfile.units) == 2
- assert str(oldfile).find("# old lonesome comment\n\n") >= 0
-
+ print str(oldfile)
+ assert len(oldfile.units) == 1
+
+ def test_prevmsgid_parse(self):
+ """checks that prevmsgid (i.e. #|) is parsed and saved correctly"""
+ posource = r'''msgid ""
+msgstr ""
+"PO-Revision-Date: 2006-02-09 23:33+0200\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8-bit\n"
+
+#| msgid "trea"
+msgid "tree"
+msgstr "boom"
+
+#| msgid "trea"
+#| msgid_plural "treas"
+msgid "tree"
+msgid_plural "trees"
+msgstr[0] "boom"
+msgstr[1] "bome"
+
+#| msgctxt "context 1"
+#| msgid "tast"
+msgctxt "context 1a"
+msgid "test"
+msgstr "toets"
+
+#| msgctxt "context 2"
+#| msgid "tast"
+#| msgid_plural "tasts"
+msgctxt "context 2a"
+msgid "test"
+msgid_plural "tests"
+msgstr[0] "toet"
+msgstr[1] "toetse"
+'''
+
+ pofile = self.poparse(posource)
+
+ assert pofile.units[1].prev_msgctxt == []
+ assert pofile.units[1].prev_source == multistring([u"trea"])
+
+ assert pofile.units[2].prev_msgctxt == []
+ assert pofile.units[2].prev_source == multistring([u"trea", u"treas"])
+
+ assert pofile.units[3].prev_msgctxt == [u'"context 1"']
+ assert pofile.units[3].prev_source == multistring([u"tast"])
+
+ assert pofile.units[4].prev_msgctxt == [u'"context 2"']
+ assert pofile.units[4].prev_source == multistring([u"tast", u"tasts"])
+
+ assert str(pofile) == posource
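The `#|` previous-msgid comments checked above can be split with a small standalone helper (hypothetical name; pypo's actual parser is more involved). Note the asymmetry the tests assert: `prev_msgctxt` keeps the surrounding quotes while `prev_source` strips them.

```python
def split_prev_comments(lines):
    """Split '#| ' comment lines into (prev_msgctxt, prev_source) lists."""
    prev_msgctxt, prev_source = [], []
    current = None
    for line in lines:
        if not line.startswith("#| "):
            continue
        content = line[3:]
        if content.startswith("msgctxt "):
            current = prev_msgctxt
            current.append(content[len("msgctxt "):])  # quotes kept
        elif content.startswith("msgid_plural "):
            current = prev_source
            current.append(content[len("msgid_plural "):].strip('"'))
        elif content.startswith("msgid "):
            current = prev_source
            current.append(content[len("msgid "):].strip('"'))
        elif current is not None:
            # continuation line belonging to the previous keyword
            current.append(content.strip('"'))
    return prev_msgctxt, prev_source
```

For the fourth unit in the test, `['#| msgctxt "context 2"', '#| msgid "tast"', '#| msgid_plural "tasts"']` yields `(['"context 2"'], ["tast", "tasts"])`, matching the `prev_msgctxt`/`prev_source` assertions.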
Added: translate-toolkit/branches/upstream/current/translate/storage/test_rc.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_rc.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_rc.py (added)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_rc.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,11 @@
+from translate.storage import rc
+
+def test_escaping():
+ """test escaping Windows Resource files to Python strings"""
+ assert rc.escape_to_python('''First line \
+second line''') == "First line second line"
+ assert rc.escape_to_python("A newline \\n in a string") == "A newline \n in a string"
+ assert rc.escape_to_python("A tab \\t in a string") == "A tab \t in a string"
+ assert rc.escape_to_python("A backslash \\\\ in a string") == "A backslash \\ in a string"
+ assert rc.escape_to_python(r'''First line " \
+ "second line''') == "First line second line"
Added: translate-toolkit/branches/upstream/current/translate/storage/test_tiki.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_tiki.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_tiki.py (added)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_tiki.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,84 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# tiki unit tests
+# Author: Wil Clouser <wclouser at mozilla.com>
+# Date: 2008-12-01
+from translate.storage import tiki
+
+class TestTikiUnit:
+ def test_locations(self):
+ unit = tiki.TikiUnit("one")
+ unit.addlocation('blah')
+ assert unit.getlocations() == []
+ unit.addlocation('unused')
+ assert unit.getlocations() == ['unused']
+
+ def test_to_unicode(self):
+ unit = tiki.TikiUnit("one")
+ unit.settarget('two')
+ assert unicode(unit) == '"one" => "two",\n'
+
+ unit2 = tiki.TikiUnit("one")
+ unit2.settarget('two')
+ unit2.addlocation('untranslated')
+ assert unicode(unit2) == '// "one" => "two",\n'
+
+class TestTikiStore:
+ def test_parse_simple(self):
+ tikisource = r'"Top authors" => "Top autoren",'
+ tikifile = tiki.TikiStore(tikisource)
+ assert len(tikifile.units) == 1
+ assert tikifile.units[0].source == "Top authors"
+ assert tikifile.units[0].target == "Top autoren"
+
+ def test_parse_encode(self):
+ """Make sure these tiki special symbols come through correctly"""
+ tikisource = r'"test: |\n \r \t \\ \$ \"|" => "test: |\n \r \t \\ \$ \"|",'
+ tikifile = tiki.TikiStore(tikisource)
+ assert tikifile.units[0].source == r"test: |\n \r \t \\ \$ \"|"
+ assert tikifile.units[0].target == r"test: |\n \r \t \\ \$ \"|"
+
+ def test_parse_locations(self):
+ """This function will test to make sure the location matching is working. It
+ tests that locations are detected, the default "translated" case, and that
+ "unused" lines can start with //"""
+ tikisource = """
+"zero_source" => "zero_target",
+// ### Start of unused words
+"one_source" => "one_target",
+// ### end of unused words
+"two_source" => "two_target",
+// ### start of untranslated words
+// "three_source" => "three_target",
+// ### end of untranslated words
+"four_source" => "four_target",
+// ### start of possibly untranslated words
+"five_source" => "five_target",
+// ### end of possibly untranslated words
+"six_source" => "six_target",
+ """
+ tikifile = tiki.TikiStore(tikisource)
+ assert len(tikifile.units) == 7
+ assert tikifile.units[0].location == ["translated"]
+ assert tikifile.units[1].location == ["unused"]
+ assert tikifile.units[2].location == ["translated"]
+ assert tikifile.units[3].location == ["untranslated"]
+ assert tikifile.units[4].location == ["translated"]
+ assert tikifile.units[5].location == ["possiblyuntranslated"]
+ assert tikifile.units[6].location == ["translated"]
+
+ def test_parse_ignore_extras(self):
+ """Tests that we ignore extraneous lines"""
+ tikisource = """<?php
+$lang = Array(
+"zero_source" => "zero_target",
+// ###
+// this is a blank line:
+
+"###end###"=>"###end###");
+ """
+ tikifile = tiki.TikiStore(tikisource)
+ assert len(tikifile.units) == 1
+ assert tikifile.units[0].source == "zero_source"
+ assert tikifile.units[0].target == "zero_target"
Modified: translate-toolkit/branches/upstream/current/translate/storage/test_wordfast.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/test_wordfast.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/test_wordfast.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/test_wordfast.py Sun Feb 8 16:49:31 2009
@@ -24,7 +24,7 @@
UnitClass = wf.WordfastUnit
def test_difficult_escapes(self):
- """Wordfast files need to perform magic with escapes.
+ r"""Wordfast files need to perform magic with escapes.
Wordfast does not accept line breaks in its TM (even though they would be
valid in CSV) thus we turn \\n into \n and reimplement the base class test but
Added: translate-toolkit/branches/upstream/current/translate/storage/tiki.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/tiki.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/tiki.py (added)
+++ translate-toolkit/branches/upstream/current/translate/storage/tiki.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,185 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Mozilla Corporation, Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Class that manages TikiWiki files for translation. Tiki files are <strike>ugly and
+inconsistent</strike> formatted as a single large PHP array with several special
+sections identified by comments. Example current as of 2008-12-01:
+
+ <?php
+ // Many comments at the top
+ $lang=Array(
+ // ### Start of unused words
+ "aaa" => "zzz",
+ // ### end of unused words
+
+ // ### start of untranslated words
+ // "bbb" => "yyy",
+ // ### end of untranslated words
+
+ // ### start of possibly untranslated words
+ "ccc" => "xxx",
+ // ### end of possibly untranslated words
+
+ "ddd" => "www",
+ "###end###"=>"###end###");
+ ?>
+
+In addition there are several auto-generated //-style comments scattered through the
+page and array, some of which matter when being parsed.
+
+This has all been gleaned from the source
+U{<http://tikiwiki.svn.sourceforge.net/viewvc/tikiwiki/trunk/get_strings.php?view=markup>}.
+As far as I know no detailed documentation exists for the tiki language.php files.
+
+"""
+
+from translate.storage import base
+from translate.misc import wStringIO
+import re
+import datetime
+
+class TikiUnit(base.TranslationUnit):
+ """A tiki unit entry."""
+ def __init__(self, source=None, encoding="UTF-8"):
+ self.location = []
+ super(TikiUnit, self).__init__(source)
+
+ def __unicode__(self):
+ """Returns a string formatted to be inserted into a tiki language.php file."""
+ ret = u'"%s" => "%s",' % (self.source, self.target)
+ if self.location == ["untranslated"]:
+ ret = u'// ' + ret
+ return ret + "\n"
+
+ def addlocation(self, location):
+ """Location is defined by the comments in the file. This function will only
+ set valid locations.
+
+ @param location: Where the string is located in the file. Must be a valid location.
+ """
+ if location in ['unused', 'untranslated', 'possiblyuntranslated', 'translated']:
+ self.location.append(location)
+
+ def getlocations(self):
+ """Returns the a list of the location(s) of the string."""
+ return self.location
+
+class TikiStore(base.TranslationStore):
+ """Represents a tiki language.php file."""
+ def __init__(self, inputfile=None):
+ """If an inputfile is specified it will be parsed.
+
+ @param inputfile: Either a string or a filehandle of the source file
+ """
+ base.TranslationStore.__init__(self, TikiUnit)
+ self.units = []
+ self.filename = getattr(inputfile, 'name', '')
+ if inputfile is not None:
+ self.parse(inputfile)
+
+ def __str__(self):
+ """Will return a formatted tiki-style language.php file."""
+ _unused = []
+ _untranslated = []
+ _possiblyuntranslated = []
+ _translated = []
+
+ output = self._tiki_header()
+
+ # Reorder all the units into their groups
+ for unit in self.units:
+ if unit.getlocations() == ["unused"]:
+ _unused.append(unit)
+ elif unit.getlocations() == ["untranslated"]:
+ _untranslated.append(unit)
+ elif unit.getlocations() == ["possiblyuntranslated"]:
+ _possiblyuntranslated.append(unit)
+ else:
+ _translated.append(unit)
+
+ output += "// ### Start of unused words\n"
+ for unit in _unused:
+ output += unicode(unit)
+ output += "// ### end of unused words\n\n"
+ output += "// ### start of untranslated words\n"
+ for unit in _untranslated:
+ output += unicode(unit)
+ output += "// ### end of untranslated words\n\n"
+ output += "// ### start of possibly untranslated words\n"
+ for unit in _possiblyuntranslated:
+ output += unicode(unit)
+ output += "// ### end of possibly untranslated words\n\n"
+ for unit in _translated:
+ output += unicode(unit)
+
+ output += self._tiki_footer()
+ return output.encode('UTF-8')
+
+ def _tiki_header(self):
+ """Returns a tiki-file header string."""
+ return u"<?php // -*- coding:utf-8 -*-\n// Generated from po2tiki on %s\n\n$lang=Array(\n" % datetime.datetime.now()
+
+ def _tiki_footer(self):
+ """Returns a tiki-file footer string."""
+ return u'"###end###"=>"###end###");\n?>'
+
+ def parse(self, input):
+ """Parse the given input into source units.
+
+ @param input: the source, either a string or filehandle
+ """
+ if hasattr(input, "name"):
+ self.filename = input.name
+
+ if isinstance(input, str):
+ input = wStringIO.StringIO(input)
+
+ _split_regex = re.compile(r"^(?:// )?\"(.*)\" => \"(.*)\",$", re.UNICODE)
+
+ try:
+ _location = "translated"
+
+ for line in input:
+ # The tiki file does not label each line's section, so we track the start
+ # and end markers; outside any marked section a string is assumed translated
+ if line.count("### Start of unused words"):
+ _location = "unused"
+ elif line.count("### start of untranslated words"):
+ _location = "untranslated"
+ elif line.count("### start of possibly untranslated words"):
+ _location = "possiblyuntranslated"
+ elif line.count("### end of unused words"):
+ _location = "translated"
+ elif line.count("### end of untranslated words"):
+ _location = "translated"
+ elif line.count("### end of possibly untranslated words"):
+ _location = "translated"
+
+ match = _split_regex.match(line)
+
+ if match:
+ unit = self.addsourceunit(match.group(1))
+ # Untranslated words get an empty msgstr
+ if not _location == "untranslated":
+ unit.settarget(match.group(2))
+ unit.addlocation(_location)
+ finally:
+ input.close()
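The section-tracking logic in parse() above can be sketched standalone; the marker strings and the line regex come straight from the format, while the function name and the flat tuple output are simplifications for illustration.

```python
import re

_LINE = re.compile(r'^(?:// )?"(.*)" => "(.*)",$')

_STARTS = {
    "### Start of unused words": "unused",
    "### start of untranslated words": "untranslated",
    "### start of possibly untranslated words": "possiblyuntranslated",
}

def parse_tiki(source):
    """Yield (source, target, location) tuples from a language.php body."""
    location = "translated"
    units = []
    for line in source.splitlines():
        line = line.strip()
        for marker, loc in _STARTS.items():
            if marker in line:
                location = loc
        if "### end of" in line:
            # any end marker drops us back into the translated section
            location = "translated"
        match = _LINE.match(line)
        if match:
            units.append((match.group(1), match.group(2), location))
    return units
```

Commented-out entries (`// "..." => "...",`) still match because of the optional `// ` prefix in the regex; in the untranslated section that is exactly how inactive strings are stored.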
Added: translate-toolkit/branches/upstream/current/translate/storage/tmdb.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/tmdb.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/tmdb.py (added)
+++ translate-toolkit/branches/upstream/current/translate/storage/tmdb.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,306 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2009 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# translate is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# translate is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with translate; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Module to provide a translation memory database."""
+import math
+import time
+import logging
+try:
+ from sqlite3 import dbapi2
+except ImportError:
+ from pysqlite2 import dbapi2
+
+from translate.search.lshtein import LevenshteinComparer
+from translate.lang import data
+
+
+class LanguageError(Exception):
+ def __init__(self, value):
+ self.value = value
+
+ def __str__(self):
+ return str(self.value)
+
+
+class TMDB(object):
+ _tm_dbs = {}
+ def __init__(self, db_file, max_candidates=3, min_similarity=75, max_length=1000):
+
+ self.max_candidates = max_candidates
+ self.min_similarity = min_similarity
+ self.max_length = max_length
+
+ # share connections to same database file between different instances
+ if db_file not in self._tm_dbs:
+ self._tm_dbs[db_file] = dbapi2.connect(db_file)
+
+ self.connection = self._tm_dbs[db_file]
+ self.cursor = self.connection.cursor()
+
+ #FIXME: do we want to do any checks before we initialize the DB?
+ self.init_database()
+ self.fulltext = False
+ self.init_fulltext()
+
+ self.comparer = LevenshteinComparer(self.max_length)
+
+ self.preload_db()
+
+ def init_database(self):
+ """creates database tables and indices"""
+
+ script = """
+CREATE TABLE IF NOT EXISTS sources (
+ sid INTEGER PRIMARY KEY AUTOINCREMENT,
+ text VARCHAR NOT NULL,
+ context VARCHAR DEFAULT NULL,
+ lang VARCHAR NOT NULL,
+ length INTEGER NOT NULL
+);
+CREATE INDEX IF NOT EXISTS sources_context_idx ON sources (context);
+CREATE INDEX IF NOT EXISTS sources_lang_idx ON sources (lang);
+CREATE INDEX IF NOT EXISTS sources_length_idx ON sources (length);
+CREATE UNIQUE INDEX IF NOT EXISTS sources_uniq_idx ON sources (text, context, lang);
+
+CREATE TABLE IF NOT EXISTS targets (
+ tid INTEGER PRIMARY KEY AUTOINCREMENT,
+ sid INTEGER NOT NULL,
+ text VARCHAR NOT NULL,
+ lang VARCHAR NOT NULL,
+ time INTEGER DEFAULT NULL,
+ FOREIGN KEY (sid) references sources(sid)
+);
+CREATE INDEX IF NOT EXISTS targets_sid_idx ON targets (sid);
+CREATE INDEX IF NOT EXISTS targets_lang_idx ON targets (lang);
+CREATE INDEX IF NOT EXISTS targets_time_idx ON targets (time);
+CREATE UNIQUE INDEX IF NOT EXISTS targets_uniq_idx ON targets (sid, text, lang);
+"""
+
+ try:
+ self.cursor.executescript(script)
+ self.connection.commit()
+ except:
+ self.connection.rollback()
+ raise
+
+ def init_fulltext(self):
+ """detects if fts3 fulltext indexing module exists, initializes fulltext table if it does"""
+
+ #HACKISH: no better way to detect fts3 support except trying to construct a dummy table?!
+ try:
+ script = """
+DROP TABLE IF EXISTS test_for_fts3;
+CREATE VIRTUAL TABLE test_for_fts3 USING fts3;
+DROP TABLE test_for_fts3;
+"""
+ self.cursor.executescript(script)
+ logging.debug("fts3 supported")
+ # for some reason CREATE VIRTUAL TABLE doesn't support IF NOT EXISTS syntax
+ # check if fulltext index table exists manually
+ self.cursor.execute("SELECT name FROM sqlite_master WHERE name = 'fulltext'")
+ if not self.cursor.fetchone():
+ # create fulltext index table, and index all strings in sources
+ script= """
+CREATE VIRTUAL TABLE fulltext USING fts3(text);
+"""
+ logging.debug("fulltext table not exists, creating")
+ self.cursor.executescript(script)
+ logging.debug("created fulltext table")
+ else:
+ logging.debug("fulltext table already exists")
+
+ # create triggers that would sync sources table with fulltext index
+ script = """
+INSERT INTO fulltext (rowid, text) SELECT sid, text FROM sources WHERE sid NOT IN (SELECT rowid FROM fulltext);
+CREATE TRIGGER IF NOT EXISTS sources_insert_trig AFTER INSERT ON sources FOR EACH ROW
+BEGIN
+ INSERT INTO fulltext (docid, text) VALUES (NEW.sid, NEW.text);
+END;
+CREATE TRIGGER IF NOT EXISTS sources_update_trig AFTER UPDATE OF text ON sources FOR EACH ROW
+BEGIN
+ UPDATE fulltext SET text = NEW.text WHERE docid = NEW.sid;
+END;
+CREATE TRIGGER IF NOT EXISTS sources_delete_trig AFTER DELETE ON sources FOR EACH ROW
+BEGIN
+ DELETE FROM fulltext WHERE docid = OLD.sid;
+END;
+"""
+ self.cursor.executescript(script)
+ self.connection.commit()
+ logging.debug("created fulltext triggers")
+ self.fulltext = True
+
+ except dbapi2.OperationalError, e:
+ self.fulltext = False
+ logging.debug("failed to initialize fts3 support: " + str(e))
+ script = """
+DROP TRIGGER IF EXISTS sources_insert_trig;
+DROP TRIGGER IF EXISTS sources_update_trig;
+DROP TRIGGER IF EXISTS sources_delete_trig;
+"""
+ self.cursor.executescript(script)
+
+ def preload_db(self):
+ """ugly hack to force caching of sqlite db file in memory for
+ improved performance"""
+ if self.fulltext:
+ query = """SELECT COUNT(*) FROM sources s JOIN fulltext f ON s.sid = f.docid JOIN targets t on s.sid = t.sid"""
+ else:
+ query = """SELECT COUNT(*) FROM sources s JOIN targets t on s.sid = t.sid"""
+ self.cursor.execute(query)
+ (numrows,) = self.cursor.fetchone()
+ logging.debug("tmdb has %d records" % numrows)
+ return numrows
+
+ def add_unit(self, unit, source_lang=None, target_lang=None, commit=True):
+ """inserts unit in the database"""
+ #TODO: is that really the best way to handle unspecified
+ # source and target languages? what about conflicts between
+ # unit attributes and passed arguments
+ if unit.getsourcelanguage():
+ source_lang = unit.getsourcelanguage()
+ if unit.gettargetlanguage():
+ target_lang = unit.gettargetlanguage()
+
+ if not source_lang:
+ raise LanguageError("undefined source language")
+ if not target_lang:
+ raise LanguageError("undefined target language")
+
+ unitdict = {"source" : unit.source,
+ "target" : unit.target,
+ "context": unit.getcontext()
+ }
+ self.add_dict(unitdict, source_lang, target_lang, commit)
+
+ def add_dict(self, unit, source_lang, target_lang, commit=True):
+ """inserts units represented as dictionaries in database"""
+ source_lang = data.normalize_code(source_lang)
+ target_lang = data.normalize_code(target_lang)
+ try:
+ try:
+ self.cursor.execute("INSERT INTO sources (text, context, lang, length) VALUES(?, ?, ?, ?)",
+ (unit["source"],
+ unit["context"],
+ source_lang,
+ len(unit["source"])))
+ sid = self.cursor.lastrowid
+ except dbapi2.IntegrityError:
+ # source string already exists in db, run query to find sid
+ self.cursor.execute("SELECT sid FROM sources WHERE text=? AND context=? and lang=?",
+ (unit["source"],
+ unit["context"],
+ source_lang))
+ sid = self.cursor.fetchone()
+ (sid,) = sid
+ try:
+ #FIXME: get time info from translation store
+ #FIXME: do we need so store target length?
+ self.cursor.execute("INSERT INTO targets (sid, text, lang, time) VALUES (?, ?, ?, ?)",
+ (sid,
+ unit["target"],
+ target_lang,
+ int(time.time())))
+ except dbapi2.IntegrityError:
+ # target string already exists in db, do nothing
+ pass
+
+ if commit:
+ self.connection.commit()
+ except:
+ if commit:
+ self.connection.rollback()
+ raise
+
+ def add_store(self, store, source_lang, target_lang, commit=True):
+ """insert all units in store in database"""
+ count = 0
+ for unit in store.units:
+ if unit.istranslatable() and unit.istranslated():
+ self.add_unit(unit, source_lang, target_lang, commit=False)
count += 1
+ if commit:
+ self.connection.commit()
+ return count
+
+ def add_list(self, units, source_lang, target_lang, commit=True):
+ """insert all units in list into the database, units are
+ represented as dictionaries"""
+ count = 0
+ for unit in units:
+ self.add_dict(unit, source_lang, target_lang, commit=False)
count += 1
+ if commit:
+ self.connection.commit()
+ return count
+
+ def translate_unit(self, unit_source, source_langs, target_langs):
+ """return TM suggestions for unit_source"""
+ if isinstance(unit_source, str):
+ unit_source = unicode(unit_source, "utf-8")
+ if isinstance(source_langs, list):
+ source_langs = [data.normalize_code(lang) for lang in source_langs]
+ source_langs = ','.join(source_langs)
+ else:
+ source_langs = data.normalize_code(source_langs)
+ if isinstance(target_langs, list):
+ target_langs = [data.normalize_code(lang) for lang in target_langs]
+ target_langs = ','.join(target_langs)
+ else:
+ target_langs = data.normalize_code(target_langs)
+
+ minlen = min_levenshtein_length(len(unit_source), self.min_similarity)
+ maxlen = max_levenshtein_length(len(unit_source), self.min_similarity, self.max_length)
+
+ unit_words = unit_source.split()
+ if self.fulltext and len(unit_words) > 3:
+ logging.debug("fulltext matching")
+ query = """SELECT s.text, t.text, s.context, s.lang, t.lang FROM sources s JOIN targets t ON s.sid = t.sid JOIN fulltext f ON s.sid = f.docid
+ WHERE s.lang IN (?) AND t.lang IN (?) AND s.length BETWEEN ? AND ?
+ AND fulltext MATCH ?"""
+ search_str = " OR ".join(unit_words)
+ self.cursor.execute(query, (source_langs, target_langs, minlen, maxlen, search_str))
+ else:
+ logging.debug("nonfulltext matching")
+ query = """SELECT s.text, t.text, s.context, s.lang, t.lang FROM sources s JOIN targets t ON s.sid = t.sid
+ WHERE s.lang IN (?) AND t.lang IN (?)
+ AND s.length >= ? AND s.length <= ?"""
+ self.cursor.execute(query, (source_langs, target_langs, minlen, maxlen))
+
+ results = []
+ for row in self.cursor:
+ result = {}
+ result['source'] = row[0]
+ result['target'] = row[1]
+ result['context'] = row[2]
+ result['quality'] = self.comparer.similarity(unit_source, result['source'], self.min_similarity)
+ if result['quality'] >= self.min_similarity:
+ results.append(result)
+ results.sort(key=lambda match: match['quality'], reverse=True)
+ results = results[:self.max_candidates]
+ return results
+
+
+def min_levenshtein_length(length, min_similarity):
+ return math.ceil(max(length * (min_similarity/100.0), 2))
+
+def max_levenshtein_length(length, min_similarity, max_length):
+ return math.floor(min(length / (min_similarity/100.0), max_length))
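The two length-bound helpers above implement a cheap prefilter: since Levenshtein distance is at least the difference in string lengths, a candidate shorter than `length * sim` or longer than `length / sim` can never reach the similarity threshold, so the SQL query's `length BETWEEN ? AND ?` clause discards it before the expensive comparison runs. A worked check of the bounds (same formulas as above, shown in isolation):

```python
import math

def min_levenshtein_length(length, min_similarity):
    # shortest candidate that could still reach min_similarity percent
    return math.ceil(max(length * (min_similarity / 100.0), 2))

def max_levenshtein_length(length, min_similarity, max_length):
    # longest candidate that could still reach it, capped at max_length
    return math.floor(min(length / (min_similarity / 100.0), max_length))

# A 20-character source at 75% minimum similarity can only match
# candidates between 15 and 26 characters long:
bounds = (min_levenshtein_length(20, 75), max_levenshtein_length(20, 75, 1000))
```

With the defaults (`min_similarity=75`, `max_length=1000`) this keeps the candidate set small without ever excluding a string that could still score above the threshold.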
Modified: translate-toolkit/branches/upstream/current/translate/storage/tmx.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/tmx.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/tmx.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/tmx.py Sun Feb 8 16:49:31 2009
@@ -66,7 +66,7 @@
"""Private method that returns the text from notes.
The origin parameter is ignored.
- note_nodes = self.xmlelement.findall(".//%s" % self.namespaced("note"))
+ note_nodes = self.xmlelement.iterdescendants(self.namespaced("note"))
note_list = [lisa.getText(note) for note in note_nodes]
return note_list
@@ -76,7 +76,7 @@
def removenotes(self):
"""Remove all the translator notes."""
- notes = self.xmlelement.findall(".//%s" % self.namespaced("note"))
+ notes = self.xmlelement.iterdescendants(self.namespaced("note"))
for note in notes:
self.xmlelement.remove(note)
@@ -110,7 +110,7 @@
class tmxfile(lisa.LISAfile):
"""Class representing a TMX file store."""
UnitClass = tmxunit
- Name = "TMX file"
+ Name = _("TMX file")
Mimetypes = ["application/x-tmx"]
Extensions = ["tmx"]
rootNode = "tmx"
@@ -123,9 +123,9 @@
</tmx>'''
def addheader(self):
- headernode = self.document.find("//%s" % self.namespaced("header"))
+ headernode = self.document.getroot().iterchildren(self.namespaced("header")).next()
headernode.set("creationtool", "Translate Toolkit - po2tmx")
- headernode.set("creationtoolversion", __version__.ver)
+ headernode.set("creationtoolversion", __version__.sver)
headernode.set("segtype", "sentence")
headernode.set("o-tmf", "UTF-8")
headernode.set("adminlang", "en")
@@ -139,9 +139,9 @@
"""addtranslation method for testing old unit tests"""
unit = self.addsourceunit(source)
unit.target = translation
- tuvs = unit.xmlelement.findall('.//%s' % self.namespaced('tuv'))
- lisa.setXMLlang(tuvs[0], srclang)
- lisa.setXMLlang(tuvs[1], translang)
+ tuvs = unit.xmlelement.iterdescendants(self.namespaced('tuv'))
+ lisa.setXMLlang(tuvs.next(), srclang)
+ lisa.setXMLlang(tuvs.next(), translang)
def translate(self, sourcetext, sourcelang=None, targetlang=None):
"""method to test old unit tests"""
Modified: translate-toolkit/branches/upstream/current/translate/storage/ts2.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/ts2.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/ts2.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/ts2.py Sun Feb 8 16:49:31 2009
@@ -103,23 +103,30 @@
source = property(getsource, lisa.LISAunit.setsource)
def settarget(self, text):
+ # This is a fairly destructive implementation. Don't assume that this
+ # is necessarily correct in all regards, but it does deal with a lot of
+ # cases. Plurals in particular are hard to deal with.
#Firstly deal with reinitialising to None or setting to identical string
if self.gettarget() == text:
return
+ strings = []
+ if isinstance(text, multistring):
+ strings = text.strings
+ elif isinstance(text, list):
+ strings = text
+ else:
+ strings = [text]
targetnode = self._gettargetnode()
- strings = []
- if isinstance(text, multistring) and (len(text.strings) > 1):
- strings = text.strings
- targetnode.set("numerus", "yes")
- elif self.hasplural():
- #XXX: str vs unicode?
-# text = data.forceunicode(text)
- strings = [text]
- for string in strings:
- numerus = etree.SubElement(targetnode, self.namespaced("numerusform"))
- numerus.text = string
- else:
- targetnode.text = text
+ type = targetnode.get("type")
+ targetnode.clear()
+ if type:
+ targetnode.set("type", type)
+ if self.hasplural():
+ for string in strings:
+ numerus = etree.SubElement(targetnode, self.namespaced("numerusform"))
+ numerus.text = data.forceunicode(string) or u""
+ else:
+ targetnode.text = data.forceunicode(text) or u""
def gettarget(self):
targetnode = self._gettargetnode()
@@ -128,7 +135,7 @@
return None
if self.hasplural():
numerus_nodes = targetnode.findall(self.namespaced("numerusform"))
- return multistring([node.text for node in numerus_nodes])
+ return multistring([node.text or u"" for node in numerus_nodes])
else:
return data.forceunicode(targetnode.text) or u""
target = property(gettarget, settarget)
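The rewritten settarget() above first flattens whatever the caller passes (a multistring, a list, or a plain string) into a list of strings before rebuilding the target node. A minimal sketch of that normalisation step; `Multistring` here is a hypothetical stand-in for translate.misc.multistring, which subclasses the string type and carries a `.strings` list of plural forms:

```python
# Sketch of settarget()'s input normalisation. Multistring is a
# hypothetical stand-in for translate.misc.multistring.
class Multistring(str):
    def __new__(cls, strings):
        instance = str.__new__(cls, strings[0])
        instance.strings = strings
        return instance

def normalise_target(text):
    # Mirror the isinstance() dispatch in settarget(): multistring first
    # (it is also a str), then list, then a plain string.
    if isinstance(text, Multistring):
        return text.strings
    elif isinstance(text, list):
        return text
    return [text]

print(normalise_target("hello"))                    # ['hello']
print(normalise_target(["one file", "%n files"]))   # ['one file', '%n files']
print(normalise_target(Multistring(["a", "b"])))    # ['a', 'b']
```

When hasplural() is true, each entry in the resulting list becomes one <numerusform> child; otherwise the single entry becomes the target node's text.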
@@ -178,7 +185,7 @@
def isfuzzy(self):
return self._gettype() == "unfinished"
-
+
def markfuzzy(self, value=True):
if value:
self._settype("unfinished")
@@ -186,8 +193,9 @@
self._settype(None)
def getid(self):
-# return self._context_node.text + self.source
- context_name = self.xmlelement.getparent().find("name").text
+ context_name = self.getcontext()
+ #XXX: context_name is not supposed to be None (the <name>
+ # tag is compulsory in the <context> tag)
if context_name is not None:
return context_name + self.source
else:
@@ -206,10 +214,10 @@
def getlocations(self):
location = self.xmlelement.find(self.namespaced("location"))
- if location:
+ if location is None:
+ return []
+ else:
return [':'.join([location.get("filename"), location.get("line")])]
- else:
- return []
def merge(self, otherunit, overwrite=False, comments=True):
super(tsunit, self).merge(otherunit, overwrite, comments)
@@ -220,15 +228,11 @@
def isobsolete(self):
return self._gettype() == "obsolete"
-# def createfromxmlElement(cls, element):
-# unit = lisa.LISAunit.createfromxmlElement(element)
-# unit._context_node =
-
class tsfile(lisa.LISAfile):
"""Class representing a Qt Linguist .ts file store."""
UnitClass = tsunit
- Name = "Qt Linguist Translation File"
+ Name = _("Qt Linguist Translation File")
Mimetypes = ["application/x-linguist"]
Extensions = ["ts"]
rootNode = "TS"
@@ -252,29 +256,45 @@
else:
self.body = self.document.getroot()
- def createcontext(self, contextname, comment=None):
+ def gettargetlanguage(self):
+ """Get the target language for this .ts file.
+
+ @return: ISO code e.g. af, fr, pt_BR
+ @rtype: String
+ """
+ return self.body.get('language')
+
+ def settargetlanguage(self, targetlanguage):
+ """Set the target language for this .ts file to L{targetlanguage}.
+
+ @param targetlanguage: ISO code e.g. af, fr, pt_BR
+ @type targetlanguage: String
+ """
+ if targetlanguage:
+ self.body.set('language', targetlanguage)
+
+ def _createcontext(self, contextname, comment=None):
"""Creates a context node with an optional comment"""
context = etree.SubElement(self.document.getroot(), self.namespaced(self.bodyNode))
+ name = etree.SubElement(context, self.namespaced("name"))
+ name.text = contextname
if comment:
comment_node = etree.SubElement(context, "comment")
comment_node.text = comment
return context
- def getcontextname(self, contextnode):
- """Returns the name of the given context."""
+ def _getcontextname(self, contextnode):
+ """Returns the name of the given context node."""
return contextnode.find(self.namespaced("name")).text
- def getcontextnames(self):
+ def _getcontextnames(self):
"""Returns all contextnames in this TS file."""
contextnodes = self.document.findall(self.namespaced("context"))
contextnames = [self._getcontextname(contextnode) for contextnode in contextnodes]
- contextnames = filter(None, contextnames)
- if len(contextnames) == 1 and contextnames[0] == '':
- contextnames = []
return contextnames
- def getcontextnode(self, contextname):
- """Finds the contextnode with the given name."""
+ def _getcontextnode(self, contextname):
+ """Returns the context node with the given name."""
contextnodes = self.document.findall(self.namespaced("context"))
for contextnode in contextnodes:
if self._getcontextname(contextnode) == contextname:
@@ -282,25 +302,26 @@
return None
def addunit(self, unit, new=True, contextname=None, createifmissing=False):
- """adds the given trans-unit to the last used body node if the contextname has changed it uses the slow method instead (will create the nodes required if asked). Returns success"""
+ """Adds the given unit to the last used body node (current context).
+
+ If the contextname is specified, switch to that context (creating it
+ if allowed by createifmissing)."""
if self._contextname != contextname:
- if not self.switchcontext(contextname, createifmissing):
+ if not self._switchcontext(contextname, createifmissing):
return None
super(tsfile, self).addunit(unit, new)
-# unit._context_node = self.getcontextnode(self._contextname)
# lisa.setXMLspace(unit.xmlelement, "preserve")
return unit
- def switchcontext(self, contextname, createifmissing=False):
+ def _switchcontext(self, contextname, createifmissing=False):
"""Switch the current context to the one named contextname, optionally
creating it if it doesn't exist."""
self._contextname = contextname
- contextnode = self.getcontextnode(contextname)
+ contextnode = self._getcontextnode(contextname)
if contextnode is None:
if not createifmissing:
return False
- contextnode = self.createcontextnode(contextname)
- self.document.getroot().append(contextnode)
+ contextnode = self._createcontext(contextname)
self.body = contextnode
if self.body is None:
@@ -321,6 +342,14 @@
- no XML declaration
- plain DOCTYPE that lxml seems to ignore
"""
- return "<!DOCTYPE TS>" + etree.tostring(self.document, pretty_print=True, xml_declaration=False, encoding='utf-8')
-
-
+ # A bug in lxml means we have to output the doctype ourselves. For
+ # more information, see:
+ # http://codespeak.net/pipermail/lxml-dev/2008-October/004112.html
+ # The problem was fixed in lxml 2.1.3
+ output = etree.tostring(self.document, pretty_print=True,
+ xml_declaration=False, encoding='utf-8')
+ if not "<!DOCTYPE TS>" in output[:30]:
+ output = "<!DOCTYPE TS>" + output
+ return output
+
+
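The serialise() hunk above works around lxml versions before 2.1.3 dropping the plain DOCTYPE. A sketch of the same guard, independent of lxml:

```python
# Prepend the Qt Linguist DOCTYPE only when serialisation has not already
# emitted it near the start of the output (mirrors the check above).
def ensure_doctype(output, doctype="<!DOCTYPE TS>"):
    if doctype not in output[:30]:
        output = doctype + output
    return output

print(ensure_doctype('<TS version="1.1"></TS>'))
# <!DOCTYPE TS><TS version="1.1"></TS>
```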
Modified: translate-toolkit/branches/upstream/current/translate/storage/wordfast.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/wordfast.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/wordfast.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/wordfast.py Sun Feb 8 16:49:31 2009
@@ -90,6 +90,7 @@
"""
import csv
+import sys
import time
from translate.storage import base
@@ -186,10 +187,14 @@
delimiter = "\t"
lineterminator = "\r\n"
quoting = csv.QUOTE_NONE
- # We need to define the following 3 items for csv in Python < 2.5
- doublequote = False
- skipinitialspace = False
- escapechar = ''
+ if sys.version_info < (2, 5, 0):
+ # We need to define the following items for csv in Python < 2.5
+ quoting = csv.QUOTE_MINIMAL # Wordfast does not quote anything; since we escape
+ # \t anyway in _char_to_wf, this should not be a problem
+ doublequote = False
+ skipinitialspace = False
+ escapechar = None
+ quotechar = '"'
csv.register_dialect("wordfast", WordfastDialect)
class WordfastTime(object):
@@ -345,7 +350,7 @@
class WordfastTMFile(base.TranslationStore):
"""A Wordfast translation memory file"""
- Name = "Wordfast TM file"
+ Name = _("Wordfast TM file")
Mimetypes = ["application/x-wordfast"]
Extensions = ["txt"]
def __init__(self, inputfile=None, unitclass=WordfastUnit):
@@ -354,7 +359,7 @@
base.TranslationStore.__init__(self, unitclass=unitclass)
self.filename = ''
self.header = WordfastHeader()
- self._encoding = 'utf-16'
+ self._encoding = 'iso-8859-1'
if inputfile is not None:
self.parse(inputfile)
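The dialect hunk above only switches to the explicit QUOTE_MINIMAL settings on Python < 2.5. A runnable sketch of registering and using a comparable tab-delimited dialect with the stdlib csv module (the field layout is illustrative, not Wordfast's real header):

```python
import csv
import io

class WordfastLikeDialect(csv.Dialect):
    # Tab-delimited, CRLF-terminated records, minimal quoting: the same
    # shape as the fallback settings in the patch above.
    delimiter = "\t"
    lineterminator = "\r\n"
    quoting = csv.QUOTE_MINIMAL
    doublequote = False
    skipinitialspace = False
    escapechar = None
    quotechar = '"'

csv.register_dialect("wordfast-like", WordfastLikeDialect)

buf = io.StringIO()
writer = csv.writer(buf, dialect="wordfast-like")
writer.writerow(["20090208~164931", "EN-US", "source text", "AF-ZA", "teikenteks"])

# Round-trip the record through a reader using the same dialect.
row = next(csv.reader(io.StringIO(buf.getvalue()), dialect="wordfast-like"))
print(row)  # ['20090208~164931', 'EN-US', 'source text', 'AF-ZA', 'teikenteks']
```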
Modified: translate-toolkit/branches/upstream/current/translate/storage/xliff.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/xliff.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/xliff.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/xliff.py Sun Feb 8 16:49:31 2009
@@ -58,8 +58,8 @@
def getlanguageNodes(self):
"""We override this to get source and target nodes."""
- sources = self.xmlelement.findall('.//%s' % self.namespaced(self.languageNode))
- targets = self.xmlelement.findall('.//%s' % self.namespaced('target'))
+ sources = list(self.xmlelement.iterdescendants(self.namespaced(self.languageNode)))
+ targets = list(self.xmlelement.iterdescendants(self.namespaced('target')))
sourcesl = len(sources)
targetsl = len(targets)
nodes = []
@@ -99,20 +99,22 @@
"""Returns <alt-trans> for the given origin as a list of units. No
origin means all alternatives."""
translist = []
- for node in self.xmlelement.findall(".//%s" % self.namespaced("alt-trans")):
+ for node in self.xmlelement.iterdescendants(self.namespaced("alt-trans")):
if self.correctorigin(node, origin):
# We build some mini units that keep the xmlelement. This
# makes it easier to delete it if it is passed back to us.
newunit = base.TranslationUnit(self.source)
# the source tag is optional
- sourcenode = node.find(".//%s" % self.namespaced("source"))
- if not sourcenode is None:
- newunit.source = lisa.getText(sourcenode)
+ sourcenode = node.iterdescendants(self.namespaced("source"))
+ try:
+ newunit.source = lisa.getText(sourcenode.next())
+ except StopIteration:
+ pass
# must have one or more targets
- targetnode = node.find(".//%s" % self.namespaced("target"))
- newunit.target = lisa.getText(targetnode)
+ targetnode = node.iterdescendants(self.namespaced("target"))
+ newunit.target = lisa.getText(targetnode.next())
#TODO: support multiple targets better
#TODO: support notes in alt-trans
newunit.xmlelement = node
@@ -135,7 +137,7 @@
def getnotelist(self, origin=None):
"""Private method that returns the text from notes matching 'origin' or all notes."""
- notenodes = self.xmlelement.findall(".//%s" % self.namespaced("note"))
+ notenodes = self.xmlelement.iterdescendants(self.namespaced("note"))
# TODO: consider using xpath to construct initial_list directly
# or to simply get the correct text from the outset (just remember to
# check for duplication).
@@ -150,11 +152,11 @@
def getnotes(self, origin=None):
return '\n'.join(self.getnotelist(origin=origin))
- def removenotes(self):
+ def removenotes(self, origin="translator"):
"""Remove all the translator notes."""
- notes = self.xmlelement.findall(".//%s" % self.namespaced("note"))
+ notes = self.xmlelement.iterdescendants(self.namespaced("note"))
for note in notes:
- if self.correctorigin(note, origin="translator"):
+ if self.correctorigin(note, origin=origin):
self.xmlelement.remove(note)
def adderror(self, errorname, errortext):
@@ -287,10 +289,10 @@
def getcontextgroups(self, name):
"""Returns the contexts in the context groups with the specified name"""
groups = []
- grouptags = self.xmlelement.findall(".//%s" % self.namespaced("context-group"))
+ grouptags = self.xmlelement.iterdescendants(self.namespaced("context-group"))
for group in grouptags:
if group.get("name") == name:
- contexts = group.findall(".//%s" % self.namespaced("context"))
+ contexts = group.iterdescendants(self.namespaced("context"))
pairs = []
for context in contexts:
pairs.append((context.get("context-type"), lisa.getText(context)))
@@ -310,6 +312,8 @@
self.markfuzzy()
elif otherunit.source == self.source:
self.markfuzzy(False)
+ if comments:
+ self.addnote(otherunit.getnotes())
def correctorigin(self, node, origin):
"""Check against node tag's origin (e.g note or alt-trans)"""
@@ -325,7 +329,7 @@
class xlifffile(lisa.LISAfile):
"""Class representing an XLIFF file store."""
UnitClass = xliffunit
- Name = "XLIFF file"
+ Name = _("XLIFF file")
Mimetypes = ["application/x-xliff", "application/x-xliff+xml"]
Extensions = ["xlf", "xliff"]
rootNode = "xliff"
@@ -345,7 +349,7 @@
self._messagenum = 0
# Allow the inputfile to override defaults for source and target language.
- filenode = self.document.find('.//%s' % self.namespaced('file'))
+ filenode = self.document.getroot().iterchildren(self.namespaced('file')).next()
sourcelanguage = filenode.get('source-language')
if sourcelanguage:
self.setsourcelanguage(sourcelanguage)
@@ -355,7 +359,7 @@
def addheader(self):
"""Initialise the file header."""
- filenode = self.document.find(self.namespaced("file"))
+ filenode = self.document.getroot().iterchildren(self.namespaced("file")).next()
filenode.set("source-language", self.sourcelanguage)
if self.targetlanguage:
filenode.set("target-language", self.targetlanguage)
@@ -383,7 +387,7 @@
def getfilenames(self):
"""returns all filenames in this XLIFF file"""
- filenodes = self.document.findall(self.namespaced("file"))
+ filenodes = self.document.getroot().iterchildren(self.namespaced("file"))
filenames = [self.getfilename(filenode) for filenode in filenodes]
filenames = filter(None, filenames)
if len(filenames) == 1 and filenames[0] == '':
@@ -392,7 +396,7 @@
def getfilenode(self, filename):
"""finds the filenode with the given name"""
- filenodes = self.document.findall(self.namespaced("file"))
+ filenodes = self.document.getroot().iterchildren(self.namespaced("file"))
for filenode in filenodes:
if self.getfilename(filenode) == filename:
return filenode
@@ -428,20 +432,22 @@
def removedefaultfile(self):
"""We want to remove the default file-tag as soon as possible if we
know it is still present and empty."""
- filenodes = self.document.findall(self.namespaced("file"))
+ filenodes = list(self.document.getroot().iterchildren(self.namespaced("file")))
if len(filenodes) > 1:
for filenode in filenodes:
if filenode.get("original") == "NoName" and \
- not filenode.findall(".//%s" % self.namespaced(self.UnitClass.rootNode)):
+ not list(filenode.iterdescendants(self.namespaced(self.UnitClass.rootNode))):
self.document.getroot().remove(filenode)
break
def getheadernode(self, filenode, createifmissing=False):
"""finds the header node for the given filenode"""
# TODO: Deprecated?
- headernode = list(filenode.find(self.namespaced("header")))
- if not headernode is None:
- return headernode
+ headernode = filenode.iterchildren(self.namespaced("header"))
+ try:
+ return headernode.next()
+ except StopIteration:
+ pass
if not createifmissing:
return None
headernode = etree.SubElement(filenode, self.namespaced("header"))
@@ -449,9 +455,11 @@
def getbodynode(self, filenode, createifmissing=False):
"""finds the body node for the given filenode"""
- bodynode = filenode.find(self.namespaced("body"))
- if not bodynode is None:
- return bodynode
+ bodynode = filenode.iterchildren(self.namespaced("body"))
+ try:
+ return bodynode.next()
+ except StopIteration:
+ pass
if not createifmissing:
return None
bodynode = etree.SubElement(filenode, self.namespaced("body"))
@@ -481,7 +489,7 @@
self.body = self.getbodynode(filenode, createifmissing=createifmissing)
if self.body is None:
return False
- self._messagenum = len(self.body.findall(".//%s" % self.namespaced("trans-unit")))
+ self._messagenum = len(list(self.body.iterdescendants(self.namespaced("trans-unit"))))
#TODO: was 0 based before - consider
# messagenum = len(self.units)
#TODO: we want to number them consecutively inside a body/file tag
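The xliff.py changes above consistently replace `findall('.//tag')` with lxml's `iterdescendants()`/`iterchildren()`, which yield matches lazily; that is why the patched code wraps `.next()` (Python 2 generator protocol) in try/except StopIteration. The stdlib ElementTree analogue, shown here since it needs no lxml install, is `iter()`:

```python
import xml.etree.ElementTree as ET

xml = "<file><body><trans-unit id='1'/><trans-unit id='2'/></body></file>"
root = ET.fromstring(xml)

# Eager: builds a full list of matching descendants up front.
eager = root.findall(".//trans-unit")
print([node.get("id") for node in eager])  # ['1', '2']

# Lazy: a generator; asking for the first match raises StopIteration
# when there is none, as handled in the patch above.
lazy = root.iter("trans-unit")
print(next(lazy).get("id"))  # 1
```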
Modified: translate-toolkit/branches/upstream/current/translate/storage/xml_extract/generate.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/xml_extract/generate.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/xml_extract/generate.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/xml_extract/generate.py Sun Feb 8 16:49:31 2009
@@ -76,7 +76,7 @@
@accepts(etree._Element, etree._Element)
def find_dom_root(parent_dom_node, dom_node):
- """@see find_placeable_dom_tree_roots"""
+ """@see: L{find_placeable_dom_tree_roots}"""
if dom_node is None or parent_dom_node is None:
return None
if dom_node.getparent() == parent_dom_node:
@@ -95,13 +95,13 @@
element. However, the span is contained in other tags (which we never process).
When splicing the template DOM tree (that is, the DOM which comes from
the XML document we're using to generate a translated XML document), we'll
- need to move DOM sub-trees around and we need the roots of these sub-trees.
-
- <p> This is text \/ <- Paragraph containing an inline placeable
- <blah> <- Inline placeable's root (which we want to find)
- ... <- Any number of intermediate DOM nodes
- <span> bold text <- The inline placeable's Translatable
- holds a reference to this DOM node
+ need to move DOM sub-trees around and we need the roots of these sub-trees::
+
+ <p> This is text \/ <- Paragraph containing an inline placeable
+ <blah> <- Inline placeable's root (which we want to find)
+ ... <- Any number of intermediate DOM nodes
+ <span> bold text <- The inline placeable's Translatable
+ holds a reference to this DOM node
"""
def set_dom_root_for_unit_node(parent_unit_node, unit_node, dom_tree_roots):
@@ -114,14 +114,14 @@
"""Create a mapping from the DOM nodes in source_dom_node which correspond to
placeables, with DOM nodes in the XML document template (this information is obtained
from unit_node). We are interested in DOM nodes in the XML document template which
- are the roots of placeables. @see the diagram below, as well as
- find_placeable_dom_tree_roots.
-
- XLIFF Source (below)
- <source>This is text <g> bold text</g> and a footnote<x/></source>
- / \________
- / \
- <p>This is text<blah>...<span> bold text</span>...</blah> and <note>...</note></p>
+ are the roots of placeables. See the diagram below, as well as
+ L{find_placeable_dom_tree_roots}.
+
+ XLIFF Source (below)::
+ <source>This is text <g> bold text</g> and a footnote<x/></source>
+ / \________
+ / \
+ <p>This is text<blah>...<span> bold text</span>...</blah> and <note>...</note></p>
Input XML document used as a template (above)
In the above diagram, the XLIFF source DOM node <g> is associated with the XML
Modified: translate-toolkit/branches/upstream/current/translate/storage/xml_extract/unit_tree.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/xml_extract/unit_tree.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/xml_extract/unit_tree.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/xml_extract/unit_tree.py Sun Feb 8 16:49:31 2009
@@ -70,7 +70,7 @@
components of xpath_components. When reaching the end of xpath_components,
set the reference of the node to unit.
- With reference to the tree diagram in build_unit_tree,
+ With reference to the tree diagram in build_unit_tree::
add_unit_to_tree(node, [('p', 2), ('text', 3), ('body', 2), ('document-content', 1)], unit)
@@ -98,20 +98,19 @@
and where a node contains a unit if a path from the root of the tree to the node
containing the unit, is equal to the XPath of the unit.
- The tree looks something like this:
-
- root
- `- ('document-content', 1)
- `- ('body', 2)
- |- ('text', 1)
- | `- ('p', 1)
- | `- <reference to a unit>
- |- ('text', 2)
- | `- ('p', 1)
- | `- <reference to a unit>
- `- ('text', 3)
- `- ('p', 1)
- `- <reference to a unit>
+ The tree looks something like this::
+ root
+ `- ('document-content', 1)
+ `- ('body', 2)
+ |- ('text', 1)
+ | `- ('p', 1)
+ | `- <reference to a unit>
+ |- ('text', 2)
+ | `- ('p', 1)
+ | `- <reference to a unit>
+ `- ('text', 3)
+ `- ('p', 1)
+ `- <reference to a unit>
"""
tree = XPathTree()
for unit in store.units:
Modified: translate-toolkit/branches/upstream/current/translate/storage/xml_name.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/xml_name.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/xml_name.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/xml_name.py Sun Feb 8 16:49:31 2009
@@ -23,14 +23,14 @@
class XmlNamespace(object):
def __init__(self, namespace):
self._namespace = namespace
-
+
def name(self, tag):
return "{%s}%s" % (self._namespace, tag)
class XmlNamer(object):
"""Initialize me with a DOM node or a DOM document node (the
- toplevel node you get when parsing an XML file). The use me
- to get generate fully qualified XML names.
+ toplevel node you get when parsing an XML file). Then use me
+ to generate fully qualified XML names.
>>> xml = '<office:document-styles xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"></office>'
>>> from lxml import etree
Modified: translate-toolkit/branches/upstream/current/translate/storage/xpi.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/storage/xpi.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/storage/xpi.py (original)
+++ translate-toolkit/branches/upstream/current/translate/storage/xpi.py Sun Feb 8 16:49:31 2009
@@ -529,7 +529,7 @@
if __name__ == '__main__':
import optparse
- optparser = optparse.OptionParser(version="%prog "+__version__.ver)
+ optparser = optparse.OptionParser(version="%prog "+__version__.sver)
optparser.usage = "%prog [-l|-x] [options] file.xpi"
optparser.add_option("-l", "--list", help="list files", \
action="store_true", dest="listfiles", default=False)
Added: translate-toolkit/branches/upstream/current/translate/tools/build_tmdb
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/build_tmdb?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/build_tmdb (added)
+++ translate-toolkit/branches/upstream/current/translate/tools/build_tmdb Sun Feb 8 16:49:31 2009
@@ -1,0 +1,27 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of translate.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""Import units from translation files into tmdb."""
+
+from translate.tools import build_tmdb
+
+if __name__ == '__main__':
+ build_tmdb.main()
+
Propchange: translate-toolkit/branches/upstream/current/translate/tools/build_tmdb
------------------------------------------------------------------------------
svn:executable = *
Added: translate-toolkit/branches/upstream/current/translate/tools/build_tmdb.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/build_tmdb.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/build_tmdb.py (added)
+++ translate-toolkit/branches/upstream/current/translate/tools/build_tmdb.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,95 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Copyright 2008 Zuza Software Foundation
+#
+# This file is part of Virtaal.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+"""Import units from translation files into tmdb."""
+
+import sys
+import os
+from optparse import OptionParser
+from translate.storage import factory
+from translate.storage import tmdb
+
+
+class Builder:
+ def __init__(self, tmdbfile, source_lang, target_lang, filenames):
+ self.tmdb = tmdb.TMDB(tmdbfile)
+ self.source_lang = source_lang
+ self.target_lang = target_lang
+
+ for filename in filenames:
+ if not os.path.exists(filename):
+ print >> sys.stderr, "cannot process %s: does not exist" % filename
+ continue
+ elif os.path.isdir(filename):
+ self.handledir(filename)
+ else:
+ self.handlefile(filename)
+ self.tmdb.connection.commit()
+
+
+ def handlefile(self, filename):
+ try:
+ store = factory.getobject(filename)
+ except Exception, e:
+ print >> sys.stderr, str(e)
+ return
+ # do something useful with the store and db
+ try:
+ self.tmdb.add_store(store, self.source_lang, self.target_lang, commit=False)
+ except Exception, e:
+ print e
+ print "new file:", filename
+
+
+ def handlefiles(self, dirname, filenames):
+ for filename in filenames:
+ pathname = os.path.join(dirname, filename)
+ if os.path.isdir(pathname):
+ self.handledir(pathname)
+ else:
+ self.handlefile(pathname)
+
+
+ def handledir(self, dirname):
+ path, name = os.path.split(dirname)
+ if name in ["CVS", ".svn", "_darcs", ".git", ".hg", ".bzr"]:
+ return
+ entries = os.listdir(dirname)
+ self.handlefiles(dirname, entries)
+
+def main():
+ try:
+ import psyco
+ psyco.full()
+ except Exception:
+ pass
+ parser = OptionParser()
+ parser.add_option("-d", "--tmdb", dest="tmdbfile",
+ help="translation memory database file")
+ parser.add_option("-s", "--import-source-lang", dest="source_lang",
+ help="source language of translation files")
+ parser.add_option("-t", "--import-target-lang", dest="target_lang",
+ help="target language of translation files")
+ (options, args) = parser.parse_args()
+
+ Builder(options.tmdbfile, options.source_lang, options.target_lang, args)
+
+if __name__ == '__main__':
+ main()
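Builder.handledir()/handlefiles() above implement a hand-rolled recursive walk that skips version-control directories. The same traversal can be sketched with os.walk, pruning directories in place (paths are collected here instead of being imported into the database):

```python
import os

# Directory names skipped by Builder.handledir() above.
VCS_DIRS = {"CVS", ".svn", "_darcs", ".git", ".hg", ".bzr"}

def collect_files(root):
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending
        # into version-control directories.
        dirnames[:] = [d for d in dirnames if d not in VCS_DIRS]
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return sorted(found)
```

Each collected path would then go through handlefile(), i.e. factory.getobject() followed by tmdb.add_store().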
Propchange: translate-toolkit/branches/upstream/current/translate/tools/build_tmdb.py
------------------------------------------------------------------------------
svn:executable = *
Modified: translate-toolkit/branches/upstream/current/translate/tools/podebug.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/podebug.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/podebug.py (original)
+++ translate-toolkit/branches/upstream/current/translate/tools/podebug.py Sun Feb 8 16:49:31 2009
@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
-# Copyright 2004-2006 Zuza Software Foundation
+# Copyright 2004-2006,2008 Zuza Software Foundation
#
# This file is part of translate.
#
@@ -19,7 +19,7 @@
# along with translate; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-"""Insert debug messages into XLIFF and Gettex PO localization files
+"""Insert debug messages into XLIFF and Gettext PO localization files
See: http://translate.sourceforge.net/wiki/toolkit/podebug for examples and
usage instructions
@@ -29,7 +29,7 @@
from translate.misc.rich import map_rich, only_strings
import os
import re
-import md5
+from translate.misc import hash
def add_prefix(prefix, strings):
for string in strings:
@@ -51,6 +51,8 @@
rewritelist = classmethod(rewritelist)
def rewrite_xxx(self, string):
+ if string.endswith("\n"):
+ return "xxx%sxxx\n" % string[:-1]
return "xxx%sxxx" % string
def rewrite_en(self, string):
@@ -104,11 +106,11 @@
return "".join(map(transpose, string))
def ignorelist(cls):
- return [rewrite.replace("ignore_", "") for rewrite in dir(cls) if rewrite.startswith("ignore_")]
+ return [ignore.replace("ignore_", "") for ignore in dir(cls) if ignore.startswith("ignore_")]
ignorelist = classmethod(ignorelist)
- def ignore_openoffice(self, locations):
- for location in locations:
+ def ignore_openoffice(self, unit):
+ for location in unit.getlocations():
if location.startswith("Common.xcu#..Common.View.Localisation"):
return True
elif location.startswith("profile.lng#STR_DIR_MENU_NEW_"):
@@ -117,7 +119,8 @@
return True
return False
- def ignore_mozilla(self, locations):
+ def ignore_mozilla(self, unit):
+ locations = unit.getlocations()
if len(locations) == 1 and locations[0].lower().endswith(".accesskey"):
return True
for location in locations:
@@ -130,16 +133,26 @@
return True
return False
+ def ignore_gtk(self, unit):
+ if unit.source == "default:LTR":
+ return True
+ return False
+
+ def ignore_kde(self, unit):
+ if unit.source == "LTR":
+ return True
+ return False
+
def convertunit(self, unit, prefix):
if self.ignorefunc:
- if self.ignorefunc(unit.getlocations()):
+ if self.ignorefunc(unit):
return unit
if self.hash:
if unit.getlocations():
hashable = unit.getlocations()[0]
else:
hashable = unit.source
- prefix = md5.new(hashable).hexdigest()[:self.hash] + " "
+ prefix = hash.md5_f(hashable).hexdigest()[:self.hash] + " "
if self.rewritefunc:
unit.rich_target = map_rich(only_strings(self.rewritefunc), unit.rich_source)
elif not unit.istranslated():
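The convertunit() hunk above derives the debug prefix from a truncated MD5 of the unit's first location, falling back to the source text. A sketch of that derivation, assuming the toolkit's hash.md5_f behaves like hashlib.md5:

```python
import hashlib

def debug_prefix(locations, source, hash_len=4):
    # Hash the first location if there is one, otherwise the source text,
    # and keep only the first hash_len hex digits plus a trailing space.
    hashable = locations[0] if locations else source
    return hashlib.md5(hashable.encode("utf-8")).hexdigest()[:hash_len] + " "

print(debug_prefix(["chrome/global.dtd#saveAs.label"], "Save As"))
```

Because the prefix depends only on the location (or source), the same unit gets the same marker on every run, which makes a mis-localised string traceable back to its file.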
@@ -210,10 +223,10 @@
def main():
from translate.convert import convert
- formats = {"po":("po", convertpo), "xlf":("xlf", convertpo)}
- parser = convert.ConvertOptionParser(formats, usepots=True, description=__doc__)
+ formats = {"po":("po", convertpo), "pot":("po", convertpo), "xlf":("xlf", convertpo)}
+ parser = convert.ConvertOptionParser(formats, description=__doc__)
# TODO: add documentation on format strings...
- parser.add_option("-f", "--format", dest="format", default="[%s] ", help="specify format string")
+ parser.add_option("-f", "--format", dest="format", default="", help="specify format string")
parser.add_option("", "--rewrite", dest="rewritestyle",
type="choice", choices=podebug.rewritelist(), metavar="STYLE", help="the translation rewrite style: %s" % ", ".join(podebug.rewritelist()))
parser.add_option("", "--ignore", dest="ignoreoption",
Modified: translate-toolkit/branches/upstream/current/translate/tools/pogrep.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/pogrep.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/pogrep.py (original)
+++ translate-toolkit/branches/upstream/current/translate/tools/pogrep.py Sun Feb 8 16:49:31 2009
@@ -35,8 +35,94 @@
import re
import locale
+
+class GrepMatch(object):
+    """A small data structure that represents a single search match."""
+
+    # INITIALIZERS #
+    def __init__(self, unit, part='target', part_n=0, start=0, end=0):
+        self.unit = unit
+        self.part = part
+        self.part_n = part_n
+        self.start = start
+        self.end = end
+
+    # ACCESSORS #
+    def get_getter(self):
+        if self.part == 'target':
+            if self.unit.hasplural():
+                getter = lambda: self.unit.target.strings[self.part_n]
+            else:
+                getter = lambda: self.unit.target
+            return getter
+        elif self.part == 'source':
+            if self.unit.hasplural():
+                getter = lambda: self.unit.source.strings[self.part_n]
+            else:
+                getter = lambda: self.unit.source
+            return getter
+        elif self.part == 'notes':
+            def getter():
+                return self.unit.getnotes()[self.part_n]
+            return getter
+        elif self.part == 'locations':
+            def getter():
+                return self.unit.getlocations()[self.part_n]
+            return getter
+
+    def get_setter(self):
+        if self.part == 'target':
+            if self.unit.hasplural():
+                def setter(value):
+                    strings = self.unit.target.strings
+                    strings[self.part_n] = value
+                    self.unit.target = strings
+            else:
+                def setter(value):
+                    self.unit.target = value
+            return setter
+
+    # SPECIAL METHODS #
+    def __str__(self):
+        start, end = self.start, self.end
+        if start < 3:
+            start = 3
+        if end > len(self.get_getter()()) - 3:
+            end = len(self.get_getter()()) - 3
+        matchpart = self.get_getter()()[start-2:end+2]
+        return '<GrepMatch "%s" part=%s[%d] start=%d end=%d>' % (matchpart, self.part, self.part_n, self.start, self.end)
+
+    def __repr__(self):
+        return str(self)
+
+def real_index(string, nfc_index):
+    """Calculate the real index in the unnormalized string that corresponds
+    to the index nfc_index in the normalized string."""
+    length = nfc_index
+    max_length = len(string)
+    while len(data.normalize(string[:length])) <= nfc_index:
+        if length == max_length:
+            return length
+        length += 1
+    return length - 1
+
+
+def find_matches(unit, part, strings, re_search):
+    """Return the GrepMatch objects for all places where re_search matches
+    in strings."""
+    matches = []
+    for part_n, string in enumerate(strings):
+        normalized = data.normalize(string)
+        for matchobj in re_search.finditer(normalized):
+            start = real_index(string, matchobj.start())
+            end = real_index(string, matchobj.end())
+            matches.append(GrepMatch(unit, part=part, part_n=part_n, start=start, end=end))
+    return matches
+
class GrepFilter:
-    def __init__(self, searchstring, searchparts, ignorecase=False, useregexp=False, invertmatch=False, accelchar=None, encoding='utf-8', includeheader=False):
+    def __init__(self, searchstring, searchparts, ignorecase=False, useregexp=False,
+                 invertmatch=False, accelchar=None, encoding='utf-8', includeheader=False,
+                 max_matches=0):
         """builds a checkfilter using the given checker"""
         if isinstance(searchstring, unicode):
             self.searchstring = searchstring
@@ -64,6 +150,7 @@
         self.invertmatch = invertmatch
         self.accelchar = accelchar
         self.includeheader = includeheader
+        self.max_matches = max_matches

     def matches(self, teststr):
         if teststr is None:
@@ -125,6 +212,55 @@
             thenewfile.units.insert(0, thenewfile.makeheader())
         return thenewfile
+    def getmatches(self, units):
+        if not self.searchstring:
+            return [], []
+
+        searchstring = self.searchstring
+        flags = re.LOCALE | re.MULTILINE | re.UNICODE
+
+        if self.ignorecase:
+            flags |= re.IGNORECASE
+        if not self.useregexp:
+            searchstring = re.escape(searchstring)
+        self.re_search = re.compile(u'(%s)' % (searchstring), flags)
+
+        matches = []
+        indexes = []
+
+        for index, unit in enumerate(units):
+            old_length = len(matches)
+
+            if self.search_target:
+                if unit.hasplural():
+                    targets = unit.target.strings
+                else:
+                    targets = [unit.target]
+                matches.extend(find_matches(unit, 'target', targets, self.re_search))
+            if self.search_source:
+                if unit.hasplural():
+                    sources = unit.source.strings
+                else:
+                    sources = [unit.source]
+                matches.extend(find_matches(unit, 'source', sources, self.re_search))
+            if self.search_notes:
+                matches.extend(find_matches(unit, 'notes', unit.getnotes(), self.re_search))
+            if self.search_locations:
+                matches.extend(find_matches(unit, 'locations', unit.getlocations(), self.re_search))
+
+            # A search for a single letter or an all-inclusive regular
+            # expression could give enough results to cause performance
+            # problems. The answer is probably not very useful at this scale.
+            if self.max_matches and len(matches) > self.max_matches:
+                raise Exception("Too many matches found")
+
+            if len(matches) > old_length:
+                old_length = len(matches)
+                indexes.append(index)
+
+        return matches, indexes
+
class GrepOptionParser(optrecurse.RecursiveOptionParser):
"""a specialized Option Parser for the grep tool..."""
def parse_args(self, args=None, values=None):
@@ -179,8 +315,9 @@
 def cmdlineparser():
     formats = {"po":("po", rungrep), "pot":("pot", rungrep),
+            "mo":("mo", rungrep), "gmo":("gmo", rungrep),
+            "tmx":("tmx", rungrep),
             "xliff":("xliff", rungrep), "xlf":("xlf", rungrep), "xlff":("xlff", rungrep),
-            "tmx":("tmx", rungrep),
             None:("po", rungrep)}
     parser = GrepOptionParser(formats)
     parser.add_option("", "--search", dest="searchparts",
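The new real_index() helper above maps match offsets found in the NFC-normalized string back to offsets in the raw, unnormalized string. A standalone sketch of the idea (using Python's unicodedata as a stand-in for translate.storage's data.normalize, which is what pogrep actually calls):

```python
# Sketch, not the toolkit's code: illustrates why regex offsets on a
# normalized string must be remapped before slicing the raw string.
import unicodedata

def normalize(s):
    # Stand-in for translate.storage.data.normalize.
    return unicodedata.normalize('NFC', s)

def real_index(string, nfc_index):
    """Map an index in the NFC-normalized string back to the raw string."""
    length = nfc_index
    max_length = len(string)
    # Grow the raw prefix until its normalized form passes nfc_index.
    while len(normalize(string[:length])) <= nfc_index:
        if length == max_length:
            return length
        length += 1
    return length - 1

# 'e' + COMBINING ACUTE ACCENT collapses to a single code point under NFC,
# so every index after it is shifted by one in the raw string.
raw = u'caf\u0065\u0301s'        # 'cafe' + combining acute + 's', length 6
nfc = normalize(raw)             # u'caf\xe9s', length 5
idx_s_nfc = nfc.index(u's')      # 4 in the normalized string
print(real_index(raw, idx_s_nfc))  # -> 5, the position of 's' in the raw string
```

Without this remapping, a match offset of 4 would slice the combining accent instead of the letter the user actually matched.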
Modified: translate-toolkit/branches/upstream/current/translate/tools/pretranslate.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/pretranslate.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/pretranslate.py (original)
+++ translate-toolkit/branches/upstream/current/translate/tools/pretranslate.py Sun Feb 8 16:49:31 2009
@@ -97,8 +97,8 @@
     if matching_unit and len(matching_unit.target) > 0:
         #FIXME: should we dispatch here instead of this crude type check
         if isinstance(input_unit, xliff.xliffunit):
-            #FIXME: what about origin and lang?
-            input_unit.addalttrans(matching_unit.target, sourcetxt=matching_unit.source)
+            #FIXME: what about origin, lang and matchquality?
+            input_unit.addalttrans(matching_unit.target, origin="fish", sourcetxt=matching_unit.source)
         else:
             input_unit.merge(matching_unit, authoritative=True)
Added: translate-toolkit/branches/upstream/current/translate/tools/test_podebug.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/test_podebug.py?rev=1570&op=file
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/test_podebug.py (added)
+++ translate-toolkit/branches/upstream/current/translate/tools/test_podebug.py Sun Feb 8 16:49:31 2009
@@ -1,0 +1,38 @@
+# -*- coding: utf-8 -*-
+
+from translate.tools import podebug
+from translate.storage import base
+
+class TestPODebug:
+
+    debug = podebug.podebug()
+
+    def test_ignore_gtk(self):
+        """Test operation of GTK message ignoring"""
+        unit = base.TranslationUnit("default:LTR")
+        assert self.debug.ignore_gtk(unit) == True
+
+    def test_rewrite_blank(self):
+        """Test the blank rewrite function"""
+        assert self.debug.rewrite_blank("Test") == ""
+
+    def test_rewrite_en(self):
+        """Test the en rewrite function"""
+        assert self.debug.rewrite_en("Test") == "Test"
+
+    def test_rewrite_xxx(self):
+        """Test the xxx rewrite function"""
+        assert self.debug.rewrite_xxx("Test") == "xxxTestxxx"
+        assert self.debug.rewrite_xxx("Newline\n") == "xxxNewlinexxx\n"
+
+    def test_rewrite_unicode(self):
+        """Test the unicode rewrite function"""
+        assert self.debug.rewrite_unicode("Test") == u"Ŧḗşŧ"
+
+    def test_rewrite_chef(self):
+        """Test the chef rewrite function
+
+        This is not really critical to test, but a simple test ensures
+        that it stays working.
+        """
+        assert self.debug.rewrite_chef("Mock Swedish test you muppet") == "Mock Swedish test yooo mooppet"
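The xxx-rewrite assertions above pin down one subtle detail: a trailing newline stays outside the markers. A minimal re-implementation sketch (illustrative only; the real logic lives in translate.tools.podebug):

```python
def rewrite_xxx(text):
    # Wrap the text in xxx markers for pseudo-localization, but keep a
    # trailing newline outside the markers so the output still ends
    # the line, matching the behaviour the test above asserts.
    if text.endswith('\n'):
        return 'xxx%sxxx\n' % text[:-1]
    return 'xxx%sxxx' % text

print(rewrite_xxx('Test'))       # -> xxxTestxxx
print(rewrite_xxx('Newline\n'))  # -> xxxNewlinexxx followed by the newline
```

Keeping the newline outside the markers matters because many UI strings end in '\n', and moving it inside would visibly change the rendered layout of the pseudo-translated text.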
Modified: translate-toolkit/branches/upstream/current/translate/tools/test_pomerge.py
URL: http://svn.debian.org/wsvn/translate-toolkit/branches/upstream/current/translate/tools/test_pomerge.py?rev=1570&op=diff
==============================================================================
--- translate-toolkit/branches/upstream/current/translate/tools/test_pomerge.py (original)
+++ translate-toolkit/branches/upstream/current/translate/tools/test_pomerge.py Sun Feb 8 16:49:31 2009
@@ -108,7 +108,7 @@
"""ensure that we do not delete comments in the PO file that are not assocaited with a message block"""
templatepo = '''# Lonely comment\n\n# Translation comment\nmsgid "Bob"\nmsgstr "Toolmaker"\n'''
mergepo = '''# Translation comment\nmsgid "Bob"\nmsgstr "Builder"\n'''
- expectedpo = '''# Lonely comment\n\n# Translation comment\nmsgid "Bob"\nmsgstr "Builder"\n'''
+ expectedpo = '''# Lonely comment\n# Translation comment\nmsgid "Bob"\nmsgstr "Builder"\n'''
pofile = self.mergestore(templatepo, mergepo)
# pounit = self.singleunit(pofile)
print pofile
@@ -202,7 +202,7 @@
         # Unassociated comment
         templatepo = '''# Lonely comment\n\n#: location_comment.c:110\nmsgid "Bob"\nmsgstr "Toolmaker"\n'''
         mergepo = '''# Lonely comment\r\n\r\n#: location_comment.c:110\r\nmsgid "Bob"\r\nmsgstr "Builder"\r\n\r\n'''
-        expectedpo = '''# Lonely comment\n\n#: location_comment.c:110\nmsgid "Bob"\nmsgstr "Builder"\n'''
+        expectedpo = '''# Lonely comment\n#: location_comment.c:110\nmsgid "Bob"\nmsgstr "Builder"\n'''
         pofile = self.mergestore(templatepo, mergepo)
         assert str(pofile) == expectedpo