[SCM] WebKit Debian packaging branch, webkit-1.1, updated. upstream/1.1.19-706-ge5415e9

dpranke at chromium.org dpranke at chromium.org
Thu Feb 4 21:33:01 UTC 2010


The following commit has been merged in the webkit-1.1 branch:
commit 9f10973c0b6752d04eba00be61009ac65de55289
Author: dpranke at chromium.org <dpranke at chromium.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date:   Sat Jan 30 01:06:56 2010 +0000

    2010-01-29  Dirk Pranke  <dpranke at chromium.org>
    
            Reviewed by Eric Seidel.
    
            Check in the first part of the Chromium Python port of the
            run-webkit-tests test driver. The files under
            layout_tests/layout_package constitute most of the implementation;
            they can be roughly divided into code that parses the
            "test_expectations.txt" file that describes how we expect tests to
            pass or fail, platform-specific hooks for the different Chromium
            ports (in platform_utils*), code for parsing the output of the
            tests and generating results files and HTML and JSON for the
            dashboards, auxiliary scripts for starting and stopping HTTP and
            Web Socket servers, and then one of the actual driver files
            (test_shell_thread). Code for actually parsing test output for
            failures and the top-level driver scripts will follow shortly.
    
            https://bugs.webkit.org/show_bug.cgi?id=31498
    
            * Scripts/webkitpy/layout_tests: Added.
            * Scripts/webkitpy/layout_tests/layout_package: Added.
            * Scripts/webkitpy/layout_tests/layout_package/__init__.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/apache_http_server.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/http_server.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/http_server_base.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/httpd2.pem: Added.
              - scripts to start and stop Apache. Note that apache_http_server.py
                generates a conf file dynamically; we should switch to the same
                static conf file that the regular run-webkit-tests uses, and we
                can then also share its httpd2.pem file.
    
            * Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py: Added.
              - scripts to generate the JSON layout test dashboard and the
                flakiness dashboard
            * Scripts/webkitpy/layout_tests/layout_package/lighttpd.conf: Added.
              - default configuration for lighttpd (used on Windows)
            * Scripts/webkitpy/layout_tests/layout_package/metered_stream.py: Added.
              - utility class that implements progress bars on the console to
                be displayed while the tests are running
            * Scripts/webkitpy/layout_tests/layout_package/path_utils.py: Added.
              - various routines for manipulating paths and URIs
            * Scripts/webkitpy/layout_tests/layout_package/platform_utils.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/platform_utils_linux.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/platform_utils_mac.py: Added.
            * Scripts/webkitpy/layout_tests/layout_package/platform_utils_win.py: Added.
              - platform-specific aspects of the drivers (binary names, paths,
                process control, etc.)
            * Scripts/webkitpy/layout_tests/layout_package/test_expectations.py: Added.
              - code for parsing the 'test_expectations.txt' file to determine
                which tests are expected to fail (and how) on which platforms
            * Scripts/webkitpy/layout_tests/layout_package/test_failures.py: Added.
              - code for handling different kinds of failures (generating output
                in the results, etc.)
            * Scripts/webkitpy/layout_tests/layout_package/test_files.py: Added.
              - code to gather the lists of tests
            * Scripts/webkitpy/layout_tests/layout_package/test_shell_thread.py: Added.
              - code to actually execute tests via TestShell and process
                the output
            * Scripts/webkitpy/layout_tests/layout_package/websocket_server.py: Added.
              - scripts to start and stop the pywebsocket server
    
    
    git-svn-id: http://svn.webkit.org/repository/webkit/trunk@54091 268f45cc-cd09-0410-ab3c-d52691b4dbfc
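The commit message above describes code that parses "test_expectations.txt" entries saying how each test is expected to pass or fail per platform. As a rough, self-contained illustration (the line format below is an assumption inferred from the description, not the canonical grammar; the real parser is the test_expectations.py added by this commit), such a line can be split into modifiers, a test path, and expected outcomes:

```python
def parse_expectation_line(line):
    """Parse a line like 'BUG1234 WIN : fast/js/foo.html = FAIL TIMEOUT'.

    Returns (modifiers, test_path, expectations), or None for blank
    and comment-only lines. Illustrative sketch only.
    """
    line = line.split('//')[0].strip()  # drop trailing '//' comments
    if not line:
        return None
    left, _, expectations = line.partition('=')
    modifiers_part, _, test_path = left.partition(':')
    return (modifiers_part.split(), test_path.strip(), expectations.split())
```

For example, `parse_expectation_line('BUG1234 WIN : fast/js/foo.html = FAIL TIMEOUT')` yields the modifier list, the test path, and the two allowed outcomes.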

diff --git a/WebKitTools/ChangeLog b/WebKitTools/ChangeLog
index c0b5239..57360d9 100644
--- a/WebKitTools/ChangeLog
+++ b/WebKitTools/ChangeLog
@@ -1,5 +1,67 @@
 2010-01-29  Dirk Pranke  <dpranke at chromium.org>
 
+        Reviewed by Eric Seidel.
+
+        Check in the first part of the Chromium Python port of the 
+        run-webkit-tests test driver. The files under 
+        layout_tests/layout_package constitute most of the implementation;
+        they can be roughly divided into code that parses the 
+        "test_expectations.txt" file that describes how we expect tests to
+        pass or fail, platform-specific hooks for the different Chromium 
+        ports (in platform_utils*), code for parsing the output of the
+        tests and generating results files and HTML and JSON for the
+        dashboards, auxiliary scripts for starting and stopping HTTP and
+        Web Socket servers, and then one of the actual driver files 
+        (test_shell_thread). Code for actually parsing test output for 
+        failures and the top-level driver scripts will follow shortly.
+
+        https://bugs.webkit.org/show_bug.cgi?id=31498
+
+        * Scripts/webkitpy/layout_tests: Added.
+        * Scripts/webkitpy/layout_tests/layout_package: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/__init__.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/apache_http_server.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/http_server.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/http_server_base.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/httpd2.pem: Added.
+          - scripts to start and stop Apache. Note that apache_http_server.py
+            generates a conf file dynamically; we should switch to the same
+            static conf file that the regular run-webkit-tests uses, and we
+            can then also share its httpd2.pem file.
+
+        * Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py: Added.
+          - scripts to generate the JSON layout test dashboard and the
+            flakiness dashboard
+        * Scripts/webkitpy/layout_tests/layout_package/lighttpd.conf: Added.
+          - default configuration for lighttpd (used on Windows)
+        * Scripts/webkitpy/layout_tests/layout_package/metered_stream.py: Added.
+          - utility class that implements progress bars on the console to
+            be displayed while the tests are running
+        * Scripts/webkitpy/layout_tests/layout_package/path_utils.py: Added.
+          - various routines for manipulating paths and URIs
+        * Scripts/webkitpy/layout_tests/layout_package/platform_utils.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/platform_utils_linux.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/platform_utils_mac.py: Added.
+        * Scripts/webkitpy/layout_tests/layout_package/platform_utils_win.py: Added.
+          - platform-specific aspects of the drivers (binary names, paths,
+            process control, etc.)
+        * Scripts/webkitpy/layout_tests/layout_package/test_expectations.py: Added.
+          - code for parsing the 'test_expectations.txt' file to determine
+            which tests are expected to fail (and how) on which platforms
+        * Scripts/webkitpy/layout_tests/layout_package/test_failures.py: Added.
+          - code for handling different kinds of failures (generating output
+            in the results, etc.)
+        * Scripts/webkitpy/layout_tests/layout_package/test_files.py: Added.
+          - code to gather the lists of tests
+        * Scripts/webkitpy/layout_tests/layout_package/test_shell_thread.py: Added.
+          - code to actually execute tests via TestShell and process
+            the output
+        * Scripts/webkitpy/layout_tests/layout_package/websocket_server.py: Added.
+          - scripts to start and stop the pywebsocket server
+
+2010-01-29  Dirk Pranke  <dpranke at chromium.org>
+
         Reviewed by Eric Seidel.
 
         Check in a copy of the simplejson library; it will be used by
diff --git a/BugsSite/data/mail b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/__init__.py
similarity index 100%
copy from BugsSite/data/mail
copy to WebKitTools/Scripts/webkitpy/layout_tests/layout_package/__init__.py
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/apache_http_server.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/apache_http_server.py
new file mode 100644
index 0000000..15f2065
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/apache_http_server.py
@@ -0,0 +1,229 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""A class to start/stop the apache http server used by layout tests."""
+
+import logging
+import optparse
+import os
+import re
+import subprocess
+import sys
+
+import http_server_base
+import path_utils
+import platform_utils
+
+
+class LayoutTestApacheHttpd(http_server_base.HttpServerBase):
+
+    def __init__(self, output_dir):
+        """Args:
+          output_dir: the absolute path to the layout test result directory
+        """
+        self._output_dir = output_dir
+        self._httpd_proc = None
+        path_utils.maybe_make_directory(output_dir)
+
+        self.mappings = [{'port': 8000},
+                         {'port': 8080},
+                         {'port': 8081},
+                         {'port': 8443, 'sslcert': True}]
+
+        # The upstream .conf file assumed the existence of /tmp/WebKit for
+        # placing apache files like the lock file there.
+        self._runtime_path = os.path.join("/tmp", "WebKit")
+        path_utils.maybe_make_directory(self._runtime_path)
+
+        # The PID returned when Apache is started goes away (due to dropping
+        # privileges?). The proper controlling PID is written to a file in the
+        # apache runtime directory.
+        self._pid_file = os.path.join(self._runtime_path, 'httpd.pid')
+
+        test_dir = path_utils.path_from_base('third_party', 'WebKit',
+            'LayoutTests')
+        js_test_resources_dir = self._cygwin_safe_join(test_dir, "fast", "js",
+            "resources")
+        mime_types_path = self._cygwin_safe_join(test_dir, "http", "conf",
+            "mime.types")
+        cert_file = self._cygwin_safe_join(test_dir, "http", "conf",
+            "webkit-httpd.pem")
+        access_log = self._cygwin_safe_join(output_dir, "access_log.txt")
+        error_log = self._cygwin_safe_join(output_dir, "error_log.txt")
+        document_root = self._cygwin_safe_join(test_dir, "http", "tests")
+
+        executable = platform_utils.apache_executable_path()
+        if self._is_cygwin():
+            executable = self._get_cygwin_path(executable)
+
+        cmd = [executable,
+            '-f', self._get_apache_config_file_path(test_dir, output_dir),
+            '-C', "\'DocumentRoot %s\'" % document_root,
+            '-c', "\'Alias /js-test-resources %s\'" % js_test_resources_dir,
+            '-C', "\'Listen %s\'" % "127.0.0.1:8000",
+            '-C', "\'Listen %s\'" % "127.0.0.1:8081",
+            '-c', "\'TypesConfig \"%s\"\'" % mime_types_path,
+            '-c', "\'CustomLog \"%s\" common\'" % access_log,
+            '-c', "\'ErrorLog \"%s\"\'" % error_log,
+            '-C', "\'User \"%s\"\'" % os.environ.get("USERNAME",
+                os.environ.get("USER", ""))]
+
+        if self._is_cygwin():
+            cygbin = path_utils.path_from_base('third_party', 'cygwin', 'bin')
+            # Not entirely sure why, but from cygwin we need to run the
+            # httpd command through bash.
+            self._start_cmd = [
+                os.path.join(cygbin, 'bash.exe'),
+                '-c',
+                'PATH=%s %s' % (self._get_cygwin_path(cygbin), " ".join(cmd)),
+              ]
+        else:
+            # TODO(ojan): When we get cygwin using Apache 2, set the
+            # cert file for cygwin as well.
+            cmd.extend(['-c', "\'SSLCertificateFile %s\'" % cert_file])
+            # Join the string here so that Cygwin/Windows and Mac/Linux
+            # can use the same code. Otherwise, we could remove the single
+            # quotes above and keep cmd as a sequence.
+            self._start_cmd = " ".join(cmd)
+
+    def _is_cygwin(self):
+        return sys.platform in ("win32", "cygwin")
+
+    def _cygwin_safe_join(self, *parts):
+        """Returns a platform appropriate path."""
+        path = os.path.join(*parts)
+        if self._is_cygwin():
+            return self._get_cygwin_path(path)
+        return path
+
+    def _get_cygwin_path(self, path):
+        """Convert a Windows path to a cygwin path.
+
+        The cygpath utility insists on converting paths that it thinks are
+        Cygwin root paths to what it thinks the correct roots are.  So paths
+        such as "C:\b\slave\webkit-release\build\third_party\cygwin\bin"
+        are converted to plain "/usr/bin".  To avoid this, we
+        do the conversion manually.
+
+        The path is expected to be an absolute path, on any drive.
+        """
+        drive_regexp = re.compile(r'([a-z]):[/\\]', re.IGNORECASE)
+
+        def lower_drive(matchobj):
+            return '/cygdrive/%s/' % matchobj.group(1).lower()
+        path = drive_regexp.sub(lower_drive, path)
+        return path.replace('\\', '/')
+
+    def _get_apache_config_file_path(self, test_dir, output_dir):
+        """Returns the path to the apache config file to use.
+        Args:
+          test_dir: absolute path to the LayoutTests directory.
+          output_dir: absolute path to the layout test results directory.
+        """
+        httpd_config = platform_utils.apache_config_file_path()
+        httpd_config_copy = os.path.join(output_dir, "httpd.conf")
+        httpd_conf = open(httpd_config).read()
+        if self._is_cygwin():
+            # This is a gross hack, but it lets us use the upstream .conf file
+            # and our checked-in cygwin. This tells the server the root
+            # directory to look in for .so modules. It will use this path
+            # plus the relative paths to the .so files listed in the .conf
+            # file. We have apache/cygwin checked into our tree so
+            # people don't have to install it into their cygwin.
+            cygusr = path_utils.path_from_base('third_party', 'cygwin', 'usr')
+            httpd_conf = httpd_conf.replace('ServerRoot "/usr"',
+                'ServerRoot "%s"' % self._get_cygwin_path(cygusr))
+
+        # TODO(ojan): Instead of writing an extra file, check in a conf file
+        # upstream. Or, even better, upstream/delete all our chrome http
+        # tests so we don't need this special-cased DocumentRoot and can
+        # then just use the upstream
+        # conf file.
+        chrome_document_root = path_utils.path_from_base('webkit', 'data',
+            'layout_tests')
+        if self._is_cygwin():
+            chrome_document_root = self._get_cygwin_path(chrome_document_root)
+        httpd_conf = (httpd_conf +
+            self._get_virtual_host_config(chrome_document_root, 8081))
+
+        f = open(httpd_config_copy, 'wb')
+        f.write(httpd_conf)
+        f.close()
+
+        if self._is_cygwin():
+            return self._get_cygwin_path(httpd_config_copy)
+        return httpd_config_copy
+
+    def _get_virtual_host_config(self, document_root, port, ssl=False):
+        """Returns a <VirtualHost> directive block for an httpd.conf file.
+        It listens on 127.0.0.1 at the given port.
+        """
+        return '\n'.join(('<VirtualHost 127.0.0.1:%s>' % port,
+                          'DocumentRoot %s' % document_root,
+                          ssl and 'SSLEngine On' or '',
+                          '</VirtualHost>', ''))
+
+    def _start_httpd_process(self):
+        """Starts the httpd process and returns whether there were errors."""
+        # Use shell=True because we join the arguments into a string for
+        # the sake of Windows/Cygwin and it needs quoting that breaks
+        # shell=False.
+        self._httpd_proc = subprocess.Popen(self._start_cmd,
+                                            stderr=subprocess.PIPE,
+            shell=True)
+        err = self._httpd_proc.stderr.read()
+        if len(err):
+            logging.debug(err)
+            return False
+        return True
+
+    def start(self):
+        """Starts the apache http server."""
+        # Stop any currently running servers.
+        self.stop()
+
+        logging.debug("Starting apache http server")
+        server_started = self.wait_for_action(self._start_httpd_process)
+        if server_started:
+            logging.debug("Apache started. Testing ports")
+            server_started = self.wait_for_action(
+                self.is_server_running_on_all_ports)
+
+        if server_started:
+            logging.debug("Server successfully started")
+        else:
+            raise Exception('Failed to start http server')
+
+    def stop(self):
+        """Stops the apache http server."""
+        logging.debug("Shutting down any running http servers")
+        httpd_pid = None
+        if os.path.exists(self._pid_file):
+            httpd_pid = int(open(self._pid_file).readline())
+        path_utils.shut_down_http_server(httpd_pid)
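The `_get_cygwin_path` conversion above side-steps cygpath's root remapping by rewriting the drive prefix manually. Extracted as a standalone function (same logic as in the diff; the function name here is illustrative), it behaves like this:

```python
import re

def to_cygwin_path(path):
    """Rewrite 'C:\\foo\\bar' as '/cygdrive/c/foo/bar' without invoking cygpath."""
    drive_regexp = re.compile(r'([a-z]):[/\\]', re.IGNORECASE)
    # Lower-case the drive letter and replace 'X:\' / 'X:/' with '/cygdrive/x/'.
    path = drive_regexp.sub(lambda m: '/cygdrive/%s/' % m.group(1).lower(), path)
    return path.replace('\\', '/')
```

This keeps build-tree paths like `C:\b\slave\...` intact instead of letting cygpath collapse them to `/usr/bin`.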
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/http_server.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/http_server.py
new file mode 100755
index 0000000..dfcb44f
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/http_server.py
@@ -0,0 +1,279 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""A class to help start/stop the lighttpd server used by layout tests."""
+
+
+import logging
+import optparse
+import os
+import shutil
+import subprocess
+import sys
+import tempfile
+import time
+import urllib
+
+import http_server_base
+import path_utils
+
+class HttpdNotStarted(Exception): pass
+
+def remove_log_files(folder, starts_with):
+    files = os.listdir(folder)
+    for log_file in files:
+        if log_file.startswith(starts_with):
+            full_path = os.path.join(folder, log_file)
+            os.remove(full_path)
+
+
+class Lighttpd(http_server_base.HttpServerBase):
+    # Webkit tests
+    try:
+        _webkit_tests = path_utils.path_from_base('third_party', 'WebKit',
+                                                  'LayoutTests', 'http',
+                                                  'tests')
+        _js_test_resource = path_utils.path_from_base('third_party', 'WebKit',
+                                                      'LayoutTests', 'fast',
+                                                      'js', 'resources')
+    except path_utils.PathNotFound:
+        _webkit_tests = None
+        _js_test_resource = None
+
+    # Path where we can access all of the tests
+    _all_tests = path_utils.path_from_base('webkit', 'data', 'layout_tests')
+    # Self generated certificate for SSL server (for client cert get
+    # <base-path>\chrome\test\data\ssl\certs\root_ca_cert.crt)
+    _pem_file = path_utils.path_from_base(
+        os.path.dirname(os.path.abspath(__file__)), 'httpd2.pem')
+    # One mapping where we can get to everything
+    VIRTUALCONFIG = [{'port': 8081, 'docroot': _all_tests}]
+
+    if _webkit_tests:
+        VIRTUALCONFIG.extend(
+          # Three mappings (one with SSL enabled) for LayoutTests http tests
+          [{'port': 8000, 'docroot': _webkit_tests},
+           {'port': 8080, 'docroot': _webkit_tests},
+           {'port': 8443, 'docroot': _webkit_tests, 'sslcert': _pem_file}])
+
+    def __init__(self, output_dir, background=False, port=None,
+                 root=None, register_cygwin=None, run_background=None):
+        """Args:
+          output_dir: the absolute path to the layout test result directory
+        """
+        self._output_dir = output_dir
+        self._process = None
+        self._port = port
+        self._root = root
+        self._register_cygwin = register_cygwin
+        self._run_background = run_background
+        if self._port:
+            self._port = int(self._port)
+
+    def is_running(self):
+        return self._process is not None
+
+    def start(self):
+        if self.is_running():
+            raise Exception('Lighttpd already running')
+
+        base_conf_file = path_utils.path_from_base('third_party',
+            'WebKitTools', 'Scripts', 'webkitpy', 'layout_tests',
+            'layout_package', 'lighttpd.conf')
+        out_conf_file = os.path.join(self._output_dir, 'lighttpd.conf')
+        time_str = time.strftime("%d%b%Y-%H%M%S")
+        access_file_name = "access.log-" + time_str + ".txt"
+        access_log = os.path.join(self._output_dir, access_file_name)
+        log_file_name = "error.log-" + time_str + ".txt"
+        error_log = os.path.join(self._output_dir, log_file_name)
+
+        # Remove old log files. We only need to keep the last ones.
+        remove_log_files(self._output_dir, "access.log-")
+        remove_log_files(self._output_dir, "error.log-")
+
+        # Write out the config
+        f = open(base_conf_file, 'rb')
+        base_conf = f.read()
+        f.close()
+
+        f = open(out_conf_file, 'wb')
+        f.write(base_conf)
+
+        # Write out our cgi handlers.  Run perl through env so that it
+        # processes the #! line and runs perl with the proper command
+        # line arguments. Emulate apache's mod_asis with a cat cgi handler.
+        f.write(('cgi.assign = ( ".cgi"  => "/usr/bin/env",\n'
+                 '               ".pl"   => "/usr/bin/env",\n'
+                 '               ".asis" => "/bin/cat",\n'
+                 '               ".php"  => "%s" )\n\n') %
+                                     path_utils.lighttpd_php_path())
+
+        # Setup log files
+        f.write(('server.errorlog = "%s"\n'
+                 'accesslog.filename = "%s"\n\n') % (error_log, access_log))
+
+        # Set up upload folders. The upload folder holds temporary upload
+        # files and POST data; this supports XHR layout tests that perform
+        # POSTs.
+        f.write(('server.upload-dirs = ( "%s" )\n\n') % (self._output_dir))
+
+        # Setup a link to where the js test templates are stored
+        f.write(('alias.url = ( "/js-test-resources" => "%s" )\n\n') %
+                    (self._js_test_resource))
+
+        # Dump out the virtual host config at the bottom.
+        if self._root:
+            if self._port:
+                # Have both port and root dir.
+                mappings = [{'port': self._port, 'docroot': self._root}]
+            else:
+                # Have only a root dir - set the ports as for LayoutTests.
+                # This is used in ui_tests to run http tests against a browser.
+
+                # default set of ports as for LayoutTests but with a
+                # specified root.
+                mappings = [{'port': 8000, 'docroot': self._root},
+                            {'port': 8080, 'docroot': self._root},
+                            {'port': 8443, 'docroot': self._root,
+                             'sslcert': Lighttpd._pem_file}]
+        else:
+            mappings = self.VIRTUALCONFIG
+        for mapping in mappings:
+            ssl_setup = ''
+            if 'sslcert' in mapping:
+                ssl_setup = ('  ssl.engine = "enable"\n'
+                             '  ssl.pemfile = "%s"\n' % mapping['sslcert'])
+
+            f.write(('$SERVER["socket"] == "127.0.0.1:%d" {\n'
+                     '  server.document-root = "%s"\n' +
+                     ssl_setup +
+                     '}\n\n') % (mapping['port'], mapping['docroot']))
+        f.close()
+
+        executable = path_utils.lighttpd_executable_path()
+        module_path = path_utils.lighttpd_module_path()
+        start_cmd = [executable,
+                     # Newly written config file
+                     '-f', path_utils.path_from_base(self._output_dir,
+                                                     'lighttpd.conf'),
+                     # Where it can find its module dynamic libraries
+                     '-m', module_path]
+
+        if not self._run_background:
+            # '-D': don't daemonize; keep lighttpd in the foreground.
+            start_cmd.append('-D')
+
+        # Copy liblightcomp.dylib to /tmp/lighttpd/lib to work around the
+        # bug that mod_alias.so loads it from the hard coded path.
+        if sys.platform == 'darwin':
+            tmp_module_path = '/tmp/lighttpd/lib'
+            if not os.path.exists(tmp_module_path):
+                os.makedirs(tmp_module_path)
+            lib_file = 'liblightcomp.dylib'
+            shutil.copyfile(os.path.join(module_path, lib_file),
+                            os.path.join(tmp_module_path, lib_file))
+
+        # Put the cygwin directory first in the path to find cygwin1.dll
+        env = os.environ
+        if sys.platform in ('cygwin', 'win32'):
+            env['PATH'] = '%s;%s' % (
+                path_utils.path_from_base('third_party', 'cygwin', 'bin'),
+                env['PATH'])
+
+        if sys.platform == 'win32' and self._register_cygwin:
+            setup_mount = path_utils.path_from_base('third_party', 'cygwin',
+                                                    'setup_mount.bat')
+            subprocess.Popen(setup_mount).wait()
+
+        logging.debug('Starting http server')
+        self._process = subprocess.Popen(start_cmd, env=env)
+
+        # Wait for server to start.
+        self.mappings = mappings
+        server_started = self.wait_for_action(
+            self.is_server_running_on_all_ports)
+
+        # Our process terminated already
+        if not server_started or self._process.returncode is not None:
+            raise HttpdNotStarted('Failed to start httpd.')
+
+        logging.debug("Server successfully started")
+
+    # TODO(deanm): Find a nicer way to shut down cleanly.  Our log files are
+    # probably not being flushed, etc... why doesn't our python have os.kill ?
+
+    def stop(self, force=False):
+        if not force and not self.is_running():
+            return
+
+        httpd_pid = None
+        if self._process:
+            httpd_pid = self._process.pid
+        path_utils.shut_down_http_server(httpd_pid)
+
+        if self._process:
+            self._process.wait()
+            self._process = None
+
+if '__main__' == __name__:
+    # Provide some command line params for starting/stopping the http server
+    # manually. Also used in ui_tests to run http layout tests in a browser.
+    option_parser = optparse.OptionParser()
+    option_parser.add_option('-k', '--server',
+        help='Server action (start|stop)')
+    option_parser.add_option('-p', '--port',
+        help='Port to listen on (overrides layout test ports)')
+    option_parser.add_option('-r', '--root',
+        help='Absolute path to DocumentRoot (overrides layout test roots)')
+    option_parser.add_option('--register_cygwin', action="store_true",
+        dest="register_cygwin", help='Register Cygwin paths (on Win try bots)')
+    option_parser.add_option('--run_background', action="store_true",
+        dest="run_background",
+        help='Run on background (for running as UI test)')
+    options, args = option_parser.parse_args()
+
+    if not options.server:
+        print ('Usage: %s --server {start|stop} [--root=root_dir]'
+               ' [--port=port_number]' % sys.argv[0])
+    else:
+        if (options.root is None) and (options.port is not None):
+            # Specifying a root but not a port means we want httpd on the
+            # default set of ports that LayoutTests use, but pointing to a
+            # different source of tests. Specifying a port but no root
+            # does not seem meaningful.
+            raise ValueError('Specifying a port also requires a root.')
+        httpd = Lighttpd(tempfile.gettempdir(),
+                         port=options.port,
+                         root=options.root,
+                         register_cygwin=options.register_cygwin,
+                         run_background=options.run_background)
+        if options.server == 'start':
+            httpd.start()
+        else:
+            httpd.stop(force=True)
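The root/port validation in the `__main__` block above enforces that a port may only be overridden together with a root. A minimal standalone sketch of that rule (function name and values are illustrative, not part of the patch):

```python
def validate_server_options(root, port):
    """Mirror the rule above: a port without a root is rejected;
    every other combination is allowed."""
    if root is None and port is not None:
        raise ValueError('Specifying a port also requires a root.')
    return True

print(validate_server_options('/tmp/tests', 8000))  # True
print(validate_server_options(None, None))          # True (use defaults)
```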
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/http_server_base.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/http_server_base.py
new file mode 100644
index 0000000..2720486
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/http_server_base.py
@@ -0,0 +1,67 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Base class with common routines between the Apache and Lighttpd servers."""
+
+import logging
+import time
+import urllib
+
+
+class HttpServerBase(object):
+
+    def wait_for_action(self, action):
+        """Repeat the action for 20 seconds or until it succeeds. Returns
+        whether it succeeded."""
+        start_time = time.time()
+        while time.time() - start_time < 20:
+            if action():
+                return True
+            time.sleep(1)
+
+        return False
+
+    def is_server_running_on_all_ports(self):
+        """Returns whether the server is running on all the desired ports."""
+        for mapping in self.mappings:
+            if 'sslcert' in mapping:
+                http_suffix = 's'
+            else:
+                http_suffix = ''
+
+            url = 'http%s://127.0.0.1:%d/' % (http_suffix, mapping['port'])
+
+            try:
+                response = urllib.urlopen(url)
+                response.close()
+                logging.debug("Server running at %s", url)
+            except IOError:
+                logging.debug("Server NOT running at %s", url)
+                return False
+                return False
+
+        return True
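The `wait_for_action` helper above polls an arbitrary predicate once per second for up to 20 seconds. The same pattern, sketched standalone with a configurable timeout and a counter-based action for illustration (the `flaky` callable and shortened intervals are hypothetical, not from the patch):

```python
import time


def wait_for_action(action, timeout=20, poll_interval=1):
    """Repeat action until it returns True or the timeout expires.
    Returns whether it succeeded."""
    start_time = time.time()
    while time.time() - start_time < timeout:
        if action():
            return True
        time.sleep(poll_interval)
    return False


# An action that only succeeds on its third call.
calls = {'n': 0}


def flaky():
    calls['n'] += 1
    return calls['n'] >= 3


print(wait_for_action(flaky, timeout=5, poll_interval=0.01))  # True
```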
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/httpd2.pem b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/httpd2.pem
new file mode 100644
index 0000000..6349b78
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/httpd2.pem
@@ -0,0 +1,41 @@
+-----BEGIN CERTIFICATE-----
+MIIEZDCCAkygAwIBAgIBATANBgkqhkiG9w0BAQUFADBgMRAwDgYDVQQDEwdUZXN0
+IENBMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN
+TW91bnRhaW4gVmlldzESMBAGA1UEChMJQ2VydCBUZXN0MB4XDTA4MDcyODIyMzIy
+OFoXDTEzMDcyNzIyMzIyOFowSjELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlm
+b3JuaWExEjAQBgNVBAoTCUNlcnQgVGVzdDESMBAGA1UEAxMJMTI3LjAuMC4xMIGf
+MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDQj2tPWPUgbuI4H3/3dnttqVbndwU3
+3BdRCd67DFM44GRrsjDSH4bY/EbFyX9D52d/iy6ZaAmDePcCz5k/fgP3DMujykYG
+qgNiV2ywxTlMj7NlN2C7SRt68fQMZr5iI7rypdxuaZt9lSMD3ENBffYtuLTyZd9a
+3JPJe1TaIab5GwIDAQABo4HCMIG/MAkGA1UdEwQCMAAwHQYDVR0OBBYEFCYLBv5K
+x5sLNVlpLh5FwTwhdDl7MIGSBgNVHSMEgYowgYeAFF3Of5nj1BlBMU/Gz7El9Vqv
+45cxoWSkYjBgMRAwDgYDVQQDEwdUZXN0IENBMQswCQYDVQQGEwJVUzETMBEGA1UE
+CBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNTW91bnRhaW4gVmlldzESMBAGA1UEChMJ
+Q2VydCBUZXN0ggkA1FGT1D/e2U4wDQYJKoZIhvcNAQEFBQADggIBAEtkVmLObUgk
+b2cIA2S+QDtifq1UgVfBbytvR2lFmnADOR55mo0gHQG3HHqq4g034LmoVXDHhUk8
+Gb6aFiv4QubmVhLXcUelTRXwiNvGzkW7pC6Jrq105hdPjzXMKTcmiLaopm5Fqfc7
+hj5Cn1Sjspc8pdeQjrbeMdvca7KlFrGP8YkwCU2xOOX9PiN9G0966BWfjnr/fZZp
++OQVuUFHdiAZwthEMuDpAAXHqYXIsermgdOpgJaA53cf8NqBV2QGhtFgtsJCRoiu
+7DKqhyRWBGyz19VIH2b7y+6qvQVxuHk19kKRM0nftw/yNcJnm7gtttespMUPsOMa
+a2SD1G0hm0TND6vxaBhgR3cVqpl/qIpAdFi00Tm7hTyYE7I43zPW03t+/DpCt3Um
+EMRZsQ90co5q+bcx/vQ7YAtwUh30uMb0wpibeyCwDp8cqNmSiRkEuc/FjTYes5t8
+5gR//WX1l0+qjrjusO9NmoLnq2Yk6UcioX+z+q6Z/dudGfqhLfeWD2Q0LWYA242C
+d7km5Y3KAt1PJdVsof/aiVhVdddY/OIEKTRQhWEdDbosy2eh16BCKXT2FFvhNDg1
+AYFvn6I8nj9IldMJiIc3DdhacEAEzRMeRgPdzAa1griKUGknxsyTyRii8ru0WS6w
+DCNrlDOVXdzYGEZooBI76BDVY0W0akjV
+-----END CERTIFICATE-----
+-----BEGIN RSA PRIVATE KEY-----
+MIICXQIBAAKBgQDQj2tPWPUgbuI4H3/3dnttqVbndwU33BdRCd67DFM44GRrsjDS
+H4bY/EbFyX9D52d/iy6ZaAmDePcCz5k/fgP3DMujykYGqgNiV2ywxTlMj7NlN2C7
+SRt68fQMZr5iI7rypdxuaZt9lSMD3ENBffYtuLTyZd9a3JPJe1TaIab5GwIDAQAB
+AoGANHXu8z2YIzlhE+bwhGm8MGBpKL3qhRuKjeriqMA36tWezOw8lY4ymEAU+Ulv
+BsCdaxqydQoTYou57m4TyUHEcxq9pq3H0zB0qL709DdHi/t4zbV9XIoAzC5v0/hG
+9+Ca29TwC02FCw+qLkNrtwCpwOcQmc+bPxqvFu1iMiahURECQQD2I/Hi2413CMZz
+TBjl8fMiVO9GhA2J0sc8Qi+YcgJakaLD9xcbaiLkTzPZDlA389C1b6Ia+poAr4YA
+Ve0FFbxpAkEA2OobayyHE/QtPEqoy6NLR57jirmVBNmSWWd4lAyL5UIHIYVttJZg
+8CLvbzaU/iDGwR+wKsM664rKPHEmtlyo4wJBAMeSqYO5ZOCJGu9NWjrHjM3fdAsG
+8zs2zhiLya+fcU0iHIksBW5TBmt71Jw/wMc9R5J1K0kYvFml98653O5si1ECQBCk
+RV4/mE1rmlzZzYFyEcB47DQkcM5ictvxGEsje0gnfKyRtAz6zI0f4QbDRUMJ+LWw
+XK+rMsYHa+SfOb0b9skCQQCLdeonsIpFDv/Uv+flHISy0WA+AFkLXrRkBKh6G/OD
+dMHaNevkJgUnpceVEnkrdenp5CcEoFTI17pd+nBgDm/B
+-----END RSA PRIVATE KEY-----
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py
new file mode 100644
index 0000000..b7b26e9
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py
@@ -0,0 +1,184 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import logging
+import os
+
+from layout_package import json_results_generator
+from layout_package import path_utils
+from layout_package import test_expectations
+from layout_package import test_failures
+
+
+class JSONLayoutResultsGenerator(json_results_generator.JSONResultsGenerator):
+    """A JSON results generator for layout tests."""
+
+    LAYOUT_TESTS_PATH = "LayoutTests"
+
+    # Additional JSON fields.
+    WONTFIX = "wontfixCounts"
+    DEFERRED = "deferredCounts"
+
+    def __init__(self, builder_name, build_name, build_number,
+        results_file_base_path, builder_base_url,
+        test_timings, expectations, result_summary, all_tests):
+        """Modifies the results.json file. Grabs it off the archive directory
+        if it is not found locally.
+
+        Args:
+          result_summary: ResultsSummary object storing the summary of the test
+              results.
+          (see the comment of JSONResultsGenerator.__init__ for other Args)
+        """
+
+        self._builder_name = builder_name
+        self._build_name = build_name
+        self._build_number = build_number
+        self._builder_base_url = builder_base_url
+        self._results_file_path = os.path.join(results_file_base_path,
+            self.RESULTS_FILENAME)
+        self._expectations = expectations
+
+        # We don't use self._skipped_tests and self._passed_tests as we
+        # override _insert_failure_summaries.
+
+        # We want relative paths to LayoutTest root for JSON output.
+        path_to_name = self._get_path_relative_to_layout_test_root
+        self._result_summary = result_summary
+        self._failures = dict(
+            (path_to_name(test), test_failures.determine_result_type(failures))
+            for (test, failures) in result_summary.failures.iteritems())
+        self._all_tests = [path_to_name(test) for test in all_tests]
+        self._test_timings = dict(
+            (path_to_name(test_tuple.filename), test_tuple.test_run_time)
+            for test_tuple in test_timings)
+
+        self._generate_json_output()
+
+    def _get_path_relative_to_layout_test_root(self, test):
+        """Returns the path of the test relative to the layout test root.
+        For example, for:
+          src/third_party/WebKit/LayoutTests/fast/forms/foo.html
+        We would return
+          fast/forms/foo.html
+        """
+        index = test.find(self.LAYOUT_TESTS_PATH)
+        if index == -1:
+            # Already a relative path.
+            relative_path = test
+        else:
+            # Skip past "LayoutTests" and the path separator following it.
+            relative_path = test[index + len(self.LAYOUT_TESTS_PATH) + 1:]
+
+        # Make sure all paths are unix-style.
+        return relative_path.replace('\\', '/')
+
+    # override
+    def _convert_json_to_current_version(self, results_json):
+        archive_version = None
+        if self.VERSION_KEY in results_json:
+            archive_version = results_json[self.VERSION_KEY]
+
+        super(JSONLayoutResultsGenerator,
+              self)._convert_json_to_current_version(results_json)
+
+        # Convert version 2 to version 3: test keys become paths relative
+        # to the layout test root.
+        if archive_version == 2:
+            for results_for_builder in results_json.itervalues():
+                try:
+                    test_results = results_for_builder[self.TESTS]
+                except KeyError:
+                    continue
+
+                # Iterate over a copy of the keys since we mutate the dict.
+                for test in test_results.keys():
+                    test_path = (
+                        self._get_path_relative_to_layout_test_root(test))
+                    if test_path != test:
+                        test_results[test_path] = test_results[test]
+                        del test_results[test]
+
+    # override
+    def _insert_failure_summaries(self, results_for_builder):
+        summary = self._result_summary
+
+        self._insert_item_into_raw_list(results_for_builder,
+            len((set(summary.failures.keys()) |
+                summary.tests_by_expectation[test_expectations.SKIP]) &
+                summary.tests_by_timeline[test_expectations.NOW]),
+            self.FIXABLE_COUNT)
+        self._insert_item_into_raw_list(results_for_builder,
+            self._get_failure_summary_entry(test_expectations.NOW),
+            self.FIXABLE)
+        self._insert_item_into_raw_list(results_for_builder,
+            len(self._expectations.get_tests_with_timeline(
+                test_expectations.NOW)), self.ALL_FIXABLE_COUNT)
+        self._insert_item_into_raw_list(results_for_builder,
+            self._get_failure_summary_entry(test_expectations.DEFER),
+            self.DEFERRED)
+        self._insert_item_into_raw_list(results_for_builder,
+            self._get_failure_summary_entry(test_expectations.WONTFIX),
+            self.WONTFIX)
+
+    # override
+    def _normalize_results_json(self, test, test_name, tests):
+        super(JSONLayoutResultsGenerator, self)._normalize_results_json(
+            test, test_name, tests)
+
+        # Remove tests that don't exist anymore.
+        full_path = os.path.join(path_utils.layout_tests_dir(), test_name)
+        full_path = os.path.normpath(full_path)
+        if not os.path.exists(full_path):
+            del tests[test_name]
+
+    def _get_failure_summary_entry(self, timeline):
+        """Creates a summary object to insert into the JSON.
+
+        Args:
+          timeline: current test_expectations timeline to build the entry
+              for (e.g., test_expectations.NOW).
+        """
+        entry = {}
+        summary = self._result_summary
+        timeline_tests = summary.tests_by_timeline[timeline]
+        entry[self.SKIP_RESULT] = len(
+            summary.tests_by_expectation[test_expectations.SKIP] &
+            timeline_tests)
+        entry[self.PASS_RESULT] = len(
+            summary.tests_by_expectation[test_expectations.PASS] &
+            timeline_tests)
+        for failure_type in summary.tests_by_expectation.keys():
+            if failure_type not in self.FAILURE_TO_CHAR:
+                continue
+            count = len(summary.tests_by_expectation[failure_type] &
+                        timeline_tests)
+            entry[self.FAILURE_TO_CHAR[failure_type]] = count
+        return entry
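The path normalization used throughout `JSONLayoutResultsGenerator` strips everything up to and including the `LayoutTests` directory so the JSON keys are platform-independent. A self-contained sketch of that transformation, under the same assumptions as `_get_path_relative_to_layout_test_root` above:

```python
LAYOUT_TESTS_PATH = "LayoutTests"


def path_relative_to_layout_test_root(test):
    """Strip everything up to and including 'LayoutTests/' and
    normalize separators to unix-style."""
    index = test.find(LAYOUT_TESTS_PATH)
    if index == -1:
        relative_path = test  # already a relative path
    else:
        relative_path = test[index + len(LAYOUT_TESTS_PATH) + 1:]
    return relative_path.replace('\\', '/')


print(path_relative_to_layout_test_root(
    'src/third_party/WebKit/LayoutTests/fast/forms/foo.html'))
# fast/forms/foo.html
```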
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py
new file mode 100644
index 0000000..596e1e4
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py
@@ -0,0 +1,418 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import logging
+import os
+import subprocess
+import sys
+import time
+import urllib2
+import xml.dom.minidom
+
+from layout_package import path_utils
+from layout_package import test_expectations
+
+sys.path.append(path_utils.path_from_base('third_party', 'WebKit',
+                                          'WebKitTools'))
+import simplejson
+
+
+class JSONResultsGenerator(object):
+
+    MAX_NUMBER_OF_BUILD_RESULTS_TO_LOG = 750
+    # Min time (seconds) that will be added to the JSON.
+    MIN_TIME = 1
+    JSON_PREFIX = "ADD_RESULTS("
+    JSON_SUFFIX = ");"
+    PASS_RESULT = "P"
+    SKIP_RESULT = "X"
+    NO_DATA_RESULT = "N"
+    VERSION = 3
+    VERSION_KEY = "version"
+    RESULTS = "results"
+    TIMES = "times"
+    BUILD_NUMBERS = "buildNumbers"
+    WEBKIT_SVN = "webkitRevision"
+    CHROME_SVN = "chromeRevision"
+    TIME = "secondsSinceEpoch"
+    TESTS = "tests"
+
+    FIXABLE_COUNT = "fixableCount"
+    FIXABLE = "fixableCounts"
+    ALL_FIXABLE_COUNT = "allFixableCount"
+
+    # Note that we omit test_expectations.FAIL from this list because
+    # it should never show up (it's a legacy input expectation, never
+    # an output expectation).
+    FAILURE_TO_CHAR = {test_expectations.CRASH: "C",
+                       test_expectations.TIMEOUT: "T",
+                       test_expectations.IMAGE: "I",
+                       test_expectations.TEXT: "F",
+                       test_expectations.MISSING: "O",
+                       test_expectations.IMAGE_PLUS_TEXT: "Z"}
+    FAILURE_CHARS = FAILURE_TO_CHAR.values()
+
+    RESULTS_FILENAME = "results.json"
+
+    def __init__(self, builder_name, build_name, build_number,
+        results_file_base_path, builder_base_url,
+        test_timings, failures, passed_tests, skipped_tests, all_tests):
+        """Modifies the results.json file. Grabs it off the archive directory
+        if it is not found locally.
+
+        Args
+          builder_name: the builder name (e.g. Webkit).
+          build_name: the build name (e.g. webkit-rel).
+          build_number: the build number.
+          results_file_base_path: Absolute path to the directory containing the
+              results json file.
+          builder_base_url: the URL where we have the archived test results.
+          test_timings: Map of test name to the test's run time.
+          failures: Map of test name to a failure type (of test_expectations).
+          passed_tests: A set containing all the passed tests.
+          skipped_tests: A set containing all the skipped tests.
+          all_tests: List of all the tests that were run.  This should not
+              include skipped tests.
+        """
+        self._builder_name = builder_name
+        self._build_name = build_name
+        self._build_number = build_number
+        self._builder_base_url = builder_base_url
+        self._results_file_path = os.path.join(results_file_base_path,
+            self.RESULTS_FILENAME)
+        self._test_timings = test_timings
+        self._failures = failures
+        self._passed_tests = passed_tests
+        self._skipped_tests = skipped_tests
+        self._all_tests = all_tests
+
+        self._generate_json_output()
+
+    def _generate_json_output(self):
+        """Generates the JSON output file."""
+        json = self._get_json()
+        if json:
+            results_file = open(self._results_file_path, "w")
+            results_file.write(json)
+            results_file.close()
+
+    def _get_svn_revision(self, in_directory=None):
+        """Returns the svn revision for the given directory.
+
+        Args:
+          in_directory: The directory where svn is to be run.
+        """
+        output = subprocess.Popen(["svn", "info", "--xml"],
+                                  cwd=in_directory,
+                                  shell=(sys.platform == 'win32'),
+                                  stdout=subprocess.PIPE).communicate()[0]
+        try:
+            dom = xml.dom.minidom.parseString(output)
+            return dom.getElementsByTagName('entry')[0].getAttribute(
+                'revision')
+        except xml.parsers.expat.ExpatError:
+            return ""
+
+    def _get_archived_json_results(self):
+        """Reads old results JSON file if it exists.
+        Returns (archived_results, error) tuple where error is None if results
+        were successfully read.
+        """
+        results_json = {}
+        old_results = None
+        error = None
+
+        if os.path.exists(self._results_file_path):
+            old_results_file = open(self._results_file_path, "r")
+            old_results = old_results_file.read()
+            old_results_file.close()
+        elif self._builder_base_url:
+            # Check if we have the archived JSON file on the buildbot server.
+            results_file_url = (self._builder_base_url +
+                self._build_name + "/" + self.RESULTS_FILENAME)
+            logging.error("Local results.json file does not exist. Grabbing "
+                "it off the archive at " + results_file_url)
+
+            try:
+                results_file = urllib2.urlopen(results_file_url)
+                old_results = results_file.read()
+            except urllib2.HTTPError, http_error:
+                # A non-4xx status code means the bot is hosed for some reason
+                # and we can't grab the results.json file off of it.
+                if http_error.code < 400 or http_error.code >= 500:
+                    error = http_error
+            except urllib2.URLError, url_error:
+                error = url_error
+
+        if old_results:
+            # Strip the prefix and suffix so we can get the actual JSON object.
+            old_results = old_results[len(self.JSON_PREFIX):
+                                      len(old_results) - len(self.JSON_SUFFIX)]
+
+            try:
+                results_json = simplejson.loads(old_results)
+            except ValueError:
+                # The file is not valid JSON. Just clobber the results.
+                logging.debug("results.json was not valid JSON. Clobbering.")
+                results_json = {}
+        else:
+            logging.debug('Old JSON results do not exist. Starting fresh.')
+            results_json = {}
+
+        return results_json, error
+
+    def _get_json(self):
+        """Gets the results for the results.json file."""
+        results_json, error = self._get_archived_json_results()
+        if error:
+            # If there was an error don't write a results.json
+            # file at all as it would lose all the information on the bot.
+            logging.error("Archive directory is inaccessible. Not modifying "
+                "or clobbering the results.json file: " + str(error))
+            return None
+
+        builder_name = self._builder_name
+        if results_json and builder_name not in results_json:
+            logging.debug("Builder name (%s) is not in the results.json file."
+                          % builder_name)
+
+        self._convert_json_to_current_version(results_json)
+
+        if builder_name not in results_json:
+            results_json[builder_name] = (
+                self._create_results_for_builder_json())
+
+        results_for_builder = results_json[builder_name]
+
+        self._insert_generic_metadata(results_for_builder)
+
+        self._insert_failure_summaries(results_for_builder)
+
+        # Update the all failing tests with result type and time.
+        tests = results_for_builder[self.TESTS]
+        all_failing_tests = set(self._failures.iterkeys())
+        all_failing_tests.update(tests.iterkeys())
+        for test in all_failing_tests:
+            self._insert_test_time_and_result(test, tests)
+
+        # Specify separators in order to get compact encoding.
+        results_str = simplejson.dumps(results_json, separators=(',', ':'))
+        return self.JSON_PREFIX + results_str + self.JSON_SUFFIX
+
+    def _insert_failure_summaries(self, results_for_builder):
+        """Inserts aggregate pass/failure statistics into the JSON.
+        This method reads self._skipped_tests, self._passed_tests and
+        self._failures and inserts FIXABLE, FIXABLE_COUNT and ALL_FIXABLE_COUNT
+        entries.
+
+        Args:
+          results_for_builder: Dictionary containing the test results for a
+              single builder.
+        """
+        # Insert the number of tests that failed.
+        self._insert_item_into_raw_list(results_for_builder,
+            len(set(self._failures.keys()) | self._skipped_tests),
+            self.FIXABLE_COUNT)
+
+        # Create a pass/skip/failure summary dictionary.
+        entry = {}
+        entry[self.SKIP_RESULT] = len(self._skipped_tests)
+        entry[self.PASS_RESULT] = len(self._passed_tests)
+        get = entry.get
+        for failure_type in self._failures.values():
+            failure_char = self.FAILURE_TO_CHAR[failure_type]
+            entry[failure_char] = get(failure_char, 0) + 1
+
+        # Insert the pass/skip/failure summary dictionary.
+        self._insert_item_into_raw_list(results_for_builder, entry,
+                                        self.FIXABLE)
+
+        # Insert the number of all the tests that are supposed to pass.
+        self._insert_item_into_raw_list(results_for_builder,
+            len(self._skipped_tests | self._all_tests),
+            self.ALL_FIXABLE_COUNT)
+
+    def _insert_item_into_raw_list(self, results_for_builder, item, key):
+        """Inserts the item into the list with the given key in the results for
+        this builder. Creates the list if no such list exists.
+
+        Args:
+          results_for_builder: Dictionary containing the test results for a
+              single builder.
+          item: Number or string to insert into the list.
+          key: Key in results_for_builder for the list to insert into.
+        """
+        if key in results_for_builder:
+            raw_list = results_for_builder[key]
+        else:
+            raw_list = []
+
+        raw_list.insert(0, item)
+        raw_list = raw_list[:self.MAX_NUMBER_OF_BUILD_RESULTS_TO_LOG]
+        results_for_builder[key] = raw_list
+
+    def _insert_item_run_length_encoded(self, item, encoded_results):
+        """Inserts the item into the run-length encoded results.
+
+        Args:
+          item: String or number to insert.
+          encoded_results: run-length encoded results. An array of arrays, e.g.
+              [[3,'A'],[1,'Q']] encodes AAAQ.
+        """
+        if encoded_results and item == encoded_results[0][1]:
+            num_results = encoded_results[0][0]
+            if num_results <= self.MAX_NUMBER_OF_BUILD_RESULTS_TO_LOG:
+                encoded_results[0][0] = num_results + 1
+        else:
+            # Use a list instead of a class for the run-length encoding since
+            # we want the serialized form to be concise.
+            encoded_results.insert(0, [1, item])
+
+    def _insert_generic_metadata(self, results_for_builder):
+        """ Inserts generic metadata (such as version number, current time etc)
+        into the JSON.
+
+        Args:
+          results_for_builder: Dictionary containing the test results for
+              a single builder.
+        """
+        self._insert_item_into_raw_list(results_for_builder,
+            self._build_number, self.BUILD_NUMBERS)
+
+        path_to_webkit = path_utils.path_from_base('third_party', 'WebKit',
+                                                   'WebCore')
+        self._insert_item_into_raw_list(results_for_builder,
+            self._get_svn_revision(path_to_webkit),
+            self.WEBKIT_SVN)
+
+        path_to_chrome_base = path_utils.path_from_base()
+        self._insert_item_into_raw_list(results_for_builder,
+            self._get_svn_revision(path_to_chrome_base),
+            self.CHROME_SVN)
+
+        self._insert_item_into_raw_list(results_for_builder,
+            int(time.time()),
+            self.TIME)
+
+    def _insert_test_time_and_result(self, test_name, tests):
+        """Inserts a test item with its result and timing into the given
+        tests dictionary.
+
+        Args:
+          test_name: Name of the test.
+          tests: Dictionary containing test result entries.
+        """
+        result = self.PASS_RESULT
+        test_time = 0
+
+        if test_name not in self._all_tests:
+            result = self.NO_DATA_RESULT
+
+        if test_name in self._failures:
+            result = self.FAILURE_TO_CHAR[self._failures[test_name]]
+
+        if test_name in self._test_timings:
+            # Floor for now to get time in seconds.
+            test_time = int(self._test_timings[test_name])
+
+        if test_name not in tests:
+            tests[test_name] = self._create_results_and_times_json()
+
+        this_test = tests[test_name]
+        self._insert_item_run_length_encoded(result, this_test[self.RESULTS])
+        self._insert_item_run_length_encoded(test_time, this_test[self.TIMES])
+        self._normalize_results_json(this_test, test_name, tests)
+
+    def _convert_json_to_current_version(self, results_json):
+        """If the JSON does not match the current version, converts it to the
+        current version and adds in the new version number.
+        """
+        if (self.VERSION_KEY in results_json and
+            results_json[self.VERSION_KEY] == self.VERSION):
+            return
+
+        results_json[self.VERSION_KEY] = self.VERSION
+
+    def _create_results_and_times_json(self):
+        results_and_times = {}
+        results_and_times[self.RESULTS] = []
+        results_and_times[self.TIMES] = []
+        return results_and_times
+
+    def _create_results_for_builder_json(self):
+        results_for_builder = {}
+        results_for_builder[self.TESTS] = {}
+        return results_for_builder
+
+    def _remove_items_over_max_number_of_builds(self, encoded_list):
+        """Removes items from the run-length encoded list after the final
+        item that exceeds the max number of builds to track.
+
+        Args:
+          encoded_list: run-length encoded results. An array of arrays, e.g.
+              [[3,'A'],[1,'Q']] encodes AAAQ.
+        """
+        num_builds = 0
+        index = 0
+        for result in encoded_list:
+            num_builds = num_builds + result[0]
+            index = index + 1
+            if num_builds > self.MAX_NUMBER_OF_BUILD_RESULTS_TO_LOG:
+                return encoded_list[:index]
+        return encoded_list
+
+    def _normalize_results_json(self, test, test_name, tests):
+        """Prunes tests where all runs pass or tests that no longer exist,
+        and truncates all results to MAX_NUMBER_OF_BUILD_RESULTS_TO_LOG.
+
+        Args:
+          test: ResultsAndTimes object for this test.
+          test_name: Name of the test.
+          tests: The JSON object with all the test results for this builder.
+        """
+        test[self.RESULTS] = self._remove_items_over_max_number_of_builds(
+            test[self.RESULTS])
+        test[self.TIMES] = self._remove_items_over_max_number_of_builds(
+            test[self.TIMES])
+
+        is_all_pass = self._is_results_all_of_type(test[self.RESULTS],
+                                                   self.PASS_RESULT)
+        is_all_no_data = self._is_results_all_of_type(test[self.RESULTS],
+            self.NO_DATA_RESULT)
+        max_time = max([time[1] for time in test[self.TIMES]])
+
+        # Remove all passes/no-data from the results to reduce noise and
+        # filesize. If a test passes every run, but takes > MIN_TIME to run,
+        # don't throw away the data.
+        if is_all_no_data or (is_all_pass and max_time <= self.MIN_TIME):
+            del tests[test_name]
+
+    def _is_results_all_of_type(self, results, result_type):
+        """Returns whether all the results are of the given type
+        (e.g. all passes)."""
+        return len(results) == 1 and results[0][1] == result_type
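The run-length encoding used by the dashboard JSON above ([[3,'A'],[1,'Q']] encodes AAAQ) and the truncation in _remove_items_over_max_number_of_builds() can be sketched standalone. A minimal Python 3 rendering; the cap of 5 is chosen arbitrarily for illustration (the real limit is MAX_NUMBER_OF_BUILD_RESULTS_TO_LOG):

```python
def truncate_encoded(encoded_list, max_builds):
    """Keep items up to and including the first one whose run pushes the
    decoded length past max_builds (mirrors the method above)."""
    num_builds = 0
    for index, (count, _value) in enumerate(encoded_list):
        num_builds += count
        if num_builds > max_builds:
            return encoded_list[:index + 1]
    return encoded_list

# [[3,'A'],[4,'Q'],[2,'F']] decodes to AAAQQQQFF (9 builds); a cap of 5
# is crossed by the second item, so the third is dropped.
print(truncate_encoded([[3, 'A'], [4, 'Q'], [2, 'F']], 5))
# -> [[3, 'A'], [4, 'Q']]
```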
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/lighttpd.conf b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/lighttpd.conf
new file mode 100644
index 0000000..d3150dd
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/lighttpd.conf
@@ -0,0 +1,89 @@
+server.tag                  = "LightTPD/1.4.19 (Win32)"
+server.modules              = ( "mod_accesslog",
+                                "mod_alias",
+                                "mod_cgi",
+                                "mod_rewrite" )
+
+# default document root required
+server.document-root = "."
+
+# files to check for if .../ is requested
+index-file.names            = ( "index.php", "index.pl", "index.cgi",
+                                "index.html", "index.htm", "default.htm" )
+# mimetype mapping
+mimetype.assign             = (
+  ".gif"          =>      "image/gif",
+  ".jpg"          =>      "image/jpeg",
+  ".jpeg"         =>      "image/jpeg",
+  ".png"          =>      "image/png",
+  ".svg"          =>      "image/svg+xml",
+  ".css"          =>      "text/css",
+  ".html"         =>      "text/html",
+  ".htm"          =>      "text/html",
+  ".xhtml"        =>      "application/xhtml+xml",
+  ".js"           =>      "text/javascript",
+  ".log"          =>      "text/plain",
+  ".conf"         =>      "text/plain",
+  ".text"         =>      "text/plain",
+  ".txt"          =>      "text/plain",
+  ".dtd"          =>      "text/xml",
+  ".xml"          =>      "text/xml",
+  ".manifest"     =>      "text/cache-manifest",
+ )
+
+# Use the "Content-Type" extended attribute to obtain mime type if possible
+mimetype.use-xattr          = "enable"
+
+##
+# which extensions should not be handle via static-file transfer
+#
+# .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
+static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
+
+server.bind = "localhost"
+server.port = 8001
+
+## virtual directory listings
+dir-listing.activate        = "enable"
+#dir-listing.encoding       = "iso-8859-2"
+#dir-listing.external-css   = "style/oldstyle.css"
+
+## enable debugging
+#debug.log-request-header   = "enable"
+#debug.log-response-header  = "enable"
+#debug.log-request-handling = "enable"
+#debug.log-file-not-found   = "enable"
+
+#### SSL engine
+#ssl.engine                 = "enable"
+#ssl.pemfile                = "server.pem"
+
+# Rewrite rule for utf-8 path test (LayoutTests/http/tests/uri/utf8-path.html)
+# See the apache rewrite rule at LayoutTests/http/tests/uri/intercept/.htaccess
+# Rewrite rule for LayoutTests/http/tests/appcache/cyrillic-uri.html.
+# See the apache rewrite rule at
+# LayoutTests/http/tests/appcache/resources/intercept/.htaccess
+url.rewrite-once = (
+  "^/uri/intercept/(.*)" => "/uri/resources/print-uri.php",
+  "^/appcache/resources/intercept/(.*)" => "/appcache/resources/print-uri.php"
+)
+
+# LayoutTests/http/tests/xmlhttprequest/response-encoding.html uses an htaccess
+# to override charset for reply2.txt, reply2.xml, and reply4.txt.
+$HTTP["url"] =~ "^/xmlhttprequest/resources/reply2.(txt|xml)" {
+  mimetype.assign = (
+    ".txt" => "text/plain; charset=windows-1251",
+    ".xml" => "text/xml; charset=windows-1251"
+  )
+}
+$HTTP["url"] =~ "^/xmlhttprequest/resources/reply4.txt" {
+  mimetype.assign = ( ".txt" => "text/plain; charset=koi8-r" )
+}
+
+# LayoutTests/http/tests/appcache/wrong-content-type.html uses an htaccess
+# to override mime type for wrong-content-type.manifest.
+$HTTP["url"] =~ "^/appcache/resources/wrong-content-type.manifest" {
+  mimetype.assign = ( ".manifest" => "text/plain" )
+}
+
+# Autogenerated test-specific config follows.
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/metered_stream.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/metered_stream.py
new file mode 100644
index 0000000..6c094e3
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/metered_stream.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+Package that implements a stream wrapper that has 'meters' as well as
+regular output. A 'meter' is a single line of text that can be erased
+and rewritten repeatedly, without producing multiple lines of output. It
+can be used to produce effects like progress bars.
+"""
+
+
+class MeteredStream:
+    """This class is a wrapper around a stream that allows you to implement
+    meters.
+
+    It can be used like a stream, but calling update() will print
+    the string without a trailing line feed and erase it again (using
+    backspaces) on the next call. This can be used to implement progress
+    bars and other sorts of meters. Note that anything written by update()
+    will be erased by a subsequent update() or write()."""
+
+    def __init__(self, verbose, stream):
+        """
+        Args:
+          verbose: if True, update() is a no-op
+          stream: output stream to write to
+        """
+        self._dirty = False
+        self._verbose = verbose
+        self._stream = stream
+        self._last_update = ""
+
+    def write(self, txt):
+        """Write text directly to the stream, overwriting and resetting the
+        meter."""
+        if self._dirty:
+            self.update("")
+            self._dirty = False
+        self._stream.write(txt)
+
+    def flush(self):
+        """Flush any buffered output."""
+        self._stream.flush()
+
+    def update(self, txt):
+        """Write an update to the stream that will get overwritten by the next
+        update() or by a write().
+
+        This is used for progress updates that don't need to be preserved in
+        the log. Note that verbose mode disables this routine; we do this in
+        case we are logging lots of output and the update()s would get lost
+        or not work properly (typically because verbose streams are
+        redirected to files).
+
+        TODO(dpranke): figure out if there is a way to detect if we're writing
+        to a stream that handles CRs correctly (e.g., terminals). That might
+        be a cleaner way of handling this.
+        """
+        if self._verbose:
+            return
+
+        # Print the necessary number of backspaces to erase the previous
+        # message.
+        self._stream.write("\b" * len(self._last_update))
+        self._stream.write(txt)
+        num_remaining = len(self._last_update) - len(txt)
+        if num_remaining > 0:
+            self._stream.write(" " * num_remaining + "\b" * num_remaining)
+        self._last_update = txt
+        self._dirty = True
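The backspace-based erase in MeteredStream.update() can be observed in isolation by writing to an in-memory stream. A minimal Python 3 sketch of the same overwrite logic (meter_update is a hypothetical free-function rendering of the method above):

```python
import io

def meter_update(stream, last, msg):
    """Erase the previous meter message with backspaces, write the new
    one, and blank out any leftover trailing characters (mirrors
    MeteredStream.update())."""
    stream.write("\b" * len(last))  # rewind over the old message
    stream.write(msg)
    leftover = len(last) - len(msg)
    if leftover > 0:  # old message was longer: overwrite its tail
        stream.write(" " * leftover + "\b" * leftover)
    return msg

buf = io.StringIO()
last = meter_update(buf, "", "10/100 tests")
last = meter_update(buf, last, "20/100")
# buf now holds the first message, 12 backspaces, the second message,
# and 6 space+backspace pairs that blank the extra characters.
```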
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/path_utils.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/path_utils.py
new file mode 100644
index 0000000..26d062b
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/path_utils.py
@@ -0,0 +1,395 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""This package contains utility methods for manipulating paths and
+filenames for test results and baselines. It also contains wrappers
+of a few routines in platform_utils.py so that platform_utils.py can
+be considered a 'protected' package - i.e., this file should be
+the only file that ever includes platform_utils. This leads to
+us including a few things that don't really have anything to do
+ with paths, unfortunately."""
+
+import errno
+import os
+import stat
+import sys
+import time
+
+import platform_utils
+import platform_utils_win
+import platform_utils_mac
+import platform_utils_linux
+
+# Cache some values so we don't have to recalculate them. _basedir is
+# used by path_from_base() and caches the full (native) path to the top
+# of the source tree (/src). _baseline_search_path is used by
+# expected_baselines() and caches the list of native paths to search
+# for baseline results; _search_path_platform tracks which platform
+# that cached list was computed for.
+_basedir = None
+_baseline_search_path = None
+_search_path_platform = None
+
+
+class PathNotFound(Exception):
+    pass
+
+
+def layout_tests_dir():
+    """Returns the fully-qualified path to the directory containing the input
+    data for the specified layout test."""
+    return path_from_base('third_party', 'WebKit', 'LayoutTests')
+
+
+def chromium_baseline_path(platform=None):
+    """Returns the full path to the directory containing expected
+    baseline results from chromium ports. If |platform| is None, the
+    currently executing platform is used.
+
+    Note: although directly referencing individual platform_utils_* files is
+    usually discouraged, we allow it here so that the rebaselining tool can
+    pull baselines for platforms other than the host platform."""
+
+    # Normalize the platform string.
+    platform = platform_name(platform)
+    if platform.startswith('chromium-mac'):
+        return platform_utils_mac.baseline_path(platform)
+    elif platform.startswith('chromium-win'):
+        return platform_utils_win.baseline_path(platform)
+    elif platform.startswith('chromium-linux'):
+        return platform_utils_linux.baseline_path(platform)
+
+    return platform_utils.baseline_path()
+
+
+def webkit_baseline_path(platform):
+    """Returns the full path to the directory containing expected
+    baseline results from WebKit ports."""
+    return path_from_base('third_party', 'WebKit', 'LayoutTests',
+                          'platform', platform)
+
+
+def baseline_search_path(platform=None):
+    """Returns the list of directories to search for baselines/results for a
+    given platform, in order of preference. Paths are relative to the top of
+    the source tree. If parameter platform is None, returns the list for the
+    current platform that the script is running on.
+
+    Note: although directly referencing individual platform_utils_* files is
+    usually discouraged, we allow it here so that the rebaselining tool can
+    pull baselines for platforms other than the host platform."""
+
+    # Normalize the platform name.
+    platform = platform_name(platform)
+    if platform.startswith('chromium-mac'):
+        return platform_utils_mac.baseline_search_path(platform)
+    elif platform.startswith('chromium-win'):
+        return platform_utils_win.baseline_search_path(platform)
+    elif platform.startswith('chromium-linux'):
+        return platform_utils_linux.baseline_search_path(platform)
+    return platform_utils.baseline_search_path()
+
+
+def expected_baselines(filename, suffix, platform=None, all_baselines=False):
+    """Given a test name, finds where the baseline results are located.
+
+    Args:
+       filename: absolute filename to test file
+       suffix: file suffix of the expected results, including dot; e.g. '.txt'
+           or '.png'.  This should not be None, but may be an empty string.
+       platform: layout test platform: 'win', 'linux' or 'mac'. Defaults to
+                 the current platform.
+       all_baselines: If True, return an ordered list of all baseline paths
+                      for the given platform. If False, return only the first
+                      one.
+    Returns:
+       a list of ( platform_dir, results_filename ), where
+         platform_dir - abs path to the top of the results tree (or test tree)
+         results_filename - relative path from top of tree to the results file
+           (os.path.join of the two gives you the full path to the file,
+            unless None was returned.)
+      Return values will be in the format appropriate for the current platform
+      (e.g., "\\" for path separators on Windows). If the results file is not
+      found, then None will be returned for the directory, but the expected
+      relative pathname will still be returned.
+    """
+    global _baseline_search_path
+    global _search_path_platform
+    testname = os.path.splitext(relative_test_filename(filename))[0]
+
+    baseline_filename = testname + '-expected' + suffix
+
+    if (_baseline_search_path is None) or (_search_path_platform != platform):
+        _baseline_search_path = baseline_search_path(platform)
+        _search_path_platform = platform
+
+    baselines = []
+    for platform_dir in _baseline_search_path:
+        if os.path.exists(os.path.join(platform_dir, baseline_filename)):
+            baselines.append((platform_dir, baseline_filename))
+
+        if not all_baselines and baselines:
+            return baselines
+
+    # If it wasn't found in a platform directory, return the expected result
+    # in the test directory, even if no such file actually exists.
+    platform_dir = layout_tests_dir()
+    if os.path.exists(os.path.join(platform_dir, baseline_filename)):
+        baselines.append((platform_dir, baseline_filename))
+
+    if baselines:
+        return baselines
+
+    return [(None, baseline_filename)]
+
+
+def expected_filename(filename, suffix):
+    """Given a test name, returns an absolute path to its expected results.
+
+    If no expected results are found in any of the searched directories, the
+    directory in which the test itself is located will be returned. The return
+    value is in the format appropriate for the platform (e.g., "\\" for
+    path separators on windows).
+
+    Args:
+       filename: absolute filename to test file
+       suffix: file suffix of the expected results, including dot; e.g. '.txt'
+           or '.png'.  This should not be None, but may be an empty string.
+    """
+    platform_dir, baseline_filename = expected_baselines(filename, suffix)[0]
+    if platform_dir:
+        return os.path.join(platform_dir, baseline_filename)
+    return os.path.join(layout_tests_dir(), baseline_filename)
+
+
+def relative_test_filename(filename):
+    """Provide the filename of the test relative to the layout tests
+    directory as a unix style path (a/b/c)."""
+    return _win_path_to_unix(filename[len(layout_tests_dir()) + 1:])
+
+
+def _win_path_to_unix(path):
+    """Convert a windows path to use unix-style path separators (a/b/c)."""
+    return path.replace('\\', '/')
+
+#
+# Routines that are arguably platform-specific but have been made
+# generic for now (they used to be in platform_utils_*)
+#
+
+
+def filename_to_uri(full_path):
+    """Convert a test file to a URI."""
+    LAYOUTTEST_HTTP_DIR = "http/tests/"
+    LAYOUTTEST_WEBSOCKET_DIR = "websocket/tests/"
+
+    relative_path = _win_path_to_unix(relative_test_filename(full_path))
+    port = None
+    use_ssl = False
+
+    if relative_path.startswith(LAYOUTTEST_HTTP_DIR):
+        # http/tests/ run off port 8000 and ssl/ off 8443
+        relative_path = relative_path[len(LAYOUTTEST_HTTP_DIR):]
+        port = 8000
+    elif relative_path.startswith(LAYOUTTEST_WEBSOCKET_DIR):
+        # websocket/tests/ run off port 8880 and 9323
+        # Note: the root is /, not websocket/tests/
+        port = 8880
+
+    # Make http/tests/local run as local files. This is to mimic the
+    # logic in run-webkit-tests.
+    # TODO(jianli): Consider extending this to "media/".
+    if port and not relative_path.startswith("local/"):
+        if relative_path.startswith("ssl/"):
+            port += 443
+            protocol = "https"
+        else:
+            protocol = "http"
+        return "%s://127.0.0.1:%u/%s" % (protocol, port, relative_path)
+
+    if sys.platform in ('cygwin', 'win32'):
+        return "file:///" + get_absolute_path(full_path)
+    return "file://" + get_absolute_path(full_path)
+
+
+def get_absolute_path(path):
+    """Returns an absolute UNIX path."""
+    return _win_path_to_unix(os.path.abspath(path))
+
+
+def maybe_make_directory(*path):
+    """Creates the specified directory if it doesn't already exist."""
+    try:
+        os.makedirs(os.path.join(*path))
+    except OSError, e:
+        if e.errno != errno.EEXIST:
+            raise
+
+
+def path_from_base(*comps):
+    """Returns an absolute filename from a set of components specified
+    relative to the top of the source tree. If the path does not exist,
+    the exception PathNotFound is raised."""
+    global _basedir
+    if _basedir is None:
+        # We compute the top of the source tree by finding the absolute
+        # path of this source file, and then climbing up three directories
+        # as given in subpath. If we move this file, subpath needs to be
+        # updated.
+        path = os.path.abspath(__file__)
+        subpath = os.path.join('third_party', 'WebKit')
+        _basedir = path[:path.index(subpath)]
+    path = os.path.join(_basedir, *comps)
+    if not os.path.exists(path):
+        raise PathNotFound('could not find %s' % (path))
+    return path
+
+
+def remove_directory(*path):
+    """Recursively removes a directory, even if it's marked read-only.
+
+    Remove the directory located at *path, if it exists.
+
+    shutil.rmtree() doesn't work on Windows if any of the files or directories
+    are read-only, which svn repositories and some .svn files are.  We need to
+    be able to force the files to be writable (i.e., deletable) as we traverse
+    the tree.
+
+    Even with all this, Windows still sometimes fails to delete a file, citing
+    a permission error (maybe something to do with antivirus scans or disk
+    indexing).  The best suggestion any of the user forums had was to wait a
+    bit and try again, so we do that too.  It's hand-waving, but sometimes it
+    works. :/
+    """
+    file_path = os.path.join(*path)
+    if not os.path.exists(file_path):
+        return
+
+    win32 = False
+    if sys.platform == 'win32':
+        win32 = True
+        # Some people don't have the APIs installed. In that case we'll do
+        # without.
+        try:
+            win32api = __import__('win32api')
+            win32con = __import__('win32con')
+        except ImportError:
+            win32 = False
+
+        def remove_with_retry(rmfunc, path):
+            os.chmod(path, stat.S_IWRITE)
+            if win32:
+                win32api.SetFileAttributes(path,
+                                           win32con.FILE_ATTRIBUTE_NORMAL)
+            try:
+                return rmfunc(path)
+            except EnvironmentError, e:
+                if e.errno != errno.EACCES:
+                    raise
+                print 'Failed to delete %s: trying again' % repr(path)
+                time.sleep(0.1)
+                return rmfunc(path)
+    else:
+
+        def remove_with_retry(rmfunc, path):
+            if os.path.islink(path):
+                return os.remove(path)
+            else:
+                return rmfunc(path)
+
+    for root, dirs, files in os.walk(file_path, topdown=False):
+        # For POSIX:  making the directory writable guarantees removability.
+        # Windows will ignore the non-read-only bits in the chmod value.
+        os.chmod(root, 0770)
+        for name in files:
+            remove_with_retry(os.remove, os.path.join(root, name))
+        for name in dirs:
+            remove_with_retry(os.rmdir, os.path.join(root, name))
+
+    remove_with_retry(os.rmdir, file_path)
+
+#
+# Wrappers around platform_utils
+#
+
+
+def platform_name(platform=None):
+    """Returns the appropriate chromium platform name for |platform|. If
+       |platform| is None, returns the name of the chromium platform on the
+       currently running system. If |platform| is of the form 'chromium-*',
+       it is returned unchanged, otherwise 'chromium-' is prepended."""
+    if platform is None:
+        return platform_utils.platform_name()
+    if not platform.startswith('chromium-'):
+        platform = "chromium-" + platform
+    return platform
+
+
+def platform_version():
+    return platform_utils.platform_version()
+
+
+def lighttpd_executable_path():
+    return platform_utils.lighttpd_executable_path()
+
+
+def lighttpd_module_path():
+    return platform_utils.lighttpd_module_path()
+
+
+def lighttpd_php_path():
+    return platform_utils.lighttpd_php_path()
+
+
+def wdiff_path():
+    return platform_utils.wdiff_path()
+
+
+def test_shell_path(target):
+    return platform_utils.test_shell_path(target)
+
+
+def image_diff_path(target):
+    return platform_utils.image_diff_path(target)
+
+
+def layout_test_helper_path(target):
+    return platform_utils.layout_test_helper_path(target)
+
+
+def fuzzy_match_path():
+    return platform_utils.fuzzy_match_path()
+
+
+def shut_down_http_server(server_pid):
+    return platform_utils.shut_down_http_server(server_pid)
+
+
+def kill_all_test_shells():
+    platform_utils.kill_all_test_shells()
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils.py
new file mode 100644
index 0000000..09e7b4b
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils.py
@@ -0,0 +1,50 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Platform-specific utilities and pseudo-constants
+
+Any functions whose implementations or values differ from one platform to
+another should be defined in their respective platform_utils_<platform>.py
+modules. The appropriate one of those will be imported into this module to
+provide callers with a common, platform-independent interface.
+
+This file should only ever be imported by layout_package.path_utils.
+"""
+
+import sys
+
+# We may not support the version of Python that a user has installed (Cygwin
+# especially has had problems), but we'll allow the platform utils to be
+# included in any case so we don't get an import error.
+if sys.platform in ('cygwin', 'win32'):
+    from platform_utils_win import *
+elif sys.platform == 'darwin':
+    from platform_utils_mac import *
+elif sys.platform in ('linux', 'linux2', 'freebsd7', 'openbsd4'):
+    from platform_utils_linux import *
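The sys.platform dispatch above star-imports one backend module so callers see a single flat interface. The selection itself can be factored into a testable helper; a sketch (the helper name is hypothetical, and it returns None for platforms the file doesn't handle):

```python
def platform_module_name(platform):
    """Return the platform_utils backend module name for a sys.platform
    value, following the dispatch above (None if unsupported)."""
    if platform in ('cygwin', 'win32'):
        return 'platform_utils_win'
    if platform == 'darwin':
        return 'platform_utils_mac'
    if platform in ('linux', 'linux2', 'freebsd7', 'openbsd4'):
        return 'platform_utils_linux'
    return None
```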
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_linux.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_linux.py
new file mode 100644
index 0000000..87b27c7
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_linux.py
@@ -0,0 +1,248 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""This is the Linux implementation of the layout_package.platform_utils
+   package. This file should only be imported by that package."""
+
+import os
+import signal
+import subprocess
+import sys
+import logging
+
+import path_utils
+import platform_utils_win
+
+
+def platform_name():
+    """Returns the name of the platform we're currently running on."""
+    return 'chromium-linux' + platform_version()
+
+
+def platform_version():
+    """Returns the version string for the platform, e.g. '-vista' or
+    '-snowleopard'. If the platform does not distinguish between
+    minor versions, it returns ''."""
+    return ''
+
+
+def get_num_cores():
+    """Returns the number of cores on the machine. For hyperthreaded machines,
+    this will be double the number of actual processors."""
+    num_cores = os.sysconf("SC_NPROCESSORS_ONLN")
+    if isinstance(num_cores, int) and num_cores > 0:
+        return num_cores
+    return 1
+
+
+def baseline_path(platform=None):
+    """Returns the path relative to the top of the source tree for the
+    baselines for the specified platform version. If |platform| is None,
+    then the version currently in use is used."""
+    if platform is None:
+        platform = platform_name()
+    return path_utils.path_from_base('webkit', 'data', 'layout_tests',
+                                     'platform', platform, 'LayoutTests')
+
+
+def baseline_search_path(platform=None):
+    """Returns the list of directories to search for baselines/results, in
+    order of preference. Paths are relative to the top of the source tree."""
+    return [baseline_path(platform),
+            platform_utils_win.baseline_path('chromium-win'),
+            path_utils.webkit_baseline_path('win'),
+            path_utils.webkit_baseline_path('mac')]
+
+
+def apache_executable_path():
+    """Returns the executable path to start Apache"""
+    path = os.path.join("/usr", "sbin", "apache2")
+    if os.path.exists(path):
+        return path
+    print "Unable to fine Apache executable %s" % path
+    _missing_apache()
+
+
+def apache_config_file_path():
+    """Returns the path to Apache config file"""
+    return path_utils.path_from_base("third_party", "WebKit", "LayoutTests",
+        "http", "conf", "apache2-debian-httpd.conf")
+
+
+def lighttpd_executable_path():
+    """Returns the executable path to start lighttpd"""
+    binpath = "/usr/sbin/lighttpd"
+    if os.path.exists(binpath):
+        return binpath
+    print "Unable to find lighttpd executable %s" % binpath
+    _missing_lighttpd()
+
+
+def lighttpd_module_path():
+    """Returns the library module path for lighttpd"""
+    modpath = "/usr/lib/lighttpd"
+    if os.path.exists(modpath):
+        return modpath
+    print "Unable to find lighttpd modules %s" % modpath
+    _missing_lighttpd()
+
+
+def lighttpd_php_path():
+    """Returns the PHP executable path for lighttpd"""
+    binpath = "/usr/bin/php-cgi"
+    if os.path.exists(binpath):
+        return binpath
+    print "Unable to find PHP CGI executable %s" % binpath
+    _missing_lighttpd()
+
+
+def wdiff_path():
+    """Path to the WDiff executable, which we assume is already installed and
+    in the user's $PATH."""
+    return 'wdiff'
+
+
+def image_diff_path(target):
+    """Path to the image_diff binary.
+
+    Args:
+      target: Build target mode (debug or release)"""
+    return _path_from_build_results(target, 'image_diff')
+
+
+def layout_test_helper_path(target):
+    """Path to the layout_test_helper binary, if needed; empty otherwise."""
+    return ''
+
+
+def test_shell_path(target):
+    """Return the platform-specific binary path for our TestShell.
+
+    Args:
+      target: Build target mode (debug or release) """
+    if target in ('Debug', 'Release'):
+        try:
+            debug_path = _path_from_build_results('Debug', 'test_shell')
+            release_path = _path_from_build_results('Release', 'test_shell')
+
+            debug_mtime = os.stat(debug_path).st_mtime
+            release_mtime = os.stat(release_path).st_mtime
+
+            if ((debug_mtime > release_mtime and target == 'Release') or
+                    (release_mtime > debug_mtime and target == 'Debug')):
+                logging.info('\x1b[31mWarning: you are not running the most '
+                             'recently built test_shell binary. Pass --debug '
+                             'to select the Debug build, or omit it to '
+                             'select Release.\x1b[0m')
+        # This will fail if we don't have both a debug and release binary.
+        # That's fine because, in this case, we must already be running the
+        # most up-to-date one.
+        except path_utils.PathNotFound:
+            pass
+
+    return _path_from_build_results(target, 'test_shell')
+
+
+def fuzzy_match_path():
+    """Return the path to the fuzzy matcher binary."""
+    return path_utils.path_from_base('third_party', 'fuzzymatch', 'fuzzymatch')
+
+
+def shut_down_http_server(server_pid):
+    """Shut down the lighttpd web server. Blocks until it's fully shut down.
+
+    Args:
+      server_pid: The process ID of the running server.
+    """
+    # server_pid is not set when "http_server.py stop" is run manually.
+    if server_pid is None:
+        # This isn't ideal, since it could conflict with web server processes
+        # not started by http_server.py, but good enough for now.
+        kill_all_process('lighttpd')
+        kill_all_process('apache2')
+    else:
+        try:
+            os.kill(server_pid, signal.SIGTERM)
+            # TODO(mmoss) Maybe throw in a SIGKILL just to be sure?
+        except OSError:
+            # Sometimes we get a bad PID (e.g. from a stale httpd.pid file),
+            # so if kill fails on the given PID, just try to 'killall' web
+            # servers.
+            shut_down_http_server(None)
+
+
+def kill_process(pid):
+    """Forcefully kill the process.
+
+    Args:
+      pid: The id of the process to be killed.
+    """
+    os.kill(pid, signal.SIGKILL)
+
+
+def kill_all_process(process_name):
+    null = open(os.devnull, 'w')
+    subprocess.call(['killall', '-TERM', '-u', os.getenv('USER'),
+                    process_name], stderr=null)
+    null.close()
+
+
+def kill_all_test_shells():
+    """Kills all instances of the test_shell binary currently running."""
+    kill_all_process('test_shell')
+
+#
+# Private helper functions
+#
+
+
+def _missing_lighttpd():
+    print 'Please install using: "sudo apt-get install lighttpd php5-cgi"'
+    print 'For complete Linux build requirements, please see:'
+    print 'http://code.google.com/p/chromium/wiki/LinuxBuildInstructions'
+    sys.exit(1)
+
+
+def _missing_apache():
+    print ('Please install using: "sudo apt-get install apache2 '
+        'libapache2-mod-php5"')
+    print 'For complete Linux build requirements, please see:'
+    print 'http://code.google.com/p/chromium/wiki/LinuxBuildInstructions'
+    sys.exit(1)
+
+
+def _path_from_build_results(*pathies):
+    # FIXME(dkegel): use latest or warn if more than one found?
+    for build_dir in ["sconsbuild", "out", "xcodebuild"]:
+        try:
+            return path_utils.path_from_base(build_dir, *pathies)
+        except path_utils.PathNotFound:
+            pass
+    raise path_utils.PathNotFound("Unable to find %s in build tree" %
+        (os.path.join(*pathies)))
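[Editorial note: the `_path_from_build_results` helper above implements a first-match search across candidate build-output directories. A minimal standalone sketch of that pattern follows; `find_in_build_dirs` and `PathNotFound` are illustrative names, not the actual `path_utils` API.]

```python
import os


class PathNotFound(Exception):
    """Raised when no candidate build directory contains the path."""
    pass


def find_in_build_dirs(base, *parts,
                       build_dirs=("sconsbuild", "out", "xcodebuild")):
    """Return the first existing base/<build_dir>/<parts>, else raise.

    Mirrors the search order in _path_from_build_results: each candidate
    build directory is tried in turn and the first hit wins.
    """
    for build_dir in build_dirs:
        candidate = os.path.join(base, build_dir, *parts)
        if os.path.exists(candidate):
            return candidate
    raise PathNotFound("Unable to find %s in build tree"
                       % os.path.join(*parts))
```

As the FIXME notes, "first match wins" silently ignores a newer binary in a later directory; a stricter variant could collect all matches and warn when more than one exists.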
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_mac.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_mac.py
new file mode 100644
index 0000000..1eaa10c
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_mac.py
@@ -0,0 +1,201 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""This is the Mac implementation of the layout_package.platform_utils
+   package. This file should only be imported by that package."""
+
+import os
+import platform
+import signal
+import subprocess
+
+import path_utils
+
+
+def platform_name():
+    """Returns the name of the platform we're currently running on."""
+    # At the moment all chromium mac results are version-independent. At some
+    # point we may need to return 'chromium-mac' + PlatformVersion()
+    return 'chromium-mac'
+
+
+def platform_version():
+    """Returns the version string for the platform, e.g. '-vista' or
+    '-snowleopard'. If the platform does not distinguish between
+    minor versions, it returns ''."""
+    os_version_string = platform.mac_ver()[0]  # e.g. "10.5.6"
+    if not os_version_string:
+        return '-leopard'
+
+    release_version = int(os_version_string.split('.')[1])
+
+    # we don't support 'tiger' or earlier releases
+    if release_version == 5:
+        return '-leopard'
+    elif release_version == 6:
+        return '-snowleopard'
+
+    return ''
+
+
+def get_num_cores():
+    """Returns the number of cores on the machine. For hyperthreaded machines,
+    this will be double the number of actual processors."""
+    return int(os.popen2("sysctl -n hw.ncpu")[1].read())
+
+
+def baseline_path(platform=None):
+    """Returns the path relative to the top of the source tree for the
+    baselines for the specified platform version. If |platform| is None,
+    then the version currently in use is used."""
+    if platform is None:
+        platform = platform_name()
+    return path_utils.path_from_base('webkit', 'data', 'layout_tests',
+                                     'platform', platform, 'LayoutTests')
+
+# TODO: We should add leopard and snowleopard to the list of paths to check
+# once we start running the tests from snowleopard.
+
+
+def baseline_search_path(platform=None):
+    """Returns the list of directories to search for baselines/results, in
+    order of preference. Paths are relative to the top of the source tree."""
+    return [baseline_path(platform),
+            path_utils.webkit_baseline_path('mac' + platform_version()),
+            path_utils.webkit_baseline_path('mac')]
+
+
+def wdiff_path():
+    """Path to the WDiff executable, which we assume is already installed and
+    in the user's $PATH."""
+    return 'wdiff'
+
+
+def image_diff_path(target):
+    """Path to the image_diff executable
+
+    Args:
+      target: build type - 'Debug','Release',etc."""
+    return path_utils.path_from_base('xcodebuild', target, 'image_diff')
+
+
+def layout_test_helper_path(target):
+    """Path to the layout_test_helper executable, if needed, empty otherwise
+
+    Args:
+      target: build type - 'Debug','Release',etc."""
+    return path_utils.path_from_base('xcodebuild', target,
+                                     'layout_test_helper')
+
+
+def test_shell_path(target):
+    """Path to the test_shell executable.
+
+    Args:
+      target: build type - 'Debug','Release',etc."""
+    # TODO(pinkerton): make |target| happy with case-sensitive file systems.
+    return path_utils.path_from_base('xcodebuild', target, 'TestShell.app',
+                                     'Contents', 'MacOS', 'TestShell')
+
+
+def apache_executable_path():
+    """Returns the executable path to start Apache"""
+    return os.path.join("/usr", "sbin", "httpd")
+
+
+def apache_config_file_path():
+    """Returns the path to Apache config file"""
+    return path_utils.path_from_base("third_party", "WebKit", "LayoutTests",
+        "http", "conf", "apache2-httpd.conf")
+
+
+def lighttpd_executable_path():
+    """Returns the executable path to start LigHTTPd"""
+    return path_utils.path_from_base('third_party', 'lighttpd', 'mac',
+                                     'bin', 'lighttpd')
+
+
+def lighttpd_module_path():
+    """Returns the library module path for LigHTTPd"""
+    return path_utils.path_from_base('third_party', 'lighttpd', 'mac', 'lib')
+
+
+def lighttpd_php_path():
+    """Returns the PHP executable path for LigHTTPd"""
+    return path_utils.path_from_base('third_party', 'lighttpd', 'mac', 'bin',
+                                     'php-cgi')
+
+
+def shut_down_http_server(server_pid):
+    """Shut down the lighttpd web server. Blocks until it's fully shut down.
+
+      Args:
+        server_pid: The process ID of the running server.
+    """
+    # server_pid is not set when "http_server.py stop" is run manually.
+    if server_pid is None:
+        # TODO(mmoss) This isn't ideal, since it could conflict with lighttpd
+        # processes not started by http_server.py, but good enough for now.
+        kill_all_process('lighttpd')
+        kill_all_process('httpd')
+    else:
+        try:
+            os.kill(server_pid, signal.SIGTERM)
+            # TODO(mmoss) Maybe throw in a SIGKILL just to be sure?
+        except OSError:
+            # Sometimes we get a bad PID (e.g. from a stale httpd.pid file),
+            # so if kill fails on the given PID, just try to 'killall' web
+            # servers.
+            shut_down_http_server(None)
+
+
+def kill_process(pid):
+    """Forcefully kill the process.
+
+    Args:
+      pid: The id of the process to be killed.
+    """
+    os.kill(pid, signal.SIGKILL)
+
+
+def kill_all_process(process_name):
+    # On Mac OS X 10.6, killall has a new constraint: -SIGNALNAME or
+    # -SIGNALNUMBER must come first.  Example problem:
+    #   $ killall -u $USER -TERM lighttpd
+    #   killall: illegal option -- T
+    # Use of the earlier -TERM placement is just fine on 10.5.
+    null = open(os.devnull, 'w')
+    subprocess.call(['killall', '-TERM', '-u', os.getenv('USER'),
+                     process_name], stderr=null)
+    null.close()
+
+
+def kill_all_test_shells():
+    """Kills all instances of the test_shell binary currently running."""
+    kill_all_process('TestShell')
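[Editorial note: `platform_version()` in the Mac file above maps the `platform.mac_ver()` string (e.g. "10.5.6") to a baseline-directory suffix. A self-contained sketch of just that mapping, with the same release numbers and suffixes as the function above:]

```python
def mac_version_suffix(os_version_string):
    """Map a "10.X.Y" Mac OS version string to a baseline suffix.

    An empty string (mac_ver() can return '') is assumed to mean
    Leopard, matching the fallback in platform_version() above.
    """
    if not os_version_string:
        return '-leopard'
    # The minor release number ("5" in "10.5.6") selects the suffix.
    release_version = int(os_version_string.split('.')[1])
    if release_version == 5:
        return '-leopard'
    if release_version == 6:
        return '-snowleopard'
    # Tiger and earlier are unsupported; unknown releases get no suffix.
    return ''
```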
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_win.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_win.py
new file mode 100644
index 0000000..3cbbec3
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/platform_utils_win.py
@@ -0,0 +1,210 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""This is the Windows implementation of the layout_package.platform_utils
+   package. This file should only be imported by that package."""
+
+import os
+import subprocess
+import sys
+
+import path_utils
+
+
+def platform_name():
+    """Returns the name of the platform we're currently running on."""
+    # This returns a version-specific name (e.g. 'chromium-win-xp');
+    # baseline_search_path() below handles the fallback between versions.
+    return 'chromium-win' + platform_version()
+
+
+def platform_version():
+    """Returns the version string for the platform, e.g. '-vista' or
+    '-snowleopard'. If the platform does not distinguish between
+    minor versions, it returns ''."""
+    winver = sys.getwindowsversion()
+    if winver[0] == 6 and (winver[1] == 1):
+        return '-7'
+    if winver[0] == 6 and (winver[1] == 0):
+        return '-vista'
+    if winver[0] == 5 and (winver[1] == 1 or winver[1] == 2):
+        return '-xp'
+    return ''
+
+
+def get_num_cores():
+    """Returns the number of cores on the machine. For hyperthreaded machines,
+    this will be double the number of actual processors."""
+    return int(os.environ.get('NUMBER_OF_PROCESSORS', 1))
+
+
+def baseline_path(platform=None):
+    """Returns the path relative to the top of the source tree for the
+    baselines for the specified platform version. If |platform| is None,
+    then the version currently in use is used."""
+    if platform is None:
+        platform = platform_name()
+    return path_utils.path_from_base('webkit', 'data', 'layout_tests',
+                                     'platform', platform, 'LayoutTests')
+
+
+def baseline_search_path(platform=None):
+    """Returns the list of directories to search for baselines/results, in
+    order of preference. Paths are relative to the top of the source tree."""
+    dirs = []
+    if platform is None:
+        platform = platform_name()
+
+    if platform == 'chromium-win-xp':
+        dirs.append(baseline_path(platform))
+    if platform in ('chromium-win-xp', 'chromium-win-vista'):
+        dirs.append(baseline_path('chromium-win-vista'))
+    dirs.append(baseline_path('chromium-win'))
+    dirs.append(path_utils.webkit_baseline_path('win'))
+    dirs.append(path_utils.webkit_baseline_path('mac'))
+    return dirs
+
+
+def wdiff_path():
+    """Path to the WDiff executable, whose binary is checked in on Win"""
+    return path_utils.path_from_base('third_party', 'cygwin', 'bin',
+                                     'wdiff.exe')
+
+
+def image_diff_path(target):
+    """Return the platform-specific binary path for the image compare util.
+    We use this if we can't find the binary in the default location
+    in path_utils.
+
+    Args:
+      target: Build target mode (debug or release)
+    """
+    return _find_binary(target, 'image_diff.exe')
+
+
+def layout_test_helper_path(target):
+    """Return the platform-specific binary path for the layout test helper.
+    We use this if we can't find the binary in the default location
+    in path_utils.
+
+    Args:
+      target: Build target mode (debug or release)
+    """
+    return _find_binary(target, 'layout_test_helper.exe')
+
+
+def test_shell_path(target):
+    """Return the platform-specific binary path for our TestShell.
+    We use this if we can't find the binary in the default location
+    in path_utils.
+
+    Args:
+      target: Build target mode (debug or release)
+    """
+    return _find_binary(target, 'test_shell.exe')
+
+
+def apache_executable_path():
+    """Returns the executable path to start Apache"""
+    path = path_utils.path_from_base('third_party', 'cygwin', "usr", "sbin")
+    # Don't return httpd.exe since we want to use this from cygwin.
+    return os.path.join(path, "httpd")
+
+
+def apache_config_file_path():
+    """Returns the path to Apache config file"""
+    return path_utils.path_from_base("third_party", "WebKit", "LayoutTests",
+        "http", "conf", "cygwin-httpd.conf")
+
+
+def lighttpd_executable_path():
+    """Returns the executable path to start LigHTTPd"""
+    return path_utils.path_from_base('third_party', 'lighttpd', 'win',
+                                     'LightTPD.exe')
+
+
+def lighttpd_module_path():
+    """Returns the library module path for LigHTTPd"""
+    return path_utils.path_from_base('third_party', 'lighttpd', 'win', 'lib')
+
+
+def lighttpd_php_path():
+    """Returns the PHP executable path for LigHTTPd"""
+    return path_utils.path_from_base('third_party', 'lighttpd', 'win', 'php5',
+                                     'php-cgi.exe')
+
+
+def shut_down_http_server(server_pid):
+    """Shut down the lighttpd web server. Blocks until it's fully shut down.
+
+    Args:
+      server_pid: The process ID of the running server.
+          Unused in this implementation of the method.
+    """
+    subprocess.Popen(('taskkill.exe', '/f', '/im', 'LightTPD.exe'),
+                     stdout=subprocess.PIPE,
+                     stderr=subprocess.PIPE).wait()
+    subprocess.Popen(('taskkill.exe', '/f', '/im', 'httpd.exe'),
+                     stdout=subprocess.PIPE,
+                     stderr=subprocess.PIPE).wait()
+
+
+def kill_process(pid):
+    """Forcefully kill the process.
+
+    Args:
+      pid: The id of the process to be killed.
+    """
+    subprocess.call(('taskkill.exe', '/f', '/pid', str(pid)),
+                    stdout=subprocess.PIPE,
+                    stderr=subprocess.PIPE)
+
+
+def kill_all_test_shells():
+    """Kills all instances of the test_shell binary currently running."""
+    subprocess.Popen(('taskkill.exe', '/f', '/im', 'test_shell.exe'),
+                     stdout=subprocess.PIPE,
+                     stderr=subprocess.PIPE).wait()
+
+#
+# Private helper functions.
+#
+
+
+def _find_binary(target, binary):
+    """On Windows, we look for binaries that we compile in potentially
+    two places: src/webkit/$target (preferably, which we get if we
+    built using webkit_glue.gyp), or src/chrome/$target (if compiled some
+    other way)."""
+    try:
+        return path_utils.path_from_base('webkit', target, binary)
+    except path_utils.PathNotFound:
+        try:
+            return path_utils.path_from_base('chrome', target, binary)
+        except path_utils.PathNotFound:
+            return path_utils.path_from_base('build', target, binary)
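[Editorial note: the Windows `platform_version()` above keys the baseline suffix off the (major, minor) pair from `sys.getwindowsversion()`. A minimal sketch of that mapping as a pure function, using the same version tuples and suffixes as the code above:]

```python
def win_version_suffix(major, minor):
    """Map a Windows (major, minor) version pair to a baseline suffix.

    6.1 -> Windows 7, 6.0 -> Vista, 5.1/5.2 -> XP (5.2 covers the
    64-bit XP / Server 2003 kernel); anything else gets no suffix.
    """
    if (major, minor) == (6, 1):
        return '-7'
    if (major, minor) == (6, 0):
        return '-vista'
    if major == 5 and minor in (1, 2):
        return '-xp'
    return ''
```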
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_expectations.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_expectations.py
new file mode 100644
index 0000000..f1647f7
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_expectations.py
@@ -0,0 +1,818 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""A helper class for reading in and dealing with tests expectations
+for layout tests.
+"""
+
+import logging
+import os
+import re
+import sys
+import time
+import path_utils
+
+sys.path.append(path_utils.path_from_base('third_party', 'WebKit',
+                                          'WebKitTools'))
+import simplejson
+
+# Test expectation and modifier constants.
+(PASS, FAIL, TEXT, IMAGE, IMAGE_PLUS_TEXT, TIMEOUT, CRASH, SKIP, WONTFIX,
+ DEFER, SLOW, REBASELINE, MISSING, FLAKY, NOW, NONE) = range(16)
+
+# Test expectation file update action constants
+(NO_CHANGE, REMOVE_TEST, REMOVE_PLATFORM, ADD_PLATFORMS_EXCEPT_THIS) = range(4)
+
+
+class TestExpectations:
+    TEST_LIST = "test_expectations.txt"
+
+    def __init__(self, tests, directory, platform, is_debug_mode, is_lint_mode,
+        tests_are_present=True):
+        """Reads the test expectations files from the given directory."""
+        path = os.path.join(directory, self.TEST_LIST)
+        self._expected_failures = TestExpectationsFile(path, tests, platform,
+            is_debug_mode, is_lint_mode, tests_are_present=tests_are_present)
+
+    # TODO(ojan): Allow for removing skipped tests when getting the list of
+    # tests to run, but not when getting metrics.
+    # TODO(ojan): Replace the Get* calls here with the more sane API exposed
+    # by TestExpectationsFile below. Maybe merge the two classes entirely?
+
+    def get_expectations_json_for_all_platforms(self):
+        return (
+            self._expected_failures.get_expectations_json_for_all_platforms())
+
+    def get_rebaselining_failures(self):
+        return (self._expected_failures.get_test_set(REBASELINE, FAIL) |
+                self._expected_failures.get_test_set(REBASELINE, IMAGE) |
+                self._expected_failures.get_test_set(REBASELINE, TEXT) |
+                self._expected_failures.get_test_set(REBASELINE,
+                                                     IMAGE_PLUS_TEXT))
+
+    def get_options(self, test):
+        return self._expected_failures.get_options(test)
+
+    def get_expectations(self, test):
+        return self._expected_failures.get_expectations(test)
+
+    def get_expectations_string(self, test):
+        """Returns the expectations for the given test as an uppercase string.
+        If there are no expectations for the test, then "PASS" is returned."""
+        expectations = self.get_expectations(test)
+        retval = []
+
+        for expectation in expectations:
+            for item in TestExpectationsFile.EXPECTATIONS.items():
+                if item[1] == expectation:
+                    retval.append(item[0])
+                    break
+
+        return " ".join(retval).upper()
+
+    def get_timeline_for_test(self, test):
+        return self._expected_failures.get_timeline_for_test(test)
+
+    def get_tests_with_result_type(self, result_type):
+        return self._expected_failures.get_tests_with_result_type(result_type)
+
+    def get_tests_with_timeline(self, timeline):
+        return self._expected_failures.get_tests_with_timeline(timeline)
+
+    def matches_an_expected_result(self, test, result):
+        """Returns whether we got one of the expected results for this test."""
+        return (result in self._expected_failures.get_expectations(test) or
+                (result in (IMAGE, TEXT, IMAGE_PLUS_TEXT) and
+                FAIL in self._expected_failures.get_expectations(test)) or
+                result == MISSING and self.is_rebaselining(test) or
+                result == SKIP and self._expected_failures.has_modifier(test,
+                                                                        SKIP))
+
+    def is_rebaselining(self, test):
+        return self._expected_failures.has_modifier(test, REBASELINE)
+
+    def has_modifier(self, test, modifier):
+        return self._expected_failures.has_modifier(test, modifier)
+
+    def remove_platform_from_file(self, tests, platform, backup=False):
+        return self._expected_failures.remove_platform_from_file(tests,
+                                                                 platform,
+                                                                 backup)
+
+
+def strip_comments(line):
+    """Strips comments from a line. Returns None if the line is empty
+    after stripping; otherwise returns the line with leading and trailing
+    whitespace removed and all other whitespace collapsed."""
+
+    comment_index = line.find('//')
+    if comment_index == -1:
+        comment_index = len(line)
+
+    line = re.sub(r'\s+', ' ', line[:comment_index].strip())
+    if line == '':
+        return None
+    else:
+        return line
+
+
+class ModifiersAndExpectations:
+    """A holder for modifiers and expectations on a test that serializes to
+    JSON."""
+
+    def __init__(self, modifiers, expectations):
+        self.modifiers = modifiers
+        self.expectations = expectations
+
+
+class ExpectationsJsonEncoder(simplejson.JSONEncoder):
+    """JSON encoder that can handle ModifiersAndExpectations objects.
+    """
+
+    def default(self, obj):
+        if isinstance(obj, ModifiersAndExpectations):
+            return {"modifiers": obj.modifiers,
+                    "expectations": obj.expectations}
+        else:
+            return simplejson.JSONEncoder.default(self, obj)
+
+
+class TestExpectationsFile:
+    """Test expectation files consist of lines with specifications of what
+    to expect from layout test cases. The test cases can be directories
+    in which case the expectations apply to all test cases in that
+    directory and any subdirectory. The format of the file is along the
+    lines of:
+
+      LayoutTests/fast/js/fixme.js = FAIL
+      LayoutTests/fast/js/flaky.js = FAIL PASS
+      LayoutTests/fast/js/crash.js = CRASH TIMEOUT FAIL PASS
+      ...
+
+    To add other options:
+      SKIP : LayoutTests/fast/js/no-good.js = TIMEOUT PASS
+      DEBUG : LayoutTests/fast/js/no-good.js = TIMEOUT PASS
+      DEBUG SKIP : LayoutTests/fast/js/no-good.js = TIMEOUT PASS
+      LINUX DEBUG SKIP : LayoutTests/fast/js/no-good.js = TIMEOUT PASS
+      DEFER LINUX WIN : LayoutTests/fast/js/no-good.js = TIMEOUT PASS
+
+    SKIP: Doesn't run the test.
+    SLOW: The test takes a long time to run, but does not timeout indefinitely.
+    WONTFIX: For tests that we never intend to pass on a given platform.
+    DEFER: Test does not count in our statistics for the current release.
+    DEBUG: Expectations apply only to the debug build.
+    RELEASE: Expectations apply only to the release build.
+    LINUX/WIN/WIN-XP/WIN-VISTA/WIN-7/MAC: Expectations apply only to these
+        platforms.
+
+    Notes:
+      -A test cannot be both SLOW and TIMEOUT
+      -A test cannot be both DEFER and WONTFIX
+      -A test should only be one of IMAGE, TEXT, IMAGE+TEXT, or FAIL. FAIL is
+       a migratory state that currently means either IMAGE, TEXT, or
+       IMAGE+TEXT. Once we have finished migrating the expectations, we will
+       change FAIL to have the meaning of IMAGE+TEXT and remove the IMAGE+TEXT
+       identifier.
+      -A test can be included twice, but not via the same path.
+      -If a test is included twice, then the more precise path wins.
+      -CRASH tests cannot be DEFER or WONTFIX
+    """
+
+    EXPECTATIONS = {'pass': PASS,
+                    'fail': FAIL,
+                    'text': TEXT,
+                    'image': IMAGE,
+                    'image+text': IMAGE_PLUS_TEXT,
+                    'timeout': TIMEOUT,
+                    'crash': CRASH,
+                    'missing': MISSING}
+
+    EXPECTATION_DESCRIPTIONS = {SKIP: ('skipped', 'skipped'),
+                                PASS: ('pass', 'passes'),
+                                FAIL: ('failure', 'failures'),
+                                TEXT: ('text diff mismatch',
+                                       'text diff mismatch'),
+                                IMAGE: ('image mismatch', 'image mismatch'),
+                                IMAGE_PLUS_TEXT: ('image and text mismatch',
+                                                  'image and text mismatch'),
+                                CRASH: ('test shell crash',
+                                        'test shell crashes'),
+                                TIMEOUT: ('test timed out', 'tests timed out'),
+                                MISSING: ('no expected result found',
+                                          'no expected results found')}
+
+    EXPECTATION_ORDER = (PASS, CRASH, TIMEOUT, MISSING, IMAGE_PLUS_TEXT,
+       TEXT, IMAGE, FAIL, SKIP)
+
+    BASE_PLATFORMS = ('linux', 'mac', 'win')
+    PLATFORMS = BASE_PLATFORMS + ('win-xp', 'win-vista', 'win-7')
+
+    BUILD_TYPES = ('debug', 'release')
+
+    MODIFIERS = {'skip': SKIP,
+                 'wontfix': WONTFIX,
+                 'defer': DEFER,
+                 'slow': SLOW,
+                 'rebaseline': REBASELINE,
+                 'none': NONE}
+
+    TIMELINES = {'wontfix': WONTFIX,
+                 'now': NOW,
+                 'defer': DEFER}
+
+    RESULT_TYPES = {'skip': SKIP,
+                    'pass': PASS,
+                    'fail': FAIL,
+                    'flaky': FLAKY}
+
+    def __init__(self, path, full_test_list, platform, is_debug_mode,
+        is_lint_mode, expectations_as_str=None, suppress_errors=False,
+        tests_are_present=True):
+        """
+        path: The path to the expectation file. An error is thrown if a test is
+            listed more than once.
+        full_test_list: The list of all tests to be run pending processing of
+            the expectations for those tests.
+        platform: Which platform from self.PLATFORMS to filter tests for.
+        is_debug_mode: Whether we are testing a test_shell built in debug
+            mode.
+        is_lint_mode: Whether this is just linting test_expectations.txt.
+        expectations_as_str: Contents of the expectations file. Used instead of
+            the path. This makes unittesting sane.
+        suppress_errors: Whether to suppress lint errors.
+        tests_are_present: Whether the test files are present in the local
+            filesystem. The LTTF Dashboard uses False here to avoid having to
+            keep a local copy of the tree.
+        """
+
+        self._path = path
+        self._expectations_as_str = expectations_as_str
+        self._is_lint_mode = is_lint_mode
+        self._tests_are_present = tests_are_present
+        self._full_test_list = full_test_list
+        self._suppress_errors = suppress_errors
+        self._errors = []
+        self._non_fatal_errors = []
+        self._platform = self.to_test_platform_name(platform)
+        if self._platform is None:
+            raise Exception("Unknown platform '%s'" % (platform))
+        self._is_debug_mode = is_debug_mode
+
+        # Maps relative test paths as listed in the expectations file to a
+        # list of maps containing modifiers and expectations for each time
+        # the test is listed in the expectations file.
+        self._all_expectations = {}
+
+        # Maps a test to its list of expectations.
+        self._test_to_expectations = {}
+
+        # Maps a test to its list of options (string values)
+        self._test_to_options = {}
+
+        # Maps a test to its list of modifiers: the constants associated with
+        # the options minus any bug or platform strings
+        self._test_to_modifiers = {}
+
+        # Maps a test to the base path under which it was listed in the
+        # expectations file.
+        self._test_list_paths = {}
+
+        self._modifier_to_tests = self._dict_of_sets(self.MODIFIERS)
+        self._expectation_to_tests = self._dict_of_sets(self.EXPECTATIONS)
+        self._timeline_to_tests = self._dict_of_sets(self.TIMELINES)
+        self._result_type_to_tests = self._dict_of_sets(self.RESULT_TYPES)
+
+        self._read(self._get_iterable_expectations())
+
+    def _dict_of_sets(self, strings_to_constants):
+        """Takes a dict of strings->constants and returns a dict mapping
+        each constant to an empty set."""
+        d = {}
+        for c in strings_to_constants.values():
+            d[c] = set()
+        return d
+
+    def _get_iterable_expectations(self):
+        """Returns an object that can be iterated over. Allows for not caring
+        about whether we're iterating over a file or a new-line separated
+        string."""
+        if self._expectations_as_str:
+            iterable = [x + "\n" for x in
+                self._expectations_as_str.split("\n")]
+            # Strip the final entry if it's empty to avoid adding an extra
+            # newline.
+            if iterable[-1] == "\n":
+                return iterable[:-1]
+            return iterable
+        else:
+            return open(self._path)
+
+    def to_test_platform_name(self, name):
+        """Returns the test expectation platform that will be used for a
+        given platform name, or None if there is no match."""
+        chromium_prefix = 'chromium-'
+        name = name.lower()
+        if name.startswith(chromium_prefix):
+            name = name[len(chromium_prefix):]
+        if name in self.PLATFORMS:
+            return name
+        return None
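For reference, the normalization this method performs can be sketched as a standalone function (a sketch only; the real method lives on the class and consults self.PLATFORMS):

```python
# PLATFORMS tuple copied from the class definition above.
PLATFORMS = ('linux', 'mac', 'win', 'win-xp', 'win-vista', 'win-7')


def to_test_platform_name(name):
    """Strip an optional 'chromium-' prefix and validate against PLATFORMS."""
    name = name.lower()
    if name.startswith('chromium-'):
        name = name[len('chromium-'):]
    return name if name in PLATFORMS else None
```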
+
+    def get_test_set(self, modifier, expectation=None, include_skips=True):
+        if expectation is None:
+            tests = self._modifier_to_tests[modifier]
+        else:
+            tests = (self._expectation_to_tests[expectation] &
+                self._modifier_to_tests[modifier])
+
+        if not include_skips:
+            tests = tests - self.get_test_set(SKIP, expectation)
+
+        return tests
+
+    def get_tests_with_result_type(self, result_type):
+        return self._result_type_to_tests[result_type]
+
+    def get_tests_with_timeline(self, timeline):
+        return self._timeline_to_tests[timeline]
+
+    def get_options(self, test):
+        """This returns the entire set of options for the given test
+        (the modifiers plus the BUGXXXX identifier). This is used by the
+        LTTF dashboard."""
+        return self._test_to_options[test]
+
+    def has_modifier(self, test, modifier):
+        return test in self._modifier_to_tests[modifier]
+
+    def get_expectations(self, test):
+        return self._test_to_expectations[test]
+
+    def get_expectations_json_for_all_platforms(self):
+        # Specify separators in order to get compact encoding.
+        return ExpectationsJsonEncoder(separators=(',', ':')).encode(
+            self._all_expectations)
+
+    def contains(self, test):
+        return test in self._test_to_expectations
+
+    def remove_platform_from_file(self, tests, platform, backup=False):
+        """Remove the platform option from test expectations file.
+
+        If a test is in the test list and has an option that matches the given
+        platform, remove the matching platform and save the updated test back
+        to the file. If no other platforms remaining after removal, delete the
+        test from the file.
+
+        Args:
+          tests: list of tests that need to update..
+          platform: which platform option to remove.
+          backup: if true, the original test expectations file is saved as
+                  [self.TEST_LIST].orig.YYYYMMDDHHMMSS
+
+        Returns:
+          no
+        """
+
+        new_file = self._path + '.new'
+        logging.debug('Original file: "%s"', self._path)
+        logging.debug('New file: "%s"', new_file)
+        f_orig = self._get_iterable_expectations()
+        f_new = open(new_file, 'w')
+
+        tests_removed = 0
+        tests_updated = 0
+        lineno = 0
+        for line in f_orig:
+            lineno += 1
+            action = self._get_platform_update_action(line, lineno, tests,
+                                                      platform)
+            if action == NO_CHANGE:
+                # Save the original line back to the file
+                logging.debug('No change to test: %s', line)
+                f_new.write(line)
+            elif action == REMOVE_TEST:
+                tests_removed += 1
+                logging.info('Test removed: %s', line)
+            elif action == REMOVE_PLATFORM:
+                parts = line.split(':')
+                new_options = parts[0].replace(platform.upper() + ' ', '', 1)
+                new_line = ('%s:%s' % (new_options, parts[1]))
+                f_new.write(new_line)
+                tests_updated += 1
+                logging.info('Test updated: ')
+                logging.info('  old: %s', line)
+                logging.info('  new: %s', new_line)
+            elif action == ADD_PLATFORMS_EXCEPT_THIS:
+                parts = line.split(':')
+                new_options = parts[0]
+                for p in self.PLATFORMS:
+                    p = p.upper()
+                    # This is a temp solution for rebaselining tool.
+                    # Do not add tags WIN-7 and WIN-VISTA to test expectations
+                    # if the original line does not specify the platform option.
+                    # TODO(victorw): Remove WIN-VISTA and WIN-7 once we have
+                    # reliable Win 7 and Win Vista buildbots set up.
+                    if not p in (platform.upper(), 'WIN-VISTA', 'WIN-7'):
+                        new_options += p + ' '
+                new_line = ('%s:%s' % (new_options, parts[1]))
+                f_new.write(new_line)
+                tests_updated += 1
+                logging.info('Test updated: ')
+                logging.info('  old: %s', line)
+                logging.info('  new: %s', new_line)
+            else:
+                logging.error('Unknown update action: %d; line: %s',
+                              action, line)
+
+        logging.info('Total tests removed: %d', tests_removed)
+        logging.info('Total tests updated: %d', tests_updated)
+
+        f_orig.close()
+        f_new.close()
+
+        if backup:
+            date_suffix = time.strftime('%Y%m%d%H%M%S',
+                                        time.localtime(time.time()))
+            backup_file = ('%s.orig.%s' % (self._path, date_suffix))
+            if os.path.exists(backup_file):
+                os.remove(backup_file)
+            logging.info('Saving original file to "%s"', backup_file)
+            os.rename(self._path, backup_file)
+        else:
+            os.remove(self._path)
+
+        logging.debug('Saving new file to "%s"', self._path)
+        os.rename(new_file, self._path)
+        return True
+
+    def parse_expectations_line(self, line, lineno):
+        """Parses a line from test_expectations.txt and returns a tuple
+        with the test path, options as a list, expectations as a list."""
+        line = strip_comments(line)
+        if not line:
+            return (None, None, None)
+
+        options = []
+        if line.find(":") is -1:
+            test_and_expectation = line.split("=")
+        else:
+            parts = line.split(":")
+            options = self._get_options_list(parts[0])
+            test_and_expectation = parts[1].split('=')
+
+        test = test_and_expectation[0].strip()
+        if len(test_and_expectation) != 2:
+            self._add_error(lineno, "Missing expectations.",
+                           test_and_expectation)
+            expectations = None
+        else:
+            expectations = self._get_options_list(test_and_expectation[1])
+
+        return (test, options, expectations)
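The grammar this method accepts is "OPTIONS : test = EXPECTATIONS", with the options section optional. The same parsing can be sketched in a self-contained form (the strip_comments stand-in here is a simplification of the module's real helper):

```python
def strip_comments(line):
    """Simplified stand-in for the module's strip_comments helper:
    drops '//' comments and surrounding whitespace."""
    return line.split('//')[0].strip()


def parse_line(line):
    """Parse 'OPTIONS : test = EXPECTATIONS' into (test, options, expectations)."""
    line = strip_comments(line)
    if not line:
        return (None, None, None)
    options = []
    if ':' in line:
        options_part, line = line.split(':', 1)
        options = options_part.lower().split()
    test, _, expectations = line.partition('=')
    # Like the real parser, expectations is None when the '= ...' part is missing.
    return (test.strip(), options, expectations.lower().split() or None)
```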
+
+    def _get_platform_update_action(self, line, lineno, tests, platform):
+        """Check the platform option and return the action needs to be taken.
+
+        Args:
+          line: current line in test expectations file.
+          lineno: current line number of line
+          tests: list of tests that need to be updated.
+          platform: which platform option to remove.
+
+        Returns:
+          NO_CHANGE: no change to the line (comments, test not in the list etc)
+          REMOVE_TEST: remove the test from file.
+          REMOVE_PLATFORM: remove this platform option from the test.
+          ADD_PLATFORMS_EXCEPT_THIS: add all the platforms except this one.
+        """
+        test, options, expectations = self.parse_expectations_line(line,
+                                                                   lineno)
+        if not test or test not in tests:
+            return NO_CHANGE
+
+        has_any_platform = False
+        for option in options:
+            if option in self.PLATFORMS:
+                has_any_platform = True
+                if not option == platform:
+                    return REMOVE_PLATFORM
+
+        # If there is no platform specified, then it means apply to all
+        # platforms. Return the action to add all the platforms except this
+        # one.
+        if not has_any_platform:
+            return ADD_PLATFORMS_EXCEPT_THIS
+
+        return REMOVE_TEST
+
+    def _has_valid_modifiers_for_current_platform(self, options, lineno,
+        test_and_expectations, modifiers):
+        """Returns true if the current platform is in the options list or if
+        no platforms are listed and if there are no fatal errors in the
+        options list.
+
+        Args:
+          options: List of lowercase options.
+          lineno: The line in the file where the test is listed.
+          test_and_expectations: The path and expectations for the test.
+          modifiers: The set to populate with modifiers.
+        """
+        has_any_platform = False
+        has_bug_id = False
+        for option in options:
+            if option in self.MODIFIERS:
+                modifiers.add(option)
+            elif option in self.PLATFORMS:
+                has_any_platform = True
+            elif option.startswith('bug'):
+                has_bug_id = True
+            elif option not in self.BUILD_TYPES:
+                self._add_error(lineno, 'Invalid modifier for test: %s' %
+                                option, test_and_expectations)
+
+        if has_any_platform and not self._match_platform(options):
+            return False
+
+        if not has_bug_id and 'wontfix' not in options:
+            # TODO(ojan): Turn this into an AddError call once all the
+            # tests have BUG identifiers.
+            self._log_non_fatal_error(lineno, 'Test lacks BUG modifier.',
+                test_and_expectations)
+
+        if 'release' in options or 'debug' in options:
+            if self._is_debug_mode and 'debug' not in options:
+                return False
+            if not self._is_debug_mode and 'release' not in options:
+                return False
+
+        if 'wontfix' in options and 'defer' in options:
+            self._add_error(lineno, 'Test cannot be both DEFER and WONTFIX.',
+                test_and_expectations)
+
+        if self._is_lint_mode and 'rebaseline' in options:
+            self._add_error(lineno,
+                'REBASELINE should only be used for running rebaseline.py. '
+                'Cannot be checked in.', test_and_expectations)
+
+        return True
+
+    def _match_platform(self, options):
+        """Match the list of options against our specified platform. If any
+        of the options prefix-match self._platform, return True. This handles
+        the case where a test is marked WIN and the platform is WIN-VISTA.
+
+        Args:
+          options: list of options
+        """
+        for opt in options:
+            if self._platform.startswith(opt):
+                return True
+        return False
+
+    def _add_to_all_expectations(self, test, options, expectations):
+        # Make all paths unix-style so the dashboard doesn't need to.
+        test = test.replace('\\', '/')
+        if test not in self._all_expectations:
+            self._all_expectations[test] = []
+        self._all_expectations[test].append(
+            ModifiersAndExpectations(options, expectations))
+
+    def _read(self, expectations):
+        """For each test in an expectations iterable, generate the
+        expectations for it."""
+        lineno = 0
+        for line in expectations:
+            lineno += 1
+
+            test_list_path, options, expectations = \
+                self.parse_expectations_line(line, lineno)
+            if not expectations:
+                continue
+
+            self._add_to_all_expectations(test_list_path,
+                                          " ".join(options).upper(),
+                                          " ".join(expectations).upper())
+
+            modifiers = set()
+            if options and not self._has_valid_modifiers_for_current_platform(
+                options, lineno, test_list_path, modifiers):
+                continue
+
+            expectations = self._parse_expectations(expectations, lineno,
+                test_list_path)
+
+            if 'slow' in options and TIMEOUT in expectations:
+                self._add_error(lineno,
+                    'A test cannot be both slow and timeout. If it times out '
+                    'indefinitely, then it should just be timeout.',
+                    test_list_path)
+
+            full_path = os.path.join(path_utils.layout_tests_dir(),
+                                     test_list_path)
+            full_path = os.path.normpath(full_path)
+            # WebKit's way of skipping tests is to add a -disabled suffix.
+            # So we should consider the path existing if the path or the
+            # -disabled version exists.
+            if (self._tests_are_present and not os.path.exists(full_path)
+                and not os.path.exists(full_path + '-disabled')):
+                # Log a non-fatal error here since you hit this case any
+                # time you update test_expectations.txt without syncing
+                # the LayoutTests directory.
+                self._log_non_fatal_error(lineno, 'Path does not exist.',
+                                       test_list_path)
+                continue
+
+            if not self._full_test_list:
+                tests = [test_list_path]
+            else:
+                tests = self._expand_tests(test_list_path)
+
+            self._add_tests(tests, expectations, test_list_path, lineno,
+                           modifiers, options)
+
+        if not self._suppress_errors and (
+            len(self._errors) or len(self._non_fatal_errors)):
+            if self._is_debug_mode:
+                build_type = 'DEBUG'
+            else:
+                build_type = 'RELEASE'
+            print "\nFAILURES FOR PLATFORM: %s, BUILD_TYPE: %s" \
+                % (self._platform.upper(), build_type)
+
+            for error in self._non_fatal_errors:
+                logging.error(error)
+            if len(self._errors):
+                raise SyntaxError('\n'.join(map(str, self._errors)))
+
+        # Now add in the tests that weren't present in the expectations file
+        expectations = set([PASS])
+        options = []
+        modifiers = []
+        if self._full_test_list:
+            for test in self._full_test_list:
+                if test not in self._test_list_paths:
+                    self._add_test(test, modifiers, expectations, options)
+
+    def _get_options_list(self, list_string):
+        return [part.strip().lower() for part in list_string.strip().split(' ')]
+
+    def _parse_expectations(self, expectations, lineno, test_list_path):
+        result = set()
+        for part in expectations:
+            if part not in self.EXPECTATIONS:
+                self._add_error(lineno, 'Unsupported expectation: %s' % part,
+                    test_list_path)
+                continue
+            expectation = self.EXPECTATIONS[part]
+            result.add(expectation)
+        return result
+
+    def _expand_tests(self, test_list_path):
+        """Convert the test specification to an absolute, normalized
+        path and make sure directories end with the OS path separator."""
+        path = os.path.join(path_utils.layout_tests_dir(), test_list_path)
+        path = os.path.normpath(path)
+        path = self._fix_dir(path)
+
+        result = []
+        for test in self._full_test_list:
+            if test.startswith(path):
+                result.append(test)
+        return result
+
+    def _fix_dir(self, path):
+        """Check to see if the path points to a directory, and if so, append
+        the directory separator if necessary."""
+        if self._tests_are_present:
+            if os.path.isdir(path):
+                path = os.path.join(path, '')
+        else:
+            # If we can't check the filesystem to see if this is a directory,
+            # we assume that files w/o an extension are directories.
+            # TODO(dpranke): What happens w/ LayoutTests/css2.1 ?
+            if os.path.splitext(path)[1] == '':
+                path = os.path.join(path, '')
+        return path
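Together, _expand_tests and _fix_dir implement prefix matching where directory entries get a trailing separator, so that a directory like "fast/dom" does not accidentally match "fast/dom2". A minimal sketch, using a hypothetical test list:

```python
import os


def fix_dir(path, is_dir):
    """Append the OS path separator to directory paths (sketch of _fix_dir)."""
    return os.path.join(path, '') if is_dir else path


# Hypothetical test list for illustration.
full_test_list = ['fast/dom/a.html', 'fast/dom2/b.html']
prefix = fix_dir('fast/dom', is_dir=True)
matches = [t for t in full_test_list if t.startswith(prefix)]
```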
+
+    def _add_tests(self, tests, expectations, test_list_path, lineno,
+                   modifiers, options):
+        for test in tests:
+            if self._already_seen_test(test, test_list_path, lineno):
+                continue
+
+            self._clear_expectations_for_test(test, test_list_path)
+            self._add_test(test, modifiers, expectations, options)
+
+    def _add_test(self, test, modifiers, expectations, options):
+        """Sets the expected state for a given test.
+
+        This routine assumes the test has not been added before. If it has,
+        use _clear_expectations_for_test() to reset the state prior to
+        calling this.
+
+        Args:
+          test: test to add
+          modifiers: sequence of modifier keywords ('wontfix', 'slow', etc.)
+          expectations: sequence of expectations (PASS, IMAGE, etc.)
+          options: sequence of keywords and bug identifiers."""
+        self._test_to_expectations[test] = expectations
+        for expectation in expectations:
+            self._expectation_to_tests[expectation].add(test)
+
+        self._test_to_options[test] = options
+        self._test_to_modifiers[test] = set()
+        for modifier in modifiers:
+            mod_value = self.MODIFIERS[modifier]
+            self._modifier_to_tests[mod_value].add(test)
+            self._test_to_modifiers[test].add(mod_value)
+
+        if 'wontfix' in modifiers:
+            self._timeline_to_tests[WONTFIX].add(test)
+        elif 'defer' in modifiers:
+            self._timeline_to_tests[DEFER].add(test)
+        else:
+            self._timeline_to_tests[NOW].add(test)
+
+        if 'skip' in modifiers:
+            self._result_type_to_tests[SKIP].add(test)
+        elif expectations == set([PASS]):
+            self._result_type_to_tests[PASS].add(test)
+        elif len(expectations) > 1:
+            self._result_type_to_tests[FLAKY].add(test)
+        else:
+            self._result_type_to_tests[FAIL].add(test)
+
+    def _clear_expectations_for_test(self, test, test_list_path):
+        """Remove prexisting expectations for this test.
+        This happens if we are seeing a more precise path
+        than a previous listing.
+        """
+        if test in self._test_list_paths:
+            self._test_to_expectations.pop(test, '')
+            self._remove_from_sets(test, self._expectation_to_tests)
+            self._remove_from_sets(test, self._modifier_to_tests)
+            self._remove_from_sets(test, self._timeline_to_tests)
+            self._remove_from_sets(test, self._result_type_to_tests)
+
+        self._test_list_paths[test] = os.path.normpath(test_list_path)
+
+    def _remove_from_sets(self, test, set_dict):
+        """Removes the given test from the sets in the dictionary.
+
+        Args:
+          test: test to look for
+          set_dict: dict of sets of files"""
+        for set_of_tests in set_dict.itervalues():
+            if test in set_of_tests:
+                set_of_tests.remove(test)
+
+    def _already_seen_test(self, test, test_list_path, lineno):
+        """Returns true if we've already seen a more precise path for this test
+        than the test_list_path.
+        """
+        if test not in self._test_list_paths:
+            return False
+
+        prev_base_path = self._test_list_paths[test]
+        if (prev_base_path == os.path.normpath(test_list_path)):
+            self._add_error(lineno, 'Duplicate expectations.', test)
+            return True
+
+        # Check if we've already seen a more precise path.
+        return prev_base_path.startswith(os.path.normpath(test_list_path))
+
+    def _add_error(self, lineno, msg, path):
+        """Reports an error that will prevent running the tests. Does not
+        immediately raise an exception because we'd like to aggregate all the
+        errors so they can all be printed out."""
+        self._errors.append('\nLine:%s %s %s' % (lineno, msg, path))
+
+    def _log_non_fatal_error(self, lineno, msg, path):
+        """Reports an error that will not prevent running the tests. These are
+        still errors, but not bad enough to warrant breaking test running."""
+        self._non_fatal_errors.append('Line:%s %s %s' % (lineno, msg, path))
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_failures.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_failures.py
new file mode 100644
index 0000000..6957dea
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_failures.py
@@ -0,0 +1,267 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Classes for failures that occur during tests."""
+
+import os
+import test_expectations
+
+
+def determine_result_type(failure_list):
+    """Takes a set of test_failures and returns which result type best fits
+    the list of failures. "Best fits" means we use the worst type of failure.
+
+    Returns:
+      one of the test_expectations result types - PASS, TEXT, CRASH, etc."""
+
+    if not failure_list:
+        return test_expectations.PASS
+
+    failure_types = [type(f) for f in failure_list]
+    if FailureCrash in failure_types:
+        return test_expectations.CRASH
+    elif FailureTimeout in failure_types:
+        return test_expectations.TIMEOUT
+    elif (FailureMissingResult in failure_types or
+          FailureMissingImage in failure_types or
+          FailureMissingImageHash in failure_types):
+        return test_expectations.MISSING
+    else:
+        is_text_failure = FailureTextMismatch in failure_types
+        is_image_failure = (FailureImageHashIncorrect in failure_types or
+                            FailureImageHashMismatch in failure_types)
+        if is_text_failure and is_image_failure:
+            return test_expectations.IMAGE_PLUS_TEXT
+        elif is_text_failure:
+            return test_expectations.TEXT
+        elif is_image_failure:
+            return test_expectations.IMAGE
+        else:
+            raise ValueError("unclassifiable set of failures: "
+                             + str(failure_types))
+
+
+class TestFailure(object):
+    """Abstract base class that defines the failure interface."""
+
+    @staticmethod
+    def message():
+        """Returns a string describing the failure in more detail."""
+        raise NotImplementedError
+
+    def result_html_output(self, filename):
+        """Returns an HTML string to be included on the results.html page."""
+        raise NotImplementedError
+
+    def should_kill_test_shell(self):
+        """Returns True if we should kill the test shell before the next
+        test."""
+        return False
+
+    def relative_output_filename(self, filename, modifier):
+        """Returns a relative filename inside the output dir that contains
+        modifier.
+
+        For example, if filename is fast\dom\foo.html and modifier is
+        "-expected.txt", the return value is fast\dom\foo-expected.txt
+
+        Args:
+          filename: relative filename to test file
+          modifier: a string to replace the extension of filename with
+
+        Returns:
+          The relative Windows path to the output filename.
+        """
+        return os.path.splitext(filename)[0] + modifier
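The suffix substitution is just an os.path.splitext swap, as a standalone sketch:

```python
import os


def relative_output_filename(filename, modifier):
    """Replace the test file's extension with the given suffix."""
    return os.path.splitext(filename)[0] + modifier
```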
+
+
+class FailureWithType(TestFailure):
+    """Base class that produces standard HTML output based on the test type.
+
+    Subclasses may commonly choose to override result_html_output, but still
+    use the standard output_links.
+    """
+
+    def __init__(self, test_type):
+        TestFailure.__init__(self)
+        # TODO(ojan): This class no longer needs to know the test_type.
+        self._test_type = test_type
+
+    # Filename suffixes used by result_html_output.
+    OUT_FILENAMES = []
+
+    def output_links(self, filename, out_names):
+        """Returns a string holding all applicable output file links.
+
+        Args:
+          filename: the test filename, used to construct the result file names
+          out_names: list of filename suffixes for the files. If three or more
+              suffixes are in the list, they should be [actual, expected, diff,
+              wdiff]. Two suffixes should be [actual, expected], and a
+              single item is the [actual] filename suffix.
+              If out_names is empty, returns the empty string.
+        """
+        links = ['']
+        uris = [self.relative_output_filename(filename, fn) for
+                fn in out_names]
+        if len(uris) > 1:
+            links.append("<a href='%s'>expected</a>" % uris[1])
+        if len(uris) > 0:
+            links.append("<a href='%s'>actual</a>" % uris[0])
+        if len(uris) > 2:
+            links.append("<a href='%s'>diff</a>" % uris[2])
+        if len(uris) > 3:
+            links.append("<a href='%s'>wdiff</a>" % uris[3])
+        return ' '.join(links)
+
+    def result_html_output(self, filename):
+        return self.message() + self.output_links(filename, self.OUT_FILENAMES)
+
+
+class FailureTimeout(TestFailure):
+    """Test timed out.  We also want to restart the test shell if this
+    happens."""
+
+    @staticmethod
+    def message():
+        return "Test timed out"
+
+    def result_html_output(self, filename):
+        return "<strong>%s</strong>" % self.message()
+
+    def should_kill_test_shell(self):
+        return True
+
+
+class FailureCrash(TestFailure):
+    """Test shell crashed."""
+
+    @staticmethod
+    def message():
+        return "Test shell crashed"
+
+    def result_html_output(self, filename):
+        # TODO(tc): create a link to the minidump file
+        stack = self.relative_output_filename(filename, "-stack.txt")
+        return "<strong>%s</strong> <a href=%s>stack</a>" % (self.message(),
+                                                             stack)
+
+    def should_kill_test_shell(self):
+        return True
+
+
+class FailureMissingResult(FailureWithType):
+    """Expected result was missing."""
+    OUT_FILENAMES = ["-actual.txt"]
+
+    @staticmethod
+    def message():
+        return "No expected results found"
+
+    def result_html_output(self, filename):
+        return ("<strong>%s</strong>" % self.message() +
+                self.output_links(filename, self.OUT_FILENAMES))
+
+
+class FailureTextMismatch(FailureWithType):
+    """Text diff output failed."""
+    # Filename suffixes used by result_html_output.
+    OUT_FILENAMES = ["-actual.txt", "-expected.txt", "-diff.txt"]
+    OUT_FILENAMES_WDIFF = ["-actual.txt", "-expected.txt", "-diff.txt",
+                           "-wdiff.html"]
+
+    def __init__(self, test_type, has_wdiff):
+        FailureWithType.__init__(self, test_type)
+        if has_wdiff:
+            self.OUT_FILENAMES = self.OUT_FILENAMES_WDIFF
+
+    @staticmethod
+    def message():
+        return "Text diff mismatch"
+
+
+class FailureMissingImageHash(FailureWithType):
+    """Actual result hash was missing."""
+    # Chrome doesn't know to display a .checksum file as text, so don't bother
+    # putting in a link to the actual result.
+    OUT_FILENAMES = []
+
+    @staticmethod
+    def message():
+        return "No expected image hash found"
+
+    def result_html_output(self, filename):
+        return "<strong>%s</strong>" % self.message()
+
+
+class FailureMissingImage(FailureWithType):
+    """Actual result image was missing."""
+    OUT_FILENAMES = ["-actual.png"]
+
+    @staticmethod
+    def message():
+        return "No expected image found"
+
+    def result_html_output(self, filename):
+        return ("<strong>%s</strong>" % self.message() +
+                self.output_links(filename, self.OUT_FILENAMES))
+
+
+class FailureImageHashMismatch(FailureWithType):
+    """Image hashes didn't match."""
+    OUT_FILENAMES = ["-actual.png", "-expected.png", "-diff.png"]
+
+    @staticmethod
+    def message():
+        # We call this a simple image mismatch to avoid confusion, since
+        # we link to the PNGs rather than the checksums.
+        return "Image mismatch"
+
+
+class FailureFuzzyFailure(FailureWithType):
+    """Fuzzy image comparison also failed."""
+    OUT_FILENAMES = ["-actual.png", "-expected.png"]
+
+    @staticmethod
+    def message():
+        return "Fuzzy image match also failed"
+
+
+class FailureImageHashIncorrect(FailureWithType):
+    """Actual result hash is incorrect."""
+    # Chrome doesn't know to display a .checksum file as text, so don't bother
+    # putting in a link to the actual result.
+    OUT_FILENAMES = []
+
+    @staticmethod
+    def message():
+        return "Images match, expected image hash incorrect. "
+
+    def result_html_output(self, filename):
+        return "<strong>%s</strong>" % self.message()
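The failure classes in this patch share one small polymorphic pattern: each subclass supplies a static message(), and subclasses that have artifacts to link override result_html_output(). A minimal stand-alone sketch of that pattern (class and file names here are illustrative, not taken from the patch):

```python
class TestFailureSketch(object):
    """Base class: subclasses supply a message and optional HTML output."""

    @staticmethod
    def message():
        raise NotImplementedError

    def result_html_output(self, filename):
        # Default rendering: just the bold message, no artifact links.
        return "<strong>%s</strong>" % self.message()


class TimeoutSketch(TestFailureSketch):
    @staticmethod
    def message():
        return "Test timed out"


class CrashSketch(TestFailureSketch):
    @staticmethod
    def message():
        return "Test shell crashed"

    def result_html_output(self, filename):
        # Crashes additionally link to a per-test stack trace file.
        stack = filename + "-stack.txt"
        return '<strong>%s</strong> <a href="%s">stack</a>' % (
            self.message(), stack)


html = [f.result_html_output("fast/js/test") for f in
        (TimeoutSketch(), CrashSketch())]
```

The dashboards can then render any mix of failures uniformly, without knowing the concrete failure types.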
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_files.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_files.py
new file mode 100644
index 0000000..91fe136
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_files.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""This module is used to find all of the layout test files used by Chromium
+(across all platforms). It exposes one public function - gather_test_files()
+- which takes an optional list of paths. If a list is passed in, the returned
+list of test files is constrained to those found under the paths passed in,
+i.e. calling gather_test_files(["LayoutTests/fast"]) will only return files
+under that directory."""
+
+import glob
+import os
+import path_utils
+
+# When collecting test cases, we include any file with these extensions.
+_supported_file_extensions = set(['.html', '.shtml', '.xml', '.xhtml', '.pl',
+                                  '.php', '.svg'])
+# When collecting test cases, skip these directories
+_skipped_directories = set(['.svn', '_svn', 'resources', 'script-tests'])
+
+
+def gather_test_files(paths):
+    """Generate a set of test files and return them.
+
+    Args:
+      paths: a list of command line paths relative to the webkit/tests
+          directory. glob patterns are ok.
+    """
+    paths_to_walk = set()
+    # if paths is empty, provide a pre-defined list.
+    if paths:
+        for path in paths:
+            # If there's an * in the name, assume it's a glob pattern.
+            path = os.path.join(path_utils.layout_tests_dir(), path)
+            if path.find('*') > -1:
+                filenames = glob.glob(path)
+                paths_to_walk.update(filenames)
+            else:
+                paths_to_walk.add(path)
+    else:
+        paths_to_walk.add(path_utils.layout_tests_dir())
+
+    # Now walk all the paths passed in on the command line and get filenames
+    test_files = set()
+    for path in paths_to_walk:
+        if os.path.isfile(path) and _has_supported_extension(path):
+            test_files.add(os.path.normpath(path))
+            continue
+
+        for root, dirs, files in os.walk(path):
+            # don't walk skipped directories and sub directories
+            if os.path.basename(root) in _skipped_directories:
+                del dirs[:]
+                continue
+
+            for filename in files:
+                if _has_supported_extension(filename):
+                    filename = os.path.join(root, filename)
+                    filename = os.path.normpath(filename)
+                    test_files.add(filename)
+
+    return test_files
+
+
+def _has_supported_extension(filename):
+    """Return true if filename has one of the file extensions we want to run
+    tests on."""
+    extension = os.path.splitext(filename)[1]
+    return extension in _supported_file_extensions
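The directory-skipping in gather_test_files() relies on a standard os.walk idiom: mutating the dirs list in place keeps the walk from descending into pruned directories. A runnable sketch of the same idiom, built against a throwaway temp tree (the file names are invented for illustration):

```python
import os
import tempfile

# Build a tiny tree: one real test plus a 'resources' dir that must be skipped.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "fast", "resources"))
open(os.path.join(root, "fast", "a.html"), "w").close()
open(os.path.join(root, "fast", "resources", "helper.html"), "w").close()

_skipped = set(["resources"])
found = set()
for dirpath, dirs, files in os.walk(root):
    # Pruning dirs in place stops os.walk from descending into them --
    # the same effect as the patch's `del dirs[:]` on a matched directory.
    dirs[:] = [d for d in dirs if d not in _skipped]
    for name in files:
        if os.path.splitext(name)[1] == ".html":
            found.add(name)
```

Only fast/a.html survives; resources/helper.html is never visited because its parent directory was pruned before the walk reached it.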
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_shell_thread.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_shell_thread.py
new file mode 100644
index 0000000..10d0509
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/test_shell_thread.py
@@ -0,0 +1,511 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""A Thread object for running the test shell and processing URLs from a
+shared queue.
+
+Each thread runs a separate instance of the test_shell binary and validates
+the output.  When there are no more URLs to process in the shared queue, the
+thread exits.
+"""
+
+import copy
+import logging
+import os
+import Queue
+import signal
+import subprocess
+import sys
+import thread
+import threading
+import time
+
+import path_utils
+import test_failures
+
+
+def process_output(proc, test_info, test_types, test_args, target, output_dir):
+    """Receives the output from a test_shell process, subjects it to a number
+    of tests, and returns a list of failure types the test produced.
+
+    Args:
+      proc: an active test_shell process
+      test_info: Object containing the test filename, uri and timeout
+      test_types: list of test types to subject the output to
+      test_args: arguments to be passed to each test
+      target: Debug or Release
+      output_dir: directory to put crash stack traces into
+
+    Returns: a list of failure objects and times for the test being processed
+    """
+    outlines = []
+    extra_lines = []
+    failures = []
+    crash = False
+
+    # Some test args, such as the image hash, may be added or changed on a
+    # test-by-test basis.
+    local_test_args = copy.copy(test_args)
+
+    start_time = time.time()
+
+    line = proc.stdout.readline()
+
+    # Only start saving output lines once we've loaded the URL for the test.
+    url = None
+    test_string = test_info.uri.strip()
+
+    while line.rstrip() != "#EOF":
+        # Make sure we haven't crashed.
+        if line == '' and proc.poll() is not None:
+            failures.append(test_failures.FailureCrash())
+
+            # This is hex code 0xc000001d, which is used for abrupt
+            # termination. This happens if we hit ctrl+c from the prompt and
+            # we happen to be waiting on the test_shell.
+            # sdoyon: Not sure for which OS and in what circumstances the
+            # above code is valid. What works for me under Linux to detect
+            # ctrl+c is for the subprocess returncode to be negative SIGINT.
+            # And that agrees with the subprocess documentation.
+            if (-1073741510 == proc.returncode or
+                -signal.SIGINT == proc.returncode):
+                raise KeyboardInterrupt
+            crash = True
+            break
+
+        # Don't include #URL lines in our output
+        if line.startswith("#URL:"):
+            url = line.rstrip()[5:]
+            if url != test_string:
+                logging.fatal("Test got out of sync:\n|%s|\n|%s|" %
+                              (url, test_string))
+                raise AssertionError("test out of sync")
+        elif line.startswith("#MD5:"):
+            local_test_args.hash = line.rstrip()[5:]
+        elif line.startswith("#TEST_TIMED_OUT"):
+            # Test timed out, but we still need to read until #EOF.
+            failures.append(test_failures.FailureTimeout())
+        elif url:
+            outlines.append(line)
+        else:
+            extra_lines.append(line)
+
+        line = proc.stdout.readline()
+
+    end_test_time = time.time()
+
+    if extra_lines:
+        extra = "".join(extra_lines)
+        if crash:
+            logging.debug("Stacktrace for %s:\n%s" % (test_string, extra))
+            # Strip off "file://" since RelativeTestFilename expects
+            # filesystem paths.
+            filename = os.path.join(output_dir,
+                path_utils.relative_test_filename(test_string[7:]))
+            filename = os.path.splitext(filename)[0] + "-stack.txt"
+            path_utils.maybe_make_directory(os.path.split(filename)[0])
+            open(filename, "wb").write(extra)
+        else:
+            logging.debug("Previous test output extra lines after dump:\n%s" %
+                extra)
+
+    # Check the output and save the results.
+    time_for_diffs = {}
+    for test_type in test_types:
+        start_diff_time = time.time()
+        new_failures = test_type.compare_output(test_info.filename,
+                                                proc, ''.join(outlines),
+                                                local_test_args, target)
+        # Don't add any more failures if we already have a crash, so we don't
+        # double-report those tests. We do double-report for timeouts since
+        # we still want to see the text and image output.
+        if not crash:
+            failures.extend(new_failures)
+        time_for_diffs[test_type.__class__.__name__] = (
+            time.time() - start_diff_time)
+
+    total_time_for_all_diffs = time.time() - end_test_time
+    test_run_time = end_test_time - start_time
+    return TestStats(test_info.filename, failures, test_run_time,
+        total_time_for_all_diffs, time_for_diffs)
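process_output() speaks a small line protocol with test_shell: a `#URL:` header, optional `#MD5:` and `#TEST_TIMED_OUT` markers, the test's own output lines, then an `#EOF` terminator. A minimal stand-alone parser for the same protocol (the transcript below is fabricated, not real test_shell output):

```python
def parse_shell_output(lines):
    """Split a test_shell transcript into (url, md5, timed_out, body)."""
    url = None
    md5 = None
    timed_out = False
    body = []
    for line in lines:
        if line.rstrip() == "#EOF":
            break
        if line.startswith("#URL:"):
            url = line.rstrip()[5:]
        elif line.startswith("#MD5:"):
            md5 = line.rstrip()[5:]
        elif line.startswith("#TEST_TIMED_OUT"):
            # Timed out, but keep reading until #EOF, like the real loop.
            timed_out = True
        elif url:
            # Only save output once the URL header has been seen.
            body.append(line)
    return url, md5, timed_out, "".join(body)


transcript = [
    "#URL:file:///tests/fast/a.html\n",
    "#MD5:d41d8cd98f00b204e9800998ecf8427e\n",
    "PASS\n",
    "#EOF\n",
]
url, md5, timed_out, body = parse_shell_output(transcript)
```

The real function additionally watches proc.poll() for crashes mid-stream, which a pure line parser cannot model.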
+
+
+def start_test_shell(command, args):
+    """Returns the process for a new test_shell started in layout-tests mode.
+    """
+    cmd = []
+    # Hook for injecting valgrind or other runtime instrumentation,
+    # used by e.g. tools/valgrind/valgrind_tests.py.
+    wrapper = os.environ.get("BROWSER_WRAPPER", None)
+    if wrapper is not None:
+        cmd += [wrapper]
+    cmd += command + ['--layout-tests'] + args
+    return subprocess.Popen(cmd,
+                            stdin=subprocess.PIPE,
+                            stdout=subprocess.PIPE,
+                            stderr=subprocess.STDOUT)
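start_test_shell()'s BROWSER_WRAPPER hook is a simple command-line prefix injected from the environment (e.g. to run under valgrind). A sketch of the same assembly, with an explicit `env` argument added here purely for testability:

```python
import os

def build_command(command, args, env=None):
    """Prepend an optional wrapper command taken from the environment."""
    env = os.environ if env is None else env
    cmd = []
    wrapper = env.get("BROWSER_WRAPPER")
    if wrapper is not None:
        cmd.append(wrapper)
    return cmd + command + ["--layout-tests"] + args

plain = build_command(["test_shell"], ["a.html"], env={})
wrapped = build_command(["test_shell"], ["a.html"],
                        env={"BROWSER_WRAPPER": "valgrind"})
```

The wrapper sees the full test_shell command line as its arguments, so any instrumentation tool that exec()s its argv works unmodified.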
+
+
+class TestStats:
+
+    def __init__(self, filename, failures, test_run_time,
+                 total_time_for_all_diffs, time_for_diffs):
+        self.filename = filename
+        self.failures = failures
+        self.test_run_time = test_run_time
+        self.total_time_for_all_diffs = total_time_for_all_diffs
+        self.time_for_diffs = time_for_diffs
+
+
+class SingleTestThread(threading.Thread):
+    """Thread wrapper for running a single test file."""
+
+    def __init__(self, test_shell_command, shell_args, test_info, test_types,
+        test_args, target, output_dir):
+        """
+        Args:
+          test_info: Object containing the test filename, uri and timeout
+          output_dir: Directory to put crash stacks into.
+          See TestShellThread for documentation of the remaining arguments.
+        """
+
+        threading.Thread.__init__(self)
+        self._command = test_shell_command
+        self._shell_args = shell_args
+        self._test_info = test_info
+        self._test_types = test_types
+        self._test_args = test_args
+        self._target = target
+        self._output_dir = output_dir
+
+    def run(self):
+        proc = start_test_shell(self._command, self._shell_args +
+            ["--time-out-ms=" + self._test_info.timeout, self._test_info.uri])
+        self._test_stats = process_output(proc, self._test_info,
+            self._test_types, self._test_args, self._target, self._output_dir)
+
+    def get_test_stats(self):
+        return self._test_stats
+
+
+class TestShellThread(threading.Thread):
+
+    def __init__(self, filename_list_queue, result_queue, test_shell_command,
+                 test_types, test_args, shell_args, options):
+        """Initialize all the local state for this test shell thread.
+
+        Args:
+          filename_list_queue: A thread safe Queue class that contains lists
+              of tuples of (filename, uri) pairs.
+          result_queue: A thread safe Queue class that will contain tuples of
+              (test, failure lists) for the test results.
+          test_shell_command: A list specifying the command+args for
+              test_shell
+          test_types: A list of TestType objects to run the test output
+              against.
+          test_args: A TestArguments object to pass to each TestType.
+          shell_args: Any extra arguments to be passed to test_shell.exe.
+          options: A property dictionary as produced by optparse. The
+              command-line options should match those expected by
+              run_webkit_tests; they are typically passed via the
+              run_webkit_tests.TestRunner class."""
+        threading.Thread.__init__(self)
+        self._filename_list_queue = filename_list_queue
+        self._result_queue = result_queue
+        self._filename_list = []
+        self._test_shell_command = test_shell_command
+        self._test_types = test_types
+        self._test_args = test_args
+        self._test_shell_proc = None
+        self._shell_args = shell_args
+        self._options = options
+        self._canceled = False
+        self._exception_info = None
+        self._directory_timing_stats = {}
+        self._test_stats = []
+        self._num_tests = 0
+        self._start_time = 0
+        self._stop_time = 0
+
+        # Current directory of tests we're running.
+        self._current_dir = None
+        # Number of tests in self._current_dir.
+        self._num_tests_in_current_dir = None
+        # Time at which we started running tests from self._current_dir.
+        self._current_dir_start_time = None
+
+    def get_directory_timing_stats(self):
+        """Returns a dictionary mapping test directory to a tuple of
+        (number of tests in that directory, time to run the tests)"""
+        return self._directory_timing_stats
+
+    def get_individual_test_stats(self):
+        """Returns a list of (test_filename, time_to_run_test,
+        total_time_for_all_diffs, time_for_diffs) tuples."""
+        return self._test_stats
+
+    def cancel(self):
+        """Set a flag telling this thread to quit."""
+        self._canceled = True
+
+    def get_exception_info(self):
+        """If run() terminated on an uncaught exception, return it here
+        ((type, value, traceback) tuple).
+        Returns None if run() terminated normally. Meant to be called after
+        joining this thread."""
+        return self._exception_info
+
+    def get_total_time(self):
+        return max(self._stop_time - self._start_time, 0.0)
+
+    def get_num_tests(self):
+        return self._num_tests
+
+    def run(self):
+        """Delegate main work to a helper method and watch for uncaught
+        exceptions."""
+        self._start_time = time.time()
+        self._num_tests = 0
+        try:
+            logging.debug('%s starting' % (self.getName()))
+            self._run(test_runner=None, result_summary=None)
+            logging.debug('%s done (%d tests)' % (self.getName(),
+                          self.get_num_tests()))
+        except:
+            # Save the exception for our caller to see.
+            self._exception_info = sys.exc_info()
+            self._stop_time = time.time()
+            # Re-raise it and die.
+            logging.error('%s dying: %s' % (self.getName(),
+                          self._exception_info))
+            raise
+        self._stop_time = time.time()
+
+    def run_in_main_thread(self, test_runner, result_summary):
+        """This hook allows us to run the tests from the main thread if
+        --num-test-shells==1, instead of having to always run two or more
+        threads. This allows us to debug the test harness without having to
+        do multi-threaded debugging."""
+        self._run(test_runner, result_summary)
+
+    def _run(self, test_runner, result_summary):
+        """Main work entry point of the thread. Basically we pull urls from the
+        filename queue and run the tests until we run out of urls.
+
+        If test_runner is not None, then we call test_runner.UpdateSummary()
+        with the results of each test."""
+        batch_size = 0
+        batch_count = 0
+        if self._options.batch_size:
+            try:
+                batch_size = int(self._options.batch_size)
+            except ValueError:
+                logging.info("Ignoring invalid batch size '%s'" %
+                             self._options.batch_size)
+
+        # Append tests we're running to the existing tests_run.txt file.
+        # This is created in run_webkit_tests.py:_PrepareListsAndPrintOutput.
+        tests_run_filename = os.path.join(self._options.results_directory,
+                                          "tests_run.txt")
+        tests_run_file = open(tests_run_filename, "a")
+
+        while True:
+            if self._canceled:
+                logging.info('Testing canceled')
+                tests_run_file.close()
+                return
+
+            if len(self._filename_list) == 0:
+                if self._current_dir is not None:
+                    self._directory_timing_stats[self._current_dir] = \
+                        (self._num_tests_in_current_dir,
+                         time.time() - self._current_dir_start_time)
+
+                try:
+                    self._current_dir, self._filename_list = \
+                        self._filename_list_queue.get_nowait()
+                except Queue.Empty:
+                    self._kill_test_shell()
+                    tests_run_file.close()
+                    return
+
+                self._num_tests_in_current_dir = len(self._filename_list)
+                self._current_dir_start_time = time.time()
+
+            test_info = self._filename_list.pop()
+
+            # We have a url, run tests.
+            batch_count += 1
+            self._num_tests += 1
+            if self._options.run_singly:
+                failures = self._run_test_singly(test_info)
+            else:
+                failures = self._run_test(test_info)
+
+            filename = test_info.filename
+            tests_run_file.write(filename + "\n")
+            if failures:
+                # Check and kill the test shell if we need to.
+                if any(f.should_kill_test_shell() for f in failures):
+                    self._kill_test_shell()
+                    # Reset the batch count since the shell just bounced.
+                    batch_count = 0
+                # Print the error message(s).
+                error_str = '\n'.join(['  ' + f.message() for f in failures])
+                logging.debug("%s %s failed:\n%s" % (self.getName(),
+                              path_utils.relative_test_filename(filename),
+                              error_str))
+            else:
+                logging.debug("%s %s passed" % (self.getName(),
+                              path_utils.relative_test_filename(filename)))
+            self._result_queue.put((filename, failures))
+
+            if batch_size > 0 and batch_count > batch_size:
+                # Bounce the shell and reset count.
+                self._kill_test_shell()
+                batch_count = 0
+
+            if test_runner:
+                test_runner.update_summary(result_summary)
+
+    def _run_test_singly(self, test_info):
+        """Run a test in a separate thread, enforcing a hard time limit.
+
+        Since we can only detect the termination of a thread, not any internal
+        state or progress, we can only run per-test timeouts when running test
+        files singly.
+
+        Args:
+          test_info: Object containing the test filename, uri and timeout
+
+        Return:
+          A list of TestFailure objects describing the error.
+        """
+        worker = SingleTestThread(self._test_shell_command,
+                                  self._shell_args,
+                                  test_info,
+                                  self._test_types,
+                                  self._test_args,
+                                  self._options.target,
+                                  self._options.results_directory)
+
+        worker.start()
+
+        # When we're running one test per test_shell process, we can enforce
+        # a hard timeout. The test_shell watchdog uses 2.5x the timeout, so
+        # we want to be larger than that.
+        worker.join(int(test_info.timeout) * 3.0 / 1000.0)
+        if worker.isAlive():
+            # If join() returned with the thread still running, the
+            # test_shell.exe is completely hung and there's nothing
+            # more we can do with it.  We have to kill all the
+            # test_shells to free it up. If we're running more than
+            # one test_shell thread, we'll end up killing the other
+            # test_shells too, introducing spurious crashes. We accept that
+            # tradeoff in order to avoid losing the rest of this thread's
+            # results.
+            logging.error('Test thread hung: killing all test_shells')
+            path_utils.kill_all_test_shells()
+
+        try:
+            stats = worker.get_test_stats()
+            self._test_stats.append(stats)
+            failures = stats.failures
+        except AttributeError:
+            failures = []
+            logging.error('Cannot get results of test: %s' %
+                          test_info.filename)
+
+        return failures
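The hard time limit in _run_test_singly() comes from threading's join(timeout): if the worker is still alive after join() returns, it is presumed hung. The same pattern in isolation (the sleeping task stands in for a wedged test_shell, and the short budget keeps the example fast):

```python
import threading
import time

def wedged_task():
    time.sleep(2)  # stands in for a test_shell that never answers

worker = threading.Thread(target=wedged_task)
worker.daemon = True  # don't block interpreter exit on the hung worker
worker.start()

# Give the worker a hard budget; join() returns either way.
worker.join(0.2)
hung = worker.is_alive()  # is_alive() is the modern spelling of isAlive()
```

Because a Python thread cannot be killed directly, the real code instead kills the test_shell processes, which lets the blocked reader thread unwind on its own.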
+
+    def _run_test(self, test_info):
+        """Run a single test file using a shared test_shell process.
+
+        Args:
+          test_info: Object containing the test filename, uri and timeout
+
+        Return:
+          A list of TestFailure objects describing the error.
+        """
+        self._ensure_test_shell_is_running()
+        # Args to test_shell is a space-separated list of
+        # "uri timeout pixel_hash"
+        # The timeout and pixel_hash are optional.  The timeout is used if this
+        # test has a custom timeout. The pixel_hash is used to avoid doing an
+        # image dump if the checksums match, so it should be set to a blank
+        # value if we are generating a new baseline.
+        # (Otherwise, an image from a previous run will be copied into
+        # the baseline.)
+        image_hash = test_info.image_hash
+        if image_hash and self._test_args.new_baseline:
+            image_hash = ""
+        self._test_shell_proc.stdin.write(("%s %s %s\n" %
+            (test_info.uri, test_info.timeout, image_hash)))
+
+        # If the test shell is dead, the above may cause an IOError as we
+        # try to write onto the broken pipe. If this is the first test for
+        # this test shell process, then the test shell did not
+        # successfully start. If this is not the first test, then the
+        # previous tests have caused some kind of delayed crash. We don't
+        # try to recover here.
+        self._test_shell_proc.stdin.flush()
+
+        stats = process_output(self._test_shell_proc, test_info,
+                               self._test_types, self._test_args,
+                               self._options.target,
+                               self._options.results_directory)
+
+        self._test_stats.append(stats)
+        return stats.failures
+
+    def _ensure_test_shell_is_running(self):
+        """Start the shared test shell, if it's not running.  Not for use when
+        running tests singly, since those each start a separate test shell in
+        their own thread.
+        """
+        if (not self._test_shell_proc or
+            self._test_shell_proc.poll() is not None):
+            self._test_shell_proc = start_test_shell(self._test_shell_command,
+                                                     self._shell_args)
+
+    def _kill_test_shell(self):
+        """Kill the test shell process if it's running."""
+        if self._test_shell_proc:
+            self._test_shell_proc.stdin.close()
+            self._test_shell_proc.stdout.close()
+            if self._test_shell_proc.stderr:
+                self._test_shell_proc.stderr.close()
+            if (sys.platform not in ('win32', 'cygwin') and
+                not self._test_shell_proc.poll()):
+                # Closing stdin/stdout/stderr hangs sometimes on OS X.
+                null = open(os.devnull, "w")
+                subprocess.Popen(["kill", "-9",
+                                 str(self._test_shell_proc.pid)], stderr=null)
+                null.close()
+            self._test_shell_proc = None
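TestShellThread's main loop is a standard queue-consumer shape: get_nowait() until Queue.Empty, pushing (test, failures) tuples onto a result queue. A compact sketch of that shape (the module is imported under its Python 3 name `queue` here; the patch uses the Python 2 name `Queue`):

```python
import threading

try:
    import queue  # Python 3 module name
except ImportError:
    import Queue as queue  # Python 2 name, as used in the patch

work = queue.Queue()
results = queue.Queue()
for name in ("a.html", "b.html", "c.html"):
    work.put(name)

def consume():
    while True:
        try:
            test = work.get_nowait()
        except queue.Empty:
            return  # queue drained: the worker thread exits
        # "Run" the test, then report (test, failures) like the real loop.
        results.put((test, []))

threads = [threading.Thread(target=consume) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

done = sorted(results.get() for _ in range(results.qsize()))
```

Both queues are thread-safe, so no extra locking is needed; each thread simply exits when the shared work queue runs dry, exactly as the docstring at the top of the file describes.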
diff --git a/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/websocket_server.py b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/websocket_server.py
new file mode 100644
index 0000000..7fc47a0
--- /dev/null
+++ b/WebKitTools/Scripts/webkitpy/layout_tests/layout_package/websocket_server.py
@@ -0,0 +1,316 @@
+#!/usr/bin/env python
+# Copyright (C) 2010 The Chromium Authors. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the Chromium name nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""A class to help start/stop the PyWebSocket server used by layout tests."""
+
+
+import logging
+import optparse
+import os
+import subprocess
+import sys
+import tempfile
+import time
+import urllib
+
+import path_utils
+import platform_utils
+import http_server
+
+_WS_LOG_PREFIX = 'pywebsocket.ws.log-'
+_WSS_LOG_PREFIX = 'pywebsocket.wss.log-'
+
+_DEFAULT_WS_PORT = 8880
+_DEFAULT_WSS_PORT = 9323
+
+
+def url_is_alive(url):
+    """Checks to see if we get an http response from |url|.
+    We poll the url 5 times with a 1 second delay.  If we don't
+    get a reply in that time, we give up and assume the httpd
+    didn't start properly.
+
+    Args:
+      url: The URL to check.
+    Return:
+      True if the url is alive.
+    """
+    wait_time = 5
+    while wait_time > 0:
+        try:
+            response = urllib.urlopen(url)
+            # Server is up and responding.
+            return True
+        except IOError:
+            pass
+        wait_time -= 1
+        # Wait a second and try again.
+        time.sleep(1)
+
+    return False
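url_is_alive() is a bounded retry loop: probe, sleep, give up after five attempts. The same shape as a general helper (the flaky probe below is a stand-in for `urllib.urlopen`, and the short delay is only to keep the example fast):

```python
import time

def poll_until(probe, attempts=5, delay=0.01):
    """Call probe() up to `attempts` times, pausing `delay` between tries.

    Returns True as soon as probe() succeeds, False if every try raises.
    """
    for _ in range(attempts):
        try:
            probe()
            return True
        except IOError:
            pass
        time.sleep(delay)
    return False

# A probe that fails twice before the "server" comes up.
state = {"calls": 0}
def flaky_probe():
    state["calls"] += 1
    if state["calls"] < 3:
        raise IOError("connection refused")

alive = poll_until(flaky_probe)
```

Catching only IOError matters: a genuinely broken probe that raises something else should surface immediately rather than be retried.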
+
+
+def remove_log_files(folder, starts_with):
+    files = os.listdir(folder)
+    for file in files:
+        if file.startswith(starts_with):
+            full_path = os.path.join(folder, file)
+            os.remove(full_path)
+
+
+class PyWebSocketNotStarted(Exception):
+    pass
+
+
+class PyWebSocketNotFound(Exception):
+    pass
+
+
+class PyWebSocket(http_server.Lighttpd):
+
+    def __init__(self, output_dir, port=_DEFAULT_WS_PORT,
+                 root=None,
+                 use_tls=False,
+                 private_key=http_server.Lighttpd._pem_file,
+                 certificate=http_server.Lighttpd._pem_file,
+                 register_cygwin=None,
+                 pidfile=None):
+        """Args:
+          output_dir: the absolute path to the layout test result directory
+        """
+        http_server.Lighttpd.__init__(self, output_dir,
+                                      port=port,
+                                      root=root,
+                                      register_cygwin=register_cygwin)
+        self._output_dir = output_dir
+        self._process = None
+        self._port = port
+        self._root = root
+        self._use_tls = use_tls
+        self._private_key = private_key
+        self._certificate = certificate
+        if self._port:
+            self._port = int(self._port)
+        if self._use_tls:
+            self._server_name = 'PyWebSocket(Secure)'
+        else:
+            self._server_name = 'PyWebSocket'
+        self._pidfile = pidfile
+        self._wsout = None
+
+        # Webkit tests
+        if self._root:
+            self._layout_tests = os.path.abspath(self._root)
+            self._web_socket_tests = os.path.abspath(
+                os.path.join(self._root, 'websocket', 'tests'))
+        else:
+            try:
+                self._web_socket_tests = path_utils.path_from_base(
+                    'third_party', 'WebKit', 'LayoutTests', 'websocket',
+                    'tests')
+                self._layout_tests = path_utils.path_from_base(
+                    'third_party', 'WebKit', 'LayoutTests')
+            except path_utils.PathNotFound:
+                # Leave both unset-safe so start() can bail out cleanly.
+                self._layout_tests = None
+                self._web_socket_tests = None
+
+    def start(self):
+        if not self._web_socket_tests:
+            logging.info('No need to start %s server.' % self._server_name)
+            return
+        if self.is_running():
+            raise PyWebSocketNotStarted('%s is already running.' %
+                                        self._server_name)
+
+        time_str = time.strftime('%d%b%Y-%H%M%S')
+        if self._use_tls:
+            log_prefix = _WSS_LOG_PREFIX
+        else:
+            log_prefix = _WS_LOG_PREFIX
+        log_file_name = log_prefix + time_str
+
+        # Remove old log files. We only need to keep the last ones.
+        remove_log_files(self._output_dir, log_prefix)
+
+        error_log = os.path.join(self._output_dir, log_file_name + "-err.txt")
+
+        output_log = os.path.join(self._output_dir, log_file_name + "-out.txt")
+        self._wsout = open(output_log, "w")
+
+        python_interp = sys.executable
+        pywebsocket_base = path_utils.path_from_base(
+            'third_party', 'WebKit', 'WebKitTools', 'pywebsocket')
+        pywebsocket_script = path_utils.path_from_base(
+            'third_party', 'WebKit', 'WebKitTools', 'pywebsocket',
+            'mod_pywebsocket', 'standalone.py')
+        start_cmd = [
+            python_interp, pywebsocket_script,
+            '-p', str(self._port),
+            '-d', self._layout_tests,
+            '-s', self._web_socket_tests,
+            '-l', error_log,
+        ]
+
+        handler_map_file = os.path.join(self._web_socket_tests,
+                                        'handler_map.txt')
+        if os.path.exists(handler_map_file):
+            logging.debug('Using handler_map_file: %s' % handler_map_file)
+            start_cmd.append('-m')
+            start_cmd.append(handler_map_file)
+        else:
+            logging.warning('No handler_map_file found')
+
+        if self._use_tls:
+            start_cmd.extend(['-t', '-k', self._private_key,
+                              '-c', self._certificate])
+
+        # Put the cygwin directory first in the path to find cygwin1.dll.
+        # Copy the environment so we don't mutate os.environ for this process.
+        env = os.environ.copy()
+        if sys.platform in ('cygwin', 'win32'):
+            env['PATH'] = '%s;%s' % (
+                path_utils.path_from_base('third_party', 'cygwin', 'bin'),
+                env['PATH'])
+
+        if sys.platform == 'win32' and self._register_cygwin:
+            setup_mount = path_utils.path_from_base('third_party', 'cygwin',
+                'setup_mount.bat')
+            subprocess.Popen(setup_mount).wait()
+
+        env['PYTHONPATH'] = (pywebsocket_base + os.path.pathsep +
+                             env.get('PYTHONPATH', ''))
+
+        logging.debug('Starting %s server on %d.' % (
+            self._server_name, self._port))
+        logging.debug('cmdline: %s' % ' '.join(start_cmd))
+        self._process = subprocess.Popen(start_cmd, stdout=self._wsout,
+                                         stderr=subprocess.STDOUT,
+                                         env=env)
+
+        # Wait a bit before checking the liveness of the server.
+        time.sleep(0.5)
+
+        if self._use_tls:
+            url = 'https'
+        else:
+            url = 'http'
+        url = url + '://127.0.0.1:%d/' % self._port
+        if not url_is_alive(url):
+            fp = open(output_log)
+            try:
+                for line in fp:
+                    logging.error(line)
+            finally:
+                fp.close()
+            raise PyWebSocketNotStarted(
+                'Failed to start %s server on port %s.' %
+                    (self._server_name, self._port))
+
+        # Check whether our process terminated already. returncode is only
+        # set after poll() or wait(), so poll() here to detect an early exit.
+        if self._process.poll() is not None:
+            raise PyWebSocketNotStarted(
+                'Failed to start %s server.' % self._server_name)
+        if self._pidfile:
+            f = open(self._pidfile, 'w')
+            f.write("%d" % self._process.pid)
+            f.close()
+
+    def stop(self, force=False):
+        if not force and not self.is_running():
+            return
+
+        pid = None
+        if self._process:
+            pid = self._process.pid
+        elif self._pidfile:
+            f = open(self._pidfile)
+            pid = int(f.read().strip())
+            f.close()
+
+        if not pid:
+            raise PyWebSocketNotFound(
+                'Failed to find %s server pid.' % self._server_name)
+
+        logging.debug('Shutting down %s server %d.' % (self._server_name, pid))
+        platform_utils.kill_process(pid)
+
+        if self._process:
+            self._process.wait()
+            self._process = None
+
+        if self._wsout:
+            self._wsout.close()
+            self._wsout = None
+
+
+if '__main__' == __name__:
+    # Provide some command line params for starting the PyWebSocket server
+    # manually.
+    option_parser = optparse.OptionParser()
+    option_parser.add_option('--server', type='choice',
+                             choices=['start', 'stop'], default='start',
+                             help='Server action (start|stop)')
+    option_parser.add_option('-p', '--port', dest='port',
+                             default=None, help='Port to listen on')
+    option_parser.add_option('-r', '--root',
+                             help='Absolute path to DocumentRoot '
+                                  '(overrides layout test roots)')
+    option_parser.add_option('-t', '--tls', dest='use_tls',
+                             action='store_true',
+                             default=False, help='use TLS (wss://)')
+    option_parser.add_option('-k', '--private_key', dest='private_key',
+                             default='', help='TLS private key file.')
+    option_parser.add_option('-c', '--certificate', dest='certificate',
+                             default='', help='TLS certificate file.')
+    option_parser.add_option('--register_cygwin', action="store_true",
+                             dest="register_cygwin",
+                             help='Register Cygwin paths (on Win try bots)')
+    option_parser.add_option('--pidfile', help='path to pid file.')
+    options, args = option_parser.parse_args()
+
+    if not options.port:
+        if options.use_tls:
+            options.port = _DEFAULT_WSS_PORT
+        else:
+            options.port = _DEFAULT_WS_PORT
+
+    kwds = {'port': options.port, 'use_tls': options.use_tls}
+    if options.root:
+        kwds['root'] = options.root
+    if options.private_key:
+        kwds['private_key'] = options.private_key
+    if options.certificate:
+        kwds['certificate'] = options.certificate
+    kwds['register_cygwin'] = options.register_cygwin
+    if options.pidfile:
+        kwds['pidfile'] = options.pidfile
+
+    pywebsocket = PyWebSocket(tempfile.gettempdir(), **kwds)
+
+    if 'start' == options.server:
+        pywebsocket.start()
+    else:
+        pywebsocket.stop(force=True)
