[SCM] WebKit Debian packaging branch, debian/experimental, updated. upstream/1.3.3-9427-gc2be6fc

oliver at apple.com oliver at apple.com
Wed Dec 22 11:51:53 UTC 2010


The following commit has been merged in the debian/experimental branch:
commit 2d5480db470a9e5eeaed0c3e84ca467461741acd
Author: oliver at apple.com <oliver at apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date:   Tue Aug 10 03:19:19 2010 +0000

    2010-08-09  Oliver Hunt  <oliver at apple.com>
    
            Reviewed by Gavin Barraclough.
    
            Allow an assembler/macroassembler to compact branches to more concise forms when linking
            https://bugs.webkit.org/show_bug.cgi?id=43745
    
            This patch makes it possible for an assembler to convert jumps into a different
            (presumably more efficient) form at link time.  This is currently implemented in
            the ARMv7 JIT, which already had logic to delay linking of jumps until the end of
            compilation.  The ARMv7 JIT chooses between a 4-byte short jump and a full 32-bit
            offset (rewriting ITTT instructions as appropriate), so it does not yet produce
            the most compact form possible.  The general design of the linker should make it
            relatively simple to introduce new branch types, as the linker has no knowledge
            of the exact form of any of the branches.
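
            To make the idea concrete, a minimal sketch (illustration only, every name here
            is hypothetical; the two sizes happen to match the ARMv7 JumpSizes table added
            below): the assembler pads every jump to its worst-case size, and the linker
            later picks the shortest form whose range covers the displacement.

                // Hypothetical sketch of link-time branch compaction; not JSC code.
                #include <stdint.h>

                enum BranchForm { ShortJump, FullJump };

                static const int32_t fullJumpSize  = 10; // 5 x 16-bit Thumb-2 words (padded size)
                static const int32_t shortJumpSize = 4;  // 2 x 16-bit Thumb-2 words

                // Pick the most concise encoding that can still reach the target.
                static BranchForm pickForm(int32_t displacement)
                {
                    const int32_t shortReach = 1 << 24; // assumed reach of the short encoding
                    bool fits = displacement >= -shortReach && displacement < shortReach;
                    return fits ? ShortJump : FullJump;
                }

                // Every short jump chosen lets the linker slide all later code back by the
                // difference between the two sizes; that is the entire space win.
                static int32_t bytesSaved(BranchForm form)
                {
                    return form == ShortJump ? fullJumpSize - shortJumpSize : 0;
                }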
    
            * JavaScriptCore.xcodeproj/project.pbxproj:
            * assembler/ARMv7Assembler.cpp: Added.
            (JSC::):
              Record jump sizes
    
            * assembler/ARMv7Assembler.h:
            (JSC::ARMv7Assembler::LinkRecord::LinkRecord):
            (JSC::ARMv7Assembler::LinkRecord::from):
            (JSC::ARMv7Assembler::LinkRecord::setFrom):
            (JSC::ARMv7Assembler::LinkRecord::to):
            (JSC::ARMv7Assembler::LinkRecord::type):
            (JSC::ARMv7Assembler::LinkRecord::linkType):
            (JSC::ARMv7Assembler::LinkRecord::setLinkType):
              Encapsulate LinkRecord fields so we can compress the values somewhat
    
            (JSC::ARMv7Assembler::JmpSrc::JmpSrc):
              Need to record the jump type now
    
            (JSC::ARMv7Assembler::b):
            (JSC::ARMv7Assembler::blx):
            (JSC::ARMv7Assembler::bx):
              Need to pass the jump types
    
            (JSC::ARMv7Assembler::executableOffsetFor):
            (JSC::ARMv7Assembler::jumpSizeDelta):
            (JSC::ARMv7Assembler::linkRecordSourceComparator):
            (JSC::ARMv7Assembler::computeJumpType):
            (JSC::ARMv7Assembler::convertJumpTo):
            (JSC::ARMv7Assembler::recordLinkOffsets):
            (JSC::ARMv7Assembler::jumpsToLink):
            (JSC::ARMv7Assembler::link):
            (JSC::ARMv7Assembler::unlinkedCode):
              Helper functions for the linker
    
            (JSC::ARMv7Assembler::linkJump):
            (JSC::ARMv7Assembler::canBeShortJump):
            (JSC::ARMv7Assembler::linkLongJump):
            (JSC::ARMv7Assembler::linkShortJump):
            (JSC::ARMv7Assembler::linkJumpAbsolute):
               Moving code around for the various jump linking functions
    
            * assembler/AbstractMacroAssembler.h:
            (JSC::AbstractMacroAssembler::beginUninterruptedSequence):
            (JSC::AbstractMacroAssembler::endUninterruptedSequence):
              Any assembler that compacts branches has to track uninterrupted sequences,
              because compaction is not allowed inside such sequences.
              AbstractMacroAssembler provides no-op versions of these functions, which keeps
              the code elsewhere simpler.
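
            A minimal sketch of that contract, assuming a hypothetical assembler class (the
            real MacroAssemblerARMv7 does the same thing with a boolean member):

                class CompactingAssembler {
                public:
                    CompactingAssembler() : m_inUninterruptedSequence(false) { }
                    void beginUninterruptedSequence() { m_inUninterruptedSequence = true; }
                    void endUninterruptedSequence() { m_inUninterruptedSequence = false; }
                    // Queried by every jump-emitting helper: inside a bracketed sequence
                    // jumps are emitted at full size, so compaction can never change the
                    // layout of code that will later be patched in place.
                    bool inUninterruptedSequence() const { return m_inUninterruptedSequence; }
                private:
                    bool m_inUninterruptedSequence;
                };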
    
            * assembler/LinkBuffer.h:
            (JSC::LinkBuffer::LinkBuffer):
            (JSC::LinkBuffer::link):
            (JSC::LinkBuffer::patch):
            (JSC::LinkBuffer::locationOf):
            (JSC::LinkBuffer::locationOfNearCall):
            (JSC::LinkBuffer::returnAddressOffset):
            (JSC::LinkBuffer::trampolineAt):
              Updated these functions to adjust for any changed offsets in the linked code
    
            (JSC::LinkBuffer::applyOffset):
              A helper function to deal with the now potentially moved labels
    
            (JSC::LinkBuffer::linkCode):
              The new and mighty linker function
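
            Reduced to a sketch with hypothetical names and plain byte buffers (the real
            function additionally records per-region offsets and computes each jump's
            target before choosing its form), the pass copies instructions region by
            region and steps the write cursor back over each jump's unused padding:

                #include <stdint.h>
                #include <string.h>
                #include <vector>

                struct Jump { int32_t from; int32_t to; int32_t shrinkBy; };

                // Returns the final (possibly smaller) size of the compacted code.
                static size_t compact(const uint8_t* in, size_t inSize,
                                      uint8_t* out, std::vector<Jump>& jumps)
                {
                    int32_t readPtr = 0;
                    int32_t writePtr = 0;
                    for (size_t i = 0; i < jumps.size(); ++i) {
                        // Copy everything up to and including this jump's padded encoding.
                        int32_t regionSize = jumps[i].from - readPtr;
                        memcpy(out + writePtr, in + readPtr, regionSize);
                        readPtr += regionSize;
                        writePtr += regionSize;
                        // Step back over the padding this jump no longer needs.
                        writePtr -= jumps[i].shrinkBy;
                        jumps[i].from = writePtr; // the jump now lives at its compacted offset
                    }
                    // Copy everything after the last jump.
                    memcpy(out + writePtr, in + readPtr, inSize - readPtr);
                    return writePtr + (inSize - readPtr);
                }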
    
            * assembler/MacroAssemblerARMv7.h:
            (JSC::MacroAssemblerARMv7::MacroAssemblerARMv7):
            (JSC::MacroAssemblerARMv7::beginUninterruptedSequence):
            (JSC::MacroAssemblerARMv7::endUninterruptedSequence):
            (JSC::MacroAssemblerARMv7::jumpsToLink):
            (JSC::MacroAssemblerARMv7::unlinkedCode):
            (JSC::MacroAssemblerARMv7::computeJumpType):
            (JSC::MacroAssemblerARMv7::convertJumpTo):
            (JSC::MacroAssemblerARMv7::recordLinkOffsets):
            (JSC::MacroAssemblerARMv7::jumpSizeDelta):
            (JSC::MacroAssemblerARMv7::link):
            (JSC::MacroAssemblerARMv7::jump):
            (JSC::MacroAssemblerARMv7::branchMul32):
            (JSC::MacroAssemblerARMv7::breakpoint):
            (JSC::MacroAssemblerARMv7::nearCall):
            (JSC::MacroAssemblerARMv7::call):
            (JSC::MacroAssemblerARMv7::ret):
            (JSC::MacroAssemblerARMv7::tailRecursiveCall):
            (JSC::MacroAssemblerARMv7::executableOffsetFor):
            (JSC::MacroAssemblerARMv7::inUninterruptedSequence):
            (JSC::MacroAssemblerARMv7::makeJump):
            (JSC::MacroAssemblerARMv7::makeBranch):
               All branches need to pass on their type now
    
            * jit/ExecutableAllocator.h:
            (JSC::ExecutablePool::returnLastBytes):
               When compacting branches we can't know ahead of time how much space
               the linked code will need, so this new function lets us return the
               unused bytes to the pool at the end of linking.
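
            The usage pattern is simply allocate for the worst case, link, then hand the
            unused tail back; a sketch with a hypothetical bump allocator standing in for
            ExecutablePool:

                #include <stddef.h>
                #include <stdint.h>

                struct BumpPool {
                    uint8_t* m_freePtr;
                    void* alloc(size_t n) { void* p = m_freePtr; m_freePtr += n; return p; }
                    void returnLastBytes(size_t n) { m_freePtr -= n; } // same idea as ExecutablePool
                };

                // Worst-case allocation up front, give the savings back after linking:
                //     uint8_t* code = static_cast<uint8_t*>(pool.alloc(initialSize));
                //     size_t linkedSize = copyAndCompact(code);          // hypothetical
                //     pool.returnLastBytes(initialSize - linkedSize);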
    
            * jit/JIT.cpp:
            (JSC::JIT::JIT):
            (JSC::JIT::privateCompile):
            * jit/JIT.h:
            (JSC::JIT::compile):
               The JIT class now needs to take a linker offset so that recompilation
               can generate the same jumps when using branch compaction.
            * jit/JITArithmetic32_64.cpp:
            (JSC::JIT::emitSlow_op_mod):
            * jit/JITOpcodes.cpp:
            (JSC::JIT::privateCompileCTIMachineTrampolines):
            * jit/JITOpcodes32_64.cpp:
            (JSC::JIT::privateCompileCTIMachineTrampolines):
            (JSC::JIT::privateCompileCTINativeCall):
              Update for new trampolineAt changes
    
            * wtf/FastMalloc.cpp:
            (WTF::TCMallocStats::):
            * wtf/Platform.h:
    
    git-svn-id: http://svn.webkit.org/repository/webkit/trunk@65042 268f45cc-cd09-0410-ab3c-d52691b4dbfc

diff --git a/JavaScriptCore/ChangeLog b/JavaScriptCore/ChangeLog
index 747f345..f5343b9 100644
--- a/JavaScriptCore/ChangeLog
+++ b/JavaScriptCore/ChangeLog
@@ -1,3 +1,134 @@
+2010-08-09  Oliver Hunt  <oliver at apple.com>
+
+        Reviewed by Gavin Barraclough.
+
+        Allow an assembler/macroassembler to compact branches to more concise forms when linking
+        https://bugs.webkit.org/show_bug.cgi?id=43745
+
+        This patch makes it possible for an assembler to convert jumps into a different
+        (presumably more efficient) form at link time.  This is currently implemented in
+        the ARMv7 JIT, which already had logic to delay linking of jumps until the end of
+        compilation.  The ARMv7 JIT chooses between a 4-byte short jump and a full 32-bit
+        offset (rewriting ITTT instructions as appropriate), so it does not yet produce
+        the most compact form possible.  The general design of the linker should make it
+        relatively simple to introduce new branch types, as the linker has no knowledge
+        of the exact form of any of the branches.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/ARMv7Assembler.cpp: Added.
+        (JSC::):
+          Record jump sizes
+
+        * assembler/ARMv7Assembler.h:
+        (JSC::ARMv7Assembler::LinkRecord::LinkRecord):
+        (JSC::ARMv7Assembler::LinkRecord::from):
+        (JSC::ARMv7Assembler::LinkRecord::setFrom):
+        (JSC::ARMv7Assembler::LinkRecord::to):
+        (JSC::ARMv7Assembler::LinkRecord::type):
+        (JSC::ARMv7Assembler::LinkRecord::linkType):
+        (JSC::ARMv7Assembler::LinkRecord::setLinkType):
+          Encapsulate LinkRecord fields so we can compress the values somewhat
+
+        (JSC::ARMv7Assembler::JmpSrc::JmpSrc):
+          Need to record the jump type now
+
+        (JSC::ARMv7Assembler::b):
+        (JSC::ARMv7Assembler::blx):
+        (JSC::ARMv7Assembler::bx):
+          Need to pass the jump types
+
+        (JSC::ARMv7Assembler::executableOffsetFor):
+        (JSC::ARMv7Assembler::jumpSizeDelta):
+        (JSC::ARMv7Assembler::linkRecordSourceComparator):
+        (JSC::ARMv7Assembler::computeJumpType):
+        (JSC::ARMv7Assembler::convertJumpTo):
+        (JSC::ARMv7Assembler::recordLinkOffsets):
+        (JSC::ARMv7Assembler::jumpsToLink):
+        (JSC::ARMv7Assembler::link):
+        (JSC::ARMv7Assembler::unlinkedCode):
+          Helper functions for the linker
+
+        (JSC::ARMv7Assembler::linkJump):
+        (JSC::ARMv7Assembler::canBeShortJump):
+        (JSC::ARMv7Assembler::linkLongJump):
+        (JSC::ARMv7Assembler::linkShortJump):
+        (JSC::ARMv7Assembler::linkJumpAbsolute):
+           Moving code around for the various jump linking functions
+
+        * assembler/AbstractMacroAssembler.h:
+        (JSC::AbstractMacroAssembler::beginUninterruptedSequence):
+        (JSC::AbstractMacroAssembler::endUninterruptedSequence):
+          Any assembler that compacts branches has to track uninterrupted sequences,
+          because compaction is not allowed inside such sequences.
+          AbstractMacroAssembler provides no-op versions of these functions, which keeps
+          the code elsewhere simpler.
+
+        * assembler/LinkBuffer.h:
+        (JSC::LinkBuffer::LinkBuffer):
+        (JSC::LinkBuffer::link):
+        (JSC::LinkBuffer::patch):
+        (JSC::LinkBuffer::locationOf):
+        (JSC::LinkBuffer::locationOfNearCall):
+        (JSC::LinkBuffer::returnAddressOffset):
+        (JSC::LinkBuffer::trampolineAt):
+          Updated these functions to adjust for any changed offsets in the linked code
+
+        (JSC::LinkBuffer::applyOffset):
+          A helper function to deal with the now potentially moved labels
+
+        (JSC::LinkBuffer::linkCode):
+          The new and mighty linker function
+
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::MacroAssemblerARMv7):
+        (JSC::MacroAssemblerARMv7::beginUninterruptedSequence):
+        (JSC::MacroAssemblerARMv7::endUninterruptedSequence):
+        (JSC::MacroAssemblerARMv7::jumpsToLink):
+        (JSC::MacroAssemblerARMv7::unlinkedCode):
+        (JSC::MacroAssemblerARMv7::computeJumpType):
+        (JSC::MacroAssemblerARMv7::convertJumpTo):
+        (JSC::MacroAssemblerARMv7::recordLinkOffsets):
+        (JSC::MacroAssemblerARMv7::jumpSizeDelta):
+        (JSC::MacroAssemblerARMv7::link):
+        (JSC::MacroAssemblerARMv7::jump):
+        (JSC::MacroAssemblerARMv7::branchMul32):
+        (JSC::MacroAssemblerARMv7::breakpoint):
+        (JSC::MacroAssemblerARMv7::nearCall):
+        (JSC::MacroAssemblerARMv7::call):
+        (JSC::MacroAssemblerARMv7::ret):
+        (JSC::MacroAssemblerARMv7::tailRecursiveCall):
+        (JSC::MacroAssemblerARMv7::executableOffsetFor):
+        (JSC::MacroAssemblerARMv7::inUninterruptedSequence):
+        (JSC::MacroAssemblerARMv7::makeJump):
+        (JSC::MacroAssemblerARMv7::makeBranch):
+           All branches need to pass on their type now
+
+        * jit/ExecutableAllocator.h:
+        (JSC::ExecutablePool::returnLastBytes):
+           When compacting branches we can't know ahead of time how much space
+           the linked code will need, so this new function lets us return the
+           unused bytes to the pool at the end of linking.
+
+        * jit/JIT.cpp:
+        (JSC::JIT::JIT):
+        (JSC::JIT::privateCompile):
+        * jit/JIT.h:
+        (JSC::JIT::compile):
+           The JIT class now needs to take a linker offset so that recompilation
+           can generate the same jumps when using branch compaction.
+        * jit/JITArithmetic32_64.cpp:
+        (JSC::JIT::emitSlow_op_mod):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::privateCompileCTIMachineTrampolines):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::privateCompileCTIMachineTrampolines):
+        (JSC::JIT::privateCompileCTINativeCall):
+          Update for new trampolineAt changes
+
+        * wtf/FastMalloc.cpp:
+        (WTF::TCMallocStats::):
+        * wtf/Platform.h:
+
 2010-08-09  Gavin Barraclough  <barraclough at apple.com>
 
         Qt build fix III.
diff --git a/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index d85264d..24b48a7 100644
--- a/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -311,6 +311,7 @@
 		A7482B9411671147003B0712 /* JSWeakObjectMapRefPrivate.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A7482B7A1166CDEA003B0712 /* JSWeakObjectMapRefPrivate.cpp */; };
 		A7482E93116A7CAD003B0712 /* JSWeakObjectMapRefInternal.h in Headers */ = {isa = PBXBuildFile; fileRef = A7482E37116A697B003B0712 /* JSWeakObjectMapRefInternal.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		A74B3499102A5F8E0032AB98 /* MarkStack.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A74B3498102A5F8E0032AB98 /* MarkStack.cpp */; };
+		A74DE1D0120B875600D40D5B /* ARMv7Assembler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A74DE1CB120B86D600D40D5B /* ARMv7Assembler.cpp */; };
 		A75706DE118A2BCF0057F88F /* JITArithmetic32_64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A75706DD118A2BCF0057F88F /* JITArithmetic32_64.cpp */; };
 		A766B44F0EE8DCD1009518CA /* ExecutableAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = A7B48DB50EE74CFC00DCBDB6 /* ExecutableAllocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		A76C51761182748D00715B05 /* JSInterfaceJIT.h in Headers */ = {isa = PBXBuildFile; fileRef = A76C51741182748D00715B05 /* JSInterfaceJIT.h */; };
@@ -927,6 +928,7 @@
 		A7482B7A1166CDEA003B0712 /* JSWeakObjectMapRefPrivate.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSWeakObjectMapRefPrivate.cpp; sourceTree = "<group>"; };
 		A7482E37116A697B003B0712 /* JSWeakObjectMapRefInternal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSWeakObjectMapRefInternal.h; sourceTree = "<group>"; };
 		A74B3498102A5F8E0032AB98 /* MarkStack.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MarkStack.cpp; sourceTree = "<group>"; };
+		A74DE1CB120B86D600D40D5B /* ARMv7Assembler.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ARMv7Assembler.cpp; sourceTree = "<group>"; };
 		A75706DD118A2BCF0057F88F /* JITArithmetic32_64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITArithmetic32_64.cpp; sourceTree = "<group>"; };
 		A76C51741182748D00715B05 /* JSInterfaceJIT.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSInterfaceJIT.h; sourceTree = "<group>"; };
 		A76EE6580FAE59D5003F069A /* NativeFunctionWrapper.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = NativeFunctionWrapper.h; sourceTree = "<group>"; };
@@ -1848,6 +1850,7 @@
 				86D3B2BF10156BDE002865E7 /* ARMAssembler.cpp */,
 				86D3B2C010156BDE002865E7 /* ARMAssembler.h */,
 				86ADD1430FDDEA980006EEC2 /* ARMv7Assembler.h */,
+				A74DE1CB120B86D600D40D5B /* ARMv7Assembler.cpp */,
 				9688CB130ED12B4E001D649F /* AssemblerBuffer.h */,
 				86D3B2C110156BDE002865E7 /* AssemblerBufferWithConstantPool.h */,
 				86E116B00FE75AC800B512BC /* CodeLocation.h */,
@@ -2677,6 +2680,7 @@
 				DDF7ABD511F60ED200108E36 /* GCActivityCallbackCF.cpp in Sources */,
 				8627E5EB11F1281900A313B5 /* PageAllocation.cpp in Sources */,
 				DDE82AD71209D955005C1756 /* GCHandle.cpp in Sources */,
+				A74DE1D0120B875600D40D5B /* ARMv7Assembler.cpp in Sources */,
 			);
 			runOnlyForDeploymentPostprocessing = 0;
 		};
diff --git a/JavaScriptCore/assembler/ARMv7Assembler.cpp b/JavaScriptCore/assembler/ARMv7Assembler.cpp
new file mode 100644
index 0000000..233a6f1
--- /dev/null
+++ b/JavaScriptCore/assembler/ARMv7Assembler.cpp
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2010 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+
+#if ENABLE(ASSEMBLER) && CPU(ARM_THUMB2)
+
+#include "ARMv7Assembler.h"
+
+namespace JSC {
+
+const int ARMv7Assembler::JumpSizes[] = { 0xffffffff, 2 * sizeof(uint16_t), 2 * sizeof(uint16_t), 5 * sizeof(uint16_t) };
+
+}
+
+#endif
diff --git a/JavaScriptCore/assembler/ARMv7Assembler.h b/JavaScriptCore/assembler/ARMv7Assembler.h
index 425c5ef..f1b57b8 100644
--- a/JavaScriptCore/assembler/ARMv7Assembler.h
+++ b/JavaScriptCore/assembler/ARMv7Assembler.h
@@ -381,8 +381,8 @@ public:
 
         u.d = d;
 
-        int sign = (u.i >> 63);
-        int exponent = (u.i >> 52) & 0x7ff;
+        int sign = static_cast<int>(u.i >> 63);
+        int exponent = static_cast<int>(u.i >> 52) & 0x7ff;
         uint64_t mantissa = u.i & 0x000fffffffffffffull;
 
         if ((exponent >= 0x3fc) && (exponent <= 0x403) && !(mantissa & 0x0000ffffffffffffull))
@@ -445,7 +445,6 @@ private:
     } m_u;
 };
 
-
 class ARMv7Assembler {
 public:
     ~ARMv7Assembler()
@@ -476,14 +475,44 @@ public:
         ConditionGT,
         ConditionLE,
         ConditionAL,
-
+        
         ConditionCS = ConditionHS,
         ConditionCC = ConditionLO,
     } Condition;
 
+    enum JumpType { JumpNoCondition, JumpCondition, JumpFullSize };
+    enum JumpLinkType { LinkInvalid, LinkShortJump, LinkConditionalShortJump, LinkLongJump, JumpTypeCount };
+    static const int JumpSizes[JumpTypeCount];
+    enum { JumpPaddingSize = 5 * sizeof(uint16_t) };
+    class LinkRecord {
+    public:
+        LinkRecord(intptr_t from, intptr_t to, JumpType type, Condition condition)
+            : m_from(from)
+            , m_to(to)
+            , m_type(type)
+            , m_linkType(LinkInvalid)
+            , m_condition(condition)
+        {
+        }
+        intptr_t from() const { return m_from; }
+        void setFrom(intptr_t from) { m_from = from; }
+        intptr_t to() const { return m_to; }
+        JumpType type() const { return m_type; }
+        JumpLinkType linkType() const { return m_linkType; }
+        void setLinkType(JumpLinkType linkType) { ASSERT(m_linkType == LinkInvalid); m_linkType = linkType; }
+        Condition condition() const { return m_condition; }
+    private:
+        intptr_t m_from : 31;
+        intptr_t m_to : 31;
+        JumpType m_type : 2;
+        JumpLinkType m_linkType : 3;
+        Condition m_condition : 16;
+    };
+    
     class JmpSrc {
         friend class ARMv7Assembler;
         friend class ARMInstructionFormatter;
+        friend class LinkBuffer;
     public:
         JmpSrc()
             : m_offset(-1)
@@ -491,17 +520,32 @@ public:
         }
 
     private:
-        JmpSrc(int offset)
+        JmpSrc(int offset, JumpType type)
+            : m_offset(offset)
+            , m_condition(0xffff)
+            , m_type(type)
+        {
+            ASSERT(m_type != JumpCondition);
+        }
+
+        JmpSrc(int offset, JumpType type, Condition condition)
             : m_offset(offset)
+            , m_condition(condition)
+            , m_type(type)
         {
+            ASSERT(m_type == JumpCondition || m_type == JumpFullSize);
         }
 
         int m_offset;
+        Condition m_condition : 16;
+        JumpType m_type : 16;
+        
     };
     
     class JmpDst {
         friend class ARMv7Assembler;
         friend class ARMInstructionFormatter;
+        friend class LinkBuffer;
     public:
         JmpDst()
             : m_offset(-1)
@@ -525,17 +569,6 @@ public:
 
 private:
 
-    struct LinkRecord {
-        LinkRecord(intptr_t from, intptr_t to)
-            : from(from)
-            , to(to)
-        {
-        }
-
-        intptr_t from;
-        intptr_t to;
-    };
-
     // ARMv7, Appx-A.6.3
     bool BadReg(RegisterID reg)
     {
@@ -739,7 +772,7 @@ private:
     }
 
 public:
-
+    
     void add(RegisterID rd, RegisterID rn, ARMThumbImmediate imm)
     {
         // Rd can only be SP if Rn is also SP.
@@ -878,27 +911,33 @@ public:
         ASSERT(!BadReg(rm));
         m_formatter.twoWordOp12Reg4FourFours(OP_ASR_reg_T2, rn, FourFours(0xf, rd, 0, rm));
     }
-
+    
     // Only allowed in IT (if then) block if last instruction.
-    JmpSrc b()
+    JmpSrc b(JumpType type)
     {
         m_formatter.twoWordOp16Op16(OP_B_T4a, OP_B_T4b);
-        return JmpSrc(m_formatter.size());
+        return JmpSrc(m_formatter.size(), type);
     }
     
     // Only allowed in IT (if then) block if last instruction.
-    JmpSrc blx(RegisterID rm)
+    JmpSrc blx(RegisterID rm, JumpType type)
     {
         ASSERT(rm != ARMRegisters::pc);
         m_formatter.oneWordOp8RegReg143(OP_BLX, rm, (RegisterID)8);
-        return JmpSrc(m_formatter.size());
+        return JmpSrc(m_formatter.size(), type);
     }
 
     // Only allowed in IT (if then) block if last instruction.
-    JmpSrc bx(RegisterID rm)
+    JmpSrc bx(RegisterID rm, JumpType type, Condition condition)
+    {
+        m_formatter.oneWordOp8RegReg143(OP_BX, rm, (RegisterID)0);
+        return JmpSrc(m_formatter.size(), type, condition);
+    }
+
+    JmpSrc bx(RegisterID rm, JumpType type)
     {
         m_formatter.oneWordOp8RegReg143(OP_BX, rm, (RegisterID)0);
-        return JmpSrc(m_formatter.size());
+        return JmpSrc(m_formatter.size(), type);
     }
 
     void bkpt(uint8_t imm=0)
@@ -1617,6 +1656,15 @@ public:
     {
         return dst.m_offset - src.m_offset;
     }
+
+    int executableOffsetFor(int location)
+    {
+        if (!location)
+            return 0;
+        return static_cast<int32_t*>(m_formatter.data())[location / sizeof(int32_t) - 1];
+    }
+    
+    int jumpSizeDelta(JumpLinkType jumpLinkType) { return JumpPaddingSize - JumpSizes[jumpLinkType]; }
     
     // Assembler admin methods:
 
@@ -1625,23 +1673,66 @@ public:
         return m_formatter.size();
     }
 
-    void* executableCopy(ExecutablePool* allocator)
+    static bool linkRecordSourceComparator(const LinkRecord& a, const LinkRecord& b)
     {
-        void* copy = m_formatter.executableCopy(allocator);
-        if (!copy)
-            return 0;
+        return a.from() < b.from();
+    }
 
-        unsigned jumpCount = m_jumpsToLink.size();
-        for (unsigned i = 0; i < jumpCount; ++i) {
-            uint16_t* location = reinterpret_cast<uint16_t*>(reinterpret_cast<intptr_t>(copy) + m_jumpsToLink[i].from);
-            uint16_t* target = reinterpret_cast<uint16_t*>(reinterpret_cast<intptr_t>(copy) + m_jumpsToLink[i].to);
-            linkJumpAbsolute(location, target);
+    JumpLinkType computeJumpType(LinkRecord& record, const uint8_t* from, const uint8_t* to)
+    {
+        if (record.type() >= JumpFullSize) {
+            record.setLinkType(LinkLongJump);
+            return LinkLongJump;
         }
-        m_jumpsToLink.clear();
+        bool mayTriggerErrata = false;
+        const uint16_t* shortJumpLocation = reinterpret_cast<const uint16_t*>(from  - (JumpPaddingSize - JumpSizes[LinkShortJump]));
+        if (!canBeShortJump(shortJumpLocation, to, mayTriggerErrata)) {
+            record.setLinkType(LinkLongJump);
+            return LinkLongJump;
+        }
+        if (mayTriggerErrata) {
+            record.setLinkType(LinkLongJump);
+            return LinkLongJump;
+        }
+        if (record.type() == JumpCondition) {
+            record.setLinkType(LinkConditionalShortJump);
+            return LinkConditionalShortJump;
+        }
+        record.setLinkType(LinkShortJump);
+        return LinkShortJump;
+    }
 
-        return copy;
+    void recordLinkOffsets(int32_t regionStart, int32_t regionEnd, int32_t offset)
+    {
+        int32_t ptr = regionStart / sizeof(int32_t);
+        const int32_t end = regionEnd / sizeof(int32_t);
+        int32_t* offsets = static_cast<int32_t*>(m_formatter.data());
+        while (ptr < end)
+            offsets[ptr++] = offset;
+    }
+    
+    Vector<LinkRecord>& jumpsToLink()
+    {
+        std::sort(m_jumpsToLink.begin(), m_jumpsToLink.end(), linkRecordSourceComparator);
+        return m_jumpsToLink;
+    }
+
+    void link(LinkRecord& record, uint8_t* from, uint8_t* to)
+    {
+        uint16_t* itttLocation;
+        if (record.linkType() == LinkConditionalShortJump) {
+            itttLocation = reinterpret_cast<uint16_t*>(from - JumpSizes[LinkConditionalShortJump] - 2);
+            itttLocation[0] = ifThenElse(record.condition()) | OP_IT;
+        }
+        ASSERT(record.linkType() != LinkInvalid);
+        if (record.linkType() != LinkLongJump)
+            linkShortJump(reinterpret_cast<uint16_t*>(from), to);
+        else
+            linkLongJump(reinterpret_cast<uint16_t*>(from), to);
     }
 
+    void* unlinkedCode() { return m_formatter.data(); }
+    
     static unsigned getCallReturnOffset(JmpSrc call)
     {
         ASSERT(call.m_offset >= 0);
@@ -1660,7 +1751,7 @@ public:
     {
         ASSERT(to.m_offset != -1);
         ASSERT(from.m_offset != -1);
-        m_jumpsToLink.append(LinkRecord(from.m_offset, to.m_offset));
+        m_jumpsToLink.append(LinkRecord(from.m_offset, to.m_offset, from.m_type, from.m_condition));
     }
 
     static void linkJump(void* code, JmpSrc from, void* to)
@@ -1863,19 +1954,12 @@ private:
         return (instruction[0] == OP_NOP_T2a) && (instruction[1] == OP_NOP_T2b);
     }
 
-    static void linkJumpAbsolute(uint16_t* instruction, void* target)
+    static bool canBeShortJump(const uint16_t* instruction, const void* target, bool& mayTriggerErrata)
     {
-        // FIMXE: this should be up in the MacroAssembler layer. :-(
-        const uint16_t JUMP_TEMPORARY_REGISTER = ARMRegisters::ip;
-
         ASSERT(!(reinterpret_cast<intptr_t>(instruction) & 1));
         ASSERT(!(reinterpret_cast<intptr_t>(target) & 1));
-
-        ASSERT( (isMOV_imm_T3(instruction - 5) && isMOVT(instruction - 3) && isBX(instruction - 1))
-            || (isNOP_T1(instruction - 5) && isNOP_T2(instruction - 4) && isB(instruction - 2)) );
-
+        
         intptr_t relative = reinterpret_cast<intptr_t>(target) - (reinterpret_cast<intptr_t>(instruction));
-
         // From Cortex-A8 errata:
         // If the 32-bit Thumb-2 branch instruction spans two 4KiB regions and
         // the target of the branch falls within the first region it is
@@ -1884,11 +1968,50 @@ private:
         // to enter a deadlock state.
         // The instruction is spanning two pages if it ends at an address ending 0x002
         bool spansTwo4K = ((reinterpret_cast<intptr_t>(instruction) & 0xfff) == 0x002);
+        mayTriggerErrata = spansTwo4K;
         // The target is in the first page if the jump branch back by [3..0x1002] bytes
         bool targetInFirstPage = (relative >= -0x1002) && (relative < -2);
         bool wouldTriggerA8Errata = spansTwo4K && targetInFirstPage;
+        return ((relative << 7) >> 7) == relative && !wouldTriggerA8Errata;
+    }
+
+    static void linkLongJump(uint16_t* instruction, void* target)
+    {
+        linkJumpAbsolute(instruction, target);
+    }
+    
+    static void linkShortJump(uint16_t* instruction, void* target)
+    {
+        // FIMXE: this should be up in the MacroAssembler layer. :-(        
+        ASSERT(!(reinterpret_cast<intptr_t>(instruction) & 1));
+        ASSERT(!(reinterpret_cast<intptr_t>(target) & 1));
+        
+        intptr_t relative = reinterpret_cast<intptr_t>(target) - (reinterpret_cast<intptr_t>(instruction));
+        bool scratch;
+        UNUSED_PARAM(scratch);
+        ASSERT(canBeShortJump(instruction, target, scratch));
+        // ARM encoding for the top two bits below the sign bit is 'peculiar'.
+        if (relative >= 0)
+            relative ^= 0xC00000;
+
+        // All branch offsets should be an even distance.
+        ASSERT(!(relative & 1));
+        instruction[-2] = OP_B_T4a | ((relative & 0x1000000) >> 14) | ((relative & 0x3ff000) >> 12);
+        instruction[-1] = OP_B_T4b | ((relative & 0x800000) >> 10) | ((relative & 0x400000) >> 11) | ((relative & 0xffe) >> 1);
+    }
 
-        if (((relative << 7) >> 7) == relative && !wouldTriggerA8Errata) {
+    static void linkJumpAbsolute(uint16_t* instruction, void* target)
+    {
+        // FIMXE: this should be up in the MacroAssembler layer. :-(
+        ASSERT(!(reinterpret_cast<intptr_t>(instruction) & 1));
+        ASSERT(!(reinterpret_cast<intptr_t>(target) & 1));
+
+        ASSERT((isMOV_imm_T3(instruction - 5) && isMOVT(instruction - 3) && isBX(instruction - 1))
+            || (isNOP_T1(instruction - 5) && isNOP_T2(instruction - 4) && isB(instruction - 2)));
+
+        intptr_t relative = reinterpret_cast<intptr_t>(target) - (reinterpret_cast<intptr_t>(instruction));
+        bool scratch;
+        if (canBeShortJump(instruction, target, scratch)) {
             // ARM encoding for the top two bits below the sign bit is 'peculiar'.
             if (relative >= 0)
                 relative ^= 0xC00000;
@@ -1906,6 +2029,7 @@ private:
             instruction[-2] = OP_B_T4a | ((relative & 0x1000000) >> 14) | ((relative & 0x3ff000) >> 12);
             instruction[-1] = OP_B_T4b | ((relative & 0x800000) >> 10) | ((relative & 0x400000) >> 11) | ((relative & 0xffe) >> 1);
         } else {
+            const uint16_t JUMP_TEMPORARY_REGISTER = ARMRegisters::ip;
             ARMThumbImmediate lo16 = ARMThumbImmediate::makeUInt16(static_cast<uint16_t>(reinterpret_cast<uint32_t>(target) + 1));
             ARMThumbImmediate hi16 = ARMThumbImmediate::makeUInt16(static_cast<uint16_t>(reinterpret_cast<uint32_t>(target) >> 16));
             instruction[-5] = twoWordOp5i6Imm4Reg4EncodedImmFirst(OP_MOV_imm_T3, lo16);
@@ -1920,6 +2044,7 @@ private:
     {
         return op | (imm.m_value.i << 10) | imm.m_value.imm4;
     }
+
     static uint16_t twoWordOp5i6Imm4Reg4EncodedImmSecond(uint16_t rd, ARMThumbImmediate imm)
     {
         return (imm.m_value.imm3 << 12) | (rd << 8) | imm.m_value.imm8;
@@ -2036,6 +2161,7 @@ private:
     } m_formatter;
 
     Vector<LinkRecord> m_jumpsToLink;
+    Vector<int32_t> m_offsets;
 };
 
 } // namespace JSC
diff --git a/JavaScriptCore/assembler/AbstractMacroAssembler.h b/JavaScriptCore/assembler/AbstractMacroAssembler.h
index aab9089..5db2cb9 100644
--- a/JavaScriptCore/assembler/AbstractMacroAssembler.h
+++ b/JavaScriptCore/assembler/AbstractMacroAssembler.h
@@ -418,12 +418,6 @@ public:
 
 
     // Section 3: Misc admin methods
-
-    static CodePtr trampolineAt(CodeRef ref, Label label)
-    {
-        return CodePtr(AssemblerType::getRelocatedAddress(ref.m_code.dataLocation(), label.m_label));
-    }
-
     size_t size()
     {
         return m_assembler.size();
@@ -479,6 +473,9 @@ public:
     {
         return AssemblerType::getDifferenceBetweenLabels(from.m_label, to.m_jmp);
     }
+    
+    void beginUninterruptedSequence() { }
+    void endUninterruptedSequence() { }
 
 protected:
     AssemblerType m_assembler;
diff --git a/JavaScriptCore/assembler/LinkBuffer.h b/JavaScriptCore/assembler/LinkBuffer.h
index 221fa13..624d1cc 100644
--- a/JavaScriptCore/assembler/LinkBuffer.h
+++ b/JavaScriptCore/assembler/LinkBuffer.h
@@ -49,12 +49,18 @@ namespace JSC {
 //
 class LinkBuffer : public Noncopyable {
     typedef MacroAssemblerCodeRef CodeRef;
+    typedef MacroAssemblerCodePtr CodePtr;
     typedef MacroAssembler::Label Label;
     typedef MacroAssembler::Jump Jump;
     typedef MacroAssembler::JumpList JumpList;
     typedef MacroAssembler::Call Call;
     typedef MacroAssembler::DataLabel32 DataLabel32;
     typedef MacroAssembler::DataLabelPtr DataLabelPtr;
+    typedef MacroAssembler::JmpDst JmpDst;
+#if ENABLE(BRANCH_COMPACTION)
+    typedef MacroAssembler::LinkRecord LinkRecord;
+    typedef MacroAssembler::JumpLinkType JumpLinkType;
+#endif
 
     enum LinkBufferState {
         StateInit,
@@ -66,14 +72,17 @@ public:
     // Note: Initialization sequence is significant, since executablePool is a PassRefPtr.
     //       First, executablePool is copied into m_executablePool, then the initialization of
     //       m_code uses m_executablePool, *not* executablePool, since this is no longer valid.
-    LinkBuffer(MacroAssembler* masm, PassRefPtr<ExecutablePool> executablePool)
+    // The linkOffset parameter should only be non-null when recompiling for exception info
+    LinkBuffer(MacroAssembler* masm, PassRefPtr<ExecutablePool> executablePool, void* linkOffset)
         : m_executablePool(executablePool)
-        , m_code(masm->m_assembler.executableCopy(m_executablePool.get()))
-        , m_size(masm->m_assembler.size())
+        , m_size(0)
+        , m_code(0)
+        , m_assembler(masm)
 #ifndef NDEBUG
         , m_state(StateInit)
 #endif
     {
+        linkCode(linkOffset);
     }
 
     ~LinkBuffer()
@@ -97,28 +106,32 @@ public:
     void link(Call call, FunctionPtr function)
     {
         ASSERT(call.isFlagSet(Call::Linkable));
+        call.m_jmp = applyOffset(call.m_jmp);
         MacroAssembler::linkCall(code(), call, function);
     }
     
     void link(Jump jump, CodeLocationLabel label)
     {
+        jump.m_jmp = applyOffset(jump.m_jmp);
         MacroAssembler::linkJump(code(), jump, label);
     }
 
     void link(JumpList list, CodeLocationLabel label)
     {
         for (unsigned i = 0; i < list.m_jumps.size(); ++i)
-            MacroAssembler::linkJump(code(), list.m_jumps[i], label);
+            link(list.m_jumps[i], label);
     }
 
     void patch(DataLabelPtr label, void* value)
     {
-        MacroAssembler::linkPointer(code(), label.m_label, value);
+        JmpDst target = applyOffset(label.m_label);
+        MacroAssembler::linkPointer(code(), target, value);
     }
 
     void patch(DataLabelPtr label, CodeLocationLabel value)
     {
-        MacroAssembler::linkPointer(code(), label.m_label, value.executableAddress());
+        JmpDst target = applyOffset(label.m_label);
+        MacroAssembler::linkPointer(code(), target, value.executableAddress());
     }
 
     // These methods are used to obtain handles to allow the code to be relinked / repatched later.
@@ -127,35 +140,36 @@ public:
     {
         ASSERT(call.isFlagSet(Call::Linkable));
         ASSERT(!call.isFlagSet(Call::Near));
-        return CodeLocationCall(MacroAssembler::getLinkerAddress(code(), call.m_jmp));
+        return CodeLocationCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_jmp)));
     }
 
     CodeLocationNearCall locationOfNearCall(Call call)
     {
         ASSERT(call.isFlagSet(Call::Linkable));
         ASSERT(call.isFlagSet(Call::Near));
-        return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), call.m_jmp));
+        return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_jmp)));
     }
 
     CodeLocationLabel locationOf(Label label)
     {
-        return CodeLocationLabel(MacroAssembler::getLinkerAddress(code(), label.m_label));
+        return CodeLocationLabel(MacroAssembler::getLinkerAddress(code(), applyOffset(label.m_label)));
     }
 
     CodeLocationDataLabelPtr locationOf(DataLabelPtr label)
     {
-        return CodeLocationDataLabelPtr(MacroAssembler::getLinkerAddress(code(), label.m_label));
+        return CodeLocationDataLabelPtr(MacroAssembler::getLinkerAddress(code(), applyOffset(label.m_label)));
     }
 
     CodeLocationDataLabel32 locationOf(DataLabel32 label)
     {
-        return CodeLocationDataLabel32(MacroAssembler::getLinkerAddress(code(), label.m_label));
+        return CodeLocationDataLabel32(MacroAssembler::getLinkerAddress(code(), applyOffset(label.m_label)));
     }
 
     // This method obtains the return address of the call, given as an offset from
     // the start of the code.
     unsigned returnAddressOffset(Call call)
     {
+        call.m_jmp = applyOffset(call.m_jmp);
         return MacroAssembler::getLinkerCallReturnOffset(call);
     }
 
@@ -169,6 +183,7 @@ public:
 
         return CodeRef(m_code, m_executablePool, m_size);
     }
+
     CodeLocationLabel finalizeCodeAddendum()
     {
         performFinalization();
@@ -176,7 +191,20 @@ public:
         return CodeLocationLabel(code());
     }
 
+    CodePtr trampolineAt(Label label)
+    {
+        return CodePtr(MacroAssembler::AssemblerType_T::getRelocatedAddress(code(), applyOffset(label.m_label)));
+    }
+
 private:
+    template <typename T> T applyOffset(T src)
+    {
+#if ENABLE(BRANCH_COMPACTION)
+        src.m_offset -= m_assembler->executableOffsetFor(src.m_offset);
+#endif
+        return src;
+    }
+    
     // Keep this private! - the underlying code should only be obtained externally via 
     // finalizeCode() or finalizeCodeAddendum().
     void* code()
@@ -184,6 +212,75 @@ private:
         return m_code;
     }
 
+    void linkCode(void* linkOffset)
+    {
+        UNUSED_PARAM(linkOffset);
+        ASSERT(!m_code);
+#if !ENABLE(BRANCH_COMPACTION)
+        m_code = m_assembler->m_assembler.executableCopy(m_executablePool.get());
+        m_size = m_assembler->size();
+#else
+        size_t initialSize = m_assembler->size();
+        m_code = (uint8_t*)m_executablePool->alloc(initialSize);
+        if (!m_code)
+            return;
+        ExecutableAllocator::makeWritable(m_code, m_assembler->size());
+        uint8_t* inData = (uint8_t*)m_assembler->unlinkedCode();
+        uint8_t* outData = reinterpret_cast<uint8_t*>(m_code);
+        const uint8_t* linkBase = linkOffset ? reinterpret_cast<uint8_t*>(linkOffset) : outData;
+        int readPtr = 0;
+        int writePtr = 0;
+        Vector<LinkRecord>& jumpsToLink = m_assembler->jumpsToLink();
+        unsigned jumpCount = jumpsToLink.size();
+        for (unsigned i = 0; i < jumpCount; ++i) {
+            int offset = readPtr - writePtr;
+            ASSERT(!(offset & 1));
+            
+            // Copy the instructions from the last jump to the current one.
+            size_t regionSize = jumpsToLink[i].from() - readPtr;
+            memcpy(outData + writePtr, inData + readPtr, regionSize);
+            m_assembler->recordLinkOffsets(readPtr, jumpsToLink[i].from(), offset);
+            readPtr += regionSize;
+            writePtr += regionSize;
+            
+            // Calculate absolute address of the jump target, in the case of backwards
+            // branches we need to be precise, forward branches we are pessimistic
+            const uint8_t* target;
+            if (jumpsToLink[i].to() >= jumpsToLink[i].from())
+                target = linkBase + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
+            else
+                target = linkBase + jumpsToLink[i].to() - m_assembler->executableOffsetFor(jumpsToLink[i].to());
+            
+            JumpLinkType jumpLinkType = m_assembler->computeJumpType(jumpsToLink[i], linkBase + writePtr, target);
+
+            // Step back in the write stream
+            int32_t delta = m_assembler->jumpSizeDelta(jumpLinkType);
+            if (delta) {
+                writePtr -= delta;
+                m_assembler->recordLinkOffsets(jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+            }
+            jumpsToLink[i].setFrom(writePtr);
+        }
+        // Copy everything after the last jump
+        memcpy(outData + writePtr, inData + readPtr, m_assembler->size() - readPtr);
+        m_assembler->recordLinkOffsets(readPtr, m_assembler->size(), readPtr - writePtr);
+        
+        // Actually link everything (don't link if we've be given a linkoffset as it's a
+        // waste of time: linkOffset is used for recompiling to get exception info)
+        if (!linkOffset) {
+            for (unsigned i = 0; i < jumpCount; ++i) {
+                uint8_t* location = outData + jumpsToLink[i].from();
+                uint8_t* target = outData + jumpsToLink[i].to() - m_assembler->executableOffsetFor(jumpsToLink[i].to());
+                m_assembler->link(jumpsToLink[i], location, target);
+            }
+        }
+
+        jumpsToLink.clear();
+        m_size = writePtr + m_assembler->size() - readPtr;
+        m_executablePool->returnLastBytes(initialSize - m_size);
+#endif
+    }
+
     void performFinalization()
     {
 #ifndef NDEBUG
@@ -196,8 +293,9 @@ private:
     }
 
     RefPtr<ExecutablePool> m_executablePool;
-    void* m_code;
     size_t m_size;
+    void* m_code;
+    MacroAssembler* m_assembler;
 #ifndef NDEBUG
     LinkBufferState m_state;
 #endif
diff --git a/JavaScriptCore/assembler/MacroAssemblerARMv7.h b/JavaScriptCore/assembler/MacroAssemblerARMv7.h
index 64513fd..a1539f2 100644
--- a/JavaScriptCore/assembler/MacroAssemblerARMv7.h
+++ b/JavaScriptCore/assembler/MacroAssemblerARMv7.h
@@ -45,6 +45,23 @@ class MacroAssemblerARMv7 : public AbstractMacroAssembler<ARMv7Assembler> {
     inline ARMRegisters::FPSingleRegisterID fpTempRegisterAsSingle() { return ARMRegisters::asSingle(fpTempRegister); }
 
 public:
+    typedef ARMv7Assembler::LinkRecord LinkRecord;
+    typedef ARMv7Assembler::JumpLinkType JumpLinkType;
+
+    MacroAssemblerARMv7()
+        : m_inUninterruptedSequence(false)
+    {
+    }
+    
+    void beginUninterruptedSequence() { m_inUninterruptedSequence = true; }
+    void endUninterruptedSequence() { m_inUninterruptedSequence = false; }
+    Vector<LinkRecord>& jumpsToLink() { return m_assembler.jumpsToLink(); }
+    void* unlinkedCode() { return m_assembler.unlinkedCode(); }
+    JumpLinkType computeJumpType(LinkRecord& record, const uint8_t* from, const uint8_t* to) { return m_assembler.computeJumpType(record, from, to); }
+    void recordLinkOffsets(int32_t regionStart, int32_t regionEnd, int32_t offset) {return m_assembler.recordLinkOffsets(regionStart, regionEnd, offset); }
+    int jumpSizeDelta(JumpLinkType jumpLinkType) { return m_assembler.jumpSizeDelta(jumpLinkType); }
+    void link(LinkRecord& record, uint8_t* from, uint8_t* to) { return m_assembler.link(record, from, to); }
+
     struct ArmAddress {
         enum AddressType {
             HasOffset,
@@ -969,14 +986,14 @@ public:
 
     void jump(RegisterID target)
     {
-        m_assembler.bx(target);
+        m_assembler.bx(target, inUninterruptedSequence() ? ARMv7Assembler::JumpFullSize : ARMv7Assembler::JumpNoCondition);
     }
 
     // Address is a memory location containing the address to jump to
     void jump(Address address)
     {
         load32(address, dataTempRegister);
-        m_assembler.bx(dataTempRegister);
+        m_assembler.bx(dataTempRegister, inUninterruptedSequence() ? ARMv7Assembler::JumpFullSize : ARMv7Assembler::JumpNoCondition);
     }
 
 
@@ -1012,7 +1029,7 @@ public:
 
     Jump branchMul32(Condition cond, RegisterID src, RegisterID dest)
     {
-        ASSERT(cond == Overflow);
+        ASSERT_UNUSED(cond, cond == Overflow);
         m_assembler.smull(dest, dataTempRegister, dest, src);
         m_assembler.asr(addressTempRegister, dest, 31);
         return branch32(NotEqual, addressTempRegister, dataTempRegister);
@@ -1020,7 +1037,7 @@ public:
 
     Jump branchMul32(Condition cond, Imm32 imm, RegisterID src, RegisterID dest)
     {
-        ASSERT(cond == Overflow);
+        ASSERT_UNUSED(cond, cond == Overflow);
         move(imm, dataTempRegister);
         m_assembler.smull(dest, dataTempRegister, src, dataTempRegister);
         m_assembler.asr(addressTempRegister, dest, 31);
@@ -1059,35 +1076,35 @@ public:
 
     void breakpoint()
     {
-        m_assembler.bkpt();
+        m_assembler.bkpt(0);
     }
 
     Call nearCall()
     {
         moveFixedWidthEncoding(Imm32(0), dataTempRegister);
-        return Call(m_assembler.blx(dataTempRegister), Call::LinkableNear);
+        return Call(m_assembler.blx(dataTempRegister, ARMv7Assembler::JumpFullSize), Call::LinkableNear);
     }
 
     Call call()
     {
         moveFixedWidthEncoding(Imm32(0), dataTempRegister);
-        return Call(m_assembler.blx(dataTempRegister), Call::Linkable);
+        return Call(m_assembler.blx(dataTempRegister, ARMv7Assembler::JumpFullSize), Call::Linkable);
     }
 
     Call call(RegisterID target)
     {
-        return Call(m_assembler.blx(target), Call::None);
+        return Call(m_assembler.blx(target, ARMv7Assembler::JumpFullSize), Call::None);
     }
 
     Call call(Address address)
     {
         load32(address, dataTempRegister);
-        return Call(m_assembler.blx(dataTempRegister), Call::None);
+        return Call(m_assembler.blx(dataTempRegister, ARMv7Assembler::JumpFullSize), Call::None);
     }
 
     void ret()
     {
-        m_assembler.bx(linkRegister);
+        m_assembler.bx(linkRegister, ARMv7Assembler::JumpFullSize);
     }
 
     void set32(Condition cond, RegisterID left, RegisterID right, RegisterID dest)
@@ -1187,7 +1204,7 @@ public:
     {
         // Like a normal call, but don't link.
         moveFixedWidthEncoding(Imm32(0), dataTempRegister);
-        return Call(m_assembler.bx(dataTempRegister), Call::Linkable);
+        return Call(m_assembler.bx(dataTempRegister, ARMv7Assembler::JumpFullSize), Call::Linkable);
     }
 
     Call makeTailRecursiveCall(Jump oldJump)
@@ -1196,19 +1213,29 @@ public:
         return tailRecursiveCall();
     }
 
+    
+    int executableOffsetFor(int location)
+    {
+        return m_assembler.executableOffsetFor(location);
+    }
 
 protected:
+    bool inUninterruptedSequence()
+    {
+        return m_inUninterruptedSequence;
+    }
+
     ARMv7Assembler::JmpSrc makeJump()
     {
         moveFixedWidthEncoding(Imm32(0), dataTempRegister);
-        return m_assembler.bx(dataTempRegister);
+        return m_assembler.bx(dataTempRegister, inUninterruptedSequence() ? ARMv7Assembler::JumpFullSize : ARMv7Assembler::JumpNoCondition);
     }
 
     ARMv7Assembler::JmpSrc makeBranch(ARMv7Assembler::Condition cond)
     {
         m_assembler.it(cond, true, true);
         moveFixedWidthEncoding(Imm32(0), dataTempRegister);
-        return m_assembler.bx(dataTempRegister);
+        return m_assembler.bx(dataTempRegister, inUninterruptedSequence() ? ARMv7Assembler::JumpFullSize : ARMv7Assembler::JumpCondition, cond);
     }
     ARMv7Assembler::JmpSrc makeBranch(Condition cond) { return makeBranch(armV7Condition(cond)); }
     ARMv7Assembler::JmpSrc makeBranch(DoubleCondition cond) { return makeBranch(armV7Condition(cond)); }
@@ -1298,6 +1325,8 @@ private:
     {
         ARMv7Assembler::relinkCall(call.dataLocation(), destination.executableAddress());
     }
+    
+    bool m_inUninterruptedSequence;
 };
 
 } // namespace JSC
diff --git a/JavaScriptCore/jit/ExecutableAllocator.h b/JavaScriptCore/jit/ExecutableAllocator.h
index 91c41dc..b60d591 100644
--- a/JavaScriptCore/jit/ExecutableAllocator.h
+++ b/JavaScriptCore/jit/ExecutableAllocator.h
@@ -128,6 +128,11 @@ public:
         return poolAllocate(n);
     }
     
+    void returnLastBytes(size_t count)
+    {
+        m_freePtr -= count;
+    }
+
     ~ExecutablePool()
     {
         AllocationList::iterator end = m_pools.end();
diff --git a/JavaScriptCore/jit/JIT.cpp b/JavaScriptCore/jit/JIT.cpp
index ff02c12..cd5944a 100644
--- a/JavaScriptCore/jit/JIT.cpp
+++ b/JavaScriptCore/jit/JIT.cpp
@@ -71,7 +71,7 @@ void ctiPatchCallByReturnAddress(CodeBlock* codeblock, ReturnAddressPtr returnAd
     repatchBuffer.relinkCallerToFunction(returnAddress, newCalleeFunction);
 }
 
-JIT::JIT(JSGlobalData* globalData, CodeBlock* codeBlock)
+JIT::JIT(JSGlobalData* globalData, CodeBlock* codeBlock, void* linkerOffset)
     : m_interpreter(globalData->interpreter)
     , m_globalData(globalData)
     , m_codeBlock(codeBlock)
@@ -89,6 +89,7 @@ JIT::JIT(JSGlobalData* globalData, CodeBlock* codeBlock)
     , m_lastResultBytecodeRegister(std::numeric_limits<int>::max())
     , m_jumpTargetsPosition(0)
 #endif
+    , m_linkerOffset(linkerOffset)
 {
 }
 
@@ -511,7 +512,7 @@ JITCode JIT::privateCompile(CodePtr* functionEntryArityCheck)
     RefPtr<ExecutablePool> executablePool = m_globalData->executableAllocator.poolForSize(m_assembler.size());
     if (!executablePool)
         return JITCode();
-    LinkBuffer patchBuffer(this, executablePool.release());
+    LinkBuffer patchBuffer(this, executablePool.release(), m_linkerOffset);
     if (!patchBuffer.allocationSuccessful())
         return JITCode();
 
diff --git a/JavaScriptCore/jit/JIT.h b/JavaScriptCore/jit/JIT.h
index f3c4b6a..2e66946 100644
--- a/JavaScriptCore/jit/JIT.h
+++ b/JavaScriptCore/jit/JIT.h
@@ -178,9 +178,9 @@ namespace JSC {
         static const int patchGetByIdDefaultOffset = 256;
 
     public:
-        static JITCode compile(JSGlobalData* globalData, CodeBlock* codeBlock, CodePtr* functionEntryArityCheck = 0)
+        static JITCode compile(JSGlobalData* globalData, CodeBlock* codeBlock, CodePtr* functionEntryArityCheck = 0, void* offsetBase = 0)
         {
-            return JIT(globalData, codeBlock).privateCompile(functionEntryArityCheck);
+            return JIT(globalData, codeBlock, offsetBase).privateCompile(functionEntryArityCheck);
         }
 
         static bool compileGetByIdProto(JSGlobalData* globalData, CallFrame* callFrame, CodeBlock* codeBlock, StructureStubInfo* stubInfo, Structure* structure, Structure* prototypeStructure, const Identifier& ident, const PropertySlot& slot, size_t cachedOffset, ReturnAddressPtr returnAddress)
@@ -221,7 +221,7 @@ namespace JSC {
         {
             if (!globalData->canUseJIT())
                 return;
-            JIT jit(globalData);
+            JIT jit(globalData, 0, 0);
             jit.privateCompileCTIMachineTrampolines(executablePool, globalData, trampolines);
         }
 
@@ -229,7 +229,7 @@ namespace JSC {
         {
             if (!globalData->canUseJIT())
                 return CodePtr();
-            JIT jit(globalData);
+            JIT jit(globalData, 0, 0);
             return jit.privateCompileCTINativeCall(executablePool, globalData, func);
         }
 
@@ -259,7 +259,7 @@ namespace JSC {
             }
         };
 
-        JIT(JSGlobalData*, CodeBlock* = 0);
+        JIT(JSGlobalData*, CodeBlock* = 0, void* = 0);
 
         void privateCompileMainPass();
         void privateCompileLinkPass();
@@ -666,16 +666,16 @@ namespace JSC {
 #endif
 #endif // USE(JSVALUE32_64)
 
-#if defined(ASSEMBLER_HAS_CONSTANT_POOL) && ASSEMBLER_HAS_CONSTANT_POOL
-#define BEGIN_UNINTERRUPTED_SEQUENCE(name) beginUninterruptedSequence(name ## InstructionSpace, name ## ConstantSpace)
-#define END_UNINTERRUPTED_SEQUENCE(name) endUninterruptedSequence(name ## InstructionSpace, name ## ConstantSpace)
+#if (defined(ASSEMBLER_HAS_CONSTANT_POOL) && ASSEMBLER_HAS_CONSTANT_POOL)
+#define BEGIN_UNINTERRUPTED_SEQUENCE(name) do { beginUninterruptedSequence(); beginUninterruptedSequence(name ## InstructionSpace, name ## ConstantSpace); } while (false)
+#define END_UNINTERRUPTED_SEQUENCE(name) do { endUninterruptedSequence(name ## InstructionSpace, name ## ConstantSpace); endUninterruptedSequence(); } while (false)
 
         void beginUninterruptedSequence(int, int);
         void endUninterruptedSequence(int, int);
 
 #else
-#define BEGIN_UNINTERRUPTED_SEQUENCE(name)
-#define END_UNINTERRUPTED_SEQUENCE(name)
+#define BEGIN_UNINTERRUPTED_SEQUENCE(name)  do { beginUninterruptedSequence(); } while (false)
+#define END_UNINTERRUPTED_SEQUENCE(name)  do { endUninterruptedSequence(); } while (false)
 #endif
 
         void emit_op_add(Instruction*);
@@ -940,6 +940,7 @@ namespace JSC {
         int m_uninterruptedConstantSequenceBegin;
 #endif
 #endif
+        void* m_linkerOffset;
         static CodePtr stringGetByValStubGenerator(JSGlobalData* globalData, ExecutablePool* pool);
     } JIT_CLASS_ALIGNMENT;
 
diff --git a/JavaScriptCore/jit/JITArithmetic32_64.cpp b/JavaScriptCore/jit/JITArithmetic32_64.cpp
index 5a69d5a..e53af77 100644
--- a/JavaScriptCore/jit/JITArithmetic32_64.cpp
+++ b/JavaScriptCore/jit/JITArithmetic32_64.cpp
@@ -1383,6 +1383,8 @@ void JIT::emit_op_mod(Instruction* currentInstruction)
 
 void JIT::emitSlow_op_mod(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
+    UNUSED_PARAM(currentInstruction);
+    UNUSED_PARAM(iter);
 #if ENABLE(JIT_USE_SOFT_MODULO)
     unsigned result = currentInstruction[1].u.operand;
     unsigned op1 = currentInstruction[2].u.operand;
diff --git a/JavaScriptCore/jit/JITOpcodes.cpp b/JavaScriptCore/jit/JITOpcodes.cpp
index 5cd0bfe..28ef4ca 100644
--- a/JavaScriptCore/jit/JITOpcodes.cpp
+++ b/JavaScriptCore/jit/JITOpcodes.cpp
@@ -165,7 +165,7 @@ void JIT::privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executable
     // We can't run without the JIT trampolines!
     if (!*executablePool)
         CRASH();
-    LinkBuffer patchBuffer(this, *executablePool);
+    LinkBuffer patchBuffer(this, *executablePool, 0);
     // We can't run without the JIT trampolines!
     if (!patchBuffer.allocationSuccessful())
         CRASH();
@@ -184,17 +184,17 @@ void JIT::privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executable
 
     CodeRef finalCode = patchBuffer.finalizeCode();
 
-    trampolines->ctiVirtualCallLink = trampolineAt(finalCode, virtualCallLinkBegin);
-    trampolines->ctiVirtualConstructLink = trampolineAt(finalCode, virtualConstructLinkBegin);
-    trampolines->ctiVirtualCall = trampolineAt(finalCode, virtualCallBegin);
-    trampolines->ctiVirtualConstruct = trampolineAt(finalCode, virtualConstructBegin);
-    trampolines->ctiNativeCall = trampolineAt(finalCode, nativeCallThunk);
-    trampolines->ctiNativeConstruct = trampolineAt(finalCode, nativeConstructThunk);
+    trampolines->ctiVirtualCallLink = patchBuffer.trampolineAt(virtualCallLinkBegin);
+    trampolines->ctiVirtualConstructLink = patchBuffer.trampolineAt(virtualConstructLinkBegin);
+    trampolines->ctiVirtualCall = patchBuffer.trampolineAt(virtualCallBegin);
+    trampolines->ctiVirtualConstruct = patchBuffer.trampolineAt(virtualConstructBegin);
+    trampolines->ctiNativeCall = patchBuffer.trampolineAt(nativeCallThunk);
+    trampolines->ctiNativeConstruct = patchBuffer.trampolineAt(nativeConstructThunk);
 #if ENABLE(JIT_USE_SOFT_MODULO)
-    trampolines->ctiSoftModulo = trampolineAt(finalCode, softModBegin);
+    trampolines->ctiSoftModulo = patchBuffer.trampolineAt(softModBegin);
 #endif
 #if ENABLE(JIT_OPTIMIZE_PROPERTY_ACCESS)
-    trampolines->ctiStringLengthTrampoline = trampolineAt(finalCode, stringLengthBegin);
+    trampolines->ctiStringLengthTrampoline = patchBuffer.trampolineAt(stringLengthBegin);
 #endif
 }
 
diff --git a/JavaScriptCore/jit/JITOpcodes32_64.cpp b/JavaScriptCore/jit/JITOpcodes32_64.cpp
index 927d158..939aa8c 100644
--- a/JavaScriptCore/jit/JITOpcodes32_64.cpp
+++ b/JavaScriptCore/jit/JITOpcodes32_64.cpp
@@ -163,7 +163,7 @@ void JIT::privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executable
     // We can't run without the JIT trampolines!
     if (!*executablePool)
         CRASH();
-    LinkBuffer patchBuffer(this, *executablePool);
+    LinkBuffer patchBuffer(this, *executablePool, 0);
     // We can't run without the JIT trampolines!
     if (!patchBuffer.allocationSuccessful())
         CRASH();
@@ -182,19 +182,19 @@ void JIT::privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executable
 
     CodeRef finalCode = patchBuffer.finalizeCode();
 
-    trampolines->ctiVirtualCall = trampolineAt(finalCode, virtualCallBegin);
-    trampolines->ctiVirtualConstruct = trampolineAt(finalCode, virtualConstructBegin);
-    trampolines->ctiNativeCall = trampolineAt(finalCode, nativeCallThunk);
-    trampolines->ctiNativeConstruct = trampolineAt(finalCode, nativeConstructThunk);
+    trampolines->ctiVirtualCall = patchBuffer.trampolineAt(virtualCallBegin);
+    trampolines->ctiVirtualConstruct = patchBuffer.trampolineAt(virtualConstructBegin);
+    trampolines->ctiNativeCall = patchBuffer.trampolineAt(nativeCallThunk);
+    trampolines->ctiNativeConstruct = patchBuffer.trampolineAt(nativeConstructThunk);
 #if ENABLE(JIT_OPTIMIZE_PROPERTY_ACCESS)
-    trampolines->ctiStringLengthTrampoline = trampolineAt(finalCode, stringLengthBegin);
+    trampolines->ctiStringLengthTrampoline = patchBuffer.trampolineAt(stringLengthBegin);
 #endif
 #if ENABLE(JIT_OPTIMIZE_CALL)
-    trampolines->ctiVirtualCallLink = trampolineAt(finalCode, virtualCallLinkBegin);
-    trampolines->ctiVirtualConstructLink = trampolineAt(finalCode, virtualConstructLinkBegin);
+    trampolines->ctiVirtualCallLink = patchBuffer.trampolineAt(virtualCallLinkBegin);
+    trampolines->ctiVirtualConstructLink = patchBuffer.trampolineAt(virtualConstructLinkBegin);
 #endif
 #if ENABLE(JIT_USE_SOFT_MODULO)
-    trampolines->ctiSoftModulo = trampolineAt(finalCode, softModBegin);
+    trampolines->ctiSoftModulo = patchBuffer.trampolineAt(softModBegin);
 #endif
 }
 
@@ -362,15 +362,15 @@ JIT::CodePtr JIT::privateCompileCTINativeCall(PassRefPtr<ExecutablePool> executa
     ret();
 
     // All trampolines constructed! copy the code, link up calls, and set the pointers on the Machine object.
-    LinkBuffer patchBuffer(this, executablePool);
+    LinkBuffer patchBuffer(this, executablePool, 0);
     // We can't continue if we can't call a function!
     if (!patchBuffer.allocationSuccessful())
         CRASH();
 
     patchBuffer.link(nativeCall, FunctionPtr(func));
+    patchBuffer.finalizeCode();
 
-    CodeRef finalCode = patchBuffer.finalizeCode();
-    return trampolineAt(finalCode, nativeCallThunk);
+    return patchBuffer.trampolineAt(nativeCallThunk);
 }
 
 void JIT::emit_op_mov(Instruction* currentInstruction)
diff --git a/JavaScriptCore/jit/JITPropertyAccess.cpp b/JavaScriptCore/jit/JITPropertyAccess.cpp
index 540e079..6b2a2fe 100644
--- a/JavaScriptCore/jit/JITPropertyAccess.cpp
+++ b/JavaScriptCore/jit/JITPropertyAccess.cpp
@@ -77,7 +77,7 @@ JIT::CodePtr JIT::stringGetByValStubGenerator(JSGlobalData* globalData, Executab
     jit.move(Imm32(0), regT0);
     jit.ret();
     
-    LinkBuffer patchBuffer(&jit, pool);
+    LinkBuffer patchBuffer(&jit, pool, 0);
     // We can't run without the JIT trampolines!
     if (!patchBuffer.allocationSuccessful())
         CRASH();
@@ -652,7 +652,7 @@ bool JIT::privateCompilePutByIdTransition(StructureStubInfo* stubInfo, Structure
     restoreArgumentReferenceForTrampoline();
     Call failureCall = tailRecursiveCall();
 
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -743,7 +743,7 @@ bool JIT::privateCompilePatchGetArrayLength(StructureStubInfo* stubInfo, ReturnA
     emitFastArithIntToImmNoCheck(regT2, regT0);
     Jump success = jump();
 
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -809,7 +809,7 @@ bool JIT::privateCompileGetByIdProto(StructureStubInfo* stubInfo, Structure* str
     } else
         compileGetDirectOffset(protoObject, regT1, regT0, cachedOffset);
     Jump success = jump();
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -872,7 +872,7 @@ bool JIT::privateCompileGetByIdSelfList(StructureStubInfo* stubInfo, Structure*
         compileGetDirectOffset(regT0, regT0, structure, cachedOffset);
     Jump success = jump();
 
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -950,7 +950,7 @@ bool JIT::privateCompileGetByIdProtoList(StructureStubInfo* stubInfo, Structure*
 
     Jump success = jump();
 
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -1026,7 +1026,7 @@ bool JIT::privateCompileGetByIdChainList(StructureStubInfo* stubInfo, Structure*
         compileGetDirectOffset(protoObject, regT1, regT0, cachedOffset);
     Jump success = jump();
 
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
     
@@ -1100,7 +1100,7 @@ bool JIT::privateCompileGetByIdChain(StructureStubInfo* stubInfo, Structure* str
         compileGetDirectOffset(protoObject, regT1, regT0, cachedOffset);
     Jump success = jump();
 
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
diff --git a/JavaScriptCore/jit/JITPropertyAccess32_64.cpp b/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
index bbffd7d..9239641 100644
--- a/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
+++ b/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
@@ -295,7 +295,7 @@ JIT::CodePtr JIT::stringGetByValStubGenerator(JSGlobalData* globalData, Executab
     jit.move(Imm32(0), regT0);
     jit.ret();
     
-    LinkBuffer patchBuffer(&jit, pool);
+    LinkBuffer patchBuffer(&jit, pool, 0);
     // We can't run without the JIT trampolines!
     if (!patchBuffer.allocationSuccessful())
         CRASH();
@@ -656,7 +656,7 @@ bool JIT::privateCompilePutByIdTransition(StructureStubInfo* stubInfo, Structure
     restoreArgumentReferenceForTrampoline();
     Call failureCall = tailRecursiveCall();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -752,7 +752,7 @@ bool JIT::privateCompilePatchGetArrayLength(StructureStubInfo* stubInfo, ReturnA
     move(Imm32(JSValue::Int32Tag), regT1);
     Jump success = jump();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -819,7 +819,7 @@ bool JIT::privateCompileGetByIdProto(StructureStubInfo* stubInfo, Structure* str
     
     Jump success = jump();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -886,7 +886,7 @@ bool JIT::privateCompileGetByIdSelfList(StructureStubInfo* stubInfo, Structure*
 
     Jump success = jump();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -964,7 +964,7 @@ bool JIT::privateCompileGetByIdProtoList(StructureStubInfo* stubInfo, Structure*
     
     Jump success = jump();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -1041,7 +1041,7 @@ bool JIT::privateCompileGetByIdChainList(StructureStubInfo* stubInfo, Structure*
 
     Jump success = jump();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
@@ -1115,7 +1115,7 @@ bool JIT::privateCompileGetByIdChain(StructureStubInfo* stubInfo, Structure* str
         compileGetDirectOffset(protoObject, regT2, regT1, regT0, cachedOffset);
     Jump success = jump();
     
-    LinkBuffer patchBuffer(this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(this, m_codeBlock->executablePool(), 0);
     if (!patchBuffer.allocationSuccessful())
         return false;
 
diff --git a/JavaScriptCore/jit/SpecializedThunkJIT.h b/JavaScriptCore/jit/SpecializedThunkJIT.h
index e14d6a9..ba95498 100644
--- a/JavaScriptCore/jit/SpecializedThunkJIT.h
+++ b/JavaScriptCore/jit/SpecializedThunkJIT.h
@@ -129,7 +129,7 @@ namespace JSC {
         
         MacroAssemblerCodePtr finalize(MacroAssemblerCodePtr fallback)
         {
-            LinkBuffer patchBuffer(this, m_pool.get());
+            LinkBuffer patchBuffer(this, m_pool.get(), 0);
             // We can't continue if we can't call a function!
             if (!patchBuffer.allocationSuccessful())
                 CRASH();
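
Every LinkBuffer in these files now takes a third constructor argument, which is 0 at all of the call sites touched here. Judging from the new m_linkerOffset member and the reparse changes in Executable.cpp below, it appears to be a base used only when code is regenerated to line up with previously emitted code; fresh compilations just pass 0. A hedged sketch of that constructor shape, with assumed semantics and hypothetical names:

    class LinkBufferShapeSketch {
    public:
        // offsetBase == 0: lay the code out freely (the common case above).
        // Non-zero: assumed to anchor reported offsets to code generated
        // earlier, which is what the reparse paths seem to want.
        LinkBufferShapeSketch(void* assembler, void* executablePool, void* offsetBase)
            : m_assembler(assembler)
            , m_executablePool(executablePool)
            , m_offsetBase(offsetBase)
        {
        }

    private:
        void* m_assembler;
        void* m_executablePool;
        void* m_offsetBase;
    };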
diff --git a/JavaScriptCore/runtime/Executable.cpp b/JavaScriptCore/runtime/Executable.cpp
index 856f92c..058a091 100644
--- a/JavaScriptCore/runtime/Executable.cpp
+++ b/JavaScriptCore/runtime/Executable.cpp
@@ -304,7 +304,7 @@ PassOwnPtr<ExceptionInfo> FunctionExecutable::reparseExceptionInfo(JSGlobalData*
 
 #if ENABLE(JIT)
     if (globalData->canUseJIT()) {
-        JITCode newJITCode = JIT::compile(globalData, newCodeBlock.get());
+        JITCode newJITCode = JIT::compile(globalData, newCodeBlock.get(), 0, codeBlock->m_isConstructor ? generatedJITCodeForConstruct().start() : generatedJITCodeForCall().start());
         if (!newJITCode) {
             globalData->functionCodeBlockBeingReparsed = 0;
             return PassOwnPtr<ExceptionInfo>();
@@ -337,7 +337,7 @@ PassOwnPtr<ExceptionInfo> EvalExecutable::reparseExceptionInfo(JSGlobalData* glo
 
 #if ENABLE(JIT)
     if (globalData->canUseJIT()) {
-        JITCode newJITCode = JIT::compile(globalData, newCodeBlock.get());
+        JITCode newJITCode = JIT::compile(globalData, newCodeBlock.get(), 0, generatedJITCodeForCall().start());
         if (!newJITCode) {
             globalData->functionCodeBlockBeingReparsed = 0;
             return PassOwnPtr<ExceptionInfo>();
diff --git a/JavaScriptCore/wtf/FastMalloc.cpp b/JavaScriptCore/wtf/FastMalloc.cpp
index c44b5de..c440417 100644
--- a/JavaScriptCore/wtf/FastMalloc.cpp
+++ b/JavaScriptCore/wtf/FastMalloc.cpp
@@ -4454,10 +4454,10 @@ extern "C" {
 malloc_introspection_t jscore_fastmalloc_introspection = { &FastMallocZone::enumerate, &FastMallocZone::goodSize, &FastMallocZone::check, &FastMallocZone::print,
     &FastMallocZone::log, &FastMallocZone::forceLock, &FastMallocZone::forceUnlock, &FastMallocZone::statistics
 
-#if !defined(BUILDING_ON_TIGER) && !defined(BUILDING_ON_LEOPARD) && !OS(IOS)
+#if !defined(BUILDING_ON_TIGER) && !defined(BUILDING_ON_LEOPARD)
     , 0 // zone_locked will not be called on the zone unless it advertises itself as version five or higher.
 #endif
-#if !defined(BUILDING_ON_TIGER) && !defined(BUILDING_ON_LEOPARD) && !defined(BUILDING_ON_SNOW_LEOPARD) && !OS(IOS)
+#if !defined(BUILDING_ON_TIGER) && !defined(BUILDING_ON_LEOPARD) && !defined(BUILDING_ON_SNOW_LEOPARD)
     , 0, 0, 0, 0 // These members will not be used unless the zone advertises itself as version seven or higher.
 #endif
 
diff --git a/JavaScriptCore/wtf/Platform.h b/JavaScriptCore/wtf/Platform.h
index f40f834..3a96aac 100644
--- a/JavaScriptCore/wtf/Platform.h
+++ b/JavaScriptCore/wtf/Platform.h
@@ -1094,4 +1094,8 @@ on MinGW. See https://bugs.webkit.org/show_bug.cgi?id=29268 */
 #define WTF_USE_PREEMPT_GEOLOCATION_PERMISSION 1
 #endif
 
+#if CPU(ARM_THUMB2)
+#define ENABLE_BRANCH_COMPACTION 1
+#endif
+
 #endif /* WTF_Platform_h */
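
Branch compaction is switched on only for ARMv7/Thumb-2, matching the ARMv7 assembler changes earlier in this patch. Code elsewhere would test the flag through WebKit's ENABLE() wrapper; a standalone approximation that checks the raw define directly:

    #include <cstdio>

    // Approximation only: the real code tests ENABLE(BRANCH_COMPACTION), which
    // expands to a check of ENABLE_BRANCH_COMPACTION via wtf/Platform.h.
    #if defined(ENABLE_BRANCH_COMPACTION) && ENABLE_BRANCH_COMPACTION
    static const bool branchCompactionEnabled = true;
    #else
    static const bool branchCompactionEnabled = false;
    #endif

    int main()
    {
        std::printf("branch compaction: %s\n", branchCompactionEnabled ? "on" : "off");
        return 0;
    }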
diff --git a/JavaScriptCore/yarr/RegexJIT.cpp b/JavaScriptCore/yarr/RegexJIT.cpp
index c01378a..5a53ced 100644
--- a/JavaScriptCore/yarr/RegexJIT.cpp
+++ b/JavaScriptCore/yarr/RegexJIT.cpp
@@ -1472,7 +1472,7 @@ public:
             return;
         }
 
-        LinkBuffer patchBuffer(this, executablePool.release());
+        LinkBuffer patchBuffer(this, executablePool.release(), 0);
         if (!patchBuffer.allocationSuccessful()) {
             m_shouldFallBack = true;
             return;

-- 
WebKit Debian packaging


