[SCM] WebKit Debian packaging branch, webkit-1.2, updated. upstream/1.1.90-6072-g9a69373

ggaren at apple.com ggaren at apple.com
Thu Apr 8 00:35:00 UTC 2010


The following commit has been merged in the webkit-1.2 branch:
commit 6a429819b14c76c939e26013df8e509f550d4416
Author: ggaren at apple.com <ggaren at apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date:   Mon Dec 14 08:13:24 2009 +0000

    JavaScriptCore: Changed GC from mark-sweep to mark-allocate.
    
    Reviewed by Sam Weinig.
    
    Added WeakGCMap to keep WebCore blissfully ignorant about objects that
    have become garbage but haven't run their destructors yet.
    
    1% SunSpider speedup.
    7.6% v8 speedup (37% splay speedup).
    17% speedup on bench-alloc-nonretained.js.
    18% speedup on bench-alloc-retained.js.
    
    * API/JSBase.cpp:
    (JSGarbageCollect):
    * API/JSContextRef.cpp:
    * JavaScriptCore.exp:
    * JavaScriptCore.xcodeproj/project.pbxproj: Updated for renames and new
    files.
    
    * debugger/Debugger.cpp:
    (JSC::Debugger::recompileAllJSFunctions): Updated to use the Collector
    iterator abstraction.
    
    * jsc.cpp:
    (functionGC): Updated for rename.
    
    * runtime/Collector.cpp: Slightly reduced the number of allocations per
    collection, so that small workloads only allocate one collector block,
    rather than two.
    
    (JSC::Heap::Heap): Updated to use the new allocateBlock function.
    
    (JSC::Heap::destroy): Updated to use the new freeBlocks function.
    
    (JSC::Heap::allocateBlock): New function to initialize a block when
    allocating it.
    
    (JSC::Heap::freeBlock): Consolidated the responsibility for running
    destructors into this function.
    
    (JSC::Heap::freeBlocks): Updated to use freeBlock.
    
    (JSC::Heap::recordExtraCost): Sweep the heap in this reporting function,
    so that allocation, which is more common, doesn't have to check extraCost.
    
    (JSC::Heap::heapAllocate): Run destructors right before recycling a
    garbage cell. This has better cache utilization than a separate sweep phase.
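
    A minimal sketch of that fast path (member and constant names simplified;
    the committed loop is Heap::heapAllocate in the Collector.cpp hunk below):

        // Sketch only: walk the mark bitmap; any unmarked cell is garbage
        // and can be recycled on the spot, with no separate sweep pass.
        while (heap.nextBlock < heap.usedBlocks) {
            CollectorBlock* block = heap.blocks[heap.nextBlock];
            while (heap.nextCell < cellsPerBlock) {
                size_t i = heap.nextCell++;
                if (block->marked.get(i))
                    continue; // live object (or the always-marked sentinel last cell)
                JSCell* garbage = reinterpret_cast<JSCell*>(block->cells + i);
                garbage->~JSCell(); // run the destructor lazily, right before reuse
                return garbage;     // recycle the cell for the new object
            }
            heap.nextCell = 0;
            ++heap.nextBlock;
        }
        reset(); // no garbage left: mark live objects, resize the heap, retry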
    
    (JSC::Heap::resizeBlocks):
    (JSC::Heap::growBlocks):
    (JSC::Heap::shrinkBlocks): New set of functions for managing the size of
    the heap, now that the heap doesn't maintain any information about its
    size.
    
    (JSC::isPointerAligned):
    (JSC::isHalfCellAligned):
    (JSC::isPossibleCell):
    (JSC::isCellAligned):
    (JSC::Heap::markConservatively): Cleaned up this code a bit.
    
    (JSC::Heap::clearMarkBits):
    (JSC::Heap::markedCells): Some helper functions for examining the mark
    bitmap.
    
    (JSC::Heap::sweep): Simplified this function by using a DeadObjectIterator.
    
    (JSC::Heap::markRoots): Reordered some operations for clarity.
    
    (JSC::Heap::objectCount):
    (JSC::Heap::addToStatistics):
    (JSC::Heap::statistics): Rewrote these functions to calculate an object
    count on demand, since the heap doesn't maintain this information by
    itself.
    
    (JSC::Heap::reset): New function for resetting the heap once we've
    exhausted heap space.
    
    (JSC::Heap::collectAllGarbage): This function matches the old collect()
    behavior, but it's now an uncommon function used only by API.
    
    * runtime/Collector.h:
    (JSC::CollectorBitmap::count):
    (JSC::CollectorBitmap::isEmpty): Added some helper functions for managing
    the collector mark bitmap.
    
    (JSC::Heap::reportExtraMemoryCost): Changed reporting from cell equivalents
    to bytes, so it's easier to understand.
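
    For example, at a hypothetical call site holding an external buffer
    (externalBuffer is illustrative):

        // Report the cost in plain bytes now, not in cell equivalents.
        heap->reportExtraMemoryCost(externalBuffer.size() * sizeof(UChar));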
    
    * runtime/CollectorHeapIterator.h:
    (JSC::CollectorHeapIterator::CollectorHeapIterator):
    (JSC::CollectorHeapIterator::operator!=):
    (JSC::CollectorHeapIterator::operator*):
    (JSC::CollectorHeapIterator::advance):
    (JSC::::LiveObjectIterator):
    (JSC::::operator):
    (JSC::::DeadObjectIterator):
    (JSC::::ObjectIterator): New iterators for encapsulating details about
    heap layout, and what's live and dead on the heap.
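
    The usage pattern these iterators enable, as in the Debugger.cpp hunk below
    (processCell is a hypothetical stand-in for the per-object work):

        LiveObjectIterator<PrimaryHeap> it = globalData->heap.primaryHeapBegin();
        LiveObjectIterator<PrimaryHeap> end = globalData->heap.primaryHeapEnd();
        for ( ; it != end; ++it)
            processCell(*it); // *it is a live JSCell*; dead cells are skipped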
    
    * runtime/JSArray.cpp:
    (JSC::JSArray::putSlowCase):
    (JSC::JSArray::increaseVectorLength): Delay reporting extra cost until
    we're fully constructed, so the heap mark phase won't visit us in an
    invalid state.
    
    * runtime/JSCell.h:
    (JSC::JSCell::):
    (JSC::JSCell::createDummyStructure):
    (JSC::JSCell::JSCell):
    * runtime/JSGlobalData.cpp:
    (JSC::JSGlobalData::JSGlobalData):
    * runtime/JSGlobalData.h: Added a dummy cell to simplify allocation logic.
    
    * runtime/JSString.h:
    (JSC::jsSubstring): Don't report extra cost for substrings, since they
    share a buffer that's already reported extra cost.
    
    * runtime/Tracing.d:
    * runtime/Tracing.h: Changed these dtrace hooks not to report object
    counts, since they're no longer cheap to compute.
    
    * runtime/UString.h: Updated for renames.
    
    * runtime/WeakGCMap.h: Added.
    (JSC::WeakGCMap::isEmpty):
    (JSC::WeakGCMap::uncheckedGet):
    (JSC::WeakGCMap::uncheckedBegin):
    (JSC::WeakGCMap::uncheckedEnd):
    (JSC::::get):
    (JSC::::take):
    (JSC::::set):
    (JSC::::uncheckedRemove): Mentioned above.
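
    Roughly how a client is expected to use it, per the description above (the
    template arguments and helper names here are illustrative, not the
    committed WebCore typedefs):

        WeakGCMap<Node*, JSNode*> wrapperCache;

        JSNode* getOrCreateWrapper(Node* node)
        {
            // get() returns 0 once the wrapper has become garbage, even if its
            // destructor hasn't run yet, so callers never see a dead cell.
            if (JSNode* wrapper = wrapperCache.get(node))
                return wrapper;
            JSNode* wrapper = createWrapper(node); // hypothetical helper
            wrapperCache.set(node, wrapper);
            return wrapper;
        }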
    
    * wtf/StdLibExtras.h:
    (WTF::bitCount): Added a bit population count function, so the heap can
    count live objects to fulfill statistics questions.
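
    The Windows build-fix entries in the ChangeLog hunk below switch bitCount
    to take an unsigned; a standard 32-bit population count along those lines
    looks like this (a sketch, not necessarily the committed body):

        inline size_t bitCount(unsigned bits)
        {
            // Classic SWAR popcount: add adjacent bits in pairs, then nibbles,
            // then sum the per-byte counts into the top byte.
            bits = bits - ((bits >> 1) & 0x55555555);
            bits = (bits & 0x33333333) + ((bits >> 2) & 0x33333333);
            return (((bits + (bits >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
        }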
    
    JavaScriptGlue: Changed GC from mark-sweep to mark-allocate.
    
    Reviewed by Sam Weinig.
    
    * JavaScriptGlue.cpp:
    (JSCollect): Updated for rename. Fixed a bug where JSGlue did not check
    for an in-progress collection, and so could trigger nested GC calls.
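
    The JavaScriptGlue hunk itself isn't included in this excerpt, but the
    guard described is presumably the same pattern the C API uses (see the
    JSBase.cpp hunk below):

        // Skip collection when the heap is already allocating or collecting,
        // so JSCollect can't re-enter the collector.
        if (!heap->isBusy())
            heap->collectAllGarbage();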
    
    WebCore: Changed GC from mark-sweep to mark-allocate.
    
    Reviewed by Sam Weinig.
    
    * ForwardingHeaders/runtime/WeakGCMap.h: Added.
    * bindings/js/GCController.cpp:
    (WebCore::collect):
    (WebCore::GCController::gcTimerFired):
    (WebCore::GCController::garbageCollectNow): Updated for rename.
    
    * bindings/js/JSDOMBinding.cpp:
    (WebCore::removeWrappers):
    (WebCore::hasCachedDOMObjectWrapperUnchecked):
    (WebCore::hasCachedDOMObjectWrapper):
    (WebCore::hasCachedDOMNodeWrapperUnchecked):
    (WebCore::forgetDOMObject):
    (WebCore::forgetDOMNode):
    (WebCore::isObservableThroughDOM):
    (WebCore::markDOMNodesForDocument):
    (WebCore::markDOMObjectWrapper):
    (WebCore::markDOMNodeWrapper):
    * bindings/js/JSDOMBinding.h: Changed DOM wrapper maps to be WeakGCMaps.
    Don't ASSERT that an item must be in the WeakGCMap when its destructor
    runs, since it might have been overwritten in the map first.
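
    Concretely, a wrapper's destructor now removes its own entry with the
    unchecked variant, because a newer wrapper may already occupy its slot
    (hypothetical names; see the forgetDOMObject/forgetDOMNode changes listed
    above):

        // Presumably removes the entry only if it still maps to this exact
        // wrapper; a replaced entry is silently left alone -- no ASSERT.
        wrapperCache.uncheckedRemove(node, wrapper);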
    
    * bindings/js/JSDocumentCustom.cpp:
    (WebCore::toJS): Changed Document from a DOM object wrapper to a DOM node
    wrapper, to simplify some code.
    
    * bindings/js/JSInspectedObjectWrapper.cpp:
    (WebCore::JSInspectedObjectWrapper::JSInspectedObjectWrapper):
    (WebCore::JSInspectedObjectWrapper::~JSInspectedObjectWrapper):
    * bindings/js/JSInspectorCallbackWrapper.cpp: Use a WeakGCMap for these
    wrappers.
    
    * bindings/js/JSNodeCustom.cpp:
    (WebCore::JSNode::markChildren): Updated for WeakGCMap and Document using
    a DOM node wrapper instead of a DOM object wrapper.
    
    * bindings/js/JSSVGPODTypeWrapper.h:
    (WebCore::JSSVGDynamicPODTypeWrapperCache::wrapperMap):
    (WebCore::JSSVGDynamicPODTypeWrapperCache::lookupOrCreateWrapper):
    (WebCore::JSSVGDynamicPODTypeWrapperCache::forgetWrapper):
    (WebCore::::~JSSVGDynamicPODTypeWrapper): Shined a small beam of sanity light
    on this code. Use hashtable-based lookup in JSSVGPODTypeWrapper.h instead
    of linear lookup through iteration, since that's what hashtables were
    invented for. Make JSSVGPODTypeWrapper.h responsible for removing itself
    from the table, instead of its JS wrapper, to decouple these objects from
    GC, and because these objects are refCounted, not solely owned by their
    JS wrappers.
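
    A sketch of the resulting shape, with illustrative type names rather than
    the committed ones:

        // O(1) cache keyed by the wrapped POD object. The ref-counted wrapper,
        // not its JS proxy, removes itself from this table when it dies.
        HashMap<PODType*, DynamicWrapper*> m_dynamicWrapperCache;

        DynamicWrapper* lookupOrCreateWrapper(PODType* pod)
        {
            DynamicWrapper* wrapper = m_dynamicWrapperCache.get(pod);
            if (!wrapper) {
                wrapper = new DynamicWrapper(pod);
                m_dynamicWrapperCache.set(pod, wrapper);
            }
            return wrapper;
        }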
    
    * bindings/scripts/CodeGeneratorJS.pm:
    * dom/Document.h: Adopted changes above.
    
    
    
    git-svn-id: http://svn.webkit.org/repository/webkit/trunk@52082 268f45cc-cd09-0410-ab3c-d52691b4dbfc

diff --git a/JavaScriptCore/API/JSBase.cpp b/JavaScriptCore/API/JSBase.cpp
index 4a32d35..6c362c4 100644
--- a/JavaScriptCore/API/JSBase.cpp
+++ b/JavaScriptCore/API/JSBase.cpp
@@ -99,7 +99,7 @@ void JSGarbageCollect(JSContextRef ctx)
     JSLock lock(globalData.isSharedInstance ? LockForReal : SilenceAssertionsOnly);
 
     if (!globalData.heap.isBusy())
-        globalData.heap.collect();
+        globalData.heap.collectAllGarbage();
 
     // FIXME: Perhaps we should trigger a second mark and sweep
     // once the garbage collector is done if this is called when
diff --git a/JavaScriptCore/API/JSContextRef.cpp b/JavaScriptCore/API/JSContextRef.cpp
index e6626b7..92ad396 100644
--- a/JavaScriptCore/API/JSContextRef.cpp
+++ b/JavaScriptCore/API/JSContextRef.cpp
@@ -133,7 +133,7 @@ void JSGlobalContextRelease(JSGlobalContextRef ctx)
         ASSERT(!globalData.heap.isBusy());
         globalData.heap.destroy();
     } else
-        globalData.heap.collect();
+        globalData.heap.collectAllGarbage();
 
     globalData.deref();
 }
diff --git a/JavaScriptCore/ChangeLog b/JavaScriptCore/ChangeLog
index 4fcef93..40359fe 100644
--- a/JavaScriptCore/ChangeLog
+++ b/JavaScriptCore/ChangeLog
@@ -1,3 +1,163 @@
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Reviewed by Sam Weinig.
+        
+        Changed GC from mark-sweep to mark-allocate.
+        
+        Added WeakGCMap to keep WebCore blissfully ignorant about objects that
+        have become garbage but haven't run their destructors yet.
+        
+        1% SunSpider speedup.
+        7.6% v8 speedup (37% splay speedup).
+        17% speedup on bench-alloc-nonretained.js.
+        18% speedup on bench-alloc-retained.js.
+
+        * API/JSBase.cpp:
+        (JSGarbageCollect):
+        * API/JSContextRef.cpp:
+        * JavaScriptCore.exp:
+        * JavaScriptCore.xcodeproj/project.pbxproj: Updated for renames and new
+        files.
+
+        * debugger/Debugger.cpp:
+        (JSC::Debugger::recompileAllJSFunctions): Updated to use the Collector
+        iterator abstraction.
+
+        * jsc.cpp:
+        (functionGC): Updated for rename.
+
+        * runtime/Collector.cpp: Slightly reduced the number of allocations per
+        collection, so that small workloads only allocate one collector block,
+        rather than two.
+
+        (JSC::Heap::Heap): Updated to use the new allocateBlock function.
+
+        (JSC::Heap::destroy): Updated to use the new freeBlocks function.
+
+        (JSC::Heap::allocateBlock): New function to initialize a block when
+        allocating it.
+
+        (JSC::Heap::freeBlock): Consolidated the responsibility for running
+        destructors into this function.
+
+        (JSC::Heap::freeBlocks): Updated to use freeBlock.
+
+        (JSC::Heap::recordExtraCost): Sweep the heap in this reporting function,
+        so that allocation, which is more common, doesn't have to check extraCost.
+
+        (JSC::Heap::heapAllocate): Run destructors right before recycling a
+        garbage cell. This has better cache utilization than a separate sweep phase.
+
+        (JSC::Heap::resizeBlocks):
+        (JSC::Heap::growBlocks):
+        (JSC::Heap::shrinkBlocks): New set of functions for managing the size of
+        the heap, now that the heap doesn't maintain any information about its
+        size.
+
+        (JSC::isPointerAligned):
+        (JSC::isHalfCellAligned):
+        (JSC::isPossibleCell):
+        (JSC::isCellAligned):
+        (JSC::Heap::markConservatively): Cleaned up this code a bit.
+
+        (JSC::Heap::clearMarkBits):
+        (JSC::Heap::markedCells): Some helper functions for examining the mark
+        bitmap.
+
+        (JSC::Heap::sweep): Simplified this function by using a DeadObjectIterator.
+
+        (JSC::Heap::markRoots): Reordered some operations for clarity.
+
+        (JSC::Heap::objectCount):
+        (JSC::Heap::addToStatistics):
+        (JSC::Heap::statistics): Rewrote these functions to calculate an object
+        count on demand, since the heap doesn't maintain this information by 
+        itself.
+
+        (JSC::Heap::reset): New function for resetting the heap once we've
+        exhausted heap space.
+
+        (JSC::Heap::collectAllGarbage): This function matches the old collect()
+        behavior, but it's now an uncommon function used only by API.
+
+        * runtime/Collector.h:
+        (JSC::CollectorBitmap::count):
+        (JSC::CollectorBitmap::isEmpty): Added some helper functions for managing
+        the collector mark bitmap.
+
+        (JSC::Heap::reportExtraMemoryCost): Changed reporting from cell equivalents
+        to bytes, so it's easier to understand.
+        
+        * runtime/CollectorHeapIterator.h:
+        (JSC::CollectorHeapIterator::CollectorHeapIterator):
+        (JSC::CollectorHeapIterator::operator!=):
+        (JSC::CollectorHeapIterator::operator*):
+        (JSC::CollectorHeapIterator::advance):
+        (JSC::::LiveObjectIterator):
+        (JSC::::operator):
+        (JSC::::DeadObjectIterator):
+        (JSC::::ObjectIterator): New iterators for encapsulating details about
+        heap layout, and what's live and dead on the heap.
+
+        * runtime/JSArray.cpp:
+        (JSC::JSArray::putSlowCase):
+        (JSC::JSArray::increaseVectorLength): Delay reporting extra cost until
+        we're fully constructed, so the heap mark phase won't visit us in an
+        invalid state.
+
+        * runtime/JSCell.h:
+        (JSC::JSCell::):
+        (JSC::JSCell::createDummyStructure):
+        (JSC::JSCell::JSCell):
+        * runtime/JSGlobalData.cpp:
+        (JSC::JSGlobalData::JSGlobalData):
+        * runtime/JSGlobalData.h: Added a dummy cell to simplify allocation logic.
+
+        * runtime/JSString.h:
+        (JSC::jsSubstring): Don't report extra cost for substrings, since they
+        share a buffer that's already reported extra cost.
+
+        * runtime/Tracing.d:
+        * runtime/Tracing.h: Changed these dtrace hooks not to report object
+        counts, since they're no longer cheap to compute.
+
+        * runtime/UString.h: Updated for renames.
+
+        * runtime/WeakGCMap.h: Added.
+        (JSC::WeakGCMap::isEmpty):
+        (JSC::WeakGCMap::uncheckedGet):
+        (JSC::WeakGCMap::uncheckedBegin):
+        (JSC::WeakGCMap::uncheckedEnd):
+        (JSC::::get):
+        (JSC::::take):
+        (JSC::::set):
+        (JSC::::uncheckedRemove): Mentioned above.
+
+        * wtf/StdLibExtras.h:
+        (WTF::bitCount): Added a bit population count function, so the heap can
+        count live objects to fulfill statistics questions.
+
+The very last cell in the block is not allocated -- should not be marked.
+
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Windows build fix: Export some new symbols.
+
+        * JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def:
+
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Windows build fix: Removed some old exports.
+
+        * JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def:
+
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Windows build fix: Use unsigned instead of uint32_t to avoid dependencies.
+
+        * wtf/StdLibExtras.h:
+        (WTF::bitCount):
+
 2009-12-13  Gavin Barraclough  <barraclough at apple.com>
 
         Reviewed by NOBODY (speculative Windows build fix).
diff --git a/JavaScriptCore/JavaScriptCore.exp b/JavaScriptCore/JavaScriptCore.exp
index 1c91d36..4e3cc75 100644
--- a/JavaScriptCore/JavaScriptCore.exp
+++ b/JavaScriptCore/JavaScriptCore.exp
@@ -180,16 +180,15 @@ __ZN3JSC24createStackOverflowErrorEPNS_9ExecStateE
 __ZN3JSC25evaluateInGlobalCallFrameERKNS_7UStringERNS_7JSValueEPNS_14JSGlobalObjectE
 __ZN3JSC35createInterruptedExecutionExceptionEPNS_12JSGlobalDataE
 __ZN3JSC3NaNE
-__ZN3JSC4Heap11objectCountEv
 __ZN3JSC4Heap14primaryHeapEndEv
 __ZN3JSC4Heap15recordExtraCostEm
 __ZN3JSC4Heap16primaryHeapBeginEv
+__ZN3JSC4Heap17collectAllGarbageEv
 __ZN3JSC4Heap17globalObjectCountEv
 __ZN3JSC4Heap20protectedObjectCountEv
 __ZN3JSC4Heap25protectedObjectTypeCountsEv
 __ZN3JSC4Heap26protectedGlobalObjectCountEv
 __ZN3JSC4Heap6isBusyEv
-__ZN3JSC4Heap7collectEv
 __ZN3JSC4Heap7destroyEv
 __ZN3JSC4Heap7protectENS_7JSValueE
 __ZN3JSC4Heap8allocateEm
@@ -368,6 +367,7 @@ __ZNK3JSC18PropertyDescriptor6getterEv
 __ZNK3JSC18PropertyDescriptor6setterEv
 __ZNK3JSC18PropertyDescriptor8writableEv
 __ZNK3JSC4Heap10statisticsEv
+__ZNK3JSC4Heap11objectCountEv
 __ZNK3JSC6JSCell11toPrimitiveEPNS_9ExecStateENS_22PreferredPrimitiveTypeE
 __ZNK3JSC6JSCell12toThisObjectEPNS_9ExecStateE
 __ZNK3JSC6JSCell12toThisStringEPNS_9ExecStateE
diff --git a/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def b/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def
index aae8118..a04b3e6 100644
--- a/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def
+++ b/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def
@@ -62,7 +62,7 @@ EXPORTS
     ?classInfo@JSCell@JSC@@UBEPBUClassInfo@2@XZ
     ?className@JSObject@JSC@@UBE?AVUString@2@XZ
     ?collate@Collator@WTF@@QBE?AW4Result@12@PB_WI0I@Z
-    ?collect@Heap@JSC@@QAE_NXZ
+    ?collectAllGarbage@Heap@JSC@@QAEXXZ
     ?computeHash@Rep@UString@JSC@@SAIPBDH@Z
     ?computeHash@Rep@UString@JSC@@SAIPB_WH@Z
     ?configurable@PropertyDescriptor@JSC@@QBE_NXZ
@@ -115,8 +115,8 @@ EXPORTS
     ?detach@Debugger@JSC@@UAEXPAVJSGlobalObject@2@@Z
     ?detachThread@WTF@@YAXI@Z
     ?didTimeOut@TimeoutChecker@JSC@@QAE_NPAVExecState@2@@Z
-    ?dumpSampleData@JSGlobalData@JSC@@QAEXPAVExecState@2@@Z
     ?doubleToStringInJavaScriptFormat@WTF@@YAXNQADPAI@Z
+    ?dumpSampleData@JSGlobalData@JSC@@QAEXPAVExecState@2@@Z
     ?enumerable@PropertyDescriptor@JSC@@QBE_NXZ
     ?equal@Identifier@JSC@@SA_NPBURep@UString@2@PBD@Z
     ?equal@JSC@@YA_NPBURep@UString@1@0@Z
@@ -200,12 +200,10 @@ EXPORTS
     ?materializePropertyMap@Structure@JSC@@AAEXXZ
     ?name@InternalFunction@JSC@@QAEABVUString@2@PAVExecState@2@@Z
     ?nonInlineNaN@JSC@@YANXZ
-    ?objectCount@Heap@JSC@@QAEIXZ
+    ?objectCount@Heap@JSC@@QBEIXZ
     ?objectProtoFuncToString@JSC@@YI?AVJSValue@1@PAVExecState@1@PAVJSObject@1@V21@ABVArgList@1@@Z
     ?parse@Parser@JSC@@AAEXPAVJSGlobalData@2@PAHPAVUString@2@@Z
     ?parseDateFromNullTerminatedCharacters@WTF@@YANPBD@Z
-    ?primaryHeapBegin@Heap@JSC@@QAE?AV?$CollectorHeapIterator@$0A@@2@XZ
-    ?primaryHeapEnd@Heap@JSC@@QAE?AV?$CollectorHeapIterator@$0A@@2@XZ
     ?profiler@Profiler@JSC@@SAPAV12@XZ
     ?protect@Heap@JSC@@QAEXVJSValue@2@@Z
     ?protectedGlobalObjectCount@Heap@JSC@@QAEIXZ
diff --git a/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index 1626e50..087bdaf 100644
--- a/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -167,6 +167,7 @@
 		14BD59C50A3E8F9F00BAF59C /* JavaScriptCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 932F5BD90822A1C700736975 /* JavaScriptCore.framework */; };
 		14BD5A300A3E91F600BAF59C /* JSContextRef.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BD5A290A3E91F600BAF59C /* JSContextRef.cpp */; };
 		14BD5A320A3E91F600BAF59C /* JSValueRef.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BD5A2B0A3E91F600BAF59C /* JSValueRef.cpp */; };
+		14BFCE6910CDB1FC00364CCE /* WeakGCMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BFCE6810CDB1FC00364CCE /* WeakGCMap.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14C5242B0F5355E900BA3D04 /* JITStubs.h in Headers */ = {isa = PBXBuildFile; fileRef = 14A6581A0F4E36F4000150FD /* JITStubs.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14E9D17B107EC469004DDA21 /* JSGlobalObjectFunctions.cpp in Sources */ = {isa = PBXBuildFile; fileRef = BC756FC60E2031B200DE7D12 /* JSGlobalObjectFunctions.cpp */; };
 		14F3488F0E95EF8A003648BC /* CollectorHeapIterator.h in Headers */ = {isa = PBXBuildFile; fileRef = 14F3488E0E95EF8A003648BC /* CollectorHeapIterator.h */; settings = {ATTRIBUTES = (); }; };
@@ -637,6 +638,7 @@
 		14BD5A2A0A3E91F600BAF59C /* JSContextRef.h */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.c.h; path = JSContextRef.h; sourceTree = "<group>"; };
 		14BD5A2B0A3E91F600BAF59C /* JSValueRef.cpp */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.cpp.cpp; path = JSValueRef.cpp; sourceTree = "<group>"; };
 		14BD5A2D0A3E91F600BAF59C /* testapi.c */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.c.c; name = testapi.c; path = API/tests/testapi.c; sourceTree = "<group>"; };
+		14BFCE6810CDB1FC00364CCE /* WeakGCMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WeakGCMap.h; sourceTree = "<group>"; };
 		14D792640DAA03FB001A9F05 /* RegisterFile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterFile.h; sourceTree = "<group>"; };
 		14D857740A4696C80032146C /* testapi.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; name = testapi.js; path = API/tests/testapi.js; sourceTree = "<group>"; };
 		14DA818E0D99FD2000B0A4FB /* JSActivation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSActivation.h; sourceTree = "<group>"; };
@@ -1607,6 +1609,9 @@
 				F692A8850255597D01FF60F7 /* UString.cpp */,
 				F692A8860255597D01FF60F7 /* UString.h */,
 				1420BE7A10AA6DDB00F455D2 /* WeakRandom.h */,
+				14BFCE6810CDB1FC00364CCE /* WeakGCMap.h */,
+				A7C2216810C745E000F97913 /* JSZombie.h */,
+				A7C2216B10C7469C00F97913 /* JSZombie.cpp */,
 			);
 			path = runtime;
 			sourceTree = "<group>";
@@ -2013,6 +2018,7 @@
 				86CAFEE31035DDE60028A609 /* Executable.h in Headers */,
 				142D3939103E4560007DCB52 /* NumericStrings.h in Headers */,
 				A7FB61001040C38B0017A286 /* PropertyDescriptor.h in Headers */,
+				14BFCE6910CDB1FC00364CCE /* WeakGCMap.h in Headers */,
 				BC87CDB910712AD4000614CF /* JSONObject.lut.h in Headers */,
 				148CD1D8108CF902008163C6 /* JSContextRefPrivate.h in Headers */,
 				14A1563210966365006FA260 /* DateInstanceCache.h in Headers */,
diff --git a/JavaScriptCore/debugger/Debugger.cpp b/JavaScriptCore/debugger/Debugger.cpp
index 902a802..98d707c 100644
--- a/JavaScriptCore/debugger/Debugger.cpp
+++ b/JavaScriptCore/debugger/Debugger.cpp
@@ -67,8 +67,9 @@ void Debugger::recompileAllJSFunctions(JSGlobalData* globalData)
     FunctionExecutableSet functionExecutables;
     SourceProviderMap sourceProviders;
 
-    Heap::iterator heapEnd = globalData->heap.primaryHeapEnd();
-    for (Heap::iterator it = globalData->heap.primaryHeapBegin(); it != heapEnd; ++it) {
+    LiveObjectIterator<PrimaryHeap> it = globalData->heap.primaryHeapBegin();
+    LiveObjectIterator<PrimaryHeap> heapEnd = globalData->heap.primaryHeapEnd();
+    for ( ; it != heapEnd; ++it) {
         if (!(*it)->inherits(&JSFunction::info))
             continue;
 
diff --git a/JavaScriptCore/jsc.cpp b/JavaScriptCore/jsc.cpp
index b6bc0aa..6c61889 100644
--- a/JavaScriptCore/jsc.cpp
+++ b/JavaScriptCore/jsc.cpp
@@ -194,7 +194,7 @@ JSValue JSC_HOST_CALL functionDebug(ExecState* exec, JSObject*, JSValue, const A
 JSValue JSC_HOST_CALL functionGC(ExecState* exec, JSObject*, JSValue, const ArgList&)
 {
     JSLock lock(SilenceAssertionsOnly);
-    exec->heap()->collect();
+    exec->heap()->collectAllGarbage();
     return jsUndefined();
 }
 
diff --git a/JavaScriptCore/runtime/Collector.cpp b/JavaScriptCore/runtime/Collector.cpp
index 1630a58..af1fb7d 100644
--- a/JavaScriptCore/runtime/Collector.cpp
+++ b/JavaScriptCore/runtime/Collector.cpp
@@ -104,7 +104,7 @@ namespace JSC {
 
 const size_t GROWTH_FACTOR = 2;
 const size_t LOW_WATER_FACTOR = 4;
-const size_t ALLOCATIONS_PER_COLLECTION = 4000;
+const size_t ALLOCATIONS_PER_COLLECTION = 3600;
 // This value has to be a macro to be used in max() without introducing
 // a PIC branch in Mach-O binaries, see <rdar://problem/5971391>.
 #define MIN_ARRAY_SIZE (static_cast<size_t>(14))
@@ -148,7 +148,7 @@ Heap::Heap(JSGlobalData* globalData)
     , m_globalData(globalData)
 {
     ASSERT(globalData);
-
+    
 #if PLATFORM(SYMBIAN)
     // Symbian OpenC supports mmap but currently not the MAP_ANON flag.
     // Using fastMalloc() does not properly align blocks on 64k boundaries
@@ -170,7 +170,12 @@ Heap::Heap(JSGlobalData* globalData)
 #endif // PLATFORM(SYMBIAN)
     
     memset(&primaryHeap, 0, sizeof(CollectorHeap));
+    allocateBlock<PrimaryHeap>();
+
     memset(&numberHeap, 0, sizeof(CollectorHeap));
+#if USE(JSVALUE32)
+    allocateBlock<NumberHeap>();
+#endif
 }
 
 Heap::~Heap()
@@ -193,15 +198,8 @@ void Heap::destroy()
     delete m_markListSet;
     m_markListSet = 0;
 
-    sweep<PrimaryHeap>();
-    // No need to sweep number heap, because the JSNumber destructor doesn't do anything.
-#if ENABLE(JSC_ZOMBIES)
-    ASSERT(primaryHeap.numLiveObjects == primaryHeap.numZombies);
-#else
-    ASSERT(!primaryHeap.numLiveObjects);
-#endif
-    freeBlocks(&primaryHeap);
-    freeBlocks(&numberHeap);
+    freeBlocks<PrimaryHeap>();
+    freeBlocks<NumberHeap>();
 
 #if ENABLE(JSC_MULTIPLE_THREADS)
     if (m_currentThreadRegistrar) {
@@ -225,7 +223,6 @@ NEVER_INLINE CollectorBlock* Heap::allocateBlock()
 {
 #if PLATFORM(DARWIN)
     vm_address_t address = 0;
-    // FIXME: tag the region as a JavaScriptCore heap when we get a registered VM tag: <rdar://problem/6054788>.
     vm_map(current_task(), &address, BLOCK_SIZE, BLOCK_OFFSET_MASK, VM_FLAGS_ANYWHERE | VM_TAG_FOR_COLLECTOR_MEMORY, MEMORY_OBJECT_NULL, 0, FALSE, VM_PROT_DEFAULT, VM_PROT_DEFAULT, VM_INHERIT_DEFAULT);
 #elif PLATFORM(SYMBIAN)
     // Allocate a 64 kb aligned CollectorBlock
@@ -233,8 +230,6 @@ NEVER_INLINE CollectorBlock* Heap::allocateBlock()
     if (!mask)
         CRASH();
     uintptr_t address = reinterpret_cast<uintptr_t>(mask);
-
-    memset(reinterpret_cast<void*>(address), 0, BLOCK_SIZE);
 #elif PLATFORM(WINCE)
     void* address = VirtualAlloc(NULL, BLOCK_SIZE, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
 #elif PLATFORM(WIN_OS)
@@ -247,7 +242,6 @@ NEVER_INLINE CollectorBlock* Heap::allocateBlock()
 #elif HAVE(POSIX_MEMALIGN)
     void* address;
     posix_memalign(&address, BLOCK_SIZE, BLOCK_SIZE);
-    memset(address, 0, BLOCK_SIZE);
 #else
 
 #if ENABLE(JSC_MULTIPLE_THREADS)
@@ -273,13 +267,23 @@ NEVER_INLINE CollectorBlock* Heap::allocateBlock()
         munmap(reinterpret_cast<char*>(address + adjust + BLOCK_SIZE), extra - adjust);
 
     address += adjust;
-    memset(reinterpret_cast<void*>(address), 0, BLOCK_SIZE);
 #endif
 
+    // Initialize block.
+
     CollectorBlock* block = reinterpret_cast<CollectorBlock*>(address);
-    block->freeList = block->cells;
     block->heap = this;
     block->type = heapType;
+    clearMarkBits<heapType>(block);
+
+    // heapAllocate assumes that it's safe to call a destructor on any cell in the primary heap.
+    if (heapType != NumberHeap) {
+        Structure* dummyMarkableCellStructure = m_globalData->dummyMarkableCellStructure.get();
+        for (size_t i = 0; i < HeapConstants<heapType>::cellsPerBlock; ++i)
+            new (block->cells + i) JSCell(dummyMarkableCellStructure);
+    }
+    
+    // Add block to blocks vector.
 
     CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
     size_t numBlocks = heap.numBlocks;
@@ -301,6 +305,12 @@ NEVER_INLINE void Heap::freeBlock(size_t block)
 {
     CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
 
+    if (heapType != NumberHeap) {
+        ObjectIterator<heapType> it(heap, block);
+        ObjectIterator<heapType> end(heap, block + 1);
+        for ( ; it != end; ++it)
+            (*it)->~JSCell();
+    }
     freeBlock(heap.blocks[block]);
 
     // swap with the last block so we compact as we go
@@ -334,13 +344,15 @@ NEVER_INLINE void Heap::freeBlock(CollectorBlock* block)
 #endif
 }
 
-void Heap::freeBlocks(CollectorHeap* heap)
+template <HeapType heapType>
+void Heap::freeBlocks()
 {
-    for (size_t i = 0; i < heap->usedBlocks; ++i)
-        if (heap->blocks[i])
-            freeBlock(heap->blocks[i]);
-    fastFree(heap->blocks);
-    memset(heap, 0, sizeof(CollectorHeap));
+    CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+
+    while (heap.usedBlocks)
+        freeBlock<heapType>(0);
+    fastFree(heap.blocks);
+    memset(&heap, 0, sizeof(CollectorHeap));
 }
 
 void Heap::recordExtraCost(size_t cost)
@@ -357,6 +369,14 @@ void Heap::recordExtraCost(size_t cost)
     // collecting more frequently as long as it stays alive.
     // NOTE: we target the primaryHeap unconditionally as JSNumber doesn't modify cost 
 
+    if (primaryHeap.extraCost > maxExtraCost && primaryHeap.extraCost > primaryHeap.usedBlocks * BLOCK_SIZE / 2) {
+        // If the last iteration through the heap deallocated blocks, we need
+        // to clean up remaining garbage before marking. Otherwise, the conservative
+        // marking mechanism might follow a pointer to unmapped memory.
+        if (primaryHeap.didShrink)
+            sweep<PrimaryHeap>();
+        reset();
+    }
     primaryHeap.extraCost += cost;
 }
 
@@ -364,101 +384,101 @@ template <HeapType heapType> ALWAYS_INLINE void* Heap::heapAllocate(size_t s)
 {
     typedef typename HeapConstants<heapType>::Block Block;
     typedef typename HeapConstants<heapType>::Cell Cell;
-
+    
     CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+
     ASSERT(JSLock::lockCount() > 0);
     ASSERT(JSLock::currentThreadIsHoldingLock());
     ASSERT_UNUSED(s, s <= HeapConstants<heapType>::cellSize);
 
     ASSERT(heap.operationInProgress == NoOperation);
     ASSERT(heapType == PrimaryHeap || heap.extraCost == 0);
-    // FIXME: If another global variable access here doesn't hurt performance
-    // too much, we could CRASH() in NDEBUG builds, which could help ensure we
-    // don't spend any time debugging cases where we allocate inside an object's
-    // deallocation code.
 
 #if COLLECT_ON_EVERY_ALLOCATION
-    collect();
+    collectAllGarbage();
+    ASSERT(heap.operationInProgress == NoOperation);
 #endif
 
-    size_t numLiveObjects = heap.numLiveObjects;
-    size_t usedBlocks = heap.usedBlocks;
-    size_t i = heap.firstBlockWithPossibleSpace;
-
-    // if we have a huge amount of extra cost, we'll try to collect even if we still have
-    // free cells left.
-    if (heapType == PrimaryHeap && heap.extraCost > ALLOCATIONS_PER_COLLECTION) {
-        size_t numLiveObjectsAtLastCollect = heap.numLiveObjectsAtLastCollect;
-        size_t numNewObjects = numLiveObjects - numLiveObjectsAtLastCollect;
-        const size_t newCost = numNewObjects + heap.extraCost;
-        if (newCost >= ALLOCATIONS_PER_COLLECTION && newCost >= numLiveObjectsAtLastCollect)
-            goto collect;
-    }
+allocate:
 
-    ASSERT(heap.operationInProgress == NoOperation);
-#ifndef NDEBUG
-    // FIXME: Consider doing this in NDEBUG builds too (see comment above).
-    heap.operationInProgress = Allocation;
-#endif
+    // Fast case: find the next garbage cell and recycle it.
 
-scan:
-    Block* targetBlock;
-    size_t targetBlockUsedCells;
-    if (i != usedBlocks) {
-        targetBlock = reinterpret_cast<Block*>(heap.blocks[i]);
-        targetBlockUsedCells = targetBlock->usedCells;
-        ASSERT(targetBlockUsedCells <= HeapConstants<heapType>::cellsPerBlock);
-        while (targetBlockUsedCells == HeapConstants<heapType>::cellsPerBlock) {
-            if (++i == usedBlocks)
-                goto collect;
-            targetBlock = reinterpret_cast<Block*>(heap.blocks[i]);
-            targetBlockUsedCells = targetBlock->usedCells;
-            ASSERT(targetBlockUsedCells <= HeapConstants<heapType>::cellsPerBlock);
-        }
-        heap.firstBlockWithPossibleSpace = i;
-    } else {
+    do {
+        ASSERT(heap.nextBlock < heap.usedBlocks);
+        Block* block = reinterpret_cast<Block*>(heap.blocks[heap.nextBlock]);
+        do {
+            ASSERT(heap.nextCell < HeapConstants<heapType>::cellsPerBlock);
+            if (!block->marked.get(heap.nextCell >> HeapConstants<heapType>::bitmapShift)) { // Always false for the last cell in the block
+                Cell* cell = block->cells + heap.nextCell;
+                if (heapType != NumberHeap) {
+                    heap.operationInProgress = Allocation;
+                    JSCell* imp = reinterpret_cast<JSCell*>(cell);
+                    imp->~JSCell();
+                    heap.operationInProgress = NoOperation;
+                }
+                ++heap.nextCell;
+                return cell;
+            }
+        } while (++heap.nextCell != HeapConstants<heapType>::cellsPerBlock);
+        heap.nextCell = 0;
+    } while (++heap.nextBlock != heap.usedBlocks);
 
-collect:
-        size_t numLiveObjectsAtLastCollect = heap.numLiveObjectsAtLastCollect;
-        size_t numNewObjects = numLiveObjects - numLiveObjectsAtLastCollect;
-        const size_t newCost = numNewObjects + heap.extraCost;
+    // Slow case: reached the end of the heap. Mark live objects and start over.
 
-        if (newCost >= ALLOCATIONS_PER_COLLECTION && newCost >= numLiveObjectsAtLastCollect) {
-#ifndef NDEBUG
-            heap.operationInProgress = NoOperation;
-#endif
-            bool foundGarbage = collect();
-            numLiveObjects = heap.numLiveObjects;
-            usedBlocks = heap.usedBlocks;
-            i = heap.firstBlockWithPossibleSpace;
-#ifndef NDEBUG
-            heap.operationInProgress = Allocation;
-#endif
-            if (foundGarbage)
-                goto scan;
-        }
+    reset();
+    goto allocate;
+}
 
-        // didn't find a block, and GC didn't reclaim anything, need to allocate a new block
-        targetBlock = reinterpret_cast<Block*>(allocateBlock<heapType>());
-        heap.firstBlockWithPossibleSpace = heap.usedBlocks - 1;
-        targetBlockUsedCells = 0;
-    }
+template <HeapType heapType>
+void Heap::resizeBlocks()
+{
+    CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
 
-    // find a free spot in the block and detach it from the free list
-    Cell* newCell = targetBlock->freeList;
+    heap.didShrink = false;
 
-    // "next" field is a cell offset -- 0 means next cell, so a zeroed block is already initialized
-    targetBlock->freeList = (newCell + 1) + newCell->u.freeCell.next;
+    size_t usedCellCount = markedCells<heapType>();
+    size_t minCellCount = usedCellCount + max(ALLOCATIONS_PER_COLLECTION, usedCellCount);
+    size_t minBlockCount = (minCellCount + HeapConstants<heapType>::cellsPerBlock - 1) / HeapConstants<heapType>::cellsPerBlock;
 
-    targetBlock->usedCells = static_cast<uint32_t>(targetBlockUsedCells + 1);
-    heap.numLiveObjects = numLiveObjects + 1;
+    size_t maxCellCount = 1.25f * minCellCount;
+    size_t maxBlockCount = (maxCellCount + HeapConstants<heapType>::cellsPerBlock - 1) / HeapConstants<heapType>::cellsPerBlock;
 
-#ifndef NDEBUG
-    // FIXME: Consider doing this in NDEBUG builds too (see comment above).
-    heap.operationInProgress = NoOperation;
-#endif
+    if (heap.usedBlocks < minBlockCount)
+        growBlocks<heapType>(minBlockCount);
+    else if (heap.usedBlocks > maxBlockCount)
+        shrinkBlocks<heapType>(maxBlockCount);
+}
+
+template <HeapType heapType> 
+void Heap::growBlocks(size_t neededBlocks)
+{
+    CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+    ASSERT(heap.usedBlocks < neededBlocks);
+    while (heap.usedBlocks < neededBlocks)
+        allocateBlock<heapType>();
+}
+
+template <HeapType heapType> 
+void Heap::shrinkBlocks(size_t neededBlocks)
+{
+    CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+    ASSERT(heap.usedBlocks > neededBlocks);
+    
+    // Clear the always-on last bit, so isEmpty() isn't fooled by it.
+    for (size_t i = 0; i < heap.usedBlocks; ++i)
+        heap.blocks[i]->marked.clear((HeapConstants<heapType>::cellsPerBlock - 1) >> HeapConstants<heapType>::bitmapShift);
+
+    for (size_t i = 0; i != heap.usedBlocks && heap.usedBlocks != neededBlocks; ) {
+        if (heap.blocks[i]->marked.isEmpty()) {
+            freeBlock<heapType>(i);
+            heap.didShrink = true;
+        } else
+            ++i;
+    }
 
-    return newCell;
+    // Reset the always-on last bit.
+    for (size_t i = 0; i < heap.usedBlocks; ++i)
+        heap.blocks[i]->marked.set((HeapConstants<heapType>::cellsPerBlock - 1) >> HeapConstants<heapType>::bitmapShift);
 }
 
 void* Heap::allocate(size_t s)
@@ -714,10 +734,37 @@ void Heap::registerThread()
 
 #endif
 
-#define IS_POINTER_ALIGNED(p) (((intptr_t)(p) & (sizeof(char*) - 1)) == 0)
+inline bool isPointerAligned(void* p)
+{
+    return (((intptr_t)(p) & (sizeof(char*) - 1)) == 0);
+}
+
+// Cell size needs to be a power of two for isPossibleCell to be valid.
+COMPILE_ASSERT(sizeof(CollectorCell) % 2 == 0, Collector_cell_size_is_power_of_two);
+
+#if USE(JSVALUE32)
+static bool isHalfCellAligned(void *p)
+{
+    return (((intptr_t)(p) & (CELL_MASK >> 1)) == 0);
+}
+
+static inline bool isPossibleCell(void* p)
+{
+    return isHalfCellAligned(p) && p;
+}
+
+#else
 
-// cell size needs to be a power of two for this to be valid
-#define IS_HALF_CELL_ALIGNED(p) (((intptr_t)(p) & (CELL_MASK >> 1)) == 0)
+static inline bool isCellAligned(void *p)
+{
+    return (((intptr_t)(p) & CELL_MASK) == 0);
+}
+
+static inline bool isPossibleCell(void* p)
+{
+    return isCellAligned(p) && p;
+}
+#endif
 
 void Heap::markConservatively(MarkStack& markStack, void* start, void* end)
 {
@@ -728,46 +775,52 @@ void Heap::markConservatively(MarkStack& markStack, void* start, void* end)
     }
 
     ASSERT((static_cast<char*>(end) - static_cast<char*>(start)) < 0x1000000);
-    ASSERT(IS_POINTER_ALIGNED(start));
-    ASSERT(IS_POINTER_ALIGNED(end));
+    ASSERT(isPointerAligned(start));
+    ASSERT(isPointerAligned(end));
 
     char** p = static_cast<char**>(start);
     char** e = static_cast<char**>(end);
 
-    size_t usedPrimaryBlocks = primaryHeap.usedBlocks;
-    size_t usedNumberBlocks = numberHeap.usedBlocks;
     CollectorBlock** primaryBlocks = primaryHeap.blocks;
+#if USE(JSVALUE32)
     CollectorBlock** numberBlocks = numberHeap.blocks;
-
-    const size_t lastCellOffset = sizeof(CollectorCell) * (CELLS_PER_BLOCK - 1);
+#endif
 
     while (p != e) {
         char* x = *p++;
-        if (IS_HALF_CELL_ALIGNED(x) && x) {
+        if (isPossibleCell(x)) {
             uintptr_t xAsBits = reinterpret_cast<uintptr_t>(x);
             xAsBits &= CELL_ALIGN_MASK;
+
             uintptr_t offset = xAsBits & BLOCK_OFFSET_MASK;
+            const size_t lastCellOffset = sizeof(CollectorCell) * (CELLS_PER_BLOCK - 1);
+            if (offset > lastCellOffset)
+                continue;
+
             CollectorBlock* blockAddr = reinterpret_cast<CollectorBlock*>(xAsBits - offset);
+#if USE(JSVALUE32)
             // Mark the the number heap, we can mark these Cells directly to avoid the virtual call cost
+            size_t usedNumberBlocks = numberHeap.usedBlocks;
             for (size_t block = 0; block < usedNumberBlocks; block++) {
-                if ((numberBlocks[block] == blockAddr) & (offset <= lastCellOffset)) {
-                    Heap::markCell(reinterpret_cast<JSCell*>(xAsBits));
-                    goto endMarkLoop;
-                }
+                if (numberBlocks[block] != blockAddr)
+                    continue;
+                Heap::markCell(reinterpret_cast<JSCell*>(xAsBits));
+                goto endMarkLoop;
             }
-          
+#endif
+
             // Mark the primary heap
+            size_t usedPrimaryBlocks = primaryHeap.usedBlocks;
             for (size_t block = 0; block < usedPrimaryBlocks; block++) {
-                if ((primaryBlocks[block] == blockAddr) & (offset <= lastCellOffset)) {
-                    if (reinterpret_cast<CollectorCell*>(xAsBits)->u.freeCell.zeroIfFree) {
-                        markStack.append(reinterpret_cast<JSCell*>(xAsBits));
-                        markStack.drain();
-                    }
-                    break;
-                }
+                if (primaryBlocks[block] != blockAddr)
+                    continue;
+                markStack.append(reinterpret_cast<JSCell*>(xAsBits));
+                markStack.drain();
             }
+#if USE(JSVALUE32)
         endMarkLoop:
             ;
+#endif
         }
     }
 }
@@ -1009,129 +1062,78 @@ void Heap::markProtectedObjects(MarkStack& markStack)
     }
 }
 
-template <HeapType heapType> size_t Heap::sweep()
+template <HeapType heapType> 
+void Heap::clearMarkBits()
 {
-    typedef typename HeapConstants<heapType>::Block Block;
-    typedef typename HeapConstants<heapType>::Cell Cell;
+    CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+    for (size_t i = 0; i < heap.usedBlocks; ++i)
+        clearMarkBits<heapType>(heap.blocks[i]);
+}
+
+template <HeapType heapType> 
+void Heap::clearMarkBits(CollectorBlock* block)
+{
+    // heapAllocate assumes that the last cell in every block is marked.
+    block->marked.clearAll();
+    block->marked.set((HeapConstants<heapType>::cellsPerBlock - 1) >> HeapConstants<heapType>::bitmapShift);
+}
+
+template <HeapType heapType> 
+size_t Heap::markedCells(size_t startBlock, size_t startCell) const
+{
+    const CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+    ASSERT(startBlock <= heap.usedBlocks);
+    ASSERT(startCell < HeapConstants<heapType>::cellsPerBlock);
+
+    if (startBlock >= heap.usedBlocks)
+        return 0;
+
+    size_t result = 0;
+    result += heap.blocks[startBlock]->marked.count(startCell);
+    for (size_t i = startBlock + 1; i < heap.usedBlocks; ++i)
+        result += heap.blocks[i]->marked.count();
+
+    return result;
+}
+
+template <HeapType heapType> 
+void Heap::sweep()
+{
+    ASSERT(heapType != NumberHeap); // The number heap does not contain meaningful destructors.
 
-    // SWEEP: delete everything with a zero refcount (garbage) and unmark everything else
     CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+
+    ASSERT(heap.operationInProgress == NoOperation);
+    if (heap.operationInProgress != NoOperation)
+        CRASH();
+    heap.operationInProgress = Collection;
     
-    size_t emptyBlocks = 0;
-    size_t numLiveObjects = heap.numLiveObjects;
-    
-    for (size_t block = 0; block < heap.usedBlocks; block++) {
-        Block* curBlock = reinterpret_cast<Block*>(heap.blocks[block]);
-        
-        size_t usedCells = curBlock->usedCells;
-        Cell* freeList = curBlock->freeList;
-        
-        if (usedCells == HeapConstants<heapType>::cellsPerBlock) {
-            // special case with a block where all cells are used -- testing indicates this happens often
-            for (size_t i = 0; i < HeapConstants<heapType>::cellsPerBlock; i++) {
-                if (!curBlock->marked.get(i >> HeapConstants<heapType>::bitmapShift)) {
-                    Cell* cell = curBlock->cells + i;
-                    
-                    if (heapType != NumberHeap) {
-                        JSCell* imp = reinterpret_cast<JSCell*>(cell);
-                        // special case for allocated but uninitialized object
-                        // (We don't need this check earlier because nothing prior this point 
-                        // assumes the object has a valid vptr.)
-                        if (cell->u.freeCell.zeroIfFree == 0)
-                            continue;
-#if ENABLE(JSC_ZOMBIES)
-                        if (!imp->isZombie()) {
-                            const ClassInfo* info = imp->classInfo();
-                            imp->~JSCell();
-                            new (imp) JSZombie(info, JSZombie::leakedZombieStructure());
-                            heap.numZombies++;
-                        }
-#else
-                        imp->~JSCell();
-#endif
-                    }
-                    --numLiveObjects;
 #if !ENABLE(JSC_ZOMBIES)
-                    --usedCells;
-                    
-                    // put cell on the free list
-                    cell->u.freeCell.zeroIfFree = 0;
-                    cell->u.freeCell.next = freeList - (cell + 1);
-                    freeList = cell;
+    Structure* dummyMarkableCellStructure = m_globalData->dummyMarkableCellStructure.get();
 #endif
-                }
-            }
-        } else {
-            size_t minimumCellsToProcess = usedCells;
-            for (size_t i = 0; (i < minimumCellsToProcess) & (i < HeapConstants<heapType>::cellsPerBlock); i++) {
-                Cell* cell = curBlock->cells + i;
-                if (cell->u.freeCell.zeroIfFree == 0) {
-                    ++minimumCellsToProcess;
-                } else {
-                    if (!curBlock->marked.get(i >> HeapConstants<heapType>::bitmapShift)) {
-                        if (heapType != NumberHeap) {
-                            JSCell* imp = reinterpret_cast<JSCell*>(cell);
+
+    DeadObjectIterator<heapType> it(heap, heap.nextBlock, heap.nextCell);
+    DeadObjectIterator<heapType> end(heap, heap.usedBlocks);
+    for ( ; it != end; ++it) {
+        JSCell* cell = *it;
 #if ENABLE(JSC_ZOMBIES)
-                            if (!imp->isZombie()) {
-                                const ClassInfo* info = imp->classInfo();
-                                imp->~JSCell();
-                                new (imp) JSZombie(info, JSZombie::leakedZombieStructure());
-                                heap.numZombies++;
-                            }
+        if (!cell->isZombie()) {
+            const ClassInfo* info = cell->classInfo();
+            cell->~JSCell();
+            new (cell) JSZombie(info, JSZombie::leakedZombieStructure());
+            Heap::markCell(cell);
+        }
 #else
-                            imp->~JSCell();
+        cell->~JSCell();
+        // Callers of sweep assume it's safe to mark any cell in the heap.
+        new (cell) JSCell(dummyMarkableCellStructure);
 #endif
-                        }
-#if !ENABLE(JSC_ZOMBIES)
-                        --usedCells;
-                        --numLiveObjects;
-                        
-                        // put cell on the free list
-                        cell->u.freeCell.zeroIfFree = 0;
-                        cell->u.freeCell.next = freeList - (cell + 1); 
-                        freeList = cell;
-#endif
-                    }
-                }
-            }
-        }
-        
-        curBlock->usedCells = static_cast<uint32_t>(usedCells);
-        curBlock->freeList = freeList;
-        curBlock->marked.clearAll();
-        
-        if (!usedCells)
-            ++emptyBlocks;
     }
-    
-    if (heap.numLiveObjects != numLiveObjects)
-        heap.firstBlockWithPossibleSpace = 0;
-    
-    heap.numLiveObjects = numLiveObjects;
-    heap.numLiveObjectsAtLastCollect = numLiveObjects;
-    heap.extraCost = 0;
-    
-    if (!emptyBlocks)
-        return numLiveObjects;
-
-    size_t neededCells = 1.25f * (numLiveObjects + max(ALLOCATIONS_PER_COLLECTION, numLiveObjects));
-    size_t neededBlocks = (neededCells + HeapConstants<heapType>::cellsPerBlock - 1) / HeapConstants<heapType>::cellsPerBlock;
-    for (size_t block = 0; block < heap.usedBlocks; block++) {
-        if (heap.usedBlocks <= neededBlocks)
-            break;
 
-        Block* curBlock = reinterpret_cast<Block*>(heap.blocks[block]);
-        if (curBlock->usedCells)
-            continue;
-
-        freeBlock<heapType>(block);
-        block--; // Don't move forward a step in this case
-    }
-
-    return numLiveObjects;
+    heap.operationInProgress = NoOperation;
 }
 
-bool Heap::collect()
+void Heap::markRoots()
 {
 #ifndef NDEBUG
     if (m_globalData->isSharedInstance) {
@@ -1140,23 +1142,31 @@ bool Heap::collect()
     }
 #endif
 
-    ASSERT((primaryHeap.operationInProgress == NoOperation) | (numberHeap.operationInProgress == NoOperation));
-    if ((primaryHeap.operationInProgress != NoOperation) | (numberHeap.operationInProgress != NoOperation))
+    ASSERT((primaryHeap.operationInProgress == NoOperation) & (numberHeap.operationInProgress == NoOperation));
+    if (!((primaryHeap.operationInProgress == NoOperation) & (numberHeap.operationInProgress == NoOperation)))
         CRASH();
 
-    JAVASCRIPTCORE_GC_BEGIN();
     primaryHeap.operationInProgress = Collection;
     numberHeap.operationInProgress = Collection;
 
-    // MARK: first mark all referenced objects recursively starting out from the set of root objects
     MarkStack& markStack = m_globalData->markStack;
+
+    // Reset mark bits.
+    clearMarkBits<PrimaryHeap>();
+    clearMarkBits<NumberHeap>();
+
+    // Mark stack roots.
     markStackObjectsConservatively(markStack);
+    m_globalData->interpreter->registerFile().markCallFrames(markStack, this);
+
+    // Mark explicitly registered roots.
     markProtectedObjects(markStack);
+
+    // Mark misc. other roots.
     if (m_markListSet && m_markListSet->size())
         MarkedArgumentBuffer::markLists(markStack, *m_markListSet);
     if (m_globalData->exception)
         markStack.append(m_globalData->exception);
-    m_globalData->interpreter->registerFile().markCallFrames(markStack, this);
     m_globalData->smallStrings.markChildren(markStack);
     if (m_globalData->functionCodeBlockBeingReparsed)
         m_globalData->functionCodeBlockBeingReparsed->markAggregate(markStack);
@@ -1165,41 +1175,41 @@ bool Heap::collect()
 
     markStack.drain();
     markStack.compact();
-    JAVASCRIPTCORE_GC_MARKED();
-
-    size_t originalLiveObjects = primaryHeap.numLiveObjects + numberHeap.numLiveObjects;
-    size_t numLiveObjects = sweep<PrimaryHeap>();
-    numLiveObjects += sweep<NumberHeap>();
 
     primaryHeap.operationInProgress = NoOperation;
     numberHeap.operationInProgress = NoOperation;
-    JAVASCRIPTCORE_GC_END(originalLiveObjects, numLiveObjects);
+}
 
-    return numLiveObjects < originalLiveObjects;
+size_t Heap::objectCount() const
+{
+    return objectCount<PrimaryHeap>() + objectCount<NumberHeap>();
 }
 
-size_t Heap::objectCount() 
+template <HeapType heapType> 
+size_t Heap::objectCount() const
 {
-    return primaryHeap.numLiveObjects + numberHeap.numLiveObjects - m_globalData->smallStrings.count(); 
+    const CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+
+    return heap.nextBlock * HeapConstants<heapType>::cellsPerBlock // allocated full blocks
+           + heap.nextCell // allocated cells in current block
+           + markedCells<heapType>(heap.nextBlock, heap.nextCell) // marked cells in remainder of heap
+           - heap.usedBlocks; // 1 cell per block is a dummy sentinel
 }
 
 template <HeapType heapType> 
-static void addToStatistics(Heap::Statistics& statistics, const CollectorHeap& heap)
+void Heap::addToStatistics(Heap::Statistics& statistics) const
 {
-    typedef HeapConstants<heapType> HC;
-    for (size_t i = 0; i < heap.usedBlocks; ++i) {
-        if (heap.blocks[i]) {
-            statistics.size += BLOCK_SIZE;
-            statistics.free += (HC::cellsPerBlock - heap.blocks[i]->usedCells) * HC::cellSize;
-        }
-    }
+    const CollectorHeap& heap = heapType == PrimaryHeap ? primaryHeap : numberHeap;
+
+    statistics.size += heap.usedBlocks * BLOCK_SIZE;
+    statistics.free += heap.usedBlocks * BLOCK_SIZE - (objectCount<heapType>() * HeapConstants<heapType>::cellSize);
 }
 
 Heap::Statistics Heap::statistics() const
 {
     Statistics statistics = { 0, 0 };
-    JSC::addToStatistics<PrimaryHeap>(statistics, primaryHeap);
-    JSC::addToStatistics<NumberHeap>(statistics, numberHeap);
+    addToStatistics<PrimaryHeap>(statistics);
+    addToStatistics<NumberHeap>(statistics);
     return statistics;
 }
 
@@ -1271,14 +1281,68 @@ bool Heap::isBusy()
     return (primaryHeap.operationInProgress != NoOperation) | (numberHeap.operationInProgress != NoOperation);
 }
 
-Heap::iterator Heap::primaryHeapBegin()
+void Heap::reset()
+{
+    JAVASCRIPTCORE_GC_BEGIN();
+
+    markRoots();
+
+    JAVASCRIPTCORE_GC_MARKED();
+
+    primaryHeap.nextCell = 0;
+    primaryHeap.nextBlock = 0;
+    primaryHeap.extraCost = 0;
+#if ENABLE(JSC_ZOMBIES)
+    sweep<PrimaryHeap>();
+#endif
+    resizeBlocks<PrimaryHeap>();
+
+#if USE(JSVALUE32)
+    numberHeap.nextCell = 0;
+    numberHeap.nextBlock = 0;
+    resizeBlocks<NumberHeap>();
+#endif
+
+    JAVASCRIPTCORE_GC_END();
+}
+
+void Heap::collectAllGarbage()
+{
+    JAVASCRIPTCORE_GC_BEGIN();
+
+    // If the last iteration through the heap deallocated blocks, we need
+    // to clean up remaining garbage before marking. Otherwise, the conservative
+    // marking mechanism might follow a pointer to unmapped memory.
+    if (primaryHeap.didShrink)
+        sweep<PrimaryHeap>();
+
+    markRoots();
+
+    JAVASCRIPTCORE_GC_MARKED();
+
+    primaryHeap.nextCell = 0;
+    primaryHeap.nextBlock = 0;
+    primaryHeap.extraCost = 0;
+    sweep<PrimaryHeap>();
+    resizeBlocks<PrimaryHeap>();
+
+#if USE(JSVALUE32)
+    numberHeap.nextCell = 0;
+    numberHeap.nextBlock = 0;
+    resizeBlocks<NumberHeap>();
+#endif
+
+    JAVASCRIPTCORE_GC_END();
+}
+
+LiveObjectIterator<PrimaryHeap> Heap::primaryHeapBegin()
 {
-    return iterator(primaryHeap.blocks, primaryHeap.blocks + primaryHeap.usedBlocks);
+    return LiveObjectIterator<PrimaryHeap>(primaryHeap, 0);
 }
 
-Heap::iterator Heap::primaryHeapEnd()
+LiveObjectIterator<PrimaryHeap> Heap::primaryHeapEnd()
 {
-    return iterator(primaryHeap.blocks + primaryHeap.usedBlocks, primaryHeap.blocks + primaryHeap.usedBlocks);
+    return LiveObjectIterator<PrimaryHeap>(primaryHeap, primaryHeap.usedBlocks);
 }
 
 } // namespace JSC
diff --git a/JavaScriptCore/runtime/Collector.h b/JavaScriptCore/runtime/Collector.h
index 9128701..c86ca83 100644
--- a/JavaScriptCore/runtime/Collector.h
+++ b/JavaScriptCore/runtime/Collector.h
@@ -28,9 +28,9 @@
 #include <wtf/HashSet.h>
 #include <wtf/Noncopyable.h>
 #include <wtf/OwnPtr.h>
+#include <wtf/StdLibExtras.h>
 #include <wtf/Threading.h>
 
-// This is supremely lame that we require pthreads to build on windows.
 #if ENABLE(JSC_MULTIPLE_THREADS)
 #include <pthread.h>
 #endif
@@ -49,20 +49,19 @@ namespace JSC {
     enum OperationInProgress { NoOperation, Allocation, Collection };
     enum HeapType { PrimaryHeap, NumberHeap };
 
-    template <HeapType> class CollectorHeapIterator;
+    template <HeapType> class LiveObjectIterator;
 
     struct CollectorHeap {
+        size_t nextBlock;
+        size_t nextCell;
+
         CollectorBlock** blocks;
         size_t numBlocks;
         size_t usedBlocks;
-        size_t firstBlockWithPossibleSpace;
 
-        size_t numLiveObjects;
-        size_t numLiveObjectsAtLastCollect;
         size_t extraCost;
-#if ENABLE(JSC_ZOMBIES)
-        size_t numZombies;
-#endif
+        
+        bool didShrink;
 
         OperationInProgress operationInProgress;
     };
@@ -70,21 +69,21 @@ namespace JSC {
     class Heap : public Noncopyable {
     public:
         class Thread;
-        typedef CollectorHeapIterator<PrimaryHeap> iterator;
 
         void destroy();
 
         void* allocateNumber(size_t);
         void* allocate(size_t);
 
-        bool collect();
         bool isBusy(); // true if an allocation or collection is in progress
+        void collectAllGarbage();
 
-        static const size_t minExtraCostSize = 256;
+        static const size_t minExtraCost = 256;
+        static const size_t maxExtraCost = 1024 * 1024;
 
         void reportExtraMemoryCost(size_t cost);
 
-        size_t objectCount();
+        size_t objectCount() const;
         struct Statistics {
             size_t size;
             size_t free;
@@ -114,13 +113,14 @@ namespace JSC {
         JSGlobalData* globalData() const { return m_globalData; }
         static bool isNumber(JSCell*);
         
-        // Iterators for the object heap.
-        iterator primaryHeapBegin();
-        iterator primaryHeapEnd();
+        LiveObjectIterator<PrimaryHeap> primaryHeapBegin();
+        LiveObjectIterator<PrimaryHeap> primaryHeapEnd();
 
     private:
         template <HeapType heapType> void* heapAllocate(size_t);
-        template <HeapType heapType> size_t sweep();
+        void reset();
+        void collectRemainingGarbage();
+        template <HeapType heapType> void sweep();
         static CollectorBlock* cellBlock(const JSCell*);
         static size_t cellOffset(const JSCell*);
 
@@ -131,9 +131,20 @@ namespace JSC {
         template <HeapType heapType> NEVER_INLINE CollectorBlock* allocateBlock();
         template <HeapType heapType> NEVER_INLINE void freeBlock(size_t);
         NEVER_INLINE void freeBlock(CollectorBlock*);
-        void freeBlocks(CollectorHeap*);
+        template <HeapType heapType> void freeBlocks();
+        template <HeapType heapType> void resizeBlocks();
+        template <HeapType heapType> void growBlocks(size_t neededBlocks);
+        template <HeapType heapType> void shrinkBlocks(size_t neededBlocks);
+        template <HeapType heapType> void clearMarkBits();
+        template <HeapType heapType> void clearMarkBits(CollectorBlock*);
+        template <HeapType heapType> size_t markedCells(size_t startBlock = 0, size_t startCell = 0) const;
 
         void recordExtraCost(size_t);
+
+        template <HeapType heapType> void addToStatistics(Statistics&) const;
+        template <HeapType heapType> size_t objectCount() const;
+
+        void markRoots();
         void markProtectedObjects(MarkStack&);
         void markCurrentThreadConservatively(MarkStack&);
         void markCurrentThreadConservativelyInternal(MarkStack&);
@@ -189,44 +200,49 @@ namespace JSC {
     const size_t SMALL_CELL_SIZE = CELL_SIZE / 2;
     const size_t CELL_MASK = CELL_SIZE - 1;
     const size_t CELL_ALIGN_MASK = ~CELL_MASK;
-    const size_t CELLS_PER_BLOCK = (BLOCK_SIZE * 8 - sizeof(uint32_t) * 8 - sizeof(void *) * 8 - 2 * (7 + 3 * 8)) / (CELL_SIZE * 8 + 2);
+    const size_t CELLS_PER_BLOCK = (BLOCK_SIZE - sizeof(Heap*) - sizeof(HeapType)) * 8 * CELL_SIZE / (8 * CELL_SIZE + 1) / CELL_SIZE; // one bitmap byte can represent 8 cells.
+    
     const size_t SMALL_CELLS_PER_BLOCK = 2 * CELLS_PER_BLOCK;
     const size_t BITMAP_SIZE = (CELLS_PER_BLOCK + 7) / 8;
     const size_t BITMAP_WORDS = (BITMAP_SIZE + 3) / sizeof(uint32_t);
-  
+
     struct CollectorBitmap {
         uint32_t bits[BITMAP_WORDS];
         bool get(size_t n) const { return !!(bits[n >> 5] & (1 << (n & 0x1F))); } 
         void set(size_t n) { bits[n >> 5] |= (1 << (n & 0x1F)); } 
         void clear(size_t n) { bits[n >> 5] &= ~(1 << (n & 0x1F)); } 
         void clearAll() { memset(bits, 0, sizeof(bits)); }
+        size_t count(size_t startCell = 0)
+        {
+            size_t result = 0;
+            for ( ; (startCell & 0x1F) != 0; ++startCell) {
+                if (get(startCell))
+                    ++result;
+            }
+            for (size_t i = startCell >> 5; i < BITMAP_WORDS; ++i)
+                result += WTF::bitCount(bits[i]);
+            return result;
+        }
+        bool isEmpty() // Much more efficient than testing count() == 0.
+        {
+            for (size_t i = 0; i < BITMAP_WORDS; ++i)
+                if (bits[i] != 0)
+                    return false;
+            return true;
+        }
     };
   
     struct CollectorCell {
-        union {
-            double memory[CELL_ARRAY_LENGTH];
-            struct {
-                void* zeroIfFree;
-                ptrdiff_t next;
-            } freeCell;
-        } u;
+        double memory[CELL_ARRAY_LENGTH];
     };
 
     struct SmallCollectorCell {
-        union {
-            double memory[CELL_ARRAY_LENGTH / 2];
-            struct {
-                void* zeroIfFree;
-                ptrdiff_t next;
-            } freeCell;
-        } u;
+        double memory[CELL_ARRAY_LENGTH / 2];
     };
 
     class CollectorBlock {
     public:
         CollectorCell cells[CELLS_PER_BLOCK];
-        uint32_t usedCells;
-        CollectorCell* freeList;
         CollectorBitmap marked;
         Heap* heap;
         HeapType type;
@@ -235,8 +251,6 @@ namespace JSC {
     class SmallCellCollectorBlock {
     public:
         SmallCollectorCell cells[SMALL_CELLS_PER_BLOCK];
-        uint32_t usedCells;
-        SmallCollectorCell* freeList;
         CollectorBitmap marked;
         Heap* heap;
         HeapType type;
@@ -287,8 +301,8 @@ namespace JSC {
 
     inline void Heap::reportExtraMemoryCost(size_t cost)
     {
-        if (cost > minExtraCostSize) 
-            recordExtraCost(cost / (CELL_SIZE * 2)); 
+        if (cost > minExtraCost) 
+            recordExtraCost(cost);
     }
 
 } // namespace JSC
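
The count(startCell) helper above is easy to misread, so here is a standalone model
of it (plain C++, not WebKit code): walk the unaligned prefix one bit at a time
until startCell reaches a 32-bit word boundary, then count whole words.
naivePopcount stands in for WTF::bitCount; the data and sizes are arbitrary.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    static const size_t kWords = 4;
    static const size_t kBits = kWords * 32;

    static bool get(const uint32_t* words, size_t n) { return words[n >> 5] & (1u << (n & 0x1F)); }

    static size_t naivePopcount(uint32_t word)
    {
        size_t count = 0;
        for ( ; word; word >>= 1)
            count += word & 1;
        return count;
    }

    static size_t countFrom(const uint32_t* words, size_t startCell)
    {
        size_t result = 0;
        for ( ; (startCell & 0x1F) != 0; ++startCell) { // unaligned prefix, bit by bit
            if (get(words, startCell))
                ++result;
        }
        for (size_t i = startCell >> 5; i < kWords; ++i) // remaining whole words
            result += naivePopcount(words[i]);
        return result;
    }

    int main()
    {
        const uint32_t words[kWords] = { 0x80000001u, 0xFFFF0000u, 0u, 0x12345678u };
        for (size_t start = 0; start < kBits; ++start) {
            size_t expected = 0;
            for (size_t n = start; n < kBits; ++n)
                expected += get(words, n);
            assert(countFrom(words, start) == expected);
        }
        return 0;
    }
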
diff --git a/JavaScriptCore/runtime/CollectorHeapIterator.h b/JavaScriptCore/runtime/CollectorHeapIterator.h
index e38a852..cdd6702 100644
--- a/JavaScriptCore/runtime/CollectorHeapIterator.h
+++ b/JavaScriptCore/runtime/CollectorHeapIterator.h
@@ -31,58 +31,117 @@
 
 namespace JSC {
 
-    template <HeapType heapType> class CollectorHeapIterator {
+    class CollectorHeapIterator {
     public:
-        CollectorHeapIterator(CollectorBlock** block, CollectorBlock** endBlock);
-
-        bool operator!=(const CollectorHeapIterator<heapType>& other) { return m_block != other.m_block || m_cell != other.m_cell; }
-        CollectorHeapIterator<heapType>& operator++();
+        bool operator!=(const CollectorHeapIterator& other);
         JSCell* operator*() const;
     
-    private:
-        typedef typename HeapConstants<heapType>::Block Block;
-        typedef typename HeapConstants<heapType>::Cell Cell;
-
-        Block** m_block;
-        Block** m_endBlock;
-        Cell* m_cell;
-        Cell* m_endCell;
+    protected:
+        CollectorHeapIterator(CollectorHeap&, size_t startBlock, size_t startCell);
+        void advance(size_t cellsPerBlock);
+
+        CollectorHeap& m_heap;
+        size_t m_block;
+        size_t m_cell;
+    };
+
+    template <HeapType heapType>
+    class LiveObjectIterator : public CollectorHeapIterator {
+    public:
+        LiveObjectIterator(CollectorHeap&, size_t startBlock, size_t startCell = 0);
+        LiveObjectIterator<heapType>& operator++();
+    };
+
+    template <HeapType heapType>
+    class DeadObjectIterator : public CollectorHeapIterator {
+    public:
+        DeadObjectIterator(CollectorHeap&, size_t startBlock, size_t startCell = 0);
+        DeadObjectIterator<heapType>& operator++();
+    };
+
+    template <HeapType heapType>
+    class ObjectIterator : public CollectorHeapIterator {
+    public:
+        ObjectIterator(CollectorHeap&, size_t startBlock, size_t startCell = 0);
+        ObjectIterator<heapType>& operator++();
     };
 
-    template <HeapType heapType> 
-    CollectorHeapIterator<heapType>::CollectorHeapIterator(CollectorBlock** block, CollectorBlock** endBlock)
-        : m_block(reinterpret_cast<Block**>(block))
-        , m_endBlock(reinterpret_cast<Block**>(endBlock))
-        , m_cell(m_block == m_endBlock ? 0 : (*m_block)->cells)
-        , m_endCell(m_block == m_endBlock ? 0 : (*m_block)->cells + HeapConstants<heapType>::cellsPerBlock)
+    inline CollectorHeapIterator::CollectorHeapIterator(CollectorHeap& heap, size_t startBlock, size_t startCell)
+        : m_heap(heap)
+        , m_block(startBlock)
+        , m_cell(startCell)
+    {
+    }
+
+    inline bool CollectorHeapIterator::operator!=(const CollectorHeapIterator& other)
+    {
+        return m_block != other.m_block || m_cell != other.m_cell;
+    }
+
+    inline JSCell* CollectorHeapIterator::operator*() const
+    {
+        return reinterpret_cast<JSCell*>(m_heap.blocks[m_block]->cells + m_cell);
+    }
+    
+    inline void CollectorHeapIterator::advance(size_t cellsPerBlock)
+    {
+        ++m_cell;
+        if (m_cell == cellsPerBlock) {
+            m_cell = 0;
+            ++m_block;
+        }
+    }
+
+    template <HeapType heapType>
+    inline LiveObjectIterator<heapType>::LiveObjectIterator(CollectorHeap& heap, size_t startBlock, size_t startCell)
+        : CollectorHeapIterator(heap, startBlock, startCell - 1)
+    {
+        ++(*this);
+    }
+
+    template <HeapType heapType>
+    inline LiveObjectIterator<heapType>& LiveObjectIterator<heapType>::operator++()
+    {
+        if (m_block < m_heap.nextBlock || m_cell < m_heap.nextCell) {
+            advance(HeapConstants<heapType>::cellsPerBlock);
+            return *this;
+        }
+
+        do {
+            advance(HeapConstants<heapType>::cellsPerBlock);
+        } while (m_block < m_heap.usedBlocks && !m_heap.blocks[m_block]->marked.get(m_cell));
+        return *this;
+    }
+
+    template <HeapType heapType>
+    inline DeadObjectIterator<heapType>::DeadObjectIterator(CollectorHeap& heap, size_t startBlock, size_t startCell)
+        : CollectorHeapIterator(heap, startBlock, startCell - 1)
     {
-        if (m_cell && m_cell->u.freeCell.zeroIfFree == 0)
-            ++*this;
+        ++(*this);
     }
 
-    template <HeapType heapType> 
-    CollectorHeapIterator<heapType>& CollectorHeapIterator<heapType>::operator++()
+    template <HeapType heapType>
+    inline DeadObjectIterator<heapType>& DeadObjectIterator<heapType>::operator++()
     {
         do {
-            for (++m_cell; m_cell != m_endCell; ++m_cell)
-                if (m_cell->u.freeCell.zeroIfFree != 0) {
-                    return *this;
-                }
-
-            if (++m_block != m_endBlock) {
-                m_cell = (*m_block)->cells;
-                m_endCell = (*m_block)->cells + HeapConstants<heapType>::cellsPerBlock;
-            }
-        } while(m_block != m_endBlock);
-
-        m_cell = 0;
+            advance(HeapConstants<heapType>::cellsPerBlock);
+            ASSERT(m_block > m_heap.nextBlock || (m_block == m_heap.nextBlock && m_cell >= m_heap.nextCell));
+        } while (m_block < m_heap.usedBlocks && m_heap.blocks[m_block]->marked.get(m_cell));
         return *this;
     }
 
-    template <HeapType heapType> 
-    JSCell* CollectorHeapIterator<heapType>::operator*() const
+    template <HeapType heapType>
+    inline ObjectIterator<heapType>::ObjectIterator(CollectorHeap& heap, size_t startBlock, size_t startCell)
+        : CollectorHeapIterator(heap, startBlock, startCell - 1)
     {
-        return reinterpret_cast<JSCell*>(m_cell);
+        ++(*this);
+    }
+
+    template <HeapType heapType>
+    inline ObjectIterator<heapType>& ObjectIterator<heapType>::operator++()
+    {
+        advance(HeapConstants<heapType>::cellsPerBlock);
+        return *this;
     }
 
 } // namespace JSC
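
In prose, the rule these iterators encode (under my reading of the code above) is:
a cell that lies before the heap's allocation point (nextBlock, nextCell) was
handed out during the current cycle and is treated as live without consulting the
mark bitmap; a cell at or past that point is live only if its mark bit is set. A
standalone sketch of that rule, with hypothetical names:

    #include <assert.h>
    #include <stddef.h>

    struct HeapPositionModel {
        size_t nextBlock; // next block the allocator will carve cells from
        size_t nextCell;  // next cell within that block
    };

    static bool isLive(const HeapPositionModel& heap, size_t block, size_t cell, bool markBit)
    {
        if (block < heap.nextBlock || (block == heap.nextBlock && cell < heap.nextCell))
            return true;  // allocated since the last collection
        return markBit;   // older cell: live only if the collector marked it
    }

    int main()
    {
        HeapPositionModel heap = { 2, 10 };
        assert(isLive(heap, 1, 50, false));  // before the allocation point
        assert(isLive(heap, 2, 9, false));   // same block, earlier cell
        assert(!isLive(heap, 2, 10, false)); // at the allocation point, unmarked
        assert(isLive(heap, 3, 0, true));    // past it, but marked
        return 0;
    }
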
diff --git a/JavaScriptCore/runtime/JSArray.cpp b/JavaScriptCore/runtime/JSArray.cpp
index b16d3fa..0d2a9b4 100644
--- a/JavaScriptCore/runtime/JSArray.cpp
+++ b/JavaScriptCore/runtime/JSArray.cpp
@@ -380,8 +380,6 @@ NEVER_INLINE void JSArray::putSlowCase(ExecState* exec, unsigned i, JSValue valu
 
     unsigned vectorLength = m_vectorLength;
 
-    Heap::heap(this)->reportExtraMemoryCost(storageSize(newVectorLength) - storageSize(vectorLength));
-
     if (newNumValuesInVector == storage->m_numValuesInVector + 1) {
         for (unsigned j = vectorLength; j < newVectorLength; ++j)
             storage->m_vector[j] = JSValue();
@@ -402,6 +400,8 @@ NEVER_INLINE void JSArray::putSlowCase(ExecState* exec, unsigned i, JSValue valu
     m_storage = storage;
 
     checkConsistency();
+
+    Heap::heap(this)->reportExtraMemoryCost(storageSize(newVectorLength) - storageSize(vectorLength));
 }
 
 bool JSArray::deleteProperty(ExecState* exec, const Identifier& propertyName)
@@ -492,13 +492,15 @@ bool JSArray::increaseVectorLength(unsigned newLength)
     if (!tryFastRealloc(storage, storageSize(newVectorLength)).getValue(storage))
         return false;
 
-    Heap::heap(this)->reportExtraMemoryCost(storageSize(newVectorLength) - storageSize(vectorLength));
     m_vectorLength = newVectorLength;
 
     for (unsigned i = vectorLength; i < newVectorLength; ++i)
         storage->m_vector[i] = JSValue();
 
     m_storage = storage;
+
+    Heap::heap(this)->reportExtraMemoryCost(storageSize(newVectorLength) - storageSize(vectorLength));
+
     return true;
 }
 
diff --git a/JavaScriptCore/runtime/JSCell.h b/JavaScriptCore/runtime/JSCell.h
index c8ba2b8..ba3e3ba 100644
--- a/JavaScriptCore/runtime/JSCell.h
+++ b/JavaScriptCore/runtime/JSCell.h
@@ -47,10 +47,14 @@ namespace JSC {
 
     private:
         explicit JSCell(Structure*);
-        JSCell(); // Only used for initializing Collector blocks.
         virtual ~JSCell();
 
     public:
+        static PassRefPtr<Structure> createDummyStructure()
+        {
+            return Structure::create(jsNull(), TypeInfo(UnspecifiedType));
+        }
+
         // Querying the type.
 #if USE(JSVALUE32)
         bool isNumber() const;
@@ -122,11 +126,6 @@ namespace JSC {
     {
     }
 
-    // Only used for initializing Collector blocks.
-    inline JSCell::JSCell()
-    {
-    }
-
     inline JSCell::~JSCell()
     {
     }
diff --git a/JavaScriptCore/runtime/JSGlobalData.cpp b/JavaScriptCore/runtime/JSGlobalData.cpp
index 234449f..7726f4d 100644
--- a/JavaScriptCore/runtime/JSGlobalData.cpp
+++ b/JavaScriptCore/runtime/JSGlobalData.cpp
@@ -126,6 +126,7 @@ JSGlobalData::JSGlobalData(bool isShared, const VPtrSet& vptrSet)
     , propertyNameIteratorStructure(JSPropertyNameIterator::createStructure(jsNull()))
     , getterSetterStructure(GetterSetter::createStructure(jsNull()))
     , apiWrapperStructure(JSAPIValueWrapper::createStructure(jsNull()))
+    , dummyMarkableCellStructure(JSCell::createDummyStructure())
 #if USE(JSVALUE32)
     , numberStructure(JSNumberCell::createStructure(jsNull()))
 #endif
diff --git a/JavaScriptCore/runtime/JSGlobalData.h b/JavaScriptCore/runtime/JSGlobalData.h
index f0c1b5c..9f725c6 100644
--- a/JavaScriptCore/runtime/JSGlobalData.h
+++ b/JavaScriptCore/runtime/JSGlobalData.h
@@ -123,6 +123,7 @@ namespace JSC {
         RefPtr<Structure> propertyNameIteratorStructure;
         RefPtr<Structure> getterSetterStructure;
         RefPtr<Structure> apiWrapperStructure;
+        RefPtr<Structure> dummyMarkableCellStructure;
 
 #if USE(JSVALUE32)
         RefPtr<Structure> numberStructure;
diff --git a/JavaScriptCore/runtime/JSString.h b/JavaScriptCore/runtime/JSString.h
index c423814..865f383 100644
--- a/JavaScriptCore/runtime/JSString.h
+++ b/JavaScriptCore/runtime/JSString.h
@@ -422,7 +422,7 @@ namespace JSC {
             if (c <= 0xFF)
                 return globalData->smallStrings.singleCharacterString(globalData, c);
         }
-        return new (globalData) JSString(globalData, UString(UString::Rep::create(s.rep(), offset, length)));
+        return new (globalData) JSString(globalData, UString(UString::Rep::create(s.rep(), offset, length)), JSString::HasOtherOwner);
     }
 
     inline JSString* jsOwnedString(JSGlobalData* globalData, const UString& s)
diff --git a/JavaScriptCore/runtime/Tracing.d b/JavaScriptCore/runtime/Tracing.d
index b9efaff..da854b9 100644
--- a/JavaScriptCore/runtime/Tracing.d
+++ b/JavaScriptCore/runtime/Tracing.d
@@ -27,7 +27,7 @@ provider JavaScriptCore
 {
     probe gc__begin();
     probe gc__marked();
-    probe gc__end(int, int);
+    probe gc__end();
     
     probe profile__will_execute(int, char*, char*, int);
     probe profile__did_execute(int, char*, char*, int);
diff --git a/JavaScriptCore/runtime/Tracing.h b/JavaScriptCore/runtime/Tracing.h
index e544f66..c28c85f 100644
--- a/JavaScriptCore/runtime/Tracing.h
+++ b/JavaScriptCore/runtime/Tracing.h
@@ -33,7 +33,7 @@
 #define JAVASCRIPTCORE_GC_BEGIN()
 #define JAVASCRIPTCORE_GC_BEGIN_ENABLED() 0
 
-#define JAVASCRIPTCORE_GC_END(arg0, arg1)
+#define JAVASCRIPTCORE_GC_END()
 #define JAVASCRIPTCORE_GC_END_ENABLED() 0
 
 #define JAVASCRIPTCORE_GC_MARKED()
diff --git a/JavaScriptCore/runtime/UString.h b/JavaScriptCore/runtime/UString.h
index ec9c211..b880f9c 100644
--- a/JavaScriptCore/runtime/UString.h
+++ b/JavaScriptCore/runtime/UString.h
@@ -512,7 +512,7 @@ namespace JSC {
     // huge buffer.
     // FIXME: this should be size_t but that would cause warnings until we
     // fix UString sizes to be size_t instead of int
-    static const int minShareSize = Heap::minExtraCostSize / sizeof(UChar);
+    static const int minShareSize = Heap::minExtraCost / sizeof(UChar);
 
     inline size_t UString::cost() const
     {
diff --git a/JavaScriptCore/runtime/WeakGCMap.h b/JavaScriptCore/runtime/WeakGCMap.h
new file mode 100644
index 0000000..e287d17
--- /dev/null
+++ b/JavaScriptCore/runtime/WeakGCMap.h
@@ -0,0 +1,121 @@
+/*
+ * Copyright (C) 2009 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef WeakGCMap_h
+#define WeakGCMap_h
+
+#include <wtf/HashMap.h>
+
+namespace JSC {
+
+class JSCell;
+
+// A HashMap whose get() function returns emptyValue() for cells awaiting destruction.
+template<typename KeyType, typename MappedType>
+class WeakGCMap {
+    /*
+    Invariants:
+        * A value enters the WeakGCMap marked. (Guaranteed by set().)
+        * A value that becomes unmarked leaves the WeakGCMap before being recycled. (Guaranteed by the value's destructor removing it from the WeakGCMap.)
+        * A value that becomes unmarked leaves the WeakGCMap before becoming marked again. (Guaranteed by all destructors running before the mark phase begins.)
+        * During the mark phase, all values in the WeakGCMap are valid. (Guaranteed by all destructors running before the mark phase begins.)
+    */
+
+public:
+    typedef typename HashMap<KeyType, MappedType>::iterator iterator;
+    typedef typename HashMap<KeyType, MappedType>::const_iterator const_iterator;
+    
+    bool isEmpty() { return m_map.isEmpty(); }
+
+    MappedType get(const KeyType& key) const;
+    pair<iterator, bool> set(const KeyType&, const MappedType&); 
+    MappedType take(const KeyType& key);
+
+    // These unchecked functions provide access to a value even if the value's
+    // mark bit is not set. This is used, among other things, to retrieve values
+    // during the GC mark phase, which begins by clearing all mark bits.
+
+    MappedType uncheckedGet(const KeyType& key) const { return m_map.get(key); }
+    bool uncheckedRemove(const KeyType&, const MappedType&);
+
+    iterator uncheckedBegin() { return m_map.begin(); }
+    iterator uncheckedEnd() { return m_map.end(); }
+
+    const_iterator uncheckedBegin() const { return m_map.begin(); }
+    const_iterator uncheckedEnd() const { return m_map.end(); }
+
+private:
+    HashMap<KeyType, MappedType> m_map;
+};
+
+template<typename KeyType, typename MappedType>
+MappedType WeakGCMap<KeyType, MappedType>::get(const KeyType& key) const
+{
+    MappedType result = m_map.get(key);
+    if (result == HashTraits<MappedType>::emptyValue())
+        return result;
+    if (!Heap::isCellMarked(result))
+        return HashTraits<MappedType>::emptyValue();
+    return result;
+}
+
+template<typename KeyType, typename MappedType>
+MappedType WeakGCMap<KeyType, MappedType>::take(const KeyType& key)
+{
+    MappedType result = m_map.take(key);
+    if (result == HashTraits<MappedType>::emptyValue())
+        return result;
+    if (!Heap::isCellMarked(result))
+        return HashTraits<MappedType>::emptyValue();
+    return result;
+}
+
+template<typename KeyType, typename MappedType>
+pair<typename HashMap<KeyType, MappedType>::iterator, bool> WeakGCMap<KeyType, MappedType>::set(const KeyType& key, const MappedType& value)
+{
+    Heap::markCell(value); // If value is newly allocated, it's not marked, so mark it now.
+    pair<iterator, bool> result = m_map.add(key, value);
+    if (!result.second) { // pre-existing entry
+        result.second = !Heap::isCellMarked(result.first->second);
+        result.first->second = value;
+    }
+    return result;
+}
+
+template<typename KeyType, typename MappedType>
+bool WeakGCMap<KeyType, MappedType>::uncheckedRemove(const KeyType& key, const MappedType& value)
+{
+    iterator it = m_map.find(key);
+    if (it == m_map.end())
+        return false;
+    if (it->second != value)
+        return false;
+    m_map.remove(it);
+    return true;
+}
+
+} // namespace JSC
+
+#endif // WeakGCMap_h
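
Since the semantics of this class are subtle, here is a standalone model of the
get()/set() behaviour described in the comments above (plain C++, not the WebKit
class): values whose mark bit is clear are reported as absent even though they are
still physically in the map until their destructor removes them. The Cell type and
its isMarked flag stand in for JSCell and Heap::isCellMarked().

    #include <assert.h>
    #include <map>
    #include <string>

    struct Cell { bool isMarked; };

    class WeakMapModel {
    public:
        void set(const std::string& key, Cell* value)
        {
            value->isMarked = true; // a value enters the map marked
            m_map[key] = value;
        }

        Cell* get(const std::string& key) const
        {
            std::map<std::string, Cell*>::const_iterator it = m_map.find(key);
            if (it == m_map.end())
                return 0;
            if (!it->second->isMarked) // awaiting destruction: report it as absent
                return 0;
            return it->second;
        }

    private:
        std::map<std::string, Cell*> m_map;
    };

    int main()
    {
        Cell cell = { false };
        WeakMapModel map;

        map.set("wrapper", &cell);
        assert(map.get("wrapper") == &cell);

        cell.isMarked = false;           // the cell became garbage; no destructor has run yet
        assert(map.get("wrapper") == 0); // get() hides it, like the real WeakGCMap
        return 0;
    }
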
diff --git a/JavaScriptCore/wtf/StdLibExtras.h b/JavaScriptCore/wtf/StdLibExtras.h
index dd90c85..09436ab 100644
--- a/JavaScriptCore/wtf/StdLibExtras.h
+++ b/JavaScriptCore/wtf/StdLibExtras.h
@@ -69,6 +69,14 @@ namespace WTF {
         return u.to;
     }
 
+    // Returns a count of the number of bits set in 'bits'.
+    inline size_t bitCount(unsigned bits)
+    {
+        bits = bits - ((bits >> 1) & 0x55555555);
+        bits = (bits & 0x33333333) + ((bits >> 2) & 0x33333333);
+        return (((bits + (bits >> 4)) & 0xF0F0F0F) * 0x1010101) >> 24;
+    }
+
 } // namespace WTF
 
 #endif
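
A quick standalone check of the SWAR popcount added above, comparing it with a
naive one-bit-at-a-time count (the function body is copied locally so the sketch
builds outside WebKit; it assumes a 32-bit unsigned, as the original does):

    #include <assert.h>
    #include <stddef.h>

    static size_t bitCount(unsigned bits)
    {
        bits = bits - ((bits >> 1) & 0x55555555);
        bits = (bits & 0x33333333) + ((bits >> 2) & 0x33333333);
        return (((bits + (bits >> 4)) & 0xF0F0F0F) * 0x1010101) >> 24;
    }

    int main()
    {
        const unsigned samples[] = { 0u, 1u, 0x80000000u, 0xFFFFFFFFu, 0x12345678u, 0xAAAAAAAAu };
        for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i) {
            size_t naive = 0;
            for (unsigned v = samples[i]; v; v >>= 1)
                naive += v & 1;
            assert(bitCount(samples[i]) == naive);
        }
        return 0;
    }
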
diff --git a/JavaScriptGlue/ChangeLog b/JavaScriptGlue/ChangeLog
index 95cce28..4254672 100644
--- a/JavaScriptGlue/ChangeLog
+++ b/JavaScriptGlue/ChangeLog
@@ -1,3 +1,13 @@
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Reviewed by Sam Weinig.
+        
+        Changed GC from mark-sweep to mark-allocate.
+        
+        * JavaScriptGlue.cpp:
+        (JSCollect): Updated for rename. Fixed a bug where JSGlue did not check
+        whether a collection was already in progress, to avoid nested GC calls.
+
 2009-12-08  Dmitry Titov  <dimich at chromium.org>
 
         Rubber-stamped by David Levin.
diff --git a/JavaScriptGlue/JavaScriptGlue.cpp b/JavaScriptGlue/JavaScriptGlue.cpp
index b4f26e9..e552f19 100644
--- a/JavaScriptGlue/JavaScriptGlue.cpp
+++ b/JavaScriptGlue/JavaScriptGlue.cpp
@@ -339,7 +339,9 @@ void JSCollect()
     initializeThreading();
 
     JSLock lock(LockForReal);
-    getThreadGlobalExecState()->heap()->collect();
+    Heap* heap = getThreadGlobalExecState()->heap();
+    if (!heap->isBusy())
+        heap->collectAllGarbage();
 }
 
 /*
diff --git a/WebCore/ChangeLog b/WebCore/ChangeLog
index 93b8844..bbd4b73 100644
--- a/WebCore/ChangeLog
+++ b/WebCore/ChangeLog
@@ -1,3 +1,65 @@
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Reviewed by Sam Weinig.
+        
+        Changed GC from mark-sweep to mark-allocate.
+
+        * ForwardingHeaders/runtime/WeakGCMap.h: Added.
+        * bindings/js/GCController.cpp:
+        (WebCore::collect):
+        (WebCore::GCController::gcTimerFired):
+        (WebCore::GCController::garbageCollectNow): Updated for rename.
+
+        * bindings/js/JSDOMBinding.cpp:
+        (WebCore::removeWrappers):
+        (WebCore::hasCachedDOMObjectWrapperUnchecked):
+        (WebCore::hasCachedDOMObjectWrapper):
+        (WebCore::hasCachedDOMNodeWrapperUnchecked):
+        (WebCore::forgetDOMObject):
+        (WebCore::forgetDOMNode):
+        (WebCore::isObservableThroughDOM):
+        (WebCore::markDOMNodesForDocument):
+        (WebCore::markDOMObjectWrapper):
+        (WebCore::markDOMNodeWrapper):
+        * bindings/js/JSDOMBinding.h: Changed DOM wrapper maps to be WeakGCMaps.
+        Don't ASSERT that an item must be in the WeakGCMap when its destructor
+        runs, since it might have been overwritten in the map first.
+
+        * bindings/js/JSDocumentCustom.cpp:
+        (WebCore::toJS): Changed Document from a DOM object wrapper to a DOM node
+        wrapper, to simplify some code.
+
+        * bindings/js/JSInspectedObjectWrapper.cpp:
+        (WebCore::JSInspectedObjectWrapper::JSInspectedObjectWrapper):
+        (WebCore::JSInspectedObjectWrapper::~JSInspectedObjectWrapper):
+        * bindings/js/JSInspectorCallbackWrapper.cpp: Use a WeakGCMap for these
+        wrappers.
+
+        * bindings/js/JSNodeCustom.cpp:
+        (WebCore::JSNode::markChildren): Updated for WeakGCMap and Document using
+        a DOM node wrapper instead of a DOM object wrapper.
+
+        * bindings/js/JSSVGPODTypeWrapper.h:
+        (WebCore::JSSVGDynamicPODTypeWrapperCache::wrapperMap):
+        (WebCore::JSSVGDynamicPODTypeWrapperCache::lookupOrCreateWrapper):
+        (WebCore::JSSVGDynamicPODTypeWrapperCache::forgetWrapper):
+        (WebCore::JSSVGDynamicPODTypeWrapper::~JSSVGDynamicPODTypeWrapper): Shined a
+        small beam of sanity light on this code. Use hashtable-based lookup in
+        JSSVGPODTypeWrapper.h instead of linear lookup through iteration, since
+        that's what hashtables were invented for. Make JSSVGDynamicPODTypeWrapper
+        responsible for removing itself from the table, instead of its JS wrapper,
+        to decouple these objects from GC, and because these objects are refCounted,
+        not solely owned by their JS wrappers.
+
+        * bindings/scripts/CodeGeneratorJS.pm:
+        * dom/Document.h: Adopted changes above.
+
+2009-12-13  Geoffrey Garen  <ggaren at apple.com>
+
+        Windows build fix: Removed an incorrect #ifdef.
+
+        * bindings/js/GCController.cpp:
+
 2009-12-13  Charles Reis  <creis at chromium.org>
 
         Reviewed by Adam Barth.
diff --git a/WebCore/ForwardingHeaders/runtime/WeakGCMap.h b/WebCore/ForwardingHeaders/runtime/WeakGCMap.h
new file mode 100644
index 0000000..89432a8
--- /dev/null
+++ b/WebCore/ForwardingHeaders/runtime/WeakGCMap.h
@@ -0,0 +1,4 @@
+#ifndef WebCore_FWD_WeakGCMap_h
+#define WebCore_FWD_WeakGCMap_h
+#include <JavaScriptCore/WeakGCMap.h>
+#endif
diff --git a/WebCore/bindings/js/GCController.cpp b/WebCore/bindings/js/GCController.cpp
index 59bcfa3..3e5645f 100644
--- a/WebCore/bindings/js/GCController.cpp
+++ b/WebCore/bindings/js/GCController.cpp
@@ -40,17 +40,13 @@ using namespace JSC;
 
 namespace WebCore {
 
-#if USE(PTHREADS)
-
 static void* collect(void*)
 {
     JSLock lock(SilenceAssertionsOnly);
-    JSDOMWindow::commonJSGlobalData()->heap.collect();
+    JSDOMWindow::commonJSGlobalData()->heap.collectAllGarbage();
     return 0;
 }
 
-#endif
-
 GCController& gcController()
 {
     DEFINE_STATIC_LOCAL(GCController, staticGCController, ());
@@ -70,14 +66,12 @@ void GCController::garbageCollectSoon()
 
 void GCController::gcTimerFired(Timer<GCController>*)
 {
-    JSLock lock(SilenceAssertionsOnly);
-    JSDOMWindow::commonJSGlobalData()->heap.collect();
+    collect(0);
 }
 
 void GCController::garbageCollectNow()
 {
-    JSLock lock(SilenceAssertionsOnly);
-    JSDOMWindow::commonJSGlobalData()->heap.collect();
+    collect(0);
 }
 
 void GCController::garbageCollectOnAlternateThreadForDebugging(bool waitUntilDone)
diff --git a/WebCore/bindings/js/JSDOMBinding.cpp b/WebCore/bindings/js/JSDOMBinding.cpp
index f12c779..6faea27 100644
--- a/WebCore/bindings/js/JSDOMBinding.cpp
+++ b/WebCore/bindings/js/JSDOMBinding.cpp
@@ -134,15 +134,15 @@ static void removeWrapper(DOMObject* wrapper)
 
 static void removeWrappers(const JSWrapperCache& wrappers)
 {
-    JSWrapperCache::const_iterator wrappersEnd = wrappers.end();
-    for (JSWrapperCache::const_iterator it = wrappers.begin(); it != wrappersEnd; ++it)
+    JSWrapperCache::const_iterator wrappersEnd = wrappers.uncheckedEnd();
+    for (JSWrapperCache::const_iterator it = wrappers.uncheckedBegin(); it != wrappersEnd; ++it)
         removeWrapper(it->second);
 }
 
 static inline void removeWrappers(const DOMObjectWrapperMap& wrappers)
 {
-    DOMObjectWrapperMap::const_iterator wrappersEnd = wrappers.end();
-    for (DOMObjectWrapperMap::const_iterator it = wrappers.begin(); it != wrappersEnd; ++it)
+    DOMObjectWrapperMap::const_iterator wrappersEnd = wrappers.uncheckedEnd();
+    for (DOMObjectWrapperMap::const_iterator it = wrappers.uncheckedBegin(); it != wrappersEnd; ++it)
         removeWrapper(it->second);
 }
 
@@ -242,10 +242,19 @@ static inline DOMObjectWrapperMap& DOMObjectWrapperMapFor(JSC::ExecState* exec)
     return currentWorld(exec)->m_wrappers;
 }
 
+bool hasCachedDOMObjectWrapperUnchecked(JSGlobalData* globalData, void* objectHandle)
+{
+    for (JSGlobalDataWorldIterator worldIter(globalData); worldIter; ++worldIter) {
+        if (worldIter->m_wrappers.uncheckedGet(objectHandle))
+            return true;
+    }
+    return false;
+}
+
 bool hasCachedDOMObjectWrapper(JSGlobalData* globalData, void* objectHandle)
 {
     for (JSGlobalDataWorldIterator worldIter(globalData); worldIter; ++worldIter) {
-        if (worldIter->m_wrappers.contains(objectHandle))
+        if (worldIter->m_wrappers.get(objectHandle))
             return true;
     }
     return false;
@@ -262,14 +271,14 @@ void cacheDOMObjectWrapper(JSC::ExecState* exec, void* objectHandle, DOMObject*
     DOMObjectWrapperMapFor(exec).set(objectHandle, wrapper);
 }
 
-bool hasCachedDOMNodeWrapper(Document* document, Node* node)
+bool hasCachedDOMNodeWrapperUnchecked(Document* document, Node* node)
 {
     if (!document)
-        return hasCachedDOMObjectWrapper(JSDOMWindow::commonJSGlobalData(), node);
+        return hasCachedDOMObjectWrapperUnchecked(JSDOMWindow::commonJSGlobalData(), node);
 
     JSWrapperCacheMap& wrapperCacheMap = document->wrapperCacheMap();
     for (JSWrapperCacheMap::iterator iter = wrapperCacheMap.begin(); iter != wrapperCacheMap.end(); ++iter) {
-        if (iter->second->contains(node))
+        if (iter->second->uncheckedGet(node))
             return true;
     }
     return false;
@@ -286,20 +295,13 @@ void forgetDOMObject(DOMObject* wrapper, void* objectHandle)
 {
     JSC::JSGlobalData* globalData = Heap::heap(wrapper)->globalData();
     for (JSGlobalDataWorldIterator worldIter(globalData); worldIter; ++worldIter) {
-        DOMObjectWrapperMap& wrappers = worldIter->m_wrappers;
-        DOMObjectWrapperMap::iterator iter = wrappers.find(objectHandle);
-        if ((iter != wrappers.end()) && (iter->second == wrapper)) {
-            removeWrapper(wrapper);
-            wrappers.remove(iter);
-            return;
-        }
+        if (worldIter->m_wrappers.uncheckedRemove(objectHandle, wrapper))
+            break;
     }
-
-    // If the world went away, it should have removed this wrapper from the set.
-    ASSERT(!wrapperSet().contains(wrapper));
+    removeWrapper(wrapper);
 }
 
-void forgetDOMNode(DOMObject* wrapper, Node* node, Document* document)
+void forgetDOMNode(JSNode* wrapper, Node* node, Document* document)
 {
     if (!document) {
         forgetDOMObject(wrapper, node);
@@ -308,17 +310,10 @@ void forgetDOMNode(DOMObject* wrapper, Node* node, Document* document)
 
     JSWrapperCacheMap& wrapperCacheMap = document->wrapperCacheMap();
     for (JSWrapperCacheMap::iterator wrappersIter = wrapperCacheMap.begin(); wrappersIter != wrapperCacheMap.end(); ++wrappersIter) {
-        JSWrapperCache* wrappers = wrappersIter->second;
-        JSWrapperCache::iterator iter = wrappers->find(node);
-        if ((iter != wrappers->end()) && (iter->second == wrapper)) {
-            wrappers->remove(iter);
-            removeWrapper(wrapper);
-            return;
-        }
+        if (wrappersIter->second->uncheckedRemove(node, wrapper))
+            break;
     }
-
-    // If the world went away, it should have removed this wrapper from the set.
-    ASSERT(!wrapperSet().contains(wrapper));
+    removeWrapper(wrapper);
 }
 
 void cacheDOMNodeWrapper(JSC::ExecState* exec, Document* document, Node* node, JSNode* wrapper)
@@ -379,14 +374,14 @@ static inline bool isObservableThroughDOM(JSNode* jsNode, DOMWrapperWorld* world
         // the custom markChildren functions rather than here.
         if (node->isElementNode()) {
             if (NamedNodeMap* attributes = static_cast<Element*>(node)->attributeMap()) {
-                if (DOMObject* wrapper = world->m_wrappers.get(attributes)) {
+                if (DOMObject* wrapper = world->m_wrappers.uncheckedGet(attributes)) {
                     if (wrapper->hasCustomProperties())
                         return true;
                 }
             }
             if (node->isStyledElement()) {
                 if (CSSMutableStyleDeclaration* style = static_cast<StyledElement*>(node)->inlineStyleDecl()) {
-                    if (DOMObject* wrapper = world->m_wrappers.get(style)) {
+                    if (DOMObject* wrapper = world->m_wrappers.uncheckedGet(style)) {
                         if (wrapper->hasCustomProperties())
                             return true;
                     }
@@ -394,7 +389,7 @@ static inline bool isObservableThroughDOM(JSNode* jsNode, DOMWrapperWorld* world
             }
             if (static_cast<Element*>(node)->hasTagName(canvasTag)) {
                 if (CanvasRenderingContext* context = static_cast<HTMLCanvasElement*>(node)->renderingContext()) {
-                    if (DOMObject* wrapper = world->m_wrappers.get(context)) {
+                    if (DOMObject* wrapper = world->m_wrappers.uncheckedGet(context)) {
                         if (wrapper->hasCustomProperties())
                             return true;
                     }
@@ -432,8 +427,8 @@ void markDOMNodesForDocument(MarkStack& markStack, Document* document)
         DOMWrapperWorld* world = wrappersIter->first;
         JSWrapperCache* nodeDict = wrappersIter->second;
 
-        JSWrapperCache::iterator nodeEnd = nodeDict->end();
-        for (JSWrapperCache::iterator nodeIt = nodeDict->begin(); nodeIt != nodeEnd; ++nodeIt) {
+        JSWrapperCache::iterator nodeEnd = nodeDict->uncheckedEnd();
+        for (JSWrapperCache::iterator nodeIt = nodeDict->uncheckedBegin(); nodeIt != nodeEnd; ++nodeIt) {
             JSNode* jsNode = nodeIt->second;
             if (isObservableThroughDOM(jsNode, world))
                 markStack.append(jsNode);
@@ -516,7 +511,7 @@ void markDOMObjectWrapper(MarkStack& markStack, JSGlobalData& globalData, void*
         return;
 
     for (JSGlobalDataWorldIterator worldIter(&globalData); worldIter; ++worldIter) {
-        if (DOMObject* wrapper = worldIter->m_wrappers.get(object))
+        if (DOMObject* wrapper = worldIter->m_wrappers.uncheckedGet(object))
             markStack.append(wrapper);
     }
 }
@@ -526,14 +521,14 @@ void markDOMNodeWrapper(MarkStack& markStack, Document* document, Node* node)
     if (document) {
         JSWrapperCacheMap& wrapperCacheMap = document->wrapperCacheMap();
         for (JSWrapperCacheMap::iterator iter = wrapperCacheMap.begin(); iter != wrapperCacheMap.end(); ++iter) {
-            if (JSNode* wrapper = iter->second->get(node))
+            if (JSNode* wrapper = iter->second->uncheckedGet(node))
                 markStack.append(wrapper);
         }
         return;
     }
 
     for (JSGlobalDataWorldIterator worldIter(JSDOMWindow::commonJSGlobalData()); worldIter; ++worldIter) {
-        if (DOMObject* wrapper = worldIter->m_wrappers.get(node))
+        if (DOMObject* wrapper = worldIter->m_wrappers.uncheckedGet(node))
             markStack.append(wrapper);
     }
 }
diff --git a/WebCore/bindings/js/JSDOMBinding.h b/WebCore/bindings/js/JSDOMBinding.h
index 3982dad..d0b6762 100644
--- a/WebCore/bindings/js/JSDOMBinding.h
+++ b/WebCore/bindings/js/JSDOMBinding.h
@@ -23,9 +23,10 @@
 #define JSDOMBinding_h
 
 #include "JSDOMGlobalObject.h"
-#include "Document.h" // For DOMConstructorWithDocument
+#include "Document.h"
 #include <runtime/Completion.h>
 #include <runtime/Lookup.h>
+#include <runtime/WeakGCMap.h>
 #include <wtf/Noncopyable.h>
 
 namespace JSC {
@@ -139,7 +140,7 @@ namespace WebCore {
         }
     };
 
-    typedef HashMap<void*, DOMObject*> DOMObjectWrapperMap;
+    typedef JSC::WeakGCMap<void*, DOMObject*> DOMObjectWrapperMap;
 
     class DOMWrapperWorld : public RefCounted<DOMWrapperWorld> {
     public:
@@ -216,22 +217,24 @@ namespace WebCore {
         DOMWrapperWorld m_normalWorld;
     };
 
-    bool hasCachedDOMObjectWrapper(JSC::JSGlobalData*, void* objectHandle);
     DOMObject* getCachedDOMObjectWrapper(JSC::ExecState*, void* objectHandle);
+    bool hasCachedDOMObjectWrapper(JSC::JSGlobalData*, void* objectHandle);
     void cacheDOMObjectWrapper(JSC::ExecState*, void* objectHandle, DOMObject* wrapper);
-    void forgetDOMNode(DOMObject* wrapper, Node* node, Document* document);
+    void forgetDOMNode(JSNode* wrapper, Node* node, Document* document);
     void forgetDOMObject(DOMObject* wrapper, void* objectHandle);
 
-    bool hasCachedDOMNodeWrapper(Document*, Node*);
     JSNode* getCachedDOMNodeWrapper(JSC::ExecState*, Document*, Node*);
     void cacheDOMNodeWrapper(JSC::ExecState*, Document*, Node*, JSNode* wrapper);
     void forgetAllDOMNodesForDocument(Document*);
     void forgetWorldOfDOMNodesForDocument(Document*, DOMWrapperWorld*);
     void updateDOMNodeDocument(Node*, Document* oldDocument, Document* newDocument);
+
     void markDOMNodesForDocument(JSC::MarkStack&, Document*);
     void markActiveObjectsForContext(JSC::MarkStack&, JSC::JSGlobalData&, ScriptExecutionContext*);
     void markDOMObjectWrapper(JSC::MarkStack&, JSC::JSGlobalData& globalData, void* object);
     void markDOMNodeWrapper(JSC::MarkStack& markStack, Document* document, Node* node);
+    bool hasCachedDOMObjectWrapperUnchecked(JSC::JSGlobalData*, void* objectHandle);
+    bool hasCachedDOMNodeWrapperUnchecked(Document*, Node*);
 
     JSC::Structure* getCachedDOMStructure(JSDOMGlobalObject*, const JSC::ClassInfo*);
     JSC::Structure* cacheDOMStructure(JSDOMGlobalObject*, NonNullPassRefPtr<JSC::Structure>, const JSC::ClassInfo*);
diff --git a/WebCore/bindings/js/JSDocumentCustom.cpp b/WebCore/bindings/js/JSDocumentCustom.cpp
index 4aa6583..9366399 100644
--- a/WebCore/bindings/js/JSDocumentCustom.cpp
+++ b/WebCore/bindings/js/JSDocumentCustom.cpp
@@ -96,18 +96,18 @@ JSValue toJS(ExecState* exec, JSDOMGlobalObject* globalObject, Document* documen
     if (!document)
         return jsNull();
 
-    DOMObject* wrapper = getCachedDOMObjectWrapper(exec, document);
+    DOMObject* wrapper = getCachedDOMNodeWrapper(exec, document, document);
     if (wrapper)
         return wrapper;
 
     if (document->isHTMLDocument())
-        wrapper = CREATE_DOM_OBJECT_WRAPPER(exec, globalObject, HTMLDocument, document);
+        wrapper = CREATE_DOM_NODE_WRAPPER(exec, globalObject, HTMLDocument, document);
 #if ENABLE(SVG)
     else if (document->isSVGDocument())
-        wrapper = CREATE_DOM_OBJECT_WRAPPER(exec, globalObject, SVGDocument, document);
+        wrapper = CREATE_DOM_NODE_WRAPPER(exec, globalObject, SVGDocument, document);
 #endif
     else
-        wrapper = CREATE_DOM_OBJECT_WRAPPER(exec, globalObject, Document, document);
+        wrapper = CREATE_DOM_NODE_WRAPPER(exec, globalObject, Document, document);
 
     // Make sure the document is kept around by the window object, and works right with the
     // back/forward cache.
diff --git a/WebCore/bindings/js/JSInspectedObjectWrapper.cpp b/WebCore/bindings/js/JSInspectedObjectWrapper.cpp
index 13f59b7..60213b3 100644
--- a/WebCore/bindings/js/JSInspectedObjectWrapper.cpp
+++ b/WebCore/bindings/js/JSInspectedObjectWrapper.cpp
@@ -30,6 +30,7 @@
 
 #include "JSInspectorCallbackWrapper.h"
 #include <runtime/JSGlobalObject.h>
+#include <runtime/WeakGCMap.h>
 #include <wtf/StdLibExtras.h>
 
 using namespace JSC;
@@ -38,7 +39,7 @@ namespace WebCore {
 
 ASSERT_CLASS_FITS_IN_CELL(JSInspectedObjectWrapper);
 
-typedef HashMap<JSObject*, JSInspectedObjectWrapper*> WrapperMap;
+typedef WeakGCMap<JSObject*, JSInspectedObjectWrapper*> WrapperMap;
 typedef HashMap<JSGlobalObject*, WrapperMap*> GlobalObjectWrapperMap;
 
 static GlobalObjectWrapperMap& wrappers()
@@ -81,17 +82,17 @@ JSInspectedObjectWrapper::JSInspectedObjectWrapper(ExecState* unwrappedExec, JSO
         wrappers().set(unwrappedGlobalObject(), wrapperMap);
     }
 
-    ASSERT(!wrapperMap->contains(unwrappedObject));
-    wrapperMap->set(unwrappedObject, this);
+    pair<WrapperMap::iterator, bool> result = wrapperMap->set(unwrappedObject, this);
+    ASSERT(result.second);
+    UNUSED_PARAM(result);
 }
 
 JSInspectedObjectWrapper::~JSInspectedObjectWrapper()
 {
-    ASSERT(wrappers().contains(unwrappedGlobalObject()));
     WrapperMap* wrapperMap = wrappers().get(unwrappedGlobalObject());
+    ASSERT(wrapperMap);
 
-    ASSERT(wrapperMap->contains(unwrappedObject()));
-    wrapperMap->remove(unwrappedObject());
+    wrapperMap->uncheckedRemove(unwrappedObject(), this);
 
     if (wrapperMap->isEmpty()) {
         wrappers().remove(unwrappedGlobalObject());
diff --git a/WebCore/bindings/js/JSInspectorCallbackWrapper.cpp b/WebCore/bindings/js/JSInspectorCallbackWrapper.cpp
index ff4fbb9..3a73fda 100644
--- a/WebCore/bindings/js/JSInspectorCallbackWrapper.cpp
+++ b/WebCore/bindings/js/JSInspectorCallbackWrapper.cpp
@@ -29,6 +29,7 @@
 #if ENABLE(INSPECTOR)
 
 #include "JSInspectedObjectWrapper.h"
+#include <runtime/Protect.h>
 #include <wtf/StdLibExtras.h>
 
 using namespace JSC;
diff --git a/WebCore/bindings/js/JSNodeCustom.cpp b/WebCore/bindings/js/JSNodeCustom.cpp
index f375ae5..737430e 100644
--- a/WebCore/bindings/js/JSNodeCustom.cpp
+++ b/WebCore/bindings/js/JSNodeCustom.cpp
@@ -144,7 +144,7 @@ void JSNode::markChildren(MarkStack& markStack)
     // mark any other nodes.
     if (node->inDocument()) {
         if (Document* doc = node->ownerDocument())
-            markDOMObjectWrapper(markStack, *Heap::heap(this)->globalData(), doc);
+            markDOMNodeWrapper(markStack, doc, doc);
         return;
     }
 
@@ -154,7 +154,7 @@ void JSNode::markChildren(MarkStack& markStack)
     Node* outermostNodeWithWrapper = node;
     for (Node* current = m_impl.get(); current; current = current->parentNode()) {
         root = current;
-        if (hasCachedDOMNodeWrapper(current->document(), current))
+        if (hasCachedDOMNodeWrapperUnchecked(current->document(), current))
             outermostNodeWithWrapper = current;
     }
 
diff --git a/WebCore/bindings/js/JSSVGPODTypeWrapper.h b/WebCore/bindings/js/JSSVGPODTypeWrapper.h
index 51e4e9e..fea7a5f 100644
--- a/WebCore/bindings/js/JSSVGPODTypeWrapper.h
+++ b/WebCore/bindings/js/JSSVGPODTypeWrapper.h
@@ -105,6 +105,8 @@ private:
         ASSERT(m_setter);
     }
 
+    virtual ~JSSVGDynamicPODTypeWrapper();
+
     // Update callbacks
     RefPtr<PODTypeCreator> m_creator;
     GetterMethod m_getter;
@@ -351,6 +353,7 @@ struct PODTypeWrapperCacheInfoTraits : WTF::GenericHashTraits<PODTypeWrapperCach
     }
 };
 
+// Used for dynamic read-write attributes
 template<typename PODType, typename PODTypeCreator>
 class JSSVGDynamicPODTypeWrapperCache {
 public:
@@ -362,49 +365,41 @@ public:
     typedef PODTypeWrapperCacheInfoTraits<PODType, PODTypeCreator> CacheInfoTraits;
 
     typedef JSSVGPODTypeWrapper<PODType> WrapperBase;
-    typedef JSSVGDynamicPODTypeWrapper<PODType, PODTypeCreator> DynamicWrapper;
-    typedef HashMap<CacheInfo, DynamicWrapper*, CacheInfoHash, CacheInfoTraits> DynamicWrapperHashMap;
-    typedef typename DynamicWrapperHashMap::const_iterator DynamicWrapperHashMapIterator;
+    typedef JSSVGDynamicPODTypeWrapper<PODType, PODTypeCreator> Wrapper;
+    typedef HashMap<CacheInfo, Wrapper*, CacheInfoHash, CacheInfoTraits> WrapperMap;
 
-    static DynamicWrapperHashMap& dynamicWrapperHashMap()
+    static WrapperMap& wrapperMap()
     {
-        DEFINE_STATIC_LOCAL(DynamicWrapperHashMap, s_dynamicWrapperHashMap, ());
-        return s_dynamicWrapperHashMap;
+        DEFINE_STATIC_LOCAL(WrapperMap, s_wrapperMap, ());
+        return s_wrapperMap;
     }
 
-    // Used for readwrite attributes only
     static PassRefPtr<WrapperBase> lookupOrCreateWrapper(PODTypeCreator* creator, GetterMethod getter, SetterMethod setter)
     {
-        DynamicWrapperHashMap& map(dynamicWrapperHashMap());
         CacheInfo info(creator, getter, setter);
+        pair<typename WrapperMap::iterator, bool> result = wrapperMap().add(info, 0);
+        if (!result.second) // pre-existing entry
+            return result.first->second;
 
-        if (map.contains(info))
-            return map.get(info);
-
-        RefPtr<DynamicWrapper> wrapper = DynamicWrapper::create(creator, getter, setter);
-        map.set(info, wrapper.get());
+        RefPtr<Wrapper> wrapper = Wrapper::create(creator, getter, setter);
+        result.first->second = wrapper.get();
         return wrapper.release();
     }
 
-    static void forgetWrapper(WrapperBase* wrapper)
+    static void forgetWrapper(PODTypeCreator* creator, GetterMethod getter, SetterMethod setter)
     {
-        DynamicWrapperHashMap& map(dynamicWrapperHashMap());
-
-        DynamicWrapperHashMapIterator it = map.begin();
-        DynamicWrapperHashMapIterator end = map.end();
-
-        for (; it != end; ++it) {
-            if (it->second != wrapper)
-                continue;
-
-            // It's guaranteed that there's just one object we need to take care of.
-            map.remove(it->first);
-            break;
-        }
+        CacheInfo info(creator, getter, setter);
+        wrapperMap().remove(info);
     }
 };
 
-};
+template<typename PODType, typename PODTypeCreator>
+JSSVGDynamicPODTypeWrapper<PODType, PODTypeCreator>::~JSSVGDynamicPODTypeWrapper()
+{
+    JSSVGDynamicPODTypeWrapperCache<PODType, PODTypeCreator>::forgetWrapper(m_creator.get(), m_getter, m_setter);
+}
+
+} // namespace WebCore
 
 #endif // ENABLE(SVG)
 #endif // JSSVGPODTypeWrapper_h
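
The lookupOrCreateWrapper change above relies on the single-lookup "add, then fill
in if new" hash map idiom. Below is a standalone sketch of the same idiom, using
std::unordered_map in place of WTF::HashMap and hypothetical Wrapper/key types:

    #include <assert.h>
    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Wrapper {
        explicit Wrapper(const std::string& name) : name(name) { }
        std::string name;
    };

    typedef std::unordered_map<std::string, std::shared_ptr<Wrapper> > WrapperMap;

    static WrapperMap& wrapperMap()
    {
        static WrapperMap map;
        return map;
    }

    static std::shared_ptr<Wrapper> lookupOrCreate(const std::string& key)
    {
        // emplace() returns pair<iterator, bool>, much like WTF::HashMap::add():
        // one hash lookup either finds the existing entry or reserves an empty slot.
        std::pair<WrapperMap::iterator, bool> result = wrapperMap().emplace(key, nullptr);
        if (!result.second) // pre-existing entry
            return result.first->second;

        std::shared_ptr<Wrapper> wrapper = std::make_shared<Wrapper>(key);
        result.first->second = wrapper;
        return wrapper;
    }

    static void forgetWrapper(const std::string& key)
    {
        wrapperMap().erase(key); // one hash erase, no iteration over the whole map
    }

    int main()
    {
        std::shared_ptr<Wrapper> first = lookupOrCreate("rect");
        std::shared_ptr<Wrapper> second = lookupOrCreate("rect");
        assert(first == second); // second call hit the pre-existing entry

        forgetWrapper("rect");
        assert(lookupOrCreate("rect") != first); // a fresh wrapper after removal
        return 0;
    }
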
diff --git a/WebCore/bindings/scripts/CodeGeneratorJS.pm b/WebCore/bindings/scripts/CodeGeneratorJS.pm
index 6ccf739..ca4b03c 100644
--- a/WebCore/bindings/scripts/CodeGeneratorJS.pm
+++ b/WebCore/bindings/scripts/CodeGeneratorJS.pm
@@ -566,7 +566,7 @@ sub GenerateHeader
     }
 
     # Destructor
-    push(@headerContent, "    virtual ~$className();\n") if (!$hasParent or $eventTarget or $interfaceName eq "Document" or $interfaceName eq "DOMWindow");
+    push(@headerContent, "    virtual ~$className();\n") if (!$hasParent or $eventTarget or $interfaceName eq "DOMWindow");
 
     # Prototype
     push(@headerContent, "    static JSC::JSObject* createPrototype(JSC::ExecState*, JSC::JSGlobalObject*);\n") unless ($dataNode->extendedAttributes->{"ExtendsDOMGlobalObject"});
@@ -1210,15 +1210,6 @@ sub GenerateImplementation
             if ($interfaceName eq "Node") {
                  push(@implContent, "    forgetDOMNode(this, impl(), impl()->document());\n");
             } else {
-                if ($podType) {
-                    my $animatedType = $implClassName;
-                    $animatedType =~ s/SVG/SVGAnimated/;
-
-                    # Special case for JSSVGNumber
-                    if ($codeGenerator->IsSVGAnimatedType($animatedType) and $podType ne "float") {
-                        push(@implContent, "    JSSVGDynamicPODTypeWrapperCache<$podType, $animatedType>::forgetWrapper(m_impl.get());\n");
-                    }
-                }
                 push(@implContent, "    forgetDOMObject(this, impl());\n");
             }
         }
@@ -1226,13 +1217,6 @@ sub GenerateImplementation
         push(@implContent, "}\n\n");
     }
 
-    # Document needs a special destructor because it's a special case for caching. It needs
-    # its own special handling rather than relying on the caching that Node normally does.
-    if ($interfaceName eq "Document") {
-        push(@implContent, "${className}::~$className()\n");
-        push(@implContent, "{\n    forgetDOMObject(this, static_cast<${implClassName}*>(impl()));\n}\n\n");
-    }
-
     if ($needsMarkChildren && !$dataNode->extendedAttributes->{"CustomMarkFunction"}) {
         push(@implContent, "void ${className}::markChildren(MarkStack& markStack)\n");
         push(@implContent, "{\n");
diff --git a/WebCore/dom/Document.h b/WebCore/dom/Document.h
index 0f108e9..7e8c5bd 100644
--- a/WebCore/dom/Document.h
+++ b/WebCore/dom/Document.h
@@ -35,6 +35,7 @@
 #include "DocumentMarker.h"
 #include "ScriptExecutionContext.h"
 #include "Timer.h"
+#include <runtime/WeakGCMap.h>
 #include <wtf/HashCountedSet.h>
 #include <wtf/OwnPtr.h>
 #include <wtf/PassOwnPtr.h>
@@ -829,7 +830,7 @@ public:
     virtual void scriptImported(unsigned long, const String&);
     virtual void postTask(PassOwnPtr<Task>); // Executes the task on context's thread asynchronously.
 
-    typedef HashMap<WebCore::Node*, JSNode*> JSWrapperCache;
+    typedef JSC::WeakGCMap<WebCore::Node*, JSNode*> JSWrapperCache;
     typedef HashMap<DOMWrapperWorld*, JSWrapperCache*> JSWrapperCacheMap;
     JSWrapperCacheMap& wrapperCacheMap() { return m_wrapperCacheMap; }
     JSWrapperCache* getWrapperCache(DOMWrapperWorld* world)

-- 
WebKit Debian packaging


