[Pkg-clamav-commits] [SCM] Debian repository for ClamAV branch, debian/unstable, updated. debian/0.95+dfsg-1-6156-g094ec9b

Török Edvin edwin at clamav.net
Sun Apr 4 01:13:11 UTC 2010


The following commit has been merged in the debian/unstable branch:
commit ae1be988092fd9e627866e42a3a84a3e510ad00e
Author: Török Edvin <edwin at clamav.net>
Date:   Mon Dec 28 19:44:08 2009 +0200

    Update to LLVM upstream r92222.
    
    Squashed commit of the following:
    
    commit 4d06dfc51403e0e54eb688a3a9fb1839ea2136a6
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Mon Dec 28 12:27:56 2009 +0000
    
        Add missing include (for inline PATypeHolder::get).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92222 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e05a9ea766a0b81c710a044887cfc22d6f36a664
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 09:32:10 2009 +0000
    
        avoid a completely unneeded linear walk.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92221 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 104369f82cb166e9c446c3e844136811466916b3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 09:24:53 2009 +0000
    
        Eliminate two bits of ugliness in MDNode::replaceElement:
        eliminate the temporary smallvector, and only do FindNodeOrInsertPos
        twice if the first one succeeds and we delete a node.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92220 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7bbe9111b1caea1c424c73bbba6338eedd96d65
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 09:12:35 2009 +0000
    
        rearrange some methods, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92219 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc1d027f0654706031bf0e6d7c57ede4dfc41373
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 09:10:16 2009 +0000
    
        avoid temporary CallbackVH's.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92218 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3716bc64a3ee906a6e40e6847ac915daa01e4919
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 09:07:21 2009 +0000
    
        Rewrite the function-local validation logic for MDNodes (most of r91708).
        Among other benefits, this doesn't leak the SmallPtrSet, has the verifier
        code in the verifier pass, actually does the verification at the end,
        and is considerably simpler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92217 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 894c992754b4095a0546181a0701674a3a696477
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:48:12 2009 +0000
    
        rename MDNode instance variables to something meaningful.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92216 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a89d238baa1ac500515df35491d0c7480ec87bf0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:30:43 2009 +0000
    
        snip one more #include from Metadata.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92214 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4ba716d8d60ff7b6e0e07bce48bcb68bcffa1388
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:26:43 2009 +0000
    
        prune #includes more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92213 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62a41720e49584820202799556c757518e23d3cf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:24:16 2009 +0000
    
        prune some #includes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92212 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 749a4129c976d4895f231f0fbfddf9c54c17ac1b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:20:46 2009 +0000
    
        Metadata.h doesn't need to include ValueHandle.h anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92211 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c8fe40711f3cf363831d95a63322a5963233f0a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:14:54 2009 +0000
    
        change the strange MetadataContext::getMDs function to expose less
        irrelevant internal implementation details to clients.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92210 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit febd655fa44c568fa3a80af56aed9955ff797157
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 08:07:14 2009 +0000
    
        change NamedMDNode to use a pimpl for its operand list instead
        of making it a declared part of the value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92209 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f465e4a2ed122934f187b6e0761c9268a200cb0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 07:57:01 2009 +0000
    
        eliminate the elem_* iterator stuff from NamedMDNode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92208 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da60d2e518af314a29eb38402e70a77b05fc5d32
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 07:41:54 2009 +0000
    
        move ElementVH out of the MDNode class into the MDNode.cpp file.  Among
        other things, this avoids vtable and rtti data for it being splatted in
        every translation unit that uses it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92207 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2df342071750e4ecc7dfec0fc108efd69967401d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 28 07:41:18 2009 +0000
    
        move these out of their own timer groups into the 'uncategorized' groups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92206 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f6dce81500015dbe2f7763bc88a74df9dc177f7
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Mon Dec 28 04:53:24 2009 +0000
    
        Fixed llc crash for zext (i1 -> i8) loads.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92201 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2f6348092dd3411c4bf41fb146945b6f5f387562
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Mon Dec 28 02:40:33 2009 +0000
    
        Allow targets to specify the return type of libcalls that are generated for floating point comparisons, rather than hard-coding them as i32.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92199 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4ab9372471bf169547c6c11eb1432d2c3fa06615
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 02:07:00 2009 +0000
    
        Mark variable used by 'assert' as 'unused'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92198 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e0224c4aa4b517723e2e950c00850422a7f083a
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 02:05:36 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92197 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39dd182871d4a9068a18d8d84e63983fc3a9d256
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 02:04:53 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92196 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f7e76f7cae0da2b59d8e6173723007f69a769f70
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 02:01:06 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92195 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 83e5f56eb04727ac461cfa6354882bd9649f6f7f
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 02:00:30 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92194 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0077d08de74578e474f611872454275c125a0656
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:57:39 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92193 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9c887a2902d05fa600b59ccda2b22f2af1ed76d
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:54:15 2009 +0000
    
        Remove dead store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92192 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a18ebc30a2abf0fe66b55af79518ed8c4b06f7c6
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:53:00 2009 +0000
    
        Remove dead store and simplify code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92191 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 70ef837cf7e36be18d866d67bf374ed62a58f6fb
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:51:30 2009 +0000
    
        Remove dead store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92190 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6d452859e652c22b193c00abadb48aefc427ef4
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:48:56 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92189 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96d90c5c9b8a19e5e4f902e28430353fda2a2dd0
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:47:48 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92188 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a223745a9f77c21863ca8b2725836092b38977a
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:44:39 2009 +0000
    
        Remove dead store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92187 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3e9be4e2bd703b642872ad835a93ebf79f8f465
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:42:12 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 94fff552bffee0a421218b26f228ebd0d1f7a7d0
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:41:12 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92185 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da097dc64ce517757c11b1345dab55dfa19bb14e
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:36:02 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b537cbf815f7aa6a62504cb9b69ec0402a1313f5
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:34:57 2009 +0000
    
        Mark some debug variables as 'unused' to quiet compiler and analyzer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92183 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc6c4ad2c7f1b207357abbe2f954c4df14b6f455
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:31:11 2009 +0000
    
        Remove dead store. The initial value was never used, but always overridden.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92182 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1e55ab7b75ee6d8bc8418f06b36fa77234276a8
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:20:29 2009 +0000
    
        Add an "ATTRIBUTE_UNUSED" macro (and use it). It's for variables which are
        mainly used in debugging and/or assert situations. It should make the compiler
        and the static analyzer stop nagging us about them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92181 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89b41234a06c9062f112e6f9f9b2a0311c8166ff
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:02:21 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92180 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 26af2111a96acb02a4f7dfa6815381a1979336eb
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:01:14 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92179 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8eb19339daa81669e709d65a479c1e837874a5b1
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 28 01:00:12 2009 +0000
    
        Remove dead variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92178 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c53dde98400a169d8a2aee60fd9e9749a0f1de3f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 26 22:58:39 2009 +0000
    
        lit: Add setuptools support.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92169 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c989d308a138180b0188cbe3d3c89497167a1a0b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 26 22:58:23 2009 +0000
    
        lit: Sink code into a 'lit' package.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92168 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 824050a388d1beb1428686ea886b1ea77d4dee94
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sat Dec 26 20:08:30 2009 +0000
    
        PR5886: Make sure IMUL32m is marked as setting EFLAGS, so scheduling doesn't
        do illegal stuff around it.  No testcase because the issue is very fragile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92167 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c5c0b12976c75ccdfd95a241525281e6061908c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 25 13:50:18 2009 +0000
    
        Avoid assigning to Changed when it won't be used after the return.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92160 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f0db31d8ee1ee48e20755a0fa6d4da5100aade2
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 25 13:45:50 2009 +0000
    
        Remove dead store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92159 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 90cd35e334e96b73cb0e68b126377e8f65d519fc
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 25 13:44:36 2009 +0000
    
        Remove dead store from copy-pasto.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92158 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d5cc74ccc7b1398736393e1634fa1e37a22ea2c3
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 25 13:39:58 2009 +0000
    
        Remove dead store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92157 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea340ea46c96dec8ba5dcb655474a3e811e956ca
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 25 13:37:27 2009 +0000
    
        Remove dead store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92156 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 220675b0f99b21cf7cb8a92f77d581708787ac3e
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 25 13:35:40 2009 +0000
    
        Use the 'MadeChange' variable instead of returning 'false' all of the time.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92155 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a1c4df64381a1795a44ac6f6342a40b024fa07e1
    Author: John McCall <rjmccall at apple.com>
    Date:   Thu Dec 24 23:18:09 2009 +0000
    
        Implement support for converting to string at "natural precision", and fix some
        major bugs in long-precision conversion.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92150 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9825c55a5a3a557c54c1af48bfb5954ddfbafc6c
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Dec 24 21:15:37 2009 +0000
    
        Move the two definitions of operator<< into namespace llvm, so they
        will be found by argument-dependent lookup. As with the previous
        commit, GCC is allowing ill-formed code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92146 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e69877669f44c5efd3b1f989aa32cdb362f3d27
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Dec 24 21:11:45 2009 +0000
    
        Define the new operator<< for sets into namespace std, so that
        argument-dependent lookup can find it. This is another case where an
        LLVM bug (not making operator<< visible) was masked by a GCC bug
        (looking in the global namespace when it shouldn't).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92144 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e573a9f0b82197ece967593d38d55799ada0572
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Dec 24 17:49:28 2009 +0000
    
        Don't emit trailing semicolon.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92133 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08cdb8a64f60f7737892f16ccab52acd1057d8e1
    Author: John McCall <rjmccall at apple.com>
    Date:   Thu Dec 24 12:16:56 2009 +0000
    
        Substantially optimize APFloat::toString() by doing a single large divide to
        cut the significand down to the desired precision *before* entering the
        core divmod loop.  Makes the overall algorithm logarithmic in the exponent.
    
        There's still a lot of room for improvement here, but this gets the
        performance back down to acceptable-for-diagnostics levels, even for
        long doubles.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f7d9cb2b8c27f4785f1d5a50a13fbb74b9a6e028
    Author: John McCall <rjmccall at apple.com>
    Date:   Thu Dec 24 08:56:26 2009 +0000
    
        Add accessors for the largest-magnitude, smallest-magnitude, and
        smallest-normalized-magnitude values in a given FP semantics.
        Provide an APFloat-to-string conversion which I am quite ready to admit could
        be much more efficient.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92126 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c77cc1794d1e2f70ad0b1b8ccd5e1661cd1c55c2
    Author: John McCall <rjmccall at apple.com>
    Date:   Thu Dec 24 08:52:06 2009 +0000
    
        Set Remainder before Quotient in case Quotient and LHS alias.  The new
        order should be immune to such problems.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92124 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f35785f5df44d1908b82d87c9edd3d2b93f84022
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Thu Dec 24 01:10:43 2009 +0000
    
        Testcase for llvm-gcc checkin 92108.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92110 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d0ad3b99ec52705b0256dfd0421344511028b32
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 24 01:07:17 2009 +0000
    
        handle equality memcmp of 8 bytes on x86-64 with two unaligned loads and a
        compare.  On other targets we end up with a call to memcmp because we don't
        want 16 individual byte loads.  We should be able to use movups as well, but
        we're failing to select the generated icmp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92107 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d922b408462129adb924b960e81ac65077fab5a
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Dec 24 00:39:02 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92099 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a34688bac4bb0ec549e82da03d912282d364492
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 24 00:37:38 2009 +0000
    
        move an optimization for memcmp out of simplifylibcalls and into
        SDISel.  This optimization was causing simplifylibcalls to
        introduce type-unsafe nastiness.  This is the first step, I'll be
        expanding the memcmp optimizations shortly, covering things that
        we really really wouldn't want simplifylibcalls to do.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92098 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 55f2a677a444fb3db1cf6b679f261d242b8636d1
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Dec 24 00:34:21 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92097 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 259191f8260ce170bac8eba3e221c35e93d2778e
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Dec 24 00:31:35 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92096 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 32c864e1d29d12c6156ce242458f4668e5acb211
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Dec 24 00:27:55 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92094 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c0e0771a5c8a082bffed6d4bcea8d6f41b186bfb
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Dec 24 00:14:25 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92093 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 081d4ccced881523716f5017bf619546a2e7ab43
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Dec 24 00:06:26 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92092 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b74668684da4ed870af49851ef26b0d526c77401
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:47:53 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92091 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a94535216c37f5c33a40b1a88a32a735077c3ff
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:38:28 2009 +0000
    
        Change errs() to dbgs().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92088 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff374c2b59fe114b6910a40285deed40230f1208
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:29:28 2009 +0000
    
        Change dbgs() back to errs() as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92086 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4151dd0bbf6a685586b8fecf4037a7a27456f944
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:27:15 2009 +0000
    
        Change dbgs() back to errs() as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92085 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5878b32788cee3c581890e775e1d8fcc8d55ccc5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 23 23:24:51 2009 +0000
    
        reorder to follow a normal fall-through style, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92084 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa3571123711181442ca28841946762622336a2b
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:23:15 2009 +0000
    
        Clarify how dbgs() operates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92083 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5e0a6b7dfd1f5ea140018bf1e30f1924d0a1c87e
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:19:43 2009 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92082 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 702481a81b95fed36e43478d719bf0f0deebd262
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:14:41 2009 +0000
    
        Change dbgs() back to errs() for assert messages as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92081 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6350f81d59920edcfd8b44ac6b66af0c7ff18113
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:09:39 2009 +0000
    
        Change dbgs() back to errs() for assert messages as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92080 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aeeb42419cc76d1fd44b1fe3fd4f4140ee2c3961
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 23 23:03:24 2009 +0000
    
        sizeof(char) is always 1.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92079 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d5fad3b9406fcfda51c31bd559edd61d7ef44d27
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 23:00:50 2009 +0000
    
        Change dbgs() back to errs() for assert messages as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92077 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dfea7e628a030fc5f15cbdd757ff661b7d834fd8
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:59:29 2009 +0000
    
        Change dbgs() back to errs() for assert messages as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92076 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce4d5136399a6a07388495c762e294986895d926
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:58:38 2009 +0000
    
        Remove dump routine and the associated Debug.h from a header.  Patch up
        other files to compensate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92075 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 053e91a4308e9fa66c1af1a7019ce45c71606f28
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:49:57 2009 +0000
    
        Change dbgs() back to errs() as Chris requested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92073 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0a32a9d73bc8035a13c6241433d6e70a2d4290fb
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:35:10 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92071 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9561c821ce2afaad4926e2830ed9f30cb27f29d7
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:28:01 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92068 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6778bc905cce16db6b5b1be049a42ad1e0c9fddb
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:18:14 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92067 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc874baf87b0799d6f6c576b718b33691d7259dd
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 22:10:20 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92066 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96a47017353e6883b2484cc5811c72118fbbd404
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 21:58:29 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92063 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce32cf00cc1f35e1d4cc5e00d611bcde720ce9da
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 21:48:18 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92060 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01295bc07f8f0a55e3498827a5bfc3ee0a386f50
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Dec 23 21:34:03 2009 +0000
    
        Move kill flags when the same register occurs more than once in a sequence.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92058 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a98a2f5a8b106b2abfcb2b1e8b6c2cb49311951
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Dec 23 21:28:42 2009 +0000
    
        Handle undef operands properly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92054 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ecae21aaf03469b5b0dd06b256425ae03a560b47
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Dec 23 21:28:37 2009 +0000
    
        Make insert position available to MergeOpsUpdate.
        Rearrange arguments.
        No functional changes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92053 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07be4590c1626617747d7b7922e17273a73e4c17
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Dec 23 21:28:31 2009 +0000
    
        Perform kill flag calculations in new method. No functional changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92052 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c119d7d49e4c86b04e942d8ae5ac8721e2ca9a4
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Dec 23 21:28:23 2009 +0000
    
        Move repeated code to a new method. No functional change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92051 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07cc79a1623ca8414da9255543762abe8a8037d9
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 21:27:29 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92050 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ff267f2711ec600d9458749bef022b8a6476ab4
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 21:16:54 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92048 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a80ab14e95be6b6ea2c7fa2b57398b830fe6beb5
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 21:06:14 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92046 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00532a28dca9a915557626b73b4d089108ce85cb
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 20:52:41 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92042 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d120239be2123affddcadf85d9ce6cdf75e4f064
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 20:43:58 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92040 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad44e7d5a9e226bf9b3aa54763e83fc51273a969
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 20:34:27 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92039 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a60d180d7fe86436196abb37f8b767da07cd30d
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 20:20:46 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92037 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db4dd0d7e4a22a5c0ae0ca9970b649fc5ad0082a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 23 20:13:44 2009 +0000
    
        Remove an XFAIL.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92036 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f17f28ba4b1067d125d577d52864810ef7733e61
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 20:10:59 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92035 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa7455dbb5c01cac91b9a254981cb381324a1b9f
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 20:03:58 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92034 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 13e07bacee394735a7a604ffd4fdaf15b8384c95
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 19:51:44 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92033 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d02176108f777e643dd1b34e70675c3c3892fe21
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 19:45:49 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92032 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb782751a0a6c7f4997f8afc75124241b0a07f94
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 19:27:59 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92029 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8323720790f8a7b2ccb57d0ed927c406c7ada2ff
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 19:21:19 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92026 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9177c9c21f5bade02492fdb1b1cae0d772108f21
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 19:15:13 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92024 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 514e97429ecef0dd9f8a483e2dbe3dc8de47b73d
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 23 19:12:50 2009 +0000
    
        Alternative fix to make sure that the extern declarations used by
        DynamicLibrary::SearchForAddressOfSymbol refer to declarations in the
        global namespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92023 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit afaefaaf3d83a920746492a48e70c7343f2df5e1
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 23 19:04:10 2009 +0000
    
        Revert 92020 until I figure out a more portable fix
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92021 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 852b0c0e84467eab2076b2014b54843b15b06b1e
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 23 18:56:27 2009 +0000
    
        Move the extern symbol declarations outside of
        DynamicLibrary::SearchForAddressOfSymbol and force them to have "C"
        linkage.
    
        Interestingly, GCC treats the block-scoped "extern" declarations we
        previously had as if they were extern "C" declarations (or, at least,
        were in the global namespace), so that GCC bug papered over this LLVM
        bug. Clang and EDG get the linkage correct; this new variant seems to
        work for both GCC and Clang.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92020 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a21b0c41a131b8eaf95cc441e519923a380ea19c
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 23 18:27:13 2009 +0000
    
        Fix another -Wmismatched-tags warning
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92017 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7cc2dbba20994329d71739adc9e54d63e95f37a5
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 18:25:37 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92016 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a28b20c236ba2ec1c466f612a84c8b965f60ea81
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 17:55:11 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92013 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc8b5720c49dd593865b6ed1e18928e5e038000e
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Wed Dec 23 17:48:10 2009 +0000
    
        move a few more symbols to .rodata
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d25af07faae782ffd616b13b2d8dadd1e229a0f9
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 17:24:22 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92006 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2531b82af471667ff2ffcab9eb801b77f04ec0f6
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 17:18:22 2009 +0000
    
        Convert debug messages to use dbgs().  Generally this means
        s/errs/dbgs/g except for certain special cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92005 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa80e75a467dd54a9e2007f29ff99aa8fd42626d
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 23 17:05:07 2009 +0000
    
        Fix struct/class mismatch for LTOModule and LTOCodeGenerator, detected by Clang
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92004 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5efd48c6c7c43ed1d8eadb79c17260b15f472f38
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 23 17:03:46 2009 +0000
    
        De-bork CMake build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92003 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb6f927871a16e14c3b78ec1382d75fe82197bd4
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 16:39:06 2009 +0000
    
        Provide dbgs(), a circular-buffering debug output stream.  By default it
        simply passes output to errs().  If -debug-buffer-size=N is set N > 0,
        dbgs() buffers its output until program termination and dumps the last N
        characters sent to it.  This is handy when debugging very large inputs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 825e4b5c9261d0c2ee58c5a771c1331fbfdc8885
    Author: David Greene <greened at obbligato.org>
    Date:   Wed Dec 23 16:08:15 2009 +0000
    
        Add circular_raw_ostream, which buffers its output in a circular queue
        and outputs it when explicitly flushed.  The intent is to use it in
        situations such as debug output logging where a signal handler can take
        care of flushing the buffer at program termination.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92001 91177308-0d34-0410-b5e6-96231b3b80d8
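
        [Editor's sketch] The two commits above describe the mechanism behind
        dbgs(): a stream that retains only the last N characters written and
        emits them when flushed (e.g. by a signal handler at termination).
        This is not LLVM's circular_raw_ostream implementation, just a
        minimal, self-contained illustration of the ring-buffer idea it
        describes; the class name CircularLog and its interface are invented
        for this example, and N is assumed to be greater than zero.

        ```cpp
        #include <string>
        #include <cstddef>

        // Minimal sketch of a circular-buffering log: keep only the last N
        // characters written, and reassemble them in order on flush().
        class CircularLog {
            std::string Buf;      // fixed-capacity ring storage
            std::size_t Cap;      // N: how many trailing characters to retain
            std::size_t Pos = 0;  // next write position within the ring
            bool Wrapped = false; // true once old data has been overwritten
        public:
            explicit CircularLog(std::size_t N) : Buf(N, '\0'), Cap(N) {}

            void write(const std::string &S) {
                for (char C : S) {
                    Buf[Pos] = C;
                    Pos = (Pos + 1) % Cap;
                    if (Pos == 0) Wrapped = true;
                }
            }

            // Return the retained tail in chronological order: the oldest
            // surviving characters start at Pos once the ring has wrapped.
            std::string flush() const {
                if (!Wrapped)
                    return Buf.substr(0, Pos);
                return Buf.substr(Pos) + Buf.substr(0, Pos);
            }
        };
        ```

        With N = 5, writing "abcdefgh" and flushing yields only the last five
        characters, which is the behavior -debug-buffer-size=N is described
        as giving dbgs().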
    
    commit 236c4bde3fd5143cf9ea997a9863e4a3caaf9210
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Dec 23 12:50:03 2009 +0000
    
        Make it easier to regenerate docs when srcdir != objdir.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92000 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 388266107e740ca794485b3a585def2883956f6f
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Dec 23 12:49:51 2009 +0000
    
        Regenerate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91999 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 509f9240d37ecbdcabf1d3157a6140322030706b
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Dec 23 12:49:41 2009 +0000
    
        Cosmetic issue: more consistent naming.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91998 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 27d15f58084d6b25bdec9c65eacc2821cb1b9d73
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Dec 23 12:49:30 2009 +0000
    
        Allow (set_option SwitchOption, true).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91997 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5bb804eb2f89c971a5860a6e0bb0f6f5d58a8ef7
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Dec 23 11:19:09 2009 +0000
    
        Reapply 91904.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91996 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1f01e330bad7cb24d966bf9e51cfa8c4afbdb4d1
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Dec 23 10:56:02 2009 +0000
    
        Added missing patterns for subtract instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91995 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5316fa5d7262c48b511104e64664f7b8938a5d6b
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Dec 23 10:35:24 2009 +0000
    
        deleting empty file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91994 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 391ab2127fb765c0c3fedd0e8db7ac440524353e
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Dec 23 09:46:01 2009 +0000
    
        Reverting back 91904.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91993 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f81e44d0fe15207bd331798aee58a3c95c39a4b8
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Dec 23 07:32:51 2009 +0000
    
        Use more sensible type for flags in asms.  PR 5570.
        Patch by Sylve`re Teissier (sorry, ASCII only).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91988 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb2367dd4da70f93d7b83b41c6c00d857358b139
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Dec 23 02:51:48 2009 +0000
    
        Update objectsize intrinsic and associated dependencies. Fix
        lowering code and update testcases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91979 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25a9b18b58d5c85ad531ca50ac8e81acdf3ffd76
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 23 01:46:40 2009 +0000
    
        really remove the instruction, don't just comment it out
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91976 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e34f628addb0a70b0f28ada176693cd30f9349a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 23 01:45:04 2009 +0000
    
        completely eliminate the MOV16r0 'instruction'.  The only
        interesting part of this is the divrem changes, which are
        already tested by CodeGen/X86/divrem.ll.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a8951e0e3247ddcfa1b0b4486cb77ceb1985a9a
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Dec 23 01:32:29 2009 +0000
    
        More fixes for Visual C++.  Replaced several very small
        static inline functions with macros.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ef8bc9fb5cb8ea6d404809565bdc7088b85f948
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 23 01:30:26 2009 +0000
    
        stop pattern matching 16-bit zero's of a register to MOV16r0,
        instead use the appropriate subreggy thing.  This generates identical
        code on some large apps (thanks to Evan's cross class coalescing
        stuff he did back in july).  This means that MOV16r0 can go away
        completely in the future soon.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91972 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f70c94554077d2291b6867a2f24e1cd40f2337d1
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 23 01:28:19 2009 +0000
    
        Remove superfluous SDNode ordering.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91971 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2be082890768917e7360194b667a7fdb81b140b3
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Dec 23 00:58:02 2009 +0000
    
        Disable JITTest.FunctionIsRecompiledAndRelinked on ARM where it's not
        implemented.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91963 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e5b9ec2b938c3eb3ef527f254d7200a720e351c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 23 00:47:20 2009 +0000
    
        Remove node ordering from inline asm nodes. It's not needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91961 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac0bfeb8b0af333e4167dbe6e15ff335ce7b67e1
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 23 00:45:10 2009 +0000
    
        Suppress compiler warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91959 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92b889c9eb3ea14a43dcc2ec7ee64d319a3156d3
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 23 00:44:51 2009 +0000
    
        Remove node ordering from VA nodes. It's not needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91958 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b4cefa1b947151b3f2178cf3aabec2642e7582b
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Dec 23 00:29:49 2009 +0000
    
        Update docs for bitcode changes. For object size checking we won't
        work with partial objects so just count the type as a boolean. Update
        appropriately.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91954 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b000faeb29370a0c4cb395368f4d026b74c1db7
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 23 00:28:23 2009 +0000
    
        Revert r91949 r91942 and r91936.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91953 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25ae075ab2194cfc445b001c65fa3bf73d65fa93
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Wed Dec 23 00:18:40 2009 +0000
    
        restore 'make update' functionality by not ignoring 'clang' here
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91950 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b24f6e8dd614f6e99af0caeedcbb79160d09e09c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 23 00:05:09 2009 +0000
    
        Finish up node ordering in ExpandNode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91949 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f55eba02e15f4dce951d3aef8e1e83c2c9fed762
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 22 23:54:54 2009 +0000
    
        Add coalescer asserts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91945 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6986e5d47b2c955e77994fb40bcd2f6c2f2038e
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 22 23:54:44 2009 +0000
    
        Add a SPR register class to the ARM target.
    
        Certain Thumb instructions require only SP (e.g. tSTRspi).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91944 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit daad0b43625ca69376938361b466ef42fbdd9776
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Dec 22 23:47:23 2009 +0000
    
        Partially revert r91626.  Materializing extra functions to determine whether
        they're available_externally broke VMKit, which was relying on the fact that
        functions would only be materialized when they were first called.  We'll have
        to wait for http://llvm.org/PR5737 to really fix this.
    
        I also added a test for one of the F->isDeclaration() calls which wasn't
        covered by anything else in the test suite.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91943 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c969f4b0e7b5df3a854b3d41285a26dc34c4b855
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 23:44:56 2009 +0000
    
        Assign ordering to nodes created in ExpandNode. Only roughly 1/2 of the function
        is finished.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91942 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14a5e3e29709da3c3297a739a06127f367628bef
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Dec 22 23:18:18 2009 +0000
    
        Fix a crash in JIT::recompileAndRelinkFunction(). It doesn't pass the MCI
        argument to runJITOnFunction(), which caused a null pointer dereference at
        every call.
    
        Patch by Gianluca Guida!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91939 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ee0a9f4d4ac6941cb03bb0e9bd88520221a0b45
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 22:53:39 2009 +0000
    
        Assign ordering to SDNodes in PromoteNode. Also fixing a subtle bug where BSWAP
        was using "Tmp1" in the first getNode call instead of Node->getOperand(0).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91936 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7043d80affa413b00d8ec2e205e26772e29c2429
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Dec 22 22:51:40 2009 +0000
    
        Removed the "inline" keyword from the disassembler decoder,
        because the Visual C++ build does not build .c files as C99
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91935 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 34879e45cd8129131ce21558aa92cc2f8876574d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 22:50:29 2009 +0000
    
        rename HexDisassembler -> Disassembler, it works on any input
        integer encoding (0123, 0b10101, 42, etc).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91934 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6913818399687ea52e6cce639dcf4e17fbd232c6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 22:47:43 2009 +0000
    
        just discard the debug output from the disassembler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91933 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 70abd22a14353cb48d63cc24c311343ce2bb5a7a
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Dec 22 22:37:23 2009 +0000
    
        Add testcase for PR5703
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91931 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc48d238c088e790bf368544b45ba3f43b632877
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 22 21:52:27 2009 +0000
    
        Remove minimal CFG sanity checks from verifier.
    
        These checks would often trigger on unreachable statements inserted by
        bugpoint, leading it astray.
    
        It would be nice if we could distinguish unreachable blocks from errors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91923 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f040bcb2c5d8edb446938db365ad2cc031822a9b
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 22 21:48:20 2009 +0000
    
        Allow explicit %reg0 operands beyond what the .td file describes.
    
        ARM uses these to indicate predicates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91922 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 798c23dcf69e484ae19906869e940bc6b7872117
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 21:35:02 2009 +0000
    
        Allow 0 as an order number. Don't assign an order to formal arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91920 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b53da77b6aaf950b6a59166ddaa5541326c4e43b
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Dec 22 21:12:55 2009 +0000
    
        Fixes to the X86 disassembler:
        Made LEA memory operands emit only 4 MCInst operands.
        Made the scale operand equal 1 for instructions that have no
        SIB byte.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91919 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e2dadd738efd7725ad8de7e064c2d67384f7ed4
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Dec 22 20:11:00 2009 +0000
    
        Restore snprintf weirdness for VCPP only
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91918 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 42806e669ce6482a6268d64ab4d0b53c07ccea70
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 19:33:28 2009 +0000
    
        Fix the Convert to scalar to not insert dead loads in the store case.  The
        load is needed when we have a small store into a large alloca (at which
        point we get a load/insert/store sequence), but when you do a full-sized
        store, this load ends up being dead.
    
        This dead load is bad in really large nasty testcases where the load ends
        up causing mem2reg to insert large chains of dependent phi nodes which only
        ADCE can delete.  Instead of doing this, just don't insert the dead load.
    
        This fixes rdar://6864035
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91917 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b0798ce7b64a048bb502c7cf954cd89484940e46
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 19:23:33 2009 +0000
    
        fix some fixme's by using twines
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91916 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd628a53c45c84a9b6058a26fb58ac8063d7f937
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 22 18:49:55 2009 +0000
    
        Use proper move instructions. Make the verifier happy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91914 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e4381a943076f3c22a07729e3e2fc528d13a369
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Dec 22 18:34:19 2009 +0000
    
        Report an error for bad inline assembly, where the value passed for an
        "indirect" operand is not a pointer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91913 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f8420485b61d066ed0a876a35cf3098769e1905c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 22 17:47:23 2009 +0000
    
        Remove target attribute break-sse-dep. Instead, do not fold load into sse partial update instructions unless optimizing for size.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91910 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c794cc4f30e9b35a9d2aec6f5b6e05dce42a2b7
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Tue Dec 22 17:25:11 2009 +0000
    
        Include based on the current path, since we already -I the X86 target's path. Fixes CMake build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91908 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3828a78a14f37cdf98db1a905ea51fdbd03a5b4
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Tue Dec 22 14:25:37 2009 +0000
    
        While converting one of the operands to a memory operand, we need to check that it is legal and does not result in a cyclic dependency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91904 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7dafc1537eed6c94009aca694090c5a7d280d1ea
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 07:03:21 2009 +0000
    
        specify what is invalid about it
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91901 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b67a19c78e9d7936eedd0d607dd56a849a5ae88
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 07:01:12 2009 +0000
    
        specify a triple to use, fixing the test on non-x86-64 hosts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91900 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e0dc43e679050600081a316ea18a09d292d47e8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:58:29 2009 +0000
    
        reject invalid input with a caret, e.g.:
    
        simple-tests.txt:16:1: error: invalid instruction
        0xff 0xff
        ^
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91898 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf24dfc6ea2873cce92ad5c10e8ec8c26e4b40f1
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Dec 22 06:57:14 2009 +0000
    
        Generalize SROA to allow the first index of a GEP to be non-zero.  Add a
        missing check that an array reference doesn't go past the end of the array,
        and remove some redundant checks for in-bound array and vector references
        that are no longer needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91897 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a30eb39c8801739c9a3cc139ea00a16d38140f9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:56:51 2009 +0000
    
        various cleanups, make the disassembler reject lines with too much
        data on them, for example:
    
        	addb	%al, (%rax)
        simple-tests.txt:11:5: error: excess data detected in input
        0 0 0 0 0
            ^
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91896 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e409a0ed6b8c66f9f4f0f3cef584af8d6dd013a0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:45:48 2009 +0000
    
        If you thought that it didn't make sense for the disassembler
        to not produce caret diagnostics, you were right!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91895 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d57f7cd43e80255b87f213e58611407add7b335d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:37:58 2009 +0000
    
        rewrite the file parser for the disassembler, implementing support for
        comments.  Also, check in a simple testcase for the disassembler,
        including a test for r91864
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91894 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 713e9712579faccc6f8760586bf6cbb777c5e2ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:24:00 2009 +0000
    
        don't crash on blank lines, rename some variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91892 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d4f9105490cb5cad700f117ffa1b84c9a3e6b0a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:07:30 2009 +0000
    
        Implement PR5795 by merging duplicated return blocks.  This could go further
        by merging all returns in a function into a single one, but simplifycfg
        currently likes to duplicate the return (an unfortunate choice!)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91890 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d85a53fad48d627c6e4a17771ebcd46cfc4db6e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 06:04:26 2009 +0000
    
        convert to filecheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91889 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c85d89d85046de478b62c7448927db2e4b87121
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 04:47:41 2009 +0000
    
        don't run GVN at -O1, GCC doesn't do its equivalent at that optimization level.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91886 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 873e26746bd3c642fbc9947a6dcf500c0995f3cb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 04:25:02 2009 +0000
    
        The phi translated pointer can be computed when returning a partially cached result
        instead of stored.  This reduces memdep memory usage, and also eliminates a bunch of
        weakvh's.  This speeds up gvn on gcc.c-torture/20001226-1.c from 23.9s to 8.45s (2.8x)
        on a different machine than earlier.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91885 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf9d0f9154274adb84a66c0581dd9529f0b83ea1
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 02:10:19 2009 +0000
    
        Add more plumbing. This time in the LowerArguments and "get" functions which
        return partial registers. This affected the back-end lowering code some.
    
        Also patch up some places I missed before in the "get" functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91880 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 919f267795692f9df06fef0ee620411534c819aa
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Dec 22 02:07:42 2009 +0000
    
        Changed REG_* to MODRM_REG_* to avoid conflicts
        with symbols in AuroraUX's global namespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91879 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 42c4404dd4b7162b06f56fd82dc779b0b3d13837
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 22 01:41:37 2009 +0000
    
        Fix some may-be-uninitialized var warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91878 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16ded292991e3e288f1d9deb3e33a5268f1cd6fb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 01:38:23 2009 +0000
    
        fix unit test that I broke.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91877 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b22e7af680a571467d80a6b5fd5661525010ded5
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 01:25:10 2009 +0000
    
        Add SDNode ordering to inlined asm and VA functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91876 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7581e0c8ad813a3f6c6c0d357b7ced37ed3dfb67
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Dec 22 01:23:51 2009 +0000
    
        Whitespace fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91875 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1bed4b1cef55bb83753e34e0929c4b10bcc0074
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 01:17:43 2009 +0000
    
        types don't need atomic inc/dec, they are local to an llvmcontext.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05a245f41d40f8be120b7d131a7ce16c17dc47d3
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 01:11:43 2009 +0000
    
        Adding more assignment of ordering to SDNodes. This time in the "call" and
        generic copy functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91872 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a116e5e520689a75baf513fa8d2cba65e0be743a
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Dec 22 01:11:26 2009 +0000
    
        Fixed library dependencies between the X86 disassembler and
        X86 codegen that were causing circular symbol dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91871 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc86f8cc6c57cb3329675f4fc8ec868fed7a1fe1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 01:00:32 2009 +0000
    
        avoid calling extractMallocCall when it's obvious we don't have
        a call.  This speeds up memdep ~1.5%
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91869 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d42a013740b839251387f41e40bad6a80552dae8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 00:51:57 2009 +0000
    
        comment fix: weakvh -> tracking vh
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91867 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16c7988cb5516a7686c51c2387eb3eef3858893c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 00:50:32 2009 +0000
    
        Add ordering of SDNodes to LowerCallTo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 437f3c269851340bf66d9f5b64f04df846781e13
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 22 00:44:05 2009 +0000
    
        print pcrel immediates as signed values instead of unsigned so that we
        get things like this out of the disassembler:
    
        0x100000ecb: callq	-96
    
        instead of:
    
        0x100000ecb: callq	4294967200
    
        rdar://7491123
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91864 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3b72641d65a254c01921e4f767003279383daba4
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 00:40:51 2009 +0000
    
        Now add ordering to SDNodes created by the massive intrinsic lowering function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91863 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f406ec5e612f8958fb8ea7c6b5b129753b0b8064
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 22 00:12:37 2009 +0000
    
        To make things interesting, I added MORE code to set the ordering of
        SDNodes. This time in the load/store and limited-precision code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91860 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87476f66ef1998ff5d93a2cb413ba10a1413d21b
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Dec 22 00:11:50 2009 +0000
    
        Changed slot index ranges for MachineBasicBlocks to be exclusive of endpoint.
        This fixes an in-place update bug where code inserted at the end of
        basic blocks may not be covered by existing intervals which were live
        across the entire block.  It is also consistent with the way ranges
        are specified for live intervals.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91859 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c01aa9daf0ba64f48ab7c3ffe81a7c51b2006fa
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 23:47:40 2009 +0000
    
        Add more plumbing to assign ordering to SDNodes. Have the "getValue" method
        assign the ordering when called. Combine some of the ordering assignments to
        keep things simple.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4137a8aa60914101aa1ad7f1a7dfe12c0b35607c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Dec 21 23:27:57 2009 +0000
    
        Add suggested parentheses.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91853 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79479fcf0a878ea487fd745920c106d5c860a723
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 23:15:48 2009 +0000
    
        Add a fastpath to Load GVN to special case when we have exactly one dominating
        load to avoid even messing around with SSAUpdate at all.  In this case
        (which is very common), we can just use the input value directly.
    
        This speeds up GVN time on gcc.c-torture/20001226-1.c from 36.4s to 16.3s,
        which still isn't great, but substantially better and this is a simple speedup
        that applies to lots of different cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91851 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8d65d5bf97d41881148b3ce93f634a9ac3903ab
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 23:10:19 2009 +0000
    
        More ordering plumbing. This time for GEP. I need to remember to assign
        orderings to values returned by getValue().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe213dba0d5785b3dfa97a22f0d20aaf9982e3f3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 23:04:33 2009 +0000
    
        refactor some code out to a new helper method.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91849 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b702ceab37bdb90e32041f6ae1b5085a82abd4c8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 22:43:03 2009 +0000
    
        improve indentation, avoid a pointless conversion from weakvh to
        trackingvh, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91848 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a33d37c9f1e5ef0c7f5bcdb1c488540d42f3f67
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 22:42:14 2009 +0000
    
        Another incremental check-in for assigning ordering to SDNodes. This time for
        shuffle and insert vector.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91847 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef4620fe2846df5790119fd45e62e28f6f490b3e
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 22:30:11 2009 +0000
    
        Assign ordering to more instructions. Incremental check-in.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91846 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4852546f88da1386e3f36ad7c0e270f0798a3dea
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 21:59:52 2009 +0000
    
        - Add a bit more plumbing assigning an order to SDNodes.
        - Modify the "dump" method to emit the order of an SDNode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91845 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29296e63554a23d2e8a7ccc7c1b776dd52346e1c
    Author: David Greene <greened at obbligato.org>
    Date:   Mon Dec 21 21:21:34 2009 +0000
    
        Fix a bug in !subst where TableGen would go and resubstitute text it had
        just substituted.  This could cause infinite looping in certain
        pathological cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91843 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df70e60850a5c0b905cd23e84e44e677dc6c03e7
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 21 20:19:37 2009 +0000
    
        Remove uber-gross hack.  Defining _snprintf to snprintf is invalid for
        two reasons: 1. According to the C++ standard, snprintf should be
        available in the std namespace (and in __gnu_cxx in the case of GCC,
        too).  Such an ifdef changes all snprintf's to _snprintf's, but won't
        bring snprintf into all the necessary namespaces; thus e.g. any
        locale-using code on mingw will yield an error (include this file +
        string to see the result).  2. MSVCRT's _snprintf does not comply with
        the C99 standard; the standard one is snprintf.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91842 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38150fc924f6aa5ba056d738d8ecd75cbc4f8778
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 21 20:18:49 2009 +0000
    
        Mark FPW as allocable when frame address is taken.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91841 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 708808ae99856d00af548177ea42e18d4ad32dc8
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 19:59:38 2009 +0000
    
        First wave of plumbing for assigning an ordering to SDNodes. This takes care of
        a lot of the branching instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91838 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e35f67f9527538257e684574bdf21392be2c92bf
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 21 19:53:39 2009 +0000
    
        Delete the instruction just before the function terminates for consistency sake.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91836 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d58e7f8655f246d9bbda9ba4933e1906a5690dc8
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 21 19:34:59 2009 +0000
    
        Place SDNodeOrdering.h in the directory it's used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91834 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 962018b8231c524caba670c9dd19427eafd6f1fd
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Dec 21 18:39:47 2009 +0000
    
        Remove special-case SROA optimization of variable indexes to one-element and
        two-element arrays.  After restructuring the SROA code, it was not safe to
        do this without adding more checking.  It is not clear that this special-case
        has really been useful, and removing this simplifies the code quite a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91828 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f5262f3ef1729d48a3e03f90192c053d27fa1e9
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Dec 21 17:31:59 2009 +0000
    
        XFAIL these tests on powerpc, under the assumption that no one cares.
        If you care, feel free to fix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91826 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c6ab9595c06e46bd1670ce7a0a52852d9ff669f
    Author: Eric Christopher <echristo at apple.com>
    Date:   Mon Dec 21 08:15:29 2009 +0000
    
        Fix setting and default setting of code model for jit. Do this
        by allowing backends to override routines that will default
        the JIT and Static code generation to an appropriate code model
        for the architecture.
    
        Should fix PR 5773.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91824 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76f683191bd6137f9177dca4686ead7d6944981b
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Mon Dec 21 08:03:16 2009 +0000
    
        A couple minor README updates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91823 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7abdd670070ff4c05b8e79c8d393b252e9cbf4d6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 07:52:40 2009 +0000
    
        improve compatibility with SWIG, patch by James Knight!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91822 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 860471ea8a2cf7e68285dd1351c51152518bd8e1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 07:45:57 2009 +0000
    
        revert r89298, which was committed without a testcase.  I think
        the underlying PHI node insertion issue in SSAUpdate is fixed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91821 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2bf5b030a6357c2fb9ee937c0a7b907f9d995052
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 07:16:11 2009 +0000
    
        fix PR5837 by having SSAUpdate reuse phi nodes for the
        'GetValueInMiddleOfBlock' case, instead of inserting
        duplicates.
    
        A similar fix is almost certainly needed by the machine-level
        SSAUpdate implementation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91820 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8785545bcf2d1e861969bdf0cdc3c506abf7163
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 07:15:15 2009 +0000
    
        add a helper ctor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91819 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ccdbd097759b1f3feaa10a39b7cc23341da58dbd
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Mon Dec 21 06:49:24 2009 +0000
    
        Change StringRef::startswith and StringRef::endswith to versions which are a
        bit more verbose, but optimize to much shorter code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91817 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f171d4bff6fe4500beb60a3525e2c1e3b76d46b7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 06:08:50 2009 +0000
    
        add check lines for min/max tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91816 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c88fd376d17c371454f88795bc0a6bb4b2e30e3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 06:06:10 2009 +0000
    
        really convert this to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91815 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ae148ac3895cc2204d32b84361bbebbb42384fb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 06:03:05 2009 +0000
    
        give instcombine some helper functions for matching MIN and MAX, and
        implement some optimizations for MIN(MIN()) and MAX(MAX()) and
        MIN(MAX()) etc.  This substantially improves the code in PR5822 but
        doesn't kick in much elsewhere.  2 max's were optimized in
        pairlocalalign and one in smg2000.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ee8750e43a6c4fd3baf29b50ff2f5144161f7eb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 05:53:13 2009 +0000
    
        filecheckize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91813 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eecd79c9182748a08fa05cb57b410f45a894432a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 04:04:05 2009 +0000
    
        enhance x-(-A) -> x+A to preserve NUW/NSW.
    
        Use the presence of NSW/NUW to fold "icmp (x+cst), x" to a constant in
        cases where it would otherwise be undefined behavior.
    
        Surprisingly (to me at least), this triggers hundreds of the times in
        a few benchmarks: lencode, ldecode, and 466.h264ref seem to *really*
        like this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91812 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2dce671f9422bce436c74ad48a2b964d08bf2af9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 03:19:28 2009 +0000
    
        Optimize all cases of "icmp (X+Cst), X" to something simpler.  This triggers
        a bunch in lencode, ldecod, spass, 176.gcc, 252.eon, among others.  It is
        also the first part of PR5822
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91811 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd6df61746f769d9195f057199a9d16a215c2396
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 21 03:11:05 2009 +0000
    
        convert to filecheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91810 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e4b507f989730839d48a1ea77845b3a4d8b3aa9
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sat Dec 19 23:32:32 2009 +0000
    
        Fixed use of phi param in SlotIndex constructors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91790 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d452037790a4653227a51c854409e3a151d9ca3d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Dec 19 21:29:22 2009 +0000
    
        fix an overly conservative caching issue that caused memdep to
        cache a pointer as being unavailable due to phi trans in the
        wrong place.  This would cause later queries to fail even when
        they didn't involve phi trans.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91787 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6fade252c5089e35a484d16b67dc3193d4744590
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 19 21:27:30 2009 +0000
    
        CMake: Update lib deps.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91786 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cae7a4ce386dd7b01264acd17fce0453ce671e66
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Dec 19 20:56:53 2009 +0000
    
        .llx is no more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d19bb92ed1e9b879ed5ad87ab0021bed0b21d1b9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Dec 19 20:44:43 2009 +0000
    
        fix inconsistent use of tabs
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91783 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 70c75b50f227c44043294c642734da957925df59
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 19 18:58:49 2009 +0000
    
        Remove unused variable (noticed by clang++).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91780 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05e9f7410481592f66c6169923d903f9c592d3c4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 19 17:11:53 2009 +0000
    
        #if 0 out X86 disassembler for now, it is breaking the build in multiple places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91778 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5bdef5b8ec8fd24804c3017e27417788d03c3e3b
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Dec 19 13:52:01 2009 +0000
    
        Emit direction operand in binary insns that store in memory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91777 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b40abc73cf4280ee01a3c40ecaab7834aa76796
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Dec 19 13:13:29 2009 +0000
    
        Adding a bunch of options to the mcc16 driver.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91776 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 333f2f2612d193b17a373c2643bf6033719e6998
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Sat Dec 19 12:07:00 2009 +0000
    
        rename dprintf to dbgprintf, in order to fix build with glibc (which already defines dprintf in stdio.h)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91775 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6adb84d954dd57766b7f3dd1454dfe3e05f83b7
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Sat Dec 19 11:52:18 2009 +0000
    
        fix build, and while at it remove a redundant include
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91774 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d31fb873f66238f8ccd0edd157b1451e83ad796e
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Dec 19 11:38:14 2009 +0000
    
        Test cases for changes done in 91768.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91773 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6758957e00103d5df70147ac3552efcf4a28c3b5
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Dec 19 08:26:25 2009 +0000
    
        1. In indirect load/store insns, the name of fsr should be emitted as INDF.
        2. Include standard assembly headers in generated assembly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91768 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9731ea6739a2acc4c0b437f168dc6769f3808a98
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Sat Dec 19 07:05:23 2009 +0000
    
        Fix a bunch of little errors that Clang complains about when it's being pedantic
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91764 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 42becae4234059212beb68cfdd09ddf0e9ff7377
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Dec 19 07:01:15 2009 +0000
    
        fix PR5827 by disabling the phi slicing transformation in a case
        where instcombine would have to split a critical edge due to a
        phi node of an invoke.  Since instcombine can't change the CFG,
        it has to bail out from doing the transformation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91763 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b827d0d4b3e378a70b894d130235274c56d4697
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Dec 19 06:53:17 2009 +0000
    
        Update my SROA changes in response to review.
        * change FindElementAndOffset to return a uint64_t instead of unsigned, and
          to identify the type to be used for that result in a GEP instruction.
        * move "isa<ConstantInt>" to be first in conditional.
        * replace some dyn_casts with casts.
        * add a comment about handling mem intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91762 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e38b949bc7db6805bb2a77ca4d6068037cd8c2d5
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 19 04:16:57 2009 +0000
    
        More bzero -> memset that I missed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91757 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6861faed9cba02c3a49b856a4e728dce8c1b651d
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 19 04:16:48 2009 +0000
    
        Add missing newlines at EOF (for clang++).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91756 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 299fdcebe71c89b3fc73923b44c4f0a9169e340d
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 19 03:31:50 2009 +0000
    
        Use memset instead of bzero, its more portable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91754 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7566a55acf4fe9a69ad17887fea806d846ba7e98
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Sat Dec 19 03:21:36 2009 +0000
    
        Remove spurious semicolon. Thanks, Clang
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91752 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a4f32c3c2eba982b658ac52f37f1d667960acea
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Sat Dec 19 02:59:52 2009 +0000
    
        Table-driven disassembler for the X86 architecture (16-, 32-, and 64-bit
        incarnations), integrated into the MC framework.
    
        The disassembler is table-driven, using a custom TableGen backend to
        generate hierarchical tables optimized for fast decode.  The disassembler
        consumes MemoryObjects and produces arrays of MCInsts, adhering to the
        abstract base class MCDisassembler (llvm/MC/MCDisassembler.h).
    
        The disassembler is documented in detail in
    
        - lib/Target/X86/Disassembler/X86Disassembler.cpp (disassembler runtime)
        - utils/TableGen/DisassemblerEmitter.cpp (table emitter)
    
        You can test the disassembler by running llvm-mc -disassemble for i386
        or x86_64 targets.  Please let me know if you encounter any problems
        with it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91749 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 232149a85f22b513b690ce6386cc8922b66b659b
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Dec 19 02:04:23 2009 +0000
    
        Bump alignment requirements for windows targets to achieve compatibility with vcpp.
        Based on patch by Michael Beck!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91745 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6a0bb8172b4327b725fd609028c9f8ceb8086d8d
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Dec 19 02:04:00 2009 +0000
    
        Use the 4-arg getVTList() variant instead of the generic one, when possible
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91744 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3e53b706108ada456d24d6f2090b6a96b153569
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 19 01:47:13 2009 +0000
    
        Delete unused code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91743 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6a1afd606983941b9db6a41029c238b2df0b520a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 19 01:46:34 2009 +0000
    
        Fix a spello in a comment that Nick spotted.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91742 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9cae1100652da2ac306c7b9d11446d8e4639b6d7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 19 01:46:09 2009 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91741 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 09e76650fe1e6c21a06f0ef7f43b15c1f433e546
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Dec 19 01:38:42 2009 +0000
    
        Make some methods const.  The only interesting change here is that
        it changes raw_fd_ostream::preferred_buffer_size to return zero on
        a scary stat failure instead of setting the stream to an error state.
        This method really should not mutate the stream.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91740 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28039a48aebce267d2472eb5e143f19c68354ac9
    Author: John McCall <rjmccall at apple.com>
    Date:   Sat Dec 19 00:55:12 2009 +0000
    
        Qualify a bunch of explicit template instantiations to satisfy clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91736 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb0624b1443a68698a41e40e479b6910e6a18a57
    Author: John McCall <rjmccall at apple.com>
    Date:   Sat Dec 19 00:51:42 2009 +0000
    
        Put TypesEqual and TypeHasCycleThroughItself in namespace llvm so ADL from
        the templates in TypesContext.h can find them.  Caught by clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91735 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3b3032dc132f8386dde8aa6ecbc6c8d92d35876
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Sat Dec 19 00:05:07 2009 +0000
    
        Forgot forward declaration.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91732 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f169e657ed99eca80912bd00c06489556091a826
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 23:42:08 2009 +0000
    
        Eliminate unnecessary LLVMContexts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a8afc0ed526b8b49af2c089ba600193f8b708c4
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 18 23:32:53 2009 +0000
    
        Changes from review:
    
        - Move DisableScheduling flag into TargetOption.h
        - Move SDNodeOrdering into its own header file. Give it a minimal interface that
          doesn't conflate construction with storage.
        - Move assigning the ordering into the SelectionDAGBuilder.
    
        This isn't used yet, so there should be no functional changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91727 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a0bb8e3f58a6f09316801bcaf043d34b048605c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 23:18:03 2009 +0000
    
        Make this comment more precise.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91722 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52cda8c8d86f40c6e2cab28f54367bf7c64ba804
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Fri Dec 18 21:38:44 2009 +0000
    
        Fix an issue in googletest where a name was used before it was defined.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91718 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d26b552b438a1b4ba2d60bbf715f0286aa22d780
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 18 21:31:31 2009 +0000
    
        Increase opportunities to optimize (brcond (srl (and c1), c2)).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91717 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6a4a08546569962a0a1929d1eda9fd46bffbde75
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Fri Dec 18 21:07:18 2009 +0000
    
        Fix gcc warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d59ead6cba177414008b14ccbab2d86cd1e8cf3f
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Fri Dec 18 20:35:38 2009 +0000
    
        Catch more cases of a pointer being marked garbage twice. This helps when
        debugging some leaks (PR5770 in particular).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91713 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a3d620799aa212d3ef04231e0bace426cb4eb29
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 18 20:14:40 2009 +0000
    
        Reapply 91459 with a simple fix for the problem that broke the x86_64-darwin
        bootstrap.  This also replaces the WeakVH references that Chris objected to
        with normal Value references.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91711 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6e122fd8ed8e60cc79a1fbaef1658fe5aca160a
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 18 20:12:14 2009 +0000
    
        Fix another parallel make race condition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91709 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e7564c22ba1e674327fc5c92540bb67bedaf292
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Dec 18 20:09:14 2009 +0000
    
        Formalize MDNode's function-localness:
        - an MDNode is designated as function-local when created, and continues to be so even if its operands are modified to no longer refer to function-local IR
        - function-localness is designated via the lowest bit in SubclassData
        - getLocalFunction() descends the MDNode tree to see if it is consistently function-local

        Add verification of MDNodes to check that MDNodes are consistently function-local.
        Update AsmWriter to use isFunctionLocal().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80441befe44d1fc2d8c7c26536b4b0e582ccbcfd
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Dec 18 19:59:48 2009 +0000
    
        Fix Win32 Path.inc for API update.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb4a07928cea4c2e4b9406665669c7062018280e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 18:45:31 2009 +0000
    
        Revert this use of NUW/NSW also. Overflow-undefined multiplication isn't
        associative either.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91701 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bd9dd2b40d94c3846795ed176ea5d3b641006625
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Fri Dec 18 16:59:39 2009 +0000
    
        Fix libstdc++ build on ARM linux and part of PR5770.
    
        MI was not being used but it was also not being deleted, so it was kept in the garbage list. The memory itself was freed once the function code gen was done.
    
        Once in a while the codegen of another function would create an instruction on the same address. Adding it to the garbage group would work once, but when another pointer was added it would cause an assert as "Cache" was about to be pushed to Ts.
    
        For a patch that makes us detect problems like this earlier, take a look at

        http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20091214/092758.html

        With that patch we assert as soon as the new instruction is added to the garbage set.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ffb71a4733c04874b5d47320507eea7e6a083791
    Author: Tilmann Scheller <tilmann.scheller at googlemail.com>
    Date:   Fri Dec 18 13:00:34 2009 +0000
    
        Fix wrong frame pointer save offset in the 64-bit PowerPC SVR4 ABI.
    
        Patch contributed by Ken Werner of IBM!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91681 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b2615b51f6c845fc7376f9bf3c773f5d1028e294
    Author: Tilmann Scheller <tilmann.scheller at googlemail.com>
    Date:   Fri Dec 18 13:00:15 2009 +0000
    
        Add support for calls through function pointers in the 64-bit PowerPC SVR4 ABI.
    
        Patch contributed by Ken Werner of IBM!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91680 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e931a13acc5d5dd31e3ec33ccc4b7438e8a5afca
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Dec 18 11:27:26 2009 +0000
    
        Make 'set_option' work with list options.
    
        This works now: (set_option "list_opt", ["val_1", "val_2", "val_3"])
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91679 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 21e40b731446c720069c0929bdfa590c946232a5
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Fri Dec 18 08:22:35 2009 +0000
    
        Optimize icmp of null and select of two constants even if the select has
        multiple uses.  (The construct in question was found in gcc.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91675 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31d9e33e0d4b4b815ec6002aaab2f280d450c21a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 18 07:40:29 2009 +0000
    
        On recent Intel u-arch's, folding loads into some unary SSE instructions can
        be non-optimal. To be precise, we should avoid folding loads if the instructions
        only update part of the destination register, and the non-updated part is not
        needed. e.g. cvtss2sd, sqrtss. Unfolding the load from these instructions breaks
        the partial register dependency and it can improve performance. e.g.
    
        movss (%rdi), %xmm0
        cvtss2sd %xmm0, %xmm0
    
        instead of
        cvtss2sd (%rdi), %xmm0
    
        An alternative method to break dependency is to clear the register first. e.g.
        xorps %xmm0, %xmm0
        cvtss2sd (%rdi), %xmm0
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8a33c1c3ea5a45185dbfd0046b79c14f9821b4a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 03:57:04 2009 +0000
    
        Revert this use of NSW; this one isn't actually safe. NSW addition
        is not reassociative.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c7d9c3baf89ad6e2a3162daf524f1d4d4c9e9090
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 03:25:51 2009 +0000
    
        Eliminate unnecessary uses of <cstdio>.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91666 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 532e3f52901265414842dbe21e106f9b32c5b182
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 03:10:26 2009 +0000
    
        Add utility routines for NSW multiply.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91664 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbe0132963a2021bcabd9fe92cd0b1c257614ab8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 02:58:50 2009 +0000
    
        Add utility routines for creating integer negation operators with NSW set.
        Integer negation only overflows with INT_MIN, but that's an important case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91662 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18256c45ce2016abbde91d2c738609dea2225561
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 02:14:37 2009 +0000
    
        Delete an unused variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91659 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eadde674855dc5d1e1bee25aa2c621d982b6d8d6
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Dec 18 02:12:53 2009 +0000
    
        Fix typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91657 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 620156de509db9f3d1a1130c33c680e92b722b3a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 02:09:29 2009 +0000
    
        Preserve NSW information in more places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91656 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3099793f471d4ca8845cb922da8a7052bd2d619
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 18 01:59:21 2009 +0000
    
        Re-apply 91623 now that I actually know what I was trying to do.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91655 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf4c33d434849ca17d8624a77c56b94085d84954
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 01:24:09 2009 +0000
    
        Add Loop contains utility methods for testing whether a loop
        contains another loop, or an instruction. The loop form is
        substantially more efficient on large loops than the typical
        code it replaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91654 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36dcb6d0219b2c549638bddb9b8c6d46d9877406
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 01:20:44 2009 +0000
    
        Minor code simplification.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91653 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb8c8533ea5b7e620324138fb647b1b968ff5cae
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 01:14:11 2009 +0000
    
        Whitespace cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91651 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa5054b14df7bdc8da0f9e20c771d7d3b599c8d9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 01:05:06 2009 +0000
    
        Tidy up this testcase and add test for tailcall optimization
        with unreachable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91650 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc841d457d517cc8af00f08be3c7bdf803c1eb98
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 18 01:03:29 2009 +0000
    
        Handle ARM inline asm "w" constraints with 64-bit ("d") registers.
        The change in SelectionDAGBuilder is needed to allow using bitcasts to convert
        between f64 (the default type for ARM "d" registers) and 64-bit Neon vector
        types.  Radar 7457110.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91649 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82dff7d94faf98ea406305388528b0d459987172
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 01:02:18 2009 +0000
    
        Remove "tail" keywords. These calls are not intended to be tail calls.
        This protects this test from depending on codegen not performing the
        tail call optimization by default.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91648 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 33922e093855021c4516c58b25d2a97321d7ec9f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 00:38:08 2009 +0000
    
        Don't pass const pointers by reference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91647 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d6a53b925c6331b45c191fd1e8dfa4af3ce2fc8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 00:28:43 2009 +0000
    
        Update a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91645 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b781ff887e6ce97d55151debf0a6c91f4545344
    Author: John McCall <rjmccall at apple.com>
    Date:   Fri Dec 18 00:27:18 2009 +0000
    
        Pass the error string directly to llvm_unreachable instead of the residual
        (0 && "error").  Rough consensus seems to be that g++ *should* be diagnosing
        this because the pointer makes it not an ICE in c++03.  Everyone agrees that
        the current standard is silly and null-pointer-ness should not be based on
        ICE-ness.  Excellent fight scene in Act II, denouement weak, two stars.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91644 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e28a5aeb1044f710d929732875a68072d157fdb4
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Dec 18 00:11:44 2009 +0000
    
        Add test case for the phi reuse patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91642 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8eba23b792b47890c4d973ad6df31fdedfd25088
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 00:06:20 2009 +0000
    
        Reapply LoopStrengthReduce and IVUsers cleanups, excluding the part
        of 91296 that caused trouble -- the Processed list needs to be
        preserved for the lifetime of the pass, as AddUsersIfInteresting
        is called from other passes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2366772de3db53fc74f19a184140687c2ac7d9bf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 18 00:03:58 2009 +0000
    
        Add an svn:ignore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91639 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10bd00cf60fd21ca257127ded9684b48aa22abc1
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Dec 18 00:01:26 2009 +0000
    
        Instruction fixes, added instructions, and AsmString changes in the
        X86 instruction tables.
    
        Also (while I was at it) cleaned up the X86 tables, removing tabs and
        80-line violations.
    
        This patch was reviewed by Chris Lattner, but please let me know if
        there are any problems.
    
        * X86*.td
        	Removed tabs and fixed 80-line violations
    
        * X86Instr64bit.td
        	(IRET, POPCNT, BT_, LSL, SWPGS, PUSH_S, POP_S, L_S, SMSW)
        		Added
        	(CALL, CMOV) Added qualifiers
        	(JMP) Added PC-relative jump instruction
        	(POPFQ/PUSHFQ) Added qualifiers; renamed PUSHFQ to indicate
        		that it is 64-bit only (ambiguous since it has no
        		REX prefix)
        	(MOV) Added rr form going the other way, which is encoded
        		differently
        	(MOV) Changed immediates to offsets, which is more correct;
        		also fixed MOV64o64a to have a 64-bit offset
        	(MOV) Fixed qualifiers
        	(MOV) Added debug-register and condition-register moves
        	(MOVZX) Added more forms
        	(ADC, SUB, SBB, AND, OR, XOR) Added reverse forms, which
        		(as with MOV) are encoded differently
        	(ROL) Made REX.W required
        	(BT) Uncommented mr form for disassembly only
        	(CVT__2__) Added several missing non-intrinsic forms
        	(LXADD, XCHG) Reordered operands to make more sense for
        		MRMSrcMem
        	(XCHG) Added register-to-register forms
        	(XADD, CMPXCHG, XCHG) Added non-locked forms
        * X86InstrSSE.td
        	(CVTSS2SI, COMISS, CVTTPS2DQ, CVTPS2PD, CVTPD2PS, MOVQ)
        		Added
        * X86InstrFPStack.td
        	(COM_FST0, COMP_FST0, COM_FI, COM_FIP, FFREE, FNCLEX, FNOP,
        	 FXAM, FLDL2T, FLDL2E, FLDPI, FLDLG2, FLDLN2, F2XM1, FYL2X,
        	 FPTAN, FPATAN, FXTRACT, FPREM1, FDECSTP, FINCSTP, FPREM,
        	 FYL2XP1, FSINCOS, FRNDINT, FSCALE, FCOMPP, FXSAVE,
        	 FXRSTOR)
        		Added
        	(FCOM, FCOMP) Added qualifiers
        	(FSTENV, FSAVE, FSTSW) Fixed opcode names
        	(FNSTSW) Added implicit register operand
        * X86InstrInfo.td
        	(opaque512mem) Added for FXSAVE/FXRSTOR
        	(offset8, offset16, offset32, offset64) Added for MOV
        	(NOOPW, IRET, POPCNT, IN, BTC, BTR, BTS, LSL, INVLPG, STR,
        	 LTR, PUSHFS, PUSHGS, POPFS, POPGS, LDS, LSS, LES, LFS,
        	 LGS, VERR, VERW, SGDT, SIDT, SLDT, LGDT, LIDT, LLDT,
        	 LODSD, OUTSB, OUTSW, OUTSD, HLT, RSM, FNINIT, CLC, STC,
        	 CLI, STI, CLD, STD, CMC, CLTS, XLAT, WRMSR, RDMSR, RDPMC,
        	 SMSW, LMSW, CPUID, INVD, WBINVD, INVEPT, INVVPID, VMCALL,
        	 VMCLEAR, VMLAUNCH, VMRESUME, VMPTRLD, VMPTRST, VMREAD,
        	 VMWRITE, VMXOFF, VMXON) Added
        	(NOOPL, POPF, POPFD, PUSHF, PUSHFD) Added qualifier
        	(JO, JNO, JB, JAE, JE, JNE, JBE, JA, JS, JNS, JP, JNP, JL,
        	 JGE, JLE, JG, JCXZ) Added 32-bit forms
        	(MOV) Changed some immediate forms to offset forms
        	(MOV) Added reversed reg-reg forms, which are encoded
        		differently
        	(MOV) Added debug-register and condition-register moves
        	(CMOV) Added qualifiers
        	(AND, OR, XOR, ADC, SUB, SBB) Added reverse forms, like MOV
        	(BT) Uncommented memory-register forms for disassembler
        	(MOVSX, MOVZX) Added forms
        	(XCHG, LXADD) Made operand order make sense for MRMSrcMem
        	(XCHG) Added register-register forms
        	(XADD, CMPXCHG) Added unlocked forms
        * X86InstrMMX.td
        	(MMX_MOVD, MMV_MOVQ) Added forms
        * X86InstrInfo.cpp: Changed PUSHFQ to PUSHFQ64 to reflect table
        	change
    
        * X86RegisterInfo.td: Added debug and condition register sets
        * x86-64-pic-3.ll: Fixed testcase to reflect call qualifier
        * peep-test-3.ll: Fixed testcase to reflect test qualifier
        * cmov.ll: Fixed testcase to reflect cmov qualifier
        * loop-blocks.ll: Fixed testcase to reflect call qualifier
        * x86-64-pic-11.ll: Fixed testcase to reflect call qualifier
        * 2009-11-04-SubregCoalescingBug.ll: Fixed testcase to reflect call
          qualifier
        * x86-64-pic-2.ll: Fixed testcase to reflect call qualifier
        * live-out-reg-info.ll: Fixed testcase to reflect test qualifier
        * tail-opts.ll: Fixed testcase to reflect call qualifiers
        * x86-64-pic-10.ll: Fixed testcase to reflect call qualifier
        * bss-pagealigned.ll: Fixed testcase to reflect call qualifier
        * x86-64-pic-1.ll: Fixed testcase to reflect call qualifier
        * widen_load-1.ll: Fixed testcase to reflect call qualifier
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91638 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20ec3073cae13b5c8a1661dabdfdb97500e6e26c
    Author: John McCall <rjmccall at apple.com>
    Date:   Thu Dec 17 23:49:16 2009 +0000
    
        Sundry dependent-name fixes flagged by clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91636 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4ca952ac3fa90cc373192bb96a3ddbdfd71f9f21
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Dec 17 23:45:18 2009 +0000
    
        Revert accidental commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91635 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af37dac175cdf3ea8698ecc653c0f4d978a352f0
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Dec 17 23:42:32 2009 +0000
    
        Turn off critical edge splitting for landing pads. The introduction of a
        non-landing pad basic block as the successor to a block that ends in an
        unconditional jump will cause block folding to remove the added block as a
        successor. Thus eventually removing it AND the landing pad entirely. Critical
        edge splitting is an optimization, so we can safely turn it off when dealing
        with landing pads.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91634 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b246171d0203ce94b4403d68fd8aa7eead2c998a
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Dec 17 22:44:34 2009 +0000
    
        Revert r91623 to unbreak the buildbots.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91632 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf3381db96461b1fd215fae3feaf559e7aeb3f96
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Thu Dec 17 22:42:29 2009 +0000
    
        Allow instcombine to combine "sext(a) >u const" to "a >u trunc(const)".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91631 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9bc1fde05ecc916e64ad0f39feb6c5d14fb5cfe4
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Dec 17 21:35:29 2009 +0000
    
        Don't codegen available_externally functions.  Fixes http://llvm.org/PR5735.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91626 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 459ca567824a4f39cf8ad280401de1ec15d5bc2b
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Thu Dec 17 21:27:47 2009 +0000
    
        Make the ptrtoint comparison simplification work if one side is a global.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91624 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe77cf2b9eda42bd7f805e0f1c853ffae9739c10
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 17 21:23:58 2009 +0000
    
        Remove an unused option.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91623 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f109efcd44af6875cd87dc939bbcf2f6777ea9a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 17 21:23:46 2009 +0000
    
        tabs -> spaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91622 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1ff2c77af0c898a7673f1ff16e8ecda1a443357
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Thu Dec 17 21:07:31 2009 +0000
    
        Slightly generalize transformation of memmove(a,a,n) so that it also applies
        to memcpy. (Such a memcpy is technically illegal, but in practice is safe
        and is generated by struct self-assignment in C code.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91621 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 15408333ed53187bf1c886db1756b38c9aac9c6d
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Dec 17 21:02:39 2009 +0000
    
        Make Path use StringRef instead of std::string where possible.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3da0d8c2d10e49cd7df0e2ba33247580dcf57645
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Dec 17 20:41:01 2009 +0000
    
        Temporarily revert 91337. It's causing testcase failures.
    
        $ svn merge -c -91337 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r91337 into '.':
        U    lib/CodeGen/AsmPrinter/DwarfException.cpp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91618 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1898b53fca4fe2862b5404a3a80a802620060020
    Author: Steve Naroff <snaroff at apple.com>
    Date:   Thu Dec 17 20:39:34 2009 +0000
    
        Fix Windows build breakage...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91617 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit afb4e48b3b1ab458736885d32d33ff550bb4a503
    Author: Ken Dyck <cfe-commits at cs.uiuc.edu>
    Date:   Thu Dec 17 20:09:43 2009 +0000
    
        Introduce EVT::getHalfSizedIntegerVT() for use in ExpandUnalignedStore() in
        LegalizeDAG.cpp. Unlike the code it replaces, which simply decrements the simple
        type by one, getHalfSizedIntegerVT() searches for the smallest simple integer
        type that is at least half the size of the type it is called on. This approach
        has the advantage that it will continue working if a new value type (such as
        i24) is added to MVT.
    
        Also, in preparation for new value types, remove the assertions that
        non-power-of-2 8-bit-multiple types are Extended when legalizing extload and
        truncstore operations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91614 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfb5c54ba03e4b004314dfce5eeec919471a5fb9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 17 20:00:21 2009 +0000
    
        finish cleaning up StructLayoutMap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91612 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3c8e732244684983ee5f9526d71ae636d5823c0
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Dec 17 19:55:06 2009 +0000
    
        This fixes a memory leak in OpaqueType found by Google's internal heapchecker.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91611 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 878ec1002b2f3232687cd932ef029b403b46feda
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Dec 17 19:07:19 2009 +0000
    
        Fix unused variable warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91609 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11f4e3ba76bd0cce3d9e4ad622c294b23b6c0db9
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Dec 17 18:34:24 2009 +0000
    
        Re-revert 91459.  It's breaking the x86_64 darwin bootstrap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91607 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0bbe84e28cb5f73f1a84c487086144b39b257358
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 17 18:03:12 2009 +0000
    
        Remove debugging code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91604 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a461bf3a0e2b3e4cffa9fd3130d5fbcfe1ada8e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Dec 17 17:18:11 2009 +0000
    
        Add more detail for getting started on Windows.
    
        Patch from jon.forums at gmail.com
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91603 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5f7bbf56080ea6adfaeac4cea32a6934bfeb81ee
    Author: Ken Dyck <cfe-commits at cs.uiuc.edu>
    Date:   Thu Dec 17 15:31:52 2009 +0000
    
        In LowerEXTRACT_VECTOR_ELT, force an i32 value type for PEXTWR instead of
        incrementing the simple value type of the 16-bit type, which would give the
        wrong type if an intermediate MVT (such as i24) were introduced.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91602 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38e52386fcf379c1a97c557bf81c345fd1233a04
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 17 09:39:49 2009 +0000
    
        Revert 91280-91283, 91286-91289, 91291, 91293, 91295-91296. It apparently introduced a non-deterministic behavior in the optimizer somewhere.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91598 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75192f45c3fe86b5afe94a303a0cbbe0dbed923c
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Dec 17 07:49:26 2009 +0000
    
        Regenerate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91595 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 098d20a0ba341dd7f3db8ffbf3ec908fb60b0be1
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Dec 17 07:49:16 2009 +0000
    
        Add a 'set_option' action for use in OptionPreprocessor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91594 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 410ace8ad7a9495dc9ab4a111bc220426ecf17fd
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Dec 17 07:48:49 2009 +0000
    
        Refactoring, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91593 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4bd5523d989fddfb885d2b4c33058530919b21af
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Dec 17 07:48:34 2009 +0000
    
        s/TokenizeCmdline/TokenizeCmdLine/
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91592 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c3da9b61ecc4532780f2503832dd9f1e18f4130
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Thu Dec 17 06:35:17 2009 +0000
    
        Update CMake build to include HexDisassembler.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91589 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a06cd190d48a95b42f2226fb6bcfbe263ffc2dc3
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Thu Dec 17 06:07:04 2009 +0000
    
        Aggressively flip compare constant expressions where appropriate; constant
        folding in particular expects null to be on the RHS.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91587 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3c06eac1ead1ee19d5da7f6244fc1eba2ca17240
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Dec 17 05:07:36 2009 +0000
    
        Fix a comment grammaro.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91584 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1d966ed5e666c21c6bef2178778f11b006e8211
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Dec 17 05:05:36 2009 +0000
    
        BIT_CONVERT nodes are used for vector types, too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91582 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 72af3d737fd75f62999dec911911103f41aba75a
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Thu Dec 17 01:49:59 2009 +0000
    
        Test harness for the LLVM disassembler.  When invoked
        with -disassemble, llvm-mc now accepts lines of the
        form
        0x00 0x00
        and passes the resulting bytes to the disassembler for
        the chosen (or default) target, printing the result.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91579 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d8baee6a05ca3383f39a5745c3c93aeb09aa749
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 17 00:40:05 2009 +0000
    
        Revert this dag combine change:
        Fold (zext (and x, cst)) -> (and (zext x), cst)
    
        DAG combiner likes to optimize the expression in the other direction, so this would end up causing an infinite loop.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91574 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ae68e420a7fd0e4bc77674860a3d2fb619a3b15
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Wed Dec 16 23:36:52 2009 +0000
    
        Renamed "tCMNZ" to "tCMNz" to be consistent with other similar namings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91571 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0be22d12589480b482126c2d8eb816a9590d54d8
    Author: John McCall <rjmccall at apple.com>
    Date:   Wed Dec 16 20:31:50 2009 +0000
    
        Silence a clang warning about the deprecated (but perfectly reasonable in
        context) increment-of-bool idiom.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91564 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3fc6a97363135cc1bf0f825a2424b1faf53eb26
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 16 20:10:05 2009 +0000
    
        Reapply r91392; it was only unmasking the bug, and since TOT is still broken, having it reverted does no good.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91560 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c9fe58fab7095cbd17c99970631f5c3c9044a94
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 16 20:09:53 2009 +0000
    
        Reapply r91459; it was only unmasking the bug, and since TOT is still broken, having it reverted does no good.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91559 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 724d7a167ac9c576db2e9217adfbee9ac8cd7b24
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Dec 16 19:44:06 2009 +0000
    
        Mark STREX* as earlyclobber for the success result register.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91555 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef786f2f846f89ff2675dd29e74c517d24db0d24
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Dec 16 19:43:02 2009 +0000
    
        Add @earlyclobber TableGen constraint
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91554 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e7896b88c383b5c64ddf574995fbafb4105884c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 16 19:36:42 2009 +0000
    
        Remove superfluous 'extern' variable that was causing a warning with clang.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91552 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87450c87ff52d57481df9e6263f9a1b68cc8005a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Dec 16 18:55:53 2009 +0000
    
        Reuse lowered phi nodes.
    
        Tail duplication produces lots of identical phi nodes in different basic
        blocks. Teach PHIElimination to reuse the join registers when lowering a phi
        node that is identical to an already lowered node. This saves virtual
        registers, and more importantly it avoids creating copies that the coalescer
        doesn't know how to eliminate.
    
        Teach LiveIntervalAnalysis about the phi joins with multiple uses.
    
        This patch significantly reduces code size produced by -pre-regalloc-taildup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91549 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 903dc204e38ab5efaef573815ac96a6c2028c495
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 16 11:38:03 2009 +0000
    
        Fix one more missing this-> to placate that picky clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91536 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 53dee55036da98a51831d75dfc9f704a91cd5f48
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 16 10:56:17 2009 +0000
    
        Revert "Reapply 91184 with fixes and an addition to the testcase to cover the
        problem", this broke llvm-gcc bootstrap for release builds on
        x86_64-apple-darwin10.
    
        This reverts commit db22309800b224a9f5f51baf76071d7a93ce59c9.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91534 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e95e6c2b9690abed80c74de593f537d230318a98
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 16 10:56:02 2009 +0000
    
        Revert "Initial work on disabling the scheduler. This is a work in progress, and
        this", this broke llvm-gcc bootstrap for release builds on
        x86_64-apple-darwin10.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91533 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 325984d0b8274e8b0b891df078f74f8330a24a2e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 09:32:05 2009 +0000
    
        reapply my strstr optimization.  I have reproduced the x86-64 bootstrap
        miscompile (i386.o miscompares) but it happens both with and without
        this patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91532 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a967657dbc081c7a72a734688e073bead60492f5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 09:17:12 2009 +0000
    
        fix more missing this->'s to placate clang++
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91531 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1306a7b58af44ff7d4d3d55fb55f05d90dc17b51
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 09:09:54 2009 +0000
    
        Fix a missing this-> that clang++ notices.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91530 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb29769774962df2dcb7a31f8bd158a0f6871ceb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 08:44:24 2009 +0000
    
        now that libsystem no longer uses SmallVector, we can move
        SmallVectorBase::grow_pod out of line, finally satisfying PR3758.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91529 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a01522ef1c2bb894f01f20481aeec71b41760c2a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 08:40:44 2009 +0000
    
        remove use of SmallVector from Path::makeUnique.  Path::makeUnique
        is not used by anything performance sensitive, so just use std::string.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91528 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75d7b5b99547063514152db411a60befc99f5302
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 08:35:54 2009 +0000
    
        eliminate an extraneous use of SmallVector in a case where
        a fixed size buffer is perfectly fine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91527 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ab8c0e1d42d4ee44349b7b1f51e5130b9c12530
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 08:34:40 2009 +0000
    
        factor out the grow() method for all pod implementations into one
        common function.  It is still an inline method, which will be fixed next.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91526 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd44d1b21d8f77f7d3015d258383337dc6646780
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Dec 16 08:10:57 2009 +0000
    
        Use different name for argument and field
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91524 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b9e74f2cb039a0d6581861171f4c7f42d0880f25
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 08:09:23 2009 +0000
    
        pull destroy_range and uninitialized_copy up to the
        SmallVectorTemplateBase class, which allows us to statically
        dispatch on isPodLike instead of dynamically.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91523 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8d02743d04afde76e9bfa7ba918911f40a9b1cb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 08:05:48 2009 +0000
    
        sink most of the meat in smallvector back from SmallVectorTemplateCommon
        down into SmallVectorImpl.  This requires sprinkling a ton of this->'s in,
        but gives us a place to factor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91522 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59ebf30b613ebf2617a1519fbd0dbc1f2b4aeb13
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Wed Dec 16 07:35:25 2009 +0000
    
        Make this test pass on Linux.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91521 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3521fb7b3e549c489faebd76084d86a1c30cba8a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 06:55:45 2009 +0000
    
        substantial refactoring of SmallVector, now most code is in SmallVectorTemplateCommon,
        and there is a new SmallVectorTemplateBase class in between it and SmallVectorImpl.
        SmallVectorTemplateBase can be specialized based on isPodLike.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91518 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6b8642f72c98125b56e34469baaed50819ef2a2
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Dec 16 02:52:09 2009 +0000
    
        MDNodes that refer to an instruction are local to a function; in that case, explicitly keep track of the function they are local to
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91497 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa640d3fc7563e889ef2ef93994b4c6ff38933f1
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Wed Dec 16 02:32:54 2009 +0000
    
        Add encoding bits for some Thumb instructions.  Plus explicitly set the top two
        bytes of Inst to 0x0000 for the benefit of the Thumb decoder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91496 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95150a6fa59d69f01bcd481e9bb2dc937c548107
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Dec 16 02:11:38 2009 +0000
    
        XFAIL on ppc-darwin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91495 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit edeb169061593870ac603ab898a58d4db579d162
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 16 00:53:11 2009 +0000
    
        Re-enable 91381 with fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91489 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ea083faf048bcaa812098ebdc3f55c0ddcc1687
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 16 00:46:02 2009 +0000
    
        revert my strstr optimization, I'm told it breaks x86-64 bootstrap.
    
        Will reapply with a fix when I get a chance.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91486 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d25f9bbd3a3106239e6b957c233b598d36576fe
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Dec 16 00:29:41 2009 +0000
    
        Do better with physical reg operands (typically, from inline asm)
        in local register allocator.  If a reg-reg copy has a phys reg
        input and a virt reg output, and this is the last use of the phys
        reg, assign the phys reg to the virt reg.  If a reg-reg copy has
        a phys reg output and we need to reload its spilled input, reload
        it directly into the phys reg rather than passing it through another reg.
    
        Following 76208, there is sometimes no dependency between the def of
        a phys reg and its use; this creates a window where that phys reg
        can be used for spilling (this is true in linear scan also).  This
        is bad and needs to be fixed in a better way, although 76208 works too
        well in practice to be reverted.  However, there should normally be
        no spilling within inline asm blocks.  The patch here goes a long way
        towards making this actually be true.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91485 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b6bf5b926d84470a0527691b20f60c31f6bf7978
    Author: John McCall <rjmccall at apple.com>
    Date:   Wed Dec 16 00:15:28 2009 +0000
    
        Every anonymous namespace is different.  Caught by clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91481 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86dd20e4ab83c6d96b5a526bc95fc0ee6170e4ec
    Author: John McCall <rjmccall at apple.com>
    Date:   Wed Dec 16 00:13:24 2009 +0000
    
        Explicit template instantiations must happen in the template's immediately
        enclosing namespace.  Caught by clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91480 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf525e4cf50311df9aa8ed97d0eaf81a49fa6c3d
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 16 00:08:36 2009 +0000
    
        Helpful comment added. Some code cleanup. No functional change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91479 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7cb7f462c39ffc68a5fe873078c2bb545c44f086
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 16 00:01:27 2009 +0000
    
        Initialize uninitialized variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91477 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d8b6523d77b512dc276857e3e58e1b226856445
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 16 00:00:18 2009 +0000
    
        Initialize uninitialized variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91475 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43a22289bdf5f2e1d884de469b9ab5b849c077b7
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Dec 15 22:42:46 2009 +0000
    
        Change indirect-globals to use a dedicated allocIndirectGV.  This lets us
        remove start/finishGVStub and the BufferState helper class from the
        MachineCodeEmitter interface.  It has the side effect of not marking the
        indirect global writable and then executable on ARM, but that shouldn't be
        necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91464 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d91158b66c9ed1337bdad3741a2a1bbb78525d69
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 15 22:42:19 2009 +0000
    
        Some command lines don't like numbers with leading zeros. Remove them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91463 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c03fdc0413fbc193bca3214907aa7827f3a0a11
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Dec 15 22:00:51 2009 +0000
    
        Reapply 91184 with fixes and an addition to the testcase to cover the problem
        found last time.  Instead of trying to modify the IR while iterating over it,
        I've changed it to keep a list of WeakVH references to dead instructions, and
        then delete those instructions later.  I also added some special case code to
        detect and handle the situation when both operands of a memcpy intrinsic are
        referencing the same alloca.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91459 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91978076a03bdda4a5c8167436659916e4a31621
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 15 22:00:37 2009 +0000
    
        lit: Improve error when gtest discovery fails.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91458 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f4f994c3cefa11b5df46f4aaaab86b15d4d4504
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 15 20:21:44 2009 +0000
    
        Revert 90628, which was incorrect.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91448 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51260423ed052b52a4246c7643d33cb7750f3bd0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 19:34:20 2009 +0000
    
        Fix GetConstantStringInfo to not look into MDString (it works on
        real data, not metadata) and fix DbgInfoPrinter to not abuse
        GetConstantStringInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91444 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 465e633b903c1418bcf46021b2b1e02974505c8c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Dec 15 19:28:13 2009 +0000
    
        whitespace
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91442 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8287d66aee7d877aaa80a3fb40cc90580e669493
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 15 19:16:48 2009 +0000
    
        Add support to emit debug info for C++ namespaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91440 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5b475def3406f1175162574ccbe300f9ab61bfd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 19:14:40 2009 +0000
    
        optimize strstr, PR5783
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91438 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0d810c281064e69c6a0cab4f9f43870628628566
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Dec 15 17:24:14 2009 +0000
    
        Added encoding bits for the Thumb ISA.  Initial checkin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91434 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7725021a32f9cb0f289b052a953ea77bfcda2bcf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 15 16:30:09 2009 +0000
    
        Delete an unused function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91432 91177308-0d34-0410-b5e6-96231b3b80d8

diff --git a/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake b/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
index 6a35354..97d07bd 100644
--- a/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
+++ b/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
@@ -57,8 +57,8 @@ set(MSVC_LIB_DEPS_LLVMTarget LLVMCore LLVMMC LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMTransformUtils LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget LLVMipa)
 set(MSVC_LIB_DEPS_LLVMX86AsmParser LLVMMC LLVMX86Info)
 set(MSVC_LIB_DEPS_LLVMX86AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget LLVMX86CodeGen LLVMX86Info)
-set(MSVC_LIB_DEPS_LLVMX86CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMX86Info)
-set(MSVC_LIB_DEPS_LLVMX86Disassembler LLVMX86Info)
+set(MSVC_LIB_DEPS_LLVMX86CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMX86Disassembler LLVMX86Info)
+set(MSVC_LIB_DEPS_LLVMX86Disassembler )
 set(MSVC_LIB_DEPS_LLVMX86Info LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMXCore LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMXCoreInfo)
 set(MSVC_LIB_DEPS_LLVMXCoreAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget LLVMXCoreInfo)
diff --git a/libclamav/c++/llvm/docs/CompilerDriver.html b/libclamav/c++/llvm/docs/CompilerDriver.html
index 0a3f877..5f5788c 100644
--- a/libclamav/c++/llvm/docs/CompilerDriver.html
+++ b/libclamav/c++/llvm/docs/CompilerDriver.html
@@ -334,8 +334,8 @@ once). Incompatible with <tt class="docutils literal"><span class="pre">zero_or_
 only for list options in conjunction with <tt class="docutils literal"><span class="pre">multi_val</span></tt>; for ordinary lists
 it is synonymous with <tt class="docutils literal"><span class="pre">required</span></tt>. Incompatible with <tt class="docutils literal"><span class="pre">required</span></tt> and
 <tt class="docutils literal"><span class="pre">zero_or_one</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">zero_or_one</span></tt> - the option can be specified zero or one times. Useful
-only for list options in conjunction with <tt class="docutils literal"><span class="pre">multi_val</span></tt>. Incompatible with
+<li><tt class="docutils literal"><span class="pre">optional</span></tt> - the option can be specified zero or one times. Useful only
+for list options in conjunction with <tt class="docutils literal"><span class="pre">multi_val</span></tt>. Incompatible with
 <tt class="docutils literal"><span class="pre">required</span></tt> and <tt class="docutils literal"><span class="pre">one_or_more</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">hidden</span></tt> - the description of this option will not appear in
 the <tt class="docutils literal"><span class="pre">--help</span></tt> output (but will appear in the <tt class="docutils literal"><span class="pre">--help-hidden</span></tt>
@@ -350,13 +350,14 @@ gcc's <tt class="docutils literal"><span class="pre">-Wa,</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">multi_val</span> <span class="pre">n</span></tt> - this option takes <em>n</em> arguments (can be useful in some
 special cases). Usage example: <tt class="docutils literal"><span class="pre">(parameter_list_option</span> <span class="pre">&quot;foo&quot;,</span> <span class="pre">(multi_val</span>
 <span class="pre">3))</span></tt>; the command-line syntax is '-foo a b c'. Only list options can have
-this attribute; you can, however, use the <tt class="docutils literal"><span class="pre">one_or_more</span></tt>, <tt class="docutils literal"><span class="pre">zero_or_one</span></tt>
+this attribute; you can, however, use the <tt class="docutils literal"><span class="pre">one_or_more</span></tt>, <tt class="docutils literal"><span class="pre">optional</span></tt>
 and <tt class="docutils literal"><span class="pre">required</span></tt> properties.</li>
 <li><tt class="docutils literal"><span class="pre">init</span></tt> - this option has a default value, either a string (if it is a
-parameter), or a boolean (if it is a switch; boolean constants are called
-<tt class="docutils literal"><span class="pre">true</span></tt> and <tt class="docutils literal"><span class="pre">false</span></tt>). List options can't have this attribute. Usage
-examples: <tt class="docutils literal"><span class="pre">(switch_option</span> <span class="pre">&quot;foo&quot;,</span> <span class="pre">(init</span> <span class="pre">true))</span></tt>; <tt class="docutils literal"><span class="pre">(prefix_option</span> <span class="pre">&quot;bar&quot;,</span>
-<span class="pre">(init</span> <span class="pre">&quot;baz&quot;))</span></tt>.</li>
+parameter), or a boolean (if it is a switch; as in C++, boolean constants
+are called <tt class="docutils literal"><span class="pre">true</span></tt> and <tt class="docutils literal"><span class="pre">false</span></tt>). List options can't have <tt class="docutils literal"><span class="pre">init</span></tt>
+attribute.
+Usage examples: <tt class="docutils literal"><span class="pre">(switch_option</span> <span class="pre">&quot;foo&quot;,</span> <span class="pre">(init</span> <span class="pre">true))</span></tt>; <tt class="docutils literal"><span class="pre">(prefix_option</span>
+<span class="pre">&quot;bar&quot;,</span> <span class="pre">(init</span> <span class="pre">&quot;baz&quot;))</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">extern</span></tt> - this option is defined in some other plugin, see <a class="reference internal" href="#extern">below</a>.</li>
 </ul>
 </blockquote>
@@ -604,10 +605,10 @@ def LanguageMap : LanguageMap&lt;
 $ llvmc hello.cpp
 llvmc: Unknown suffix: cpp
 </pre>
-<p>The language map entries should be added only for tools that are
-linked with the root node. Since tools are not allowed to have
-multiple output languages, for nodes &quot;inside&quot; the graph the input and
-output languages should match. This is enforced at compile-time.</p>
+<p>The language map entries are needed only for the tools that are linked from the
+root node. Since a tool can't have multiple output languages, for inner nodes of
+the graph the input and output languages should match. This is enforced at
+compile-time.</p>
 </div>
 <div class="section" id="option-preprocessor">
 <h1><a class="toc-backref" href="#id20">Option preprocessor</a></h1>
@@ -619,22 +620,30 @@ the driver with both of these options enabled.</p>
 occasions. Example (adapted from the built-in Base plugin):</p>
 <pre class="literal-block">
 def Preprocess : OptionPreprocessor&lt;
-(case (and (switch_on &quot;O3&quot;), (any_switch_on [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;])),
-           [(unset_option [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;]),
-            (warning &quot;Multiple -O options specified, defaulted to -O3.&quot;)],
+(case (not (any_switch_on [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;, &quot;O3&quot;])),
+           (set_option &quot;O2&quot;),
+      (and (switch_on &quot;O3&quot;), (any_switch_on [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;])),
+           (unset_option [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;]),
       (and (switch_on &quot;O2&quot;), (any_switch_on [&quot;O0&quot;, &quot;O1&quot;])),
            (unset_option [&quot;O0&quot;, &quot;O1&quot;]),
       (and (switch_on &quot;O1&quot;), (switch_on &quot;O0&quot;)),
            (unset_option &quot;O0&quot;))
 &gt;;
 </pre>
-<p>Here, <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> is used to unset all spurious optimization options
-(so that they are not forwarded to the compiler).</p>
+<p>Here, <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> is used to unset all spurious <tt class="docutils literal"><span class="pre">-O</span></tt> options so
+that they are not forwarded to the compiler. If no optimization options are
+specified, <tt class="docutils literal"><span class="pre">-O2</span></tt> is enabled.</p>
 <p><tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> is basically a single big <tt class="docutils literal"><span class="pre">case</span></tt> expression, which is
 evaluated only once right after the plugin is loaded. The only allowed actions
-in <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> are <tt class="docutils literal"><span class="pre">error</span></tt>, <tt class="docutils literal"><span class="pre">warning</span></tt> and a special action
-<tt class="docutils literal"><span class="pre">unset_option</span></tt>, which, as the name suggests, unsets a given option. For
-convenience, <tt class="docutils literal"><span class="pre">unset_option</span></tt> also works on lists.</p>
+in <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> are <tt class="docutils literal"><span class="pre">error</span></tt>, <tt class="docutils literal"><span class="pre">warning</span></tt>, and two special actions:
+<tt class="docutils literal"><span class="pre">unset_option</span></tt> and <tt class="docutils literal"><span class="pre">set_option</span></tt>. As their names suggest, they can be used to
+set or unset a given option. To set an option with <tt class="docutils literal"><span class="pre">set_option</span></tt>, use the
+two-argument form: <tt class="docutils literal"><span class="pre">(set_option</span> <span class="pre">&quot;parameter&quot;,</span> <span class="pre">VALUE)</span></tt>. Here, <tt class="docutils literal"><span class="pre">VALUE</span></tt> can be
+either a string, a string list, or a boolean constant.</p>
+<p>For convenience, <tt class="docutils literal"><span class="pre">set_option</span></tt> and <tt class="docutils literal"><span class="pre">unset_option</span></tt> also work on lists. That
+is, instead of <tt class="docutils literal"><span class="pre">[(unset_option</span> <span class="pre">&quot;A&quot;),</span> <span class="pre">(unset_option</span> <span class="pre">&quot;B&quot;)]</span></tt> you can use
+<tt class="docutils literal"><span class="pre">(unset_option</span> <span class="pre">[&quot;A&quot;,</span> <span class="pre">&quot;B&quot;])</span></tt>. Obviously, <tt class="docutils literal"><span class="pre">(set_option</span> <span class="pre">[&quot;A&quot;,</span> <span class="pre">&quot;B&quot;])</span></tt> is valid
+only if both <tt class="docutils literal"><span class="pre">A</span></tt> and <tt class="docutils literal"><span class="pre">B</span></tt> are switches.</p>
 </div>
 <div class="section" id="more-advanced-topics">
 <h1><a class="toc-backref" href="#id21">More advanced topics</a></h1>
diff --git a/libclamav/c++/llvm/docs/GettingStarted.html b/libclamav/c++/llvm/docs/GettingStarted.html
index 6dd32a8..c27101e 100644
--- a/libclamav/c++/llvm/docs/GettingStarted.html
+++ b/libclamav/c++/llvm/docs/GettingStarted.html
@@ -114,13 +114,15 @@ and performance.
   <li>Read the documentation.</li>
   <li>Read the documentation.</li>
   <li>Remember that you were warned twice about reading the documentation.</li>
-  <li>Install the llvm-gcc-4.2 front end if you intend to compile C or C++:
+  <li>Install the llvm-gcc-4.2 front end if you intend to compile C or C++
+      (see <a href="#installcf">Install the GCC Front End</a> for details):</li>
     <ol>
       <li><tt>cd <i>where-you-want-the-C-front-end-to-live</i></tt></li>
-      <li><tt>gunzip --stdout llvm-gcc-4.2-<i>version</i>-<i>platform</i>.tar.gz | tar -xvf -</tt>
-      </li>
-      <li>Note: If the binary extension is ".bz" use bunzip2 instead of gunzip.</li>
-      <li>Add llvm-gcc's "bin" directory to your PATH variable.</li>
+      <li><tt>gunzip --stdout llvm-gcc-4.2-<i>version</i>-<i>platform</i>.tar.gz | tar -xvf -</tt></li>
+	  <li><tt><i>install-binutils-binary-from-MinGW</i></tt> (Windows only)</li>
+	  <li>Note: If the binary extension is "<tt>.bz</tt>" use <tt>bunzip2</tt> instead of <tt>gunzip</tt>.</li>
+	  <li>Note: On Windows, use <a href="http://www.7-zip.org">7-Zip</a> or a similar archiving tool.</li>
+	  <li>Add <tt>llvm-gcc</tt>'s "<tt>bin</tt>" directory to your <tt>PATH</tt> environment variable.</li>
     </ol></li>
 
   <li>Get the LLVM Source Code
@@ -774,13 +776,14 @@ instructions</a> to successfully get and build the LLVM GCC front-end.</p>
 
 <div class="doc_text">
 
-<p>Before configuring and compiling the LLVM suite, you can optionally extract the 
-LLVM GCC front end from the binary distribution.  It is used for running the 
-llvm-test testsuite and for compiling C/C++ programs.  Note that you can optionally
-<a href="GCCFEBuildInstrs.html">build llvm-gcc yourself</a> after building the
+<p>Before configuring and compiling the LLVM suite (or if you want to use just the LLVM
+GCC front end) you can optionally extract the front end from the binary distribution.
+It is used for running the llvm-test testsuite and for compiling C/C++ programs.  Note that
+you can optionally <a href="GCCFEBuildInstrs.html">build llvm-gcc yourself</a> after building the
 main LLVM repository.</p>
 
-<p>To install the GCC front end, do the following:</p>
+<p>To install the GCC front end, do the following (on Windows, use an archiving tool
+like <a href="http://www.7-zip.org">7-zip</a> that understands gzipped tars):</p>
 
 <ol>
   <li><tt>cd <i>where-you-want-the-front-end-to-live</i></tt></li>
@@ -788,22 +791,51 @@ main LLVM repository.</p>
       -</tt></li>
 </ol>
 
-<p>Once the binary is uncompressed, you should add a symlink for llvm-gcc and 
-llvm-g++ to some directory in your path.  When you configure LLVM, it will 
-automatically detect llvm-gcc's presence (if it is in your path) enabling its
-use in llvm-test.  Note that you can always build or install llvm-gcc at any
-pointer after building the main LLVM repository: just reconfigure llvm and 
+<p>Once the binary is uncompressed, if you're using a *nix-based system, add a symlink for
+<tt>llvm-gcc</tt> and <tt>llvm-g++</tt> to some directory in your path.  If you're using a
+Windows-based system, add the <tt>bin</tt> subdirectory of your front end installation directory
+to your <tt>PATH</tt> environment variable.  For example, if you uncompressed the binary to
+<tt>c:\llvm-gcc</tt>, add <tt>c:\llvm-gcc\bin</tt> to your <tt>PATH</tt>.</p>
+
+<p>If you now want to build LLVM from source, when you configure LLVM, it will 
+automatically detect <tt>llvm-gcc</tt>'s presence (if it is in your path) enabling its
+use in llvm-test.  Note that you can always build or install <tt>llvm-gcc</tt> at any
+point after building the main LLVM repository: just reconfigure llvm and 
 llvm-test will pick it up.
 </p>
 
-<p>The binary versions of the GCC front end may not suit all of your needs.  For
-example, the binary distribution may include an old version of a system header
-file, not "fix" a header file that needs to be fixed for GCC, or it may be
-linked with libraries not available on your system.</p>
+<p>As a convenience for Windows users, the front end binaries for MinGW/x86 include
+versions of the required w32api and mingw-runtime binaries.  The last remaining step for
+Windows users is to simply uncompress the binary binutils package from
+<a href="http://mingw.org/">MinGW</a> into your front end installation directory.  While the
+front end installation steps are not quite the same as a typical manual MinGW installation,
+they should seem familiar to anyone who has previously installed MinGW on a Windows system.</p>
+
+<p>To install binutils on Windows:</p>
+
+<ol>
+  <li><tt><i>download GNU Binutils from <a href="http://sourceforge.net/projects/mingw/files/">MinGW Downloads</a></i></tt></li>
+  <li><tt>cd <i>where-you-uncompressed-the-front-end</i></tt></li>
+  <li><tt><i>uncompress archived binutils directories (not the tar file) into the current directory</i></tt></li>
+</ol>
 
-<p>In cases like these, you may want to try <a
-href="GCCFEBuildInstrs.html">building the GCC front end from source.</a> This is
-much easier now than it was in the past.</p>
+<p>The binary versions of the LLVM GCC front end may not suit all of your needs.  For
+example, the binary distribution may include an old version of a system header
+file, not "fix" a header file that needs to be fixed for GCC, or it may be linked with
+libraries not available on your system.  In cases like these, you may want to try
+<a href="GCCFEBuildInstrs.html">building the GCC front end from source</a>.  Thankfully,
+this is much easier now than it was in the past.</p>
+
+<p>We also do not currently support updating of the GCC front end by manually overlaying
+newer versions of the w32api and mingw-runtime binary packages that may become available
+from MinGW.  At this time, it's best to think of the MinGW LLVM GCC front end binary as
+a self-contained convenience package that requires Windows users to simply download and
+uncompress the GNU Binutils binary package from the MinGW project.</p>
+
+<p>Regardless of your platform, if you find that installing the LLVM GCC front end
+binaries is not as easy as described above, or you have suggestions for improving the
+process, please drop us a note on our
+<a href="http://llvm.org/docs/#maillist">mailing list</a>.</p>
 
 </div>
 
@@ -1171,7 +1203,6 @@ Cummings for pointing this out!
 
 </div>
 
-
 <!-- *********************************************************************** -->
 <div class="doc_section">
   <a name="layout"><b>Program Layout</b></a>
diff --git a/libclamav/c++/llvm/docs/LangRef.html b/libclamav/c++/llvm/docs/LangRef.html
index 45f6f38..526f119 100644
--- a/libclamav/c++/llvm/docs/LangRef.html
+++ b/libclamav/c++/llvm/docs/LangRef.html
@@ -7257,8 +7257,8 @@ LLVM</a>.</p>
 
 <h5>Syntax:</h5>
 <pre>
-  declare i32 @llvm.objectsize.i32( i8* &lt;ptr&gt;, i32 &lt;type&gt; )
-  declare i64 @llvm.objectsize.i64( i8* &lt;ptr&gt;, i32 &lt;type&gt; )
+  declare i32 @llvm.objectsize.i32( i8* &lt;object&gt;, i1 &lt;type&gt; )
+  declare i64 @llvm.objectsize.i64( i8* &lt;object&gt;, i1 &lt;type&gt; )
 </pre>
 
 <h5>Overview:</h5>
@@ -7267,34 +7267,15 @@ LLVM</a>.</p>
    operation like memcpy will either overflow a buffer that corresponds to
    an object, or b) to determine that a runtime check for overflow isn't
    necessary. An object in this context means an allocation of a
-   specific <a href="#typesystem">type</a>.</p>
+   specific class, structure, array, or other object.</p>
 
 <h5>Arguments:</h5>
 <p>The <tt>llvm.objectsize</tt> intrinsic takes two arguments.  The first
-   argument is a pointer to the object <tt>ptr</tt>. The second argument
-   is an integer <tt>type</tt> which ranges from 0 to 3. The first bit in
-   the type corresponds to a return value based on whole objects,
-   and the second bit whether or not we return the maximum or minimum
-   remaining bytes computed.</p>
-<table class="layout">
-  <tr class="layout">
-    <td class="left"><tt>00</tt></td>
-    <td class="left">whole object, maximum number of bytes</td>
-  </tr>
-  <tr class="layout">
-    <td class="left"><tt>01</tt></td>
-    <td class="left">partial object, maximum number of bytes</td>
-  </tr>
-  <tr class="layout">
-    <td class="left"><tt>10</tt></td>
-    <td class="left">whole object, minimum number of bytes</td>
-  </tr>
-  <tr class="layout">
-    <td class="left"><tt>11</tt></td>
-    <td class="left">partial object, minimum number of bytes</td>
-  </tr>
-</table>
-
+   argument is a pointer to or into the <tt>object</tt>. The second argument
+   is a boolean 0 or 1.  This argument determines whether you want the
+   maximum (0) or minimum (1) number of bytes remaining.  It must be a
+   literal 0 or 1; variables are not allowed.</p>
+   
 <h5>Semantics:</h5>
 <p>The <tt>llvm.objectsize</tt> intrinsic is lowered to either a constant
    representing the size of the object concerned or <tt>i32/i64 -1 or 0</tt>
diff --git a/libclamav/c++/llvm/include/llvm-c/Target.h b/libclamav/c++/llvm/include/llvm-c/Target.h
index 4338851..0057182 100644
--- a/libclamav/c++/llvm/include/llvm-c/Target.h
+++ b/libclamav/c++/llvm/include/llvm-c/Target.h
@@ -35,9 +35,11 @@ typedef struct LLVMStructLayout *LLVMStructLayoutRef;
 /* Declare all of the target-initialization functions that are available. */
 #define LLVM_TARGET(TargetName) void LLVMInitialize##TargetName##TargetInfo();
 #include "llvm/Config/Targets.def"
-
+#undef LLVM_TARGET  /* Explicit undef to make SWIG happier */
+  
 #define LLVM_TARGET(TargetName) void LLVMInitialize##TargetName##Target();
 #include "llvm/Config/Targets.def"
+#undef LLVM_TARGET  /* Explicit undef to make SWIG happier */
 
 /** LLVMInitializeAllTargetInfos - The main program should call this function if
     it wants access to all available targets that LLVM is configured to
@@ -45,6 +47,7 @@ typedef struct LLVMStructLayout *LLVMStructLayoutRef;
 static inline void LLVMInitializeAllTargetInfos() {
 #define LLVM_TARGET(TargetName) LLVMInitialize##TargetName##TargetInfo();
 #include "llvm/Config/Targets.def"
+#undef LLVM_TARGET  /* Explicit undef to make SWIG happier */
 }
 
 /** LLVMInitializeAllTargets - The main program should call this function if it
@@ -53,6 +56,7 @@ static inline void LLVMInitializeAllTargetInfos() {
 static inline void LLVMInitializeAllTargets() {
 #define LLVM_TARGET(TargetName) LLVMInitialize##TargetName##Target();
 #include "llvm/Config/Targets.def"
+#undef LLVM_TARGET  /* Explicit undef to make SWIG happier */
 }
   
 /** LLVMInitializeNativeTarget - The main program should call this function to
diff --git a/libclamav/c++/llvm/include/llvm/ADT/APFloat.h b/libclamav/c++/llvm/include/llvm/ADT/APFloat.h
index 30d998f..f81109a 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/APFloat.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/APFloat.h
@@ -191,6 +191,7 @@ namespace llvm {
     static APFloat getInf(const fltSemantics &Sem, bool Negative = false) {
       return APFloat(Sem, fcInfinity, Negative);
     }
+
     /// getNaN - Factory for QNaN values.
     ///
     /// \param Negative - True iff the NaN generated should be negative.
@@ -201,6 +202,26 @@ namespace llvm {
       return APFloat(Sem, fcNaN, Negative, type);
     }
 
+    /// getLargest - Returns the largest finite number in the given
+    /// semantics.
+    ///
+    /// \param Negative - True iff the number should be negative
+    static APFloat getLargest(const fltSemantics &Sem, bool Negative = false);
+
+    /// getSmallest - Returns the smallest (by magnitude) finite number
+    /// in the given semantics.  Might be denormalized, which implies a
+    /// relative loss of precision.
+    ///
+    /// \param Negative - True iff the number should be negative
+    static APFloat getSmallest(const fltSemantics &Sem, bool Negative = false);
+
+    /// getSmallestNormalized - Returns the smallest (by magnitude)
+    /// normalized finite number in the given semantics.
+    ///
+    /// \param Negative - True iff the number should be negative
+    static APFloat getSmallestNormalized(const fltSemantics &Sem,
+                                         bool Negative = false);
+
     /// Profile - Used to insert APFloat objects, or objects that contain
     ///  APFloat objects, into FoldingSets.
     void Profile(FoldingSetNodeID& NID) const;
@@ -277,6 +298,30 @@ namespace llvm {
     /* Return an arbitrary integer value usable for hashing. */
     uint32_t getHashValue() const;
 
+    /// Converts this value into a decimal string.
+    ///
+    /// \param FormatPrecision The maximum number of digits of
+    ///   precision to output.  If there are fewer digits available,
+    ///   zero padding will not be used unless the value is
+    ///   integral and small enough to be expressed in
+    ///   FormatPrecision digits.  0 means to use the natural
+    ///   precision of the number.
+    /// \param FormatMaxPadding The maximum number of zeros to
+    ///   consider inserting before falling back to scientific
+    ///   notation.  0 means to always use scientific notation.
+    ///
+    /// Number       Precision    MaxPadding      Result
+    /// ------       ---------    ----------      ------
+    /// 1.01E+4              5             2       10100
+    /// 1.01E+4              4             2       1.01E+4
+    /// 1.01E+4              5             1       1.01E+4
+    /// 1.01E-2              5             2       0.0101
+    /// 1.01E-2              4             2       0.0101
+    /// 1.01E-2              4             1       1.01E-2
+    void toString(SmallVectorImpl<char> &Str,
+                  unsigned FormatPrecision = 0,
+                  unsigned FormatMaxPadding = 3);
+
   private:
 
     /* Trivial queries.  */
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
index 8b62f2d..8b161ea 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
@@ -46,7 +46,7 @@ public:
   typedef ValueT mapped_type;
   typedef BucketT value_type;
 
-  DenseMap(const DenseMap& other) {
+  DenseMap(const DenseMap &other) {
     NumBuckets = 0;
     CopyFrom(other);
   }
@@ -55,6 +55,12 @@ public:
     init(NumInitBuckets);
   }
 
+  template<typename InputIt>
+  DenseMap(const InputIt &I, const InputIt &E) {
+    init(64);
+    insert(I, E);
+  }
+  
   ~DenseMap() {
     const KeyT EmptyKey = getEmptyKey(), TombstoneKey = getTombstoneKey();
     for (BucketT *P = Buckets, *E = Buckets+NumBuckets; P != E; ++P) {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h b/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h
index 3afcabd..d38ce4c 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h
@@ -72,7 +72,7 @@ class scc_iterator
     SCCNodeStack.push_back(N);
     MinVisitNumStack.push_back(visitNum);
     VisitStack.push_back(std::make_pair(N, GT::child_begin(N)));
-    //errs() << "TarjanSCC: Node " << N <<
+    //dbgs() << "TarjanSCC: Node " << N <<
     //      " : visitNum = " << visitNum << "\n";
   }
 
@@ -107,7 +107,7 @@ class scc_iterator
       if (!MinVisitNumStack.empty() && MinVisitNumStack.back() > minVisitNum)
         MinVisitNumStack.back() = minVisitNum;
 
-      //errs() << "TarjanSCC: Popped node " << visitingN <<
+      //dbgs() << "TarjanSCC: Popped node " << visitingN <<
       //      " : minVisitNum = " << minVisitNum << "; Node visit num = " <<
       //      nodeVisitNumbers[visitingN] << "\n";
 
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h b/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h
index b16649e..89acefd 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h
@@ -80,55 +80,56 @@ protected:
     return BeginX == static_cast<const void*>(&FirstEl);
   }
   
+  /// size_in_bytes - This returns size()*sizeof(T).
+  size_t size_in_bytes() const {
+    return size_t((char*)EndX - (char*)BeginX);
+  }
+  
+  /// capacity_in_bytes - This returns capacity()*sizeof(T).
+  size_t capacity_in_bytes() const {
+    return size_t((char*)CapacityX - (char*)BeginX);
+  }
+  
+  /// grow_pod - This is an implementation of the grow() method which only works
+  /// on POD-like datatypes and is out of line to reduce code duplication.
+  void grow_pod(size_t MinSizeInBytes, size_t TSize);
   
 public:
   bool empty() const { return BeginX == EndX; }
 };
   
-/// SmallVectorImpl - This class consists of common code factored out of the
-/// SmallVector class to reduce code duplication based on the SmallVector 'N'
-/// template parameter.
+
 template <typename T>
-class SmallVectorImpl : public SmallVectorBase {
-  void setEnd(T *P) { EndX = P; }
+class SmallVectorTemplateCommon : public SmallVectorBase {
+protected:
+  void setEnd(T *P) { this->EndX = P; }
 public:
-  // Default ctor - Initialize to empty.
-  explicit SmallVectorImpl(unsigned N) : SmallVectorBase(N*sizeof(T)) {
-  }
-
-  ~SmallVectorImpl() {
-    // Destroy the constructed elements in the vector.
-    destroy_range(begin(), end());
-
-    // If this wasn't grown from the inline copy, deallocate the old space.
-    if (!isSmall())
-      operator delete(begin());
-  }
-
+  SmallVectorTemplateCommon(size_t Size) : SmallVectorBase(Size) {}
+  
   typedef size_t size_type;
   typedef ptrdiff_t difference_type;
   typedef T value_type;
   typedef T *iterator;
   typedef const T *const_iterator;
-
+  
   typedef std::reverse_iterator<const_iterator> const_reverse_iterator;
   typedef std::reverse_iterator<iterator> reverse_iterator;
-
+  
   typedef T &reference;
   typedef const T &const_reference;
   typedef T *pointer;
   typedef const T *const_pointer;
-
+  
   // forward iterator creation methods.
-  iterator begin() { return (iterator)BeginX; }
-  const_iterator begin() const { return (const_iterator)BeginX; }
-  iterator end() { return (iterator)EndX; }
-  const_iterator end() const { return (const_iterator)EndX; }
-private:
-  iterator capacity_ptr() { return (iterator)CapacityX; }
-  const_iterator capacity_ptr() const { return (const_iterator)CapacityX; }
+  iterator begin() { return (iterator)this->BeginX; }
+  const_iterator begin() const { return (const_iterator)this->BeginX; }
+  iterator end() { return (iterator)this->EndX; }
+  const_iterator end() const { return (const_iterator)this->EndX; }
+protected:
+  iterator capacity_ptr() { return (iterator)this->CapacityX; }
+  const_iterator capacity_ptr() const { return (const_iterator)this->CapacityX;}
 public:
-
+  
   // reverse iterator creation methods.
   reverse_iterator rbegin()            { return reverse_iterator(end()); }
   const_reverse_iterator rbegin() const{ return const_reverse_iterator(end()); }
@@ -169,248 +170,359 @@ public:
   const_reference back() const {
     return end()[-1];
   }
+};
+  
+/// SmallVectorTemplateBase<isPodLike = false> - This is where we put method
+/// implementations that are designed to work with non-POD-like T's.
+template <typename T, bool isPodLike>
+class SmallVectorTemplateBase : public SmallVectorTemplateCommon<T> {
+public:
+  SmallVectorTemplateBase(size_t Size) : SmallVectorTemplateCommon<T>(Size) {}
 
-  void push_back(const_reference Elt) {
-    if (EndX < CapacityX) {
-  Retry:
-      new (end()) T(Elt);
-      setEnd(end()+1);
-      return;
+  static void destroy_range(T *S, T *E) {
+    while (S != E) {
+      --E;
+      E->~T();
     }
-    grow();
-    goto Retry;
   }
-
-  void pop_back() {
-    setEnd(end()-1);
-    end()->~T();
+  
+  /// uninitialized_copy - Copy the range [I, E) onto the uninitialized memory
+  /// starting with "Dest", constructing elements into it as needed.
+  template<typename It1, typename It2>
+  static void uninitialized_copy(It1 I, It1 E, It2 Dest) {
+    std::uninitialized_copy(I, E, Dest);
   }
+  
+  /// grow - double the size of the allocated memory, guaranteeing space for at
+  /// least one more element or MinSize if specified.
+  void grow(size_t MinSize = 0);
+};
 
-  T pop_back_val() {
-    T Result = back();
-    pop_back();
-    return Result;
+// Define this out-of-line to dissuade the C++ compiler from inlining it.
+template <typename T, bool isPodLike>
+void SmallVectorTemplateBase<T, isPodLike>::grow(size_t MinSize) {
+  size_t CurCapacity = this->capacity();
+  size_t CurSize = this->size();
+  size_t NewCapacity = 2*CurCapacity;
+  if (NewCapacity < MinSize)
+    NewCapacity = MinSize;
+  T *NewElts = static_cast<T*>(operator new(NewCapacity*sizeof(T)));
+  
+  // Copy the elements over.
+  this->uninitialized_copy(this->begin(), this->end(), NewElts);
+  
+  // Destroy the original elements.
+  destroy_range(this->begin(), this->end());
+  
+  // If this wasn't grown from the inline copy, deallocate the old space.
+  if (!this->isSmall())
+    operator delete(this->begin());
+  
+  this->setEnd(NewElts+CurSize);
+  this->BeginX = NewElts;
+  this->CapacityX = this->begin()+NewCapacity;
+}
+  
+  
+/// SmallVectorTemplateBase<isPodLike = true> - This is where we put method
+/// implementations that are designed to work with POD-like T's.
+template <typename T>
+class SmallVectorTemplateBase<T, true> : public SmallVectorTemplateCommon<T> {
+public:
+  SmallVectorTemplateBase(size_t Size) : SmallVectorTemplateCommon<T>(Size) {}
+  
+  // No need to do a destroy loop for POD's.
+  static void destroy_range(T *, T *) {}
+  
+  /// uninitialized_copy - Copy the range [I, E) onto the uninitialized memory
+  /// starting with "Dest", constructing elements into it as needed.
+  template<typename It1, typename It2>
+  static void uninitialized_copy(It1 I, It1 E, It2 Dest) {
+    // Use memcpy for PODs: std::uninitialized_copy optimizes to memmove, memcpy
+    // is better.
+    memcpy(&*Dest, &*I, (E-I)*sizeof(T));
   }
-
+  
+  /// grow - double the size of the allocated memory, guaranteeing space for at
+  /// least one more element or MinSize if specified.
+  void grow(size_t MinSize = 0) {
+    this->grow_pod(MinSize*sizeof(T), sizeof(T));
+  }
+};
+  
+  
+/// SmallVectorImpl - This class consists of common code factored out of the
+/// SmallVector class to reduce code duplication based on the SmallVector 'N'
+/// template parameter.
+template <typename T>
+class SmallVectorImpl : public SmallVectorTemplateBase<T, isPodLike<T>::value> {
+  typedef SmallVectorTemplateBase<T, isPodLike<T>::value > SuperClass;
+public:
+  typedef typename SuperClass::iterator iterator;
+  typedef typename SuperClass::size_type size_type;
+  
+  // Default ctor - Initialize to empty.
+  explicit SmallVectorImpl(unsigned N)
+    : SmallVectorTemplateBase<T, isPodLike<T>::value>(N*sizeof(T)) {
+  }
+  
+  ~SmallVectorImpl() {
+    // Destroy the constructed elements in the vector.
+    this->destroy_range(this->begin(), this->end());
+    
+    // If this wasn't grown from the inline copy, deallocate the old space.
+    if (!this->isSmall())
+      operator delete(this->begin());
+  }
+  
+  
   void clear() {
-    destroy_range(begin(), end());
-    EndX = BeginX;
+    this->destroy_range(this->begin(), this->end());
+    this->EndX = this->BeginX;
   }
 
   void resize(unsigned N) {
-    if (N < size()) {
-      destroy_range(begin()+N, end());
-      setEnd(begin()+N);
-    } else if (N > size()) {
-      if (capacity() < N)
-        grow(N);
-      construct_range(end(), begin()+N, T());
-      setEnd(begin()+N);
+    if (N < this->size()) {
+      this->destroy_range(this->begin()+N, this->end());
+      this->setEnd(this->begin()+N);
+    } else if (N > this->size()) {
+      if (this->capacity() < N)
+        this->grow(N);
+      this->construct_range(this->end(), this->begin()+N, T());
+      this->setEnd(this->begin()+N);
     }
   }
 
   void resize(unsigned N, const T &NV) {
-    if (N < size()) {
-      destroy_range(begin()+N, end());
-      setEnd(begin()+N);
-    } else if (N > size()) {
-      if (capacity() < N)
-        grow(N);
-      construct_range(end(), begin()+N, NV);
-      setEnd(begin()+N);
+    if (N < this->size()) {
+      this->destroy_range(this->begin()+N, this->end());
+      this->setEnd(this->begin()+N);
+    } else if (N > this->size()) {
+      if (this->capacity() < N)
+        this->grow(N);
+      construct_range(this->end(), this->begin()+N, NV);
+      this->setEnd(this->begin()+N);
     }
   }
 
   void reserve(unsigned N) {
-    if (capacity() < N)
-      grow(N);
+    if (this->capacity() < N)
+      this->grow(N);
   }
-
+  
+  void push_back(const T &Elt) {
+    if (this->EndX < this->CapacityX) {
+    Retry:
+      new (this->end()) T(Elt);
+      this->setEnd(this->end()+1);
+      return;
+    }
+    this->grow();
+    goto Retry;
+  }
+  
+  void pop_back() {
+    this->setEnd(this->end()-1);
+    this->end()->~T();
+  }
+  
+  T pop_back_val() {
+    T Result = this->back();
+    pop_back();
+    return Result;
+  }
+  
+  
   void swap(SmallVectorImpl &RHS);
-
+  
   /// append - Add the specified range to the end of the SmallVector.
   ///
   template<typename in_iter>
   void append(in_iter in_start, in_iter in_end) {
     size_type NumInputs = std::distance(in_start, in_end);
     // Grow allocated space if needed.
-    if (NumInputs > size_type(capacity_ptr()-end()))
-      grow(size()+NumInputs);
-
+    if (NumInputs > size_type(this->capacity_ptr()-this->end()))
+      this->grow(this->size()+NumInputs);
+    
     // Copy the new elements over.
-    std::uninitialized_copy(in_start, in_end, end());
-    setEnd(end() + NumInputs);
+    // TODO: NEED To compile time dispatch on whether in_iter is a random access
+    // iterator to use the fast uninitialized_copy.
+    std::uninitialized_copy(in_start, in_end, this->end());
+    this->setEnd(this->end() + NumInputs);
   }
-
+  
   /// append - Add the specified range to the end of the SmallVector.
   ///
   void append(size_type NumInputs, const T &Elt) {
     // Grow allocated space if needed.
-    if (NumInputs > size_type(capacity_ptr()-end()))
-      grow(size()+NumInputs);
-
+    if (NumInputs > size_type(this->capacity_ptr()-this->end()))
+      this->grow(this->size()+NumInputs);
+    
     // Copy the new elements over.
-    std::uninitialized_fill_n(end(), NumInputs, Elt);
-    setEnd(end() + NumInputs);
+    std::uninitialized_fill_n(this->end(), NumInputs, Elt);
+    this->setEnd(this->end() + NumInputs);
   }
-
+  
   void assign(unsigned NumElts, const T &Elt) {
     clear();
-    if (capacity() < NumElts)
-      grow(NumElts);
-    setEnd(begin()+NumElts);
-    construct_range(begin(), end(), Elt);
+    if (this->capacity() < NumElts)
+      this->grow(NumElts);
+    this->setEnd(this->begin()+NumElts);
+    construct_range(this->begin(), this->end(), Elt);
   }
-
+  
   iterator erase(iterator I) {
     iterator N = I;
     // Shift all elts down one.
-    std::copy(I+1, end(), I);
+    std::copy(I+1, this->end(), I);
     // Drop the last elt.
     pop_back();
     return(N);
   }
-
+  
   iterator erase(iterator S, iterator E) {
     iterator N = S;
     // Shift all elts down.
-    iterator I = std::copy(E, end(), S);
+    iterator I = std::copy(E, this->end(), S);
     // Drop the last elts.
-    destroy_range(I, end());
-    setEnd(I);
+    this->destroy_range(I, this->end());
+    this->setEnd(I);
     return(N);
   }
-
+  
   iterator insert(iterator I, const T &Elt) {
-    if (I == end()) {  // Important special case for empty vector.
+    if (I == this->end()) {  // Important special case for empty vector.
       push_back(Elt);
-      return end()-1;
+      return this->end()-1;
     }
-
-    if (EndX < CapacityX) {
-  Retry:
-      new (end()) T(back());
-      setEnd(end()+1);
+    
+    if (this->EndX < this->CapacityX) {
+    Retry:
+      new (this->end()) T(this->back());
+      this->setEnd(this->end()+1);
       // Push everything else over.
-      std::copy_backward(I, end()-1, end());
+      std::copy_backward(I, this->end()-1, this->end());
       *I = Elt;
       return I;
     }
-    size_t EltNo = I-begin();
-    grow();
-    I = begin()+EltNo;
+    size_t EltNo = I-this->begin();
+    this->grow();
+    I = this->begin()+EltNo;
     goto Retry;
   }
-
+  
   iterator insert(iterator I, size_type NumToInsert, const T &Elt) {
-    if (I == end()) {  // Important special case for empty vector.
+    if (I == this->end()) {  // Important special case for empty vector.
       append(NumToInsert, Elt);
-      return end()-1;
+      return this->end()-1;
     }
-
+    
     // Convert iterator to elt# to avoid invalidating iterator when we reserve()
-    size_t InsertElt = I-begin();
-
+    size_t InsertElt = I - this->begin();
+    
     // Ensure there is enough space.
-    reserve(static_cast<unsigned>(size() + NumToInsert));
-
+    reserve(static_cast<unsigned>(this->size() + NumToInsert));
+    
     // Uninvalidate the iterator.
-    I = begin()+InsertElt;
-
+    I = this->begin()+InsertElt;
+    
     // If there are more elements between the insertion point and the end of the
     // range than there are being inserted, we can use a simple approach to
     // insertion.  Since we already reserved space, we know that this won't
     // reallocate the vector.
-    if (size_t(end()-I) >= NumToInsert) {
-      T *OldEnd = end();
-      append(end()-NumToInsert, end());
-
+    if (size_t(this->end()-I) >= NumToInsert) {
+      T *OldEnd = this->end();
+      append(this->end()-NumToInsert, this->end());
+      
       // Copy the existing elements that get replaced.
       std::copy_backward(I, OldEnd-NumToInsert, OldEnd);
-
+      
       std::fill_n(I, NumToInsert, Elt);
       return I;
     }
-
+    
     // Otherwise, we're inserting more elements than exist already, and we're
     // not inserting at the end.
-
+    
     // Copy over the elements that we're about to overwrite.
-    T *OldEnd = end();
-    setEnd(end() + NumToInsert);
+    T *OldEnd = this->end();
+    this->setEnd(this->end() + NumToInsert);
     size_t NumOverwritten = OldEnd-I;
-    std::uninitialized_copy(I, OldEnd, end()-NumOverwritten);
-
+    this->uninitialized_copy(I, OldEnd, this->end()-NumOverwritten);
+    
     // Replace the overwritten part.
     std::fill_n(I, NumOverwritten, Elt);
-
+    
     // Insert the non-overwritten middle part.
     std::uninitialized_fill_n(OldEnd, NumToInsert-NumOverwritten, Elt);
     return I;
   }
-
+  
   template<typename ItTy>
   iterator insert(iterator I, ItTy From, ItTy To) {
-    if (I == end()) {  // Important special case for empty vector.
+    if (I == this->end()) {  // Important special case for empty vector.
       append(From, To);
-      return end()-1;
+      return this->end()-1;
     }
-
+    
     size_t NumToInsert = std::distance(From, To);
     // Convert iterator to elt# to avoid invalidating iterator when we reserve()
-    size_t InsertElt = I-begin();
-
+    size_t InsertElt = I - this->begin();
+    
     // Ensure there is enough space.
-    reserve(static_cast<unsigned>(size() + NumToInsert));
-
+    reserve(static_cast<unsigned>(this->size() + NumToInsert));
+    
     // Uninvalidate the iterator.
-    I = begin()+InsertElt;
-
+    I = this->begin()+InsertElt;
+    
     // If there are more elements between the insertion point and the end of the
     // range than there are being inserted, we can use a simple approach to
     // insertion.  Since we already reserved space, we know that this won't
     // reallocate the vector.
-    if (size_t(end()-I) >= NumToInsert) {
-      T *OldEnd = end();
-      append(end()-NumToInsert, end());
-
+    if (size_t(this->end()-I) >= NumToInsert) {
+      T *OldEnd = this->end();
+      append(this->end()-NumToInsert, this->end());
+      
       // Copy the existing elements that get replaced.
       std::copy_backward(I, OldEnd-NumToInsert, OldEnd);
-
+      
       std::copy(From, To, I);
       return I;
     }
-
+    
     // Otherwise, we're inserting more elements than exist already, and we're
     // not inserting at the end.
-
+    
     // Copy over the elements that we're about to overwrite.
-    T *OldEnd = end();
-    setEnd(end() + NumToInsert);
+    T *OldEnd = this->end();
+    this->setEnd(this->end() + NumToInsert);
     size_t NumOverwritten = OldEnd-I;
-    std::uninitialized_copy(I, OldEnd, end()-NumOverwritten);
-
+    this->uninitialized_copy(I, OldEnd, this->end()-NumOverwritten);
+    
     // Replace the overwritten part.
     std::copy(From, From+NumOverwritten, I);
-
+    
     // Insert the non-overwritten middle part.
-    std::uninitialized_copy(From+NumOverwritten, To, OldEnd);
+    this->uninitialized_copy(From+NumOverwritten, To, OldEnd);
     return I;
   }
-
-  const SmallVectorImpl &operator=(const SmallVectorImpl &RHS);
-
+  
+  const SmallVectorImpl
+  &operator=(const SmallVectorImpl &RHS);
+  
   bool operator==(const SmallVectorImpl &RHS) const {
-    if (size() != RHS.size()) return false;
-    return std::equal(begin(), end(), RHS.begin());
+    if (this->size() != RHS.size()) return false;
+    return std::equal(this->begin(), this->end(), RHS.begin());
   }
-  bool operator!=(const SmallVectorImpl &RHS) const { return !(*this == RHS); }
-
+  bool operator!=(const SmallVectorImpl &RHS) const {
+    return !(*this == RHS);
+  }
+  
   bool operator<(const SmallVectorImpl &RHS) const {
-    return std::lexicographical_compare(begin(), end(),
+    return std::lexicographical_compare(this->begin(), this->end(),
                                         RHS.begin(), RHS.end());
   }
-
+  
   /// set_size - Set the array size to \arg N, which the current array must have
   /// enough capacity for.
   ///
@@ -421,145 +533,105 @@ public:
   /// update the size later. This avoids the cost of value initializing elements
   /// which will only be overwritten.
   void set_size(unsigned N) {
-    assert(N <= capacity());
-    setEnd(begin() + N);
+    assert(N <= this->capacity());
+    this->setEnd(this->begin() + N);
   }
-
+  
 private:
-  /// grow - double the size of the allocated memory, guaranteeing space for at
-  /// least one more element or MinSize if specified.
-  void grow(size_type MinSize = 0);
-
-  void construct_range(T *S, T *E, const T &Elt) {
+  static void construct_range(T *S, T *E, const T &Elt) {
     for (; S != E; ++S)
       new (S) T(Elt);
   }
-
-  void destroy_range(T *S, T *E) {
-    // No need to do a destroy loop for POD's.
-    if (isPodLike<T>::value) return;
-    
-    while (S != E) {
-      --E;
-      E->~T();
-    }
-  }
 };
-
-// Define this out-of-line to dissuade the C++ compiler from inlining it.
-template <typename T>
-void SmallVectorImpl<T>::grow(size_t MinSize) {
-  size_t CurCapacity = capacity();
-  size_t CurSize = size();
-  size_t NewCapacity = 2*CurCapacity;
-  if (NewCapacity < MinSize)
-    NewCapacity = MinSize;
-  T *NewElts = static_cast<T*>(operator new(NewCapacity*sizeof(T)));
-
-  // Copy the elements over.
-  if (isPodLike<T>::value)
-    // Use memcpy for PODs: std::uninitialized_copy optimizes to memmove.
-    memcpy(NewElts, begin(), CurSize * sizeof(T));
-  else
-    std::uninitialized_copy(begin(), end(), NewElts);
-
-  // Destroy the original elements.
-  destroy_range(begin(), end());
-
-  // If this wasn't grown from the inline copy, deallocate the old space.
-  if (!isSmall())
-    operator delete(begin());
-
-  setEnd(NewElts+CurSize);
-  BeginX = NewElts;
-  CapacityX = begin()+NewCapacity;
-}
+  
 
 template <typename T>
 void SmallVectorImpl<T>::swap(SmallVectorImpl<T> &RHS) {
   if (this == &RHS) return;
 
   // We can only avoid copying elements if neither vector is small.
-  if (!isSmall() && !RHS.isSmall()) {
-    std::swap(BeginX, RHS.BeginX);
-    std::swap(EndX, RHS.EndX);
-    std::swap(CapacityX, RHS.CapacityX);
+  if (!this->isSmall() && !RHS.isSmall()) {
+    std::swap(this->BeginX, RHS.BeginX);
+    std::swap(this->EndX, RHS.EndX);
+    std::swap(this->CapacityX, RHS.CapacityX);
     return;
   }
-  if (RHS.size() > capacity())
-    grow(RHS.size());
-  if (size() > RHS.capacity())
-    RHS.grow(size());
+  if (RHS.size() > this->capacity())
+    this->grow(RHS.size());
+  if (this->size() > RHS.capacity())
+    RHS.grow(this->size());
 
   // Swap the shared elements.
-  size_t NumShared = size();
+  size_t NumShared = this->size();
   if (NumShared > RHS.size()) NumShared = RHS.size();
   for (unsigned i = 0; i != static_cast<unsigned>(NumShared); ++i)
     std::swap((*this)[i], RHS[i]);
 
   // Copy over the extra elts.
-  if (size() > RHS.size()) {
-    size_t EltDiff = size() - RHS.size();
-    std::uninitialized_copy(begin()+NumShared, end(), RHS.end());
+  if (this->size() > RHS.size()) {
+    size_t EltDiff = this->size() - RHS.size();
+    this->uninitialized_copy(this->begin()+NumShared, this->end(), RHS.end());
     RHS.setEnd(RHS.end()+EltDiff);
-    destroy_range(begin()+NumShared, end());
-    setEnd(begin()+NumShared);
-  } else if (RHS.size() > size()) {
-    size_t EltDiff = RHS.size() - size();
-    std::uninitialized_copy(RHS.begin()+NumShared, RHS.end(), end());
-    setEnd(end() + EltDiff);
-    destroy_range(RHS.begin()+NumShared, RHS.end());
+    this->destroy_range(this->begin()+NumShared, this->end());
+    this->setEnd(this->begin()+NumShared);
+  } else if (RHS.size() > this->size()) {
+    size_t EltDiff = RHS.size() - this->size();
+    this->uninitialized_copy(RHS.begin()+NumShared, RHS.end(), this->end());
+    this->setEnd(this->end() + EltDiff);
+    this->destroy_range(RHS.begin()+NumShared, RHS.end());
     RHS.setEnd(RHS.begin()+NumShared);
   }
 }
 
 template <typename T>
-const SmallVectorImpl<T> &
-SmallVectorImpl<T>::operator=(const SmallVectorImpl<T> &RHS) {
+const SmallVectorImpl<T> &SmallVectorImpl<T>::
+  operator=(const SmallVectorImpl<T> &RHS) {
   // Avoid self-assignment.
   if (this == &RHS) return *this;
 
   // If we already have sufficient space, assign the common elements, then
   // destroy any excess.
   size_t RHSSize = RHS.size();
-  size_t CurSize = size();
+  size_t CurSize = this->size();
   if (CurSize >= RHSSize) {
     // Assign common elements.
     iterator NewEnd;
     if (RHSSize)
-      NewEnd = std::copy(RHS.begin(), RHS.begin()+RHSSize, begin());
+      NewEnd = std::copy(RHS.begin(), RHS.begin()+RHSSize, this->begin());
     else
-      NewEnd = begin();
+      NewEnd = this->begin();
 
     // Destroy excess elements.
-    destroy_range(NewEnd, end());
+    this->destroy_range(NewEnd, this->end());
 
     // Trim.
-    setEnd(NewEnd);
+    this->setEnd(NewEnd);
     return *this;
   }
 
   // If we have to grow to have enough elements, destroy the current elements.
   // This allows us to avoid copying them during the grow.
-  if (capacity() < RHSSize) {
+  if (this->capacity() < RHSSize) {
     // Destroy current elements.
-    destroy_range(begin(), end());
-    setEnd(begin());
+    this->destroy_range(this->begin(), this->end());
+    this->setEnd(this->begin());
     CurSize = 0;
-    grow(RHSSize);
+    this->grow(RHSSize);
   } else if (CurSize) {
     // Otherwise, use assignment for the already-constructed elements.
-    std::copy(RHS.begin(), RHS.begin()+CurSize, begin());
+    std::copy(RHS.begin(), RHS.begin()+CurSize, this->begin());
   }
 
   // Copy construct the new elements in place.
-  std::uninitialized_copy(RHS.begin()+CurSize, RHS.end(), begin()+CurSize);
+  this->uninitialized_copy(RHS.begin()+CurSize, RHS.end(),
+                           this->begin()+CurSize);
 
   // Set end.
-  setEnd(begin()+RHSSize);
+  this->setEnd(this->begin()+RHSSize);
   return *this;
 }
 
+
 /// SmallVector - This is a 'vector' (really, a variable-sized array), optimized
 /// for the case when the array is small.  It contains some number of elements
 /// in-place, which allows it to avoid heap allocation when the actual number of
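The range-insert hunk above hinges on one non-obvious step: converting the insertion iterator to an element index before calling reserve(), since a reallocation would invalidate the iterator. A minimal sketch of that trick, using plain std::vector rather than SmallVectorImpl:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch only (hypothetical helper, not LLVM code): preserve an
// insertion point across a possible reallocation by round-tripping it
// through an element index.
void insert_n(std::vector<int> &V, std::vector<int>::iterator I,
              std::size_t N, int Elt) {
  std::size_t InsertElt = I - V.begin();  // index survives reallocation
  V.reserve(V.size() + N);                // may reallocate, invalidating I
  I = V.begin() + InsertElt;              // "uninvalidate" the iterator
  V.insert(I, N, Elt);
}
```

The same index round-trip appears verbatim in the patched insert() above (`InsertElt = I - this->begin()` before reserve, `I = this->begin()+InsertElt` after).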
diff --git a/libclamav/c++/llvm/include/llvm/ADT/StringRef.h b/libclamav/c++/llvm/include/llvm/ADT/StringRef.h
index f299f5f..a744266 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/StringRef.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/StringRef.h
@@ -159,12 +159,14 @@ namespace llvm {
 
     /// startswith - Check if this string starts with the given \arg Prefix.
     bool startswith(StringRef Prefix) const {
-      return substr(0, Prefix.Length).equals(Prefix);
+      return Length >= Prefix.Length &&
+             memcmp(Data, Prefix.Data, Prefix.Length) == 0;
     }
 
     /// endswith - Check if this string ends with the given \arg Suffix.
     bool endswith(StringRef Suffix) const {
-      return slice(size() - Suffix.Length, size()).equals(Suffix);
+      return Length >= Suffix.Length &&
+             memcmp(end() - Suffix.Length, Suffix.Data, Suffix.Length) == 0;
     }
 
     /// @}
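The StringRef change above guards the comparison with an explicit length check before a single memcmp, instead of building a temporary via substr()/slice(). A standalone sketch of the same shape (using std::string, not llvm::StringRef):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Sketch of the length-guarded prefix/suffix tests: check the length first
// so memcmp never reads past either buffer, and avoid any temporary object.
bool starts_with(const std::string &S, const std::string &Prefix) {
  return S.size() >= Prefix.size() &&
         std::memcmp(S.data(), Prefix.data(), Prefix.size()) == 0;
}

bool ends_with(const std::string &S, const std::string &Suffix) {
  return S.size() >= Suffix.size() &&
         std::memcmp(S.data() + S.size() - Suffix.size(),
                     Suffix.data(), Suffix.size()) == 0;
}
```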
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
index 232804e..a6ccc29 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
@@ -99,6 +99,7 @@ namespace llvm {
     bool isGlobalVariable() const;
     bool isScope() const;
     bool isCompileUnit() const;
+    bool isNameSpace() const;
     bool isLexicalBlock() const;
     bool isSubrange() const;
     bool isEnumerator() const;
@@ -218,7 +219,7 @@ namespace llvm {
     virtual ~DIType() {}
 
     DIDescriptor getContext() const     { return getDescriptorField(1); }
-    StringRef getName() const         { return getStringField(2);     }
+    StringRef getName() const           { return getStringField(2);     }
     DICompileUnit getCompileUnit() const{ return getFieldAs<DICompileUnit>(3); }
     unsigned getLineNumber() const      { return getUnsignedField(4); }
     uint64_t getSizeInBits() const      { return getUInt64Field(5); }
@@ -470,6 +471,22 @@ namespace llvm {
     StringRef getFilename() const  { return getContext().getFilename(); }
   };
 
+  /// DINameSpace - A wrapper for a C++ style name space.
+  class DINameSpace : public DIScope { 
+  public:
+    explicit DINameSpace(MDNode *N = 0) : DIScope(N) {
+      if (DbgNode && !isNameSpace())
+        DbgNode = 0;
+    }
+
+    DIScope getContext() const     { return getFieldAs<DIScope>(1);      }
+    StringRef getName() const      { return getStringField(2);           }
+    StringRef getDirectory() const { return getContext().getDirectory(); }
+    StringRef getFilename() const  { return getContext().getFilename();  }
+    DICompileUnit getCompileUnit() const { return getFieldAs<DICompileUnit>(3); }
+    unsigned getLineNumber() const { return getUnsignedField(4);         }
+  };
+
   /// DILocation - This object holds location information. This object
   /// is not associated with any DWARF tag.
   class DILocation : public DIDescriptor {
@@ -624,6 +641,11 @@ namespace llvm {
     /// with the specified parent context.
     DILexicalBlock CreateLexicalBlock(DIDescriptor Context);
 
+    /// CreateNameSpace - This creates new descriptor for a namespace
+    /// with the specified parent context.
+    DINameSpace CreateNameSpace(DIDescriptor Context, StringRef Name,
+                                DICompileUnit CU, unsigned LineNo);
+
     /// CreateLocation - Creates a debug info location.
     DILocation CreateLocation(unsigned LineNo, unsigned ColumnNo,
                               DIScope S, DILocation OrigLoc);
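The new DINameSpace constructor above follows the DebugInfo wrapper idiom: validate the wrapped node's kind and null it out on mismatch, so callers can construct first and test validity afterwards. A tiny standalone sketch of that pattern (hypothetical `Node` and tag value, not llvm::MDNode):

```cpp
#include <cassert>

// Hypothetical descriptor node carrying a kind tag.
struct Node { int Tag; };

struct NameSpaceDesc {
  enum { NameSpaceTag = 57 };  // arbitrary tag value for this sketch
  Node *N;
  explicit NameSpaceDesc(Node *n = nullptr) : N(n) {
    if (N && N->Tag != NameSpaceTag)
      N = nullptr;  // wrong kind: degrade to an empty descriptor
  }
  bool isValid() const { return N != nullptr; }
};
```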
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
index 2294e53..060286f 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
@@ -93,12 +93,28 @@ public:
   BlockT *getHeader() const { return Blocks.front(); }
   LoopT *getParentLoop() const { return ParentLoop; }
 
-  /// contains - Return true if the specified basic block is in this loop
+  /// contains - Return true if the specified loop is contained within
+  /// this loop.
+  ///
+  bool contains(const LoopT *L) const {
+    if (L == this) return true;
+    if (L == 0)    return false;
+    return contains(L->getParentLoop());
+  }
+    
+  /// contains - Return true if the specified basic block is in this loop.
   ///
   bool contains(const BlockT *BB) const {
     return std::find(block_begin(), block_end(), BB) != block_end();
   }
 
+  /// contains - Return true if the specified instruction is in this loop.
+  ///
+  template<class InstT>
+  bool contains(const InstT *Inst) const {
+    return contains(Inst->getParent());
+  }
+
   /// iterator/begin/end - Return the loops contained entirely within this loop.
   ///
   const std::vector<LoopT *> &getSubLoops() const { return SubLoops; }
@@ -463,10 +479,6 @@ public:
       (*I)->print(OS, Depth+2);
   }
   
-  void dump() const {
-    print(errs());
-  }
-  
 protected:
   friend class LoopInfoBase<BlockT, LoopT>;
   explicit LoopBase(BlockT *BB) : ParentLoop(0) {
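The new `contains(const LoopT *L)` overload added above walks the parent chain of the candidate loop; containment holds exactly when that walk reaches `this`. A minimal self-contained sketch of the recursion (hypothetical `Loop` type, not llvm::LoopBase):

```cpp
#include <cassert>

// Sketch of the recursive loop-containment test: a loop contains another
// loop iff following the candidate's parent pointers reaches this loop.
struct Loop {
  const Loop *Parent;
  explicit Loop(const Loop *P = nullptr) : Parent(P) {}
  bool contains(const Loop *L) const {
    if (L == this) return true;      // reached ourselves: contained
    if (L == nullptr) return false;  // ran off the top: not contained
    return contains(L->Parent);
  }
};
```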
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h b/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
index c04631b..f83cc4f 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
@@ -132,21 +132,17 @@ namespace llvm {
     }
   };
 
-  /// NonLocalDepEntry - This is an entry in the NonLocalDepInfo cache, and an
-  /// entry in the results set for a non-local query.  For each BasicBlock (the
-  /// BB entry) it keeps a MemDepResult and the (potentially phi translated)
-  /// address that was live in the block.
-  class NonLocalDepEntry {
+  /// NonLocalDepResult - This is a result from a NonLocal dependence query.
+  /// For each BasicBlock (the BB entry) it keeps a MemDepResult and the
+  /// (potentially phi translated) address that was live in the block.
+  class NonLocalDepResult {
     BasicBlock *BB;
     MemDepResult Result;
-    WeakVH Address;
+    Value *Address;
   public:
-    NonLocalDepEntry(BasicBlock *bb, MemDepResult result, Value *address)
+    NonLocalDepResult(BasicBlock *bb, MemDepResult result, Value *address)
       : BB(bb), Result(result), Address(address) {}
-
-    // This is used for searches.
-    NonLocalDepEntry(BasicBlock *bb) : BB(bb) {}
-
+    
     // BB is the sort key, it can't be changed.
     BasicBlock *getBB() const { return BB; }
     
@@ -154,7 +150,7 @@ namespace llvm {
       Result = R;
       Address = Addr;
     }
-
+    
     const MemDepResult &getResult() const { return Result; }
     
     /// getAddress - Return the address of this pointer in this block.  This can
@@ -165,7 +161,27 @@ namespace llvm {
     ///
     /// The address is always null for a non-local 'call' dependence.
     Value *getAddress() const { return Address; }
+  };
+  
+  /// NonLocalDepEntry - This is an entry in the NonLocalDepInfo cache.  For
+  /// each BasicBlock (the BB entry) it keeps a MemDepResult.
+  class NonLocalDepEntry {
+    BasicBlock *BB;
+    MemDepResult Result;
+  public:
+    NonLocalDepEntry(BasicBlock *bb, MemDepResult result)
+      : BB(bb), Result(result) {}
+
+    // This is used for searches.
+    NonLocalDepEntry(BasicBlock *bb) : BB(bb) {}
 
+    // BB is the sort key, it can't be changed.
+    BasicBlock *getBB() const { return BB; }
+    
+    void setResult(const MemDepResult &R) { Result = R; }
+
+    const MemDepResult &getResult() const { return Result; }
+    
     bool operator<(const NonLocalDepEntry &RHS) const {
       return BB < RHS.BB;
     }
@@ -283,7 +299,7 @@ namespace llvm {
     /// This method assumes the pointer has a "NonLocal" dependency within BB.
     void getNonLocalPointerDependency(Value *Pointer, bool isLoad,
                                       BasicBlock *BB,
-                                     SmallVectorImpl<NonLocalDepEntry> &Result);
+                                    SmallVectorImpl<NonLocalDepResult> &Result);
     
     /// removeInstruction - Remove an instruction from the dependence analysis,
     /// updating the dependence of instructions that previously depended on it.
@@ -307,7 +323,7 @@ namespace llvm {
                                            BasicBlock *BB);
     bool getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t Size,
                                      bool isLoad, BasicBlock *BB,
-                                     SmallVectorImpl<NonLocalDepEntry> &Result,
+                                     SmallVectorImpl<NonLocalDepResult> &Result,
                                      DenseMap<BasicBlock*, Value*> &Visited,
                                      bool SkipFirstBlock = false);
     MemDepResult GetNonLocalInfoForBlock(Value *Pointer, uint64_t PointeeSize,
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h
index 80ba6d8..300a027 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h
@@ -38,7 +38,7 @@ namespace llvm {
   class MachineBasicBlock;
   class MachineFunction;
 
-  // Helper for dumping edges to errs().
+  // Helper for dumping edges to dbgs().
   raw_ostream& operator<<(raw_ostream &O, std::pair<const BasicBlock *, const BasicBlock *> E);
   raw_ostream& operator<<(raw_ostream &O, std::pair<const MachineBasicBlock *, const MachineBasicBlock *> E);
 
@@ -123,7 +123,7 @@ namespace llvm {
 
     void setEdgeWeight(Edge e, double w) {
       DEBUG_WITH_TYPE("profile-info",
-            errs() << "Creating Edge " << e
+            dbgs() << "Creating Edge " << e
                    << " (weight: " << format("%.20g",w) << ")\n");
       EdgeInformation[getFunction(e)][e] = w;
     }
@@ -170,18 +170,18 @@ namespace llvm {
     void repair(const FType *F);
 
     void dump(FType *F = 0, bool real = true) {
-      errs() << "**** This is ProfileInfo " << this << " speaking:\n";
+      dbgs() << "**** This is ProfileInfo " << this << " speaking:\n";
       if (!real) {
         typename std::set<const FType*> Functions;
 
-        errs() << "Functions: \n";
+        dbgs() << "Functions: \n";
         if (F) {
-          errs() << F << "@" << format("%p", F) << ": " << format("%.20g",getExecutionCount(F)) << "\n";
+          dbgs() << F << "@" << format("%p", F) << ": " << format("%.20g",getExecutionCount(F)) << "\n";
           Functions.insert(F);
         } else {
           for (typename std::map<const FType*, double>::iterator fi = FunctionInformation.begin(),
                fe = FunctionInformation.end(); fi != fe; ++fi) {
-            errs() << fi->first << "@" << format("%p",fi->first) << ": " << format("%.20g",fi->second) << "\n";
+            dbgs() << fi->first << "@" << format("%p",fi->first) << ": " << format("%.20g",fi->second) << "\n";
             Functions.insert(fi->first);
           }
         }
@@ -190,34 +190,34 @@ namespace llvm {
              FI != FE; ++FI) {
           const FType *F = *FI;
           typename std::map<const FType*, BlockCounts>::iterator bwi = BlockInformation.find(F);
-          errs() << "BasicBlocks for Function " << F << ":\n";
+          dbgs() << "BasicBlocks for Function " << F << ":\n";
           for (typename BlockCounts::const_iterator bi = bwi->second.begin(), be = bwi->second.end(); bi != be; ++bi) {
-            errs() << bi->first << "@" << format("%p", bi->first) << ": " << format("%.20g",bi->second) << "\n";
+            dbgs() << bi->first << "@" << format("%p", bi->first) << ": " << format("%.20g",bi->second) << "\n";
           }
         }
 
         for (typename std::set<const FType*>::iterator FI = Functions.begin(), FE = Functions.end();
              FI != FE; ++FI) {
           typename std::map<const FType*, EdgeWeights>::iterator ei = EdgeInformation.find(*FI);
-          errs() << "Edges for Function " << ei->first << ":\n";
+          dbgs() << "Edges for Function " << ei->first << ":\n";
           for (typename EdgeWeights::iterator ewi = ei->second.begin(), ewe = ei->second.end(); 
                ewi != ewe; ++ewi) {
-            errs() << ewi->first << ": " << format("%.20g",ewi->second) << "\n";
+            dbgs() << ewi->first << ": " << format("%.20g",ewi->second) << "\n";
           }
         }
       } else {
         assert(F && "No function given, this is not supported!");
-        errs() << "Functions: \n";
-        errs() << F << "@" << format("%p", F) << ": " << format("%.20g",getExecutionCount(F)) << "\n";
+        dbgs() << "Functions: \n";
+        dbgs() << F << "@" << format("%p", F) << ": " << format("%.20g",getExecutionCount(F)) << "\n";
 
-        errs() << "BasicBlocks for Function " << F << ":\n";
+        dbgs() << "BasicBlocks for Function " << F << ":\n";
         for (typename FType::const_iterator BI = F->begin(), BE = F->end();
              BI != BE; ++BI) {
           const BType *BB = &(*BI);
-          errs() << BB << "@" << format("%p", BB) << ": " << format("%.20g",getExecutionCount(BB)) << "\n";
+          dbgs() << BB << "@" << format("%p", BB) << ": " << format("%.20g",getExecutionCount(BB)) << "\n";
         }
       }
-      errs() << "**** ProfileInfo " << this << ", over and out.\n";
+      dbgs() << "**** ProfileInfo " << this << ", over and out.\n";
     }
 
     bool CalculateMissingEdge(const BType *BB, Edge &removed, bool assumeEmptyExit = false);
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
index 4aa3dfa..6f57c74 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
@@ -243,7 +243,7 @@ namespace llvm {
 
     /// createNodeForGEP - Provide the special handling we need to analyze GEP
     /// SCEVs.
-    const SCEV *createNodeForGEP(Operator *GEP);
+    const SCEV *createNodeForGEP(GEPOperator *GEP);
 
     /// computeSCEVAtScope - Implementation code for getSCEVAtScope; called
     /// at most once for each SCEV+Loop pair.
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h b/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h
index 677d41d..c3c2f4b 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h
@@ -30,7 +30,6 @@ namespace llvm {
   class BasicBlock;
   class Function;
   class SparseSolver;
-  class LLVMContext;
   class raw_ostream;
 
   template<typename T> class SmallVectorImpl;
@@ -120,8 +119,6 @@ class SparseSolver {
   /// compute transfer functions.
   AbstractLatticeFunction *LatticeFunc;
   
-  LLVMContext *Context;
-  
   DenseMap<Value*, LatticeVal> ValueState;  // The state each value is in.
   SmallPtrSet<BasicBlock*, 16> BBExecutable;   // The bbs that are executable.
   
@@ -137,8 +134,8 @@ class SparseSolver {
   SparseSolver(const SparseSolver&);    // DO NOT IMPLEMENT
   void operator=(const SparseSolver&);  // DO NOT IMPLEMENT
 public:
-  explicit SparseSolver(AbstractLatticeFunction *Lattice, LLVMContext *C)
-    : LatticeFunc(Lattice), Context(C) {}
+  explicit SparseSolver(AbstractLatticeFunction *Lattice)
+    : LatticeFunc(Lattice) {}
   ~SparseSolver() {
     delete LatticeFunc;
   }
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h b/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h
index 5f3c671..7c673c3 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h
@@ -24,7 +24,6 @@ namespace llvm {
   class Instruction;
   class APInt;
   class TargetData;
-  class LLVMContext;
   
   /// ComputeMaskedBits - Determine which of the bits specified in Mask are
   /// known to be either zero or one and return them in the KnownZero/KnownOne
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
index ea3e59b..9c4e5b9 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
@@ -68,29 +68,11 @@ public:
   ///
   virtual bool finishFunction(MachineFunction &F) = 0;
   
-  /// startGVStub - This callback is invoked when the JIT needs the address of a
-  /// GV (e.g. function) that has not been code generated yet.  The StubSize
-  /// specifies the total size required by the stub.  The BufferState must be
-  /// passed to finishGVStub, and start/finish pairs with the same BufferState
-  /// must be properly nested.
-  ///
-  virtual void startGVStub(BufferState &BS, const GlobalValue* GV,
-                           unsigned StubSize, unsigned Alignment = 1) = 0;
-
-  /// startGVStub - This callback is invoked when the JIT needs the address of a
-  /// GV (e.g. function) that has not been code generated yet.  Buffer points to
-  /// memory already allocated for this stub.  The BufferState must be passed to
-  /// finishGVStub, and start/finish pairs with the same BufferState must be
-  /// properly nested.
-  ///
-  virtual void startGVStub(BufferState &BS, void *Buffer,
-                           unsigned StubSize) = 0;
-
-  /// finishGVStub - This callback is invoked to terminate a GV stub and returns
-  /// the start address of the stub.  The BufferState must first have been
-  /// passed to startGVStub.
-  ///
-  virtual void *finishGVStub(BufferState &BS) = 0;
+  /// allocIndirectGV - Allocates and fills storage for an indirect
+  /// GlobalValue, and returns the address.
+  virtual void *allocIndirectGV(const GlobalValue *GV,
+                                const uint8_t *Buffer, size_t Size,
+                                unsigned Alignment) = 0;
 
   /// emitByte - This callback is invoked when a byte needs to be written to the
   /// output stream.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
index 791db00..d598a93 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
@@ -48,41 +48,16 @@ class Function;
 /// occurred, more memory is allocated, and we reemit the code into it.
 /// 
 class MachineCodeEmitter {
-public:
-  class BufferState {
-    friend class MachineCodeEmitter;
-    /// BufferBegin/BufferEnd - Pointers to the start and end of the memory
-    /// allocated for this code buffer.
-    uint8_t *BufferBegin, *BufferEnd;
-
-    /// CurBufferPtr - Pointer to the next byte of memory to fill when emitting
-    /// code.  This is guranteed to be in the range [BufferBegin,BufferEnd].  If
-    /// this pointer is at BufferEnd, it will never move due to code emission,
-    /// and all code emission requests will be ignored (this is the buffer
-    /// overflow condition).
-    uint8_t *CurBufferPtr;
-  public:
-    BufferState() : BufferBegin(NULL), BufferEnd(NULL), CurBufferPtr(NULL) {}
-  };
-
 protected:
-  /// These have the same meanings as the fields in BufferState
-  uint8_t *BufferBegin, *BufferEnd, *CurBufferPtr;
-
-  /// Save or restore the current buffer state.  The BufferState objects must be
-  /// used as a stack.
-  void SaveStateTo(BufferState &BS) {
-    assert(BS.BufferBegin == NULL &&
-           "Can't save state into the same BufferState twice.");
-    BS.BufferBegin = BufferBegin;
-    BS.BufferEnd = BufferEnd;
-    BS.CurBufferPtr = CurBufferPtr;
-  }
-  void RestoreStateFrom(BufferState &BS) {
-    BufferBegin = BS.BufferBegin;
-    BufferEnd = BS.BufferEnd;
-    CurBufferPtr = BS.CurBufferPtr;
-  }
+  /// BufferBegin/BufferEnd - Pointers to the start and end of the memory
+  /// allocated for this code buffer.
+  uint8_t *BufferBegin, *BufferEnd;
+  /// CurBufferPtr - Pointer to the next byte of memory to fill when emitting
+  /// code.  This is guaranteed to be in the range [BufferBegin,BufferEnd].  If
+  /// this pointer is at BufferEnd, it will never move due to code emission, and
+  /// all code emission requests will be ignored (this is the buffer overflow
+  /// condition).
+  uint8_t *CurBufferPtr;
 
 public:
   virtual ~MachineCodeEmitter() {}
@@ -113,15 +88,23 @@ public:
   ///
   void emitWordLE(uint32_t W) {
     if (4 <= BufferEnd-CurBufferPtr) {
-      *CurBufferPtr++ = (uint8_t)(W >>  0);
-      *CurBufferPtr++ = (uint8_t)(W >>  8);
-      *CurBufferPtr++ = (uint8_t)(W >> 16);
-      *CurBufferPtr++ = (uint8_t)(W >> 24);
+      emitWordLEInto(CurBufferPtr, W);
     } else {
       CurBufferPtr = BufferEnd;
     }
   }
-  
+
+  /// emitWordLEInto - This callback is invoked when a 32-bit word needs to be
+  /// written to an arbitrary buffer in little-endian format.  Buf must have at
+  /// least 4 bytes of available space.
+  ///
+  static void emitWordLEInto(uint8_t *&Buf, uint32_t W) {
+    *Buf++ = (uint8_t)(W >>  0);
+    *Buf++ = (uint8_t)(W >>  8);
+    *Buf++ = (uint8_t)(W >> 16);
+    *Buf++ = (uint8_t)(W >> 24);
+  }
+
   /// emitWordBE - This callback is invoked when a 32-bit word needs to be
   /// written to the output stream in big-endian format.
   ///
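The refactor above hoists the little-endian byte sequencing out of emitWordLE into a static helper that writes to an arbitrary buffer, so the JIT can reuse it outside the emitter's own CurBufferPtr. The helper's behavior, reproduced from the patch as a standalone function:

```cpp
#include <cassert>
#include <cstdint>

// Little-endian 32-bit word emission into a caller-supplied buffer, as in
// MachineCodeEmitter::emitWordLEInto; Buf must have >= 4 bytes available
// and is advanced past the written bytes.
static void emitWordLEInto(uint8_t *&Buf, uint32_t W) {
  *Buf++ = (uint8_t)(W >>  0);
  *Buf++ = (uint8_t)(W >>  8);
  *Buf++ = (uint8_t)(W >> 16);
  *Buf++ = (uint8_t)(W >> 24);
}
```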
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
index bac9fce..e9b645b 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
@@ -43,6 +43,7 @@
 #include "llvm/GlobalValue.h"
 #include "llvm/Pass.h"
 #include "llvm/Metadata.h"
+#include "llvm/Support/ValueHandle.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachinePassRegistry.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachinePassRegistry.h
index 680d2b8..6ee2e90 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachinePassRegistry.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachinePassRegistry.h
@@ -129,9 +129,9 @@ public:
     // Add existing passes to option.
     for (RegistryClass *Node = RegistryClass::getList();
          Node; Node = Node->getNext()) {
-      addLiteralOption(Node->getName(),
+      this->addLiteralOption(Node->getName(),
                       (typename RegistryClass::FunctionPassCtor)Node->getCtor(),
-                      Node->getDescription());
+                             Node->getDescription());
     }
     
     // Make sure we listen for list changes.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
index c09c634..d55dd7f 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
@@ -29,12 +29,13 @@
 namespace llvm {
 
 class AliasAnalysis;
-class TargetLowering;
-class MachineModuleInfo;
 class DwarfWriter;
-class MachineFunction;
-class MachineConstantPoolValue;
 class FunctionLoweringInfo;
+class MachineConstantPoolValue;
+class MachineFunction;
+class MachineModuleInfo;
+class SDNodeOrdering;
+class TargetLowering;
 
 template<> struct ilist_traits<SDNode> : public ilist_default_traits<SDNode> {
 private:
@@ -110,45 +111,9 @@ class SelectionDAG {
   /// SelectionDAG.
   BumpPtrAllocator Allocator;
 
-  /// NodeOrdering - Assigns a "line number" value to each SDNode that
-  /// corresponds to the "line number" of the original LLVM instruction. This
-  /// used for turning off scheduling, because we'll forgo the normal scheduling
-  /// algorithm and output the instructions according to this ordering.
-  class NodeOrdering {
-    /// LineNo - The line of the instruction the node corresponds to. A value of
-    /// `0' means it's not assigned.
-    unsigned LineNo;
-    std::map<const SDNode*, unsigned> Order;
-
-    void operator=(const NodeOrdering&); // Do not implement.
-    NodeOrdering(const NodeOrdering&);   // Do not implement.
-  public:
-    NodeOrdering() : LineNo(0) {}
-
-    void add(const SDNode *Node) {
-      assert(LineNo && "Invalid line number!");
-      Order[Node] = LineNo;
-    }
-    void remove(const SDNode *Node) {
-      std::map<const SDNode*, unsigned>::iterator Itr = Order.find(Node);
-      if (Itr != Order.end())
-        Order.erase(Itr);
-    }
-    void clear() {
-      Order.clear();
-      LineNo = 1;
-    }
-    unsigned getLineNo(const SDNode *Node) {
-      unsigned LN = Order[Node];
-      assert(LN && "Node isn't in ordering map!");
-      return LN;
-    }
-    void newInst() {
-      ++LineNo;
-    }
-
-    void dump() const;
-  } *Ordering;
+  /// SDNodeOrdering - The ordering of the SDNodes. It roughly corresponds to
+  /// the ordering of the original LLVM instructions.
+  SDNodeOrdering *Ordering;
 
   /// VerifyNode - Sanity check the given node.  Aborts if it is invalid.
   void VerifyNode(SDNode *N);
@@ -242,13 +207,6 @@ public:
     return Root = N;
   }
 
-  /// NewInst - Tell the ordering object that we're processing a new
-  /// instruction.
-  void NewInst() {
-    if (Ordering)
-      Ordering->newInst();
-  }
-
   /// Combine - This iterates over the nodes in the SelectionDAG, folding
   /// certain types of nodes together, or eliminating superfluous nodes.  The
   /// Level argument controls whether Combine is allowed to produce nodes and
@@ -873,6 +831,12 @@ public:
     }
   }
 
+  /// AssignOrdering - Assign an order to the SDNode.
+  void AssignOrdering(SDNode *SD, unsigned Order);
+
+  /// GetOrdering - Get the order for the SDNode.
+  unsigned GetOrdering(const SDNode *SD) const;
+
   void dump() const;
 
   /// CreateStackTemporary - Create a stack temporary, suitable for holding the
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
index 571db47..7b1931a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -414,12 +414,13 @@ namespace ISD {
     /// X = FP_EXTEND(Y) - Extend a smaller FP type into a larger FP type.
     FP_EXTEND,
 
-    // BIT_CONVERT - Theis operator converts between integer and FP values, as
-    // if one was stored to memory as integer and the other was loaded from the
-    // same address (or equivalently for vector format conversions, etc).  The
-    // source and result are required to have the same bit size (e.g.
-    // f32 <-> i32).  This can also be used for int-to-int or fp-to-fp
-    // conversions, but that is a noop, deleted by getNode().
+    // BIT_CONVERT - This operator converts between integer, vector and FP
+    // values, as if the value was stored to memory with one type and loaded
+    // from the same address with the other type (or equivalently for vector
+    // format conversions, etc).  The source and result are required to have
+    // the same bit size (e.g.  f32 <-> i32).  This can also be used for
+    // int-to-int or fp-to-fp conversions, but that is a noop, deleted by
+    // getNode().
     BIT_CONVERT,
 
     // CONVERT_RNDSAT - This operator is used to support various conversions
@@ -1227,7 +1228,7 @@ public:
   SDVTList getVTList() const {
     SDVTList X = { ValueList, NumValues };
     return X;
-  };
+  }
 
   /// getFlaggedNode - If this node has a flag operand, return the node
   /// to which the flag operand points. Otherwise return NULL.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
index 9a85ee1..163642a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
@@ -176,7 +176,7 @@ namespace llvm {
     // Construct a new slot index from the given one, set the phi flag on the
     // new index to the value of the phi parameter.
     SlotIndex(const SlotIndex &li, bool phi)
-      : lie(&li.entry(), phi ? PHI_BIT & li.getSlot() : (unsigned)li.getSlot()){
+      : lie(&li.entry(), phi ? PHI_BIT | li.getSlot() : (unsigned)li.getSlot()){
       assert(lie.getPointer() != 0 &&
              "Attempt to construct index with 0 pointer.");
     }
@@ -184,7 +184,7 @@ namespace llvm {
     // Construct a new slot index from the given one, set the phi flag on the
     // new index to the value of the phi parameter, and the slot to the new slot.
     SlotIndex(const SlotIndex &li, bool phi, Slot s)
-      : lie(&li.entry(), phi ? PHI_BIT & s : (unsigned)s) {
+      : lie(&li.entry(), phi ? PHI_BIT | s : (unsigned)s) {
       assert(lie.getPointer() != 0 &&
              "Attempt to construct index with 0 pointer.");
     }
@@ -579,7 +579,7 @@ namespace llvm {
          (I == idx2MBBMap.end() && idx2MBBMap.size()>0)) ? (I-1): I;
 
       assert(J != idx2MBBMap.end() && J->first <= index &&
-             index <= getMBBEndIdx(J->second) &&
+             index < getMBBEndIdx(J->second) &&
              "index does not correspond to an MBB");
       return J->second;
     }
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
index 06e07f3..9dc4c7b 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
@@ -589,7 +589,25 @@ namespace llvm {
         return getIntegerVT(Context, 1 << Log2_32_Ceil(BitWidth));
     }
 
-    /// isPow2VectorType - Retuns true if the given vector is a power of 2.
+    /// getHalfSizedIntegerVT - Finds the smallest simple value type that is
+    /// greater than or equal to half the width of this EVT. If no simple
+    /// value type can be found, an extended integer value type of half the
+    /// size (rounded up) is returned.
+    EVT getHalfSizedIntegerVT(LLVMContext &Context) const {
+      assert(isInteger() && !isVector() && "Invalid integer type!");
+      unsigned EVTSize = getSizeInBits();
+      for (unsigned IntVT = MVT::FIRST_INTEGER_VALUETYPE;
+          IntVT <= MVT::LAST_INTEGER_VALUETYPE;
+          ++IntVT) {
+        EVT HalfVT = EVT((MVT::SimpleValueType)IntVT);
+        if(HalfVT.getSizeInBits() * 2 >= EVTSize) { 
+          return HalfVT;
+        }
+      }
+      return getIntegerVT(Context, (EVTSize + 1) / 2);
+    }
+
+    /// isPow2VectorType - Returns true if the given vector is a power of 2.
     bool isPow2VectorType() const {
       unsigned NElts = getVectorNumElements();
       return !(NElts & (NElts - 1));
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
index 8d2f63b..9c3e861 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
@@ -84,6 +84,7 @@ def stop_compilation;
 def unpack_values;
 def warning;
 def error;
+def set_option;
 def unset_option;
 
 // Increase/decrease the edge weight.
diff --git a/libclamav/c++/llvm/include/llvm/Constants.h b/libclamav/c++/llvm/include/llvm/Constants.h
index caa13f6..79c1eaa 100644
--- a/libclamav/c++/llvm/include/llvm/Constants.h
+++ b/libclamav/c++/llvm/include/llvm/Constants.h
@@ -692,8 +692,10 @@ public:
   static Constant *getIntToPtr(Constant *C, const Type *Ty);
   static Constant *getBitCast (Constant *C, const Type *Ty);
 
+  static Constant *getNSWNeg(Constant *C);
   static Constant *getNSWAdd(Constant *C1, Constant *C2);
   static Constant *getNSWSub(Constant *C1, Constant *C2);
+  static Constant *getNSWMul(Constant *C1, Constant *C2);
   static Constant *getExactSDiv(Constant *C1, Constant *C2);
 
   /// Transparently provide more efficient getOperand methods.
diff --git a/libclamav/c++/llvm/include/llvm/DerivedTypes.h b/libclamav/c++/llvm/include/llvm/DerivedTypes.h
index fb51430..c220608 100644
--- a/libclamav/c++/llvm/include/llvm/DerivedTypes.h
+++ b/libclamav/c++/llvm/include/llvm/DerivedTypes.h
@@ -502,9 +502,7 @@ class OpaqueType : public DerivedType {
 public:
   /// OpaqueType::get - Static factory method for the OpaqueType class...
   ///
-  static OpaqueType *get(LLVMContext &C) {
-    return new OpaqueType(C);           // All opaque types are distinct
-  }
+  static OpaqueType *get(LLVMContext &C);
 
   // Implement support for type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const OpaqueType *) { return true; }
diff --git a/libclamav/c++/llvm/include/llvm/InstrTypes.h b/libclamav/c++/llvm/include/llvm/InstrTypes.h
index bc89969..109aa26 100644
--- a/libclamav/c++/llvm/include/llvm/InstrTypes.h
+++ b/libclamav/c++/llvm/include/llvm/InstrTypes.h
@@ -277,6 +277,27 @@ public:
     return BO;
   }
 
+  /// CreateNSWMul - Create a Mul operator with the NSW flag set.
+  ///
+  static BinaryOperator *CreateNSWMul(Value *V1, Value *V2,
+                                      const Twine &Name = "") {
+    BinaryOperator *BO = CreateMul(V1, V2, Name);
+    BO->setHasNoSignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNSWMul(Value *V1, Value *V2,
+                                      const Twine &Name, BasicBlock *BB) {
+    BinaryOperator *BO = CreateMul(V1, V2, Name, BB);
+    BO->setHasNoSignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNSWMul(Value *V1, Value *V2,
+                                      const Twine &Name, Instruction *I) {
+    BinaryOperator *BO = CreateMul(V1, V2, Name, I);
+    BO->setHasNoSignedWrap(true);
+    return BO;
+  }
+
   /// CreateExactSDiv - Create an SDiv operator with the exact flag set.
   ///
   static BinaryOperator *CreateExactSDiv(Value *V1, Value *V2,
@@ -308,6 +329,10 @@ public:
                                    Instruction *InsertBefore = 0);
   static BinaryOperator *CreateNeg(Value *Op, const Twine &Name,
                                    BasicBlock *InsertAtEnd);
+  static BinaryOperator *CreateNSWNeg(Value *Op, const Twine &Name = "",
+                                      Instruction *InsertBefore = 0);
+  static BinaryOperator *CreateNSWNeg(Value *Op, const Twine &Name,
+                                      BasicBlock *InsertAtEnd);
   static BinaryOperator *CreateFNeg(Value *Op, const Twine &Name = "",
                                     Instruction *InsertBefore = 0);
   static BinaryOperator *CreateFNeg(Value *Op, const Twine &Name,
diff --git a/libclamav/c++/llvm/include/llvm/Intrinsics.td b/libclamav/c++/llvm/include/llvm/Intrinsics.td
index 6ff87ba..c472f2b 100644
--- a/libclamav/c++/llvm/include/llvm/Intrinsics.td
+++ b/libclamav/c++/llvm/include/llvm/Intrinsics.td
@@ -260,7 +260,7 @@ def int_sigsetjmp  : Intrinsic<[llvm_i32_ty] , [llvm_ptr_ty, llvm_i32_ty]>;
 def int_siglongjmp : Intrinsic<[llvm_void_ty], [llvm_ptr_ty, llvm_i32_ty]>;
 
 // Internal interface for object size checking
-def int_objectsize : Intrinsic<[llvm_anyint_ty], [llvm_ptr_ty, llvm_i32_ty],
+def int_objectsize : Intrinsic<[llvm_anyint_ty], [llvm_ptr_ty, llvm_i1_ty],
                                [IntrReadArgMem]>,
                                GCCBuiltin<"__builtin_object_size">;
 
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h b/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
index 8656927..be017bf 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
@@ -76,7 +76,7 @@ public:
   virtual uint64_t getMaxFileSize() const {
     assert(0 && "Invalid getMaxFileSize call!");
     return 0;
-  };
+  }
 
   /// @name Assembler Backend Support
   /// @{
diff --git a/libclamav/c++/llvm/include/llvm/Metadata.h b/libclamav/c++/llvm/include/llvm/Metadata.h
index c7f2b44..1ece559 100644
--- a/libclamav/c++/llvm/include/llvm/Metadata.h
+++ b/libclamav/c++/llvm/include/llvm/Metadata.h
@@ -17,17 +17,16 @@
 #define LLVM_METADATA_H
 
 #include "llvm/Value.h"
-#include "llvm/Type.h"
 #include "llvm/ADT/FoldingSet.h"
-#include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/ilist_node.h"
-#include "llvm/Support/ValueHandle.h"
 
 namespace llvm {
 class Constant;
 class Instruction;
 class LLVMContext;
+class Module;
 class MetadataContextImpl;
+template <typename T> class SmallVectorImpl;
 
 //===----------------------------------------------------------------------===//
 // MetadataBase  - A base class for MDNode, MDString and NamedMDNode.
@@ -55,8 +54,7 @@ class MDString : public MetadataBase {
 
   StringRef Str;
 protected:
-  explicit MDString(LLVMContext &C, StringRef S)
-    : MetadataBase(Type::getMetadataTy(C), Value::MDStringVal), Str(S) {}
+  explicit MDString(LLVMContext &C, StringRef S);
 
 public:
   static MDString *get(LLVMContext &Context, StringRef Str);
@@ -83,53 +81,51 @@ public:
   }
 };
 
+  
+class MDNodeElement;
+  
 //===----------------------------------------------------------------------===//
 /// MDNode - a tuple of other values.
 /// These contain a list of the values that represent the metadata. 
 /// MDNode is always unnamed.
 class MDNode : public MetadataBase, public FoldingSetNode {
   MDNode(const MDNode &);                // DO NOT IMPLEMENT
+  void operator=(const MDNode &);        // DO NOT IMPLEMENT
+  friend class MDNodeElement;
 
-  friend class ElementVH;
-  // Use CallbackVH to hold MDNode elements.
-  struct ElementVH : public CallbackVH {
-    MDNode *Parent;
-    ElementVH() {}
-    ElementVH(Value *V, MDNode *P) : CallbackVH(V), Parent(P) {}
-    ~ElementVH() {}
-
-    virtual void deleted() {
-      Parent->replaceElement(this->operator Value*(), 0);
-    }
-
-    virtual void allUsesReplacedWith(Value *NV) {
-      Parent->replaceElement(this->operator Value*(), NV);
-    }
+  MDNodeElement *Operands;
+  unsigned NumOperands;
+  
+  // Subclass data enums.
+  enum {
+    FunctionLocalBit = 1
   };
+  
   // Replace each instance of F from the element list of this node with T.
-  void replaceElement(Value *F, Value *T);
-
-  ElementVH *Node;
-  unsigned NodeSize;
+  void replaceElement(MDNodeElement *Op, Value *NewVal);
 
 protected:
-  explicit MDNode(LLVMContext &C, Value *const *Vals, unsigned NumVals);
+  explicit MDNode(LLVMContext &C, Value *const *Vals, unsigned NumVals,
+                  bool isFunctionLocal);
 public:
   // Constructors and destructors.
-  static MDNode *get(LLVMContext &Context, 
-                     Value *const *Vals, unsigned NumVals);
+  static MDNode *get(LLVMContext &Context, Value *const *Vals, unsigned NumVals,
+                     bool isFunctionLocal = false);
 
   /// ~MDNode - Destroy MDNode.
   ~MDNode();
   
   /// getElement - Return specified element.
-  Value *getElement(unsigned i) const {
-    assert(i < getNumElements() && "Invalid element number!");
-    return Node[i];
-  }
-
+  Value *getElement(unsigned i) const;
+  
   /// getNumElements - Return number of MDNode elements.
-  unsigned getNumElements() const { return NodeSize; }
+  unsigned getNumElements() const { return NumOperands; }
+  
+  /// isFunctionLocal - Return whether MDNode is local to a function.
+  /// Note: MDNodes are designated as function-local when created, and keep
+  ///       that designation even if their operands are modified to no longer
+  ///       refer to function-local IR.
+  bool isFunctionLocal() const { return SubclassData & FunctionLocalBit; }
 
   /// Profile - calculate a unique identifier for this MDNode to collapse
   /// duplicates
@@ -155,7 +151,7 @@ class NamedMDNode : public MetadataBase, public ilist_node<NamedMDNode> {
   NamedMDNode(const NamedMDNode &);      // DO NOT IMPLEMENT
 
   Module *Parent;
-  SmallVector<TrackingVH<MetadataBase>, 4> Node;
+  void *Operands; // SmallVector<TrackingVH<MetadataBase>, 4>
 
   void setParent(Module *M) { Parent = M; }
 protected:
@@ -185,30 +181,14 @@ public:
   inline const Module *getParent() const { return Parent; }
 
   /// getElement - Return specified element.
-  MetadataBase *getElement(unsigned i) const {
-    assert(i < getNumElements() && "Invalid element number!");
-    return Node[i];
-  }
-
+  MetadataBase *getElement(unsigned i) const;
+  
   /// getNumElements - Return number of NamedMDNode elements.
-  unsigned getNumElements() const {
-    return (unsigned)Node.size();
-  }
+  unsigned getNumElements() const;
 
   /// addElement - Add metadata element.
-  void addElement(MetadataBase *M) {
-    Node.push_back(TrackingVH<MetadataBase>(M));
-  }
-
-  typedef SmallVectorImpl<TrackingVH<MetadataBase> >::iterator elem_iterator;
-  typedef SmallVectorImpl<TrackingVH<MetadataBase> >::const_iterator 
-    const_elem_iterator;
-  bool elem_empty() const                { return Node.empty(); }
-  const_elem_iterator elem_begin() const { return Node.begin(); }
-  const_elem_iterator elem_end() const   { return Node.end();   }
-  elem_iterator elem_begin()             { return Node.begin(); }
-  elem_iterator elem_end()               { return Node.end();   }
-
+  void addElement(MetadataBase *M);
+  
   /// Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const NamedMDNode *) { return true; }
   static bool classof(const Value *V) {
@@ -249,7 +229,7 @@ public:
 
   /// getMDs - Get the metadata attached to an Instruction.
   void getMDs(const Instruction *Inst, 
-        SmallVectorImpl<std::pair<unsigned, TrackingVH<MDNode> > > &MDs) const;
+              SmallVectorImpl<std::pair<unsigned, MDNode*> > &MDs) const;
 
   /// addMD - Attach the metadata of given kind to an Instruction.
   void addMD(unsigned Kind, MDNode *Node, Instruction *Inst);
diff --git a/libclamav/c++/llvm/include/llvm/Support/Casting.h b/libclamav/c++/llvm/include/llvm/Support/Casting.h
index 35fb29e..37a7c3b 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Casting.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Casting.h
@@ -251,7 +251,7 @@ struct foo {
 };
 
 template <> inline bool isa_impl<foo,bar>(const bar &Val) {
-  errs() << "Classof: " << &Val << "\n";
+  dbgs() << "Classof: " << &Val << "\n";
   return true;
 }
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/Compiler.h b/libclamav/c++/llvm/include/llvm/Support/Compiler.h
index 8861a20..1376e46 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Compiler.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Compiler.h
@@ -29,6 +29,12 @@
 #define ATTRIBUTE_USED
 #endif
 
+#if (__GNUC__ >= 4 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 1))
+#define ATTRIBUTE_UNUSED __attribute__((__unused__))
+#else
+#define ATTRIBUTE_UNUSED
+#endif
+
 #ifdef __GNUC__ // aka 'ATTRIBUTE_CONST' but following LLVM Conventions.
 #define ATTRIBUTE_READNONE __attribute__((__const__))
 #else
diff --git a/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h b/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
index b73cea0..1339e9f 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
@@ -54,6 +54,9 @@ public:
   Constant *CreateMul(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getMul(LHS, RHS);
   }
+  Constant *CreateNSWMul(Constant *LHS, Constant *RHS) const {
+    return ConstantExpr::getNSWMul(LHS, RHS);
+  }
   Constant *CreateFMul(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getFMul(LHS, RHS);
   }
@@ -109,6 +112,9 @@ public:
   Constant *CreateNeg(Constant *C) const {
     return ConstantExpr::getNeg(C);
   }
+  Constant *CreateNSWNeg(Constant *C) const {
+    return ConstantExpr::getNSWNeg(C);
+  }
   Constant *CreateFNeg(Constant *C) const {
     return ConstantExpr::getFNeg(C);
   }
diff --git a/libclamav/c++/llvm/include/llvm/Support/Debug.h b/libclamav/c++/llvm/include/llvm/Support/Debug.h
index e8bc0ce..8651fc1 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Debug.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Debug.h
@@ -28,6 +28,8 @@
 
 namespace llvm {
 
+class raw_ostream;
+
 /// DEBUG_TYPE macro - Files can specify a DEBUG_TYPE as a string, which causes
 /// all of their DEBUG statements to be activatable with -debug-only=thatstring.
 #ifndef DEBUG_TYPE
@@ -58,7 +60,7 @@ void SetCurrentDebugType(const char *Type);
 /// this is a debug build, then the code specified as the option to the macro
 /// will be executed.  Otherwise it will not be.  Example:
 ///
-/// DEBUG_WITH_TYPE("bitset", errs() << "Bitset contains: " << Bitset << "\n");
+/// DEBUG_WITH_TYPE("bitset", dbgs() << "Bitset contains: " << Bitset << "\n");
 ///
 /// This will emit the debug information if -debug is present, and -debug-only
 /// is not specified, or is specified as "bitset".
@@ -72,15 +74,28 @@ void SetCurrentDebugType(const char *Type);
 #define DEBUG_WITH_TYPE(TYPE, X) do { } while (0)
 #endif
 
+/// EnableDebugBuffering - This defaults to false.  If true, the debug
+/// stream will install signal handlers to dump any buffered debug
+/// output.  It allows clients to selectively allow the debug stream
+/// to install signal handlers if they are certain there will be no
+/// conflict.
+///
+extern bool EnableDebugBuffering;
+
+/// dbgs() - This returns a reference to a raw_ostream for debugging
+/// messages.  If debugging is disabled it returns errs().  Use it
+/// like: dbgs() << "foo" << "bar";
+raw_ostream &dbgs();
+
 // DEBUG macro - This macro should be used by passes to emit debug information.
 // In the '-debug' option is specified on the commandline, and if this is a
 // debug build, then the code specified as the option to the macro will be
 // executed.  Otherwise it will not be.  Example:
 //
-// DEBUG(errs() << "Bitset contains: " << Bitset << "\n");
+// DEBUG(dbgs() << "Bitset contains: " << Bitset << "\n");
 //
 #define DEBUG(X) DEBUG_WITH_TYPE(DEBUG_TYPE, X)
-  
+
 } // End llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/Support/Format.h b/libclamav/c++/llvm/include/llvm/Support/Format.h
index 340f517..f64e3db 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Format.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Format.h
@@ -25,7 +25,12 @@
 
 #include <cassert>
 #include <cstdio>
-#ifdef WIN32
+#ifdef _MSC_VER
+// FIXME: This define is wrong:
+//  - _snprintf does not guarantee that trailing null is always added - if
+//    there is no space for null, it does not report any error.
+//  - According to C++ standard, snprintf should be visible in the 'std' 
+//    namespace - this define makes this impossible.
 #define snprintf _snprintf
 #endif
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h b/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h
index 24a3546..09ab17c 100644
--- a/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h
+++ b/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h
@@ -59,7 +59,7 @@ namespace llvm
 
     /// current_pos - Return the current position within the stream,
     /// not counting the bytes currently in the buffer.
-    virtual uint64_t current_pos() { 
+    virtual uint64_t current_pos() const { 
       // This has the same effect as calling TheStream.current_pos(),
       // but that interface is private.
       return TheStream->tell() - TheStream->GetNumBytesInBuffer();
diff --git a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
index 1310d70..543ea85 100644
--- a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
@@ -353,6 +353,12 @@ public:
         return Folder.CreateMul(LC, RC);
     return Insert(BinaryOperator::CreateMul(LHS, RHS), Name);
   }
+  Value *CreateNSWMul(Value *LHS, Value *RHS, const Twine &Name = "") {
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      if (Constant *RC = dyn_cast<Constant>(RHS))
+        return Folder.CreateNSWMul(LC, RC);
+    return Insert(BinaryOperator::CreateNSWMul(LHS, RHS), Name);
+  }
   Value *CreateFMul(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *LC = dyn_cast<Constant>(LHS))
       if (Constant *RC = dyn_cast<Constant>(RHS))
@@ -478,6 +484,11 @@ public:
       return Folder.CreateNeg(VC);
     return Insert(BinaryOperator::CreateNeg(V), Name);
   }
+  Value *CreateNSWNeg(Value *V, const Twine &Name = "") {
+    if (Constant *VC = dyn_cast<Constant>(V))
+      return Folder.CreateNSWNeg(VC);
+    return Insert(BinaryOperator::CreateNSWNeg(V), Name);
+  }
   Value *CreateFNeg(Value *V, const Twine &Name = "") {
     if (Constant *VC = dyn_cast<Constant>(V))
       return Folder.CreateFNeg(VC);
diff --git a/libclamav/c++/llvm/include/llvm/Support/NoFolder.h b/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
index 7f2f149..78a9035 100644
--- a/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
@@ -60,6 +60,9 @@ public:
   Value *CreateMul(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateMul(LHS, RHS);
   }
+  Value *CreateNSWMul(Constant *LHS, Constant *RHS) const {
+    return BinaryOperator::CreateNSWMul(LHS, RHS);
+  }
   Value *CreateFMul(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateFMul(LHS, RHS);
   }
@@ -115,6 +118,9 @@ public:
   Value *CreateNeg(Constant *C) const {
     return BinaryOperator::CreateNeg(C);
   }
+  Value *CreateNSWNeg(Constant *C) const {
+    return BinaryOperator::CreateNSWNeg(C);
+  }
   Value *CreateNot(Constant *C) const {
     return BinaryOperator::CreateNot(C);
   }
diff --git a/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h b/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h
index 18be1ad..f233c18 100644
--- a/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h
+++ b/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h
@@ -137,7 +137,8 @@ namespace llvm {
     if (UnrollLoops)
       PM->add(createLoopUnrollPass());          // Unroll small loops
     PM->add(createInstructionCombiningPass());  // Clean up after the unroller
-    PM->add(createGVNPass());                   // Remove redundancies
+    if (OptimizationLevel > 1)
+      PM->add(createGVNPass());                 // Remove redundancies
     PM->add(createMemCpyOptPass());             // Remove memcpy / form memset
     PM->add(createSCCPPass());                  // Constant prop with SCCP
   
diff --git a/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h b/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
index afed853..59dd29b 100644
--- a/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
@@ -67,6 +67,9 @@ public:
   Constant *CreateMul(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getMul(LHS, RHS));
   }
+  Constant *CreateNSWMul(Constant *LHS, Constant *RHS) const {
+    return Fold(ConstantExpr::getNSWMul(LHS, RHS));
+  }
   Constant *CreateFMul(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getFMul(LHS, RHS));
   }
@@ -122,6 +125,9 @@ public:
   Constant *CreateNeg(Constant *C) const {
     return Fold(ConstantExpr::getNeg(C));
   }
+  Constant *CreateNSWNeg(Constant *C) const {
+    return Fold(ConstantExpr::getNSWNeg(C));
+  }
   Constant *CreateFNeg(Constant *C) const {
     return Fold(ConstantExpr::getFNeg(C));
   }
diff --git a/libclamav/c++/llvm/include/llvm/Support/circular_raw_ostream.h b/libclamav/c++/llvm/include/llvm/Support/circular_raw_ostream.h
new file mode 100644
index 0000000..2b3c329
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Support/circular_raw_ostream.h
@@ -0,0 +1,171 @@
+//===-- llvm/Support/circular_raw_ostream.h - Buffered streams --*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file contains raw_ostream implementations for streams to do circular
+// buffering of their output.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_SUPPORT_CIRCULAR_RAW_OSTREAM_H
+#define LLVM_SUPPORT_CIRCULAR_RAW_OSTREAM_H
+
+#include "llvm/Support/raw_ostream.h"
+
+namespace llvm 
+{
+  /// circular_raw_ostream - A raw_ostream which *can* save its data
+  /// to a circular buffer, or can pass it through directly to an
+  /// underlying stream if specified with a buffer of zero.
+  ///
+  class circular_raw_ostream : public raw_ostream {
+  public:
+    /// TAKE_OWNERSHIP - Tell this stream that it owns the underlying
+    /// stream and is responsible for cleanup, memory management
+    /// issues, etc.
+    ///
+    static const bool TAKE_OWNERSHIP = true;
+
+    /// REFERENCE_ONLY - Tell this stream it should not manage the
+    /// held stream.
+    ///
+    static const bool REFERENCE_ONLY = false;
+
+  private:
+    /// TheStream - The real stream we output to. We set it to be
+    /// unbuffered, since we're already doing our own buffering.
+    ///
+    raw_ostream *TheStream;
+
+    /// OwnsStream - Are we responsible for managing the underlying
+    /// stream?
+    ///
+    bool OwnsStream;
+
+    /// BufferSize - The size of the buffer in bytes.
+    ///
+    size_t BufferSize;
+
+    /// BufferArray - The actual buffer storage.
+    ///
+    char *BufferArray;
+
+    /// Cur - Pointer to the current output point in BufferArray.
+    ///
+    char *Cur;
+
+    /// Filled - Indicate whether the buffer has been completely
+    /// filled.  This helps avoid garbage output.
+    ///
+    bool Filled;
+
+    /// Banner - A pointer to a banner to print before dumping the
+    /// log.
+    ///
+    const char *Banner;
+
+    /// flushBuffer - Dump the contents of the buffer to Stream.
+    ///
+    void flushBuffer(void) {
+      if (Filled)
+        // Write the older portion of the buffer.
+        TheStream->write(Cur, BufferArray + BufferSize - Cur);
+      // Write the newer portion of the buffer.
+      TheStream->write(BufferArray, Cur - BufferArray);
+      Cur = BufferArray;
+      Filled = false;
+    }
+
+    virtual void write_impl(const char *Ptr, size_t Size);
+
+    /// current_pos - Return the current position within the stream,
+    /// not counting the bytes currently in the buffer.
+    ///
+    virtual uint64_t current_pos() const { 
+      // This has the same effect as calling TheStream.current_pos(),
+      // but that interface is private.
+      return TheStream->tell() - TheStream->GetNumBytesInBuffer();
+    }
+
+  public:
+    /// circular_raw_ostream - Construct an optionally
+    /// circular-buffered stream, handing it an underlying stream to
+    /// do the "real" output.
+    ///
+    /// As a side effect, if BuffSize is nonzero, the given Stream is
+    /// set to be Unbuffered.  This is because circular_raw_ostream
+    /// does its own buffering, so it doesn't want another layer of
+    /// buffering to be happening underneath it.
+    ///
+    /// "Owns" tells the circular_raw_ostream whether it is
+    /// responsible for managing the held stream, doing memory
+    /// management of it, etc.
+    ///
+    circular_raw_ostream(raw_ostream &Stream, const char *Header,
+                         size_t BuffSize = 0, bool Owns = REFERENCE_ONLY) 
+        : raw_ostream(/*unbuffered*/true),
+            TheStream(0),
+            OwnsStream(Owns),
+            BufferSize(BuffSize),
+            BufferArray(0),
+            Filled(false),
+            Banner(Header) {
+      if (BufferSize != 0)
+        BufferArray = new char[BufferSize];
+      Cur = BufferArray;
+      setStream(Stream, Owns);
+    }
+    explicit circular_raw_ostream()
+        : raw_ostream(/*unbuffered*/true),
+            TheStream(0),
+            OwnsStream(REFERENCE_ONLY),
+            BufferArray(0),
+            Filled(false),
+            Banner("") {
+      Cur = BufferArray;
+    }
+
+    ~circular_raw_ostream() {
+      flush();
+      flushBufferWithBanner();
+      releaseStream();
+      delete[] BufferArray;
+    }
+
+    /// setStream - Tell the circular_raw_ostream to output a
+    /// different stream.  "Owns" tells circular_raw_ostream whether
+    /// it should take responsibility for managing the underlying
+    /// stream.
+    ///
+    void setStream(raw_ostream &Stream, bool Owns = REFERENCE_ONLY) {
+      releaseStream();
+      TheStream = &Stream;
+      OwnsStream = Owns;
+    }
+
+    /// flushBufferWithBanner - Force output of the buffer along with
+    /// a small header.
+    ///
+    void flushBufferWithBanner(void);
+
+  private:
+    /// releaseStream - Delete the held stream if needed. Otherwise,
+    /// transfer the buffer settings from this circular_raw_ostream
+    /// back to the underlying stream.
+    ///
+    void releaseStream() {
+      if (!TheStream)
+        return;
+      if (OwnsStream)
+        delete TheStream;
+    }
+  };
+} // end llvm namespace
+
+
+#endif
diff --git a/libclamav/c++/llvm/include/llvm/Support/raw_os_ostream.h b/libclamav/c++/llvm/include/llvm/Support/raw_os_ostream.h
index e0978b2..4f5d361 100644
--- a/libclamav/c++/llvm/include/llvm/Support/raw_os_ostream.h
+++ b/libclamav/c++/llvm/include/llvm/Support/raw_os_ostream.h
@@ -30,7 +30,7 @@ class raw_os_ostream : public raw_ostream {
   
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
-  virtual uint64_t current_pos();
+  virtual uint64_t current_pos() const;
   
 public:
   raw_os_ostream(std::ostream &O) : OS(O) {}
diff --git a/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h b/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
index 2b3341d..d3c45c2 100644
--- a/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
+++ b/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
@@ -85,7 +85,7 @@ public:
   virtual ~raw_ostream();
 
   /// tell - Return the current offset with the file.
-  uint64_t tell() { return current_pos() + GetNumBytesInBuffer(); }
+  uint64_t tell() const { return current_pos() + GetNumBytesInBuffer(); }
 
   /// has_error - Return the value of the flag in this raw_ostream indicating
   /// whether an output error has been encountered.
@@ -116,7 +116,7 @@ public:
     SetBufferAndMode(new char[Size], Size, InternalBuffer);
   }
 
-  size_t GetBufferSize() {
+  size_t GetBufferSize() const {
     // If we're supposed to be buffered but haven't actually gotten around
     // to allocating the buffer yet, return the value that would be used.
     if (BufferMode != Unbuffered && OutBufStart == 0)
@@ -269,7 +269,7 @@ private:
 
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
-  virtual uint64_t current_pos() = 0;
+  virtual uint64_t current_pos() const = 0;
 
 protected:
   /// SetBuffer - Use the provided buffer as the raw_ostream buffer. This is
@@ -282,7 +282,7 @@ protected:
 
   /// preferred_buffer_size - Return an efficient buffer size for the
   /// underlying output mechanism.
-  virtual size_t preferred_buffer_size();
+  virtual size_t preferred_buffer_size() const;
 
   /// error_detected - Set the flag indicating that an output error has
   /// been encountered.
@@ -325,10 +325,10 @@ class raw_fd_ostream : public raw_ostream {
 
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
-  virtual uint64_t current_pos() { return pos; }
+  virtual uint64_t current_pos() const { return pos; }
 
   /// preferred_buffer_size - Determine an efficient buffer size.
-  virtual size_t preferred_buffer_size();
+  virtual size_t preferred_buffer_size() const;
 
 public:
 
@@ -423,7 +423,7 @@ class raw_string_ostream : public raw_ostream {
 
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
-  virtual uint64_t current_pos() { return OS.size(); }
+  virtual uint64_t current_pos() const { return OS.size(); }
 public:
   explicit raw_string_ostream(std::string &O) : OS(O) {}
   ~raw_string_ostream();
@@ -447,7 +447,7 @@ class raw_svector_ostream : public raw_ostream {
 
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
-  virtual uint64_t current_pos();
+  virtual uint64_t current_pos() const;
 public:
   /// Construct a new raw_svector_ostream.
   ///
@@ -468,7 +468,7 @@ class raw_null_ostream : public raw_ostream {
 
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
-  virtual uint64_t current_pos();
+  virtual uint64_t current_pos() const;
 
 public:
   explicit raw_null_ostream() {}
diff --git a/libclamav/c++/llvm/include/llvm/System/Path.h b/libclamav/c++/llvm/include/llvm/System/Path.h
index b8554c8..bdfb9aa 100644
--- a/libclamav/c++/llvm/include/llvm/System/Path.h
+++ b/libclamav/c++/llvm/include/llvm/System/Path.h
@@ -14,6 +14,7 @@
 #ifndef LLVM_SYSTEM_PATH_H
 #define LLVM_SYSTEM_PATH_H
 
+#include "llvm/ADT/StringRef.h"
 #include "llvm/System/TimeValue.h"
 #include <set>
 #include <string>
@@ -159,7 +160,7 @@ namespace sys {
       /// between processes.
       /// @returns The dynamic link library suffix for the current platform.
       /// @brief Return the dynamic link library suffix.
-      static std::string GetDLLSuffix();
+      static StringRef GetDLLSuffix();
 
       /// GetMainExecutable - Return the path to the main executable, given the
       /// value of argv[0] from program startup and the address of main itself.
@@ -174,12 +175,12 @@ namespace sys {
       Path() : path() {}
       Path(const Path &that) : path(that.path) {}
 
-      /// This constructor will accept a std::string as a path. No checking is
-      /// done on this path to determine if it is valid. To determine validity
-      /// of the path, use the isValid method.
+      /// This constructor will accept a char* or std::string as a path. No
+      /// checking is done on this path to determine if it is valid. To
+      /// determine validity of the path, use the isValid method.
       /// @param p The path to assign.
       /// @brief Construct a Path from a string.
-      explicit Path(const std::string& p);
+      explicit Path(StringRef p);
 
       /// This constructor will accept a character range as a path.  No checking
       /// is done on this path to determine if it is valid.  To determine
@@ -202,10 +203,10 @@ namespace sys {
       }
 
       /// Makes a copy of \p that to \p this.
-      /// @param \p that A std::string denoting the path
+      /// @param \p that A StringRef denoting the path
       /// @returns \p this
       /// @brief Assignment Operator
-      Path &operator=(const std::string &that);
+      Path &operator=(StringRef that);
 
       /// Compares \p this Path with \p that Path for equality.
       /// @returns true if \p this and \p that refer to the same thing.
@@ -251,28 +252,28 @@ namespace sys {
       /// component is the file or directory name occuring after the last
       /// directory separator. If no directory separator is present, the entire
       /// path name is returned (i.e. same as toString).
-      /// @returns std::string containing the last component of the path name.
+      /// @returns StringRef containing the last component of the path name.
       /// @brief Returns the last component of the path name.
-      std::string getLast() const;
+      StringRef getLast() const;
 
       /// This function strips off the path and suffix of the file or directory
       /// name and returns just the basename. For example /a/foo.bar would cause
       /// this function to return "foo".
-      /// @returns std::string containing the basename of the path
+      /// @returns StringRef containing the basename of the path
       /// @brief Get the base name of the path
-      std::string getBasename() const;
+      StringRef getBasename() const;
 
       /// This function strips off the suffix of the path beginning with the
       /// path separator ('/' on Unix, '\' on Windows) and returns the result.
-      std::string getDirname() const;
+      StringRef getDirname() const;
 
       /// This function strips off the path and basename(up to and
       /// including the last dot) of the file or directory name and
       /// returns just the suffix. For example /a/foo.bar would cause
       /// this function to return "bar".
-      /// @returns std::string containing the suffix of the path
+      /// @returns StringRef containing the suffix of the path
       /// @brief Get the suffix of the path
-      std::string getSuffix() const;
+      StringRef getSuffix() const;
 
       /// Obtain a 'C' string for the path name.
       /// @returns a 'C' string containing the path name.
@@ -315,7 +316,7 @@ namespace sys {
       /// cases (file not found, file not accessible, etc.) it returns false.
       /// @returns true if the magic number of the file matches \p magic.
       /// @brief Determine if file has a specific magic number
-      bool hasMagicNumber(const std::string& magic) const;
+      bool hasMagicNumber(StringRef magic) const;
 
       /// This function retrieves the first \p len bytes of the file associated
       /// with \p this. These bytes are returned as the "magic number" in the
@@ -422,8 +423,8 @@ namespace sys {
       /// Path object takes on the path value of \p unverified_path
       /// @returns true if the path was set, false otherwise.
       /// @param unverified_path The path to be set in Path object.
-      /// @brief Set a full path from a std::string
-      bool set(const std::string& unverified_path);
+      /// @brief Set a full path from a StringRef
+      bool set(StringRef unverified_path);
 
       /// One path component is removed from the Path. If only one component is
       /// present in the path, the Path object becomes empty. If the Path object
@@ -437,7 +438,7 @@ namespace sys {
       /// needed.
       /// @returns false if the path component could not be added.
       /// @brief Appends one path component to the Path.
-      bool appendComponent( const std::string& component );
+      bool appendComponent(StringRef component);
 
       /// A period and the \p suffix are appended to the end of the pathname.
       /// The precondition for this function is that the Path reference a file
@@ -446,7 +447,7 @@ namespace sys {
       /// become invalid for the host operating system, false is returned.
       /// @returns false if the suffix could not be added, true if it was.
       /// @brief Adds a period and the \p suffix to the end of the pathname.
-      bool appendSuffix(const std::string& suffix);
+      bool appendSuffix(StringRef suffix);
 
       /// The suffix of the filename is erased. The suffix begins with and
       /// includes the last . character in the filename after the last directory
@@ -620,12 +621,12 @@ namespace sys {
       PathWithStatus(const Path &other)
         : Path(other), status(), fsIsValid(false) {}
 
-      /// This constructor will accept a std::string as a path. No checking is
-      /// done on this path to determine if it is valid. To determine validity
-      /// of the path, use the isValid method.
+      /// This constructor will accept a char* or std::string as a path. No
+      /// checking is done on this path to determine if it is valid. To
+      /// determine validity of the path, use the isValid method.
       /// @brief Construct a Path from a string.
       explicit PathWithStatus(
-        const std::string& p ///< The path to assign.
+        StringRef p ///< The path to assign.
       ) : Path(p), status(), fsIsValid(false) {}
 
       /// This constructor will accept a character range as a path.  No checking
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h b/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h
index b0ed0bf..9efb683 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h
@@ -25,9 +25,10 @@ class TargetRegisterInfo;
 //===----------------------------------------------------------------------===//
   
 namespace TOI {
-  // Operand constraints: only "tied_to" for now.
+  // Operand constraints
   enum OperandConstraint {
-    TIED_TO = 0  // Must be allocated the same register as.
+    TIED_TO = 0,    // Must be allocated the same register as.
+    EARLY_CLOBBER   // Operand is an early clobber register operand
   };
   
   /// OperandFlags - These are flags set on operands, but should be considered
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
index 9536e04..dd28a87 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
@@ -139,6 +139,12 @@ public:
   virtual
   MVT::SimpleValueType getSetCCResultType(EVT VT) const;
 
+  /// getCmpLibcallReturnType - Return the ValueType for comparison
+  /// libcalls. Comparison libcalls include floating point comparison calls,
+  /// and Ordered/Unordered check calls on floating point numbers.
+  virtual 
+  MVT::SimpleValueType getCmpLibcallReturnType() const;
+
   /// getBooleanContents - For targets without i1 registers, this gives the
   /// nature of the high-bits of boolean values held in types wider than i1.
   /// "Boolean values" are special true/false values produced by nodes like
@@ -1136,7 +1142,7 @@ public:
               bool isVarArg, bool isInreg, unsigned NumFixedArgs,
               CallingConv::ID CallConv, bool isTailCall,
               bool isReturnValueUsed, SDValue Callee, ArgListTy &Args,
-              SelectionDAG &DAG, DebugLoc dl);
+              SelectionDAG &DAG, DebugLoc dl, unsigned Order);
 
   /// LowerCall - This hook must be implemented to lower calls into the
   /// the specified DAG. The outgoing arguments to the call are described
@@ -1291,20 +1297,6 @@ public:
     return false;
   }
 
-  /// GetPossiblePreceedingTailCall - Get preceeding TailCallNodeOpCode node if
-  /// it exists. Skip a possible ISD::TokenFactor.
-  static SDValue GetPossiblePreceedingTailCall(SDValue Chain,
-                                                 unsigned TailCallNodeOpCode) {
-    if (Chain.getOpcode() == TailCallNodeOpCode) {
-      return Chain;
-    } else if (Chain.getOpcode() == ISD::TokenFactor) {
-      if (Chain.getNumOperands() &&
-          Chain.getOperand(0).getOpcode() == TailCallNodeOpCode)
-        return Chain.getOperand(0);
-    }
-    return Chain;
-  }
-
   /// getTargetNodeName() - This method returns the name of a target specific
   /// DAG node.
   virtual const char *getTargetNodeName(unsigned Opcode) const;
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h b/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
index 1104635..84cd5b4 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
@@ -292,6 +292,13 @@ protected: // Can only create subclasses.
   ///
   bool addCommonCodeGenPasses(PassManagerBase &, CodeGenOpt::Level);
 
+private:
+  // These routines are used by addPassesToEmitFileFinish and
+  // addPassesToEmitMachineCode to set the CodeModel if it's still marked
+  // as default.
+  virtual void setCodeModelForJIT();
+  virtual void setCodeModelForStatic();
+  
 public:
   
   /// addPassesToEmitFile - Add passes to the specified pass manager to get the
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h b/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h
index 8d52dad..b43450d 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h
@@ -141,6 +141,11 @@ namespace llvm {
   /// wth earlier copy coalescing.
   extern bool StrongPHIElim;
 
+  /// DisableScheduling - This flag disables instruction scheduling. In
+  /// particular, it assigns an ordering to the SDNodes, which the scheduler
+  /// uses instead of its normal heuristics to perform scheduling.
+  extern bool DisableScheduling;
+
 } // End llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
index e9099f8..7fbbef9 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
@@ -38,7 +38,6 @@ class CallGraph;
 class TargetData;
 class Loop;
 class LoopInfo;
-class LLVMContext;
 class AllocaInst;
 template <typename T> class SmallVectorImpl;
 
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
index e6687bb..2cdd31f 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
@@ -27,7 +27,6 @@ class PHINode;
 class AllocaInst;
 class ConstantExpr;
 class TargetData;
-class LLVMContext;
 struct DbgInfoIntrinsic;
 
 template<typename T> class SmallVectorImpl;
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h
index 2364330..927e156 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h
@@ -29,8 +29,8 @@ namespace llvm {
 class SSAUpdater {
   /// AvailableVals - This keeps track of which value to use on a per-block
   /// basis.  When we insert PHI nodes, we keep track of them here.  We use
-  /// WeakVH's for the value of the map because we RAUW PHI nodes when we
-  /// eliminate them, and want the WeakVH to track this.
+  /// TrackingVHs for the value of the map because we RAUW PHI nodes when we
+  /// eliminate them, and want the TrackingVHs to track this.
   //typedef DenseMap<BasicBlock*, TrackingVH<Value> > AvailableValsTy;
   void *AV;
 
diff --git a/libclamav/c++/llvm/include/llvm/Type.h b/libclamav/c++/llvm/include/llvm/Type.h
index 752635c..e516982 100644
--- a/libclamav/c++/llvm/include/llvm/Type.h
+++ b/libclamav/c++/llvm/include/llvm/Type.h
@@ -7,14 +7,12 @@
 //
 //===----------------------------------------------------------------------===//
 
-
 #ifndef LLVM_TYPE_H
 #define LLVM_TYPE_H
 
 #include "llvm/AbstractTypeUser.h"
 #include "llvm/Support/Casting.h"
 #include "llvm/System/DataTypes.h"
-#include "llvm/System/Atomic.h"
 #include "llvm/ADT/GraphTraits.h"
 #include <string>
 #include <vector>
@@ -104,7 +102,7 @@ private:
   /// has no AbstractTypeUsers, the type is deleted.  This is only sensical for
   /// derived types.
   ///
-  mutable sys::cas_flag RefCount;
+  mutable unsigned RefCount;
 
   /// Context - This refers to the LLVMContext in which this type was uniqued.
   LLVMContext &Context;
@@ -401,7 +399,7 @@ public:
 
   void addRef() const {
     assert(isAbstract() && "Cannot add a reference to a non-abstract type!");
-    sys::AtomicIncrement(&RefCount);
+    ++RefCount;
   }
 
   void dropRef() const {
@@ -410,8 +408,7 @@ public:
 
     // If this is the last PATypeHolder using this object, and there are no
     // PATypeHandles using it, the type is dead, delete it now.
-    sys::cas_flag OldCount = sys::AtomicDecrement(&RefCount);
-    if (OldCount == 0 && AbstractTypeUsers.empty())
+    if (--RefCount == 0 && AbstractTypeUsers.empty())
       this->destroy();
   }
   
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp
index 030bcd2..ae28b55 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp
@@ -17,6 +17,7 @@
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
index 6a2564c..6b0a956 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
@@ -26,6 +26,7 @@
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Target/TargetData.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/raw_ostream.h"
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp b/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp
index 6634600..02aff50 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp
@@ -19,6 +19,7 @@
 #include "llvm/Type.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Assembly/Writer.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/Support/Format.h"
@@ -549,8 +550,8 @@ void AliasSetTracker::print(raw_ostream &OS) const {
   OS << "\n";
 }
 
-void AliasSet::dump() const { print(errs()); }
-void AliasSetTracker::dump() const { print(errs()); }
+void AliasSet::dump() const { print(dbgs()); }
+void AliasSetTracker::dump() const { print(dbgs()); }
 
 //===----------------------------------------------------------------------===//
 //                     ASTCallbackVH Class Implementation
diff --git a/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp b/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp
index ab92e3f..b90a996 100644
--- a/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp
@@ -22,7 +22,6 @@
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Analysis/DebugInfo.h"
 #include "llvm/Analysis/Passes.h"
-#include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/raw_ostream.h"
@@ -75,18 +74,16 @@ void PrintDbgInfo::printVariableDeclaration(const Value *V) {
 }
 
 void PrintDbgInfo::printStopPoint(const DbgStopPointInst *DSI) {
-  if (PrintDirectory) {
-    std::string dir;
-    GetConstantStringInfo(DSI->getDirectory(), dir);
-    Out << dir << "/";
-  }
+  if (PrintDirectory)
+    if (MDString *Str = dyn_cast<MDString>(DSI->getDirectory()))
+      Out << Str->getString() << '/';
 
-  std::string file;
-  GetConstantStringInfo(DSI->getFileName(), file);
-  Out << file << ":" << DSI->getLine();
+  if (MDString *Str = dyn_cast<MDString>(DSI->getFileName()))
+    Out << Str->getString();
+  Out << ':' << DSI->getLine();
 
   if (unsigned Col = DSI->getColumn())
-    Out << ":" << Col;
+    Out << ':' << Col;
 }
 
 void PrintDbgInfo::printFuncStart(const DbgFuncStartInst *FS) {
diff --git a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
index 1c9f500..c8cb60f 100644
--- a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
@@ -22,6 +22,7 @@
 #include "llvm/Module.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/ADT/SmallPtrSet.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/Dwarf.h"
 #include "llvm/Support/DebugLoc.h"
 #include "llvm/Support/raw_ostream.h"
@@ -227,6 +228,7 @@ bool DIDescriptor::isScope() const {
     case dwarf::DW_TAG_compile_unit:
     case dwarf::DW_TAG_lexical_block:
     case dwarf::DW_TAG_subprogram:
+    case dwarf::DW_TAG_namespace:
       return true;
     default:
       break;
@@ -242,6 +244,14 @@ bool DIDescriptor::isCompileUnit() const {
   return Tag == dwarf::DW_TAG_compile_unit;
 }
 
+/// isNameSpace - Return true if the specified tag is DW_TAG_namespace.
+bool DIDescriptor::isNameSpace() const {
+  assert (!isNull() && "Invalid descriptor!");
+  unsigned Tag = getTag();
+
+  return Tag == dwarf::DW_TAG_namespace;
+}
+
 /// isLexicalBlock - Return true if the specified tag is DW_TAG_lexical_block.
 bool DIDescriptor::isLexicalBlock() const {
   assert (!isNull() && "Invalid descriptor!");
@@ -438,6 +448,8 @@ StringRef DIScope::getFilename() const {
     return DISubprogram(DbgNode).getFilename();
   else if (isCompileUnit())
     return DICompileUnit(DbgNode).getFilename();
+  else if (isNameSpace())
+    return DINameSpace(DbgNode).getFilename();
   else 
     assert (0 && "Invalid DIScope!");
   return StringRef();
@@ -450,6 +462,8 @@ StringRef DIScope::getDirectory() const {
     return DISubprogram(DbgNode).getDirectory();
   else if (isCompileUnit())
     return DICompileUnit(DbgNode).getDirectory();
+  else if (isNameSpace())
+    return DINameSpace(DbgNode).getDirectory();
   else 
     assert (0 && "Invalid DIScope!");
   return StringRef();
@@ -462,16 +476,16 @@ StringRef DIScope::getDirectory() const {
 
 /// dump - Print descriptor.
 void DIDescriptor::dump() const {
-  errs() << "[" << dwarf::TagString(getTag()) << "] ";
-  errs().write_hex((intptr_t) &*DbgNode) << ']';
+  dbgs() << "[" << dwarf::TagString(getTag()) << "] ";
+  dbgs().write_hex((intptr_t) &*DbgNode) << ']';
 }
 
 /// dump - Print compile unit.
 void DICompileUnit::dump() const {
   if (getLanguage())
-    errs() << " [" << dwarf::LanguageString(getLanguage()) << "] ";
+    dbgs() << " [" << dwarf::LanguageString(getLanguage()) << "] ";
 
-  errs() << " [" << getDirectory() << "/" << getFilename() << " ]";
+  dbgs() << " [" << getDirectory() << "/" << getFilename() << " ]";
 }
 
 /// dump - Print type.
@@ -480,14 +494,14 @@ void DIType::dump() const {
 
   StringRef Res = getName();
   if (!Res.empty())
-    errs() << " [" << Res << "] ";
+    dbgs() << " [" << Res << "] ";
 
   unsigned Tag = getTag();
-  errs() << " [" << dwarf::TagString(Tag) << "] ";
+  dbgs() << " [" << dwarf::TagString(Tag) << "] ";
 
   // TODO : Print context
   getCompileUnit().dump();
-  errs() << " ["
+  dbgs() << " ["
          << getLineNumber() << ", "
          << getSizeInBits() << ", "
          << getAlignInBits() << ", "
@@ -495,12 +509,12 @@ void DIType::dump() const {
          << "] ";
 
   if (isPrivate())
-    errs() << " [private] ";
+    dbgs() << " [private] ";
   else if (isProtected())
-    errs() << " [protected] ";
+    dbgs() << " [protected] ";
 
   if (isForwardDecl())
-    errs() << " [fwd] ";
+    dbgs() << " [fwd] ";
 
   if (isBasicType())
     DIBasicType(DbgNode).dump();
@@ -509,21 +523,21 @@ void DIType::dump() const {
   else if (isCompositeType())
     DICompositeType(DbgNode).dump();
   else {
-    errs() << "Invalid DIType\n";
+    dbgs() << "Invalid DIType\n";
     return;
   }
 
-  errs() << "\n";
+  dbgs() << "\n";
 }
 
 /// dump - Print basic type.
 void DIBasicType::dump() const {
-  errs() << " [" << dwarf::AttributeEncodingString(getEncoding()) << "] ";
+  dbgs() << " [" << dwarf::AttributeEncodingString(getEncoding()) << "] ";
 }
 
 /// dump - Print derived type.
 void DIDerivedType::dump() const {
-  errs() << "\n\t Derived From: "; getTypeDerivedFrom().dump();
+  dbgs() << "\n\t Derived From: "; getTypeDerivedFrom().dump();
 }
 
 /// dump - Print composite type.
@@ -531,73 +545,73 @@ void DICompositeType::dump() const {
   DIArray A = getTypeArray();
   if (A.isNull())
     return;
-  errs() << " [" << A.getNumElements() << " elements]";
+  dbgs() << " [" << A.getNumElements() << " elements]";
 }
 
 /// dump - Print global.
 void DIGlobal::dump() const {
   StringRef Res = getName();
   if (!Res.empty())
-    errs() << " [" << Res << "] ";
+    dbgs() << " [" << Res << "] ";
 
   unsigned Tag = getTag();
-  errs() << " [" << dwarf::TagString(Tag) << "] ";
+  dbgs() << " [" << dwarf::TagString(Tag) << "] ";
 
   // TODO : Print context
   getCompileUnit().dump();
-  errs() << " [" << getLineNumber() << "] ";
+  dbgs() << " [" << getLineNumber() << "] ";
 
   if (isLocalToUnit())
-    errs() << " [local] ";
+    dbgs() << " [local] ";
 
   if (isDefinition())
-    errs() << " [def] ";
+    dbgs() << " [def] ";
 
   if (isGlobalVariable())
     DIGlobalVariable(DbgNode).dump();
 
-  errs() << "\n";
+  dbgs() << "\n";
 }
 
 /// dump - Print subprogram.
 void DISubprogram::dump() const {
   StringRef Res = getName();
   if (!Res.empty())
-    errs() << " [" << Res << "] ";
+    dbgs() << " [" << Res << "] ";
 
   unsigned Tag = getTag();
-  errs() << " [" << dwarf::TagString(Tag) << "] ";
+  dbgs() << " [" << dwarf::TagString(Tag) << "] ";
 
   // TODO : Print context
   getCompileUnit().dump();
-  errs() << " [" << getLineNumber() << "] ";
+  dbgs() << " [" << getLineNumber() << "] ";
 
   if (isLocalToUnit())
-    errs() << " [local] ";
+    dbgs() << " [local] ";
 
   if (isDefinition())
-    errs() << " [def] ";
+    dbgs() << " [def] ";
 
-  errs() << "\n";
+  dbgs() << "\n";
 }
 
 /// dump - Print global variable.
 void DIGlobalVariable::dump() const {
-  errs() << " [";
+  dbgs() << " [";
   getGlobal()->dump();
-  errs() << "] ";
+  dbgs() << "] ";
 }
 
 /// dump - Print variable.
 void DIVariable::dump() const {
   StringRef Res = getName();
   if (!Res.empty())
-    errs() << " [" << Res << "] ";
+    dbgs() << " [" << Res << "] ";
 
   getCompileUnit().dump();
-  errs() << " [" << getLineNumber() << "] ";
+  dbgs() << " [" << getLineNumber() << "] ";
   getType().dump();
-  errs() << "\n";
+  dbgs() << "\n";
 
   // FIXME: Dump complex addresses
 }
@@ -996,6 +1010,21 @@ DILexicalBlock DIFactory::CreateLexicalBlock(DIDescriptor Context) {
   return DILexicalBlock(MDNode::get(VMContext, &Elts[0], 2));
 }
 
+/// CreateNameSpace - This creates a new descriptor for a namespace
+/// with the specified parent context.
+DINameSpace DIFactory::CreateNameSpace(DIDescriptor Context, StringRef Name,
+                                       DICompileUnit CompileUnit, 
+                                       unsigned LineNo) {
+  Value *Elts[] = {
+    GetTagConstant(dwarf::DW_TAG_namespace),
+    Context.getNode(),
+    MDString::get(VMContext, Name),
+    CompileUnit.getNode(),
+    ConstantInt::get(Type::getInt32Ty(VMContext), LineNo)
+  };
+  return DINameSpace(MDNode::get(VMContext, &Elts[0], 5));
+}
+
 /// CreateLocation - Creates a debug info location.
 DILocation DIFactory::CreateLocation(unsigned LineNo, unsigned ColumnNo,
                                      DIScope S, DILocation OrigLoc) {
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
index 4d5b312..28c66af 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
@@ -1402,7 +1402,7 @@ void Andersens::ClumpAddressTaken() {
     unsigned Pos = NewPos++;
     Translate[i] = Pos;
     NewGraphNodes.push_back(GraphNodes[i]);
-    DEBUG(errs() << "Renumbering node " << i << " to node " << Pos << "\n");
+    DEBUG(dbgs() << "Renumbering node " << i << " to node " << Pos << "\n");
   }
 
   // I believe this ends up being faster than making two vectors and splicing
@@ -1412,7 +1412,7 @@ void Andersens::ClumpAddressTaken() {
       unsigned Pos = NewPos++;
       Translate[i] = Pos;
       NewGraphNodes.push_back(GraphNodes[i]);
-      DEBUG(errs() << "Renumbering node " << i << " to node " << Pos << "\n");
+      DEBUG(dbgs() << "Renumbering node " << i << " to node " << Pos << "\n");
     }
   }
 
@@ -1421,7 +1421,7 @@ void Andersens::ClumpAddressTaken() {
       unsigned Pos = NewPos++;
       Translate[i] = Pos;
       NewGraphNodes.push_back(GraphNodes[i]);
-      DEBUG(errs() << "Renumbering node " << i << " to node " << Pos << "\n");
+      DEBUG(dbgs() << "Renumbering node " << i << " to node " << Pos << "\n");
     }
   }
 
@@ -1493,7 +1493,7 @@ void Andersens::ClumpAddressTaken() {
 /// receive &D from E anyway.
 
 void Andersens::HVN() {
-  DEBUG(errs() << "Beginning HVN\n");
+  DEBUG(dbgs() << "Beginning HVN\n");
   // Build a predecessor graph.  This is like our constraint graph with the
   // edges going in the opposite direction, and there are edges for all the
   // constraints, instead of just copy constraints.  We also build implicit
@@ -1564,7 +1564,7 @@ void Andersens::HVN() {
   Node2DFS.clear();
   Node2Deleted.clear();
   Node2Visited.clear();
-  DEBUG(errs() << "Finished HVN\n");
+  DEBUG(dbgs() << "Finished HVN\n");
 
 }
 
@@ -1688,7 +1688,7 @@ void Andersens::HVNValNum(unsigned NodeIndex) {
 /// and is equivalent to value numbering the collapsed constraint graph
 /// including evaluating unions.
 void Andersens::HU() {
-  DEBUG(errs() << "Beginning HU\n");
+  DEBUG(dbgs() << "Beginning HU\n");
   // Build a predecessor graph.  This is like our constraint graph with the
   // edges going in the opposite direction, and there are edges for all the
   // constraints, instead of just copy constraints.  We also build implicit
@@ -1768,7 +1768,7 @@ void Andersens::HU() {
   }
   // PEClass nodes will be deleted by the deleting of N->PointsTo in our caller.
   Set2PEClass.clear();
-  DEBUG(errs() << "Finished HU\n");
+  DEBUG(dbgs() << "Finished HU\n");
 }
 
 
@@ -1946,12 +1946,12 @@ void Andersens::RewriteConstraints() {
     // to anything.
     if (LHSLabel == 0) {
       DEBUG(PrintNode(&GraphNodes[LHSNode]));
-      DEBUG(errs() << " is a non-pointer, ignoring constraint.\n");
+      DEBUG(dbgs() << " is a non-pointer, ignoring constraint.\n");
       continue;
     }
     if (RHSLabel == 0) {
       DEBUG(PrintNode(&GraphNodes[RHSNode]));
-      DEBUG(errs() << " is a non-pointer, ignoring constraint.\n");
+      DEBUG(dbgs() << " is a non-pointer, ignoring constraint.\n");
       continue;
     }
     // This constraint may be useless, and it may become useless as we translate
@@ -1999,16 +1999,16 @@ void Andersens::PrintLabels() const {
     if (i < FirstRefNode) {
       PrintNode(&GraphNodes[i]);
     } else if (i < FirstAdrNode) {
-      DEBUG(errs() << "REF(");
+      DEBUG(dbgs() << "REF(");
       PrintNode(&GraphNodes[i-FirstRefNode]);
-      DEBUG(errs() <<")");
+      DEBUG(dbgs() <<")");
     } else {
-      DEBUG(errs() << "ADR(");
+      DEBUG(dbgs() << "ADR(");
       PrintNode(&GraphNodes[i-FirstAdrNode]);
-      DEBUG(errs() <<")");
+      DEBUG(dbgs() <<")");
     }
 
-    DEBUG(errs() << " has pointer label " << GraphNodes[i].PointerEquivLabel
+    DEBUG(dbgs() << " has pointer label " << GraphNodes[i].PointerEquivLabel
          << " and SCC rep " << VSSCCRep[i]
          << " and is " << (GraphNodes[i].Direct ? "Direct" : "Not direct")
          << "\n");
@@ -2025,7 +2025,7 @@ void Andersens::PrintLabels() const {
 /// operation are stored in SDT and are later used in SolveContraints()
 /// and UniteNodes().
 void Andersens::HCD() {
-  DEBUG(errs() << "Starting HCD.\n");
+  DEBUG(dbgs() << "Starting HCD.\n");
   HCDSCCRep.resize(GraphNodes.size());
 
   for (unsigned i = 0; i < GraphNodes.size(); ++i) {
@@ -2074,7 +2074,7 @@ void Andersens::HCD() {
   Node2Visited.clear();
   Node2Deleted.clear();
   HCDSCCRep.clear();
-  DEBUG(errs() << "HCD complete.\n");
+  DEBUG(dbgs() << "HCD complete.\n");
 }
 
 // Component of HCD: 
@@ -2146,7 +2146,7 @@ void Andersens::Search(unsigned Node) {
 /// Optimize the constraints by performing offline variable substitution and
 /// other optimizations.
 void Andersens::OptimizeConstraints() {
-  DEBUG(errs() << "Beginning constraint optimization\n");
+  DEBUG(dbgs() << "Beginning constraint optimization\n");
 
   SDTActive = false;
 
@@ -2230,7 +2230,7 @@ void Andersens::OptimizeConstraints() {
 
   // HCD complete.
 
-  DEBUG(errs() << "Finished constraint optimization\n");
+  DEBUG(dbgs() << "Finished constraint optimization\n");
   FirstRefNode = 0;
   FirstAdrNode = 0;
 }
@@ -2238,7 +2238,7 @@ void Andersens::OptimizeConstraints() {
 /// Unite pointer but not location equivalent variables, now that the constraint
 /// graph is built.
 void Andersens::UnitePointerEquivalences() {
-  DEBUG(errs() << "Uniting remaining pointer equivalences\n");
+  DEBUG(dbgs() << "Uniting remaining pointer equivalences\n");
   for (unsigned i = 0; i < GraphNodes.size(); ++i) {
     if (GraphNodes[i].AddressTaken && GraphNodes[i].isRep()) {
       unsigned Label = GraphNodes[i].PointerEquivLabel;
@@ -2247,7 +2247,7 @@ void Andersens::UnitePointerEquivalences() {
         UniteNodes(i, PENLEClass2Node[Label]);
     }
   }
-  DEBUG(errs() << "Finished remaining pointer equivalences\n");
+  DEBUG(dbgs() << "Finished remaining pointer equivalences\n");
   PENLEClass2Node.clear();
 }
 
@@ -2403,7 +2403,7 @@ void Andersens::SolveConstraints() {
   std::vector<unsigned int> RSV;
 #endif
   while( !CurrWL->empty() ) {
-    DEBUG(errs() << "Starting iteration #" << ++NumIters << "\n");
+    DEBUG(dbgs() << "Starting iteration #" << ++NumIters << "\n");
 
     Node* CurrNode;
     unsigned CurrNodeIndex;
@@ -2706,11 +2706,11 @@ unsigned Andersens::UniteNodes(unsigned First, unsigned Second,
   SecondNode->OldPointsTo = NULL;
 
   NumUnified++;
-  DEBUG(errs() << "Unified Node ");
+  DEBUG(dbgs() << "Unified Node ");
   DEBUG(PrintNode(FirstNode));
-  DEBUG(errs() << " and Node ");
+  DEBUG(dbgs() << " and Node ");
   DEBUG(PrintNode(SecondNode));
-  DEBUG(errs() << "\n");
+  DEBUG(dbgs() << "\n");
 
   if (SDTActive)
     if (SDT[Second] >= 0) {
@@ -2755,17 +2755,17 @@ unsigned Andersens::FindNode(unsigned NodeIndex) const {
 
 void Andersens::PrintNode(const Node *N) const {
   if (N == &GraphNodes[UniversalSet]) {
-    errs() << "<universal>";
+    dbgs() << "<universal>";
     return;
   } else if (N == &GraphNodes[NullPtr]) {
-    errs() << "<nullptr>";
+    dbgs() << "<nullptr>";
     return;
   } else if (N == &GraphNodes[NullObject]) {
-    errs() << "<null>";
+    dbgs() << "<null>";
     return;
   }
   if (!N->getValue()) {
-    errs() << "artificial" << (intptr_t) N;
+    dbgs() << "artificial" << (intptr_t) N;
     return;
   }
 
@@ -2774,85 +2774,85 @@ void Andersens::PrintNode(const Node *N) const {
   if (Function *F = dyn_cast<Function>(V)) {
     if (isa<PointerType>(F->getFunctionType()->getReturnType()) &&
         N == &GraphNodes[getReturnNode(F)]) {
-      errs() << F->getName() << ":retval";
+      dbgs() << F->getName() << ":retval";
       return;
     } else if (F->getFunctionType()->isVarArg() &&
                N == &GraphNodes[getVarargNode(F)]) {
-      errs() << F->getName() << ":vararg";
+      dbgs() << F->getName() << ":vararg";
       return;
     }
   }
 
   if (Instruction *I = dyn_cast<Instruction>(V))
-    errs() << I->getParent()->getParent()->getName() << ":";
+    dbgs() << I->getParent()->getParent()->getName() << ":";
   else if (Argument *Arg = dyn_cast<Argument>(V))
-    errs() << Arg->getParent()->getName() << ":";
+    dbgs() << Arg->getParent()->getName() << ":";
 
   if (V->hasName())
-    errs() << V->getName();
+    dbgs() << V->getName();
   else
-    errs() << "(unnamed)";
+    dbgs() << "(unnamed)";
 
   if (isa<GlobalValue>(V) || isa<AllocaInst>(V) || isMalloc(V))
     if (N == &GraphNodes[getObject(V)])
-      errs() << "<mem>";
+      dbgs() << "<mem>";
 }
 void Andersens::PrintConstraint(const Constraint &C) const {
   if (C.Type == Constraint::Store) {
-    errs() << "*";
+    dbgs() << "*";
     if (C.Offset != 0)
-      errs() << "(";
+      dbgs() << "(";
   }
   PrintNode(&GraphNodes[C.Dest]);
   if (C.Type == Constraint::Store && C.Offset != 0)
-    errs() << " + " << C.Offset << ")";
-  errs() << " = ";
+    dbgs() << " + " << C.Offset << ")";
+  dbgs() << " = ";
   if (C.Type == Constraint::Load) {
-    errs() << "*";
+    dbgs() << "*";
     if (C.Offset != 0)
-      errs() << "(";
+      dbgs() << "(";
   }
   else if (C.Type == Constraint::AddressOf)
-    errs() << "&";
+    dbgs() << "&";
   PrintNode(&GraphNodes[C.Src]);
   if (C.Offset != 0 && C.Type != Constraint::Store)
-    errs() << " + " << C.Offset;
+    dbgs() << " + " << C.Offset;
   if (C.Type == Constraint::Load && C.Offset != 0)
-    errs() << ")";
-  errs() << "\n";
+    dbgs() << ")";
+  dbgs() << "\n";
 }
 
 void Andersens::PrintConstraints() const {
-  errs() << "Constraints:\n";
+  dbgs() << "Constraints:\n";
 
   for (unsigned i = 0, e = Constraints.size(); i != e; ++i)
     PrintConstraint(Constraints[i]);
 }
 
 void Andersens::PrintPointsToGraph() const {
-  errs() << "Points-to graph:\n";
+  dbgs() << "Points-to graph:\n";
   for (unsigned i = 0, e = GraphNodes.size(); i != e; ++i) {
     const Node *N = &GraphNodes[i];
     if (FindNode(i) != i) {
       PrintNode(N);
-      errs() << "\t--> same as ";
+      dbgs() << "\t--> same as ";
       PrintNode(&GraphNodes[FindNode(i)]);
-      errs() << "\n";
+      dbgs() << "\n";
     } else {
-      errs() << "[" << (N->PointsTo->count()) << "] ";
+      dbgs() << "[" << (N->PointsTo->count()) << "] ";
       PrintNode(N);
-      errs() << "\t--> ";
+      dbgs() << "\t--> ";
 
       bool first = true;
       for (SparseBitVector<>::iterator bi = N->PointsTo->begin();
            bi != N->PointsTo->end();
            ++bi) {
         if (!first)
-          errs() << ", ";
+          dbgs() << ", ";
         PrintNode(&GraphNodes[*bi]);
         first = false;
       }
-      errs() << "\n";
+      dbgs() << "\n";
     }
   }
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp
index 9cd8bb8..a826177 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp
@@ -17,6 +17,7 @@
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Support/CallSite.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
@@ -181,7 +182,7 @@ void CallGraph::print(raw_ostream &OS, Module*) const {
     I->second->print(OS);
 }
 void CallGraph::dump() const {
-  print(errs(), 0);
+  print(dbgs(), 0);
 }
 
 //===----------------------------------------------------------------------===//
@@ -232,7 +233,7 @@ void CallGraphNode::print(raw_ostream &OS) const {
   OS << "\n";
 }
 
-void CallGraphNode::dump() const { print(errs()); }
+void CallGraphNode::dump() const { print(dbgs()); }
 
 /// removeCallEdgeFor - This method removes the edge in the node for the
 /// specified call site.  Note that this method takes linear time, so it
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/CallGraphSCCPass.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/CallGraphSCCPass.cpp
index a96a5c5..5504b9b 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/CallGraphSCCPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/CallGraphSCCPass.cpp
@@ -126,7 +126,7 @@ bool CGPassManager::RunPassOnSCC(Pass *P, std::vector<CallGraphNode*> &CurSCC,
   // The function pass(es) modified the IR, they may have clobbered the
   // callgraph.
   if (Changed && CallGraphUpToDate) {
-    DEBUG(errs() << "CGSCCPASSMGR: Pass Dirtied SCC: "
+    DEBUG(dbgs() << "CGSCCPASSMGR: Pass Dirtied SCC: "
                  << P->getPassName() << '\n');
     CallGraphUpToDate = false;
   }
@@ -143,7 +143,7 @@ void CGPassManager::RefreshCallGraph(std::vector<CallGraphNode*> &CurSCC,
                                      CallGraph &CG, bool CheckingMode) {
   DenseMap<Value*, CallGraphNode*> CallSites;
   
-  DEBUG(errs() << "CGSCCPASSMGR: Refreshing SCC with " << CurSCC.size()
+  DEBUG(dbgs() << "CGSCCPASSMGR: Refreshing SCC with " << CurSCC.size()
                << " nodes:\n";
         for (unsigned i = 0, e = CurSCC.size(); i != e; ++i)
           CurSCC[i]->dump();
@@ -277,11 +277,11 @@ void CGPassManager::RefreshCallGraph(std::vector<CallGraphNode*> &CurSCC,
   }
 
   DEBUG(if (MadeChange) {
-          errs() << "CGSCCPASSMGR: Refreshed SCC is now:\n";
+          dbgs() << "CGSCCPASSMGR: Refreshed SCC is now:\n";
           for (unsigned i = 0, e = CurSCC.size(); i != e; ++i)
             CurSCC[i]->dump();
          } else {
-           errs() << "CGSCCPASSMGR: SCC Refresh didn't change call graph.\n";
+           dbgs() << "CGSCCPASSMGR: SCC Refresh didn't change call graph.\n";
          }
         );
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
index 627dbbb..df9e31c 100644
--- a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
@@ -53,7 +53,7 @@ static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
       if (newLoop == L)
         return false;
       // if newLoop is an outer loop of L, this is OK.
-      if (newLoop->contains(L->getHeader()))
+      if (newLoop->contains(L))
         return false;
     }
     return true;
@@ -128,7 +128,7 @@ static bool getSCEVStartAndStride(const SCEV *&SH, Loop *L, Loop *UseLoop,
     if (!AddRecStride->properlyDominates(Header, DT))
       return false;
 
-    DEBUG(errs() << "[" << L->getHeader()->getName()
+    DEBUG(dbgs() << "[" << L->getHeader()->getName()
                  << "] Variable stride: " << *AddRec << "\n");
   }
 
@@ -148,7 +148,7 @@ static bool IVUseShouldUsePostIncValue(Instruction *User, Instruction *IV,
                                        Loop *L, LoopInfo *LI, DominatorTree *DT,
                                        Pass *P) {
   // If the user is in the loop, use the preinc value.
-  if (L->contains(User->getParent())) return false;
+  if (L->contains(User)) return false;
 
   BasicBlock *LatchBlock = L->getLoopLatch();
   if (!LatchBlock)
@@ -209,7 +209,7 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
     return false;  // Non-reducible symbolic expression, bail out.
 
   // Keep things simple. Don't touch loop-variant strides.
-  if (!Stride->isLoopInvariant(L) && L->contains(I->getParent()))
+  if (!Stride->isLoopInvariant(L) && L->contains(I))
     return false;
 
   SmallPtrSet<Instruction *, 4> UniqueUsers;
@@ -233,13 +233,13 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
     if (LI->getLoopFor(User->getParent()) != L) {
       if (isa<PHINode>(User) || Processed.count(User) ||
           !AddUsersIfInteresting(User)) {
-        DEBUG(errs() << "FOUND USER in other loop: " << *User << '\n'
+        DEBUG(dbgs() << "FOUND USER in other loop: " << *User << '\n'
                      << "   OF SCEV: " << *ISE << '\n');
         AddUserToIVUsers = true;
       }
     } else if (Processed.count(User) ||
                !AddUsersIfInteresting(User)) {
-      DEBUG(errs() << "FOUND USER: " << *User << '\n'
+      DEBUG(dbgs() << "FOUND USER: " << *User << '\n'
                    << "   OF SCEV: " << *ISE << '\n');
       AddUserToIVUsers = true;
     }
@@ -262,7 +262,7 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
         const SCEV *NewStart = SE->getMinusSCEV(Start, Stride);
         StrideUses->addUser(NewStart, User, I);
         StrideUses->Users.back().setIsUseOfPostIncrementedValue(true);
-        DEBUG(errs() << "   USING POSTINC SCEV, START=" << *NewStart<< "\n");
+        DEBUG(dbgs() << "   USING POSTINC SCEV, START=" << *NewStart<< "\n");
       } else {
         StrideUses->addUser(Start, User, I);
       }
@@ -307,7 +307,6 @@ bool IVUsers::runOnLoop(Loop *l, LPPassManager &LPM) {
   for (BasicBlock::iterator I = L->getHeader()->begin(); isa<PHINode>(I); ++I)
     AddUsersIfInteresting(I);
 
-  Processed.clear();
   return false;
 }
 
@@ -325,7 +324,7 @@ const SCEV *IVUsers::getReplacementExpr(const IVStrideUse &U) const {
   if (U.isUseOfPostIncrementedValue())
     RetVal = SE->getAddExpr(RetVal, U.getParent()->Stride);
   // Evaluate the expression out of the loop, if possible.
-  if (!L->contains(U.getUser()->getParent())) {
+  if (!L->contains(U.getUser())) {
     const SCEV *ExitVal = SE->getSCEVAtScope(RetVal, L->getParentLoop());
     if (ExitVal->isLoopInvariant(L))
       RetVal = ExitVal;
@@ -364,12 +363,13 @@ void IVUsers::print(raw_ostream &OS, const Module *M) const {
 }
 
 void IVUsers::dump() const {
-  print(errs());
+  print(dbgs());
 }
 
 void IVUsers::releaseMemory() {
   IVUsesByStride.clear();
   StrideOrder.clear();
+  Processed.clear();
   IVUses.clear();
 }
 
diff --git a/libclamav/c++/llvm/lib/Analysis/InstCount.cpp b/libclamav/c++/llvm/lib/Analysis/InstCount.cpp
index a4b041f..bb2cf53 100644
--- a/libclamav/c++/llvm/lib/Analysis/InstCount.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/InstCount.cpp
@@ -15,6 +15,7 @@
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Pass.h"
 #include "llvm/Function.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/InstVisitor.h"
 #include "llvm/Support/raw_ostream.h"
diff --git a/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp b/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp
index 5796c6f..ff9026b 100644
--- a/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp
@@ -342,7 +342,7 @@ LVILatticeVal LVIQuery::getBlockValue(BasicBlock *BB) {
   
   // If we've already computed this block's value, return it.
   if (!BBLV.isUndefined()) {
-    DEBUG(errs() << "  reuse BB '" << BB->getName() << "' val=" << BBLV <<'\n');
+    DEBUG(dbgs() << "  reuse BB '" << BB->getName() << "' val=" << BBLV <<'\n');
     return BBLV;
   }
 
@@ -365,7 +365,7 @@ LVILatticeVal LVIQuery::getBlockValue(BasicBlock *BB) {
       // If we hit overdefined, exit early.  The BlockVals entry is already set
       // to overdefined.
       if (Result.isOverdefined()) {
-        DEBUG(errs() << " compute BB '" << BB->getName()
+        DEBUG(dbgs() << " compute BB '" << BB->getName()
                      << "' - overdefined because of pred.\n");
         return Result;
       }
@@ -394,7 +394,7 @@ LVILatticeVal LVIQuery::getBlockValue(BasicBlock *BB) {
     
   }
   
-  DEBUG(errs() << " compute BB '" << BB->getName()
+  DEBUG(dbgs() << " compute BB '" << BB->getName()
                << "' - overdefined because inst def found.\n");
 
   LVILatticeVal Result;
@@ -471,12 +471,12 @@ LVILatticeVal LazyValueInfoCache::getValueInBlock(Value *V, BasicBlock *BB) {
   if (Constant *VC = dyn_cast<Constant>(V))
     return LVILatticeVal::get(VC);
   
-  DEBUG(errs() << "LVI Getting block end value " << *V << " at '"
+  DEBUG(dbgs() << "LVI Getting block end value " << *V << " at '"
         << BB->getName() << "'\n");
   
   LVILatticeVal Result = LVIQuery(V, ValueCache[V]).getBlockValue(BB);
   
-  DEBUG(errs() << "  Result = " << Result << "\n");
+  DEBUG(dbgs() << "  Result = " << Result << "\n");
   return Result;
 }
 
@@ -486,12 +486,12 @@ getValueOnEdge(Value *V, BasicBlock *FromBB, BasicBlock *ToBB) {
   if (Constant *VC = dyn_cast<Constant>(V))
     return LVILatticeVal::get(VC);
   
-  DEBUG(errs() << "LVI Getting edge value " << *V << " from '"
+  DEBUG(dbgs() << "LVI Getting edge value " << *V << " from '"
         << FromBB->getName() << "' to '" << ToBB->getName() << "'\n");
   LVILatticeVal Result =
     LVIQuery(V, ValueCache[V]).getEdgeValue(FromBB, ToBB);
   
-  DEBUG(errs() << "  Result = " << Result << "\n");
+  DEBUG(dbgs() << "  Result = " << Result << "\n");
   
   return Result;
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/LoopDependenceAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/LoopDependenceAnalysis.cpp
index 32d2266..bb4f46d 100644
--- a/libclamav/c++/llvm/lib/Analysis/LoopDependenceAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LoopDependenceAnalysis.cpp
@@ -181,15 +181,15 @@ LoopDependenceAnalysis::DependenceResult
 LoopDependenceAnalysis::analyseSubscript(const SCEV *A,
                                          const SCEV *B,
                                          Subscript *S) const {
-  DEBUG(errs() << "  Testing subscript: " << *A << ", " << *B << "\n");
+  DEBUG(dbgs() << "  Testing subscript: " << *A << ", " << *B << "\n");
 
   if (A == B) {
-    DEBUG(errs() << "  -> [D] same SCEV\n");
+    DEBUG(dbgs() << "  -> [D] same SCEV\n");
     return Dependent;
   }
 
   if (!isAffine(A) || !isAffine(B)) {
-    DEBUG(errs() << "  -> [?] not affine\n");
+    DEBUG(dbgs() << "  -> [?] not affine\n");
     return Unknown;
   }
 
@@ -204,12 +204,12 @@ LoopDependenceAnalysis::analyseSubscript(const SCEV *A,
 
 LoopDependenceAnalysis::DependenceResult
 LoopDependenceAnalysis::analysePair(DependencePair *P) const {
-  DEBUG(errs() << "Analysing:\n" << *P->A << "\n" << *P->B << "\n");
+  DEBUG(dbgs() << "Analysing:\n" << *P->A << "\n" << *P->B << "\n");
 
   // We only analyse loads and stores but no possible memory accesses by e.g.
   // free, call, or invoke instructions.
   if (!IsLoadOrStoreInst(P->A) || !IsLoadOrStoreInst(P->B)) {
-    DEBUG(errs() << "--> [?] no load/store\n");
+    DEBUG(dbgs() << "--> [?] no load/store\n");
     return Unknown;
   }
 
@@ -219,12 +219,12 @@ LoopDependenceAnalysis::analysePair(DependencePair *P) const {
   switch (UnderlyingObjectsAlias(AA, aPtr, bPtr)) {
   case AliasAnalysis::MayAlias:
     // We can not analyse objects if we do not know about their aliasing.
-    DEBUG(errs() << "---> [?] may alias\n");
+    DEBUG(dbgs() << "---> [?] may alias\n");
     return Unknown;
 
   case AliasAnalysis::NoAlias:
     // If the objects noalias, they are distinct, accesses are independent.
-    DEBUG(errs() << "---> [I] no alias\n");
+    DEBUG(dbgs() << "---> [I] no alias\n");
     return Independent;
 
   case AliasAnalysis::MustAlias:
diff --git a/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp b/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
index 34089ee..5d31c11 100644
--- a/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
@@ -56,7 +56,7 @@ bool Loop::isLoopInvariant(Value *V) const {
 /// loop-invariant.
 ///
 bool Loop::isLoopInvariant(Instruction *I) const {
-  return !contains(I->getParent());
+  return !contains(I);
 }
 
 /// makeLoopInvariant - If the given value is an instruciton inside of the
diff --git a/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
index a0c7706..2d74709 100644
--- a/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
@@ -275,7 +275,8 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
     // a subsequent bitcast of the malloc call result.  There can be stores to
     // the malloced memory between the malloc call and its bitcast uses, and we
     // need to continue scanning until the malloc call.
-    if (isa<AllocaInst>(Inst) || extractMallocCall(Inst)) {
+    if (isa<AllocaInst>(Inst) ||
+        (isa<CallInst>(Inst) && extractMallocCall(Inst))) {
       Value *AccessPtr = MemPtr->getUnderlyingObject();
       
       if (AccessPtr == Inst ||
@@ -546,9 +547,9 @@ MemoryDependenceAnalysis::getNonLocalCallDependency(CallSite QueryCS) {
     // If we had a dirty entry for the block, update it.  Otherwise, just add
     // a new entry.
     if (ExistingResult)
-      ExistingResult->setResult(Dep, 0);
+      ExistingResult->setResult(Dep);
     else
-      Cache.push_back(NonLocalDepEntry(DirtyBB, Dep, 0));
+      Cache.push_back(NonLocalDepEntry(DirtyBB, Dep));
     
     // If the block has a dependency (i.e. it isn't completely transparent to
     // the value), remember the association!
@@ -578,7 +579,7 @@ MemoryDependenceAnalysis::getNonLocalCallDependency(CallSite QueryCS) {
 ///
 void MemoryDependenceAnalysis::
 getNonLocalPointerDependency(Value *Pointer, bool isLoad, BasicBlock *FromBB,
-                             SmallVectorImpl<NonLocalDepEntry> &Result) {
+                             SmallVectorImpl<NonLocalDepResult> &Result) {
   assert(isa<PointerType>(Pointer->getType()) &&
          "Can't get pointer deps of a non-pointer!");
   Result.clear();
@@ -599,9 +600,9 @@ getNonLocalPointerDependency(Value *Pointer, bool isLoad, BasicBlock *FromBB,
                                    Result, Visited, true))
     return;
   Result.clear();
-  Result.push_back(NonLocalDepEntry(FromBB,
-                                    MemDepResult::getClobber(FromBB->begin()),
-                                    Pointer));
+  Result.push_back(NonLocalDepResult(FromBB,
+                                     MemDepResult::getClobber(FromBB->begin()),
+                                     Pointer));
 }
 
 /// GetNonLocalInfoForBlock - Compute the memdep value for BB with
@@ -656,9 +657,9 @@ GetNonLocalInfoForBlock(Value *Pointer, uint64_t PointeeSize,
   // If we had a dirty entry for the block, update it.  Otherwise, just add
   // a new entry.
   if (ExistingResult)
-    ExistingResult->setResult(Dep, Pointer);
+    ExistingResult->setResult(Dep);
   else
-    Cache->push_back(NonLocalDepEntry(BB, Dep, Pointer));
+    Cache->push_back(NonLocalDepEntry(BB, Dep));
   
   // If the block has a dependency (i.e. it isn't completely transparent to
   // the value), remember the reverse association because we just added it
@@ -726,7 +727,7 @@ SortNonLocalDepInfoCache(MemoryDependenceAnalysis::NonLocalDepInfo &Cache,
 bool MemoryDependenceAnalysis::
 getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
                             bool isLoad, BasicBlock *StartBB,
-                            SmallVectorImpl<NonLocalDepEntry> &Result,
+                            SmallVectorImpl<NonLocalDepResult> &Result,
                             DenseMap<BasicBlock*, Value*> &Visited,
                             bool SkipFirstBlock) {
   
@@ -759,11 +760,12 @@ getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
       }
     }
     
+    Value *Addr = Pointer.getAddr();
     for (NonLocalDepInfo::iterator I = Cache->begin(), E = Cache->end();
          I != E; ++I) {
-      Visited.insert(std::make_pair(I->getBB(), Pointer.getAddr()));
+      Visited.insert(std::make_pair(I->getBB(), Addr));
       if (!I->getResult().isNonLocal())
-        Result.push_back(*I);
+        Result.push_back(NonLocalDepResult(I->getBB(), I->getResult(), Addr));
     }
     ++NumCacheCompleteNonLocalPtr;
     return false;
@@ -807,7 +809,7 @@ getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
       
       // If we got a Def or Clobber, add this to the list of results.
       if (!Dep.isNonLocal()) {
-        Result.push_back(NonLocalDepEntry(BB, Dep, Pointer.getAddr()));
+        Result.push_back(NonLocalDepResult(BB, Dep, Pointer.getAddr()));
         continue;
       }
     }
@@ -889,41 +891,17 @@ getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
       // a computation of the pointer in this predecessor.
       if (PredPtrVal == 0) {
         // Add the entry to the Result list.
-        NonLocalDepEntry Entry(Pred,
-                               MemDepResult::getClobber(Pred->getTerminator()),
-                               PredPtrVal);
+        NonLocalDepResult Entry(Pred,
+                                MemDepResult::getClobber(Pred->getTerminator()),
+                                PredPtrVal);
         Result.push_back(Entry);
 
-        // Add it to the cache for this CacheKey so that subsequent queries get
-        // this result.
-        Cache = &NonLocalPointerDeps[CacheKey].second;
-        MemoryDependenceAnalysis::NonLocalDepInfo::iterator It =
-          std::upper_bound(Cache->begin(), Cache->end(), Entry);
-        
-        if (It != Cache->begin() && (It-1)->getBB() == Pred)
-          --It;
-
-        if (It == Cache->end() || It->getBB() != Pred) {
-          Cache->insert(It, Entry);
-          // Add it to the reverse map.
-          ReverseNonLocalPtrDeps[Pred->getTerminator()].insert(CacheKey);
-        } else if (!It->getResult().isDirty()) {
-          // noop
-        } else if (It->getResult().getInst() == Pred->getTerminator()) {
-          // Same instruction, clear the dirty marker.
-          It->setResult(Entry.getResult(), PredPtrVal);
-        } else if (It->getResult().getInst() == 0) {
-          // Dirty, with no instruction, just add this.
-          It->setResult(Entry.getResult(), PredPtrVal);
-          ReverseNonLocalPtrDeps[Pred->getTerminator()].insert(CacheKey);
-        } else {
-          // Otherwise, dirty with a different instruction.
-          RemoveFromReverseMap(ReverseNonLocalPtrDeps,
-                               It->getResult().getInst(), CacheKey);
-          It->setResult(Entry.getResult(),PredPtrVal);
-          ReverseNonLocalPtrDeps[Pred->getTerminator()].insert(CacheKey);
-        }
-        Cache = 0;
+        // Since we had a phi translation failure, the cache for CacheKey won't
+        // include all of the entries that we need to immediately satisfy future
+        // queries.  Mark this in NonLocalPointerDeps by setting the
+        // BBSkipFirstBlockPair pointer to null.  This requires reuse of the
+        // cached value to do more work but not miss the phi trans failure.
+        NonLocalPointerDeps[CacheKey].first = BBSkipFirstBlockPair();
         continue;
       }
 
@@ -961,10 +939,10 @@ getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
       NumSortedEntries = Cache->size();
     }
     
-    // Since we did phi translation, the "Cache" set won't contain all of the
+    // Since we failed phi translation, the "Cache" set won't contain all of the
     // results for the query.  This is ok (we can still use it to accelerate
     // specific block queries) but we can't do the fastpath "return all
-    // results from the set"  Clear out the indicator for this.
+    // results from the set".  Clear out the indicator for this.
     CacheInfo->first = BBSkipFirstBlockPair();
     
     // If *nothing* works, mark the pointer as being clobbered by the first
@@ -983,9 +961,10 @@ getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
       
       assert(I->getResult().isNonLocal() &&
              "Should only be here with transparent block");
-      I->setResult(MemDepResult::getClobber(BB->begin()), Pointer.getAddr());
+      I->setResult(MemDepResult::getClobber(BB->begin()));
       ReverseNonLocalPtrDeps[BB->begin()].insert(CacheKey);
-      Result.push_back(*I);
+      Result.push_back(NonLocalDepResult(I->getBB(), I->getResult(),
+                                         Pointer.getAddr()));
       break;
     }
   }
@@ -1139,7 +1118,7 @@ void MemoryDependenceAnalysis::removeInstruction(Instruction *RemInst) {
         if (DI->getResult().getInst() != RemInst) continue;
         
         // Convert to a dirty entry for the subsequent instruction.
-        DI->setResult(NewDirtyVal, DI->getAddress());
+        DI->setResult(NewDirtyVal);
         
         if (Instruction *NextI = NewDirtyVal.getInst())
           ReverseDepsToAdd.push_back(std::make_pair(NextI, *I));
@@ -1181,7 +1160,7 @@ void MemoryDependenceAnalysis::removeInstruction(Instruction *RemInst) {
         if (DI->getResult().getInst() != RemInst) continue;
         
         // Convert to a dirty entry for the subsequent instruction.
-        DI->setResult(NewDirtyVal, DI->getAddress());
+        DI->setResult(NewDirtyVal);
         
         if (Instruction *NewDirtyInst = NewDirtyVal.getInst())
           ReversePtrDepsToAdd.push_back(std::make_pair(NewDirtyInst, P));
diff --git a/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp b/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp
index 07e2919..334a188 100644
--- a/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp
@@ -14,6 +14,7 @@
 #include "llvm/Analysis/PHITransAddr.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
@@ -35,12 +36,12 @@ static bool CanPHITrans(Instruction *Inst) {
 
 void PHITransAddr::dump() const {
   if (Addr == 0) {
-    errs() << "PHITransAddr: null\n";
+    dbgs() << "PHITransAddr: null\n";
     return;
   }
-  errs() << "PHITransAddr: " << *Addr << "\n";
+  dbgs() << "PHITransAddr: " << *Addr << "\n";
   for (unsigned i = 0, e = InstInputs.size(); i != e; ++i)
-    errs() << "  Input #" << i << " is " << *InstInputs[i] << "\n";
+    dbgs() << "  Input #" << i << " is " << *InstInputs[i] << "\n";
 }
 
 
diff --git a/libclamav/c++/llvm/lib/Analysis/PostDominators.cpp b/libclamav/c++/llvm/lib/Analysis/PostDominators.cpp
index 69d6b47..c38e050 100644
--- a/libclamav/c++/llvm/lib/Analysis/PostDominators.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/PostDominators.cpp
@@ -33,7 +33,7 @@ F("postdomtree", "Post-Dominator Tree Construction", true, true);
 
 bool PostDominatorTree::runOnFunction(Function &F) {
   DT->recalculate(F);
-  DEBUG(DT->print(errs()));
+  DEBUG(DT->print(dbgs()));
   return false;
 }
 
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
index 8148429..cf9311a 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
@@ -87,11 +87,11 @@ static double ignoreMissing(double w) {
 }
 
 static void inline printEdgeError(ProfileInfo::Edge e, const char *M) {
-  DEBUG(errs() << "-- Edge " << e << " is not calculated, " << M << "\n");
+  DEBUG(dbgs() << "-- Edge " << e << " is not calculated, " << M << "\n");
 }
 
 void inline ProfileEstimatorPass::printEdgeWeight(Edge E) {
-  DEBUG(errs() << "-- Weight of Edge " << E << ":"
+  DEBUG(dbgs() << "-- Weight of Edge " << E << ":"
                << format("%20.20g", getEdgeWeight(E)) << "\n");
 }
 
@@ -179,7 +179,7 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
           // from weight.
           if (MinimalWeight.find(*ei) != MinimalWeight.end()) {
             incoming -= MinimalWeight[*ei];
-            DEBUG(errs() << "Reserving " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
+            DEBUG(dbgs() << "Reserving " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
           }
         } else {
           incoming -= w;
@@ -217,7 +217,7 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
       // Read necessary minimal weight.
       if (MinimalWeight.find(*ei) != MinimalWeight.end()) {
         EdgeInformation[BB->getParent()][*ei] += MinimalWeight[*ei];
-        DEBUG(errs() << "Additionally " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
+        DEBUG(dbgs() << "Additionally " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
       }
       printEdgeWeight(*ei);
       
@@ -232,7 +232,7 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
           MinimalWeight[e] = 0;
         }
         MinimalWeight[e] += w;
-        DEBUG(errs() << "Minimal Weight for " << e << ": " << format("%.20g",MinimalWeight[e]) << "\n");
+        DEBUG(dbgs() << "Minimal Weight for " << e << ": " << format("%.20g",MinimalWeight[e]) << "\n");
         Dest = Parent;
       }
     }
@@ -268,7 +268,7 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
         // from block weight, this is readded later on.
         if (MinimalWeight.find(edge) != MinimalWeight.end()) {
           BBWeight -= MinimalWeight[edge];
-          DEBUG(errs() << "Reserving " << format("%.20g",MinimalWeight[edge]) << " at " << edge << "\n");
+          DEBUG(dbgs() << "Reserving " << format("%.20g",MinimalWeight[edge]) << " at " << edge << "\n");
         }
       }
     }
@@ -288,7 +288,7 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
     // Readd minial necessary weight.
     if (MinimalWeight.find(*ei) != MinimalWeight.end()) {
       EdgeInformation[BB->getParent()][*ei] += MinimalWeight[*ei];
-      DEBUG(errs() << "Additionally " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
+      DEBUG(dbgs() << "Additionally " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
     }
     printEdgeWeight(*ei);
   }
@@ -319,7 +319,7 @@ bool ProfileEstimatorPass::runOnFunction(Function &F) {
   // Clear Minimal Edges.
   MinimalWeight.clear();
 
-  DEBUG(errs() << "Working on function " << F.getNameStr() << "\n");
+  DEBUG(dbgs() << "Working on function " << F.getNameStr() << "\n");
 
   // Since the entry block is the first one and has no predecessors, the edge
   // (0,entry) is inserted with the starting weight of 1.
@@ -366,7 +366,7 @@ bool ProfileEstimatorPass::runOnFunction(Function &F) {
             if (Dest != *bbi) {
               // If there is no circle, just set edge weight to 0
               EdgeInformation[&F][e] = 0;
-              DEBUG(errs() << "Assuming edge weight: ");
+              DEBUG(dbgs() << "Assuming edge weight: ");
               printEdgeWeight(e);
               found = true;
             }
@@ -375,7 +375,7 @@ bool ProfileEstimatorPass::runOnFunction(Function &F) {
       }
       if (!found) {
         cleanup = true;
-        DEBUG(errs() << "No assumption possible in Fuction "<<F.getName()<<", setting all to zero\n");
+        DEBUG(dbgs() << "No assumption possible in Fuction "<<F.getName()<<", setting all to zero\n");
       }
     }
   }
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
index c49c6e1..afd86b1 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
@@ -163,7 +163,7 @@ double ProfileInfoT<MachineFunction, MachineBasicBlock>::
 template<>
 void ProfileInfoT<Function,BasicBlock>::
         setExecutionCount(const BasicBlock *BB, double w) {
-  DEBUG(errs() << "Creating Block " << BB->getName() 
+  DEBUG(dbgs() << "Creating Block " << BB->getName() 
                << " (weight: " << format("%.20g",w) << ")\n");
   BlockInformation[BB->getParent()][BB] = w;
 }
@@ -171,7 +171,7 @@ void ProfileInfoT<Function,BasicBlock>::
 template<>
 void ProfileInfoT<MachineFunction, MachineBasicBlock>::
         setExecutionCount(const MachineBasicBlock *MBB, double w) {
-  DEBUG(errs() << "Creating Block " << MBB->getBasicBlock()->getName()
+  DEBUG(dbgs() << "Creating Block " << MBB->getBasicBlock()->getName()
                << " (weight: " << format("%.20g",w) << ")\n");
   BlockInformation[MBB->getParent()][MBB] = w;
 }
@@ -180,7 +180,7 @@ template<>
 void ProfileInfoT<Function,BasicBlock>::addEdgeWeight(Edge e, double w) {
   double oldw = getEdgeWeight(e);
   assert (oldw != MissingValue && "Adding weight to Edge with no previous weight");
-  DEBUG(errs() << "Adding to Edge " << e
+  DEBUG(dbgs() << "Adding to Edge " << e
                << " (new weight: " << format("%.20g",oldw + w) << ")\n");
   EdgeInformation[getFunction(e)][e] = oldw + w;
 }
@@ -190,7 +190,7 @@ void ProfileInfoT<Function,BasicBlock>::
         addExecutionCount(const BasicBlock *BB, double w) {
   double oldw = getExecutionCount(BB);
   assert (oldw != MissingValue && "Adding weight to Block with no previous weight");
-  DEBUG(errs() << "Adding to Block " << BB->getName()
+  DEBUG(dbgs() << "Adding to Block " << BB->getName()
                << " (new weight: " << format("%.20g",oldw + w) << ")\n");
   BlockInformation[BB->getParent()][BB] = oldw + w;
 }
@@ -201,7 +201,7 @@ void ProfileInfoT<Function,BasicBlock>::removeBlock(const BasicBlock *BB) {
     BlockInformation.find(BB->getParent());
   if (J == BlockInformation.end()) return;
 
-  DEBUG(errs() << "Deleting " << BB->getName() << "\n");
+  DEBUG(dbgs() << "Deleting " << BB->getName() << "\n");
   J->second.erase(BB);
 }
 
@@ -211,7 +211,7 @@ void ProfileInfoT<Function,BasicBlock>::removeEdge(Edge e) {
     EdgeInformation.find(getFunction(e));
   if (J == EdgeInformation.end()) return;
 
-  DEBUG(errs() << "Deleting" << e << "\n");
+  DEBUG(dbgs() << "Deleting" << e << "\n");
   J->second.erase(e);
 }
 
@@ -221,10 +221,10 @@ void ProfileInfoT<Function,BasicBlock>::
   double w;
   if ((w = getEdgeWeight(newedge)) == MissingValue) {
     w = getEdgeWeight(oldedge);
-    DEBUG(errs() << "Replacing " << oldedge << " with " << newedge  << "\n");
+    DEBUG(dbgs() << "Replacing " << oldedge << " with " << newedge  << "\n");
   } else {
     w += getEdgeWeight(oldedge);
-    DEBUG(errs() << "Adding " << oldedge << " to " << newedge  << "\n");
+    DEBUG(dbgs() << "Adding " << oldedge << " to " << newedge  << "\n");
   }
   setEdgeWeight(newedge,w);
   removeEdge(oldedge);
@@ -277,7 +277,7 @@ const BasicBlock *ProfileInfoT<Function,BasicBlock>::
 template<>
 void ProfileInfoT<Function,BasicBlock>::
         divertFlow(const Edge &oldedge, const Edge &newedge) {
-  DEBUG(errs() << "Diverting " << oldedge << " via " << newedge );
+  DEBUG(dbgs() << "Diverting " << oldedge << " via " << newedge );
 
   // First check if the old edge was taken, if not, just delete it...
   if (getEdgeWeight(oldedge) == 0) {
@@ -291,7 +291,7 @@ void ProfileInfoT<Function,BasicBlock>::
   const BasicBlock *BB = GetPath(newedge.second,oldedge.second,P,GetPathToExit | GetPathToDest);
 
   double w = getEdgeWeight (oldedge);
-  DEBUG(errs() << ", Weight: " << format("%.20g",w) << "\n");
+  DEBUG(dbgs() << ", Weight: " << format("%.20g",w) << "\n");
   do {
     const BasicBlock *Parent = P.find(BB)->second;
     Edge e = getEdge(Parent,BB);
@@ -312,7 +312,7 @@ void ProfileInfoT<Function,BasicBlock>::
 template<>
 void ProfileInfoT<Function,BasicBlock>::
         replaceAllUses(const BasicBlock *RmBB, const BasicBlock *DestBB) {
-  DEBUG(errs() << "Replacing " << RmBB->getName()
+  DEBUG(dbgs() << "Replacing " << RmBB->getName()
                << " with " << DestBB->getName() << "\n");
   const Function *F = DestBB->getParent();
   std::map<const Function*, EdgeWeights>::iterator J =
@@ -413,7 +413,7 @@ void ProfileInfoT<Function,BasicBlock>::splitBlock(const BasicBlock *Old,
     EdgeInformation.find(F);
   if (J == EdgeInformation.end()) return;
 
-  DEBUG(errs() << "Splitting " << Old->getName() << " to " << New->getName() << "\n");
+  DEBUG(dbgs() << "Splitting " << Old->getName() << " to " << New->getName() << "\n");
 
   std::set<Edge> Edges;
   for (EdgeWeights::iterator ewi = J->second.begin(), ewe = J->second.end(); 
@@ -444,7 +444,7 @@ void ProfileInfoT<Function,BasicBlock>::splitBlock(const BasicBlock *BB,
     EdgeInformation.find(F);
   if (J == EdgeInformation.end()) return;
 
-  DEBUG(errs() << "Splitting " << NumPreds << " Edges from " << BB->getName() 
+  DEBUG(dbgs() << "Splitting " << NumPreds << " Edges from " << BB->getName() 
                << " to " << NewBB->getName() << "\n");
 
   // Collect weight that was redirected over NewBB.
@@ -474,7 +474,7 @@ void ProfileInfoT<Function,BasicBlock>::splitBlock(const BasicBlock *BB,
 template<>
 void ProfileInfoT<Function,BasicBlock>::transfer(const Function *Old,
                                                  const Function *New) {
-  DEBUG(errs() << "Replacing Function " << Old->getName() << " with "
+  DEBUG(dbgs() << "Replacing Function " << Old->getName() << " with "
                << New->getName() << "\n");
   std::map<const Function*, EdgeWeights>::iterator J =
     EdgeInformation.find(Old);
@@ -552,7 +552,7 @@ bool ProfileInfoT<Function,BasicBlock>::
     } else {
       EdgeInformation[BB->getParent()][edgetocalc] = incount-outcount;
     }
-    DEBUG(errs() << "--Calc Edge Counter for " << edgetocalc << ": "
+    DEBUG(dbgs() << "--Calc Edge Counter for " << edgetocalc << ": "
                  << format("%.20g", getEdgeWeight(edgetocalc)) << "\n");
     removed = edgetocalc;
     return true;
@@ -982,9 +982,9 @@ void ProfileInfoT<Function,BasicBlock>::repair(const Function *F) {
     FI = Unvisited.begin(), FE = Unvisited.end();
     while(FI != FE) {
       const BasicBlock *BB = *FI; ++FI;
-      errs() << BB->getName();
+      dbgs() << BB->getName();
       if (FI != FE)
-        errs() << ",";
+        dbgs() << ",";
     }
     errs() << "}";
 
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
index cbd0430..d8c511f 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
@@ -131,7 +131,7 @@ void LoaderPass::readEdge(ProfileInfo::Edge e,
       // in double.
       EdgeInformation[getFunction(e)][e] += (double)weight;
 
-      DEBUG(errs() << "--Read Edge Counter for " << e
+      DEBUG(dbgs() << "--Read Edge Counter for " << e
                    << " (# "<< (ReadCount-1) << "): "
                    << (unsigned)getEdgeWeight(e) << "\n");
     } else {
@@ -151,7 +151,7 @@ bool LoaderPass::runOnModule(Module &M) {
     ReadCount = 0;
     for (Module::iterator F = M.begin(), E = M.end(); F != E; ++F) {
       if (F->isDeclaration()) continue;
-      DEBUG(errs()<<"Working on "<<F->getNameStr()<<"\n");
+      DEBUG(dbgs()<<"Working on "<<F->getNameStr()<<"\n");
       readEdge(getEdge(0,&F->getEntryBlock()), Counters);
       for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB) {
         TerminatorInst *TI = BB->getTerminator();
@@ -172,7 +172,7 @@ bool LoaderPass::runOnModule(Module &M) {
     ReadCount = 0;
     for (Module::iterator F = M.begin(), E = M.end(); F != E; ++F) {
       if (F->isDeclaration()) continue;
-      DEBUG(errs()<<"Working on "<<F->getNameStr()<<"\n");
+      DEBUG(dbgs()<<"Working on "<<F->getNameStr()<<"\n");
       readEdge(getEdge(0,&F->getEntryBlock()), Counters);
       for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB) {
         TerminatorInst *TI = BB->getTerminator();
@@ -198,10 +198,10 @@ bool LoaderPass::runOnModule(Module &M) {
         }
 
         if (SpanningTree.size() == size) {
-          DEBUG(errs()<<"{");
+          DEBUG(dbgs()<<"{");
           for (std::set<Edge>::iterator ei = SpanningTree.begin(),
                ee = SpanningTree.end(); ei != ee; ++ei) {
-            DEBUG(errs()<< *ei <<",");
+            DEBUG(dbgs()<< *ei <<",");
           }
           assert(0 && "No edge calculated!");
         }
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
index 36a80ba..a2ddc8e 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
@@ -102,7 +102,7 @@ namespace llvm {
         typename ProfileInfoT<FType, BType>::Edge E = PI->getEdge(*bbi,BB);
         double EdgeWeight = PI->getEdgeWeight(E);
         if (EdgeWeight == ProfileInfoT<FType, BType>::MissingValue) { EdgeWeight = 0; }
-        errs() << "calculated in-edge " << E << ": " 
+        dbgs() << "calculated in-edge " << E << ": " 
                << format("%20.20g",EdgeWeight) << "\n";
         inWeight += EdgeWeight;
         inCount++;
@@ -117,13 +117,13 @@ namespace llvm {
         typename ProfileInfoT<FType, BType>::Edge E = PI->getEdge(BB,*bbi);
         double EdgeWeight = PI->getEdgeWeight(E);
         if (EdgeWeight == ProfileInfoT<FType, BType>::MissingValue) { EdgeWeight = 0; }
-        errs() << "calculated out-edge " << E << ": " 
+        dbgs() << "calculated out-edge " << E << ": " 
                << format("%20.20g",EdgeWeight) << "\n";
         outWeight += EdgeWeight;
         outCount++;
       }
     }
-    errs() << "Block " << BB->getNameStr()                << " in " 
+    dbgs() << "Block " << BB->getNameStr()                << " in " 
            << BB->getParent()->getNameStr()               << ":"
            << "BBWeight="  << format("%20.20g",BBWeight)  << ","
            << "inWeight="  << format("%20.20g",inWeight)  << ","
@@ -141,7 +141,7 @@ namespace llvm {
 
   template<class FType, class BType>
   void ProfileVerifierPassT<FType, BType>::debugEntry (DetailedBlockInfo *DI) {
-    errs() << "TROUBLE: Block " << DI->BB->getNameStr()       << " in "
+    dbgs() << "TROUBLE: Block " << DI->BB->getNameStr()       << " in "
            << DI->BB->getParent()->getNameStr()               << ":"
            << "BBWeight="  << format("%20.20g",DI->BBWeight)  << ","
            << "inWeight="  << format("%20.20g",DI->inWeight)  << ","
@@ -191,20 +191,20 @@ namespace llvm {
   }
 
   #define ASSERTMESSAGE(M) \
-    { errs() << "ASSERT:" << (M) << "\n"; \
+    { dbgs() << "ASSERT:" << (M) << "\n"; \
       if (!DisableAssertions) assert(0 && (M)); }
 
   template<class FType, class BType>
   double ProfileVerifierPassT<FType, BType>::ReadOrAssert(typename ProfileInfoT<FType, BType>::Edge E) {
     double EdgeWeight = PI->getEdgeWeight(E);
     if (EdgeWeight == ProfileInfoT<FType, BType>::MissingValue) {
-      errs() << "Edge " << E << " in Function " 
+      dbgs() << "Edge " << E << " in Function " 
              << ProfileInfoT<FType, BType>::getFunction(E)->getNameStr() << ": ";
       ASSERTMESSAGE("Edge has missing value");
       return 0;
     } else {
       if (EdgeWeight < 0) {
-        errs() << "Edge " << E << " in Function " 
+        dbgs() << "Edge " << E << " in Function " 
                << ProfileInfoT<FType, BType>::getFunction(E)->getNameStr() << ": ";
         ASSERTMESSAGE("Edge has negative value");
       }
@@ -218,7 +218,7 @@ namespace llvm {
                                                       DetailedBlockInfo *DI) {
     if (Error) {
       DEBUG(debugEntry(DI));
-      errs() << "Block " << DI->BB->getNameStr() << " in Function " 
+      dbgs() << "Block " << DI->BB->getNameStr() << " in Function " 
              << DI->BB->getParent()->getNameStr() << ": ";
       ASSERTMESSAGE(Message);
     }
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
index c6835ef..17dc686 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -75,6 +75,7 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ConstantRange.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
 #include "llvm/Support/InstIterator.h"
@@ -117,8 +118,8 @@ char ScalarEvolution::ID = 0;
 SCEV::~SCEV() {}
 
 void SCEV::dump() const {
-  print(errs());
-  errs() << '\n';
+  print(dbgs());
+  dbgs() << '\n';
 }
 
 bool SCEV::isZero() const {
@@ -298,7 +299,7 @@ bool SCEVAddRecExpr::isLoopInvariant(const Loop *QueryLoop) const {
     return false;
 
   // This recurrence is variant w.r.t. QueryLoop if QueryLoop contains L.
-  if (QueryLoop->contains(L->getHeader()))
+  if (QueryLoop->contains(L))
     return false;
 
   // This recurrence is variant w.r.t. QueryLoop if any of its operands
@@ -333,7 +334,7 @@ bool SCEVUnknown::isLoopInvariant(const Loop *L) const {
   // Instructions are never considered invariant in the function body
   // (null loop) because they are defined within the "loop".
   if (Instruction *I = dyn_cast<Instruction>(V))
-    return L && !L->contains(I->getParent());
+    return L && !L->contains(I);
   return true;
 }
 
@@ -1457,10 +1458,13 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
       LIOps.push_back(AddRec->getStart());
 
       SmallVector<const SCEV *, 4> AddRecOps(AddRec->op_begin(),
-                                           AddRec->op_end());
+                                             AddRec->op_end());
       AddRecOps[0] = getAddExpr(LIOps);
 
+      // It's tempting to propagate NUW/NSW flags here, but nuw/nsw addition
+      // is not associative so this isn't necessarily safe.
       const SCEV *NewRec = getAddRecExpr(AddRecOps, AddRec->getLoop());
+
       // If all of the other operands were loop invariant, we are done.
       if (Ops.size() == 1) return NewRec;
 
@@ -1636,6 +1640,8 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
         }
       }
 
+      // It's tempting to propagate the NSW flag here, but nsw multiplication
+      // is not associative so this isn't necessarily safe.
       const SCEV *NewRec = getAddRecExpr(NewOps, AddRec->getLoop());
 
       // If all of the other operands were loop invariant, we are done.
@@ -1838,10 +1844,10 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
 
   // Canonicalize nested AddRecs in by nesting them in order of loop depth.
   if (const SCEVAddRecExpr *NestedAR = dyn_cast<SCEVAddRecExpr>(Operands[0])) {
-    const Loop* NestedLoop = NestedAR->getLoop();
+    const Loop *NestedLoop = NestedAR->getLoop();
     if (L->getLoopDepth() < NestedLoop->getLoopDepth()) {
       SmallVector<const SCEV *, 4> NestedOperands(NestedAR->op_begin(),
-                                                NestedAR->op_end());
+                                                  NestedAR->op_end());
       Operands[0] = NestedAR->getStart();
       // AddRecs require their operands be loop-invariant with respect to their
       // loops. Don't perform this transformation if it would break this
@@ -2441,7 +2447,7 @@ ScalarEvolution::ForgetSymbolicName(Instruction *I, const SCEV *SymName) {
     Instruction *I = Worklist.pop_back_val();
     if (!Visited.insert(I)) continue;
 
-    std::map<SCEVCallbackVH, const SCEV*>::iterator It =
+    std::map<SCEVCallbackVH, const SCEV *>::iterator It =
       Scalars.find(static_cast<Value *>(I));
     if (It != Scalars.end()) {
       // Short-circuit the def-use traversal if the symbolic name
@@ -2592,8 +2598,9 @@ const SCEV *ScalarEvolution::createNodeForPHI(PHINode *PN) {
 /// createNodeForGEP - Expand GEP instructions into add and multiply
 /// operations. This allows them to be analyzed by regular SCEV code.
 ///
-const SCEV *ScalarEvolution::createNodeForGEP(Operator *GEP) {
+const SCEV *ScalarEvolution::createNodeForGEP(GEPOperator *GEP) {
 
+  bool InBounds = GEP->isInBounds();
   const Type *IntPtrTy = getEffectiveSCEVType(GEP->getType());
   Value *Base = GEP->getOperand(0);
   // Don't attempt to analyze GEPs over unsized objects.
@@ -2610,18 +2617,23 @@ const SCEV *ScalarEvolution::createNodeForGEP(Operator *GEP) {
       // For a struct, add the member offset.
       unsigned FieldNo = cast<ConstantInt>(Index)->getZExtValue();
       TotalOffset = getAddExpr(TotalOffset,
-                               getFieldOffsetExpr(STy, FieldNo));
+                               getFieldOffsetExpr(STy, FieldNo),
+                               /*HasNUW=*/false, /*HasNSW=*/InBounds);
     } else {
       // For an array, add the element offset, explicitly scaled.
       const SCEV *LocalOffset = getSCEV(Index);
       if (!isa<PointerType>(LocalOffset->getType()))
         // Getelementptr indicies are signed.
         LocalOffset = getTruncateOrSignExtend(LocalOffset, IntPtrTy);
-      LocalOffset = getMulExpr(LocalOffset, getAllocSizeExpr(*GTI));
-      TotalOffset = getAddExpr(TotalOffset, LocalOffset);
+      // Lower "inbounds" GEPs to NSW arithmetic.
+      LocalOffset = getMulExpr(LocalOffset, getAllocSizeExpr(*GTI),
+                               /*HasNUW=*/false, /*HasNSW=*/InBounds);
+      TotalOffset = getAddExpr(TotalOffset, LocalOffset,
+                               /*HasNUW=*/false, /*HasNSW=*/InBounds);
     }
   }
-  return getAddExpr(getSCEV(Base), TotalOffset);
+  return getAddExpr(getSCEV(Base), TotalOffset,
+                    /*HasNUW=*/false, /*HasNSW=*/InBounds);
 }
 
 /// GetMinTrailingZeros - Determine the minimum number of zero bits that S is
@@ -3130,7 +3142,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
     // expressions we handle are GEPs and address literals.
 
   case Instruction::GetElementPtr:
-    return createNodeForGEP(U);
+    return createNodeForGEP(cast<GEPOperator>(U));
 
   case Instruction::PHI:
     return createNodeForPHI(cast<PHINode>(U));
@@ -3241,7 +3253,7 @@ ScalarEvolution::getBackedgeTakenInfo(const Loop *L) {
   // update the value. The temporary CouldNotCompute value tells SCEV
   // code elsewhere that it shouldn't attempt to request a new
   // backedge-taken count, which could result in infinite recursion.
-  std::pair<std::map<const Loop*, BackedgeTakenInfo>::iterator, bool> Pair =
+  std::pair<std::map<const Loop *, BackedgeTakenInfo>::iterator, bool> Pair =
     BackedgeTakenCounts.insert(std::make_pair(L, getCouldNotCompute()));
   if (Pair.second) {
     BackedgeTakenInfo ItCount = ComputeBackedgeTakenCount(L);
@@ -3276,7 +3288,7 @@ ScalarEvolution::getBackedgeTakenInfo(const Loop *L) {
         Instruction *I = Worklist.pop_back_val();
         if (!Visited.insert(I)) continue;
 
-        std::map<SCEVCallbackVH, const SCEV*>::iterator It =
+        std::map<SCEVCallbackVH, const SCEV *>::iterator It =
           Scalars.find(static_cast<Value *>(I));
         if (It != Scalars.end()) {
           // SCEVUnknown for a PHI either means that it has an unrecognized
@@ -3316,7 +3328,7 @@ void ScalarEvolution::forgetLoop(const Loop *L) {
     Instruction *I = Worklist.pop_back_val();
     if (!Visited.insert(I)) continue;
 
-    std::map<SCEVCallbackVH, const SCEV*>::iterator It =
+    std::map<SCEVCallbackVH, const SCEV *>::iterator It =
       Scalars.find(static_cast<Value *>(I));
     if (It != Scalars.end()) {
       ValuesAtScopes.erase(It->second);
@@ -3333,7 +3345,7 @@ void ScalarEvolution::forgetLoop(const Loop *L) {
 /// of the specified loop will execute.
 ScalarEvolution::BackedgeTakenInfo
 ScalarEvolution::ComputeBackedgeTakenCount(const Loop *L) {
-  SmallVector<BasicBlock*, 8> ExitingBlocks;
+  SmallVector<BasicBlock *, 8> ExitingBlocks;
   L->getExitingBlocks(ExitingBlocks);
 
   // Examine all exits and pick the most conservative values.
@@ -3616,10 +3628,10 @@ ScalarEvolution::ComputeBackedgeTakenCountFromExitCondICmp(const Loop *L,
   }
   default:
 #if 0
-    errs() << "ComputeBackedgeTakenCount ";
+    dbgs() << "ComputeBackedgeTakenCount ";
     if (ExitCond->getOperand(0)->getType()->isUnsigned())
-      errs() << "[unsigned] ";
-    errs() << *LHS << "   "
+      dbgs() << "[unsigned] ";
+    dbgs() << *LHS << "   "
          << Instruction::getOpcodeName(Instruction::ICmp)
          << "   " << *RHS << "\n";
 #endif
@@ -3740,7 +3752,7 @@ ScalarEvolution::ComputeLoadConstantCompareBackedgeTakenCount(
     if (!isa<ConstantInt>(Result)) break;  // Couldn't decide for sure
     if (cast<ConstantInt>(Result)->getValue().isMinValue()) {
 #if 0
-      errs() << "\n***\n*** Computed loop count " << *ItCst
+      dbgs() << "\n***\n*** Computed loop count " << *ItCst
              << "\n*** From global " << *GV << "*** BB: " << *L->getHeader()
              << "***\n";
 #endif
@@ -3774,7 +3786,7 @@ static PHINode *getConstantEvolvingPHI(Value *V, const Loop *L) {
   // If this is not an instruction, or if this is an instruction outside of the
   // loop, it can't be derived from a loop PHI.
   Instruction *I = dyn_cast<Instruction>(V);
-  if (I == 0 || !L->contains(I->getParent())) return 0;
+  if (I == 0 || !L->contains(I)) return 0;
 
   if (PHINode *PN = dyn_cast<PHINode>(I)) {
     if (L->getHeader() == I->getParent())
@@ -3839,7 +3851,7 @@ static Constant *EvaluateExpression(Value *V, Constant *PHIVal,
 /// involving constants, fold it.
 Constant *
 ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
-                                                   const APInt& BEs,
+                                                   const APInt &BEs,
                                                    const Loop *L) {
   std::map<PHINode*, Constant*>::iterator I =
     ConstantEvolutionLoopExitValue.find(PN);
@@ -4008,7 +4020,7 @@ const SCEV *ScalarEvolution::computeSCEVAtScope(const SCEV *V, const Loop *L) {
             if (!isSCEVable(Op->getType()))
               return V;
 
-            const SCEV* OpV = getSCEVAtScope(Op, L);
+            const SCEV *OpV = getSCEVAtScope(Op, L);
             if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(OpV)) {
               Constant *C = SC->getValue();
               if (C->getType() != Op->getType())
@@ -4091,7 +4103,7 @@ const SCEV *ScalarEvolution::computeSCEVAtScope(const SCEV *V, const Loop *L) {
   // If this is a loop recurrence for a loop that does not contain L, then we
   // are dealing with the final value computed by the loop.
   if (const SCEVAddRecExpr *AddRec = dyn_cast<SCEVAddRecExpr>(V)) {
-    if (!L || !AddRec->getLoop()->contains(L->getHeader())) {
+    if (!L || !AddRec->getLoop()->contains(L)) {
       // To evaluate this recurrence, we need to know how many times the AddRec
       // loop iterates.  Compute this now.
       const SCEV *BackedgeTakenCount = getBackedgeTakenCount(AddRec->getLoop());
@@ -4306,7 +4318,7 @@ const SCEV *ScalarEvolution::HowFarToZero(const SCEV *V, const Loop *L) {
     const SCEVConstant *R2 = dyn_cast<SCEVConstant>(Roots.second);
     if (R1) {
 #if 0
-      errs() << "HFTZ: " << *V << " - sol#1: " << *R1
+      dbgs() << "HFTZ: " << *V << " - sol#1: " << *R1
              << "  sol#2: " << *R2 << "\n";
 #endif
       // Pick the smallest positive root value.
@@ -5183,7 +5195,7 @@ static void PrintLoopInfo(raw_ostream &OS, ScalarEvolution *SE,
 
   OS << "Loop " << L->getHeader()->getName() << ": ";
 
-  SmallVector<BasicBlock*, 8> ExitBlocks;
+  SmallVector<BasicBlock *, 8> ExitBlocks;
   L->getExitBlocks(ExitBlocks);
   if (ExitBlocks.size() != 1)
     OS << "<multiple exits> ";
@@ -5206,14 +5218,14 @@ static void PrintLoopInfo(raw_ostream &OS, ScalarEvolution *SE,
   OS << "\n";
 }
 
-void ScalarEvolution::print(raw_ostream &OS, const Module* ) const {
+void ScalarEvolution::print(raw_ostream &OS, const Module *) const {
   // ScalarEvolution's implementaiton of the print method is to print
   // out SCEV values of all instructions that are interesting. Doing
   // this potentially causes it to create new SCEV objects though,
   // which technically conflicts with the const qualifier. This isn't
   // observable from outside the class though, so casting away the
   // const isn't dangerous.
-  ScalarEvolution &SE = *const_cast<ScalarEvolution*>(this);
+  ScalarEvolution &SE = *const_cast<ScalarEvolution *>(this);
 
   OS << "Classifying expressions for: " << F->getName() << "\n";
   for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
diff --git a/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp b/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp
index d7bcac2..d8c207b 100644
--- a/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp
@@ -17,7 +17,6 @@
 #include "llvm/Constants.h"
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
@@ -89,7 +88,7 @@ void SparseSolver::UpdateState(Instruction &Inst, LatticeVal V) {
 /// MarkBlockExecutable - This method can be used by clients to mark all of
 /// the blocks that are known to be intrinsically live in the processed unit.
 void SparseSolver::MarkBlockExecutable(BasicBlock *BB) {
-  DEBUG(errs() << "Marking Block Executable: " << BB->getName() << "\n");
+  DEBUG(dbgs() << "Marking Block Executable: " << BB->getName() << "\n");
   BBExecutable.insert(BB);   // Basic block is executable!
   BBWorkList.push_back(BB);  // Add the block to the work list!
 }
@@ -100,7 +99,7 @@ void SparseSolver::markEdgeExecutable(BasicBlock *Source, BasicBlock *Dest) {
   if (!KnownFeasibleEdges.insert(Edge(Source, Dest)).second)
     return;  // This edge is already known to be executable!
   
-  DEBUG(errs() << "Marking Edge Executable: " << Source->getName()
+  DEBUG(dbgs() << "Marking Edge Executable: " << Source->getName()
         << " -> " << Dest->getName() << "\n");
 
   if (BBExecutable.count(Dest)) {
@@ -155,7 +154,7 @@ void SparseSolver::getFeasibleSuccessors(TerminatorInst &TI,
     }
 
     // Constant condition variables mean the branch can only go a single way
-    Succs[C == ConstantInt::getFalse(*Context)] = true;
+    Succs[C->isNullValue()] = true;
     return;
   }
   
@@ -300,7 +299,7 @@ void SparseSolver::Solve(Function &F) {
       Instruction *I = InstWorkList.back();
       InstWorkList.pop_back();
 
-      DEBUG(errs() << "\nPopped off I-WL: " << *I << "\n");
+      DEBUG(dbgs() << "\nPopped off I-WL: " << *I << "\n");
 
       // "I" got into the work list because it made a transition.  See if any
       // users are both live and in need of updating.
@@ -317,7 +316,7 @@ void SparseSolver::Solve(Function &F) {
       BasicBlock *BB = BBWorkList.back();
       BBWorkList.pop_back();
 
-      DEBUG(errs() << "\nPopped off BBWL: " << *BB);
+      DEBUG(dbgs() << "\nPopped off BBWL: " << *BB);
 
       // Notify all instructions in this basic block that they are newly
       // executable.
diff --git a/libclamav/c++/llvm/lib/Analysis/Trace.cpp b/libclamav/c++/llvm/lib/Analysis/Trace.cpp
index c9b303b..68a39cd 100644
--- a/libclamav/c++/llvm/lib/Analysis/Trace.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/Trace.cpp
@@ -18,6 +18,7 @@
 #include "llvm/Analysis/Trace.h"
 #include "llvm/Function.h"
 #include "llvm/Assembly/Writer.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
@@ -46,5 +47,5 @@ void Trace::print(raw_ostream &O) const {
 /// output stream.
 ///
 void Trace::dump() const {
-  print(errs());
+  print(dbgs());
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
index 22c6e3b..acd3119 100644
--- a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
@@ -1369,11 +1369,6 @@ bool llvm::GetConstantStringInfo(Value *V, std::string &Str, uint64_t Offset,
                                  StopAtNul);
   }
   
-  if (MDString *MDStr = dyn_cast<MDString>(V)) {
-    Str = MDStr->getString();
-    return true;
-  }
-
   // The GEP instruction, constant or instruction, must reference a global
   // variable that is a constant and is initialized. The referenced constant
   // initializer is the array that we'll use for optimization.
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
index 0333eed..48b6e87 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
@@ -1213,7 +1213,7 @@ PATypeHolder LLParser::HandleUpRefs(const Type *ty) {
 
   PATypeHolder Ty(ty);
 #if 0
-  errs() << "Type '" << Ty->getDescription()
+  dbgs() << "Type '" << Ty->getDescription()
          << "' newly formed.  Resolving upreferences.\n"
          << UpRefs.size() << " upreferences active!\n";
 #endif
@@ -1231,7 +1231,7 @@ PATypeHolder LLParser::HandleUpRefs(const Type *ty) {
                 UpRefs[i].LastContainedTy) != Ty->subtype_end();
 
 #if 0
-    errs() << "  UR#" << i << " - TypeContains(" << Ty->getDescription() << ", "
+    dbgs() << "  UR#" << i << " - TypeContains(" << Ty->getDescription() << ", "
            << UpRefs[i].LastContainedTy->getDescription() << ") = "
            << (ContainsType ? "true" : "false")
            << " level=" << UpRefs[i].NestingLevel << "\n";
@@ -1248,7 +1248,7 @@ PATypeHolder LLParser::HandleUpRefs(const Type *ty) {
       continue;
 
 #if 0
-    errs() << "  * Resolving upreference for " << UpRefs[i].UpRefTy << "\n";
+    dbgs() << "  * Resolving upreference for " << UpRefs[i].UpRefTy << "\n";
 #endif
     if (!TypeToResolve)
       TypeToResolve = UpRefs[i].UpRefTy;
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.h b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
index d14b1cb..eec524a 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
@@ -18,6 +18,7 @@
 #include "llvm/Module.h"
 #include "llvm/Type.h"
 #include <map>
+#include "llvm/Support/ValueHandle.h"
 
 namespace llvm {
   class Module;
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp b/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp
index b8e720a..45ed61a 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp
@@ -353,7 +353,7 @@ void Deserializer::RegisterPtr(const SerializedPtrID& PtrId,
   assert (!HasFinalPtr(E) && "Pointer already registered.");
 
 #ifdef DEBUG_BACKPATCH
-  errs() << "RegisterPtr: " << PtrId << " => " << Ptr << "\n";
+  dbgs() << "RegisterPtr: " << PtrId << " => " << Ptr << "\n";
 #endif 
   
   SetPtr(E,Ptr);
@@ -373,7 +373,7 @@ void Deserializer::ReadUIntPtr(uintptr_t& PtrRef,
     PtrRef = GetFinalPtr(E);
 
 #ifdef DEBUG_BACKPATCH
-    errs() << "ReadUintPtr: " << PtrId
+    dbgs() << "ReadUintPtr: " << PtrId
            << " <-- " <<  (void*) GetFinalPtr(E) << '\n';
 #endif    
   }
@@ -382,7 +382,7 @@ void Deserializer::ReadUIntPtr(uintptr_t& PtrRef,
             "Client forbids backpatching for this pointer.");
     
 #ifdef DEBUG_BACKPATCH
-    errs() << "ReadUintPtr: " << PtrId << " (NO PTR YET)\n";
+    dbgs() << "ReadUintPtr: " << PtrId << " (NO PTR YET)\n";
 #endif
     
     // Register backpatch.  Check the freelist for a BPNode.
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
index af0b8ac..ab514d2 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
@@ -562,7 +562,7 @@ static void WriteMetadataAttachment(const Function &F,
   // Write metadata attachments
   // METADATA_ATTACHMENT - [m x [value, [n x [id, mdnode]]]
   MetadataContext &TheMetadata = F.getContext().getMetadata();
-  typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+  typedef SmallVector<std::pair<unsigned, MDNode*>, 2> MDMapTy;
   MDMapTy MDs;
   for (Function::const_iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
     for (BasicBlock::const_iterator I = BB->begin(), E = BB->end();
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/Serialize.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/Serialize.cpp
index a6beb17..24bf66f 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/Serialize.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/Serialize.cpp
@@ -83,7 +83,7 @@ SerializedPtrID Serializer::getPtrId(const void* ptr) {
   if (I == PtrMap.end()) {
     unsigned id = PtrMap.size()+1;
 #ifdef DEBUG_BACKPATCH
-    errs() << "Registered PTR: " << ptr << " => " << id << "\n";
+    dbgs() << "Registered PTR: " << ptr << " => " << id << "\n";
 #endif
     PtrMap[ptr] = id;
     return id;
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
index d840d4a..29c6d37 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
@@ -88,7 +88,7 @@ ValueEnumerator::ValueEnumerator(const Module *M) {
       EnumerateType(I->getType());
 
     MetadataContext &TheMetadata = F->getContext().getMetadata();
-    typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+    typedef SmallVector<std::pair<unsigned, MDNode*>, 2> MDMapTy;
     MDMapTy MDs;
     for (Function::const_iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
       for (BasicBlock::const_iterator I = BB->begin(), E = BB->end(); I!=E;++I){
@@ -226,11 +226,8 @@ void ValueEnumerator::EnumerateMetadata(const MetadataBase *MD) {
   }
   
   if (const NamedMDNode *N = dyn_cast<NamedMDNode>(MD)) {
-    for(NamedMDNode::const_elem_iterator I = N->elem_begin(),
-          E = N->elem_end(); I != E; ++I) {
-      MetadataBase *M = *I;
-      EnumerateValue(M);
-    }
+    for (unsigned i = 0, e = N->getNumElements(); i != e; ++i)
+      EnumerateValue(N->getElement(i));
     MDValues.push_back(std::make_pair(MD, 1U));
     MDValueMap[MD] = Values.size();
     return;
diff --git a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
index bb61682..761fbc6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
@@ -127,11 +127,11 @@ AggressiveAntiDepBreaker(MachineFunction& MFi,
       CriticalPathSet |= CPSet;
    }
  
-  DEBUG(errs() << "AntiDep Critical-Path Registers:");
+  DEBUG(dbgs() << "AntiDep Critical-Path Registers:");
   DEBUG(for (int r = CriticalPathSet.find_first(); r != -1; 
              r = CriticalPathSet.find_next(r))
-          errs() << " " << TRI->getName(r));
-  DEBUG(errs() << '\n');
+          dbgs() << " " << TRI->getName(r));
+  DEBUG(dbgs() << '\n');
 }
 
 AggressiveAntiDepBreaker::~AggressiveAntiDepBreaker() {
@@ -218,9 +218,9 @@ void AggressiveAntiDepBreaker::Observe(MachineInstr *MI, unsigned Count,
   PrescanInstruction(MI, Count, PassthruRegs);
   ScanInstruction(MI, Count);
 
-  DEBUG(errs() << "Observe: ");
+  DEBUG(dbgs() << "Observe: ");
   DEBUG(MI->dump());
-  DEBUG(errs() << "\tRegs:");
+  DEBUG(dbgs() << "\tRegs:");
 
   unsigned *DefIndices = State->GetDefIndices();
   for (unsigned Reg = 0; Reg != TRI->getNumRegs(); ++Reg) {
@@ -232,14 +232,14 @@ void AggressiveAntiDepBreaker::Observe(MachineInstr *MI, unsigned Count,
     // schedule region).
     if (State->IsLive(Reg)) {
       DEBUG(if (State->GetGroup(Reg) != 0)
-              errs() << " " << TRI->getName(Reg) << "=g" << 
+              dbgs() << " " << TRI->getName(Reg) << "=g" << 
                 State->GetGroup(Reg) << "->g0(region live-out)");
       State->UnionGroups(Reg, 0);
     } else if ((DefIndices[Reg] < InsertPosIndex) && (DefIndices[Reg] >= Count)) {
       DefIndices[Reg] = Count;
     }
   }
-  DEBUG(errs() << '\n');
+  DEBUG(dbgs() << '\n');
 }
 
 bool AggressiveAntiDepBreaker::IsImplicitDefUse(MachineInstr *MI,
@@ -333,8 +333,8 @@ void AggressiveAntiDepBreaker::HandleLastUse(unsigned Reg, unsigned KillIdx,
     RegRefs.erase(Reg);
     State->LeaveGroup(Reg);
     DEBUG(if (header != NULL) {
-        errs() << header << TRI->getName(Reg); header = NULL; });
-    DEBUG(errs() << "->g" << State->GetGroup(Reg) << tag);
+        dbgs() << header << TRI->getName(Reg); header = NULL; });
+    DEBUG(dbgs() << "->g" << State->GetGroup(Reg) << tag);
   }
   // Repeat for subregisters.
   for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
@@ -346,13 +346,13 @@ void AggressiveAntiDepBreaker::HandleLastUse(unsigned Reg, unsigned KillIdx,
       RegRefs.erase(SubregReg);
       State->LeaveGroup(SubregReg);
       DEBUG(if (header != NULL) {
-          errs() << header << TRI->getName(Reg); header = NULL; });
-      DEBUG(errs() << " " << TRI->getName(SubregReg) << "->g" <<
+          dbgs() << header << TRI->getName(Reg); header = NULL; });
+      DEBUG(dbgs() << " " << TRI->getName(SubregReg) << "->g" <<
             State->GetGroup(SubregReg) << tag);
     }
   }
 
-  DEBUG(if ((header == NULL) && (footer != NULL)) errs() << footer);
+  DEBUG(if ((header == NULL) && (footer != NULL)) dbgs() << footer);
 }
 
 void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI, unsigned Count,
@@ -375,20 +375,20 @@ void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI, unsigned Cou
     HandleLastUse(Reg, Count + 1, "", "\tDead Def: ", "\n");
   }
 
-  DEBUG(errs() << "\tDef Groups:");
+  DEBUG(dbgs() << "\tDef Groups:");
   for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
     MachineOperand &MO = MI->getOperand(i);
     if (!MO.isReg() || !MO.isDef()) continue;
     unsigned Reg = MO.getReg();
     if (Reg == 0) continue;
 
-    DEBUG(errs() << " " << TRI->getName(Reg) << "=g" << State->GetGroup(Reg)); 
+    DEBUG(dbgs() << " " << TRI->getName(Reg) << "=g" << State->GetGroup(Reg)); 
 
     // If MI's defs have a special allocation requirement, don't allow
     // any def registers to be changed. Also assume all registers
     // defined in a call must not be changed (ABI).
     if (MI->getDesc().isCall() || MI->getDesc().hasExtraDefRegAllocReq()) {
-      DEBUG(if (State->GetGroup(Reg) != 0) errs() << "->g0(alloc-req)");
+      DEBUG(if (State->GetGroup(Reg) != 0) dbgs() << "->g0(alloc-req)");
       State->UnionGroups(Reg, 0);
     }
 
@@ -398,7 +398,7 @@ void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI, unsigned Cou
       unsigned AliasReg = *Alias;
       if (State->IsLive(AliasReg)) {
         State->UnionGroups(Reg, AliasReg);
-        DEBUG(errs() << "->g" << State->GetGroup(Reg) << "(via " << 
+        DEBUG(dbgs() << "->g" << State->GetGroup(Reg) << "(via " << 
               TRI->getName(AliasReg) << ")");
       }
     }
@@ -411,7 +411,7 @@ void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI, unsigned Cou
     RegRefs.insert(std::make_pair(Reg, RR));
   }
 
-  DEBUG(errs() << '\n');
+  DEBUG(dbgs() << '\n');
 
   // Scan the register defs for this instruction and update
   // live-ranges.
@@ -437,7 +437,7 @@ void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI, unsigned Cou
 
 void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
                                            unsigned Count) {
-  DEBUG(errs() << "\tUse Groups:");
+  DEBUG(dbgs() << "\tUse Groups:");
   std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>& 
     RegRefs = State->GetRegRefs();
 
@@ -449,7 +449,7 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
     unsigned Reg = MO.getReg();
     if (Reg == 0) continue;
     
-    DEBUG(errs() << " " << TRI->getName(Reg) << "=g" << 
+    DEBUG(dbgs() << " " << TRI->getName(Reg) << "=g" << 
           State->GetGroup(Reg)); 
 
     // It wasn't previously live but now it is, this is a kill. Forget
@@ -461,7 +461,7 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
     // any use registers to be changed. Also assume all registers
     // used in a call must not be changed (ABI).
     if (MI->getDesc().isCall() || MI->getDesc().hasExtraSrcRegAllocReq()) {
-      DEBUG(if (State->GetGroup(Reg) != 0) errs() << "->g0(alloc-req)");
+      DEBUG(if (State->GetGroup(Reg) != 0) dbgs() << "->g0(alloc-req)");
       State->UnionGroups(Reg, 0);
     }
 
@@ -473,12 +473,12 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
     RegRefs.insert(std::make_pair(Reg, RR));
   }
   
-  DEBUG(errs() << '\n');
+  DEBUG(dbgs() << '\n');
 
   // Form a group of all defs and uses of a KILL instruction to ensure
   // that all registers are renamed as a group.
   if (MI->getOpcode() == TargetInstrInfo::KILL) {
-    DEBUG(errs() << "\tKill Group:");
+    DEBUG(dbgs() << "\tKill Group:");
 
     unsigned FirstReg = 0;
     for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
@@ -488,15 +488,15 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
       if (Reg == 0) continue;
       
       if (FirstReg != 0) {
-        DEBUG(errs() << "=" << TRI->getName(Reg));
+        DEBUG(dbgs() << "=" << TRI->getName(Reg));
         State->UnionGroups(FirstReg, Reg);
       } else {
-        DEBUG(errs() << " " << TRI->getName(Reg));
+        DEBUG(dbgs() << " " << TRI->getName(Reg));
         FirstReg = Reg;
       }
     }
   
-    DEBUG(errs() << "->g" << State->GetGroup(FirstReg) << '\n');
+    DEBUG(dbgs() << "->g" << State->GetGroup(FirstReg) << '\n');
   }
 }
 
@@ -525,7 +525,7 @@ BitVector AggressiveAntiDepBreaker::GetRenameRegisters(unsigned Reg) {
       BV &= RCBV;
     }
 
-    DEBUG(errs() << " " << RC->getName());
+    DEBUG(dbgs() << " " << RC->getName());
   }
   
   return BV;
@@ -552,7 +552,7 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
   // Find the "superest" register in the group. At the same time,
   // collect the BitVector of registers that can be used to rename
   // each register.
-  DEBUG(errs() << "\tRename Candidates for Group g" << AntiDepGroupIndex << ":\n");
+  DEBUG(dbgs() << "\tRename Candidates for Group g" << AntiDepGroupIndex << ":\n");
   std::map<unsigned, BitVector> RenameRegisterMap;
   unsigned SuperReg = 0;
   for (unsigned i = 0, e = Regs.size(); i != e; ++i) {
@@ -562,15 +562,15 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
 
     // If Reg has any references, then collect possible rename regs
     if (RegRefs.count(Reg) > 0) {
-      DEBUG(errs() << "\t\t" << TRI->getName(Reg) << ":");
+      DEBUG(dbgs() << "\t\t" << TRI->getName(Reg) << ":");
     
       BitVector BV = GetRenameRegisters(Reg);
       RenameRegisterMap.insert(std::pair<unsigned, BitVector>(Reg, BV));
 
-      DEBUG(errs() << " ::");
+      DEBUG(dbgs() << " ::");
       DEBUG(for (int r = BV.find_first(); r != -1; r = BV.find_next(r))
-              errs() << " " << TRI->getName(r));
-      DEBUG(errs() << "\n");
+              dbgs() << " " << TRI->getName(r));
+      DEBUG(dbgs() << "\n");
     }
   }
 
@@ -591,7 +591,7 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
     if (renamecnt++ % DebugDiv != DebugMod)
       return false;
     
-    errs() << "*** Performing rename " << TRI->getName(SuperReg) <<
+    dbgs() << "*** Performing rename " << TRI->getName(SuperReg) <<
       " for debug ***\n";
   }
 #endif
@@ -606,11 +606,11 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
   const TargetRegisterClass::iterator RB = SuperRC->allocation_order_begin(MF);
   const TargetRegisterClass::iterator RE = SuperRC->allocation_order_end(MF);
   if (RB == RE) {
-    DEBUG(errs() << "\tEmpty Super Regclass!!\n");
+    DEBUG(dbgs() << "\tEmpty Super Regclass!!\n");
     return false;
   }
 
-  DEBUG(errs() << "\tFind Registers:");
+  DEBUG(dbgs() << "\tFind Registers:");
 
   if (RenameOrder.count(SuperRC) == 0)
     RenameOrder.insert(RenameOrderType::value_type(SuperRC, RE));
@@ -625,7 +625,7 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
     // Don't replace a register with itself.
     if (NewSuperReg == SuperReg) continue;
     
-    DEBUG(errs() << " [" << TRI->getName(NewSuperReg) << ':');
+    DEBUG(dbgs() << " [" << TRI->getName(NewSuperReg) << ':');
     RenameMap.clear();
 
     // For each referenced group register (which must be a SuperReg or
@@ -642,12 +642,12 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
           NewReg = TRI->getSubReg(NewSuperReg, NewSubRegIdx);
       }
 
-      DEBUG(errs() << " " << TRI->getName(NewReg));
+      DEBUG(dbgs() << " " << TRI->getName(NewReg));
       
       // Check if Reg can be renamed to NewReg.
       BitVector BV = RenameRegisterMap[Reg];
       if (!BV.test(NewReg)) {
-        DEBUG(errs() << "(no rename)");
+        DEBUG(dbgs() << "(no rename)");
         goto next_super_reg;
       }
 
@@ -656,7 +656,7 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
       // must also check all aliases of NewReg, because we can't define a
       // register when any sub or super is already live.
       if (State->IsLive(NewReg) || (KillIndices[Reg] > DefIndices[NewReg])) {
-        DEBUG(errs() << "(live)");
+        DEBUG(dbgs() << "(live)");
         goto next_super_reg;
       } else {
         bool found = false;
@@ -664,7 +664,7 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
              *Alias; ++Alias) {
           unsigned AliasReg = *Alias;
           if (State->IsLive(AliasReg) || (KillIndices[Reg] > DefIndices[AliasReg])) {
-            DEBUG(errs() << "(alias " << TRI->getName(AliasReg) << " live)");
+            DEBUG(dbgs() << "(alias " << TRI->getName(AliasReg) << " live)");
             found = true;
             break;
           }
@@ -681,14 +681,14 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
     // renamed, as recorded in RenameMap.
     RenameOrder.erase(SuperRC);
     RenameOrder.insert(RenameOrderType::value_type(SuperRC, R));
-    DEBUG(errs() << "]\n");
+    DEBUG(dbgs() << "]\n");
     return true;
 
   next_super_reg:
-    DEBUG(errs() << ']');
+    DEBUG(dbgs() << ']');
   } while (R != EndR);
 
-  DEBUG(errs() << '\n');
+  DEBUG(dbgs() << '\n');
 
   // No registers are free and available!
   return false;
@@ -740,13 +740,13 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
   }
 
 #ifndef NDEBUG 
-  DEBUG(errs() << "\n===== Aggressive anti-dependency breaking\n");
-  DEBUG(errs() << "Available regs:");
+  DEBUG(dbgs() << "\n===== Aggressive anti-dependency breaking\n");
+  DEBUG(dbgs() << "Available regs:");
   for (unsigned Reg = 0; Reg < TRI->getNumRegs(); ++Reg) {
     if (!State->IsLive(Reg))
-      DEBUG(errs() << " " << TRI->getName(Reg));
+      DEBUG(dbgs() << " " << TRI->getName(Reg));
   }
-  DEBUG(errs() << '\n');
+  DEBUG(dbgs() << '\n');
 #endif
 
   // Attempt to break anti-dependence edges. Walk the instructions
@@ -758,7 +758,7 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
        I != E; --Count) {
     MachineInstr *MI = --I;
 
-    DEBUG(errs() << "Anti: ");
+    DEBUG(dbgs() << "Anti: ");
     DEBUG(MI->dump());
 
     std::set<unsigned> PassthruRegs;
@@ -795,30 +795,30 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
             (Edge->getKind() != SDep::Output)) continue;
         
         unsigned AntiDepReg = Edge->getReg();
-        DEBUG(errs() << "\tAntidep reg: " << TRI->getName(AntiDepReg));
+        DEBUG(dbgs() << "\tAntidep reg: " << TRI->getName(AntiDepReg));
         assert(AntiDepReg != 0 && "Anti-dependence on reg0?");
         
         if (!AllocatableSet.test(AntiDepReg)) {
           // Don't break anti-dependencies on non-allocatable registers.
-          DEBUG(errs() << " (non-allocatable)\n");
+          DEBUG(dbgs() << " (non-allocatable)\n");
           continue;
         } else if ((ExcludeRegs != NULL) && ExcludeRegs->test(AntiDepReg)) {
           // Don't break anti-dependencies for critical path registers
           // if not on the critical path
-          DEBUG(errs() << " (not critical-path)\n");
+          DEBUG(dbgs() << " (not critical-path)\n");
           continue;
         } else if (PassthruRegs.count(AntiDepReg) != 0) {
           // If the anti-dep register liveness "passes-thru", then
           // don't try to change it. It will be changed along with
           // the use if required to break an earlier antidep.
-          DEBUG(errs() << " (passthru)\n");
+          DEBUG(dbgs() << " (passthru)\n");
           continue;
         } else {
           // No anti-dep breaking for implicit deps
           MachineOperand *AntiDepOp = MI->findRegisterDefOperand(AntiDepReg);
           assert(AntiDepOp != NULL && "Can't find index for defined register operand");
           if ((AntiDepOp == NULL) || AntiDepOp->isImplicit()) {
-            DEBUG(errs() << " (implicit)\n");
+            DEBUG(dbgs() << " (implicit)\n");
             continue;
           }
           
@@ -844,13 +844,13 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
                  PE = PathSU->Preds.end(); P != PE; ++P) {
             if ((P->getSUnit() == NextSU) && (P->getKind() != SDep::Anti) &&
                 (P->getKind() != SDep::Output)) {
-              DEBUG(errs() << " (real dependency)\n");
+              DEBUG(dbgs() << " (real dependency)\n");
               AntiDepReg = 0;
               break;
             } else if ((P->getSUnit() != NextSU) && 
                        (P->getKind() == SDep::Data) && 
                        (P->getReg() == AntiDepReg)) {
-              DEBUG(errs() << " (other dependency)\n");
+              DEBUG(dbgs() << " (other dependency)\n");
               AntiDepReg = 0;
               break;
             }
@@ -865,16 +865,16 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
         // Determine AntiDepReg's register group.
         const unsigned GroupIndex = State->GetGroup(AntiDepReg);
         if (GroupIndex == 0) {
-          DEBUG(errs() << " (zero group)\n");
+          DEBUG(dbgs() << " (zero group)\n");
           continue;
         }
         
-        DEBUG(errs() << '\n');
+        DEBUG(dbgs() << '\n');
         
         // Look for a suitable register to use to break the anti-dependence.
         std::map<unsigned, unsigned> RenameMap;
         if (FindSuitableFreeRegisters(GroupIndex, RenameOrder, RenameMap)) {
-          DEBUG(errs() << "\tBreaking anti-dependence edge on "
+          DEBUG(dbgs() << "\tBreaking anti-dependence edge on "
                 << TRI->getName(AntiDepReg) << ":");
           
           // Handle each group register...
@@ -883,7 +883,7 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
             unsigned CurrReg = S->first;
             unsigned NewReg = S->second;
             
-            DEBUG(errs() << " " << TRI->getName(CurrReg) << "->" << 
+            DEBUG(dbgs() << " " << TRI->getName(CurrReg) << "->" << 
                   TRI->getName(NewReg) << "(" <<  
                   RegRefs.count(CurrReg) << " refs)");
             
@@ -917,7 +917,7 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
           }
           
           ++Broken;
-          DEBUG(errs() << '\n');
+          DEBUG(dbgs() << '\n');
         }
       }
     }
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 44fd176..6b24e24 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -236,7 +236,7 @@ namespace {
     const MCSection *S;
     unsigned Alignment;
     SmallVector<unsigned, 4> CPEs;
-    SectionCPs(const MCSection *s, unsigned a) : S(s), Alignment(a) {};
+    SectionCPs(const MCSection *s, unsigned a) : S(s), Alignment(a) {}
   };
 }
 
@@ -1905,7 +1905,6 @@ void AsmPrinter::EmitComments(const MachineInstr &MI) const {
       if (Newline) O << '\n';
       O.PadToColumn(MAI->getCommentColumn());
       O << MAI->getCommentString() << " Reload Reuse";
-      Newline = true;
     }
   }
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
index 0e93b98..b85e11a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
@@ -16,6 +16,7 @@
 #include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/MC/MCAsmInfo.h"
 #include "llvm/Target/TargetData.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/Format.h"
 using namespace llvm;
@@ -93,7 +94,7 @@ void DIEAbbrev::print(raw_ostream &O) {
       << '\n';
   }
 }
-void DIEAbbrev::dump() { print(errs()); }
+void DIEAbbrev::dump() { print(dbgs()); }
 #endif
 
 //===----------------------------------------------------------------------===//
@@ -164,14 +165,14 @@ void DIE::print(raw_ostream &O, unsigned IncIndent) {
 }
 
 void DIE::dump() {
-  print(errs());
+  print(dbgs());
 }
 #endif
 
 
 #ifndef NDEBUG
 void DIEValue::dump() {
-  print(errs());
+  print(dbgs());
 }
 #endif
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
index cad8b89..a6dc9b6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
@@ -68,6 +68,7 @@ namespace llvm {
     /// Data - Raw data bytes for abbreviation.
     ///
     SmallVector<DIEAbbrevData, 8> Data;
+
   public:
     DIEAbbrev(unsigned T, unsigned C) : Tag(T), ChildrenFlag(C), Data() {}
     virtual ~DIEAbbrev() {}
@@ -131,19 +132,18 @@ namespace llvm {
     ///
     std::vector<DIE *> Children;
 
+    DIE *Parent;
+
     /// Attributes values.
     ///
     SmallVector<DIEValue*, 32> Values;
 
-    /// Abstract compile unit.
-    CompileUnit *AbstractCU;
-    
     // Private data for print()
     mutable unsigned IndentCount;
   public:
     explicit DIE(unsigned Tag)
       : Abbrev(Tag, dwarf::DW_CHILDREN_no), Offset(0),
-        Size(0), IndentCount(0) {}
+        Size(0), Parent (0), IndentCount(0) {}
     virtual ~DIE();
 
     // Accessors.
@@ -154,13 +154,12 @@ namespace llvm {
     unsigned getSize() const { return Size; }
     const std::vector<DIE *> &getChildren() const { return Children; }
     SmallVector<DIEValue*, 32> &getValues() { return Values; }
-    CompileUnit *getAbstractCompileUnit() const { return AbstractCU; }
-
+    DIE *getParent() const { return Parent; }
     void setTag(unsigned Tag) { Abbrev.setTag(Tag); }
     void setOffset(unsigned O) { Offset = O; }
     void setSize(unsigned S) { Size = S; }
-    void setAbstractCompileUnit(CompileUnit *CU) { AbstractCU = CU; }
-
+    void setParent(DIE *P) { Parent = P; }
+    
     /// addValue - Add a value and attributes to a DIE.
     ///
     void addValue(unsigned Attribute, unsigned Form, DIEValue *Value) {
@@ -179,8 +178,13 @@ namespace llvm {
     /// addChild - Add a child to the DIE.
     ///
     void addChild(DIE *Child) {
+      if (Child->getParent()) {
+        assert (Child->getParent() == this && "Unexpected DIE Parent!");
+        return;
+      }
       Abbrev.setChildrenFlag(dwarf::DW_CHILDREN_yes);
       Children.push_back(Child);
+      Child->setParent(this);
     }
 
 #ifndef NDEBUG
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index c200a46..8a3ceb6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -30,11 +30,6 @@
 #include "llvm/System/Path.h"
 using namespace llvm;
 
-static TimerGroup &getDwarfTimerGroup() {
-  static TimerGroup DwarfTimerGroup("Dwarf Debugging");
-  return DwarfTimerGroup;
-}
-
 //===----------------------------------------------------------------------===//
 
 /// Configuration values for initial hash set sizes (log2).
@@ -112,7 +107,12 @@ public:
 
   /// getDIEEntry - Returns the debug information entry for the speciefied
   /// debug variable.
-  DIEEntry *getDIEEntry(MDNode *N) { return GVToDIEEntryMap.lookup(N); }
+  DIEEntry *getDIEEntry(MDNode *N) { 
+    ValueMap<MDNode *, DIEEntry *>::iterator I = GVToDIEEntryMap.find(N);
+    if (I == GVToDIEEntryMap.end())
+      return NULL;
+    return I->second;
+  }
 
   /// insertDIEEntry - Insert debug information entry into the map.
   void insertDIEEntry(MDNode *N, DIEEntry *E) {
@@ -234,7 +234,7 @@ public:
 
 #ifndef NDEBUG
 void DbgScope::dump() const {
-  raw_ostream &err = errs();
+  raw_ostream &err = dbgs();
   err.indent(IndentLevel);
   MDNode *N = Desc.getNode();
   N->dump();
@@ -269,8 +269,7 @@ DwarfDebug::DwarfDebug(raw_ostream &OS, AsmPrinter *A, const MCAsmInfo *T)
     SectionSourceLines(), didInitial(false), shouldEmit(false),
     CurrentFnDbgScope(0), DebugTimer(0) {
   if (TimePassesIsEnabled)
-    DebugTimer = new Timer("Dwarf Debug Writer",
-                           getDwarfTimerGroup());
+    DebugTimer = new Timer("Dwarf Debug Writer");
 }
 DwarfDebug::~DwarfDebug() {
   for (unsigned j = 0, M = DIEValues.size(); j < M; ++j)
@@ -446,6 +445,23 @@ void DwarfDebug::addSourceLine(DIE *Die, const DIType *Ty) {
   addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
 }
 
+/// addSourceLine - Add location information to specified debug information
+/// entry.
+void DwarfDebug::addSourceLine(DIE *Die, const DINameSpace *NS) {
+  // If there is no compile unit specified, don't add a line #.
+  if (NS->getCompileUnit().isNull())
+    return;
+
+  unsigned Line = NS->getLineNumber();
+  StringRef FN = NS->getFilename();
+  StringRef Dir = NS->getDirectory();
+
+  unsigned FileID = GetOrCreateSourceID(Dir, FN);
+  assert(FileID && "Invalid file id");
+  addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
+  addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
+}
+
 /* Byref variables, in Blocks, are declared by the programmer as
    "SomeType VarName;", but the compiler creates a
    __Block_byref_x_VarName struct, and gives the variable VarName
@@ -745,6 +761,9 @@ void DwarfDebug::addToContextOwner(DIE *Die, DIDescriptor Context) {
   else if (Context.isType()) {
     DIE *ContextDIE = getOrCreateTypeDIE(DIType(Context.getNode()));
     ContextDIE->addChild(Die);
+  } else if (Context.isNameSpace()) {
+    DIE *ContextDIE = getOrCreateNameSpace(DINameSpace(Context.getNode()));
+    ContextDIE->addChild(Die);
   } else if (DIE *ContextDIE = ModuleCU->getDIE(Context.getNode()))
     ContextDIE->addChild(Die);
   else 
@@ -781,7 +800,6 @@ void DwarfDebug::addType(DIE *Entity, DIType Ty) {
 
   // Check for pre-existence.
   DIEEntry *Entry = ModuleCU->getDIEEntry(Ty.getNode());
-
   // If it exists then use the existing value.
   if (Entry) {
     Entity->addValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Entry);
@@ -1030,13 +1048,6 @@ DIE *DwarfDebug::createGlobalVariableDIE(const DIGlobalVariable &GV) {
     addUInt(GVDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
   addSourceLine(GVDie, &GV);
 
-  // Add address.
-  DIEBlock *Block = new DIEBlock();
-  addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_addr);
-  addObjectLabel(Block, 0, dwarf::DW_FORM_udata,
-                 Asm->Mang->getMangledName(GV.getGlobal()));
-  addBlock(GVDie, dwarf::DW_AT_location, 0, Block);
-
   return GVDie;
 }
 
@@ -1285,7 +1296,6 @@ DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
    SPDie = new DIE(dwarf::DW_TAG_subprogram);
    addDIEEntry(SPDie, dwarf::DW_AT_specification, dwarf::DW_FORM_ref4, 
                SPDeclDie);
-   
    ModuleCU->addDie(SPDie);
  }
    
@@ -1559,6 +1569,20 @@ unsigned DwarfDebug::GetOrCreateSourceID(StringRef DirName, StringRef FileName)
   return SrcId;
 }
 
+/// getOrCreateNameSpace - Create a DIE for DINameSpace.
+DIE *DwarfDebug::getOrCreateNameSpace(DINameSpace NS) {
+  DIE *NDie = ModuleCU->getDIE(NS.getNode());
+  if (NDie)
+    return NDie;
+  NDie = new DIE(dwarf::DW_TAG_namespace);
+  ModuleCU->insertDIE(NS.getNode(), NDie);
+  if (!NS.getName().empty())
+    addString(NDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, NS.getName());
+  addSourceLine(NDie, &NS);
+  addToContextOwner(NDie, NS.getContext());
+  return NDie;
+}
+
 CompileUnit *DwarfDebug::constructCompileUnit(MDNode *N) {
   DICompileUnit DIUnit(N);
   StringRef FN = DIUnit.getFilename();
@@ -1620,6 +1644,25 @@ void DwarfDebug::constructGlobalVariableDIE(MDNode *N) {
   ModuleCU->insertDIE(N, VariableDie);
 
   // Add to context owner.
+  if (DI_GV.isDefinition() 
+      && !DI_GV.getContext().isCompileUnit()) {
+    // Create specification DIE.
+    DIE *VariableSpecDIE = new DIE(dwarf::DW_TAG_variable);
+    addDIEEntry(VariableSpecDIE, dwarf::DW_AT_specification,
+                dwarf::DW_FORM_ref4, VariableDie);
+    DIEBlock *Block = new DIEBlock();
+    addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_addr);
+    addObjectLabel(Block, 0, dwarf::DW_FORM_udata,
+                   Asm->Mang->getMangledName(DI_GV.getGlobal()));
+    addBlock(VariableSpecDIE, dwarf::DW_AT_location, 0, Block);
+    ModuleCU->addDie(VariableSpecDIE);
+  } else {
+    DIEBlock *Block = new DIEBlock();
+    addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_addr);
+    addObjectLabel(Block, 0, dwarf::DW_FORM_udata,
+                   Asm->Mang->getMangledName(DI_GV.getGlobal()));
+    addBlock(VariableDie, dwarf::DW_AT_location, 0, Block);
+  }
   addToContextOwner(VariableDie, DI_GV.getContext());
   
   // Expose as global. FIXME - need to check external flag.
@@ -1652,9 +1695,7 @@ void DwarfDebug::constructSubprogramDIE(MDNode *N) {
   ModuleCU->insertDIE(N, SubprogramDie);
 
   // Add to context owner.
-  if (SP.getContext().getNode() == SP.getCompileUnit().getNode())
-    if (TopLevelDIEs.insert(SubprogramDie))
-      TopLevelDIEsVector.push_back(SubprogramDie);
+  addToContextOwner(SubprogramDie, SP.getContext());
 
   // Expose as global.
   ModuleCU->addGlobal(SP.getName(), SubprogramDie);
@@ -2365,7 +2406,6 @@ void DwarfDebug::emitDebugInfo() {
   EmitLabel("info_end", ModuleCU->getID());
 
   Asm->EOL();
-
 }
 
 /// emitAbbreviations - Emit the abbreviation section.
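The getOrCreateNameSpace hunk above follows the usual get-or-create memoization pattern for DIEs: look the metadata node up in the compile unit's map, and only build (and register) a new DIE on a miss. A minimal standalone sketch of that pattern — `Node`, `DIE`, and `CompileUnitMap` are stand-ins of ours, not the real LLVM classes:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-ins for MDNode and DIE; the real LLVM types are far richer.
struct Node { std::string name; };
struct DIE  { std::string tag; };

// Maps a metadata node to its unique debug-info entry, creating on demand.
// (std::map node handles are stable, so returned pointers stay valid.)
struct CompileUnitMap {
    std::map<const Node*, DIE> dies;
    int creations = 0;

    DIE *getOrCreate(const Node *n) {
        auto it = dies.find(n);
        if (it != dies.end())
            return &it->second;                 // fast path: already built
        // Register the DIE before filling in attributes, so recursive
        // lookups through context owners find it instead of looping.
        DIE *d = &dies.emplace(n, DIE{"DW_TAG_namespace"}).first->second;
        ++creations;
        return d;
    }
};
```

The patch does the same thing: it calls ModuleCU->insertDIE before addToContextOwner, so that a lookup triggered while resolving the namespace's own context already finds the half-built DIE.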
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
index 12ad322..2b8164e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
@@ -285,6 +285,7 @@ class DwarfDebug : public Dwarf {
   void addSourceLine(DIE *Die, const DIGlobal *G);
   void addSourceLine(DIE *Die, const DISubprogram *SP);
   void addSourceLine(DIE *Die, const DIType *Ty);
+  void addSourceLine(DIE *Die, const DINameSpace *NS);
 
   /// addAddress - Add an address attribute to a die based on the location
   /// provided.
@@ -315,6 +316,10 @@ class DwarfDebug : public Dwarf {
   /// addType - Add a new type attribute to the specified entity.
   void addType(DIE *Entity, DIType Ty);
 
+ 
+  /// getOrCreateNameSpace - Create a DIE for DINameSpace.
+  DIE *getOrCreateNameSpace(DINameSpace NS);
+
   /// getOrCreateTypeDIE - Find existing DIE or create new DIE for the
   /// given DIType.
   DIE *getOrCreateTypeDIE(DIType Ty);
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
index 3fd077f..d01f300 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
@@ -35,19 +35,13 @@
 #include "llvm/ADT/StringExtras.h"
 using namespace llvm;
 
-static TimerGroup &getDwarfTimerGroup() {
-  static TimerGroup DwarfTimerGroup("DWARF Exception");
-  return DwarfTimerGroup;
-}
-
 DwarfException::DwarfException(raw_ostream &OS, AsmPrinter *A,
                                const MCAsmInfo *T)
   : Dwarf(OS, A, T, "eh"), shouldEmitTable(false), shouldEmitMoves(false),
     shouldEmitTableModule(false), shouldEmitMovesModule(false),
     ExceptionTimer(0) {
   if (TimePassesIsEnabled)
-    ExceptionTimer = new Timer("DWARF Exception Writer",
-                               getDwarfTimerGroup());
+    ExceptionTimer = new Timer("DWARF Exception Writer");
 }
 
 DwarfException::~DwarfException() {
@@ -292,13 +286,14 @@ void DwarfException::EmitFDE(const FunctionEHFrameInfo &EHFrameInfo) {
       Asm->EmitULEB128Bytes(is4Byte ? 4 : 8);
       Asm->EOL("Augmentation size");
 
-      // We force 32-bits here because we've encoded our LSDA in the CIE with
-      // `dwarf::DW_EH_PE_sdata4'. And the CIE and FDE should agree.
       if (EHFrameInfo.hasLandingPads)
-        EmitReference("exception", EHFrameInfo.Number, true, true);
-      else
-        Asm->EmitInt32((int)0);
-
+        EmitReference("exception", EHFrameInfo.Number, true, false);
+      else {
+        if (is4Byte)
+          Asm->EmitInt32((int)0);
+        else
+          Asm->EmitInt64((int)0);
+      }
       Asm->EOL("Language Specific Data Area");
     } else {
       Asm->EmitULEB128Bytes(0);
diff --git a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
index 3887e6d..92849d3 100644
--- a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
@@ -98,7 +98,7 @@ BranchFolder::BranchFolder(bool defaultEnableTailMerge) {
 /// function, updating the CFG.
 void BranchFolder::RemoveDeadBlock(MachineBasicBlock *MBB) {
   assert(MBB->pred_empty() && "MBB must be dead!");
-  DEBUG(errs() << "\nRemoving MBB: " << *MBB);
+  DEBUG(dbgs() << "\nRemoving MBB: " << *MBB);
 
   MachineFunction *MF = MBB->getParent();
   // drop all successors.
@@ -636,7 +636,7 @@ unsigned BranchFolder::CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
     SameTails[commonTailIndex].getTailStartPos();
   MachineBasicBlock *MBB = SameTails[commonTailIndex].getBlock();
 
-  DEBUG(errs() << "\nSplitting BB#" << MBB->getNumber() << ", size "
+  DEBUG(dbgs() << "\nSplitting BB#" << MBB->getNumber() << ", size "
                << maxCommonTailLength);
 
   MachineBasicBlock *newMBB = SplitMBBAt(*MBB, BBI);
@@ -666,18 +666,18 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
   // this many instructions in common.
   unsigned minCommonTailLength = TailMergeSize;
 
-  DEBUG(errs() << "\nTryTailMergeBlocks: ";
+  DEBUG(dbgs() << "\nTryTailMergeBlocks: ";
         for (unsigned i = 0, e = MergePotentials.size(); i != e; ++i)
-          errs() << "BB#" << MergePotentials[i].getBlock()->getNumber()
+          dbgs() << "BB#" << MergePotentials[i].getBlock()->getNumber()
                  << (i == e-1 ? "" : ", ");
-        errs() << "\n";
+        dbgs() << "\n";
         if (SuccBB) {
-          errs() << "  with successor BB#" << SuccBB->getNumber() << '\n';
+          dbgs() << "  with successor BB#" << SuccBB->getNumber() << '\n';
           if (PredBB)
-            errs() << "  which has fall-through from BB#"
+            dbgs() << "  which has fall-through from BB#"
                    << PredBB->getNumber() << "\n";
         }
-        errs() << "Looking for common tails of at least "
+        dbgs() << "Looking for common tails of at least "
                << minCommonTailLength << " instruction"
                << (minCommonTailLength == 1 ? "" : "s") << '\n';
        );
@@ -748,19 +748,19 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
     MachineBasicBlock *MBB = SameTails[commonTailIndex].getBlock();
     // MBB is common tail.  Adjust all other BB's to jump to this one.
     // Traversal must be forwards so erases work.
-    DEBUG(errs() << "\nUsing common tail in BB#" << MBB->getNumber()
+    DEBUG(dbgs() << "\nUsing common tail in BB#" << MBB->getNumber()
                  << " for ");
     for (unsigned int i=0, e = SameTails.size(); i != e; ++i) {
       if (commonTailIndex == i)
         continue;
-      DEBUG(errs() << "BB#" << SameTails[i].getBlock()->getNumber()
+      DEBUG(dbgs() << "BB#" << SameTails[i].getBlock()->getNumber()
                    << (i == e-1 ? "" : ", "));
       // Hack the end off BB i, making it jump to BB commonTailIndex instead.
       ReplaceTailWithBranchTo(SameTails[i].getTailStartPos(), MBB);
       // BB i is no longer a predecessor of SuccBB; remove it from the worklist.
       MergePotentials.erase(SameTails[i].getMPIter());
     }
-    DEBUG(errs() << "\n");
+    DEBUG(dbgs() << "\n");
     // We leave commonTailIndex in the worklist in case there are other blocks
     // that match it with a smaller number of instructions.
     MadeChange = true;
@@ -999,7 +999,7 @@ ReoptimizeBlock:
     if (PriorCond.empty() && !PriorTBB && MBB->pred_size() == 1 &&
         PrevBB.succ_size() == 1 &&
         !MBB->hasAddressTaken()) {
-      DEBUG(errs() << "\nMerging into block: " << PrevBB
+      DEBUG(dbgs() << "\nMerging into block: " << PrevBB
                    << "From MBB: " << *MBB);
       PrevBB.splice(PrevBB.end(), MBB, MBB->begin(), MBB->end());
       PrevBB.removeSuccessor(PrevBB.succ_begin());
@@ -1084,7 +1084,7 @@ ReoptimizeBlock:
         // Reverse the branch so we will fall through on the previous true cond.
         SmallVector<MachineOperand, 4> NewPriorCond(PriorCond);
         if (!TII->ReverseBranchCondition(NewPriorCond)) {
-          DEBUG(errs() << "\nMoving MBB: " << *MBB
+          DEBUG(dbgs() << "\nMoving MBB: " << *MBB
                        << "To make fallthrough to: " << *PriorTBB << "\n");
 
           TII->RemoveBranch(PrevBB);
@@ -1222,7 +1222,7 @@ ReoptimizeBlock:
         // Analyze the branch at the end of the pred.
         MachineBasicBlock *PredBB = *PI;
         MachineFunction::iterator PredFallthrough = PredBB; ++PredFallthrough;
-        MachineBasicBlock *PredTBB, *PredFBB;
+        MachineBasicBlock *PredTBB = 0, *PredFBB = 0;
         SmallVector<MachineOperand, 4> PredCond;
         if (PredBB != MBB && !PredBB->canFallThrough() &&
             !TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true)
@@ -1274,7 +1274,7 @@ ReoptimizeBlock:
       // Okay, there is no really great place to put this block.  If, however,
       // the block before this one would be a fall-through if this block were
       // removed, move this block to the end of the function.
-      MachineBasicBlock *PrevTBB, *PrevFBB;
+      MachineBasicBlock *PrevTBB = 0, *PrevFBB = 0;
       SmallVector<MachineOperand, 4> PrevCond;
       if (FallThrough != MF.end() &&
           !TII->AnalyzeBranch(PrevBB, PrevTBB, PrevFBB, PrevCond, true) &&
diff --git a/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp b/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
index dcffb8a..b8ef219 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
@@ -37,7 +37,7 @@ void CalculateSpillWeights::getAnalysisUsage(AnalysisUsage &au) const {
 
 bool CalculateSpillWeights::runOnMachineFunction(MachineFunction &fn) {
 
-  DEBUG(errs() << "********** Compute Spill Weights **********\n"
+  DEBUG(dbgs() << "********** Compute Spill Weights **********\n"
                << "********** Function: "
                << fn.getFunction()->getName() << '\n');
 
@@ -95,7 +95,7 @@ bool CalculateSpillWeights::runOnMachineFunction(MachineFunction &fn) {
           SlotIndex defIdx = lis->getInstructionIndex(mi).getDefIndex();
           const LiveRange *dlr =
             lis->getInterval(reg).getLiveRangeContaining(defIdx);
-          if (dlr->end > mbbEnd)
+          if (dlr->end >= mbbEnd)
             weight *= 3.0F;
         }
         regInt.weight += weight;
diff --git a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
index ff71f6b..126700b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
@@ -233,7 +233,6 @@ bool CodePlacementOpt::EliminateUnconditionalJumpsToTop(MachineFunction &MF,
       !BotHasFallthrough &&
       HasFallthrough(L->getBottomBlock())) {
     ++NumIntraElim;
-    BotHasFallthrough = true;
   }
 
   return Changed;
diff --git a/libclamav/c++/llvm/lib/CodeGen/ELF.h b/libclamav/c++/llvm/lib/CodeGen/ELF.h
index e303ebb..cb5a8c0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ELF.h
+++ b/libclamav/c++/llvm/lib/CodeGen/ELF.h
@@ -82,14 +82,14 @@ namespace llvm {
     const GlobalValue *getGlobalValue() const {
       assert(SourceType == isGV && "This is not a global value");
       return Source.GV;
-    };
+    }
 
     // getExternalSym - If this is an external symbol which originated the
     // elf symbol, return a reference to it.
     const char *getExternalSymbol() const {
       assert(SourceType == isExtSym && "This is not an external symbol");
       return Source.Ext;
-    };
+    }
 
     // getGV - From a global value return a elf symbol to represent it
     static ELFSym *getGV(const GlobalValue *GV, unsigned Bind,
diff --git a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
index 297dd31..d5fd051 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
@@ -83,7 +83,18 @@ LLVMTargetMachine::LLVMTargetMachine(const Target &T,
   AsmInfo = T.createAsmInfo(TargetTriple);
 }
 
+// Set the default code model for the JIT for a generic target.
+// FIXME: Is small right here? or .is64Bit() ? Large : Small?
+void
+LLVMTargetMachine::setCodeModelForJIT() {
+  setCodeModel(CodeModel::Small);
+}
 
+// Set the default code model for static compilation for a generic target.
+void
+LLVMTargetMachine::setCodeModelForStatic() {
+  setCodeModel(CodeModel::Small);
+}
 
 FileModel::Model
 LLVMTargetMachine::addPassesToEmitFile(PassManagerBase &PM,
@@ -130,6 +141,9 @@ bool LLVMTargetMachine::addAssemblyEmitter(PassManagerBase &PM,
 bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
                                                   MachineCodeEmitter *MCE,
                                                   CodeGenOpt::Level OptLevel) {
+  // Make sure the code model is set.
+  setCodeModelForStatic();
+  
   if (MCE)
     addSimpleCodeEmitter(PM, OptLevel, *MCE);
   if (PrintEmittedAsm)
@@ -146,6 +160,9 @@ bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
 bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
                                                   JITCodeEmitter *JCE,
                                                   CodeGenOpt::Level OptLevel) {
+  // Make sure the code model is set.
+  setCodeModelForJIT();
+  
   if (JCE)
     addSimpleCodeEmitter(PM, OptLevel, *JCE);
   if (PrintEmittedAsm)
@@ -162,6 +179,9 @@ bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
 bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
                                                   ObjectCodeEmitter *OCE,
                                                   CodeGenOpt::Level OptLevel) {
+  // Make sure the code model is set.
+  setCodeModelForStatic();
+  
   if (OCE)
     addSimpleCodeEmitter(PM, OptLevel, *OCE);
   if (PrintEmittedAsm)
@@ -181,6 +201,9 @@ bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
 bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
                                                    MachineCodeEmitter &MCE,
                                                    CodeGenOpt::Level OptLevel) {
+  // Make sure the code model is set.
+  setCodeModelForJIT();
+  
   // Add common CodeGen passes.
   if (addCommonCodeGenPasses(PM, OptLevel))
     return true;
@@ -203,6 +226,9 @@ bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
 bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
                                                    JITCodeEmitter &JCE,
                                                    CodeGenOpt::Level OptLevel) {
+  // Make sure the code model is set.
+  setCodeModelForJIT();
+  
   // Add common CodeGen passes.
   if (addCommonCodeGenPasses(PM, OptLevel))
     return true;
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
index 8806439..452f872 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
@@ -324,8 +324,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     // of the defining block, potentially live across some blocks, then is
     // live into some number of blocks, but gets killed.  Start by adding a
     // range that goes from this definition to the end of the defining block.
-    LiveRange NewLR(defIndex, getMBBEndIdx(mbb).getNextIndex().getLoadIndex(),
-                    ValNo);
+    LiveRange NewLR(defIndex, getMBBEndIdx(mbb), ValNo);
     DEBUG(errs() << " +" << NewLR);
     interval.addRange(NewLR);
 
@@ -334,10 +333,8 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     // live interval.
     for (SparseBitVector<>::iterator I = vi.AliveBlocks.begin(), 
              E = vi.AliveBlocks.end(); I != E; ++I) {
-      LiveRange LR(
-          getMBBStartIdx(mf_->getBlockNumbered(*I)),
-          getMBBEndIdx(mf_->getBlockNumbered(*I)).getNextIndex().getLoadIndex(),
-          ValNo);
+      MachineBasicBlock *aliveBlock = mf_->getBlockNumbered(*I);
+      LiveRange LR(getMBBStartIdx(aliveBlock), getMBBEndIdx(aliveBlock), ValNo);
       interval.addRange(LR);
       DEBUG(errs() << " +" << LR);
     }
@@ -415,19 +412,32 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
       // first redefinition of the vreg that we have seen, go back and change
       // the live range in the PHI block to be a different value number.
       if (interval.containsOneValue()) {
-        // Remove the old range that we now know has an incorrect number.
+
         VNInfo *VNI = interval.getValNumInfo(0);
-        MachineInstr *Killer = vi.Kills[0];
-        SlotIndex Start = getMBBStartIdx(Killer->getParent());
-        SlotIndex End = getInstructionIndex(Killer).getDefIndex();
-        DEBUG({
-            errs() << " Removing [" << Start << "," << End << "] from: ";
-            interval.print(errs(), tri_);
-            errs() << "\n";
-          });
-        interval.removeRange(Start, End);        
-        assert(interval.ranges.size() == 1 &&
-               "Newly discovered PHI interval has >1 ranges.");
+        // Phi elimination may have reused the register for multiple identical
+        // phi nodes. There will be a kill per phi. Remove the old ranges that
+        // we now know have an incorrect number.
+        for (unsigned ki=0, ke=vi.Kills.size(); ki != ke; ++ki) {
+          MachineInstr *Killer = vi.Kills[ki];
+          SlotIndex Start = getMBBStartIdx(Killer->getParent());
+          SlotIndex End = getInstructionIndex(Killer).getDefIndex();
+          DEBUG({
+              errs() << "\n\t\trenaming [" << Start << "," << End << "] in: ";
+              interval.print(errs(), tri_);
+            });
+          interval.removeRange(Start, End);
+
+          // Replace the interval with one of a NEW value number.  Note that
+          // this value number isn't actually defined by an instruction, weird
+          // huh? :)
+          LiveRange LR(Start, End,
+                       interval.getNextValue(SlotIndex(Start, true),
+                                             0, false, VNInfoAllocator));
+          LR.valno->setIsPHIDef(true);
+          interval.addRange(LR);
+          LR.valno->addKill(End);
+        }
+
         MachineBasicBlock *killMBB = getMBBFromIndex(VNI->def);
         VNI->addKill(indexes_->getTerminatorGap(killMBB));
         VNI->setHasPHIKill(true);
@@ -435,20 +445,6 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
             errs() << " RESULT: ";
             interval.print(errs(), tri_);
           });
-
-        // Replace the interval with one of a NEW value number.  Note that this
-        // value number isn't actually defined by an instruction, weird huh? :)
-        LiveRange LR(Start, End,
-                     interval.getNextValue(SlotIndex(getMBBStartIdx(Killer->getParent()), true),
-                       0, false, VNInfoAllocator));
-        LR.valno->setIsPHIDef(true);
-        DEBUG(errs() << " replace range with " << LR);
-        interval.addRange(LR);
-        LR.valno->addKill(End);
-        DEBUG({
-            errs() << " RESULT: ";
-            interval.print(errs(), tri_);
-          });
       }
 
       // In the case of PHI elimination, each variable definition is only
@@ -468,7 +464,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         CopyMI = mi;
       ValNo = interval.getNextValue(defIndex, CopyMI, true, VNInfoAllocator);
       
-      SlotIndex killIndex = getMBBEndIdx(mbb).getNextIndex().getLoadIndex();
+      SlotIndex killIndex = getMBBEndIdx(mbb);
       LiveRange LR(defIndex, killIndex, ValNo);
       interval.addRange(LR);
       ValNo->addKill(indexes_->getTerminatorGap(mbb));
@@ -1248,7 +1244,7 @@ bool LiveIntervals::anyKillInMBBAfterIdx(const LiveInterval &li,
       continue;
 
     SlotIndex KillIdx = VNI->kills[j];
-    if (KillIdx > Idx && KillIdx < End)
+    if (KillIdx > Idx && KillIdx <= End)
       return true;
   }
   return false;
@@ -2086,7 +2082,7 @@ LiveRange LiveIntervals::addLiveRangeToEndOfBlock(unsigned reg,
   VN->kills.push_back(indexes_->getTerminatorGap(startInst->getParent()));
   LiveRange LR(
      SlotIndex(getInstructionIndex(startInst).getDefIndex()),
-     getMBBEndIdx(startInst->getParent()).getNextIndex().getBaseIndex(), VN);
+     getMBBEndIdx(startInst->getParent()), VN);
   Interval.addRange(LR);
   
   return LR;
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
index a58286d..74a0d57 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -450,14 +450,29 @@ void MachineBasicBlock::ReplaceUsesOfBlockWith(MachineBasicBlock *Old,
 
 /// CorrectExtraCFGEdges - Various pieces of code can cause excess edges in the
 /// CFG to be inserted.  If we have proven that MBB can only branch to DestA and
-/// DestB, remove any other MBB successors from the CFG.  DestA and DestB can
-/// be null.
+/// DestB, remove any other MBB successors from the CFG.  DestA and DestB can be
+/// null.
+/// 
 /// Besides DestA and DestB, retain other edges leading to LandingPads
 /// (currently there can be only one; we don't check or require that here).
 /// Note it is possible that DestA and/or DestB are LandingPads.
 bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
                                              MachineBasicBlock *DestB,
                                              bool isCond) {
+  // The values of DestA and DestB frequently come from a call to the
+  // 'TargetInstrInfo::AnalyzeBranch' method. We take our meaning of the initial
+  // values from there.
+  //
+  // 1. If both DestA and DestB are null, then the block ends with no branches
+  //    (it falls through to its successor).
+  // 2. If DestA is set, DestB is null, and isCond is false, then the block ends
+  //    with only an unconditional branch.
+  // 3. If DestA is set, DestB is null, and isCond is true, then the block ends
+  //    with a conditional branch that falls through to a successor (DestB).
+  // 4. If DestA and DestB are set and isCond is true, then the block ends with a
+  //    conditional branch followed by an unconditional branch. DestA is the
+  //    'true' destination and DestB is the 'false' destination.
+
   bool MadeChange = false;
   bool AddedFallThrough = false;
 
@@ -483,14 +498,15 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   MachineBasicBlock::succ_iterator SI = succ_begin();
   MachineBasicBlock *OrigDestA = DestA, *OrigDestB = DestB;
   while (SI != succ_end()) {
-    if (*SI == DestA) {
+    const MachineBasicBlock *MBB = *SI;
+    if (MBB == DestA) {
       DestA = 0;
       ++SI;
-    } else if (*SI == DestB) {
+    } else if (MBB == DestB) {
       DestB = 0;
       ++SI;
-    } else if ((*SI)->isLandingPad() && 
-               *SI!=OrigDestA && *SI!=OrigDestB) {
+    } else if (MBB->isLandingPad() && 
+               MBB != OrigDestA && MBB != OrigDestB) {
       ++SI;
     } else {
       // Otherwise, this is a superfluous edge, remove it.
@@ -498,12 +514,12 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
       MadeChange = true;
     }
   }
-  if (!AddedFallThrough) {
-    assert(DestA == 0 && DestB == 0 &&
-           "MachineCFG is missing edges!");
-  } else if (isCond) {
+
+  if (!AddedFallThrough)
+    assert(DestA == 0 && DestB == 0 && "MachineCFG is missing edges!");
+  else if (isCond)
     assert(DestA == 0 && "MachineCFG is missing edges!");
-  }
+
   return MadeChange;
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineDominators.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineDominators.cpp
index 0f796f3..4088739 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineDominators.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineDominators.cpp
@@ -17,8 +17,10 @@
 
 using namespace llvm;
 
+namespace llvm {
 TEMPLATE_INSTANTIATION(class DomTreeNodeBase<MachineBasicBlock>);
 TEMPLATE_INSTANTIATION(class DominatorTreeBase<MachineBasicBlock>);
+}
 
 char MachineDominatorTree::ID = 0;
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
index 12b974d..a761c2d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
@@ -15,6 +15,7 @@
 #include "llvm/Constants.h"
 #include "llvm/Function.h"
 #include "llvm/InlineAsm.h"
+#include "llvm/Type.h"
 #include "llvm/Value.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/CodeGen/MachineFunction.h"
@@ -555,8 +556,13 @@ void MachineInstr::addOperand(const MachineOperand &Op) {
       Operands.back().ParentMI = this;
   
       // If the operand is a register, update the operand's use list.
-      if (Op.isReg())
+      if (Op.isReg()) {
         Operands.back().AddRegOperandToRegInfo(RegInfo);
+        // If the register operand is flagged as early, mark the operand as such
+        unsigned OpNo = Operands.size() - 1;
+        if (TID->getOperandConstraint(OpNo, TOI::EARLY_CLOBBER) != -1)
+          Operands[OpNo].setIsEarlyClobber(true);
+      }
       return;
     }
   }
@@ -573,8 +579,12 @@ void MachineInstr::addOperand(const MachineOperand &Op) {
 
     // Do explicitly set the reginfo for this operand though, to ensure the
     // next/prev fields are properly nulled out.
-    if (Operands[OpNo].isReg())
+    if (Operands[OpNo].isReg()) {
       Operands[OpNo].AddRegOperandToRegInfo(0);
+      // If the register operand is flagged as early, mark the operand as such
+      if (TID->getOperandConstraint(OpNo, TOI::EARLY_CLOBBER) != -1)
+        Operands[OpNo].setIsEarlyClobber(true);
+    }
 
   } else if (Operands.size()+1 <= Operands.capacity()) {
     // Otherwise, we have to remove register operands from their register use
@@ -594,8 +604,12 @@ void MachineInstr::addOperand(const MachineOperand &Op) {
     Operands.insert(Operands.begin()+OpNo, Op);
     Operands[OpNo].ParentMI = this;
 
-    if (Operands[OpNo].isReg())
+    if (Operands[OpNo].isReg()) {
       Operands[OpNo].AddRegOperandToRegInfo(RegInfo);
+      // If the register operand is flagged as early, mark the operand as such
+      if (TID->getOperandConstraint(OpNo, TOI::EARLY_CLOBBER) != -1)
+        Operands[OpNo].setIsEarlyClobber(true);
+    }
     
     // Re-add all the implicit ops.
     for (unsigned i = OpNo+1, e = Operands.size(); i != e; ++i) {
@@ -613,6 +627,11 @@ void MachineInstr::addOperand(const MachineOperand &Op) {
   
     // Re-add all the operands.
     AddRegOperandsToUseLists(*RegInfo);
+
+      // If the register operand is flagged as early, mark the operand as such
+    if (Operands[OpNo].isReg()
+        && TID->getOperandConstraint(OpNo, TOI::EARLY_CLOBBER) != -1)
+      Operands[OpNo].setIsEarlyClobber(true);
   }
 }
 
@@ -1141,7 +1160,7 @@ void MachineInstr::print(raw_ostream &OS, const TargetMachine *TM) const {
 
   // Briefly indicate whether any call clobbers were omitted.
   if (OmittedAnyCallClobbers) {
-    if (FirstOp) FirstOp = false; else OS << ",";
+    if (!FirstOp) OS << ",";
     OS << " ...";
   }
 
@@ -1159,7 +1178,7 @@ void MachineInstr::print(raw_ostream &OS, const TargetMachine *TM) const {
   }
 
   if (!debugLoc.isUnknown() && MF) {
-    if (!HaveSemi) OS << ";"; HaveSemi = true;
+    if (!HaveSemi) OS << ";";
 
     // TODO: print InlinedAtLoc information
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
index 66de535..0a57ea1 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
@@ -322,7 +322,7 @@ bool MachineLICM::IsLoopInvariantInst(MachineInstr &I) {
 
     // If the loop contains the definition of an operand, then the instruction
     // isn't loop invariant.
-    if (CurLoop->contains(RegInfo->getVRegDef(Reg)->getParent()))
+    if (CurLoop->contains(RegInfo->getVRegDef(Reg)))
       return false;
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
index 63f4f18..d561a5b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
@@ -19,12 +19,14 @@
 #include "llvm/CodeGen/Passes.h"
 using namespace llvm;
 
+namespace llvm {
 #define MLB class LoopBase<MachineBasicBlock, MachineLoop>
 TEMPLATE_INSTANTIATION(MLB);
 #undef MLB
 #define MLIB class LoopInfoBase<MachineBasicBlock, MachineLoop>
 TEMPLATE_INSTANTIATION(MLIB);
 #undef MLIB
+}
 
 char MachineLoopInfo::ID = 0;
 static RegisterPass<MachineLoopInfo>
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
index 917d053..0772319 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -365,24 +365,6 @@ void
 MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
   const TargetInstrInfo *TII = MF->getTarget().getInstrInfo();
 
-  // Start with minimal CFG sanity checks.
-  MachineFunction::const_iterator MBBI = MBB;
-  ++MBBI;
-  if (MBBI != MF->end()) {
-    // Block is not last in function.
-    if (!MBB->isSuccessor(MBBI)) {
-      // Block does not fall through.
-      if (MBB->empty()) {
-        report("MBB doesn't fall through but is empty!", MBB);
-      }
-    }
-  } else {
-    // Block is last in function.
-    if (MBB->empty()) {
-      report("MBB is last in function but is empty!", MBB);
-    }
-  }
-
   // Call AnalyzeBranch. If it succeeds, there are several more conditions to check.
   MachineBasicBlock *TBB = 0, *FBB = 0;
   SmallVector<MachineOperand, 4> Cond;
@@ -553,7 +535,8 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
         report("Explicit operand marked as implicit", MO, MONum);
     }
   } else {
-    if (MO->isReg() && !MO->isImplicit() && !TI.isVariadic())
+    // ARM adds %reg0 operands to indicate predicates. We'll allow that.
+    if (MO->isReg() && !MO->isImplicit() && !TI.isVariadic() && MO->getReg())
       report("Extra explicit operand on non-variadic instruction", MO, MONum);
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/PBQP/AnnotatedGraph.h b/libclamav/c++/llvm/lib/CodeGen/PBQP/AnnotatedGraph.h
index 904061c..a47dce9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PBQP/AnnotatedGraph.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PBQP/AnnotatedGraph.h
@@ -132,19 +132,19 @@ public:
   }
 
   NodeData& getNodeData(const NodeIterator &nodeItr) {
-    return getNodeEntry(nodeItr).getNodeData();
+    return PGraph::getNodeEntry(nodeItr).getNodeData();
   }
 
   const NodeData& getNodeData(const NodeIterator &nodeItr) const {
-    return getNodeEntry(nodeItr).getNodeData();
+    return PGraph::getNodeEntry(nodeItr).getNodeData();
   }
 
   EdgeData& getEdgeData(const EdgeIterator &edgeItr) {
-    return getEdgeEntry(edgeItr).getEdgeData();
+    return PGraph::getEdgeEntry(edgeItr).getEdgeData();
   }
 
   const EdgeEntry& getEdgeData(const EdgeIterator &edgeItr) const {
-    return getEdgeEntry(edgeItr).getEdgeData();
+    return PGraph::getEdgeEntry(edgeItr).getEdgeData();
   }
 
   SimpleGraph toSimpleGraph() const {
diff --git a/libclamav/c++/llvm/lib/CodeGen/PBQP/GraphBase.h b/libclamav/c++/llvm/lib/CodeGen/PBQP/GraphBase.h
index cc3e017..0c7493b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PBQP/GraphBase.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PBQP/GraphBase.h
@@ -298,7 +298,7 @@ public:
 
     for (ConstAdjEdgeIterator adjEdgeItr = adjEdgesBegin(node1Itr),
          adjEdgeEnd = adjEdgesEnd(node1Itr);
-         adjEdgeItr != adjEdgesEnd; ++adjEdgeItr) {
+         adjEdgeItr != adjEdgeEnd; ++adjEdgeItr) {
       if ((getEdgeNode1Itr(*adjEdgeItr) == node2Itr) ||
           (getEdgeNode2Itr(*adjEdgeItr) == node2Itr)) {
         return *adjEdgeItr;
diff --git a/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h b/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h
index e786246..1670877 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h
@@ -536,7 +536,7 @@ private:
       else reductionFinished = true;
     }
       
-  };
+  }
 
   void processR1() {
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
index c62d179..58c3dec 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
@@ -35,6 +35,7 @@ using namespace llvm;
 
 STATISTIC(NumAtomic, "Number of atomic phis lowered");
 STATISTIC(NumSplits, "Number of critical edges split on demand");
+STATISTIC(NumReused, "Number of reused lowered phis");
 
 char PHIElimination::ID = 0;
 static RegisterPass<PHIElimination>
@@ -70,7 +71,7 @@ bool llvm::PHIElimination::runOnMachineFunction(MachineFunction &Fn) {
     Changed |= EliminatePHINodes(Fn, *I);
 
   // Remove dead IMPLICIT_DEF instructions.
-  for (SmallPtrSet<MachineInstr*,4>::iterator I = ImpDefs.begin(),
+  for (SmallPtrSet<MachineInstr*, 4>::iterator I = ImpDefs.begin(),
          E = ImpDefs.end(); I != E; ++I) {
     MachineInstr *DefMI = *I;
     unsigned DefReg = DefMI->getOperand(0).getReg();
@@ -78,6 +79,12 @@ bool llvm::PHIElimination::runOnMachineFunction(MachineFunction &Fn) {
       DefMI->eraseFromParent();
   }
 
+  // Clean up the lowered PHI instructions.
+  for (LoweredPHIMap::iterator I = LoweredPHIs.begin(), E = LoweredPHIs.end();
+       I != E; ++I)
+    Fn.DeleteMachineInstr(I->first);
+
+  LoweredPHIs.clear();
   ImpDefs.clear();
   VRegPHIUseCount.clear();
   return Changed;
@@ -168,6 +175,7 @@ llvm::PHIElimination::FindCopyInsertPoint(MachineBasicBlock &MBB,
 void llvm::PHIElimination::LowerAtomicPHINode(
                                       MachineBasicBlock &MBB,
                                       MachineBasicBlock::iterator AfterPHIsIt) {
+  ++NumAtomic;
   // Unlink the PHI node from the basic block, but don't delete the PHI yet.
   MachineInstr *MPhi = MBB.remove(MBB.begin());
 
@@ -179,6 +187,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
   MachineFunction &MF = *MBB.getParent();
   const TargetRegisterClass *RC = MF.getRegInfo().getRegClass(DestReg);
   unsigned IncomingReg = 0;
+  bool reusedIncoming = false;  // Is IncomingReg reused from an earlier PHI?
 
   // Insert a register to register copy at the top of the current block (but
   // after any remaining phi nodes) which copies the new incoming register
@@ -190,7 +199,18 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     BuildMI(MBB, AfterPHIsIt, MPhi->getDebugLoc(),
             TII->get(TargetInstrInfo::IMPLICIT_DEF), DestReg);
   else {
-    IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
+    // Can we reuse an earlier PHI node? This only happens for critical edges,
+    // typically those created by tail duplication.
+    unsigned &entry = LoweredPHIs[MPhi];
+    if (entry) {
+      // An identical PHI node was already lowered. Reuse the incoming register.
+      IncomingReg = entry;
+      reusedIncoming = true;
+      ++NumReused;
+      DEBUG(errs() << "Reusing %reg" << IncomingReg << " for " << *MPhi);
+    } else {
+      entry = IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
+    }
     TII->copyRegToReg(MBB, AfterPHIsIt, DestReg, IncomingReg, RC, RC);
   }
 
@@ -204,8 +224,20 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     MachineInstr *PHICopy = prior(AfterPHIsIt);
 
     if (IncomingReg) {
+      LiveVariables::VarInfo &VI = LV->getVarInfo(IncomingReg);
+
       // Increment use count of the newly created virtual register.
-      LV->getVarInfo(IncomingReg).NumUses++;
+      VI.NumUses++;
+
+      // When we are reusing the incoming register, it may already have been
+      // killed in this block. The old kill will also have been inserted at
+      // AfterPHIsIt, so it appears before the current PHICopy.
+      if (reusedIncoming)
+        if (MachineInstr *OldKill = VI.findKill(&MBB)) {
+          DEBUG(errs() << "Remove old kill from " << *OldKill);
+          LV->removeVirtualRegisterKilled(IncomingReg, OldKill);
+          DEBUG(MBB.dump());
+        }
 
       // Add information to LiveVariables to know that the incoming value is
       // killed.  Note that because the value is defined in several places (once
@@ -228,7 +260,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
 
   // Adjust the VRegPHIUseCount map to account for the removal of this PHI node.
   for (unsigned i = 1; i != MPhi->getNumOperands(); i += 2)
-    --VRegPHIUseCount[BBVRegPair(MPhi->getOperand(i + 1).getMBB(),
+    --VRegPHIUseCount[BBVRegPair(MPhi->getOperand(i+1).getMBB()->getNumber(),
                                  MPhi->getOperand(i).getReg())];
 
   // Now loop over all of the incoming arguments, changing them to copy into the
@@ -266,7 +298,8 @@ void llvm::PHIElimination::LowerAtomicPHINode(
       FindCopyInsertPoint(opBlock, MBB, SrcReg);
 
     // Insert the copy.
-    TII->copyRegToReg(opBlock, InsertPos, IncomingReg, SrcReg, RC, RC);
+    if (!reusedIncoming && IncomingReg)
+      TII->copyRegToReg(opBlock, InsertPos, IncomingReg, SrcReg, RC, RC);
 
     // Now update live variable information if we have it.  Otherwise we're done
     if (!LV) continue;
@@ -283,7 +316,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     // point later.
 
     // Is it used by any PHI instructions in this block?
-    bool ValueIsUsed = VRegPHIUseCount[BBVRegPair(&opBlock, SrcReg)] != 0;
+    bool ValueIsUsed = VRegPHIUseCount[BBVRegPair(opBlock.getNumber(), SrcReg)];
 
     // Okay, if we now know that the value is not live out of the block, we can
     // add a kill marker in this block saying that it kills the incoming value!
@@ -293,11 +326,10 @@ void llvm::PHIElimination::LowerAtomicPHINode(
       // terminator instruction at the end of the block may also use the value.
       // In this case, we should mark *it* as being the killing block, not the
       // copy.
-      MachineBasicBlock::iterator KillInst = prior(InsertPos);
+      MachineBasicBlock::iterator KillInst;
       MachineBasicBlock::iterator Term = opBlock.getFirstTerminator();
-      if (Term != opBlock.end()) {
-        if (Term->readsRegister(SrcReg))
-          KillInst = Term;
+      if (Term != opBlock.end() && Term->readsRegister(SrcReg)) {
+        KillInst = Term;
 
         // Check that no other terminators use values.
 #ifndef NDEBUG
@@ -308,7 +340,17 @@ void llvm::PHIElimination::LowerAtomicPHINode(
                  "they are the first terminator in a block!");
         }
 #endif
+      } else if (reusedIncoming || !IncomingReg) {
+        // We may have to rewind a bit if we didn't insert a copy this time.
+        KillInst = Term;
+        while (KillInst != opBlock.begin())
+          if ((--KillInst)->readsRegister(SrcReg))
+            break;
+      } else {
+        // We just inserted this copy.
+        KillInst = prior(InsertPos);
       }
+      assert(KillInst->readsRegister(SrcReg) && "Cannot find kill instruction");
 
       // Finally, mark it killed.
       LV->addVirtualRegisterKilled(SrcReg, KillInst);
@@ -319,9 +361,9 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     }
   }
 
-  // Really delete the PHI instruction now!
-  MF.DeleteMachineInstr(MPhi);
-  ++NumAtomic;
+  // Really delete the PHI instruction now, if it is not in the LoweredPHIs map.
+  if (reusedIncoming || !IncomingReg)
+    MF.DeleteMachineInstr(MPhi);
 }
 
 /// analyzePHINodes - Gather information about the PHI nodes in here. In
@@ -335,14 +377,15 @@ void llvm::PHIElimination::analyzePHINodes(const MachineFunction& Fn) {
     for (MachineBasicBlock::const_iterator BBI = I->begin(), BBE = I->end();
          BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI)
       for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
-        ++VRegPHIUseCount[BBVRegPair(BBI->getOperand(i + 1).getMBB(),
+        ++VRegPHIUseCount[BBVRegPair(BBI->getOperand(i+1).getMBB()->getNumber(),
                                      BBI->getOperand(i).getReg())];
 }
 
 bool llvm::PHIElimination::SplitPHIEdges(MachineFunction &MF,
                                          MachineBasicBlock &MBB,
                                          LiveVariables &LV) {
-  if (MBB.empty() || MBB.front().getOpcode() != TargetInstrInfo::PHI)
+  if (MBB.empty() || MBB.front().getOpcode() != TargetInstrInfo::PHI ||
+      MBB.isLandingPad())
     return false;   // Quick exit for basic blocks without PHIs.
 
   for (MachineBasicBlock::const_iterator BBI = MBB.begin(), BBE = MBB.end();
@@ -408,3 +451,34 @@ MachineBasicBlock *PHIElimination::SplitCriticalEdge(MachineBasicBlock *A,
 
   return NMBB;
 }
+
+unsigned
+PHIElimination::PHINodeTraits::getHashValue(const MachineInstr *MI) {
+  if (!MI || MI==getEmptyKey() || MI==getTombstoneKey())
+    return DenseMapInfo<MachineInstr*>::getHashValue(MI);
+  unsigned hash = 0;
+  for (unsigned ni = 1, ne = MI->getNumOperands(); ni != ne; ni += 2)
+    hash = hash*37 + DenseMapInfo<BBVRegPair>::
+      getHashValue(BBVRegPair(MI->getOperand(ni+1).getMBB()->getNumber(),
+                              MI->getOperand(ni).getReg()));
+  return hash;
+}
+
+bool PHIElimination::PHINodeTraits::isEqual(const MachineInstr *LHS,
+                                            const MachineInstr *RHS) {
+  const MachineInstr *EmptyKey = getEmptyKey();
+  const MachineInstr *TombstoneKey = getTombstoneKey();
+  if (!LHS || !RHS || LHS==EmptyKey || RHS==EmptyKey ||
+      LHS==TombstoneKey || RHS==TombstoneKey)
+    return LHS==RHS;
+
+  unsigned ne = LHS->getNumOperands();
+  if (ne != RHS->getNumOperands())
+    return false;
+  // Ignore operand 0, the defined register.
+  for (unsigned ni = 1; ni != ne; ni += 2)
+    if (LHS->getOperand(ni).getReg() != RHS->getOperand(ni).getReg() ||
+        LHS->getOperand(ni+1).getMBB() != RHS->getOperand(ni+1).getMBB())
+      return false;
+  return true;
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
index b0b71ce..1bcc9dc 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
@@ -16,8 +16,6 @@
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/Target/TargetInstrInfo.h"
 
-#include <map>
-
 namespace llvm {
 
   /// Lower PHI instructions to copies.  
@@ -120,8 +118,8 @@ namespace llvm {
       return I;
     }
 
-    typedef std::pair<const MachineBasicBlock*, unsigned> BBVRegPair;
-    typedef std::map<BBVRegPair, unsigned> VRegPHIUse;
+    typedef std::pair<unsigned, unsigned> BBVRegPair;
+    typedef DenseMap<BBVRegPair, unsigned> VRegPHIUse;
 
     VRegPHIUse VRegPHIUseCount;
     PHIDefMap PHIDefs;
@@ -129,6 +127,17 @@ namespace llvm {
 
     // Defs of PHI sources which are implicit_def.
     SmallPtrSet<MachineInstr*, 4> ImpDefs;
+
+    // Lowered PHI nodes may be reused. We provide special DenseMap traits to
+    // match PHI nodes with identical arguments.
+    struct PHINodeTraits : public DenseMapInfo<MachineInstr*> {
+      static unsigned getHashValue(const MachineInstr *PtrVal);
+      static bool isEqual(const MachineInstr *LHS, const MachineInstr *RHS);
+    };
+
+    // Map reusable lowered PHI node -> incoming join register.
+    typedef DenseMap<MachineInstr*, unsigned, PHINodeTraits> LoweredPHIMap;
+    LoweredPHIMap LoweredPHIs;
   };
 
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
index b0d7a47..1c5222c 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
@@ -378,7 +378,7 @@ PreAllocSplitting::UpdateSpillSlotInterval(VNInfo *ValNo, SlotIndex SpillIndex,
 
   SmallPtrSet<MachineBasicBlock*, 4> Processed;
   SlotIndex EndIdx = LIs->getMBBEndIdx(MBB);
-  LiveRange SLR(SpillIndex, EndIdx.getNextSlot(), CurrSValNo);
+  LiveRange SLR(SpillIndex, EndIdx, CurrSValNo);
   CurrSLI->addRange(SLR);
   Processed.insert(MBB);
 
@@ -475,7 +475,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
     SlotIndex EndIndex = LIs->getMBBEndIdx(MBB);
     
     RetVNI = NewVNs[Walker];
-    LI->addRange(LiveRange(DefIndex, EndIndex.getNextSlot(), RetVNI));
+    LI->addRange(LiveRange(DefIndex, EndIndex, RetVNI));
   } else if (!ContainsDefs && ContainsUses) {
     SmallPtrSet<MachineInstr*, 2>& BlockUses = Uses[MBB];
     
@@ -511,8 +511,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
     UseIndex = UseIndex.getUseIndex();
     SlotIndex EndIndex;
     if (IsIntraBlock) {
-      EndIndex = LIs->getInstructionIndex(UseI);
-      EndIndex = EndIndex.getUseIndex();
+      EndIndex = LIs->getInstructionIndex(UseI).getDefIndex();
     } else
       EndIndex = LIs->getMBBEndIdx(MBB);
 
@@ -521,7 +520,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
     RetVNI = PerformPHIConstruction(Walker, MBB, LI, Visited, Defs, Uses,
                                     NewVNs, LiveOut, Phis, false, true);
     
-    LI->addRange(LiveRange(UseIndex, EndIndex.getNextSlot(), RetVNI));
+    LI->addRange(LiveRange(UseIndex, EndIndex, RetVNI));
     
     // FIXME: Need to set kills properly for inter-block stuff.
     if (RetVNI->isKill(UseIndex)) RetVNI->removeKill(UseIndex);
@@ -571,8 +570,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
     StartIndex = foundDef ? StartIndex.getDefIndex() : StartIndex.getUseIndex();
     SlotIndex EndIndex;
     if (IsIntraBlock) {
-      EndIndex = LIs->getInstructionIndex(UseI);
-      EndIndex = EndIndex.getUseIndex();
+      EndIndex = LIs->getInstructionIndex(UseI).getDefIndex();
     } else
       EndIndex = LIs->getMBBEndIdx(MBB);
 
@@ -582,7 +580,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
       RetVNI = PerformPHIConstruction(Walker, MBB, LI, Visited, Defs, Uses,
                                       NewVNs, LiveOut, Phis, false, true);
 
-    LI->addRange(LiveRange(StartIndex, EndIndex.getNextSlot(), RetVNI));
+    LI->addRange(LiveRange(StartIndex, EndIndex, RetVNI));
     
     if (foundUse && RetVNI->isKill(StartIndex))
       RetVNI->removeKill(StartIndex);
@@ -663,7 +661,7 @@ PreAllocSplitting::PerformPHIConstructionFallBack(MachineBasicBlock::iterator Us
     for (DenseMap<MachineBasicBlock*, VNInfo*>::iterator I =
            IncomingVNs.begin(), E = IncomingVNs.end(); I != E; ++I) {
       I->second->setHasPHIKill(true);
-      SlotIndex KillIndex = LIs->getMBBEndIdx(I->first);
+      SlotIndex KillIndex(LIs->getMBBEndIdx(I->first), true);
       if (!I->second->isKill(KillIndex))
         I->second->addKill(KillIndex);
     }
@@ -671,11 +669,10 @@ PreAllocSplitting::PerformPHIConstructionFallBack(MachineBasicBlock::iterator Us
       
   SlotIndex EndIndex;
   if (IsIntraBlock) {
-    EndIndex = LIs->getInstructionIndex(UseI);
-    EndIndex = EndIndex.getUseIndex();
+    EndIndex = LIs->getInstructionIndex(UseI).getDefIndex();
   } else
     EndIndex = LIs->getMBBEndIdx(MBB);
-  LI->addRange(LiveRange(StartIndex, EndIndex.getNextSlot(), RetVNI));
+  LI->addRange(LiveRange(StartIndex, EndIndex, RetVNI));
   if (IsIntraBlock)
     RetVNI->addKill(EndIndex);
 
@@ -902,8 +899,6 @@ MachineInstr* PreAllocSplitting::FoldSpill(unsigned vreg,
                                            MachineBasicBlock* MBB,
                                            int& SS,
                                     SmallPtrSet<MachineInstr*, 4>& RefsInMBB) {
-  MachineBasicBlock::iterator Pt = MBB->begin();
-
   // Go top down if RefsInMBB is empty.
   if (RefsInMBB.empty())
     return 0;
diff --git a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
index e94247f..709d46a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
@@ -860,7 +860,7 @@ void PEI::scavengeFrameVirtualRegs(MachineFunction &Fn) {
               // Remove all instructions up 'til the last use, since they're
               // just calculating the value we already have.
               BB->erase(I, LastUseMI);
-              MI = I = LastUseMI;
+              I = LastUseMI;
 
               // Extend the live range of the scratch register
               PrevLastUseMI->getOperand(PrevLastUseOp).setIsKill(false);
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
index c02d47b..9e97d89 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
@@ -891,7 +891,7 @@ namespace {
     const RALinScan &Allocator;
 
   public:
-    WeightCompare(const RALinScan &Alloc) : Allocator(Alloc) {};
+    WeightCompare(const RALinScan &Alloc) : Allocator(Alloc) {}
 
     typedef std::pair<unsigned, float> RegWeightPair;
     bool operator()(const RegWeightPair &LHS, const RegWeightPair &RHS) const {
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
index 7bb020a..aea5cff 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
@@ -233,14 +233,17 @@ namespace {
     /// in one of several ways: if the register is available in a physical
     /// register already, it uses that physical register.  If the value is not
     /// in a physical register, and if there are physical registers available,
-    /// it loads it into a register.  If register pressure is high, and it is
-    /// possible, it tries to fold the load of the virtual register into the
-    /// instruction itself.  It avoids doing this if register pressure is low to
-    /// improve the chance that subsequent instructions can use the reloaded
-    /// value.  This method returns the modified instruction.
+    /// it loads it into a register: PhysReg if that is an available physical
+    /// register, otherwise any physical register of the right class.
+    /// If register pressure is high, and it is possible, it tries to fold the
+    /// load of the virtual register into the instruction itself.  It avoids
+    /// doing this if register pressure is low to improve the chance that
+    /// subsequent instructions can use the reloaded value.  This method
+    /// returns the modified instruction.
     ///
     MachineInstr *reloadVirtReg(MachineBasicBlock &MBB, MachineInstr *MI,
-                                unsigned OpNum, SmallSet<unsigned, 4> &RRegs);
+                                unsigned OpNum, SmallSet<unsigned, 4> &RRegs,
+                                unsigned PhysReg);
 
     /// ComputeLocalLiveness - Computes liveness of registers within a basic
     /// block, setting the killed/dead flags as appropriate.
@@ -471,15 +474,17 @@ unsigned RALocal::getReg(MachineBasicBlock &MBB, MachineInstr *I,
 /// one of several ways: if the register is available in a physical register
 /// already, it uses that physical register.  If the value is not in a physical
 /// register, and if there are physical registers available, it loads it into a
+/// register: PhysReg if that is an available physical register, otherwise any
 /// register.  If register pressure is high, and it is possible, it tries to
 /// fold the load of the virtual register into the instruction itself.  It
 /// avoids doing this if register pressure is low to improve the chance that
-/// subsequent instructions can use the reloaded value.  This method returns the
-/// modified instruction.
+/// subsequent instructions can use the reloaded value.  This method returns
+/// the modified instruction.
 ///
 MachineInstr *RALocal::reloadVirtReg(MachineBasicBlock &MBB, MachineInstr *MI,
                                      unsigned OpNum,
-                                     SmallSet<unsigned, 4> &ReloadedRegs) {
+                                     SmallSet<unsigned, 4> &ReloadedRegs,
+                                     unsigned PhysReg) {
   unsigned VirtReg = MI->getOperand(OpNum).getReg();
 
   // If the virtual register is already available, just update the instruction
@@ -494,7 +499,11 @@ MachineInstr *RALocal::reloadVirtReg(MachineBasicBlock &MBB, MachineInstr *MI,
   // Otherwise, we need to fold it into the current instruction, or reload it.
   // If we have registers available to hold the value, use them.
   const TargetRegisterClass *RC = MF->getRegInfo().getRegClass(VirtReg);
-  unsigned PhysReg = getFreeReg(RC);
+  // If we already have a PhysReg (this happens when the instruction is a
+  // reg-to-reg copy with a PhysReg destination) use that.
+  if (!PhysReg || !TargetRegisterInfo::isPhysicalRegister(PhysReg) ||
+      !isPhysRegAvailable(PhysReg))
+    PhysReg = getFreeReg(RC);
   int FrameIndex = getStackSpaceFor(VirtReg, RC);
 
   if (PhysReg) {   // Register is available, allocate it!
@@ -752,6 +761,12 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
         errs() << '\n';
       });
 
+    // Determine whether this is a copy instruction.  The cases where the
+    // source or destination are phys regs are handled specially.
+    unsigned SrcCopyReg, DstCopyReg, SrcCopySubReg, DstCopySubReg;
+    bool isCopy = TII->isMoveInstr(*MI, SrcCopyReg, DstCopyReg, 
+                                   SrcCopySubReg, DstCopySubReg);
+
     // Loop over the implicit uses, making sure that they are at the head of the
     // use order list, so they don't get reallocated.
     if (TID.ImplicitUses) {
@@ -835,7 +850,8 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
       // here we are looking for only used operands (never def&use)
       if (MO.isReg() && !MO.isDef() && MO.getReg() && !MO.isImplicit() &&
           TargetRegisterInfo::isVirtualRegister(MO.getReg()))
-        MI = reloadVirtReg(MBB, MI, i, ReloadedRegs);
+        MI = reloadVirtReg(MBB, MI, i, ReloadedRegs,
+                           isCopy ? DstCopyReg : 0);
     }
 
     // If this instruction is the last user of this register, kill the
@@ -948,8 +964,17 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
         unsigned DestPhysReg;
 
         // If DestVirtReg already has a value, use it.
-        if (!(DestPhysReg = getVirt2PhysRegMapSlot(DestVirtReg)))
-          DestPhysReg = getReg(MBB, MI, DestVirtReg);
+        if (!(DestPhysReg = getVirt2PhysRegMapSlot(DestVirtReg))) {
+          // If this is a copy, the source reg is a phys reg, and
+          // that reg is available, use that phys reg for DestPhysReg.
+          if (isCopy &&
+              TargetRegisterInfo::isPhysicalRegister(SrcCopyReg) &&
+              isPhysRegAvailable(SrcCopyReg)) {
+            DestPhysReg = SrcCopyReg;
+            assignVirtToPhysReg(DestVirtReg, DestPhysReg);
+          } else
+            DestPhysReg = getReg(MBB, MI, DestVirtReg);
+        }
         MF->getRegInfo().setPhysRegUsed(DestPhysReg);
         markVirtRegModified(DestVirtReg);
         getVirtRegLastUse(DestVirtReg) = std::make_pair((MachineInstr*)0, 0);
@@ -995,9 +1020,9 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
     // Finally, if this is a noop copy instruction, zap it.  (Except that if
     // the copy is dead, it must be kept to avoid messing up liveness info for
     // the register scavenger.  See pr4100.)
-    unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-    if (TII->isMoveInstr(*MI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
-        SrcReg == DstReg && DeadDefs.empty())
+    if (TII->isMoveInstr(*MI, SrcCopyReg, DstCopyReg,
+                         SrcCopySubReg, DstCopySubReg) &&
+        SrcCopyReg == DstCopyReg && DeadDefs.empty())
       MBB.erase(MI);
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index 2b52187..e6aa14c 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -2755,7 +2755,34 @@ SDValue DAGCombiner::visitSRL(SDNode *N) {
   if (N1C && SimplifyDemandedBits(SDValue(N, 0)))
     return SDValue(N, 0);
 
-  return N1C ? visitShiftByConstant(N, N1C->getZExtValue()) : SDValue();
+  if (N1C) {
+    SDValue NewSRL = visitShiftByConstant(N, N1C->getZExtValue());
+    if (NewSRL.getNode())
+      return NewSRL;
+  }
+
+  // Here is a common situation. We want to optimize:
+  //
+  //   %a = ...
+  //   %b = and i32 %a, 2
+  //   %c = srl i32 %b, 1
+  //   brcond i32 %c ...
+  //
+  // into
+  // 
+  //   %a = ...
+  //   %b = and %a, 2
+  //   %c = setcc eq %b, 0
+  //   brcond %c ...
+  //
+  // However, after the source operand of the SRL is optimized into an AND, the
+  // SRL itself may not be optimized further. Find such a BRCOND user and add it
+  // back to the worklist.
+  if (N->hasOneUse() &&
+      N->use_begin()->getOpcode() == ISD::BRCOND)
+    AddToWorkList(*N->use_begin());
+
+  return SDValue();
 }
 
 SDValue DAGCombiner::visitCTLZ(SDNode *N) {
@@ -3202,19 +3229,6 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
                        X, DAG.getConstant(Mask, VT));
   }
 
-  // Fold (zext (and x, cst)) -> (and (zext x), cst)
-  if (N0.getOpcode() == ISD::AND &&
-      N0.getOperand(1).getOpcode() == ISD::Constant &&
-      N0.getOperand(0).getOpcode() != ISD::TRUNCATE &&
-      N0.getOperand(0).hasOneUse()) {
-    APInt Mask = cast<ConstantSDNode>(N0.getOperand(1))->getAPIntValue();
-    Mask.zext(VT.getSizeInBits());
-    return DAG.getNode(ISD::AND, N->getDebugLoc(), VT,
-                       DAG.getNode(ISD::ZERO_EXTEND, N->getDebugLoc(), VT,
-                                   N0.getOperand(0)),
-                       DAG.getConstant(Mask, VT));
-  }
-
   // fold (zext (load x)) -> (zext (truncate (zextload x)))
   if (ISD::isNON_EXTLoad(N0.getNode()) &&
       ((!LegalOperations && !cast<LoadSDNode>(N0)->isVolatile()) ||
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
index 4ead9c9..9e182ef 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
@@ -548,9 +548,6 @@ FastISel::SelectInstruction(Instruction *I) {
 /// the CFG.
 void
 FastISel::FastEmitBranch(MachineBasicBlock *MSucc) {
-  MachineFunction::iterator NextMBB =
-     llvm::next(MachineFunction::iterator(MBB));
-
   if (MBB->isLayoutSuccessor(MSucc)) {
     // The unconditional fall-through case, which needs no instructions.
   } else {
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index f9c05d0..474d833 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -187,7 +187,6 @@ SDValue
 SelectionDAGLegalize::ShuffleWithNarrowerEltType(EVT NVT, EVT VT,  DebugLoc dl, 
                                                  SDValue N1, SDValue N2,
                                              SmallVectorImpl<int> &Mask) const {
-  EVT EltVT = NVT.getVectorElementType();
   unsigned NumMaskElts = VT.getVectorNumElements();
   unsigned NumDestElts = NVT.getVectorNumElements();
   unsigned NumEltsGrowth = NumDestElts / NumMaskElts;
@@ -461,8 +460,7 @@ SDValue ExpandUnalignedStore(StoreSDNode *ST, SelectionDAG &DAG,
          !ST->getMemoryVT().isVector() &&
          "Unaligned store of unknown type.");
   // Get the half-size VT
-  EVT NewStoredVT =
-    (MVT::SimpleValueType)(ST->getMemoryVT().getSimpleVT().SimpleTy - 1);
+  EVT NewStoredVT = ST->getMemoryVT().getHalfSizedIntegerVT(*DAG.getContext());
   int NumBits = NewStoredVT.getSizeInBits();
   int IncrementSize = NumBits / 8;
 
@@ -1170,8 +1168,7 @@ SDValue SelectionDAGLegalize::LegalizeOp(SDValue Op) {
         Tmp2 = LegalizeOp(Ch);
       } else if (SrcWidth & (SrcWidth - 1)) {
         // If not loading a power-of-2 number of bits, expand as two loads.
-        assert(SrcVT.isExtended() && !SrcVT.isVector() &&
-               "Unsupported extload!");
+        assert(!SrcVT.isVector() && "Unsupported extload!");
         unsigned RoundWidth = 1 << Log2_32(SrcWidth);
         assert(RoundWidth < SrcWidth);
         unsigned ExtraWidth = SrcWidth - RoundWidth;
@@ -1384,8 +1381,7 @@ SDValue SelectionDAGLegalize::LegalizeOp(SDValue Op) {
                                    SVOffset, NVT, isVolatile, Alignment);
       } else if (StWidth & (StWidth - 1)) {
         // If not storing a power-of-2 number of bits, expand as two stores.
-        assert(StVT.isExtended() && !StVT.isVector() &&
-               "Unsupported truncstore!");
+        assert(!StVT.isVector() && "Unsupported truncstore!");
         unsigned RoundWidth = 1 << Log2_32(StWidth);
         assert(RoundWidth < StWidth);
         unsigned ExtraWidth = StWidth - RoundWidth;
@@ -1869,7 +1865,7 @@ SDValue SelectionDAGLegalize::ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
                     0, TLI.getLibcallCallingConv(LC), false,
                     /*isReturnValueUsed=*/true,
                     Callee, Args, DAG,
-                    Node->getDebugLoc());
+                    Node->getDebugLoc(), DAG.GetOrdering(Node));
 
   // Legalize the call sequence, starting with the chain.  This will advance
   // the LastCALLSEQ_END to the legalized version of the CALLSEQ_END node that
@@ -2274,7 +2270,7 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
                       false, false, false, false, 0, CallingConv::C, false,
                       /*isReturnValueUsed=*/true,
                       DAG.getExternalSymbol("abort", TLI.getPointerTy()),
-                      Args, DAG, dl);
+                      Args, DAG, dl, DAG.GetOrdering(Node));
     Results.push_back(CallResult.second);
     break;
   }
@@ -2750,7 +2746,7 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
     SDValue RHS = Node->getOperand(1);
     SDValue BottomHalf;
     SDValue TopHalf;
-    static unsigned Ops[2][3] =
+    static const unsigned Ops[2][3] =
         { { ISD::MULHU, ISD::UMUL_LOHI, ISD::ZERO_EXTEND },
           { ISD::MULHS, ISD::SMUL_LOHI, ISD::SIGN_EXTEND }};
     bool isSigned = Node->getOpcode() == ISD::SMULO;
@@ -2967,7 +2963,7 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node,
     break;
   case ISD::BSWAP: {
     unsigned DiffBits = NVT.getSizeInBits() - OVT.getSizeInBits();
-    Tmp1 = DAG.getNode(ISD::ZERO_EXTEND, dl, NVT, Tmp1);
+    Tmp1 = DAG.getNode(ISD::ZERO_EXTEND, dl, NVT, Node->getOperand(0));
     Tmp1 = DAG.getNode(ISD::BSWAP, dl, NVT, Tmp1);
     Tmp1 = DAG.getNode(ISD::SRL, dl, NVT, Tmp1,
                           DAG.getConstant(DiffBits, TLI.getShiftAmountTy()));
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
index 84e39b4..2831617 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
@@ -637,7 +637,8 @@ void DAGTypeLegalizer::SoftenSetCCOperands(SDValue &NewLHS, SDValue &NewRHS,
     }
   }
 
-  EVT RetVT = MVT::i32; // FIXME: is this the correct return type?
+  // Use the target-specific return value for comparison lib calls.
+  EVT RetVT = TLI.getCmpLibcallReturnType();
   SDValue Ops[2] = { LHSInt, RHSInt };
   NewLHS = MakeLibCall(LC1, RetVT, Ops, 2, false/*sign irrelevant*/, dl);
   NewRHS = DAG.getConstant(0, RetVT);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 2f4457e..bd3b97a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -2026,8 +2026,6 @@ void DAGTypeLegalizer::IntegerExpandSetCCOperands(SDValue &NewLHS,
   GetExpandedInteger(NewLHS, LHSLo, LHSHi);
   GetExpandedInteger(NewRHS, RHSLo, RHSHi);
 
-  EVT VT = NewLHS.getValueType();
-
   if (CCCode == ISD::SETEQ || CCCode == ISD::SETNE) {
     if (RHSLo == RHSHi) {
       if (ConstantSDNode *RHSCST = dyn_cast<ConstantSDNode>(RHSLo)) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
index 003cea7..d9efd4f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
@@ -1033,7 +1033,8 @@ SDValue DAGTypeLegalizer::MakeLibCall(RTLIB::Libcall LC, EVT RetVT,
     TLI.LowerCallTo(DAG.getEntryNode(), RetTy, isSigned, !isSigned, false,
                     false, 0, TLI.getLibcallCallingConv(LC), false,
                     /*isReturnValueUsed=*/true,
-                    Callee, Args, DAG, dl);
+                    Callee, Args, DAG, dl,
+                    DAG.GetOrdering(DAG.getEntryNode().getNode()));
   return CallInfo.first;
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
index dbd3e39..a1b6ced 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
@@ -464,7 +464,6 @@ void DAGTypeLegalizer::SplitRes_SELECT_CC(SDNode *N, SDValue &Lo,
 
 void DAGTypeLegalizer::SplitRes_UNDEF(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT LoVT, HiVT;
-  DebugLoc dl = N->getDebugLoc();
   GetSplitDestVTs(N->getValueType(0), LoVT, HiVT);
   Lo = DAG.getUNDEF(LoVT);
   Hi = DAG.getUNDEF(HiVT);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SDNodeOrdering.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SDNodeOrdering.h
new file mode 100644
index 0000000..f88b26d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SDNodeOrdering.h
@@ -0,0 +1,54 @@
+//===-- llvm/CodeGen/SDNodeOrdering.h - SDNode Ordering ---------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file declares the SDNodeOrdering class.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_SDNODEORDERING_H
+#define LLVM_CODEGEN_SDNODEORDERING_H
+
+#include "llvm/ADT/DenseMap.h"
+
+namespace llvm {
+
+class SDNode;
+
+/// SDNodeOrdering - Maps a unique (monotonically increasing) value to each
+/// SDNode that roughly corresponds to the ordering of the original LLVM
+/// instruction. This is used for turning off scheduling, because we'll forgo
+/// the normal scheduling algorithms and output the instructions according to
+/// this ordering.
+class SDNodeOrdering {
+  DenseMap<const SDNode*, unsigned> OrderMap;
+
+  void operator=(const SDNodeOrdering&);   // Do not implement.
+  SDNodeOrdering(const SDNodeOrdering&);   // Do not implement.
+public:
+  SDNodeOrdering() {}
+
+  void add(const SDNode *Node, unsigned O) {
+    OrderMap[Node] = O;
+  }
+  void remove(const SDNode *Node) {
+    DenseMap<const SDNode*, unsigned>::iterator Itr = OrderMap.find(Node);
+    if (Itr != OrderMap.end())
+      OrderMap.erase(Itr);
+  }
+  void clear() {
+    OrderMap.clear();
+  }
+  unsigned getOrder(const SDNode *Node) {
+    return OrderMap[Node];
+  }
+};
+
+} // end llvm namespace
+
+#endif
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
index b2ee8bb..d53de34 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
@@ -20,16 +20,10 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetSubtarget.h"
-#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
-cl::opt<bool>
-DisableInstScheduling("disable-inst-scheduling",
-                      cl::init(false),
-                      cl::desc("Disable instruction scheduling"));
-
 ScheduleDAGSDNodes::ScheduleDAGSDNodes(MachineFunction &mf)
   : ScheduleDAG(mf) {
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index da55e6b..77301b0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -10,7 +10,9 @@
 // This implements the SelectionDAG class.
 //
 //===----------------------------------------------------------------------===//
+
 #include "llvm/CodeGen/SelectionDAG.h"
+#include "SDNodeOrdering.h"
 #include "llvm/Constants.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Function.h"
@@ -48,8 +50,6 @@
 #include <cmath>
 using namespace llvm;
 
-extern cl::opt<bool> DisableInstScheduling;
-
 /// makeVTList - Return an instance of the SDVTList struct initialized with the
 /// specified members.
 static SDVTList makeVTList(const EVT *VTs, unsigned NumVTs) {
@@ -554,9 +554,6 @@ void SelectionDAG::RemoveDeadNodes(SmallVectorImpl<SDNode *> &DeadNodes,
     }
 
     DeallocateNode(N);
-
-    // Remove the ordering of this node.
-    if (Ordering) Ordering->remove(N);
   }
 }
 
@@ -582,9 +579,6 @@ void SelectionDAG::DeleteNodeNotInCSEMaps(SDNode *N) {
   N->DropOperands();
 
   DeallocateNode(N);
-
-  // Remove the ordering of this node.
-  if (Ordering) Ordering->remove(N);
 }
 
 void SelectionDAG::DeallocateNode(SDNode *N) {
@@ -703,7 +697,6 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N, SDValue Op,
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops, 1);
   AddNodeIDCustom(ID, N);
   SDNode *Node = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
-  if (Ordering) Ordering->remove(Node);
   return Node;
 }
 
@@ -722,7 +715,6 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N,
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops, 2);
   AddNodeIDCustom(ID, N);
   SDNode *Node = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
-  if (Ordering) Ordering->remove(Node);
   return Node;
 }
 
@@ -741,7 +733,6 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N,
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops, NumOps);
   AddNodeIDCustom(ID, N);
   SDNode *Node = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
-  if (Ordering) Ordering->remove(Node);
   return Node;
 }
 
@@ -798,10 +789,8 @@ SelectionDAG::SelectionDAG(TargetLowering &tli, FunctionLoweringInfo &fli)
               getVTList(MVT::Other)),
     Root(getEntryNode()), Ordering(0) {
   AllNodes.push_back(&EntryNode);
-  if (DisableInstScheduling) {
-    Ordering = new NodeOrdering();
-    Ordering->add(&EntryNode);
-  }
+  if (DisableScheduling)
+    Ordering = new SDNodeOrdering();
 }
 
 void SelectionDAG::init(MachineFunction &mf, MachineModuleInfo *mmi,
@@ -840,10 +829,8 @@ void SelectionDAG::clear() {
   EntryNode.UseList = 0;
   AllNodes.push_back(&EntryNode);
   Root = getEntryNode();
-  if (DisableInstScheduling) {
-    Ordering = new NodeOrdering();
-    Ordering->add(&EntryNode);
-  }
+  if (DisableScheduling)
+    Ordering = new SDNodeOrdering();
 }
 
 SDValue SelectionDAG::getSExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT) {
@@ -904,17 +891,15 @@ SDValue SelectionDAG::getConstant(const ConstantInt &Val, EVT VT, bool isT) {
   ID.AddPointer(&Val);
   void *IP = 0;
   SDNode *N = NULL;
-  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP))) {
-    if (Ordering) Ordering->add(N);
+  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP)))
     if (!VT.isVector())
       return SDValue(N, 0);
-  }
+
   if (!N) {
     N = NodeAllocator.Allocate<ConstantSDNode>();
     new (N) ConstantSDNode(isT, &Val, EltVT);
     CSEMap.InsertNode(N, IP);
     AllNodes.push_back(N);
-    if (Ordering) Ordering->add(N);
   }
 
   SDValue Result(N, 0);
@@ -951,17 +936,15 @@ SDValue SelectionDAG::getConstantFP(const ConstantFP& V, EVT VT, bool isTarget){
   ID.AddPointer(&V);
   void *IP = 0;
   SDNode *N = NULL;
-  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP))) {
-    if (Ordering) Ordering->add(N);
+  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP)))
     if (!VT.isVector())
       return SDValue(N, 0);
-  }
+
   if (!N) {
     N = NodeAllocator.Allocate<ConstantFPSDNode>();
     new (N) ConstantFPSDNode(isTarget, &V, EltVT);
     CSEMap.InsertNode(N, IP);
     AllNodes.push_back(N);
-    if (Ordering) Ordering->add(N);
   }
 
   SDValue Result(N, 0);
@@ -1016,15 +999,13 @@ SDValue SelectionDAG::getGlobalAddress(const GlobalValue *GV,
   ID.AddInteger(Offset);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<GlobalAddressSDNode>();
   new (N) GlobalAddressSDNode(Opc, GV, VT, Offset, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1034,15 +1015,13 @@ SDValue SelectionDAG::getFrameIndex(int FI, EVT VT, bool isTarget) {
   AddNodeIDNode(ID, Opc, getVTList(VT), 0, 0);
   ID.AddInteger(FI);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<FrameIndexSDNode>();
   new (N) FrameIndexSDNode(FI, VT, isTarget);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1056,15 +1035,13 @@ SDValue SelectionDAG::getJumpTable(int JTI, EVT VT, bool isTarget,
   ID.AddInteger(JTI);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<JumpTableSDNode>();
   new (N) JumpTableSDNode(JTI, VT, isTarget, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1084,15 +1061,13 @@ SDValue SelectionDAG::getConstantPool(Constant *C, EVT VT,
   ID.AddPointer(C);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<ConstantPoolSDNode>();
   new (N) ConstantPoolSDNode(isTarget, C, VT, Offset, Alignment, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1113,15 +1088,13 @@ SDValue SelectionDAG::getConstantPool(MachineConstantPoolValue *C, EVT VT,
   C->AddSelectionDAGCSEId(ID);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<ConstantPoolSDNode>();
   new (N) ConstantPoolSDNode(isTarget, C, VT, Offset, Alignment, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1130,15 +1103,13 @@ SDValue SelectionDAG::getBasicBlock(MachineBasicBlock *MBB) {
   AddNodeIDNode(ID, ISD::BasicBlock, getVTList(MVT::Other), 0, 0);
   ID.AddPointer(MBB);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<BasicBlockSDNode>();
   new (N) BasicBlockSDNode(MBB);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1154,7 +1125,6 @@ SDValue SelectionDAG::getValueType(EVT VT) {
   N = NodeAllocator.Allocate<VTSDNode>();
   new (N) VTSDNode(VT);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1164,7 +1134,6 @@ SDValue SelectionDAG::getExternalSymbol(const char *Sym, EVT VT) {
   N = NodeAllocator.Allocate<ExternalSymbolSDNode>();
   new (N) ExternalSymbolSDNode(false, Sym, 0, VT);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1177,7 +1146,6 @@ SDValue SelectionDAG::getTargetExternalSymbol(const char *Sym, EVT VT,
   N = NodeAllocator.Allocate<ExternalSymbolSDNode>();
   new (N) ExternalSymbolSDNode(true, Sym, TargetFlags, VT);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1190,8 +1158,8 @@ SDValue SelectionDAG::getCondCode(ISD::CondCode Cond) {
     new (N) CondCodeSDNode(Cond);
     CondCodeNodes[Cond] = N;
     AllNodes.push_back(N);
-    if (Ordering) Ordering->add(N);
   }
+
   return SDValue(CondCodeNodes[Cond], 0);
 }
 
@@ -1283,10 +1251,8 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, DebugLoc dl, SDValue N1,
     ID.AddInteger(MaskVec[i]);
 
   void* IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
 
   // Allocate the mask array for the node out of the BumpPtrAllocator, since
   // SDNode doesn't have access to it.  This memory will be "leaked" when
@@ -1298,7 +1264,6 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, DebugLoc dl, SDValue N1,
   new (N) ShuffleVectorSDNode(VT, dl, N1, N2, MaskAlloc);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1316,15 +1281,13 @@ SDValue SelectionDAG::getConvertRndSat(EVT VT, DebugLoc dl,
   SDValue Ops[] = { Val, DTy, STy, Rnd, Sat };
   AddNodeIDNode(ID, ISD::CONVERT_RNDSAT, getVTList(VT), &Ops[0], 5);
   void* IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   CvtRndSatSDNode *N = NodeAllocator.Allocate<CvtRndSatSDNode>();
   new (N) CvtRndSatSDNode(VT, dl, Ops, 5, Code);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1333,15 +1296,13 @@ SDValue SelectionDAG::getRegister(unsigned RegNo, EVT VT) {
   AddNodeIDNode(ID, ISD::Register, getVTList(VT), 0, 0);
   ID.AddInteger(RegNo);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<RegisterSDNode>();
   new (N) RegisterSDNode(RegNo, VT);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1353,15 +1314,13 @@ SDValue SelectionDAG::getLabel(unsigned Opcode, DebugLoc dl,
   AddNodeIDNode(ID, Opcode, getVTList(MVT::Other), &Ops[0], 1);
   ID.AddInteger(LabelID);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<LabelSDNode>();
   new (N) LabelSDNode(Opcode, dl, Root, LabelID);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1375,15 +1334,13 @@ SDValue SelectionDAG::getBlockAddress(BlockAddress *BA, EVT VT,
   ID.AddPointer(BA);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<BlockAddressSDNode>();
   new (N) BlockAddressSDNode(Opc, VT, BA, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1396,16 +1353,13 @@ SDValue SelectionDAG::getSrcValue(const Value *V) {
   ID.AddPointer(V);
 
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
 
   SDNode *N = NodeAllocator.Allocate<SrcValueSDNode>();
   new (N) SrcValueSDNode(V);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -2316,16 +2270,14 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT) {
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, Opcode, getVTList(VT), 0, 0);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<SDNode>();
   new (N) SDNode(Opcode, DL, getVTList(VT));
   CSEMap.InsertNode(N, IP);
 
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -2549,10 +2501,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     SDValue Ops[1] = { Operand };
     AddNodeIDNode(ID, Opcode, VTs, Ops, 1);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return SDValue(E, 0);
-    }
+
     N = NodeAllocator.Allocate<UnarySDNode>();
     new (N) UnarySDNode(Opcode, DL, VTs, Operand);
     CSEMap.InsertNode(N, IP);
@@ -2562,7 +2513,6 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
   }
 
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -2970,10 +2920,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTs, Ops, 2);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return SDValue(E, 0);
-    }
+
     N = NodeAllocator.Allocate<BinarySDNode>();
     new (N) BinarySDNode(Opcode, DL, VTs, N1, N2);
     CSEMap.InsertNode(N, IP);
@@ -2983,7 +2932,6 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
   }
 
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -3050,10 +2998,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTs, Ops, 3);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return SDValue(E, 0);
-    }
+
     N = NodeAllocator.Allocate<TernarySDNode>();
     new (N) TernarySDNode(Opcode, DL, VTs, N1, N2, N3);
     CSEMap.InsertNode(N, IP);
@@ -3063,7 +3010,6 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
   }
 
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -3503,7 +3449,7 @@ SDValue SelectionDAG::getMemcpy(SDValue Chain, DebugLoc dl, SDValue Dst,
                     /*isReturnValueUsed=*/false,
                     getExternalSymbol(TLI.getLibcallName(RTLIB::MEMCPY),
                                       TLI.getPointerTy()),
-                    Args, *this, dl);
+                    Args, *this, dl, GetOrdering(Chain.getNode()));
   return CallResult.second;
 }
 
@@ -3552,7 +3498,7 @@ SDValue SelectionDAG::getMemmove(SDValue Chain, DebugLoc dl, SDValue Dst,
                     /*isReturnValueUsed=*/false,
                     getExternalSymbol(TLI.getLibcallName(RTLIB::MEMMOVE),
                                       TLI.getPointerTy()),
-                    Args, *this, dl);
+                    Args, *this, dl, GetOrdering(Chain.getNode()));
   return CallResult.second;
 }
 
@@ -3611,7 +3557,7 @@ SDValue SelectionDAG::getMemset(SDValue Chain, DebugLoc dl, SDValue Dst,
                     /*isReturnValueUsed=*/false,
                     getExternalSymbol(TLI.getLibcallName(RTLIB::MEMSET),
                                       TLI.getPointerTy()),
-                    Args, *this, dl);
+                    Args, *this, dl, GetOrdering(Chain.getNode()));
   return CallResult.second;
 }
 
@@ -3659,14 +3605,12 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, DebugLoc dl, EVT MemVT,
   void* IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<AtomicSDNode>(E)->refineAlignment(MMO);
-    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode* N = NodeAllocator.Allocate<AtomicSDNode>();
   new (N) AtomicSDNode(Opcode, dl, VTs, MemVT, Chain, Ptr, Cmp, Swp, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3724,14 +3668,12 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, DebugLoc dl, EVT MemVT,
   void* IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<AtomicSDNode>(E)->refineAlignment(MMO);
-    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode* N = NodeAllocator.Allocate<AtomicSDNode>();
   new (N) AtomicSDNode(Opcode, dl, VTs, MemVT, Chain, Ptr, Val, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3804,7 +3746,6 @@ SelectionDAG::getMemIntrinsicNode(unsigned Opcode, DebugLoc dl, SDVTList VTList,
     void *IP = 0;
     if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
       cast<MemIntrinsicSDNode>(E)->refineAlignment(MMO);
-      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
     }
 
@@ -3816,7 +3757,6 @@ SelectionDAG::getMemIntrinsicNode(unsigned Opcode, DebugLoc dl, SDVTList VTList,
     new (N) MemIntrinsicSDNode(Opcode, dl, VTList, Ops, NumOps, MemVT, MMO);
   }
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3881,14 +3821,12 @@ SelectionDAG::getLoad(ISD::MemIndexedMode AM, DebugLoc dl,
   void *IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<LoadSDNode>(E)->refineAlignment(MMO);
-    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode *N = NodeAllocator.Allocate<LoadSDNode>();
   new (N) LoadSDNode(Ops, dl, VTs, AM, ExtType, MemVT, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3959,14 +3897,12 @@ SDValue SelectionDAG::getStore(SDValue Chain, DebugLoc dl, SDValue Val,
   void *IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<StoreSDNode>(E)->refineAlignment(MMO);
-    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode *N = NodeAllocator.Allocate<StoreSDNode>();
   new (N) StoreSDNode(Ops, dl, VTs, ISD::UNINDEXED, false, VT, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -4021,14 +3957,12 @@ SDValue SelectionDAG::getTruncStore(SDValue Chain, DebugLoc dl, SDValue Val,
   void *IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<StoreSDNode>(E)->refineAlignment(MMO);
-    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode *N = NodeAllocator.Allocate<StoreSDNode>();
   new (N) StoreSDNode(Ops, dl, VTs, ISD::UNINDEXED, true, SVT, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -4045,17 +3979,15 @@ SelectionDAG::getIndexedStore(SDValue OrigStore, DebugLoc dl, SDValue Base,
   ID.AddInteger(ST->getMemoryVT().getRawBits());
   ID.AddInteger(ST->getRawSubclassData());
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-    if (Ordering) Ordering->add(E);
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
-  }
+
   SDNode *N = NodeAllocator.Allocate<StoreSDNode>();
   new (N) StoreSDNode(Ops, dl, VTs, AM,
                       ST->isTruncatingStore(), ST->getMemoryVT(),
                       ST->getMemOperand());
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -4121,10 +4053,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     AddNodeIDNode(ID, Opcode, VTs, Ops, NumOps);
     void *IP = 0;
 
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return SDValue(E, 0);
-    }
 
     N = NodeAllocator.Allocate<SDNode>();
     new (N) SDNode(Opcode, DL, VTs, Ops, NumOps);
@@ -4135,7 +4065,6 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
   }
 
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -4191,10 +4120,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, SDVTList VTList,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTList, Ops, NumOps);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return SDValue(E, 0);
-    }
+
     if (NumOps == 1) {
       N = NodeAllocator.Allocate<UnarySDNode>();
       new (N) UnarySDNode(Opcode, DL, VTList, Ops[0]);
@@ -4225,7 +4153,6 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, SDVTList VTList,
     }
   }
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -4325,6 +4252,7 @@ SDVTList SelectionDAG::getVTList(const EVT *VTs, unsigned NumVTs) {
     case 1: return getVTList(VTs[0]);
     case 2: return getVTList(VTs[0], VTs[1]);
     case 3: return getVTList(VTs[0], VTs[1], VTs[2]);
+    case 4: return getVTList(VTs[0], VTs[1], VTs[2], VTs[3]);
     default: break;
   }
 
@@ -4688,10 +4616,8 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
   if (VTs.VTs[VTs.NumVTs-1] != MVT::Flag) {
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opc, VTs, Ops, NumOps);
-    if (SDNode *ON = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(ON);
+    if (SDNode *ON = CSEMap.FindNodeOrInsertPos(ID, IP))
       return ON;
-    }
   }
 
   if (!RemoveNodeFromCSEMaps(N))
@@ -4755,7 +4681,6 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
 
   if (IP)
     CSEMap.InsertNode(N, IP);   // Memoize the new node.
-  if (Ordering) Ordering->add(N);
   return N;
 }
 
@@ -4894,10 +4819,8 @@ SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, ~Opcode, VTs, Ops, NumOps);
     IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return cast<MachineSDNode>(E);
-    }
   }
 
   // Allocate a new MachineSDNode.
@@ -4919,7 +4842,6 @@ SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
     CSEMap.InsertNode(N, IP);
 
   AllNodes.push_back(N);
-  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -4956,10 +4878,8 @@ SDNode *SelectionDAG::getNodeIfExists(unsigned Opcode, SDVTList VTList,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTList, Ops, NumOps);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
-      if (Ordering) Ordering->add(E);
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
       return E;
-    }
   }
   return NULL;
 }
@@ -5291,6 +5211,18 @@ unsigned SelectionDAG::AssignTopologicalOrder() {
   return DAGSize;
 }
 
+/// AssignOrdering - Assign an order to the SDNode.
+void SelectionDAG::AssignOrdering(SDNode *SD, unsigned Order) {
+  assert(SD && "Trying to assign an order to a null node!");
+  if (Ordering)
+    Ordering->add(SD, Order);
+}
+
+/// GetOrdering - Get the order for the SDNode.
+unsigned SelectionDAG::GetOrdering(const SDNode *SD) const {
+  assert(SD && "Trying to get the order of a null node!");
+  return Ordering ? Ordering->getOrder(SD) : 0;
+}
 
 
 //===----------------------------------------------------------------------===//
@@ -5931,6 +5863,10 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
     if (unsigned int TF = BA->getTargetFlags())
       OS << " [TF=" << TF << ']';
   }
+
+  if (G)
+    if (unsigned Order = G->GetOrdering(this))
+      OS << " [ORD=" << Order << ']';
 }
 
 void SDNode::print(raw_ostream &OS, const SelectionDAG *G) const {
@@ -6126,9 +6062,6 @@ void SelectionDAG::dump() const {
   errs() << "\n\n";
 }
 
-void SelectionDAG::NodeOrdering::dump() const {
-}
-
 void SDNode::printr(raw_ostream &OS, const SelectionDAG *G) const {
   print_types(OS, G);
   print_details(OS, G);
@@ -6139,25 +6072,31 @@ static void DumpNodesr(raw_ostream &OS, const SDNode *N, unsigned indent,
                        const SelectionDAG *G, VisitedSDNodeSet &once) {
   if (!once.insert(N))          // If we've been here before, return now.
     return;
+
   // Dump the current SDNode, but don't end the line yet.
   OS << std::string(indent, ' ');
   N->printr(OS, G);
+
   // Having printed this SDNode, walk the children:
   for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
     const SDNode *child = N->getOperand(i).getNode();
+
     if (i) OS << ",";
     OS << " ";
+
     if (child->getNumOperands() == 0) {
       // This child has no grandchildren; print it inline right here.
       child->printr(OS, G);
       once.insert(child);
-    } else {          // Just the address.  FIXME: also print the child's opcode
+    } else {         // Just the address. FIXME: also print the child's opcode.
       OS << (void*)child;
       if (unsigned RN = N->getOperand(i).getResNo())
         OS << ":" << RN;
     }
   }
+
   OS << "\n";
+
   // Dump children that have grandchildren on their own line(s).
   for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
     const SDNode *child = N->getOperand(i).getNode();
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 7568384..8fe7c45 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -17,6 +17,7 @@
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/Analysis/AliasAnalysis.h"
+#include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Constants.h"
 #include "llvm/CallingConv.h"
 #include "llvm/DerivedTypes.h"
@@ -144,22 +145,23 @@ namespace {
     /// this value and returns the result as a ValueVTs value.  This uses
     /// Chain/Flag as the input and updates them for the output Chain/Flag.
     /// If the Flag pointer is NULL, no flag is used.
-    SDValue getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
-                              SDValue &Chain, SDValue *Flag) const;
+    SDValue getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
+                            SDValue &Chain, SDValue *Flag) const;
 
     /// getCopyToRegs - Emit a series of CopyToReg nodes that copies the
     /// specified value into the registers specified by this object.  This uses
     /// Chain/Flag as the input and updates them for the output Chain/Flag.
     /// If the Flag pointer is NULL, no flag is used.
     void getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
-                       SDValue &Chain, SDValue *Flag) const;
+                       unsigned Order, SDValue &Chain, SDValue *Flag) const;
 
     /// AddInlineAsmOperands - Add this value to the specified inlineasm node
     /// operand list.  This adds the code marker, matching input operand index
     /// (if applicable), and includes the number of values added into it.
     void AddInlineAsmOperands(unsigned Code,
                               bool HasMatching, unsigned MatchingIdx,
-                              SelectionDAG &DAG, std::vector<SDValue> &Ops) const;
+                              SelectionDAG &DAG, unsigned Order,
+                              std::vector<SDValue> &Ops) const;
   };
 }
 
@@ -168,13 +170,14 @@ namespace {
 /// larger then ValueVT then AssertOp can be used to specify whether the extra
 /// bits are known to be zero (ISD::AssertZext) or sign extended from ValueVT
 /// (ISD::AssertSext).
-static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
+static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
                                 const SDValue *Parts,
                                 unsigned NumParts, EVT PartVT, EVT ValueVT,
                                 ISD::NodeType AssertOp = ISD::DELETED_NODE) {
   assert(NumParts > 0 && "No parts to assemble!");
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
   SDValue Val = Parts[0];
+  if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
 
   if (NumParts > 1) {
     // Assemble the value from multiple parts.
@@ -193,23 +196,32 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
       EVT HalfVT = EVT::getIntegerVT(*DAG.getContext(), RoundBits/2);
 
       if (RoundParts > 2) {
-        Lo = getCopyFromParts(DAG, dl, Parts, RoundParts/2, PartVT, HalfVT);
-        Hi = getCopyFromParts(DAG, dl, Parts+RoundParts/2, RoundParts/2,
+        Lo = getCopyFromParts(DAG, dl, Order, Parts, RoundParts / 2,
                               PartVT, HalfVT);
+        Hi = getCopyFromParts(DAG, dl, Order, Parts + RoundParts / 2,
+                              RoundParts / 2, PartVT, HalfVT);
       } else {
         Lo = DAG.getNode(ISD::BIT_CONVERT, dl, HalfVT, Parts[0]);
         Hi = DAG.getNode(ISD::BIT_CONVERT, dl, HalfVT, Parts[1]);
       }
+
       if (TLI.isBigEndian())
         std::swap(Lo, Hi);
+
       Val = DAG.getNode(ISD::BUILD_PAIR, dl, RoundVT, Lo, Hi);
 
+      if (DisableScheduling) {
+        DAG.AssignOrdering(Lo.getNode(), Order);
+        DAG.AssignOrdering(Hi.getNode(), Order);
+        DAG.AssignOrdering(Val.getNode(), Order);
+      }
+
       if (RoundParts < NumParts) {
         // Assemble the trailing non-power-of-2 part.
         unsigned OddParts = NumParts - RoundParts;
         EVT OddVT = EVT::getIntegerVT(*DAG.getContext(), OddParts * PartBits);
-        Hi = getCopyFromParts(DAG, dl,
-                              Parts+RoundParts, OddParts, PartVT, OddVT);
+        Hi = getCopyFromParts(DAG, dl, Order,
+                              Parts + RoundParts, OddParts, PartVT, OddVT);
 
         // Combine the round and odd parts.
         Lo = Val;
@@ -217,11 +229,15 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
           std::swap(Lo, Hi);
         EVT TotalVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
         Hi = DAG.getNode(ISD::ANY_EXTEND, dl, TotalVT, Hi);
+        if (DisableScheduling) DAG.AssignOrdering(Hi.getNode(), Order);
         Hi = DAG.getNode(ISD::SHL, dl, TotalVT, Hi,
                          DAG.getConstant(Lo.getValueType().getSizeInBits(),
                                          TLI.getPointerTy()));
+        if (DisableScheduling) DAG.AssignOrdering(Hi.getNode(), Order);
         Lo = DAG.getNode(ISD::ZERO_EXTEND, dl, TotalVT, Lo);
+        if (DisableScheduling) DAG.AssignOrdering(Lo.getNode(), Order);
         Val = DAG.getNode(ISD::OR, dl, TotalVT, Lo, Hi);
+        if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
       }
     } else if (ValueVT.isVector()) {
       // Handle a multi-element vector.
@@ -242,7 +258,7 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
         // If the register was not expanded, truncate or copy the value,
         // as appropriate.
         for (unsigned i = 0; i != NumParts; ++i)
-          Ops[i] = getCopyFromParts(DAG, dl, &Parts[i], 1,
+          Ops[i] = getCopyFromParts(DAG, dl, Order, &Parts[i], 1,
                                     PartVT, IntermediateVT);
       } else if (NumParts > 0) {
         // If the intermediate type was expanded, build the intermediate operands
@@ -251,7 +267,7 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
                "Must expand into a divisible number of parts!");
         unsigned Factor = NumParts / NumIntermediates;
         for (unsigned i = 0; i != NumIntermediates; ++i)
-          Ops[i] = getCopyFromParts(DAG, dl, &Parts[i * Factor], Factor,
+          Ops[i] = getCopyFromParts(DAG, dl, Order, &Parts[i * Factor], Factor,
                                     PartVT, IntermediateVT);
       }
 
@@ -260,6 +276,7 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
       Val = DAG.getNode(IntermediateVT.isVector() ?
                         ISD::CONCAT_VECTORS : ISD::BUILD_VECTOR, dl,
                         ValueVT, &Ops[0], NumIntermediates);
+      if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
     } else if (PartVT.isFloatingPoint()) {
       // FP split into multiple FP parts (for ppcf128)
       assert(ValueVT == EVT(MVT::ppcf128) && PartVT == EVT(MVT::f64) &&
@@ -270,12 +287,18 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
       if (TLI.isBigEndian())
         std::swap(Lo, Hi);
       Val = DAG.getNode(ISD::BUILD_PAIR, dl, ValueVT, Lo, Hi);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(Hi.getNode(), Order);
+        DAG.AssignOrdering(Lo.getNode(), Order);
+        DAG.AssignOrdering(Val.getNode(), Order);
+      }
     } else {
       // FP split into integer parts (soft fp)
       assert(ValueVT.isFloatingPoint() && PartVT.isInteger() &&
              !PartVT.isVector() && "Unexpected split");
       EVT IntVT = EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits());
-      Val = getCopyFromParts(DAG, dl, Parts, NumParts, PartVT, IntVT);
+      Val = getCopyFromParts(DAG, dl, Order, Parts, NumParts, PartVT, IntVT);
     }
   }
 
@@ -287,14 +310,20 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
 
   if (PartVT.isVector()) {
     assert(ValueVT.isVector() && "Unknown vector conversion!");
-    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
+    SDValue Res = DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), Order);
+    return Res;
   }
 
   if (ValueVT.isVector()) {
     assert(ValueVT.getVectorElementType() == PartVT &&
            ValueVT.getVectorNumElements() == 1 &&
            "Only trivial scalar-to-vector conversions should get here!");
-    return DAG.getNode(ISD::BUILD_VECTOR, dl, ValueVT, Val);
+    SDValue Res = DAG.getNode(ISD::BUILD_VECTOR, dl, ValueVT, Val);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), Order);
+    return Res;
   }
 
   if (PartVT.isInteger() &&
@@ -306,22 +335,36 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
       if (AssertOp != ISD::DELETED_NODE)
         Val = DAG.getNode(AssertOp, dl, PartVT, Val,
                           DAG.getValueType(ValueVT));
-      return DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
+      if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+      Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
+      if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+      return Val;
     } else {
-      return DAG.getNode(ISD::ANY_EXTEND, dl, ValueVT, Val);
+      Val = DAG.getNode(ISD::ANY_EXTEND, dl, ValueVT, Val);
+      if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+      return Val;
     }
   }
 
   if (PartVT.isFloatingPoint() && ValueVT.isFloatingPoint()) {
-    if (ValueVT.bitsLT(Val.getValueType()))
+    if (ValueVT.bitsLT(Val.getValueType())) {
       // FP_ROUND's are always exact here.
-      return DAG.getNode(ISD::FP_ROUND, dl, ValueVT, Val,
-                         DAG.getIntPtrConstant(1));
-    return DAG.getNode(ISD::FP_EXTEND, dl, ValueVT, Val);
+      Val = DAG.getNode(ISD::FP_ROUND, dl, ValueVT, Val,
+                        DAG.getIntPtrConstant(1));
+      if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+      return Val;
+    }
+
+    Val = DAG.getNode(ISD::FP_EXTEND, dl, ValueVT, Val);
+    if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+    return Val;
   }
 
-  if (PartVT.getSizeInBits() == ValueVT.getSizeInBits())
-    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
+  if (PartVT.getSizeInBits() == ValueVT.getSizeInBits()) {
+    Val = DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
+    if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+    return Val;
+  }
 
   llvm_unreachable("Unknown mismatch!");
   return SDValue();
@@ -330,8 +373,9 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
 /// getCopyToParts - Create a series of nodes that contain the specified value
 /// split into legal parts.  If the parts contain more bits than Val, then, for
 /// integers, ExtendKind can be used to specify how to generate the extra bits.
-static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
-                           SDValue *Parts, unsigned NumParts, EVT PartVT,
+static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
+                           SDValue Val, SDValue *Parts, unsigned NumParts,
+                           EVT PartVT,
                            ISD::NodeType ExtendKind = ISD::ANY_EXTEND) {
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
   EVT PtrVT = TLI.getPointerTy();
@@ -375,6 +419,8 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
       }
     }
 
+    if (DisableScheduling) DAG.AssignOrdering(Val.getNode(), Order);
+
     // The value may have changed - recompute ValueVT.
     ValueVT = Val.getValueType();
     assert(NumParts * PartBits == ValueVT.getSizeInBits() &&
@@ -397,13 +443,21 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
       SDValue OddVal = DAG.getNode(ISD::SRL, dl, ValueVT, Val,
                                    DAG.getConstant(RoundBits,
                                                    TLI.getPointerTy()));
-      getCopyToParts(DAG, dl, OddVal, Parts + RoundParts, OddParts, PartVT);
+      getCopyToParts(DAG, dl, Order, OddVal, Parts + RoundParts,
+                     OddParts, PartVT);
+
       if (TLI.isBigEndian())
         // The odd parts were reversed by getCopyToParts - unreverse them.
         std::reverse(Parts + RoundParts, Parts + NumParts);
+
       NumParts = RoundParts;
       ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
       Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(OddVal.getNode(), Order);
+        DAG.AssignOrdering(Val.getNode(), Order);
+      }
     }
 
     // The number of parts is a power of 2.  Repeatedly bisect the value using
@@ -411,6 +465,10 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
     Parts[0] = DAG.getNode(ISD::BIT_CONVERT, dl,
                            EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits()),
                            Val);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Parts[0].getNode(), Order);
+
     for (unsigned StepSize = NumParts; StepSize > 1; StepSize /= 2) {
       for (unsigned i = 0; i < NumParts; i += StepSize) {
         unsigned ThisBits = StepSize * PartBits / 2;
@@ -425,11 +483,20 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
                             ThisVT, Part0,
                             DAG.getConstant(0, PtrVT));
 
+        if (DisableScheduling) {
+          DAG.AssignOrdering(Part0.getNode(), Order);
+          DAG.AssignOrdering(Part1.getNode(), Order);
+        }
+
         if (ThisBits == PartBits && ThisVT != PartVT) {
           Part0 = DAG.getNode(ISD::BIT_CONVERT, dl,
                                                 PartVT, Part0);
           Part1 = DAG.getNode(ISD::BIT_CONVERT, dl,
                                                 PartVT, Part1);
+          if (DisableScheduling) {
+            DAG.AssignOrdering(Part0.getNode(), Order);
+            DAG.AssignOrdering(Part1.getNode(), Order);
+          }
         }
       }
     }
@@ -443,7 +510,7 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
   // Vector ValueVT.
   if (NumParts == 1) {
     if (PartVT != ValueVT) {
-      if (PartVT.isVector()) {
+      if (PartVT.getSizeInBits() == ValueVT.getSizeInBits()) {
         Val = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Val);
       } else {
         assert(ValueVT.getVectorElementType() == PartVT &&
@@ -455,6 +522,9 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
       }
     }
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(Val.getNode(), Order);
+
     Parts[0] = Val;
     return;
   }
@@ -472,7 +542,7 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
 
   // Split the vector into intermediate operands.
   SmallVector<SDValue, 8> Ops(NumIntermediates);
-  for (unsigned i = 0; i != NumIntermediates; ++i)
+  for (unsigned i = 0; i != NumIntermediates; ++i) {
     if (IntermediateVT.isVector())
       Ops[i] = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl,
                            IntermediateVT, Val,
@@ -483,12 +553,16 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
                            IntermediateVT, Val,
                            DAG.getConstant(i, PtrVT));
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(Ops[i].getNode(), Order);
+  }
+
   // Split the intermediate operands into legal parts.
   if (NumParts == NumIntermediates) {
     // If the register was not expanded, promote or copy the value,
     // as appropriate.
     for (unsigned i = 0; i != NumParts; ++i)
-      getCopyToParts(DAG, dl, Ops[i], &Parts[i], 1, PartVT);
+      getCopyToParts(DAG, dl, Order, Ops[i], &Parts[i], 1, PartVT);
   } else if (NumParts > 0) {
     // If the intermediate type was expanded, split each the value into
     // legal parts.
@@ -496,7 +570,7 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
            "Must expand into a divisible number of parts!");
     unsigned Factor = NumParts / NumIntermediates;
     for (unsigned i = 0; i != NumIntermediates; ++i)
-      getCopyToParts(DAG, dl, Ops[i], &Parts[i * Factor], Factor, PartVT);
+      getCopyToParts(DAG, dl, Order, Ops[i], &Parts[i*Factor], Factor, PartVT);
   }
 }
 
@@ -583,8 +657,8 @@ void SelectionDAGBuilder::visit(Instruction &I) {
 }
 
 void SelectionDAGBuilder::visit(unsigned Opcode, User &I) {
-  // Tell the DAG that we're processing a new instruction.
-  DAG.NewInst();
+  // We're processing a new instruction.
+  ++SDNodeOrder;
 
   // Note: this doesn't use InstVisitor, because it has to work with
   // ConstantExpr's in addition to instructions.
@@ -592,7 +666,7 @@ void SelectionDAGBuilder::visit(unsigned Opcode, User &I) {
   default: llvm_unreachable("Unknown instruction type encountered!");
     // Build the switch statement using the Instruction.def file.
 #define HANDLE_INST(NUM, OPCODE, CLASS) \
-  case Instruction::OPCODE:return visit##OPCODE((CLASS&)I);
+  case Instruction::OPCODE: return visit##OPCODE((CLASS&)I);
 #include "llvm/Instruction.def"
   }
 }
@@ -638,8 +712,12 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
         for (unsigned i = 0, e = Val->getNumValues(); i != e; ++i)
           Constants.push_back(SDValue(Val, i));
       }
-      return DAG.getMergeValues(&Constants[0], Constants.size(),
-                                getCurDebugLoc());
+
+      SDValue Res = DAG.getMergeValues(&Constants[0], Constants.size(),
+                                       getCurDebugLoc());
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      return Res;
     }
 
     if (isa<StructType>(C->getType()) || isa<ArrayType>(C->getType())) {
@@ -661,7 +739,12 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
         else
           Constants[i] = DAG.getConstant(0, EltVT);
       }
-      return DAG.getMergeValues(&Constants[0], NumElts, getCurDebugLoc());
+
+      SDValue Res = DAG.getMergeValues(&Constants[0], NumElts,
+                                       getCurDebugLoc());
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      return Res;
     }
 
     if (BlockAddress *BA = dyn_cast<BlockAddress>(C))
@@ -689,8 +772,12 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
     }
 
     // Create a BUILD_VECTOR node.
-    return NodeMap[V] = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
-                                    VT, &Ops[0], Ops.size());
+    SDValue Res = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
+                              VT, &Ops[0], Ops.size());
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
+    return NodeMap[V] = Res;
   }
 
   // If this is a static alloca, generate it as the frameindex instead of
@@ -707,7 +794,8 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
 
   RegsForValue RFV(*DAG.getContext(), TLI, InReg, V->getType());
   SDValue Chain = DAG.getEntryNode();
-  return RFV.getCopyFromRegs(DAG, getCurDebugLoc(), Chain, NULL);
+  return RFV.getCopyFromRegs(DAG, getCurDebugLoc(),
+                             SDNodeOrder, Chain, NULL);
 }
 
 /// Get the EVTs and ArgFlags collections that represent the return type
@@ -788,16 +876,26 @@ void SelectionDAGBuilder::visitRet(ReturnInst &I) {
 
     SmallVector<SDValue, 4> Chains(NumValues);
     EVT PtrVT = PtrValueVTs[0];
-    for (unsigned i = 0; i != NumValues; ++i)
-      Chains[i] = DAG.getStore(Chain, getCurDebugLoc(),
-                  SDValue(RetOp.getNode(), RetOp.getResNo() + i),
-                  DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT, RetPtr,
-                  DAG.getConstant(Offsets[i], PtrVT)),
-                  NULL, Offsets[i], false, 0);
+    for (unsigned i = 0; i != NumValues; ++i) {
+      SDValue Add = DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT, RetPtr,
+                                DAG.getConstant(Offsets[i], PtrVT));
+      Chains[i] =
+        DAG.getStore(Chain, getCurDebugLoc(),
+                     SDValue(RetOp.getNode(), RetOp.getResNo() + i),
+                     Add, NULL, Offsets[i], false, 0);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(Add.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Chains[i].getNode(), SDNodeOrder);
+      }
+    }
+
     Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
                         MVT::Other, &Chains[0], NumValues);
-  }
-  else {
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
+  } else {
     for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i) {
       SmallVector<EVT, 4> ValueVTs;
       ComputeValueVTs(TLI, I.getOperand(i)->getType(), ValueVTs);
@@ -829,7 +927,7 @@ void SelectionDAGBuilder::visitRet(ReturnInst &I) {
         unsigned NumParts = TLI.getNumRegisters(*DAG.getContext(), VT);
         EVT PartVT = TLI.getRegisterType(*DAG.getContext(), VT);
         SmallVector<SDValue, 4> Parts(NumParts);
-        getCopyToParts(DAG, getCurDebugLoc(),
+        getCopyToParts(DAG, getCurDebugLoc(), SDNodeOrder,
                        SDValue(RetOp.getNode(), RetOp.getResNo() + j),
                        &Parts[0], NumParts, PartVT, ExtendKind);
 
@@ -862,6 +960,9 @@ void SelectionDAGBuilder::visitRet(ReturnInst &I) {
 
   // Update the DAG with the new chain value resulting from return lowering.
   DAG.setRoot(Chain);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
 }
 
 /// CopyToExportRegsIfNeeded - If the given value has virtual registers
@@ -1110,10 +1211,16 @@ void SelectionDAGBuilder::visitBr(BranchInst &I) {
     CurMBB->addSuccessor(Succ0MBB);
 
     // If this is not a fall-through branch, emit the branch.
-    if (Succ0MBB != NextBlock)
-      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
+    if (Succ0MBB != NextBlock) {
+      SDValue V = DAG.getNode(ISD::BR, getCurDebugLoc(),
                               MVT::Other, getControlRoot(),
-                              DAG.getBasicBlock(Succ0MBB)));
+                              DAG.getBasicBlock(Succ0MBB));
+      DAG.setRoot(V);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(V.getNode(), SDNodeOrder);
+    }
+
     return;
   }
 
@@ -1172,6 +1279,7 @@ void SelectionDAGBuilder::visitBr(BranchInst &I) {
   // Create a CaseBlock record representing this branch.
   CaseBlock CB(ISD::SETEQ, CondVal, ConstantInt::getTrue(*DAG.getContext()),
                NULL, Succ0MBB, Succ1MBB, CurMBB);
+
   // Use visitSwitchCase to actually insert the fast branch sequence for this
   // cond branch.
   visitSwitchCase(CB);
@@ -1217,6 +1325,9 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB) {
     }
   }
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(Cond.getNode(), SDNodeOrder);
+
   // Update successor info
   CurMBB->addSuccessor(CB.TrueBB);
   CurMBB->addSuccessor(CB.FalseBB);
@@ -1234,26 +1345,36 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB) {
     std::swap(CB.TrueBB, CB.FalseBB);
     SDValue True = DAG.getConstant(1, Cond.getValueType());
     Cond = DAG.getNode(ISD::XOR, dl, Cond.getValueType(), Cond, True);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Cond.getNode(), SDNodeOrder);
   }
+
   SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
                                MVT::Other, getControlRoot(), Cond,
                                DAG.getBasicBlock(CB.TrueBB));
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
+
   // If the branch was constant folded, fix up the CFG.
   if (BrCond.getOpcode() == ISD::BR) {
     CurMBB->removeSuccessor(CB.FalseBB);
-    DAG.setRoot(BrCond);
   } else {
     // Otherwise, go ahead and insert the false branch.
     if (BrCond == getControlRoot())
       CurMBB->removeSuccessor(CB.TrueBB);
 
-    if (CB.FalseBB == NextBlock)
-      DAG.setRoot(BrCond);
-    else
-      DAG.setRoot(DAG.getNode(ISD::BR, dl, MVT::Other, BrCond,
-                              DAG.getBasicBlock(CB.FalseBB)));
+    if (CB.FalseBB != NextBlock) {
+      BrCond = DAG.getNode(ISD::BR, dl, MVT::Other, BrCond,
+                           DAG.getBasicBlock(CB.FalseBB));
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
+    }
   }
+
+  DAG.setRoot(BrCond);
 }
 
 /// visitJumpTable - Emit JumpTable node in the current MBB
@@ -1264,9 +1385,16 @@ void SelectionDAGBuilder::visitJumpTable(JumpTable &JT) {
   SDValue Index = DAG.getCopyFromReg(getControlRoot(), getCurDebugLoc(),
                                      JT.Reg, PTy);
   SDValue Table = DAG.getJumpTable(JT.JTI, PTy);
-  DAG.setRoot(DAG.getNode(ISD::BR_JT, getCurDebugLoc(),
-                          MVT::Other, Index.getValue(1),
-                          Table, Index));
+  SDValue BrJumpTable = DAG.getNode(ISD::BR_JT, getCurDebugLoc(),
+                                    MVT::Other, Index.getValue(1),
+                                    Table, Index);
+  DAG.setRoot(BrJumpTable);
+
+  if (DisableScheduling) {
+    DAG.AssignOrdering(Index.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(Table.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(BrJumpTable.getNode(), SDNodeOrder);
+  }
 }
 
 /// visitJumpTableHeader - This function emits necessary code to produce index
@@ -1278,7 +1406,7 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
   // difference between smallest and largest cases.
   SDValue SwitchOp = getValue(JTH.SValue);
   EVT VT = SwitchOp.getValueType();
-  SDValue SUB = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
+  SDValue Sub = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
                             DAG.getConstant(JTH.First, VT));
 
   // The SDNode we just created, which holds the value being switched on minus
@@ -1286,7 +1414,7 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
   // can be used as an index into the jump table in a subsequent basic block.
   // This value may be smaller or larger than the target's pointer type, and
   // therefore require extension or truncating.
-  SwitchOp = DAG.getZExtOrTrunc(SUB, getCurDebugLoc(), TLI.getPointerTy());
+  SwitchOp = DAG.getZExtOrTrunc(Sub, getCurDebugLoc(), TLI.getPointerTy());
 
   unsigned JumpTableReg = FuncInfo.MakeReg(TLI.getPointerTy());
   SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
@@ -1297,14 +1425,22 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
   // for the switch statement if the value being switched on exceeds the largest
   // case in the switch.
   SDValue CMP = DAG.getSetCC(getCurDebugLoc(),
-                             TLI.getSetCCResultType(SUB.getValueType()), SUB,
+                             TLI.getSetCCResultType(Sub.getValueType()), Sub,
                              DAG.getConstant(JTH.Last-JTH.First,VT),
                              ISD::SETUGT);
 
+  if (DisableScheduling) {
+    DAG.AssignOrdering(Sub.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(SwitchOp.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(CopyTo.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(CMP.getNode(), SDNodeOrder);
+  }
+
   // Set NextBlock to be the MBB immediately after the current one, if any.
   // This is used to avoid emitting unnecessary branches to the next block.
   MachineBasicBlock *NextBlock = 0;
   MachineFunction::iterator BBI = CurMBB;
+
   if (++BBI != FuncInfo.MF->end())
     NextBlock = BBI;
 
@@ -1312,11 +1448,18 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
                                MVT::Other, CopyTo, CMP,
                                DAG.getBasicBlock(JT.Default));
 
-  if (JT.MBB == NextBlock)
-    DAG.setRoot(BrCond);
-  else
-    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrCond,
-                            DAG.getBasicBlock(JT.MBB)));
+  if (DisableScheduling)
+    DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
+
+  if (JT.MBB != NextBlock) {
+    BrCond = DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrCond,
+                         DAG.getBasicBlock(JT.MBB));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
+  }
+
+  DAG.setRoot(BrCond);
 }
 
 /// visitBitTestHeader - This function emits necessary code to produce value
@@ -1325,21 +1468,29 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B) {
   // Subtract the minimum value
   SDValue SwitchOp = getValue(B.SValue);
   EVT VT = SwitchOp.getValueType();
-  SDValue SUB = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
+  SDValue Sub = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
                             DAG.getConstant(B.First, VT));
 
   // Check range
   SDValue RangeCmp = DAG.getSetCC(getCurDebugLoc(),
-                                  TLI.getSetCCResultType(SUB.getValueType()),
-                                  SUB, DAG.getConstant(B.Range, VT),
+                                  TLI.getSetCCResultType(Sub.getValueType()),
+                                  Sub, DAG.getConstant(B.Range, VT),
                                   ISD::SETUGT);
 
-  SDValue ShiftOp = DAG.getZExtOrTrunc(SUB, getCurDebugLoc(), TLI.getPointerTy());
+  SDValue ShiftOp = DAG.getZExtOrTrunc(Sub, getCurDebugLoc(),
+                                       TLI.getPointerTy());
 
   B.Reg = FuncInfo.MakeReg(TLI.getPointerTy());
   SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
                                     B.Reg, ShiftOp);
 
+  if (DisableScheduling) {
+    DAG.AssignOrdering(Sub.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(RangeCmp.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(ShiftOp.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(CopyTo.getNode(), SDNodeOrder);
+  }
+
   // Set NextBlock to be the MBB immediately after the current one, if any.
   // This is used to avoid emitting unnecessary branches to the next block.
   MachineBasicBlock *NextBlock = 0;
@@ -1356,11 +1507,18 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B) {
                                 MVT::Other, CopyTo, RangeCmp,
                                 DAG.getBasicBlock(B.Default));
 
-  if (MBB == NextBlock)
-    DAG.setRoot(BrRange);
-  else
-    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, CopyTo,
-                            DAG.getBasicBlock(MBB)));
+  if (DisableScheduling)
+    DAG.AssignOrdering(BrRange.getNode(), SDNodeOrder);
+
+  if (MBB != NextBlock) {
+    BrRange = DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, CopyTo,
+                          DAG.getBasicBlock(MBB));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(BrRange.getNode(), SDNodeOrder);
+  }
+
+  DAG.setRoot(BrRange);
 }
 
 /// visitBitTestCase - this function produces one "bit test"
@@ -1384,6 +1542,13 @@ void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
                                 AndOp, DAG.getConstant(0, TLI.getPointerTy()),
                                 ISD::SETNE);
 
+  if (DisableScheduling) {
+    DAG.AssignOrdering(ShiftOp.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(SwitchVal.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(AndOp.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(AndCmp.getNode(), SDNodeOrder);
+  }
+
   CurMBB->addSuccessor(B.TargetBB);
   CurMBB->addSuccessor(NextMBB);
 
@@ -1391,6 +1556,9 @@ void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
                               MVT::Other, getControlRoot(),
                               AndCmp, DAG.getBasicBlock(B.TargetBB));
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(BrAnd.getNode(), SDNodeOrder);
+
   // Set NextBlock to be the MBB immediately after the current one, if any.
   // This is used to avoid emitting unnecessary branches to the next block.
   MachineBasicBlock *NextBlock = 0;
@@ -1398,11 +1566,15 @@ void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
   if (++BBI != FuncInfo.MF->end())
     NextBlock = BBI;
 
-  if (NextMBB == NextBlock)
-    DAG.setRoot(BrAnd);
-  else
-    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrAnd,
-                            DAG.getBasicBlock(NextMBB)));
+  if (NextMBB != NextBlock) {
+    BrAnd = DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrAnd,
+                        DAG.getBasicBlock(NextMBB));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(BrAnd.getNode(), SDNodeOrder);
+  }
+
+  DAG.setRoot(BrAnd);
 }
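The branch-lowering hunks above also share a control-flow refactor: instead of calling `DAG.setRoot()` on two separate paths, the code now builds the conditional branch first, wraps it in an unconditional `ISD::BR` only when the target block is not the fall-through successor, and ends with a single `setRoot()`. That leaves exactly one final node on each path to hand to `AssignOrdering`. A hypothetical sketch of the shape (the string chain is illustrative, not real SelectionDAG types):

```cpp
#include <string>

// Decide whether an explicit jump is needed, mirroring the
// "if (TargetMBB != NextBlock) wrap in ISD::BR" pattern in the patch.
std::string lowerBranch(int TargetMBB, int NextBlock) {
  std::string Chain = "BRCOND";      // conditional branch already built
  if (TargetMBB != NextBlock)
    Chain = "BR(" + Chain + ")";     // target is not the fall-through block
  // Single exit point: one place to set the DAG root (and, in the real
  // code, one place per path to assign the node ordering).
  return Chain;
}
```

When the target is the next machine basic block, the fall-through makes the extra `BR` unnecessary, so it is simply not emitted.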
 
 void SelectionDAGBuilder::visitInvoke(InvokeInst &I) {
@@ -1425,9 +1597,13 @@ void SelectionDAGBuilder::visitInvoke(InvokeInst &I) {
   CurMBB->addSuccessor(LandingPad);
 
   // Drop into normal successor.
-  DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
-                          MVT::Other, getControlRoot(),
-                          DAG.getBasicBlock(Return)));
+  SDValue Branch = DAG.getNode(ISD::BR, getCurDebugLoc(),
+                               MVT::Other, getControlRoot(),
+                               DAG.getBasicBlock(Return));
+  DAG.setRoot(Branch);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Branch.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitUnwind(UnwindInst &I) {
@@ -1883,7 +2059,6 @@ bool SelectionDAGBuilder::handleBitTestsSwitchCase(CaseRec& CR,
   return true;
 }
 
-
 /// Clusterify - Transform simple list of Cases into list of CaseRange's
 size_t SelectionDAGBuilder::Clusterify(CaseVector& Cases,
                                        const SwitchInst& SI) {
@@ -1930,7 +2105,6 @@ size_t SelectionDAGBuilder::Clusterify(CaseVector& Cases,
 void SelectionDAGBuilder::visitSwitch(SwitchInst &SI) {
   // Figure out which block is immediately after the current one.
   MachineBasicBlock *NextBlock = 0;
-
   MachineBasicBlock *Default = FuncInfo.MBBMap[SI.getDefaultDest()];
 
   // If there is only the default destination, branch to it if it is not the
@@ -1940,10 +2114,16 @@ void SelectionDAGBuilder::visitSwitch(SwitchInst &SI) {
 
     // If this is not a fall-through branch, emit the branch.
     CurMBB->addSuccessor(Default);
-    if (Default != NextBlock)
-      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
-                              MVT::Other, getControlRoot(),
-                              DAG.getBasicBlock(Default)));
+    if (Default != NextBlock) {
+      SDValue Res = DAG.getNode(ISD::BR, getCurDebugLoc(),
+                                MVT::Other, getControlRoot(),
+                                DAG.getBasicBlock(Default));
+      DAG.setRoot(Res);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    }
+
     return;
   }
 
@@ -1995,11 +2175,14 @@ void SelectionDAGBuilder::visitIndirectBr(IndirectBrInst &I) {
   for (unsigned i = 0, e = I.getNumSuccessors(); i != e; ++i)
     CurMBB->addSuccessor(FuncInfo.MBBMap[I.getSuccessor(i)]);
 
-  DAG.setRoot(DAG.getNode(ISD::BRIND, getCurDebugLoc(),
-                          MVT::Other, getControlRoot(),
-                          getValue(I.getAddress())));
-}
+  SDValue Res = DAG.getNode(ISD::BRIND, getCurDebugLoc(),
+                            MVT::Other, getControlRoot(),
+                            getValue(I.getAddress()));
+  DAG.setRoot(Res);
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+}
 
 void SelectionDAGBuilder::visitFSub(User &I) {
   // -0.0 - X --> fneg
@@ -2013,17 +2196,28 @@ void SelectionDAGBuilder::visitFSub(User &I) {
       Constant *CNZ = ConstantVector::get(&NZ[0], NZ.size());
       if (CV == CNZ) {
         SDValue Op2 = getValue(I.getOperand(1));
-        setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
-                                 Op2.getValueType(), Op2));
+        SDValue Res = DAG.getNode(ISD::FNEG, getCurDebugLoc(),
+                                  Op2.getValueType(), Op2); 
+        setValue(&I, Res);
+
+        if (DisableScheduling)
+          DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
         return;
       }
     }
   }
+
   if (ConstantFP *CFP = dyn_cast<ConstantFP>(I.getOperand(0)))
     if (CFP->isExactlyValue(ConstantFP::getNegativeZero(Ty)->getValueAPF())) {
       SDValue Op2 = getValue(I.getOperand(1));
-      setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
-                               Op2.getValueType(), Op2));
+      SDValue Res = DAG.getNode(ISD::FNEG, getCurDebugLoc(),
+                                Op2.getValueType(), Op2);
+      setValue(&I, Res);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
       return;
     }
 
@@ -2033,9 +2227,12 @@ void SelectionDAGBuilder::visitFSub(User &I) {
 void SelectionDAGBuilder::visitBinary(User &I, unsigned OpCode) {
   SDValue Op1 = getValue(I.getOperand(0));
   SDValue Op2 = getValue(I.getOperand(1));
+  SDValue Res = DAG.getNode(OpCode, getCurDebugLoc(),
+                            Op1.getValueType(), Op1, Op2);
+  setValue(&I, Res);
 
-  setValue(&I, DAG.getNode(OpCode, getCurDebugLoc(),
-                           Op1.getValueType(), Op1, Op2));
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitShift(User &I, unsigned Opcode) {
@@ -2068,8 +2265,15 @@ void SelectionDAGBuilder::visitShift(User &I, unsigned Opcode) {
                         TLI.getPointerTy(), Op2);
   }
 
-  setValue(&I, DAG.getNode(Opcode, getCurDebugLoc(),
-                           Op1.getValueType(), Op1, Op2));
+  SDValue Res = DAG.getNode(Opcode, getCurDebugLoc(),
+                            Op1.getValueType(), Op1, Op2);
+  setValue(&I, Res);
+
+  if (DisableScheduling) {
+    DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(Op2.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  }
 }
 
 void SelectionDAGBuilder::visitICmp(User &I) {
@@ -2083,7 +2287,11 @@ void SelectionDAGBuilder::visitICmp(User &I) {
   ISD::CondCode Opcode = getICmpCondCode(predicate);
   
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Opcode));
+  SDValue Res = DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Opcode);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitFCmp(User &I) {
@@ -2096,37 +2304,54 @@ void SelectionDAGBuilder::visitFCmp(User &I) {
   SDValue Op2 = getValue(I.getOperand(1));
   ISD::CondCode Condition = getFCmpCondCode(predicate);
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Condition));
+  SDValue Res = DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Condition);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitSelect(User &I) {
   SmallVector<EVT, 4> ValueVTs;
   ComputeValueVTs(TLI, I.getType(), ValueVTs);
   unsigned NumValues = ValueVTs.size();
-  if (NumValues != 0) {
-    SmallVector<SDValue, 4> Values(NumValues);
-    SDValue Cond     = getValue(I.getOperand(0));
-    SDValue TrueVal  = getValue(I.getOperand(1));
-    SDValue FalseVal = getValue(I.getOperand(2));
+  if (NumValues == 0) return;
+
+  SmallVector<SDValue, 4> Values(NumValues);
+  SDValue Cond     = getValue(I.getOperand(0));
+  SDValue TrueVal  = getValue(I.getOperand(1));
+  SDValue FalseVal = getValue(I.getOperand(2));
 
-    for (unsigned i = 0; i != NumValues; ++i)
-      Values[i] = DAG.getNode(ISD::SELECT, getCurDebugLoc(),
-                              TrueVal.getNode()->getValueType(i), Cond,
-                              SDValue(TrueVal.getNode(), TrueVal.getResNo() + i),
-                              SDValue(FalseVal.getNode(), FalseVal.getResNo() + i));
+  for (unsigned i = 0; i != NumValues; ++i) {
+    Values[i] = DAG.getNode(ISD::SELECT, getCurDebugLoc(),
+                            TrueVal.getNode()->getValueType(i), Cond,
+                            SDValue(TrueVal.getNode(),
+                                    TrueVal.getResNo() + i),
+                            SDValue(FalseVal.getNode(),
+                                    FalseVal.getResNo() + i));
 
-    setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                             DAG.getVTList(&ValueVTs[0], NumValues),
-                             &Values[0], NumValues));
+    if (DisableScheduling)
+      DAG.AssignOrdering(Values[i].getNode(), SDNodeOrder);
   }
-}
 
+  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                            DAG.getVTList(&ValueVTs[0], NumValues),
+                            &Values[0], NumValues);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+}
 
 void SelectionDAGBuilder::visitTrunc(User &I) {
   // TruncInst cannot be a no-op cast because sizeof(src) > sizeof(dest).
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitZExt(User &I) {
@@ -2134,7 +2359,11 @@ void SelectionDAGBuilder::visitZExt(User &I) {
   // ZExt also can't be a cast to bool for same reason. So, nothing much to do
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitSExt(User &I) {
@@ -2142,50 +2371,78 @@ void SelectionDAGBuilder::visitSExt(User &I) {
   // SExt also can't be a cast to bool for same reason. So, nothing much to do
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitFPTrunc(User &I) {
   // FPTrunc is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_ROUND, getCurDebugLoc(),
-                           DestVT, N, DAG.getIntPtrConstant(0)));
+  SDValue Res = DAG.getNode(ISD::FP_ROUND, getCurDebugLoc(),
+                            DestVT, N, DAG.getIntPtrConstant(0));
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitFPExt(User &I){
   // FPTrunc is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_EXTEND, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::FP_EXTEND, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitFPToUI(User &I) {
   // FPToUI is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_TO_UINT, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::FP_TO_UINT, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitFPToSI(User &I) {
   // FPToSI is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_TO_SINT, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::FP_TO_SINT, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitUIToFP(User &I) {
   // UIToFP is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::UINT_TO_FP, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::UINT_TO_FP, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitSIToFP(User &I){
   // SIToFP is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::SINT_TO_FP, getCurDebugLoc(), DestVT, N));
+  SDValue Res = DAG.getNode(ISD::SINT_TO_FP, getCurDebugLoc(), DestVT, N);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitPtrToInt(User &I) {
@@ -2194,8 +2451,11 @@ void SelectionDAGBuilder::visitPtrToInt(User &I) {
   SDValue N = getValue(I.getOperand(0));
   EVT SrcVT = N.getValueType();
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Result = DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT);
-  setValue(&I, Result);
+  SDValue Res = DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitIntToPtr(User &I) {
@@ -2204,41 +2464,61 @@ void SelectionDAGBuilder::visitIntToPtr(User &I) {
   SDValue N = getValue(I.getOperand(0));
   EVT SrcVT = N.getValueType();
   EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT));
+  SDValue Res = DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitBitCast(User &I) {
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
 
-  // BitCast assures us that source and destination are the same size so this
-  // is either a BIT_CONVERT or a no-op.
-  if (DestVT != N.getValueType())
-    setValue(&I, DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
-                             DestVT, N)); // convert types
-  else
-    setValue(&I, N); // noop cast.
+  // BitCast assures us that source and destination are the same size so this is
+  // either a BIT_CONVERT or a no-op.
+  if (DestVT != N.getValueType()) {
+    SDValue Res = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
+                              DestVT, N); // convert types.
+    setValue(&I, Res);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  } else {
+    setValue(&I, N);            // noop cast.
+  }
 }
 
 void SelectionDAGBuilder::visitInsertElement(User &I) {
   SDValue InVec = getValue(I.getOperand(0));
   SDValue InVal = getValue(I.getOperand(1));
   SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                                TLI.getPointerTy(),
-                                getValue(I.getOperand(2)));
+                              TLI.getPointerTy(),
+                              getValue(I.getOperand(2)));
+  SDValue Res = DAG.getNode(ISD::INSERT_VECTOR_ELT, getCurDebugLoc(),
+                            TLI.getValueType(I.getType()),
+                            InVec, InVal, InIdx);
+  setValue(&I, Res);
 
-  setValue(&I, DAG.getNode(ISD::INSERT_VECTOR_ELT, getCurDebugLoc(),
-                           TLI.getValueType(I.getType()),
-                           InVec, InVal, InIdx));
+  if (DisableScheduling) {
+    DAG.AssignOrdering(InIdx.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  }
 }
 
 void SelectionDAGBuilder::visitExtractElement(User &I) {
   SDValue InVec = getValue(I.getOperand(0));
   SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                                TLI.getPointerTy(),
-                                getValue(I.getOperand(1)));
-  setValue(&I, DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                           TLI.getValueType(I.getType()), InVec, InIdx));
+                              TLI.getPointerTy(),
+                              getValue(I.getOperand(1)));
+  SDValue Res = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                            TLI.getValueType(I.getType()), InVec, InIdx);
+  setValue(&I, Res);
+
+  if (DisableScheduling) {
+    DAG.AssignOrdering(InIdx.getNode(), SDNodeOrder);
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  }
 }
 
 
@@ -2275,8 +2555,13 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
   unsigned SrcNumElts = SrcVT.getVectorNumElements();
 
   if (SrcNumElts == MaskNumElts) {
-    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                      &Mask[0]));
+    SDValue Res = DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                       &Mask[0]);
+    setValue(&I, Res);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
     return;
   }
 
@@ -2287,8 +2572,13 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
     // lengths match.
     if (SrcNumElts*2 == MaskNumElts && SequentialMask(Mask, 0)) {
       // The shuffle is concatenating two vectors together.
-      setValue(&I, DAG.getNode(ISD::CONCAT_VECTORS, getCurDebugLoc(),
-                               VT, Src1, Src2));
+      SDValue Res = DAG.getNode(ISD::CONCAT_VECTORS, getCurDebugLoc(),
+                                VT, Src1, Src2);
+      setValue(&I, Res);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
       return;
     }
 
@@ -2319,8 +2609,17 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
       else
         MappedOps.push_back(Idx + MaskNumElts - SrcNumElts);
     }
-    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2, 
-                                      &MappedOps[0]));
+
+    SDValue Res = DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2, 
+                                       &MappedOps[0]);
+    setValue(&I, Res);
+
+    if (DisableScheduling) {
+      DAG.AssignOrdering(Src1.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(Src2.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    }
+
     return;
   }
 
@@ -2371,20 +2670,28 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
     }
 
     if (RangeUse[0] == 0 && RangeUse[1] == 0) {
-      setValue(&I, DAG.getUNDEF(VT));  // Vectors are not used.
+      SDValue Res = DAG.getUNDEF(VT);
+      setValue(&I, Res);  // Vectors are not used.
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
       return;
     }
     else if (RangeUse[0] < 2 && RangeUse[1] < 2) {
       // Extract appropriate subvector and generate a vector shuffle
       for (int Input=0; Input < 2; ++Input) {
-        SDValue& Src = Input == 0 ? Src1 : Src2;
-        if (RangeUse[Input] == 0) {
+        SDValue &Src = Input == 0 ? Src1 : Src2;
+        if (RangeUse[Input] == 0)
           Src = DAG.getUNDEF(VT);
-        } else {
+        else
           Src = DAG.getNode(ISD::EXTRACT_SUBVECTOR, getCurDebugLoc(), VT,
                             Src, DAG.getIntPtrConstant(StartIdx[Input]));
-        }
+
+        if (DisableScheduling)
+          DAG.AssignOrdering(Src.getNode(), SDNodeOrder);
       }
+
       // Calculate new mask.
       SmallVector<int, 8> MappedOps;
       for (unsigned i = 0; i != MaskNumElts; ++i) {
@@ -2396,8 +2703,14 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
         else
           MappedOps.push_back(Idx - SrcNumElts - StartIdx[1] + MaskNumElts);
       }
-      setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                        &MappedOps[0]));
+
+      SDValue Res = DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                         &MappedOps[0]);
+      setValue(&I, Res);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+
       return;
     }
   }
@@ -2413,17 +2726,29 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
       Ops.push_back(DAG.getUNDEF(EltVT));
     } else {
       int Idx = Mask[i];
+      SDValue Res;
+
       if (Idx < (int)SrcNumElts)
-        Ops.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                                  EltVT, Src1, DAG.getConstant(Idx, PtrVT)));
+        Res = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                          EltVT, Src1, DAG.getConstant(Idx, PtrVT));
       else
-        Ops.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                                  EltVT, Src2,
-                                  DAG.getConstant(Idx - SrcNumElts, PtrVT)));
+        Res = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                          EltVT, Src2,
+                          DAG.getConstant(Idx - SrcNumElts, PtrVT));
+
+      Ops.push_back(Res);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     }
   }
-  setValue(&I, DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
-                           VT, &Ops[0], Ops.size()));
+
+  SDValue Res = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
+                            VT, &Ops[0], Ops.size());
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitInsertValue(InsertValueInst &I) {
@@ -2462,9 +2787,13 @@ void SelectionDAGBuilder::visitInsertValue(InsertValueInst &I) {
     Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
                 SDValue(Agg.getNode(), Agg.getResNo() + i);
 
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                           DAG.getVTList(&AggValueVTs[0], NumAggValues),
-                           &Values[0], NumAggValues));
+  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                            DAG.getVTList(&AggValueVTs[0], NumAggValues),
+                            &Values[0], NumAggValues);
+  setValue(&I, Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 void SelectionDAGBuilder::visitExtractValue(ExtractValueInst &I) {
@@ -2490,11 +2819,14 @@ void SelectionDAGBuilder::visitExtractValue(ExtractValueInst &I) {
         DAG.getUNDEF(Agg.getNode()->getValueType(Agg.getResNo() + i)) :
         SDValue(Agg.getNode(), Agg.getResNo() + i);
 
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                           DAG.getVTList(&ValValueVTs[0], NumValValues),
-                           &Values[0], NumValValues));
-}
+  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                            DAG.getVTList(&ValValueVTs[0], NumValValues),
+                            &Values[0], NumValValues);
+  setValue(&I, Res);
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+}
 
 void SelectionDAGBuilder::visitGetElementPtr(User &I) {
   SDValue N = getValue(I.getOperand(0));
@@ -2510,7 +2842,11 @@ void SelectionDAGBuilder::visitGetElementPtr(User &I) {
         uint64_t Offset = TD->getStructLayout(StTy)->getElementOffset(Field);
         N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
                         DAG.getIntPtrConstant(Offset));
+
+        if (DisableScheduling)
+          DAG.AssignOrdering(N.getNode(), SDNodeOrder);
       }
+
       Ty = StTy->getElementType(Field);
     } else {
       Ty = cast<SequentialType>(Ty)->getElementType();
@@ -2523,14 +2859,21 @@ void SelectionDAGBuilder::visitGetElementPtr(User &I) {
         SDValue OffsVal;
         EVT PTy = TLI.getPointerTy();
         unsigned PtrBits = PTy.getSizeInBits();
-        if (PtrBits < 64) {
+        if (PtrBits < 64)
           OffsVal = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
                                 TLI.getPointerTy(),
                                 DAG.getConstant(Offs, MVT::i64));
-        } else
+        else
           OffsVal = DAG.getIntPtrConstant(Offs);
+
         N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
                         OffsVal);
+
+        if (DisableScheduling) {
+          DAG.AssignOrdering(OffsVal.getNode(), SDNodeOrder);
+          DAG.AssignOrdering(N.getNode(), SDNodeOrder);
+        }
+
         continue;
       }
 
@@ -2556,12 +2899,19 @@ void SelectionDAGBuilder::visitGetElementPtr(User &I) {
           IdxN = DAG.getNode(ISD::MUL, getCurDebugLoc(),
                              N.getValueType(), IdxN, Scale);
         }
+
+        if (DisableScheduling)
+          DAG.AssignOrdering(IdxN.getNode(), SDNodeOrder);
       }
 
       N = DAG.getNode(ISD::ADD, getCurDebugLoc(),
                       N.getValueType(), N, IdxN);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(N.getNode(), SDNodeOrder);
     }
   }
+
   setValue(&I, N);
 }
 
@@ -2583,11 +2933,15 @@ void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
                           AllocSize,
                           DAG.getConstant(TySize, AllocSize.getValueType()));
   
-  
+  if (DisableScheduling)
+    DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
   
   EVT IntPtr = TLI.getPointerTy();
   AllocSize = DAG.getZExtOrTrunc(AllocSize, getCurDebugLoc(), IntPtr);
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
+  
   // Handle alignment.  If the requested alignment is less than or equal to
   // the stack alignment, ignore it.  If the size is greater than or equal to
   // the stack alignment, we note this in the DYNAMIC_STACKALLOC node.
@@ -2601,10 +2955,15 @@ void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
   AllocSize = DAG.getNode(ISD::ADD, getCurDebugLoc(),
                           AllocSize.getValueType(), AllocSize,
                           DAG.getIntPtrConstant(StackAlign-1));
+  if (DisableScheduling)
+    DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
+
   // Mask out the low bits for alignment purposes.
   AllocSize = DAG.getNode(ISD::AND, getCurDebugLoc(),
                           AllocSize.getValueType(), AllocSize,
                           DAG.getIntPtrConstant(~(uint64_t)(StackAlign-1)));
+  if (DisableScheduling)
+    DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
 
   SDValue Ops[] = { getRoot(), AllocSize, DAG.getIntPtrConstant(Align) };
   SDVTList VTs = DAG.getVTList(AllocSize.getValueType(), MVT::Other);
@@ -2613,6 +2972,9 @@ void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
   setValue(&I, DSA);
   DAG.setRoot(DSA.getValue(1));
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(DSA.getNode(), SDNodeOrder);
+
   // Inform the Frame Information that we have just allocated a variable-sized
   // object.
   FuncInfo.MF->getFrameInfo()->CreateVariableSizedObject();
@@ -2651,30 +3013,41 @@ void SelectionDAGBuilder::visitLoad(LoadInst &I) {
   SmallVector<SDValue, 4> Chains(NumValues);
   EVT PtrVT = Ptr.getValueType();
   for (unsigned i = 0; i != NumValues; ++i) {
+    SDValue A = DAG.getNode(ISD::ADD, getCurDebugLoc(),
+                            PtrVT, Ptr,
+                            DAG.getConstant(Offsets[i], PtrVT));
     SDValue L = DAG.getLoad(ValueVTs[i], getCurDebugLoc(), Root,
-                            DAG.getNode(ISD::ADD, getCurDebugLoc(),
-                                        PtrVT, Ptr,
-                                        DAG.getConstant(Offsets[i], PtrVT)),
-                            SV, Offsets[i], isVolatile, Alignment);
+                            A, SV, Offsets[i], isVolatile, Alignment);
+
     Values[i] = L;
     Chains[i] = L.getValue(1);
+
+    if (DisableScheduling) {
+      DAG.AssignOrdering(A.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(L.getNode(), SDNodeOrder);
+    }
   }
 
   if (!ConstantMemory) {
     SDValue Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
-                                  MVT::Other,
-                                  &Chains[0], NumValues);
+                                MVT::Other, &Chains[0], NumValues);
     if (isVolatile)
       DAG.setRoot(Chain);
     else
       PendingLoads.push_back(Chain);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
   }
 
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                           DAG.getVTList(&ValueVTs[0], NumValues),
-                           &Values[0], NumValues));
-}
+  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                            DAG.getVTList(&ValueVTs[0], NumValues),
+                            &Values[0], NumValues);
+  setValue(&I, Res);
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+}
 
 void SelectionDAGBuilder::visitStore(StoreInst &I) {
   Value *SrcV = I.getOperand(0);
@@ -2698,16 +3071,26 @@ void SelectionDAGBuilder::visitStore(StoreInst &I) {
   EVT PtrVT = Ptr.getValueType();
   bool isVolatile = I.isVolatile();
   unsigned Alignment = I.getAlignment();
-  for (unsigned i = 0; i != NumValues; ++i)
+
+  for (unsigned i = 0; i != NumValues; ++i) {
+    SDValue Add = DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT, Ptr,
+                              DAG.getConstant(Offsets[i], PtrVT));
     Chains[i] = DAG.getStore(Root, getCurDebugLoc(),
                              SDValue(Src.getNode(), Src.getResNo() + i),
-                             DAG.getNode(ISD::ADD, getCurDebugLoc(),
-                                         PtrVT, Ptr,
-                                         DAG.getConstant(Offsets[i], PtrVT)),
-                             PtrV, Offsets[i], isVolatile, Alignment);
+                             Add, PtrV, Offsets[i], isVolatile, Alignment);
+
+    if (DisableScheduling) {
+      DAG.AssignOrdering(Add.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(Chains[i].getNode(), SDNodeOrder);
+    }
+  }
+
+  SDValue Res = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
+                            MVT::Other, &Chains[0], NumValues);
+  DAG.setRoot(Res);
 
-  DAG.setRoot(DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
-                          MVT::Other, &Chains[0], NumValues));
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
 }
 
 /// visitTargetIntrinsic - Lower a call of a target intrinsic to an INTRINSIC
@@ -2752,6 +3135,7 @@ void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
            "Intrinsic uses a non-legal type?");
   }
 #endif // NDEBUG
+
   if (HasChain)
     ValueVTs.push_back(MVT::Other);
 
@@ -2766,16 +3150,19 @@ void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
                                      Info.memVT, Info.ptrVal, Info.offset,
                                      Info.align, Info.vol,
                                      Info.readMem, Info.writeMem);
-  }
-  else if (!HasChain)
+  } else if (!HasChain) {
     Result = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, getCurDebugLoc(),
                          VTs, &Ops[0], Ops.size());
-  else if (I.getType() != Type::getVoidTy(*DAG.getContext()))
+  } else if (I.getType() != Type::getVoidTy(*DAG.getContext())) {
     Result = DAG.getNode(ISD::INTRINSIC_W_CHAIN, getCurDebugLoc(),
                          VTs, &Ops[0], Ops.size());
-  else
+  } else {
     Result = DAG.getNode(ISD::INTRINSIC_VOID, getCurDebugLoc(),
                          VTs, &Ops[0], Ops.size());
+  }
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Result.getNode(), SDNodeOrder);
 
   if (HasChain) {
     SDValue Chain = Result.getValue(Result.getNode()->getNumValues()-1);
@@ -2784,11 +3171,16 @@ void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
     else
       DAG.setRoot(Chain);
   }
+
   if (I.getType() != Type::getVoidTy(*DAG.getContext())) {
     if (const VectorType *PTy = dyn_cast<VectorType>(I.getType())) {
       EVT VT = TLI.getValueType(PTy);
       Result = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(), VT, Result);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Result.getNode(), SDNodeOrder);
     }
+
     setValue(&I, Result);
   }
 }
@@ -2800,12 +3192,20 @@ void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
 ///
 /// where Op is the hexadecimal representation of the floating-point value.
 static SDValue
-GetSignificand(SelectionDAG &DAG, SDValue Op, DebugLoc dl) {
+GetSignificand(SelectionDAG &DAG, SDValue Op, DebugLoc dl, unsigned Order) {
   SDValue t1 = DAG.getNode(ISD::AND, dl, MVT::i32, Op,
                            DAG.getConstant(0x007fffff, MVT::i32));
   SDValue t2 = DAG.getNode(ISD::OR, dl, MVT::i32, t1,
                            DAG.getConstant(0x3f800000, MVT::i32));
-  return DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t2);
+  SDValue Res = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t2);
+
+  if (DisableScheduling) {
+    DAG.AssignOrdering(t1.getNode(), Order);
+    DAG.AssignOrdering(t2.getNode(), Order);
+    DAG.AssignOrdering(Res.getNode(), Order);
+  }
+
+  return Res;
 }
 
 /// GetExponent - Get the exponent:
@@ -2815,14 +3215,23 @@ GetSignificand(SelectionDAG &DAG, SDValue Op, DebugLoc dl) {
 /// where Op is the hexadecimal representation of the floating-point value.
 static SDValue
 GetExponent(SelectionDAG &DAG, SDValue Op, const TargetLowering &TLI,
-            DebugLoc dl) {
+            DebugLoc dl, unsigned Order) {
   SDValue t0 = DAG.getNode(ISD::AND, dl, MVT::i32, Op,
                            DAG.getConstant(0x7f800000, MVT::i32));
   SDValue t1 = DAG.getNode(ISD::SRL, dl, MVT::i32, t0,
                            DAG.getConstant(23, TLI.getPointerTy()));
   SDValue t2 = DAG.getNode(ISD::SUB, dl, MVT::i32, t1,
                            DAG.getConstant(127, MVT::i32));
-  return DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, t2);
+  SDValue Res = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, t2);
+
+  if (DisableScheduling) {
+    DAG.AssignOrdering(t0.getNode(), Order);
+    DAG.AssignOrdering(t1.getNode(), Order);
+    DAG.AssignOrdering(t2.getNode(), Order);
+    DAG.AssignOrdering(Res.getNode(), Order);
+  }
+
+  return Res;
 }
 
 /// getF32Constant - Get 32-bit floating point constant.
@@ -2846,6 +3255,10 @@ SelectionDAGBuilder::implVisitBinaryAtomic(CallInst& I, ISD::NodeType Op) {
                   I.getOperand(1));
   setValue(&I, L);
   DAG.setRoot(L.getValue(1));
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(L.getNode(), SDNodeOrder);
+
   return 0;
 }
 
@@ -2859,6 +3272,10 @@ SelectionDAGBuilder::implVisitAluOverflow(CallInst &I, ISD::NodeType Op) {
   SDValue Result = DAG.getNode(Op, getCurDebugLoc(), VTs, Op1, Op2);
 
   setValue(&I, Result);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Result.getNode(), SDNodeOrder);
+
   return 0;
 }
 
@@ -2886,10 +3303,20 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
     SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
     SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
 
+    if (DisableScheduling) {
+      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(X.getNode(), SDNodeOrder);
+    }
+
     //   IntegerPartOfX <<= 23;
     IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
                                  DAG.getConstant(23, TLI.getPointerTy()));
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
+
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
       //
@@ -2912,6 +3339,16 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
                                TwoToFracPartOfX, IntegerPartOfX);
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t6);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFracPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -2938,6 +3375,18 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
                                TwoToFracPartOfX, IntegerPartOfX);
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t8);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFracPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -2977,12 +3426,32 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
                                 TwoToFracPartOfX, IntegerPartOfX);
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t14);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t11.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t12.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t13.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t14.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFracPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FEXP, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
+    if (DisableScheduling)
+      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3000,14 +3469,20 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
     SDValue Op = getValue(I.getOperand(1));
     SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
+
     // Scale the exponent by log(2) [0.69314718f].
-    SDValue Exp = GetExponent(DAG, Op1, TLI, dl);
+    SDValue Exp = GetExponent(DAG, Op1, TLI, dl, SDNodeOrder);
     SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
                                         getF32Constant(DAG, 0x3f317218));
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(LogOfExponent.getNode(), SDNodeOrder);
+
     // Get the significand and build it into a floating-point number with
     // exponent of 1.
-    SDValue X = GetSignificand(DAG, Op1, dl);
+    SDValue X = GetSignificand(DAG, Op1, dl, SDNodeOrder);
 
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
@@ -3027,6 +3502,14 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, LogOfMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(LogOfMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3053,6 +3536,18 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, LogOfMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(LogOfMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3087,12 +3582,31 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, LogOfMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(LogOfMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FLOG, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3110,12 +3624,18 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
     SDValue Op = getValue(I.getOperand(1));
     SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
+
     // Get the exponent.
-    SDValue LogOfExponent = GetExponent(DAG, Op1, TLI, dl);
+    SDValue LogOfExponent = GetExponent(DAG, Op1, TLI, dl, SDNodeOrder);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(LogOfExponent.getNode(), SDNodeOrder);
 
     // Get the significand and build it into a floating-point number with
     // exponent of 1.
-    SDValue X = GetSignificand(DAG, Op1, dl);
+    SDValue X = GetSignificand(DAG, Op1, dl, SDNodeOrder);
 
     // Different possible minimax approximations of significand in
     // floating-point for various degrees of accuracy over [1,2].
@@ -3135,6 +3655,14 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log2ofMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Log2ofMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3161,6 +3689,18 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log2ofMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Log2ofMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3196,12 +3736,31 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log2ofMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Log2ofMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FLOG2, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3219,14 +3778,20 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
     SDValue Op = getValue(I.getOperand(1));
     SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
+
     // Scale the exponent by log10(2) [0.30102999f].
-    SDValue Exp = GetExponent(DAG, Op1, TLI, dl);
+    SDValue Exp = GetExponent(DAG, Op1, TLI, dl, SDNodeOrder);
     SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
                                         getF32Constant(DAG, 0x3e9a209a));
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(LogOfExponent.getNode(), SDNodeOrder);
+
     // Get the significand and build it into a floating-point number with
     // exponent of 1.
-    SDValue X = GetSignificand(DAG, Op1, dl);
+    SDValue X = GetSignificand(DAG, Op1, dl, SDNodeOrder);
 
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
@@ -3246,6 +3811,14 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log10ofMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Log10ofMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3268,6 +3841,16 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log10ofMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Log10ofMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3298,12 +3881,29 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log10ofMantissa);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(Log10ofMantissa.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FLOG10, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3322,6 +3922,9 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
     SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, Op);
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
+
     //   FractionalPartOfX = x - (float)IntegerPartOfX;
     SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
     SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, Op, t1);
@@ -3330,6 +3933,12 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
     IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
                                  DAG.getConstant(23, TLI.getPointerTy()));
 
+    if (DisableScheduling) {
+      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(X.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
+    }
+
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
       //
@@ -3351,6 +3960,16 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3376,6 +3995,18 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3412,12 +4043,33 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t11.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t12.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t13.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t14.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FEXP2, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3459,10 +4111,20 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
     SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
     SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
 
+    if (DisableScheduling) {
+      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(X.getNode(), SDNodeOrder);
+    }
+
     //   IntegerPartOfX <<= 23;
     IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
                                  DAG.getConstant(23, TLI.getPointerTy()));
 
+    if (DisableScheduling)
+      DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
+
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
       //
@@ -3484,6 +4146,16 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3509,6 +4181,18 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3545,6 +4229,24 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
+
+      if (DisableScheduling) {
+        DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t11.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t12.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t13.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(t14.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
+        DAG.AssignOrdering(result.getNode(), SDNodeOrder);
+      }
     }
   } else {
     // No special expansion.
@@ -3552,6 +4254,9 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)),
                          getValue(I.getOperand(2)));
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3563,6 +4268,8 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 const char *
 SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   DebugLoc dl = getCurDebugLoc();
+  SDValue Res;
+
   switch (Intrinsic) {
   default:
     // By default, turn this into a target intrinsic node.
@@ -3572,26 +4279,33 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   case Intrinsic::vaend:    visitVAEnd(I); return 0;
   case Intrinsic::vacopy:   visitVACopy(I); return 0;
   case Intrinsic::returnaddress:
-    setValue(&I, DAG.getNode(ISD::RETURNADDR, dl, TLI.getPointerTy(),
-                             getValue(I.getOperand(1))));
+    Res = DAG.getNode(ISD::RETURNADDR, dl, TLI.getPointerTy(),
+                      getValue(I.getOperand(1)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::frameaddress:
-    setValue(&I, DAG.getNode(ISD::FRAMEADDR, dl, TLI.getPointerTy(),
-                             getValue(I.getOperand(1))));
+    Res = DAG.getNode(ISD::FRAMEADDR, dl, TLI.getPointerTy(),
+                      getValue(I.getOperand(1)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::setjmp:
     return "_setjmp"+!TLI.usesUnderscoreSetJmp();
-    break;
   case Intrinsic::longjmp:
     return "_longjmp"+!TLI.usesUnderscoreLongJmp();
-    break;
   case Intrinsic::memcpy: {
     SDValue Op1 = getValue(I.getOperand(1));
     SDValue Op2 = getValue(I.getOperand(2));
     SDValue Op3 = getValue(I.getOperand(3));
     unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-    DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
-                              I.getOperand(1), 0, I.getOperand(2), 0));
+    Res = DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
+                        I.getOperand(1), 0, I.getOperand(2), 0);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::memset: {
@@ -3599,8 +4313,11 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue Op2 = getValue(I.getOperand(2));
     SDValue Op3 = getValue(I.getOperand(3));
     unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-    DAG.setRoot(DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
-                              I.getOperand(1), 0));
+    Res = DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
+                        I.getOperand(1), 0);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::memmove: {
@@ -3616,13 +4333,19 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
       Size = C->getZExtValue();
     if (AA->alias(I.getOperand(1), Size, I.getOperand(2), Size) ==
         AliasAnalysis::NoAlias) {
-      DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
-                                I.getOperand(1), 0, I.getOperand(2), 0));
+      Res = DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
+                          I.getOperand(1), 0, I.getOperand(2), 0);
+      DAG.setRoot(Res);
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
       return 0;
     }
 
-    DAG.setRoot(DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
-                               I.getOperand(1), 0, I.getOperand(2), 0));
+    Res = DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
+                         I.getOperand(1), 0, I.getOperand(2), 0);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::dbg_stoppoint: 
@@ -3675,6 +4398,8 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue Op = DAG.getNode(ISD::EXCEPTIONADDR, dl, VTs, Ops, 1);
     setValue(&I, Op);
     DAG.setRoot(Op.getValue(1));
+    if (DisableScheduling)
+      DAG.AssignOrdering(Op.getNode(), SDNodeOrder);
     return 0;
   }
 
@@ -3701,7 +4426,12 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
 
     DAG.setRoot(Op.getValue(1));
 
-    setValue(&I, DAG.getSExtOrTrunc(Op, dl, MVT::i32));
+    Res = DAG.getSExtOrTrunc(Op, dl, MVT::i32);
+    setValue(&I, Res);
+    if (DisableScheduling) {
+      DAG.AssignOrdering(Op.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    }
     return 0;
   }
 
@@ -3711,14 +4441,16 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     if (MMI) {
       // Find the type id for the given typeinfo.
       GlobalVariable *GV = ExtractTypeInfo(I.getOperand(1));
-
       unsigned TypeID = MMI->getTypeIDFor(GV);
-      setValue(&I, DAG.getConstant(TypeID, MVT::i32));
+      Res = DAG.getConstant(TypeID, MVT::i32);
     } else {
       // Return something different to eh_selector.
-      setValue(&I, DAG.getConstant(1, MVT::i32));
+      Res = DAG.getConstant(1, MVT::i32);
     }
 
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
 
@@ -3726,11 +4458,14 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   case Intrinsic::eh_return_i64:
     if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
       MMI->setCallsEHReturn(true);
-      DAG.setRoot(DAG.getNode(ISD::EH_RETURN, dl,
-                              MVT::Other,
-                              getControlRoot(),
-                              getValue(I.getOperand(1)),
-                              getValue(I.getOperand(2))));
+      Res = DAG.getNode(ISD::EH_RETURN, dl,
+                        MVT::Other,
+                        getControlRoot(),
+                        getValue(I.getOperand(1)),
+                        getValue(I.getOperand(2)));
+      DAG.setRoot(Res);
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     } else {
       setValue(&I, DAG.getConstant(0, TLI.getPointerTy()));
     }
@@ -3740,26 +4475,28 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
       MMI->setCallsUnwindInit(true);
     }
-
     return 0;
-
   case Intrinsic::eh_dwarf_cfa: {
     EVT VT = getValue(I.getOperand(1)).getValueType();
     SDValue CfaArg = DAG.getSExtOrTrunc(getValue(I.getOperand(1)), dl,
                                         TLI.getPointerTy());
-
     SDValue Offset = DAG.getNode(ISD::ADD, dl,
                                  TLI.getPointerTy(),
                                  DAG.getNode(ISD::FRAME_TO_ARGS_OFFSET, dl,
                                              TLI.getPointerTy()),
                                  CfaArg);
-    setValue(&I, DAG.getNode(ISD::ADD, dl,
+    SDValue FA = DAG.getNode(ISD::FRAMEADDR, dl,
                              TLI.getPointerTy(),
-                             DAG.getNode(ISD::FRAMEADDR, dl,
-                                         TLI.getPointerTy(),
-                                         DAG.getConstant(0,
-                                                         TLI.getPointerTy())),
-                             Offset));
+                             DAG.getConstant(0, TLI.getPointerTy()));
+    Res = DAG.getNode(ISD::ADD, dl, TLI.getPointerTy(),
+                      FA, Offset);
+    setValue(&I, Res);
+    if (DisableScheduling) {
+      DAG.AssignOrdering(CfaArg.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(Offset.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(FA.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    }
     return 0;
   }
   case Intrinsic::convertff:
@@ -3784,36 +4521,50 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     case Intrinsic::convertuu:  Code = ISD::CVT_UU; break;
     }
     EVT DestVT = TLI.getValueType(I.getType());
-    Value* Op1 = I.getOperand(1);
-    setValue(&I, DAG.getConvertRndSat(DestVT, getCurDebugLoc(), getValue(Op1),
-                                DAG.getValueType(DestVT),
-                                DAG.getValueType(getValue(Op1).getValueType()),
-                                getValue(I.getOperand(2)),
-                                getValue(I.getOperand(3)),
-                                Code));
+    Value *Op1 = I.getOperand(1);
+    Res = DAG.getConvertRndSat(DestVT, getCurDebugLoc(), getValue(Op1),
+                               DAG.getValueType(DestVT),
+                               DAG.getValueType(getValue(Op1).getValueType()),
+                               getValue(I.getOperand(2)),
+                               getValue(I.getOperand(3)),
+                               Code);
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
-
   case Intrinsic::sqrt:
-    setValue(&I, DAG.getNode(ISD::FSQRT, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
+    Res = DAG.getNode(ISD::FSQRT, dl,
+                      getValue(I.getOperand(1)).getValueType(),
+                      getValue(I.getOperand(1)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::powi:
-    setValue(&I, DAG.getNode(ISD::FPOWI, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1)),
-                             getValue(I.getOperand(2))));
+    Res = DAG.getNode(ISD::FPOWI, dl,
+                      getValue(I.getOperand(1)).getValueType(),
+                      getValue(I.getOperand(1)),
+                      getValue(I.getOperand(2)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::sin:
-    setValue(&I, DAG.getNode(ISD::FSIN, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
+    Res = DAG.getNode(ISD::FSIN, dl,
+                      getValue(I.getOperand(1)).getValueType(),
+                      getValue(I.getOperand(1)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::cos:
-    setValue(&I, DAG.getNode(ISD::FCOS, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
+    Res = DAG.getNode(ISD::FCOS, dl,
+                      getValue(I.getOperand(1)).getValueType(),
+                      getValue(I.getOperand(1)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::log:
     visitLog(I);
@@ -3835,55 +4586,74 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     return 0;
   case Intrinsic::pcmarker: {
     SDValue Tmp = getValue(I.getOperand(1));
-    DAG.setRoot(DAG.getNode(ISD::PCMARKER, dl, MVT::Other, getRoot(), Tmp));
+    Res = DAG.getNode(ISD::PCMARKER, dl, MVT::Other, getRoot(), Tmp);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::readcyclecounter: {
     SDValue Op = getRoot();
-    SDValue Tmp = DAG.getNode(ISD::READCYCLECOUNTER, dl,
-                              DAG.getVTList(MVT::i64, MVT::Other),
-                              &Op, 1);
-    setValue(&I, Tmp);
-    DAG.setRoot(Tmp.getValue(1));
+    Res = DAG.getNode(ISD::READCYCLECOUNTER, dl,
+                      DAG.getVTList(MVT::i64, MVT::Other),
+                      &Op, 1);
+    setValue(&I, Res);
+    DAG.setRoot(Res.getValue(1));
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::bswap:
-    setValue(&I, DAG.getNode(ISD::BSWAP, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
+    Res = DAG.getNode(ISD::BSWAP, dl,
+                      getValue(I.getOperand(1)).getValueType(),
+                      getValue(I.getOperand(1)));
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::cttz: {
     SDValue Arg = getValue(I.getOperand(1));
     EVT Ty = Arg.getValueType();
-    SDValue result = DAG.getNode(ISD::CTTZ, dl, Ty, Arg);
-    setValue(&I, result);
+    Res = DAG.getNode(ISD::CTTZ, dl, Ty, Arg);
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::ctlz: {
     SDValue Arg = getValue(I.getOperand(1));
     EVT Ty = Arg.getValueType();
-    SDValue result = DAG.getNode(ISD::CTLZ, dl, Ty, Arg);
-    setValue(&I, result);
+    Res = DAG.getNode(ISD::CTLZ, dl, Ty, Arg);
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::ctpop: {
     SDValue Arg = getValue(I.getOperand(1));
     EVT Ty = Arg.getValueType();
-    SDValue result = DAG.getNode(ISD::CTPOP, dl, Ty, Arg);
-    setValue(&I, result);
+    Res = DAG.getNode(ISD::CTPOP, dl, Ty, Arg);
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::stacksave: {
     SDValue Op = getRoot();
-    SDValue Tmp = DAG.getNode(ISD::STACKSAVE, dl,
-              DAG.getVTList(TLI.getPointerTy(), MVT::Other), &Op, 1);
-    setValue(&I, Tmp);
-    DAG.setRoot(Tmp.getValue(1));
+    Res = DAG.getNode(ISD::STACKSAVE, dl,
+                      DAG.getVTList(TLI.getPointerTy(), MVT::Other), &Op, 1);
+    setValue(&I, Res);
+    DAG.setRoot(Res.getValue(1));
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::stackrestore: {
-    SDValue Tmp = getValue(I.getOperand(1));
-    DAG.setRoot(DAG.getNode(ISD::STACKRESTORE, dl, MVT::Other, getRoot(), Tmp));
+    Res = getValue(I.getOperand(1));
+    Res = DAG.getNode(ISD::STACKRESTORE, dl, MVT::Other, getRoot(), Res);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::stackprotector: {
@@ -3901,11 +4671,13 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue FIN = DAG.getFrameIndex(FI, PtrTy);
 
     // Store the stack protector onto the stack.
-    SDValue Result = DAG.getStore(getRoot(), getCurDebugLoc(), Src, FIN,
-                                  PseudoSourceValue::getFixedStack(FI),
-                                  0, true);
-    setValue(&I, Result);
-    DAG.setRoot(Result);
+    Res = DAG.getStore(getRoot(), getCurDebugLoc(), Src, FIN,
+                       PseudoSourceValue::getFixedStack(FI),
+                       0, true);
+    setValue(&I, Res);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::objectsize: {
@@ -3917,10 +4689,14 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue Arg = getValue(I.getOperand(0));
     EVT Ty = Arg.getValueType();
 
-    if (CI->getZExtValue() < 2)
-      setValue(&I, DAG.getConstant(-1ULL, Ty));
+    if (CI->getZExtValue() == 0)
+      Res = DAG.getConstant(-1ULL, Ty);
     else
-      setValue(&I, DAG.getConstant(0, Ty));
+      Res = DAG.getConstant(0, Ty);
+
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::var_annotation:
@@ -3938,15 +4714,16 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     Ops[4] = DAG.getSrcValue(I.getOperand(1));
     Ops[5] = DAG.getSrcValue(F);
 
-    SDValue Tmp = DAG.getNode(ISD::TRAMPOLINE, dl,
-                              DAG.getVTList(TLI.getPointerTy(), MVT::Other),
-                              Ops, 6);
+    Res = DAG.getNode(ISD::TRAMPOLINE, dl,
+                      DAG.getVTList(TLI.getPointerTy(), MVT::Other),
+                      Ops, 6);
 
-    setValue(&I, Tmp);
-    DAG.setRoot(Tmp.getValue(1));
+    setValue(&I, Res);
+    DAG.setRoot(Res.getValue(1));
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
-
   case Intrinsic::gcroot:
     if (GFI) {
       Value *Alloca = I.getOperand(1);
@@ -3956,22 +4733,22 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
       GFI->addStackRoot(FI->getIndex(), TypeMap);
     }
     return 0;
-
   case Intrinsic::gcread:
   case Intrinsic::gcwrite:
     llvm_unreachable("GC failed to lower gcread/gcwrite intrinsics!");
     return 0;
-
-  case Intrinsic::flt_rounds: {
-    setValue(&I, DAG.getNode(ISD::FLT_ROUNDS_, dl, MVT::i32));
+  case Intrinsic::flt_rounds:
+    Res = DAG.getNode(ISD::FLT_ROUNDS_, dl, MVT::i32);
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
-  }
-
-  case Intrinsic::trap: {
-    DAG.setRoot(DAG.getNode(ISD::TRAP, dl,MVT::Other, getRoot()));
+  case Intrinsic::trap:
+    Res = DAG.getNode(ISD::TRAP, dl, MVT::Other, getRoot());
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
-  }
-
   case Intrinsic::uadd_with_overflow:
     return implVisitAluOverflow(I, ISD::UADDO);
   case Intrinsic::sadd_with_overflow:
@@ -3991,7 +4768,10 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     Ops[1] = getValue(I.getOperand(1));
     Ops[2] = getValue(I.getOperand(2));
     Ops[3] = getValue(I.getOperand(3));
-    DAG.setRoot(DAG.getNode(ISD::PREFETCH, dl, MVT::Other, &Ops[0], 4));
+    Res = DAG.getNode(ISD::PREFETCH, dl, MVT::Other, &Ops[0], 4);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
 
@@ -4001,7 +4781,10 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     for (int x = 1; x < 6; ++x)
       Ops[x] = getValue(I.getOperand(x));
 
-    DAG.setRoot(DAG.getNode(ISD::MEMBARRIER, dl, MVT::Other, &Ops[0], 6));
+    Res = DAG.getNode(ISD::MEMBARRIER, dl, MVT::Other, &Ops[0], 6);
+    DAG.setRoot(Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::atomic_cmp_swap: {
@@ -4016,6 +4799,8 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
                     I.getOperand(1));
     setValue(&I, L);
     DAG.setRoot(L.getValue(1));
+    if (DisableScheduling)
+      DAG.AssignOrdering(L.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::atomic_load_add:
@@ -4044,7 +4829,10 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   case Intrinsic::invariant_start:
   case Intrinsic::lifetime_start:
     // Discard region information.
-    setValue(&I, DAG.getUNDEF(TLI.getPointerTy()));
+    Res = DAG.getUNDEF(TLI.getPointerTy());
+    setValue(&I, Res);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   case Intrinsic::invariant_end:
   case Intrinsic::lifetime_end:
@@ -4144,8 +4932,7 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
   SmallVector<ISD::ArgFlagsTy, 4> OutsFlags;
   SmallVector<uint64_t, 4> Offsets;
   getReturnInfo(RetTy, CS.getAttributes().getRetAttributes(), 
-    OutVTs, OutsFlags, TLI, &Offsets);
-  
+                OutVTs, OutsFlags, TLI, &Offsets);
 
   bool CanLowerReturn = TLI.CanLowerReturn(CS.getCallingConv(), 
                         FTy->isVarArg(), OutVTs, OutsFlags, DAG);
@@ -4219,14 +5006,16 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
                     CS.getCallingConv(),
                     isTailCall,
                     !CS.getInstruction()->use_empty(),
-                    Callee, Args, DAG, getCurDebugLoc());
+                    Callee, Args, DAG, getCurDebugLoc(), SDNodeOrder);
   assert((isTailCall || Result.second.getNode()) &&
          "Non-null chain expected with non-tail call!");
   assert((Result.second.getNode() || !Result.first.getNode()) &&
          "Null value expected with tail call!");
-  if (Result.first.getNode())
+  if (Result.first.getNode()) {
     setValue(CS.getInstruction(), Result.first);
-  else if (!CanLowerReturn && Result.second.getNode()) {
+    if (DisableScheduling)
+      DAG.AssignOrdering(Result.first.getNode(), SDNodeOrder);
+  } else if (!CanLowerReturn && Result.second.getNode()) {
     // The instruction result is the result of loading from the
     // hidden sret parameter.
     SmallVector<EVT, 1> PVTs;
@@ -4240,27 +5029,40 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
     SmallVector<SDValue, 4> Chains(NumValues);
 
     for (unsigned i = 0; i < NumValues; ++i) {
+      SDValue Add = DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT,
+                                DemoteStackSlot,
+                                DAG.getConstant(Offsets[i], PtrVT));
       SDValue L = DAG.getLoad(OutVTs[i], getCurDebugLoc(), Result.second,
-        DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT, DemoteStackSlot,
-        DAG.getConstant(Offsets[i], PtrVT)),
-        NULL, Offsets[i], false, 1);
+                              Add, NULL, Offsets[i], false, 1);
       Values[i] = L;
       Chains[i] = L.getValue(1);
     }
+
     SDValue Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
                                 MVT::Other, &Chains[0], NumValues);
     PendingLoads.push_back(Chain);
 
-    setValue(CS.getInstruction(), DAG.getNode(ISD::MERGE_VALUES,
-             getCurDebugLoc(), DAG.getVTList(&OutVTs[0], NumValues),
-             &Values[0], NumValues));
+    SDValue MV = DAG.getNode(ISD::MERGE_VALUES,
+                             getCurDebugLoc(),
+                             DAG.getVTList(&OutVTs[0], NumValues),
+                             &Values[0], NumValues);
+    setValue(CS.getInstruction(), MV);
+
+    if (DisableScheduling) {
+      DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
+      DAG.AssignOrdering(MV.getNode(), SDNodeOrder);
+    }
   }
-  // As a special case, a null chain means that a tail call has
-  // been emitted and the DAG root is already updated.
-  if (Result.second.getNode())
+
+  // As a special case, a null chain means that a tail call has been emitted and
+  // the DAG root is already updated.
+  if (Result.second.getNode()) {
     DAG.setRoot(Result.second);
-  else
+    if (DisableScheduling)
+      DAG.AssignOrdering(Result.second.getNode(), SDNodeOrder);
+  } else {
     HasTailCall = true;
+  }
 
   if (LandingPad && MMI) {
     // Insert a label at the end of the invoke call to mark the try range.  This
@@ -4274,6 +5076,140 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
   }
 }
 
+/// IsOnlyUsedInZeroEqualityComparison - Return true if it only matters that the
+/// value is equal or not-equal to zero.
+static bool IsOnlyUsedInZeroEqualityComparison(Value *V) {
+  for (Value::use_iterator UI = V->use_begin(), E = V->use_end();
+       UI != E; ++UI) {
+    if (ICmpInst *IC = dyn_cast<ICmpInst>(*UI))
+      if (IC->isEquality())
+        if (Constant *C = dyn_cast<Constant>(IC->getOperand(1)))
+          if (C->isNullValue())
+            continue;
+    // Unknown instruction.
+    return false;
+  }
+  return true;
+}
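`IsOnlyUsedInZeroEqualityComparison` is what licenses the transformation that follows: when only `memcmp(...) == 0` or `!= 0` is ever observed, the sign and magnitude of memcmp's return value are irrelevant, so a single wide integer compare is equivalent. A hedged sketch of the 4-byte case outside SelectionDAG (`memcmp4_ne` is an illustrative name, not part of the patch):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Equivalent to (memcmp(a, b, 4) != 0) -- valid only because callers never
// inspect memcmp's sign, just whether it is zero.
static int memcmp4_ne(const void *a, const void *b) {
  uint32_t x, y;
  std::memcpy(&x, a, 4);  // memcpy expresses an unaligned-safe wide load
  std::memcpy(&y, b, 4);
  return x != y;
}
```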
+
+static SDValue getMemCmpLoad(Value *PtrVal, MVT LoadVT, const Type *LoadTy,
+                             SelectionDAGBuilder &Builder) {
+  
+  // Check to see if this load can be trivially constant folded, e.g. if the
+  // input is from a string literal.
+  if (Constant *LoadInput = dyn_cast<Constant>(PtrVal)) {
+    // Cast pointer to the type we really want to load.
+    LoadInput = ConstantExpr::getBitCast(LoadInput,
+                                         PointerType::getUnqual(LoadTy));
+    
+    if (Constant *LoadCst = ConstantFoldLoadFromConstPtr(LoadInput, Builder.TD))
+      return Builder.getValue(LoadCst);
+  }
+  
+  // Otherwise, we have to emit the load.  If the pointer is to unfoldable but
+  // still constant memory, the input chain can be the entry node.
+  SDValue Root;
+  bool ConstantMemory = false;
+  
+  // Do not serialize (non-volatile) loads of constant memory with anything.
+  if (Builder.AA->pointsToConstantMemory(PtrVal)) {
+    Root = Builder.DAG.getEntryNode();
+    ConstantMemory = true;
+  } else {
+    // Do not serialize non-volatile loads against each other.
+    Root = Builder.DAG.getRoot();
+  }
+  
+  SDValue Ptr = Builder.getValue(PtrVal);
+  SDValue LoadVal = Builder.DAG.getLoad(LoadVT, Builder.getCurDebugLoc(), Root,
+                                        Ptr, PtrVal /*SrcValue*/, 0/*SVOffset*/,
+                                        false /*volatile*/, 1 /* align=1 */);
+  
+  if (!ConstantMemory)
+    Builder.PendingLoads.push_back(LoadVal.getValue(1));
+  return LoadVal;
+}
+
+
+/// visitMemCmpCall - See if we can lower a call to memcmp in an optimized form.
+/// If so, return true and lower it, otherwise return false and it will be
+/// lowered like a normal call.
+bool SelectionDAGBuilder::visitMemCmpCall(CallInst &I) {
+  // Verify that the prototype makes sense.  int memcmp(void*,void*,size_t)
+  if (I.getNumOperands() != 4)
+    return false;
+  
+  Value *LHS = I.getOperand(1), *RHS = I.getOperand(2);
+  if (!isa<PointerType>(LHS->getType()) || !isa<PointerType>(RHS->getType()) ||
+      !isa<IntegerType>(I.getOperand(3)->getType()) ||
+      !isa<IntegerType>(I.getType()))
+    return false;
+  
+  ConstantInt *Size = dyn_cast<ConstantInt>(I.getOperand(3));
+  
+  // memcmp(S1,S2,2) != 0 -> (*(short*)LHS != *(short*)RHS)  != 0
+  // memcmp(S1,S2,4) != 0 -> (*(int*)LHS != *(int*)RHS)  != 0
+  if (Size && IsOnlyUsedInZeroEqualityComparison(&I)) {
+    bool ActuallyDoIt = true;
+    MVT LoadVT;
+    const Type *LoadTy;
+    switch (Size->getZExtValue()) {
+    default:
+      LoadVT = MVT::Other;
+      LoadTy = 0;
+      ActuallyDoIt = false;
+      break;
+    case 2:
+      LoadVT = MVT::i16;
+      LoadTy = Type::getInt16Ty(Size->getContext());
+      break;
+    case 4:
+      LoadVT = MVT::i32;
+      LoadTy = Type::getInt32Ty(Size->getContext()); 
+      break;
+    case 8:
+      LoadVT = MVT::i64;
+      LoadTy = Type::getInt64Ty(Size->getContext()); 
+      break;
+        /*
+    case 16:
+      LoadVT = MVT::v4i32;
+      LoadTy = Type::getInt32Ty(Size->getContext()); 
+      LoadTy = VectorType::get(LoadTy, 4);
+      break;
+         */
+    }
+    
+    // This turns into unaligned loads.  We only do this if the target natively
+    // supports the MVT we'll be loading or if it is small enough (<= 4) that
+    // we'll only produce a small number of byte loads.
+    
+    // Require that we can find a legal MVT, and only do this if the target
+    // supports unaligned loads of that type.  Expanding into byte loads would
+    // bloat the code.
+    if (ActuallyDoIt && Size->getZExtValue() > 4) {
+      // TODO: Handle 5 byte compare as 4-byte + 1 byte.
+      // TODO: Handle 8 byte compare on x86-32 as two 32-bit loads.
+      if (!TLI.isTypeLegal(LoadVT) ||
+          !TLI.allowsUnalignedMemoryAccesses(LoadVT))
+        ActuallyDoIt = false;
+    }
+    
+    if (ActuallyDoIt) {
+      SDValue LHSVal = getMemCmpLoad(LHS, LoadVT, LoadTy, *this);
+      SDValue RHSVal = getMemCmpLoad(RHS, LoadVT, LoadTy, *this);
+      
+      SDValue Res = DAG.getSetCC(getCurDebugLoc(), MVT::i1, LHSVal, RHSVal,
+                                 ISD::SETNE);
+      EVT CallVT = TLI.getValueType(I.getType(), true);
+      setValue(&I, DAG.getZExtOrTrunc(Res, getCurDebugLoc(), CallVT));
+      return true;
+    }
+  }
+
+  return false;
+}
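The switch in `visitMemCmpCall` maps sizes 2, 4, and 8 to i16/i32/i64 loads and bails out on anything else. Its effect, sketched outside SelectionDAG (the helper name `memcmp_ne_fast` is assumed; the real lowering additionally checks type legality and unaligned-load support via TargetLowering):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Only sizes with a matching integer type take the wide-load fast path;
// everything else falls back to a plain memcmp libcall.
static bool memcmp_ne_fast(const void *a, const void *b, std::size_t n) {
  switch (n) {
  case 2: {
    uint16_t x, y;
    std::memcpy(&x, a, 2); std::memcpy(&y, b, 2);
    return x != y;
  }
  case 4: {
    uint32_t x, y;
    std::memcpy(&x, a, 4); std::memcpy(&y, b, 4);
    return x != y;
  }
  case 8: {
    uint64_t x, y;
    std::memcpy(&x, a, 8); std::memcpy(&y, b, 8);
    return x != y;
  }
  default:
    return std::memcmp(a, b, n) != 0;  // fallback: normal libcall
  }
}
```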
+
 
 void SelectionDAGBuilder::visitCall(CallInst &I) {
   const char *RenameFn = 0;
@@ -4348,6 +5284,9 @@ void SelectionDAGBuilder::visitCall(CallInst &I) {
                                    Tmp.getValueType(), Tmp));
           return;
         }
+      } else if (Name == "memcmp") {
+        if (visitMemCmpCall(I))
+          return;
       }
     }
   } else if (isa<InlineAsm>(I.getOperand(0))) {
@@ -4361,21 +5300,19 @@ void SelectionDAGBuilder::visitCall(CallInst &I) {
   else
     Callee = DAG.getExternalSymbol(RenameFn, TLI.getPointerTy());
 
-  // Check if we can potentially perform a tail call. More detailed
-  // checking is be done within LowerCallTo, after more information
-  // about the call is known.
+  // Check if we can potentially perform a tail call. More detailed checking
+  // will be done within LowerCallTo, after more information about the call is
+  // known.
   bool isTailCall = PerformTailCallOpt && I.isTailCall();
 
   LowerCallTo(&I, Callee, isTailCall);
 }
 
-
 /// getCopyFromRegs - Emit a series of CopyFromReg nodes that copies from
 /// this value and returns the result as a ValueVT value.  This uses
 /// Chain/Flag as the input and updates them for the output Chain/Flag.
 /// If the Flag pointer is NULL, no flag is used.
 SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
-                                      SDValue &Chain,
+                                      unsigned Order, SDValue &Chain,
                                       SDValue *Flag) const {
   // Assemble the legal parts into the final values.
   SmallVector<SDValue, 4> Values(ValueVTs.size());
@@ -4389,14 +5326,18 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
     Parts.resize(NumRegs);
     for (unsigned i = 0; i != NumRegs; ++i) {
       SDValue P;
-      if (Flag == 0)
+      if (Flag == 0) {
         P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT);
-      else {
+      } else {
         P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT, *Flag);
         *Flag = P.getValue(2);
       }
+
       Chain = P.getValue(1);
 
+      if (DisableScheduling)
+        DAG.AssignOrdering(P.getNode(), Order);
+
       // If the source register was virtual and if we know something about it,
       // add an assert node.
       if (TargetRegisterInfo::isVirtualRegister(Regs[Part+i]) &&
@@ -4435,6 +5376,8 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
             P = DAG.getNode(isSExt ? ISD::AssertSext : ISD::AssertZext, dl,
                             RegisterVT, P, DAG.getValueType(FromVT));
 
+            if (DisableScheduling)
+              DAG.AssignOrdering(P.getNode(), Order);
           }
         }
       }
@@ -4442,15 +5385,20 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
       Parts[i] = P;
     }
 
-    Values[Value] = getCopyFromParts(DAG, dl, Parts.begin(),
+    Values[Value] = getCopyFromParts(DAG, dl, Order, Parts.begin(),
                                      NumRegs, RegisterVT, ValueVT);
+    if (DisableScheduling)
+      DAG.AssignOrdering(Values[Value].getNode(), Order);
     Part += NumRegs;
     Parts.clear();
   }
 
-  return DAG.getNode(ISD::MERGE_VALUES, dl,
-                     DAG.getVTList(&ValueVTs[0], ValueVTs.size()),
-                     &Values[0], ValueVTs.size());
+  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
+                            DAG.getVTList(&ValueVTs[0], ValueVTs.size()),
+                            &Values[0], ValueVTs.size());
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), Order);
+  return Res;
 }
 
 /// getCopyToRegs - Emit a series of CopyToReg nodes that copies the
@@ -4458,7 +5406,8 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
 /// Chain/Flag as the input and updates them for the output Chain/Flag.
 /// If the Flag pointer is NULL, no flag is used.
 void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
-                                 SDValue &Chain, SDValue *Flag) const {
+                                 unsigned Order, SDValue &Chain,
+                                 SDValue *Flag) const {
   // Get the list of the values's legal parts.
   unsigned NumRegs = Regs.size();
   SmallVector<SDValue, 8> Parts(NumRegs);
@@ -4467,7 +5416,8 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
     unsigned NumParts = TLI->getNumRegisters(*DAG.getContext(), ValueVT);
     EVT RegisterVT = RegVTs[Value];
 
-    getCopyToParts(DAG, dl, Val.getValue(Val.getResNo() + Value),
+    getCopyToParts(DAG, dl, Order,
+                   Val.getValue(Val.getResNo() + Value),
                    &Parts[Part], NumParts, RegisterVT);
     Part += NumParts;
   }
@@ -4476,13 +5426,17 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
   SmallVector<SDValue, 8> Chains(NumRegs);
   for (unsigned i = 0; i != NumRegs; ++i) {
     SDValue Part;
-    if (Flag == 0)
+    if (Flag == 0) {
       Part = DAG.getCopyToReg(Chain, dl, Regs[i], Parts[i]);
-    else {
+    } else {
       Part = DAG.getCopyToReg(Chain, dl, Regs[i], Parts[i], *Flag);
       *Flag = Part.getValue(1);
     }
+
     Chains[i] = Part.getValue(0);
+
+    if (DisableScheduling)
+      DAG.AssignOrdering(Part.getNode(), Order);
   }
 
   if (NumRegs == 1 || Flag)
@@ -4499,6 +5453,9 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
     Chain = Chains[NumRegs-1];
   else
     Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, &Chains[0], NumRegs);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Chain.getNode(), Order);
 }
 
 /// AddInlineAsmOperands - Add this value to the specified inlineasm node
@@ -4506,20 +5463,28 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
 /// values added into it.
 void RegsForValue::AddInlineAsmOperands(unsigned Code,
                                         bool HasMatching,unsigned MatchingIdx,
-                                        SelectionDAG &DAG,
+                                        SelectionDAG &DAG, unsigned Order,
                                         std::vector<SDValue> &Ops) const {
-  EVT IntPtrTy = DAG.getTargetLoweringInfo().getPointerTy();
   assert(Regs.size() < (1 << 13) && "Too many inline asm outputs!");
   unsigned Flag = Code | (Regs.size() << 3);
   if (HasMatching)
     Flag |= 0x80000000 | (MatchingIdx << 16);
-  Ops.push_back(DAG.getTargetConstant(Flag, IntPtrTy));
+  SDValue Res = DAG.getTargetConstant(Flag, MVT::i32);
+  Ops.push_back(Res);
+
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), Order);
+
   for (unsigned Value = 0, Reg = 0, e = ValueVTs.size(); Value != e; ++Value) {
     unsigned NumRegs = TLI->getNumRegisters(*DAG.getContext(), ValueVTs[Value]);
     EVT RegisterVT = RegVTs[Value];
     for (unsigned i = 0; i != NumRegs; ++i) {
       assert(Reg < Regs.size() && "Mismatch in # registers expected");
-      Ops.push_back(DAG.getRegister(Regs[Reg++], RegisterVT));
+      SDValue Res = DAG.getRegister(Regs[Reg++], RegisterVT);
+      Ops.push_back(Res);
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(Res.getNode(), Order);
     }
   }
 }
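The flag word built in the AddInlineAsmOperands hunk above packs three fields into one 32-bit target constant. As a standalone sketch (hypothetical helper names, not the real LLVM API), the encoding the patch relies on looks like this:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helpers mirroring the packed operand-flag word above: bits
// 0..2 hold the operand kind code, bits 3..15 hold the register count (hence
// the Regs.size() < (1 << 13) assertion), and, for a matching constraint,
// bit 31 is set and bits 16..30 hold the matched operand index.
inline uint32_t encodeAsmOperandFlag(uint32_t Code, uint32_t NumRegs,
                                     bool HasMatching, uint32_t MatchingIdx) {
  uint32_t Flag = Code | (NumRegs << 3);
  if (HasMatching)
    Flag |= 0x80000000u | (MatchingIdx << 16);
  return Flag;
}

inline uint32_t asmOperandKind(uint32_t Flag)       { return Flag & 7; }
inline uint32_t asmOperandNumRegs(uint32_t Flag)    { return (Flag >> 3) & 0x1FFF; }
inline uint32_t asmOperandMatchedIdx(uint32_t Flag) { return (Flag >> 16) & 0x7FFF; }
```

This also shows why the patch can switch the constant's type from IntPtrTy to MVT::i32: the encoding only ever uses 32 bits, regardless of the target pointer width.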
@@ -4623,8 +5588,12 @@ public:
 
     // If this is an indirect operand, the operand is a pointer to the
     // accessed type.
-    if (isIndirect)
-      OpTy = cast<PointerType>(OpTy)->getElementType();
+    if (isIndirect) {
+      const llvm::PointerType *PtrTy = dyn_cast<PointerType>(OpTy);
+      if (!PtrTy)
+        llvm_report_error("Indirect operand for inline asm not a pointer!");
+      OpTy = PtrTy->getElementType();
+    }
 
     // If OpTy is not a single value, it may be a struct/union that we
     // can tile with integers.
@@ -4663,8 +5632,8 @@ private:
 
 /// GetRegistersForValue - Assign registers (virtual or physical) for the
 /// specified operand.  We prefer to assign virtual registers, to allow the
-/// register allocator handle the assignment process.  However, if the asm uses
-/// features that we can't model on machineinstrs, we have SDISel do the
+/// register allocator to handle the assignment process.  However, if the asm
+/// uses features that we can't model on machineinstrs, we have SDISel do the
 /// allocation.  This produces generally horrible, but correct, code.
 ///
 ///   OpInfo describes the operand.
@@ -4734,6 +5703,9 @@ GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
                                          RegVT, OpInfo.CallOperand);
         OpInfo.ConstraintVT = RegVT;
       }
+
+      if (DisableScheduling)
+        DAG.AssignOrdering(OpInfo.CallOperand.getNode(), SDNodeOrder);
     }
 
     NumRegs = TLI.getNumRegisters(Context, OpInfo.ConstraintVT);
@@ -4770,6 +5742,7 @@ GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
         Regs.push_back(*I);
       }
     }
+
     OpInfo.AssignedRegs = RegsForValue(TLI, Regs, RegVT, ValueVT);
     const TargetRegisterInfo *TRI = DAG.getTarget().getRegisterInfo();
     OpInfo.MarkAllocatedRegs(isOutReg, isInReg, OutputRegs, InputRegs, *TRI);
@@ -5004,6 +5977,7 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
 
       // There is no longer a Value* corresponding to this operand.
       OpInfo.CallOperandVal = 0;
+
       // It is now an indirect operand.
       OpInfo.isIndirect = true;
     }
@@ -5013,8 +5987,8 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
     if (OpInfo.ConstraintType == TargetLowering::C_Register)
       GetRegistersForValue(OpInfo, OutputRegs, InputRegs);
   }
-  ConstraintInfos.clear();
 
+  ConstraintInfos.clear();
 
   // Second pass - Loop over all of the operands, assigning virtual or physregs
   // to register class operands.
@@ -5088,7 +6062,8 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
                                                2 /* REGDEF */ ,
                                                false,
                                                0,
-                                               DAG, AsmNodeOperands);
+                                               DAG, SDNodeOrder,
+                                               AsmNodeOperands);
       break;
     }
     case InlineAsm::isInput: {
@@ -5135,10 +6110,10 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
 
           // Use the produced MatchedRegs object to
           MatchedRegs.getCopyToRegs(InOperandVal, DAG, getCurDebugLoc(),
-                                    Chain, &Flag);
+                                    SDNodeOrder, Chain, &Flag);
           MatchedRegs.AddInlineAsmOperands(1 /*REGUSE*/,
                                            true, OpInfo.getMatchedOperand(),
-                                           DAG, AsmNodeOperands);
+                                           DAG, SDNodeOrder, AsmNodeOperands);
           break;
         } else {
           assert(((OpFlag & 7) == 4) && "Unknown matching constraint!");
@@ -5198,10 +6173,11 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
       }
 
       OpInfo.AssignedRegs.getCopyToRegs(InOperandVal, DAG, getCurDebugLoc(),
-                                        Chain, &Flag);
+                                        SDNodeOrder, Chain, &Flag);
 
       OpInfo.AssignedRegs.AddInlineAsmOperands(1/*REGUSE*/, false, 0,
-                                               DAG, AsmNodeOperands);
+                                               DAG, SDNodeOrder,
+                                               AsmNodeOperands);
       break;
     }
     case InlineAsm::isClobber: {
@@ -5209,7 +6185,8 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
       // allocator is aware that the physreg got clobbered.
       if (!OpInfo.AssignedRegs.Regs.empty())
         OpInfo.AssignedRegs.AddInlineAsmOperands(6 /* EARLYCLOBBER REGDEF */,
-                                                 false, 0, DAG,AsmNodeOperands);
+                                                 false, 0, DAG, SDNodeOrder,
+                                                 AsmNodeOperands);
       break;
     }
     }
@@ -5228,7 +6205,7 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
   // and set it as the value of the call.
   if (!RetValRegs.Regs.empty()) {
     SDValue Val = RetValRegs.getCopyFromRegs(DAG, getCurDebugLoc(),
-                                             Chain, &Flag);
+                                             SDNodeOrder, Chain, &Flag);
 
     // FIXME: Why don't we do this for inline asms with MRVs?
     if (CS.getType()->isSingleValueType() && CS.getType()->isSized()) {
@@ -5268,21 +6245,25 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
     RegsForValue &OutRegs = IndirectStoresToEmit[i].first;
     Value *Ptr = IndirectStoresToEmit[i].second;
     SDValue OutVal = OutRegs.getCopyFromRegs(DAG, getCurDebugLoc(),
-                                             Chain, &Flag);
+                                             SDNodeOrder, Chain, &Flag);
     StoresToEmit.push_back(std::make_pair(OutVal, Ptr));
 
   }
 
   // Emit the non-flagged stores from the physregs.
   SmallVector<SDValue, 8> OutChains;
-  for (unsigned i = 0, e = StoresToEmit.size(); i != e; ++i)
-    OutChains.push_back(DAG.getStore(Chain, getCurDebugLoc(),
-                                    StoresToEmit[i].first,
-                                    getValue(StoresToEmit[i].second),
-                                    StoresToEmit[i].second, 0));
+  for (unsigned i = 0, e = StoresToEmit.size(); i != e; ++i) {
+    SDValue Val = DAG.getStore(Chain, getCurDebugLoc(),
+                               StoresToEmit[i].first,
+                               getValue(StoresToEmit[i].second),
+                               StoresToEmit[i].second, 0);
+    OutChains.push_back(Val);
+  }
+
   if (!OutChains.empty())
     Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
                         &OutChains[0], OutChains.size());
+
   DAG.setRoot(Chain);
 }
 
@@ -5328,8 +6309,8 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
                             CallingConv::ID CallConv, bool isTailCall,
                             bool isReturnValueUsed,
                             SDValue Callee,
-                            ArgListTy &Args, SelectionDAG &DAG, DebugLoc dl) {
-
+                            ArgListTy &Args, SelectionDAG &DAG, DebugLoc dl,
+                            unsigned Order) {
   assert((!isTailCall || PerformTailCallOpt) &&
          "isTailCall set when tail-call optimizations are disabled!");
 
@@ -5383,7 +6364,8 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
       else if (Args[i].isZExt)
         ExtendKind = ISD::ZERO_EXTEND;
 
-      getCopyToParts(DAG, dl, Op, &Parts[0], NumParts, PartVT, ExtendKind);
+      getCopyToParts(DAG, dl, Order, Op, &Parts[0], NumParts,
+                     PartVT, ExtendKind);
 
       for (unsigned j = 0; j != NumParts; ++j) {
         // if it isn't first piece, alignment must be 1
@@ -5444,6 +6426,9 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
                  "LowerCall emitted a value with the wrong type!");
         });
 
+  if (DisableScheduling)
+    DAG.AssignOrdering(Chain.getNode(), Order);
+
   // For a tail call, the return value is merely live-out and there aren't
   // any nodes in the DAG representing it. Return a special value to
   // indicate that a tail call has been emitted and no more Instructions
@@ -5468,9 +6453,11 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
     unsigned NumRegs = getNumRegisters(RetTy->getContext(), VT);
 
     SDValue ReturnValue =
-      getCopyFromParts(DAG, dl, &InVals[CurReg], NumRegs, RegisterVT, VT,
-                       AssertOp);
+      getCopyFromParts(DAG, dl, Order, &InVals[CurReg], NumRegs,
+                       RegisterVT, VT, AssertOp);
     ReturnValues.push_back(ReturnValue);
+    if (DisableScheduling)
+      DAG.AssignOrdering(ReturnValue.getNode(), Order);
     CurReg += NumRegs;
   }
 
@@ -5483,7 +6470,8 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
   SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
                             DAG.getVTList(&RetTys[0], RetTys.size()),
                             &ReturnValues[0], ReturnValues.size());
-
+  if (DisableScheduling)
+    DAG.AssignOrdering(Res.getNode(), Order);
   return std::make_pair(Res, Chain);
 }
 
@@ -5500,7 +6488,6 @@ SDValue TargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   return SDValue();
 }
 
-
 void SelectionDAGBuilder::CopyValueToVirtualRegister(Value *V, unsigned Reg) {
   SDValue Op = getValue(V);
   assert((Op.getOpcode() != ISD::CopyFromReg ||
@@ -5510,7 +6497,7 @@ void SelectionDAGBuilder::CopyValueToVirtualRegister(Value *V, unsigned Reg) {
 
   RegsForValue RFV(V->getContext(), TLI, Reg, V->getType());
   SDValue Chain = DAG.getEntryNode();
-  RFV.getCopyToRegs(Op, DAG, getCurDebugLoc(), Chain, 0);
+  RFV.getCopyToRegs(Op, DAG, getCurDebugLoc(), SDNodeOrder, Chain, 0);
   PendingExports.push_back(Chain);
 }
 
@@ -5533,7 +6520,7 @@ void SelectionDAGISel::LowerArguments(BasicBlock *LLVMBB) {
   FunctionLoweringInfo &FLI = DAG.getFunctionLoweringInfo();
 
   FLI.CanLowerReturn = TLI.CanLowerReturn(F.getCallingConv(), F.isVarArg(), 
-    OutVTs, OutsFlags, DAG);
+                                          OutVTs, OutsFlags, DAG);
   if (!FLI.CanLowerReturn) {
     // Put in an sret pointer parameter before all the other parameters.
     SmallVector<EVT, 1> ValueVTs;
@@ -5613,12 +6600,14 @@ void SelectionDAGISel::LowerArguments(BasicBlock *LLVMBB) {
          "LowerFormalArguments didn't return a valid chain!");
   assert(InVals.size() == Ins.size() &&
          "LowerFormalArguments didn't emit the correct number of values!");
-  DEBUG(for (unsigned i = 0, e = Ins.size(); i != e; ++i) {
-          assert(InVals[i].getNode() &&
-                 "LowerFormalArguments emitted a null value!");
-          assert(Ins[i].VT == InVals[i].getValueType() &&
-                 "LowerFormalArguments emitted a value with the wrong type!");
-        });
+  DEBUG({
+      for (unsigned i = 0, e = Ins.size(); i != e; ++i) {
+        assert(InVals[i].getNode() &&
+               "LowerFormalArguments emitted a null value!");
+        assert(Ins[i].VT == InVals[i].getValueType() &&
+               "LowerFormalArguments emitted a value with the wrong type!");
+      }
+    });
 
   // Update the DAG with the new chain value resulting from argument lowering.
   DAG.setRoot(NewRoot);
@@ -5634,8 +6623,8 @@ void SelectionDAGISel::LowerArguments(BasicBlock *LLVMBB) {
     EVT VT = ValueVTs[0];
     EVT RegVT = TLI.getRegisterType(*CurDAG->getContext(), VT);
     ISD::NodeType AssertOp = ISD::DELETED_NODE;
-    SDValue ArgValue = getCopyFromParts(DAG, dl, &InVals[0], 1, RegVT,
-                                        VT, AssertOp);
+    SDValue ArgValue = getCopyFromParts(DAG, dl, 0, &InVals[0], 1,
+                                        RegVT, VT, AssertOp);
 
     MachineFunction& MF = SDB->DAG.getMachineFunction();
     MachineRegisterInfo& RegInfo = MF.getRegInfo();
@@ -5643,11 +6632,12 @@ void SelectionDAGISel::LowerArguments(BasicBlock *LLVMBB) {
     FLI.DemoteRegister = SRetReg;
     NewRoot = SDB->DAG.getCopyToReg(NewRoot, SDB->getCurDebugLoc(), SRetReg, ArgValue);
     DAG.setRoot(NewRoot);
-    
+
     // i indexes lowered arguments.  Bump it past the hidden sret argument.
     // Idx indexes LLVM arguments.  Don't touch it.
     ++i;
   }
+
   for (Function::arg_iterator I = F.arg_begin(), E = F.arg_end(); I != E;
       ++I, ++Idx) {
     SmallVector<SDValue, 4> ArgValues;
@@ -5666,19 +6656,25 @@ void SelectionDAGISel::LowerArguments(BasicBlock *LLVMBB) {
         else if (F.paramHasAttr(Idx, Attribute::ZExt))
           AssertOp = ISD::AssertZext;
 
-        ArgValues.push_back(getCopyFromParts(DAG, dl, &InVals[i], NumParts,
-                                             PartVT, VT, AssertOp));
+        ArgValues.push_back(getCopyFromParts(DAG, dl, 0, &InVals[i],
+                                             NumParts, PartVT, VT,
+                                             AssertOp));
       }
+
       i += NumParts;
     }
+
     if (!I->use_empty()) {
-      SDB->setValue(I, DAG.getMergeValues(&ArgValues[0], NumValues,
-                                          SDB->getCurDebugLoc()));
+      SDValue Res = DAG.getMergeValues(&ArgValues[0], NumValues,
+                                       SDB->getCurDebugLoc());
+      SDB->setValue(I, Res);
+
       // If this argument is live outside of the entry block, insert a copy from
+      // wherever we got it to the vreg that other BBs will reference it as.
       SDB->CopyToExportRegsIfNeeded(I);
     }
   }
+
   assert(i == InVals.size() && "Argument register count mismatch!");
 
   // Finally, if the target has anything special to do, allow it to do so.
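The recurring `if (DisableScheduling) DAG.AssignOrdering(N, Order)` pattern threaded through this file can be summarized with a small sketch (hypothetical simplified types; the real interfaces live on llvm::SelectionDAG): every node created while visiting one IR instruction is stamped with the same monotonically increasing SDNodeOrder value, so source order can be recovered when scheduling is disabled.

```cpp
#include <cassert>
#include <map>

// Toy stand-in for an SDNode.
struct Node { int Id; };

// Toy stand-in for the side table SelectionDAG keeps in this patch: a map
// from node to the SDNodeOrder value current when the node was created.
class OrderedDAG {
  std::map<const Node *, unsigned> Ordering;

public:
  void AssignOrdering(const Node *N, unsigned Order) { Ordering[N] = Order; }
  unsigned GetOrdering(const Node *N) const {
    std::map<const Node *, unsigned>::const_iterator I = Ordering.find(N);
    return I == Ordering.end() ? 0 : I->second;
  }
};
```

Keeping the ordering in a side table rather than in the node itself means untouched nodes simply read back as order 0.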
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
index 244f9b5..88a2017 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
@@ -91,11 +91,13 @@ class SelectionDAGBuilder {
 
   DenseMap<const Value*, SDValue> NodeMap;
 
+public:
   /// PendingLoads - Loads are not emitted to the program immediately.  We bunch
   /// them up and then emit token factor nodes when possible.  This allows us to
   /// get simple disambiguation between loads without worrying about alias
   /// analysis.
   SmallVector<SDValue, 8> PendingLoads;
+private:
 
   /// PendingExports - CopyToReg nodes that copy values to virtual registers
   /// for export to other blocks need to be emitted before any terminator
@@ -104,6 +106,10 @@ class SelectionDAGBuilder {
   /// instructions.
   SmallVector<SDValue, 8> PendingExports;
 
+  /// SDNodeOrder - A unique monotonically increasing number used to order the
+  /// SDNodes we create.
+  unsigned SDNodeOrder;
+
   /// Case - A struct to record the Value for a switch case, and the
   /// case's target basic block.
   struct Case {
@@ -300,7 +306,7 @@ public:
   SelectionDAGBuilder(SelectionDAG &dag, TargetLowering &tli,
                       FunctionLoweringInfo &funcinfo,
                       CodeGenOpt::Level ol)
-    : CurDebugLoc(DebugLoc::getUnknownLoc()), 
+    : CurDebugLoc(DebugLoc::getUnknownLoc()), SDNodeOrder(0),
       TLI(tli), DAG(dag), FuncInfo(funcinfo), OptLevel(ol),
       HasTailCall(false),
       Context(dag.getContext()) {
@@ -332,6 +338,8 @@ public:
   DebugLoc getCurDebugLoc() const { return CurDebugLoc; }
   void setCurDebugLoc(DebugLoc dl) { CurDebugLoc = dl; }
 
+  unsigned getSDNodeOrder() const { return SDNodeOrder; }
+
   void CopyValueToVirtualRegister(Value *V, unsigned Reg);
 
   void visit(Instruction &I);
@@ -455,6 +463,8 @@ private:
   void visitStore(StoreInst &I);
   void visitPHI(PHINode &I) { } // PHI nodes are handled specially.
   void visitCall(CallInst &I);
+  bool visitMemCmpCall(CallInst &I);
+  
   void visitInlineAsm(CallSite CS);
   const char *visitIntrinsicCall(CallInst &I, unsigned Intrinsic);
   void visitTargetIntrinsic(CallInst &I, unsigned Intrinsic);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index a640c7d..667be90 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -567,9 +567,9 @@ void SelectionDAGISel::CodeGenAndEmitDAG() {
   if (Changed) {
     if (TimePassesIsEnabled) {
       NamedRegionTimer T("Type Legalization 2", GroupName);
-      Changed = CurDAG->LegalizeTypes();
+      CurDAG->LegalizeTypes();
     } else {
-      Changed = CurDAG->LegalizeTypes();
+      CurDAG->LegalizeTypes();
     }
 
     if (ViewDAGCombineLT)
@@ -1182,9 +1182,8 @@ SelectInlineAsmMemoryOperands(std::vector<SDValue> &Ops) {
       }
 
       // Add this to the output node.
-      EVT IntPtrTy = TLI.getPointerTy();
       Ops.push_back(CurDAG->getTargetConstant(4/*MEM*/ | (SelOps.size()<< 3),
-                                              IntPtrTy));
+                                              MVT::i32));
       Ops.insert(Ops.end(), SelOps.begin(), SelOps.end());
       i += 2;
     }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index 1026169..d9a5a13 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -713,6 +713,10 @@ MVT::SimpleValueType TargetLowering::getSetCCResultType(EVT VT) const {
   return PointerTy.SimpleTy;
 }
 
+MVT::SimpleValueType TargetLowering::getCmpLibcallReturnType() const {
+  return MVT::i32; // return the default value
+}
+
 /// getVectorTypeBreakdown - Vector types are broken down into some number of
 /// legal first class types.  For example, MVT::v8f32 maps to 2 MVT::v4f32
 /// with Altivec or SSE1, or 8 promoted MVT::f64 values with the X86 FP stack.
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
index ed407eb..6314331 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
@@ -1065,7 +1065,7 @@ SimpleRegisterCoalescing::isWinToJoinVRWithSrcPhysReg(MachineInstr *CopyMI,
       if (SuccMBB == CopyMBB)
         continue;
       if (DstInt.overlaps(li_->getMBBStartIdx(SuccMBB),
-                      li_->getMBBEndIdx(SuccMBB).getNextIndex().getBaseIndex()))
+                          li_->getMBBEndIdx(SuccMBB)))
         return false;
     }
   }
@@ -1121,7 +1121,7 @@ SimpleRegisterCoalescing::isWinToJoinVRWithDstPhysReg(MachineInstr *CopyMI,
       if (PredMBB == SMBB)
         continue;
       if (SrcInt.overlaps(li_->getMBBStartIdx(PredMBB),
-                      li_->getMBBEndIdx(PredMBB).getNextIndex().getBaseIndex()))
+                          li_->getMBBEndIdx(PredMBB)))
         return false;
     }
   }
@@ -2246,8 +2246,9 @@ SimpleRegisterCoalescing::JoinIntervals(LiveInterval &LHS, LiveInterval &RHS,
         continue;
 
       // Figure out the value # from the RHS.
-      LHSValsDefinedFromRHS[VNI]=
-        RHS.getLiveRangeContaining(VNI->def.getPrevSlot())->valno;
+      LiveRange *lr = RHS.getLiveRangeContaining(VNI->def.getPrevSlot());
+      assert(lr && "Cannot find live range");
+      LHSValsDefinedFromRHS[VNI] = lr->valno;
     }
 
     // Loop over the value numbers of the RHS, seeing if any are defined from
@@ -2264,8 +2265,9 @@ SimpleRegisterCoalescing::JoinIntervals(LiveInterval &LHS, LiveInterval &RHS,
         continue;
 
       // Figure out the value # from the LHS.
-      RHSValsDefinedFromLHS[VNI]=
-        LHS.getLiveRangeContaining(VNI->def.getPrevSlot())->valno;
+      LiveRange *lr = LHS.getLiveRangeContaining(VNI->def.getPrevSlot());
+      assert(lr && "Cannot find live range");
+      RHSValsDefinedFromLHS[VNI] = lr->valno;
     }
 
     LHSValNoAssignments.resize(LHS.getNumValNums(), -1);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
index 605a740..f668064 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
@@ -33,7 +33,7 @@ namespace llvm {
     MachineInstr *MI;
     unsigned LoopDepth;
     CopyRec(MachineInstr *mi, unsigned depth)
-      : MI(mi), LoopDepth(depth) {};
+      : MI(mi), LoopDepth(depth) {}
   };
 
   class SimpleRegisterCoalescing : public MachineFunctionPass,
@@ -85,7 +85,7 @@ namespace llvm {
     bool coalesceFunction(MachineFunction &mf, RegallocQuery &) {
       // This runs as an independent pass, so don't do anything.
       return false;
-    };
+    }
 
     /// print - Implement the dump method.
     virtual void print(raw_ostream &O, const Module* = 0) const;
diff --git a/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp b/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
index f85384b..782af12 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
@@ -92,13 +92,14 @@ bool SlotIndexes::runOnMachineFunction(MachineFunction &fn) {
   functionSize = 0;
   unsigned index = 0;
 
+  push_back(createEntry(0, index));
+
   // Iterate over the function.
   for (MachineFunction::iterator mbbItr = mf->begin(), mbbEnd = mf->end();
        mbbItr != mbbEnd; ++mbbItr) {
     MachineBasicBlock *mbb = &*mbbItr;
 
     // Insert an index for the MBB start.
-    push_back(createEntry(0, index));
     SlotIndex blockStartIndex(back(), SlotIndex::LOAD);
 
     index += SlotIndex::NUM;
@@ -137,16 +138,16 @@ bool SlotIndexes::runOnMachineFunction(MachineFunction &fn) {
       index += SlotIndex::NUM;
     }
 
-    SlotIndex blockEndIndex(back(), SlotIndex::STORE);
+    // One blank instruction at the end.
+    push_back(createEntry(0, index));    
+
+    SlotIndex blockEndIndex(back(), SlotIndex::LOAD);
     mbb2IdxMap.insert(
       std::make_pair(mbb, std::make_pair(blockStartIndex, blockEndIndex)));
 
     idx2MBBMap.push_back(IdxMBBPair(blockStartIndex, mbb));
   }
 
-  // One blank instruction at the end.
-  push_back(createEntry(0, index));
-
   // Sort the Idx2MBBMap
   std::sort(idx2MBBMap.begin(), idx2MBBMap.end(), Idx2MBBCompare());
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp b/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
index bc246c1..bec9294 100644
--- a/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
@@ -486,10 +486,10 @@ private:
         SlotIndex newKillRangeEnd = oldKillRange->end;
         oldKillRange->end = copyIdx.getDefIndex();
 
-        if (newKillRangeEnd != lis->getMBBEndIdx(killMBB).getNextSlot()) {
-          assert(newKillRangeEnd > lis->getMBBEndIdx(killMBB).getNextSlot() &&
+        if (newKillRangeEnd != lis->getMBBEndIdx(killMBB)) {
+          assert(newKillRangeEnd > lis->getMBBEndIdx(killMBB) &&
                  "PHI kill range doesn't reach kill-block end. Not sane.");
-          newLI->addRange(LiveRange(lis->getMBBEndIdx(killMBB).getNextSlot(),
+          newLI->addRange(LiveRange(lis->getMBBEndIdx(killMBB),
                                     newKillRangeEnd, newVNI));
         }
 
@@ -500,7 +500,7 @@ private:
         newKillVNI->addKill(lis->getMBBTerminatorGap(killMBB));
         newKillVNI->setHasPHIKill(true);
         li->addRange(LiveRange(copyIdx.getDefIndex(),
-                               lis->getMBBEndIdx(killMBB).getNextSlot(),
+                               lis->getMBBEndIdx(killMBB),
                                newKillVNI));
       }
 
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
index 52cac86..4d34331 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
@@ -367,6 +367,32 @@ void JIT::deleteModuleProvider(ModuleProvider *MP, std::string *E) {
   }    
 }
 
+/// materializeFunction - make sure the given function is fully read.  If the
+/// module is corrupt, this returns true and fills in the optional string with
+/// information about the problem.  If successful, this returns false.
+bool JIT::materializeFunction(Function *F, std::string *ErrInfo) {
+  // Read in the function if it exists in this Module.
+  if (F->hasNotBeenReadFromBitcode()) {
+    // Determine the module provider this function is provided by.
+    Module *M = F->getParent();
+    ModuleProvider *MP = 0;
+    for (unsigned i = 0, e = Modules.size(); i != e; ++i) {
+      if (Modules[i]->getModule() == M) {
+        MP = Modules[i];
+        break;
+      }
+    }
+    if (MP)
+      return MP->materializeFunction(F, ErrInfo);
+
+    if (ErrInfo)
+      *ErrInfo = "Function isn't in a module we know about!";
+    return true;
+  }
+  // Succeed if the function is already read.
+  return false;
+}
+
 /// run - Start execution with the specified function and arguments.
 ///
 GenericValue JIT::runFunction(Function *F,
@@ -586,11 +612,13 @@ void JIT::runJITOnFunction(Function *F, MachineCodeInfo *MCI) {
     }
   };
   MCIListener MCIL(MCI);
-  RegisterJITEventListener(&MCIL);
+  if (MCI)
+    RegisterJITEventListener(&MCIL);
 
   runJITOnFunctionUnlocked(F, locked);
 
-  UnregisterJITEventListener(&MCIL);
+  if (MCI)
+    UnregisterJITEventListener(&MCIL);
 }
 
 void JIT::runJITOnFunctionUnlocked(Function *F, const MutexGuard &locked) {
@@ -608,6 +636,9 @@ void JIT::runJITOnFunctionUnlocked(Function *F, const MutexGuard &locked) {
     Function *PF = jitstate->getPendingFunctions(locked).back();
     jitstate->getPendingFunctions(locked).pop_back();
 
+    assert(!PF->hasAvailableExternallyLinkage() &&
+           "Externally-defined function should not be in pending list.");
+
     // JIT the function
     isAlreadyCodeGenerating = true;
     jitstate->getPM(locked).run(*PF);
@@ -628,36 +659,19 @@ void *JIT::getPointerToFunction(Function *F) {
     return Addr;   // Check if function already code gen'd
 
   MutexGuard locked(lock);
-  
-  // Now that this thread owns the lock, check if another thread has already
-  // code gen'd the function.
-  if (void *Addr = getPointerToGlobalIfAvailable(F))
-    return Addr;  
 
-  // Make sure we read in the function if it exists in this Module.
-  if (F->hasNotBeenReadFromBitcode()) {
-    // Determine the module provider this function is provided by.
-    Module *M = F->getParent();
-    ModuleProvider *MP = 0;
-    for (unsigned i = 0, e = Modules.size(); i != e; ++i) {
-      if (Modules[i]->getModule() == M) {
-        MP = Modules[i];
-        break;
-      }
-    }
-    assert(MP && "Function isn't in a module we know about!");
-    
-    std::string ErrorMsg;
-    if (MP->materializeFunction(F, &ErrorMsg)) {
-      llvm_report_error("Error reading function '" + F->getName()+
-                        "' from bitcode file: " + ErrorMsg);
-    }
-
-    // Now retry to get the address.
-    if (void *Addr = getPointerToGlobalIfAvailable(F))
-      return Addr;
+  // Now that this thread owns the lock, make sure we read in the function if it
+  // exists in this Module.
+  std::string ErrorMsg;
+  if (materializeFunction(F, &ErrorMsg)) {
+    llvm_report_error("Error reading function '" + F->getName()+
+                      "' from bitcode file: " + ErrorMsg);
   }
 
+  // ... and check if another thread has already code gen'd the function.
+  if (void *Addr = getPointerToGlobalIfAvailable(F))
+    return Addr;
+
   if (F->isDeclaration() || F->hasAvailableExternallyLinkage()) {
     bool AbortOnFailure = !F->hasExternalWeakLinkage();
     void *Addr = getPointerToNamedFunction(F->getName(), AbortOnFailure);
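The reworked getPointerToFunction flow above can be sketched with hypothetical stand-in types: take the lock, materialize the function body if it is still an unread "ghost", and only then check whether another thread already generated code while we waited for the lock, mirroring how the patch hoists the bitcode-reading logic into the new materializeFunction helper.

```cpp
#include <cassert>
#include <map>
#include <mutex>

struct Fn { bool Ghost; }; // Ghost = body not yet read from bitcode

class MiniJIT {
  std::mutex Lock;
  std::map<Fn *, void *> CodeGenned;
  static void *codegen(Fn *) { static int Slot; return &Slot; } // stand-in

public:
  // Returns true on error, like JIT::materializeFunction in the patch.
  bool materializeFunction(Fn *F) { F->Ghost = false; return false; }

  void *getPointerToFunction(Fn *F) {
    std::lock_guard<std::mutex> Guard(Lock);
    if (F->Ghost && materializeFunction(F))
      return 0; // the real code reports an error here
    std::map<Fn *, void *>::iterator I = CodeGenned.find(F);
    if (I != CodeGenned.end())
      return I->second; // another thread won the race
    return CodeGenned[F] = codegen(F);
  }
};
```

Doing the already-compiled check after materialization keeps the fast path correct without a second, separate locking round-trip.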
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
index f165bd6..b6f74ff 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
@@ -104,6 +104,12 @@ public:
   /// the underlying module.
   virtual void deleteModuleProvider(ModuleProvider *P,std::string *ErrInfo = 0);
 
+  /// materializeFunction - make sure the given function is fully read.  If the
+  /// module is corrupt, this returns true and fills in the optional string with
+  /// information about the problem.  If successful, this returns false.
+  ///
+  bool materializeFunction(Function *F, std::string *ErrInfo = 0);
+
   /// runFunction - Start execution with the specified function and arguments.
   ///
   virtual GenericValue runFunction(Function *F,
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp
index 0193486..c1051a9 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp
@@ -429,13 +429,12 @@ unsigned char* JITDwarfEmitter::EmitExceptionTable(MachineFunction* MF,
 
     // Asm->EOL("Region start");
 
-    if (!S.EndLabel) {
+    if (!S.EndLabel)
       EndLabelPtr = (intptr_t)EndFunction;
-      JCE->emitInt32((intptr_t)EndFunction - BeginLabelPtr);
-    } else {
+    else
       EndLabelPtr = JCE->getLabelAddress(S.EndLabel);
-      JCE->emitInt32(EndLabelPtr - BeginLabelPtr);
-    }
+
+    JCE->emitInt32(EndLabelPtr - BeginLabelPtr);
     //Asm->EOL("Region length");
 
     if (!S.PadLabel) {
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
index bbac762..ef323b5 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
@@ -59,6 +59,12 @@ STATISTIC(NumRetries, "Number of retries with more memory");
 static JIT *TheJIT = 0;
 
 
+// A declaration may stop being a declaration once it's fully read from bitcode.
+// This function returns true if F is fully read and is still a declaration.
+static bool isNonGhostDeclaration(const Function *F) {
+  return F->isDeclaration() && !F->hasNotBeenReadFromBitcode();
+}
+
 //===----------------------------------------------------------------------===//
 // JIT lazy compilation code.
 //
@@ -271,6 +277,10 @@ namespace {
   class JITEmitter : public JITCodeEmitter {
     JITMemoryManager *MemMgr;
 
+    // When outputting a function stub in the context of some other function, we
+    // save BufferBegin/BufferEnd/CurBufferPtr here.
+    uint8_t *SavedBufferBegin, *SavedBufferEnd, *SavedCurBufferPtr;
+
     // When reattempting to JIT a function after running out of space, we store
     // the estimated size of the function we're trying to JIT here, so we can
     // ask the memory manager for at least this much space.  When we
@@ -396,11 +406,13 @@ namespace {
     void initJumpTableInfo(MachineJumpTableInfo *MJTI);
     void emitJumpTableInfo(MachineJumpTableInfo *MJTI);
 
-    virtual void startGVStub(BufferState &BS, const GlobalValue* GV,
-                             unsigned StubSize, unsigned Alignment = 1);
-    virtual void startGVStub(BufferState &BS, void *Buffer,
-                             unsigned StubSize);
-    virtual void* finishGVStub(BufferState &BS);
+    void startGVStub(const GlobalValue* GV,
+                     unsigned StubSize, unsigned Alignment = 1);
+    void startGVStub(void *Buffer, unsigned StubSize);
+    void finishGVStub();
+    virtual void *allocIndirectGV(const GlobalValue *GV,
+                                  const uint8_t *Buffer, size_t Size,
+                                  unsigned Alignment);
 
     /// allocateSpace - Reserves space in the current block if any, or
     /// allocate a new one of the given size.
@@ -513,7 +525,7 @@ void *JITResolver::getLazyFunctionStub(Function *F) {
 
   // If this is an external declaration, attempt to resolve the address now
   // to place in the stub.
-  if (F->isDeclaration() && !F->hasNotBeenReadFromBitcode()) {
+  if (isNonGhostDeclaration(F) || F->hasAvailableExternallyLinkage()) {
     Actual = TheJIT->getPointerToFunction(F);
 
     // If we resolved the symbol to a null address (eg. a weak external)
@@ -521,13 +533,12 @@ void *JITResolver::getLazyFunctionStub(Function *F) {
     if (!Actual) return 0;
   }
 
-  MachineCodeEmitter::BufferState BS;
   TargetJITInfo::StubLayout SL = TheJIT->getJITInfo().getStubLayout();
-  JE.startGVStub(BS, F, SL.Size, SL.Alignment);
+  JE.startGVStub(F, SL.Size, SL.Alignment);
   // Codegen a new stub, calling the lazy resolver or the actual address of the
   // external function, if it was resolved.
   Stub = TheJIT->getJITInfo().emitFunctionStub(F, Actual, JE);
-  JE.finishGVStub(BS);
+  JE.finishGVStub();
 
   if (Actual != (void*)(intptr_t)LazyResolverFn) {
     // If we are getting the stub for an external function, we really want the
@@ -547,7 +558,7 @@ void *JITResolver::getLazyFunctionStub(Function *F) {
   // exist yet, add it to the JIT's work list so that we can fill in the stub
   // address later.
   if (!Actual && !TheJIT->isCompilingLazily())
-    if (!F->isDeclaration() || F->hasNotBeenReadFromBitcode())
+    if (!isNonGhostDeclaration(F) && !F->hasAvailableExternallyLinkage())
       TheJIT->addPendingFunction(F);
 
   return Stub;
@@ -579,11 +590,10 @@ void *JITResolver::getExternalFunctionStub(void *FnAddr) {
   void *&Stub = ExternalFnToStubMap[FnAddr];
   if (Stub) return Stub;
 
-  MachineCodeEmitter::BufferState BS;
   TargetJITInfo::StubLayout SL = TheJIT->getJITInfo().getStubLayout();
-  JE.startGVStub(BS, 0, SL.Size, SL.Alignment);
+  JE.startGVStub(0, SL.Size, SL.Alignment);
   Stub = TheJIT->getJITInfo().emitFunctionStub(0, FnAddr, JE);
-  JE.finishGVStub(BS);
+  JE.finishGVStub();
 
   DEBUG(errs() << "JIT: Stub emitted at [" << Stub
                << "] for external function at '" << FnAddr << "'\n");
@@ -753,7 +763,7 @@ void *JITEmitter::getPointerToGlobal(GlobalValue *V, void *Reference,
 
     // If this is an external function pointer, we can force the JIT to
     // 'compile' it, which really just adds it to the map.
-    if (F->isDeclaration() && !F->hasNotBeenReadFromBitcode())
+    if (isNonGhostDeclaration(F) || F->hasAvailableExternallyLinkage())
       return TheJIT->getPointerToFunction(F);
   }
 
@@ -1215,8 +1225,9 @@ bool JITEmitter::finishFunction(MachineFunction &F) {
 
   if (DwarfExceptionHandling || JITEmitDebugInfo) {
     uintptr_t ActualSize = 0;
-    BufferState BS;
-    SaveStateTo(BS);
+    SavedBufferBegin = BufferBegin;
+    SavedBufferEnd = BufferEnd;
+    SavedCurBufferPtr = CurBufferPtr;
 
     if (MemMgr->NeedsExactSize()) {
       ActualSize = DE->GetDwarfTableSizeInBytes(F, *this, FnStart, FnEnd);
@@ -1232,7 +1243,9 @@ bool JITEmitter::finishFunction(MachineFunction &F) {
     MemMgr->endExceptionTable(F.getFunction(), BufferBegin, CurBufferPtr,
                               FrameRegister);
     uint8_t *EhEnd = CurBufferPtr;
-    RestoreStateFrom(BS);
+    BufferBegin = SavedBufferBegin;
+    BufferEnd = SavedBufferEnd;
+    CurBufferPtr = SavedCurBufferPtr;
 
     if (DwarfExceptionHandling) {
       TheJIT->RegisterTable(FrameRegister);
@@ -1438,27 +1451,39 @@ void JITEmitter::emitJumpTableInfo(MachineJumpTableInfo *MJTI) {
   }
 }
 
-void JITEmitter::startGVStub(BufferState &BS, const GlobalValue* GV,
+void JITEmitter::startGVStub(const GlobalValue* GV,
                              unsigned StubSize, unsigned Alignment) {
-  SaveStateTo(BS);
+  SavedBufferBegin = BufferBegin;
+  SavedBufferEnd = BufferEnd;
+  SavedCurBufferPtr = CurBufferPtr;
 
   BufferBegin = CurBufferPtr = MemMgr->allocateStub(GV, StubSize, Alignment);
   BufferEnd = BufferBegin+StubSize+1;
 }
 
-void JITEmitter::startGVStub(BufferState &BS, void *Buffer, unsigned StubSize) {
-  SaveStateTo(BS);
+void JITEmitter::startGVStub(void *Buffer, unsigned StubSize) {
+  SavedBufferBegin = BufferBegin;
+  SavedBufferEnd = BufferEnd;
+  SavedCurBufferPtr = CurBufferPtr;
 
   BufferBegin = CurBufferPtr = (uint8_t *)Buffer;
   BufferEnd = BufferBegin+StubSize+1;
 }
 
-void *JITEmitter::finishGVStub(BufferState &BS) {
+void JITEmitter::finishGVStub() {
   assert(CurBufferPtr != BufferEnd && "Stub overflowed allocated space.");
   NumBytes += getCurrentPCOffset();
-  void *Result = BufferBegin;
-  RestoreStateFrom(BS);
-  return Result;
+  BufferBegin = SavedBufferBegin;
+  BufferEnd = SavedBufferEnd;
+  CurBufferPtr = SavedCurBufferPtr;
+}
+
+void *JITEmitter::allocIndirectGV(const GlobalValue *GV,
+                                  const uint8_t *Buffer, size_t Size,
+                                  unsigned Alignment) {
+  uint8_t *IndGV = MemMgr->allocateStub(GV, Size, Alignment);
+  memcpy(IndGV, Buffer, Size);
+  return IndGV;
 }
 
 // getConstantPoolEntryAddress - Return the address of the 'ConstantNum' entry
@@ -1543,14 +1568,14 @@ void JIT::updateFunctionStub(Function *F) {
   JITEmitter *JE = cast<JITEmitter>(getCodeEmitter());
   void *Stub = JE->getJITResolver().getLazyFunctionStub(F);
   void *Addr = getPointerToGlobalIfAvailable(F);
+  assert(Addr != Stub && "Function must have non-stub address to be updated.");
 
   // Tell the target jit info to rewrite the stub at the specified address,
   // rather than creating a new one.
-  MachineCodeEmitter::BufferState BS;
   TargetJITInfo::StubLayout layout = getJITInfo().getStubLayout();
-  JE->startGVStub(BS, Stub, layout.Size);
+  JE->startGVStub(Stub, layout.Size);
   getJITInfo().emitFunctionStub(F, Addr, *getCodeEmitter());
-  JE->finishGVStub(BS);
+  JE->finishGVStub();
 }
 
 /// freeMachineCodeForFunction - release machine code memory for given Function.
diff --git a/libclamav/c++/llvm/lib/Support/APFloat.cpp b/libclamav/c++/llvm/lib/Support/APFloat.cpp
index b9b323c..1e6d22f 100644
--- a/libclamav/c++/llvm/lib/Support/APFloat.cpp
+++ b/libclamav/c++/llvm/lib/Support/APFloat.cpp
@@ -3139,6 +3139,60 @@ APFloat::initFromAPInt(const APInt& api, bool isIEEE)
     llvm_unreachable(0);
 }
 
+APFloat APFloat::getLargest(const fltSemantics &Sem, bool Negative) {
+  APFloat Val(Sem, fcNormal, Negative);
+
+  // We want (in interchange format):
+  //   sign = {Negative}
+  //   exponent = 1..10
+  //   significand = 1..1
+
+  Val.exponent = Sem.maxExponent; // unbiased
+
+  // 1-initialize all bits....
+  Val.zeroSignificand();
+  integerPart *significand = Val.significandParts();
+  unsigned N = partCountForBits(Sem.precision);
+  for (unsigned i = 0; i != N; ++i)
+    significand[i] = ~((integerPart) 0);
+
+  // ...and then clear the top bits for internal consistency.
+  significand[N-1]
+    &= (((integerPart) 1) << ((Sem.precision % integerPartWidth) - 1)) - 1;
+
+  return Val;
+}
+
+APFloat APFloat::getSmallest(const fltSemantics &Sem, bool Negative) {
+  APFloat Val(Sem, fcNormal, Negative);
+
+  // We want (in interchange format):
+  //   sign = {Negative}
+  //   exponent = 0..0
+  //   significand = 0..01
+
+  Val.exponent = Sem.minExponent; // unbiased
+  Val.zeroSignificand();
+  Val.significandParts()[0] = 1;
+  return Val;
+}
+
+APFloat APFloat::getSmallestNormalized(const fltSemantics &Sem, bool Negative) {
+  APFloat Val(Sem, fcNormal, Negative);
+
+  // We want (in interchange format):
+  //   sign = {Negative}
+  //   exponent = 0..0
+  //   significand = 10..0
+
+  Val.exponent = Sem.minExponent;
+  Val.zeroSignificand();
+  Val.significandParts()[partCountForBits(Sem.precision)-1]
+    |= (((integerPart) 1) << ((Sem.precision % integerPartWidth) - 1));
+
+  return Val;
+}
+
 APFloat::APFloat(const APInt& api, bool isIEEE)
 {
   initFromAPInt(api, isIEEE);
@@ -3155,3 +3209,297 @@ APFloat::APFloat(double d)
   APInt api = APInt(64, 0);
   initFromAPInt(api.doubleToBits(d));
 }
+
+namespace {
+  static void append(SmallVectorImpl<char> &Buffer,
+                     unsigned N, const char *Str) {
+    unsigned Start = Buffer.size();
+    Buffer.set_size(Start + N);
+    memcpy(&Buffer[Start], Str, N);
+  }
+
+  template <unsigned N>
+  void append(SmallVectorImpl<char> &Buffer, const char (&Str)[N]) {
+    append(Buffer, N, Str);
+  }
+
+  /// Removes data from the given significand until it is no more
+  /// precise than is required for the desired precision.
+  void AdjustToPrecision(APInt &significand,
+                         int &exp, unsigned FormatPrecision) {
+    unsigned bits = significand.getActiveBits();
+
+    // 196/59 is a very slight overestimate of lg_2(10).
+    unsigned bitsRequired = (FormatPrecision * 196 + 58) / 59;
+
+    if (bits <= bitsRequired) return;
+
+    unsigned tensRemovable = (bits - bitsRequired) * 59 / 196;
+    if (!tensRemovable) return;
+
+    exp += tensRemovable;
+
+    APInt divisor(significand.getBitWidth(), 1);
+    APInt powten(significand.getBitWidth(), 10);
+    while (true) {
+      if (tensRemovable & 1)
+        divisor *= powten;
+      tensRemovable >>= 1;
+      if (!tensRemovable) break;
+      powten *= powten;
+    }
+
+    significand = significand.udiv(divisor);
+
+    // Truncate the significand down to its active bit count, but
+    // don't try to drop below 32.
+    unsigned newPrecision = std::max(32U, significand.getActiveBits());
+    significand.trunc(newPrecision);
+  }
+
+
+  void AdjustToPrecision(SmallVectorImpl<char> &buffer,
+                         int &exp, unsigned FormatPrecision) {
+    unsigned N = buffer.size();
+    if (N <= FormatPrecision) return;
+
+    // The most significant figures are the last ones in the buffer.
+    unsigned FirstSignificant = N - FormatPrecision;
+
+    // Round.
+    // FIXME: this probably shouldn't use 'round half up'.
+
+    // Rounding down is just a truncation, except we also want to drop
+    // trailing zeros from the new result.
+    if (buffer[FirstSignificant - 1] < '5') {
+      while (buffer[FirstSignificant] == '0')
+        FirstSignificant++;
+
+      exp += FirstSignificant;
+      buffer.erase(&buffer[0], &buffer[FirstSignificant]);
+      return;
+    }
+
+    // Rounding up requires a decimal add-with-carry.  If we continue
+    // the carry, the newly-introduced zeros will just be truncated.
+    for (unsigned I = FirstSignificant; I != N; ++I) {
+      if (buffer[I] == '9') {
+        FirstSignificant++;
+      } else {
+        buffer[I]++;
+        break;
+      }
+    }
+
+    // If we carried through, we have exactly one digit of precision.
+    if (FirstSignificant == N) {
+      exp += FirstSignificant;
+      buffer.clear();
+      buffer.push_back('1');
+      return;
+    }
+
+    exp += FirstSignificant;
+    buffer.erase(&buffer[0], &buffer[FirstSignificant]);
+  }
+}
+
+void APFloat::toString(SmallVectorImpl<char> &Str,
+                       unsigned FormatPrecision,
+                       unsigned FormatMaxPadding) {
+  switch (category) {
+  case fcInfinity:
+    if (isNegative())
+      return append(Str, "-Inf");
+    else
+      return append(Str, "+Inf");
+
+  case fcNaN: return append(Str, "NaN");
+
+  case fcZero:
+    if (isNegative())
+      Str.push_back('-');
+
+    if (!FormatMaxPadding)
+      append(Str, "0.0E+0");
+    else
+      Str.push_back('0');
+    return;
+
+  case fcNormal:
+    break;
+  }
+
+  if (isNegative())
+    Str.push_back('-');
+
+  // Decompose the number into an APInt and an exponent.
+  int exp = exponent - ((int) semantics->precision - 1);
+  APInt significand(semantics->precision,
+                    partCountForBits(semantics->precision),
+                    significandParts());
+
+  // Set FormatPrecision if zero.  We want to do this before we
+  // truncate trailing zeros, as those are part of the precision.
+  if (!FormatPrecision) {
+    // It's an interesting question whether to use the nominal
+    // precision or the active precision here for denormals.
+
+    // FormatPrecision = ceil(significandBits / lg_2(10))
+    FormatPrecision = (semantics->precision * 59 + 195) / 196;
+  }
+
+  // Ignore trailing binary zeros.
+  int trailingZeros = significand.countTrailingZeros();
+  exp += trailingZeros;
+  significand = significand.lshr(trailingZeros);
+
+  // Change the exponent from 2^e to 10^e.
+  if (exp == 0) {
+    // Nothing to do.
+  } else if (exp > 0) {
+    // Just shift left.
+    significand.zext(semantics->precision + exp);
+    significand <<= exp;
+    exp = 0;
+  } else { /* exp < 0 */
+    int texp = -exp;
+
+    // We transform this using the identity:
+    //   (N)(2^-e) == (N)(5^e)(10^-e)
+    // This means we have to multiply N (the significand) by 5^e.
+    // To avoid overflow, we have to operate on numbers large
+    // enough to store N * 5^e:
+    //   log2(N * 5^e) == log2(N) + e * log2(5)
+    //                 <= semantics->precision + e * 137 / 59
+    //   (log_2(5) ~ 2.321928 < 2.322034 ~ 137/59)
+    
+    unsigned precision = semantics->precision + 137 * texp / 59;
+
+    // Multiply significand by 5^e.
+    //   N * 5^0101 == N * 5^(1*1) * 5^(0*2) * 5^(1*4) * 5^(0*8)
+    significand.zext(precision);
+    APInt five_to_the_i(precision, 5);
+    while (true) {
+      if (texp & 1) significand *= five_to_the_i;
+      
+      texp >>= 1;
+      if (!texp) break;
+      five_to_the_i *= five_to_the_i;
+    }
+  }
+
+  AdjustToPrecision(significand, exp, FormatPrecision);
+
+  llvm::SmallVector<char, 256> buffer;
+
+  // Fill the buffer.
+  unsigned precision = significand.getBitWidth();
+  APInt ten(precision, 10);
+  APInt digit(precision, 0);
+
+  bool inTrail = true;
+  while (significand != 0) {
+    // digit <- significand % 10
+    // significand <- significand / 10
+    APInt::udivrem(significand, ten, significand, digit);
+
+    unsigned d = digit.getZExtValue();
+
+    // Drop trailing zeros.
+    if (inTrail && !d) exp++;
+    else {
+      buffer.push_back((char) ('0' + d));
+      inTrail = false;
+    }
+  }
+
+  assert(!buffer.empty() && "no characters in buffer!");
+
+  // Drop down to FormatPrecision.
+  // TODO: don't do more precise calculations above than are required.
+  AdjustToPrecision(buffer, exp, FormatPrecision);
+
+  unsigned NDigits = buffer.size();
+
+  // Check whether we should use scientific notation.
+  bool FormatScientific;
+  if (!FormatMaxPadding)
+    FormatScientific = true;
+  else {
+    if (exp >= 0) {
+      // 765e3 --> 765000
+      //              ^^^
+      // But we shouldn't make the number look more precise than it is.
+      FormatScientific = ((unsigned) exp > FormatMaxPadding ||
+                          NDigits + (unsigned) exp > FormatPrecision);
+    } else {
+      // Power of the most significant digit.
+      int MSD = exp + (int) (NDigits - 1);
+      if (MSD >= 0) {
+        // 765e-2 == 7.65
+        FormatScientific = false;
+      } else {
+        // 765e-5 == 0.00765
+        //           ^ ^^
+        FormatScientific = ((unsigned) -MSD) > FormatMaxPadding;
+      }
+    }
+  }
+
+  // Scientific formatting is pretty straightforward.
+  if (FormatScientific) {
+    exp += (NDigits - 1);
+
+    Str.push_back(buffer[NDigits-1]);
+    Str.push_back('.');
+    if (NDigits == 1)
+      Str.push_back('0');
+    else
+      for (unsigned I = 1; I != NDigits; ++I)
+        Str.push_back(buffer[NDigits-1-I]);
+    Str.push_back('E');
+
+    Str.push_back(exp >= 0 ? '+' : '-');
+    if (exp < 0) exp = -exp;
+    SmallVector<char, 6> expbuf;
+    do {
+      expbuf.push_back((char) ('0' + (exp % 10)));
+      exp /= 10;
+    } while (exp);
+    for (unsigned I = 0, E = expbuf.size(); I != E; ++I)
+      Str.push_back(expbuf[E-1-I]);
+    return;
+  }
+
+  // Non-scientific, positive exponents.
+  if (exp >= 0) {
+    for (unsigned I = 0; I != NDigits; ++I)
+      Str.push_back(buffer[NDigits-1-I]);
+    for (unsigned I = 0; I != (unsigned) exp; ++I)
+      Str.push_back('0');
+    return;
+  }
+
+  // Non-scientific, negative exponents.
+
+  // The number of digits to the left of the decimal point.
+  int NWholeDigits = exp + (int) NDigits;
+
+  unsigned I = 0;
+  if (NWholeDigits > 0) {
+    for (; I != (unsigned) NWholeDigits; ++I)
+      Str.push_back(buffer[NDigits-I-1]);
+    Str.push_back('.');
+  } else {
+    unsigned NZeros = 1 + (unsigned) -NWholeDigits;
+
+    Str.push_back('0');
+    Str.push_back('.');
+    for (unsigned Z = 1; Z != NZeros; ++Z)
+      Str.push_back('0');
+  }
+
+  for (; I != NDigits; ++I)
+    Str.push_back(buffer[NDigits-I-1]);
+}
diff --git a/libclamav/c++/llvm/lib/Support/APInt.cpp b/libclamav/c++/llvm/lib/Support/APInt.cpp
index 56d4773..9532e1e 100644
--- a/libclamav/c++/llvm/lib/Support/APInt.cpp
+++ b/libclamav/c++/llvm/lib/Support/APInt.cpp
@@ -2012,8 +2012,8 @@ void APInt::udivrem(const APInt &LHS, const APInt &RHS,
   }
 
   if (lhsWords < rhsWords || LHS.ult(RHS)) {
-    Quotient = 0;               // X / Y ===> 0, iff X < Y
     Remainder = LHS;            // X % Y ===> X, iff X < Y
+    Quotient = 0;               // X / Y ===> 0, iff X < Y
     return;
   }
 
diff --git a/libclamav/c++/llvm/lib/Support/CMakeLists.txt b/libclamav/c++/llvm/lib/Support/CMakeLists.txt
index ac736dc..f1347f9 100644
--- a/libclamav/c++/llvm/lib/Support/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Support/CMakeLists.txt
@@ -3,6 +3,7 @@ add_llvm_library(LLVMSupport
   APInt.cpp
   APSInt.cpp
   Allocator.cpp
+  circular_raw_ostream.cpp
   CommandLine.cpp
   ConstantRange.cpp
   Debug.cpp
@@ -23,6 +24,7 @@ add_llvm_library(LLVMSupport
   Regex.cpp
   SlowOperationInformer.cpp
   SmallPtrSet.cpp
+  SmallVector.cpp
   SourceMgr.cpp
   Statistic.cpp
   StringExtras.cpp
diff --git a/libclamav/c++/llvm/lib/Support/Debug.cpp b/libclamav/c++/llvm/lib/Support/Debug.cpp
index 50abe01..a035771 100644
--- a/libclamav/c++/llvm/lib/Support/Debug.cpp
+++ b/libclamav/c++/llvm/lib/Support/Debug.cpp
@@ -25,6 +25,9 @@
 
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
+#include "llvm/Support/circular_raw_ostream.h"
+#include "llvm/System/Signals.h"
+
 using namespace llvm;
 
 // All Debug.h functionality is a no-op in NDEBUG mode.
@@ -37,6 +40,16 @@ static cl::opt<bool, true>
 Debug("debug", cl::desc("Enable debug output"), cl::Hidden,
       cl::location(DebugFlag));
 
+// -debug-buffer-size - Buffer the last N characters of debug output
+// until program termination.
+static cl::opt<unsigned>
+DebugBufferSize("debug-buffer-size",
+                cl::desc("Buffer the last N characters of debug output "
+                         "until program termination. "
+                         "[default 0 -- immediate print-out]"),
+                cl::Hidden,
+                cl::init(0));
+
 static std::string CurrentDebugType;
 static struct DebugOnlyOpt {
   void operator=(const std::string &Val) const {
@@ -50,6 +63,18 @@ DebugOnly("debug-only", cl::desc("Enable a specific type of debug output"),
           cl::Hidden, cl::value_desc("debug string"),
           cl::location(DebugOnlyOptLoc), cl::ValueRequired);
 
+// Signal handlers - dump debug output on termination.
+static void debug_user_sig_handler(void *Cookie)
+{
+  // This is a bit sneaky.  Since this is under #ifndef NDEBUG, we
+  // know that debug mode is enabled and dbgs() really is a
+  // circular_raw_ostream.  If NDEBUG is defined, then dbgs() ==
+  // errs() but this will never be invoked.
+  llvm::circular_raw_ostream *dbgout =
+    static_cast<llvm::circular_raw_ostream *>(&llvm::dbgs());
+  dbgout->flushBufferWithBanner();
+}
+
 // isCurrentDebugType - Return true if the specified string is the debug type
 // specified on the command line, or if none was specified on the command line
 // with the -debug-only=X option.
@@ -66,9 +91,38 @@ void llvm::SetCurrentDebugType(const char *Type) {
   CurrentDebugType = Type;
 }
 
+/// dbgs - Return a circular-buffered debug stream.
+raw_ostream &llvm::dbgs() {
+  // Do one-time initialization in a thread-safe way.
+  static struct dbgstream {
+    circular_raw_ostream strm;
+
+    dbgstream() :
+        strm(errs(), "*** Debug Log Output ***\n",
+             (!EnableDebugBuffering || !DebugFlag) ? 0 : DebugBufferSize) {
+      if (EnableDebugBuffering && DebugFlag && DebugBufferSize != 0)
+        // TODO: Add a handler for SIGUSER1-type signals so the user can
+        // force a debug dump.
+        sys::AddSignalHandler(&debug_user_sig_handler, 0);
+      // Otherwise we've already set the debug stream buffer size to
+      // zero, disabling buffering so it will output directly to errs().
+    }
+  } thestrm;
+
+  return thestrm.strm;
+}
+
 #else
 // Avoid "has no symbols" warning.
 namespace llvm {
-int Debug_dummy = 0;
+  /// dbgs - Return errs(); there is no buffered debug stream in NDEBUG mode.
+  raw_ostream &dbgs() {
+    return errs();
+  }
 }
+
 #endif
+
+/// EnableDebugBuffering - Turn on signal handler installation.
+///
+bool llvm::EnableDebugBuffering = false;
diff --git a/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp b/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
index df1aa6a..9253b01 100644
--- a/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
+++ b/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
@@ -46,7 +46,7 @@ MemoryBuffer::~MemoryBuffer() {
 /// successfully.
 void MemoryBuffer::initCopyOf(const char *BufStart, const char *BufEnd) {
   size_t Size = BufEnd-BufStart;
-  BufferStart = (char *)malloc((Size+1) * sizeof(char));
+  BufferStart = (char *)malloc(Size+1);
   BufferEnd = BufferStart+Size;
   memcpy(const_cast<char*>(BufferStart), BufStart, Size);
   *const_cast<char*>(BufferEnd) = 0;   // Null terminate buffer.
@@ -108,7 +108,7 @@ MemoryBuffer *MemoryBuffer::getMemBufferCopy(const char *StartPtr,
 /// the MemoryBuffer object.
 MemoryBuffer *MemoryBuffer::getNewUninitMemBuffer(size_t Size,
                                                   StringRef BufferName) {
-  char *Buf = (char *)malloc((Size+1) * sizeof(char));
+  char *Buf = (char *)malloc(Size+1);
   if (!Buf) return 0;
   Buf[Size] = 0;
   MemoryBufferMem *SB = new MemoryBufferMem(Buf, Buf+Size, BufferName);
diff --git a/libclamav/c++/llvm/lib/Support/SmallVector.cpp b/libclamav/c++/llvm/lib/Support/SmallVector.cpp
new file mode 100644
index 0000000..6821382
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Support/SmallVector.cpp
@@ -0,0 +1,37 @@
+//===- llvm/ADT/SmallVector.cpp - 'Normally small' vectors ----------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the SmallVector class.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ADT/SmallVector.h"
+using namespace llvm;
+
+/// grow_pod - This is an implementation of the grow() method which only works
+/// on POD-like datatypes and is out of line to reduce code duplication.
+void SmallVectorBase::grow_pod(size_t MinSizeInBytes, size_t TSize) {
+  size_t CurSizeBytes = size_in_bytes();
+  size_t NewCapacityInBytes = 2 * capacity_in_bytes();
+  if (NewCapacityInBytes < MinSizeInBytes)
+    NewCapacityInBytes = MinSizeInBytes;
+  void *NewElts = operator new(NewCapacityInBytes);
+  
+  // Copy the elements over.  No need to run dtors on PODs.
+  memcpy(NewElts, this->BeginX, CurSizeBytes);
+  
+  // If this wasn't grown from the inline copy, deallocate the old space.
+  if (!this->isSmall())
+    operator delete(this->BeginX);
+  
+  this->EndX = (char*)NewElts+CurSizeBytes;
+  this->BeginX = NewElts;
+  this->CapacityX = (char*)this->BeginX + NewCapacityInBytes;
+}
+
diff --git a/libclamav/c++/llvm/lib/Support/circular_raw_ostream.cpp b/libclamav/c++/llvm/lib/Support/circular_raw_ostream.cpp
new file mode 100644
index 0000000..e52996d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Support/circular_raw_ostream.cpp
@@ -0,0 +1,47 @@
+//===- circular_raw_ostream.cpp - Implement the circular_raw_ostream class ===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This implements support for circular buffered streams.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Support/circular_raw_ostream.h"
+
+#include <algorithm>
+
+using namespace llvm;
+
+void circular_raw_ostream::write_impl(const char *Ptr, size_t Size) {
+  if (BufferSize == 0) {
+    TheStream->write(Ptr, Size);
+    return;
+  }
+
+  // Write into the buffer, wrapping if necessary.
+  while (Size != 0) {
+    unsigned Bytes = std::min(Size, BufferSize - (Cur - BufferArray));
+    memcpy(Cur, Ptr, Bytes);
+    Size -= Bytes;
+    Cur += Bytes;
+    if (Cur == BufferArray + BufferSize) {
+      // Reset the output pointer to the start of the buffer.
+      Cur = BufferArray;
+      Filled = true;
+    }
+  }    
+}
+
+void circular_raw_ostream::flushBufferWithBanner(void) {
+  if (BufferSize != 0) {
+    // Write out the buffer
+    int num = std::strlen(Banner); 
+    TheStream->write(Banner, num);
+    flushBuffer();
+  }
+}
diff --git a/libclamav/c++/llvm/lib/Support/raw_os_ostream.cpp b/libclamav/c++/llvm/lib/Support/raw_os_ostream.cpp
index 3374dd7..44f2325 100644
--- a/libclamav/c++/llvm/lib/Support/raw_os_ostream.cpp
+++ b/libclamav/c++/llvm/lib/Support/raw_os_ostream.cpp
@@ -27,4 +27,4 @@ void raw_os_ostream::write_impl(const char *Ptr, size_t Size) {
   OS.write(Ptr, Size);
 }
 
-uint64_t raw_os_ostream::current_pos() { return OS.tellp(); }
+uint64_t raw_os_ostream::current_pos() const { return OS.tellp(); }
diff --git a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
index 0c90e77..a820210 100644
--- a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
+++ b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
@@ -67,7 +67,7 @@ raw_ostream::~raw_ostream() {
 // An out of line virtual method to provide a home for the class vtable.
 void raw_ostream::handle() {}
 
-size_t raw_ostream::preferred_buffer_size() {
+size_t raw_ostream::preferred_buffer_size() const {
   // BUFSIZ is intended to be a reasonable default.
   return BUFSIZ;
 }
@@ -440,20 +440,20 @@ uint64_t raw_fd_ostream::seek(uint64_t off) {
   return pos;  
 }
 
-size_t raw_fd_ostream::preferred_buffer_size() {
+size_t raw_fd_ostream::preferred_buffer_size() const {
 #if !defined(_MSC_VER) && !defined(__MINGW32__) // Windows has no st_blksize.
   assert(FD >= 0 && "File not yet open!");
   struct stat statbuf;
-  if (fstat(FD, &statbuf) == 0) {
-    // If this is a terminal, don't use buffering. Line buffering
-    // would be a more traditional thing to do, but it's not worth
-    // the complexity.
-    if (S_ISCHR(statbuf.st_mode) && isatty(FD))
-      return 0;
-    // Return the preferred block size.
-    return statbuf.st_blksize;
-  }
-  error_detected();
+  if (fstat(FD, &statbuf) != 0)
+    return 0;
+  
+  // If this is a terminal, don't use buffering. Line buffering
+  // would be a more traditional thing to do, but it's not worth
+  // the complexity.
+  if (S_ISCHR(statbuf.st_mode) && isatty(FD))
+    return 0;
+  // Return the preferred block size.
+  return statbuf.st_blksize;
 #endif
   return raw_ostream::preferred_buffer_size();
 }
@@ -578,7 +578,9 @@ void raw_svector_ostream::write_impl(const char *Ptr, size_t Size) {
   SetBuffer(OS.end(), OS.capacity() - OS.size());
 }
 
-uint64_t raw_svector_ostream::current_pos() { return OS.size(); }
+uint64_t raw_svector_ostream::current_pos() const {
+   return OS.size();
+}
 
 StringRef raw_svector_ostream::str() {
   flush();
@@ -601,6 +603,6 @@ raw_null_ostream::~raw_null_ostream() {
 void raw_null_ostream::write_impl(const char *Ptr, size_t Size) {
 }
 
-uint64_t raw_null_ostream::current_pos() {
+uint64_t raw_null_ostream::current_pos() const {
   return 0;
 }
diff --git a/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp b/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp
index f658aea..ac4daae 100644
--- a/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp
+++ b/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp
@@ -79,29 +79,7 @@ bool DynamicLibrary::LoadLibraryPermanently(const char *Filename,
   return false;
 }
 
-void* DynamicLibrary::SearchForAddressOfSymbol(const char* symbolName) {
-  // First check symbols added via AddSymbol().
-  if (ExplicitSymbols) {
-    std::map<std::string, void *>::iterator I =
-      ExplicitSymbols->find(symbolName);
-    std::map<std::string, void *>::iterator E = ExplicitSymbols->end();
-  
-    if (I != E)
-      return I->second;
-  }
-
-  // Now search the libraries.
-  if (OpenedHandles) {
-    for (std::vector<void *>::iterator I = OpenedHandles->begin(),
-         E = OpenedHandles->end(); I != E; ++I) {
-      //lt_ptr ptr = lt_dlsym(*I, symbolName);
-      void *ptr = dlsym(*I, symbolName);
-      if (ptr) {
-        return ptr;
-      }
-    }
-  }
-
+static void *SearchForAddressOfSpecialSymbol(const char* symbolName) {
 #define EXPLICIT_SYMBOL(SYM) \
    extern void *SYM; if (!strcmp(symbolName, #SYM)) return &SYM
 
@@ -138,6 +116,34 @@ void* DynamicLibrary::SearchForAddressOfSymbol(const char* symbolName) {
 #endif
 
 #undef EXPLICIT_SYMBOL
+  return 0;
+}
+
+void* DynamicLibrary::SearchForAddressOfSymbol(const char* symbolName) {
+  // First check symbols added via AddSymbol().
+  if (ExplicitSymbols) {
+    std::map<std::string, void *>::iterator I =
+      ExplicitSymbols->find(symbolName);
+    std::map<std::string, void *>::iterator E = ExplicitSymbols->end();
+  
+    if (I != E)
+      return I->second;
+  }
+
+  // Now search the libraries.
+  if (OpenedHandles) {
+    for (std::vector<void *>::iterator I = OpenedHandles->begin(),
+         E = OpenedHandles->end(); I != E; ++I) {
+      //lt_ptr ptr = lt_dlsym(*I, symbolName);
+      void *ptr = dlsym(*I, symbolName);
+      if (ptr) {
+        return ptr;
+      }
+    }
+  }
+
+  if (void *Result = SearchForAddressOfSpecialSymbol(symbolName))
+    return Result;
 
 // This macro returns the address of a well-known, explicit symbol
 #define EXPLICIT_SYMBOL(SYM) \
diff --git a/libclamav/c++/llvm/lib/System/Path.cpp b/libclamav/c++/llvm/lib/System/Path.cpp
index 8e1fa53..6844530 100644
--- a/libclamav/c++/llvm/lib/System/Path.cpp
+++ b/libclamav/c++/llvm/lib/System/Path.cpp
@@ -176,7 +176,7 @@ Path::FindLibrary(std::string& name) {
   return sys::Path();
 }
 
-std::string Path::GetDLLSuffix() {
+StringRef Path::GetDLLSuffix() {
   return LTDL_SHLIB_EXT;
 }
 
@@ -191,7 +191,7 @@ Path::isBitcodeFile() const {
   return FT == Bitcode_FileType;
 }
 
-bool Path::hasMagicNumber(const std::string &Magic) const {
+bool Path::hasMagicNumber(StringRef Magic) const {
   std::string actualMagic;
   if (getMagicNumber(actualMagic, static_cast<unsigned>(Magic.size())))
     return Magic == actualMagic;
@@ -217,8 +217,9 @@ static void getPathList(const char*path, std::vector<Path>& Paths) {
         Paths.push_back(tmpPath);
 }
 
-static std::string getDirnameCharSep(const std::string& path, char Sep) {
-  
+static StringRef getDirnameCharSep(StringRef path, const char *Sep) {
+  assert(Sep[0] != '\0' && Sep[1] == '\0' &&
+         "Sep must be a 1-character string literal.");
   if (path.empty())
     return ".";
   
@@ -227,31 +228,31 @@ static std::string getDirnameCharSep(const std::string& path, char Sep) {
   
   signed pos = static_cast<signed>(path.size()) - 1;
   
-  while (pos >= 0 && path[pos] == Sep)
+  while (pos >= 0 && path[pos] == Sep[0])
     --pos;
   
   if (pos < 0)
-    return path[0] == Sep ? std::string(1, Sep) : std::string(".");
+    return path[0] == Sep[0] ? Sep : ".";
   
   // Any slashes left?
   signed i = 0;
   
-  while (i < pos && path[i] != Sep)
+  while (i < pos && path[i] != Sep[0])
     ++i;
   
   if (i == pos) // No slashes?  Return "."
     return ".";
   
   // There is at least one slash left.  Remove all trailing non-slashes.  
-  while (pos >= 0 && path[pos] != Sep)
+  while (pos >= 0 && path[pos] != Sep[0])
     --pos;
   
   // Remove any trailing slashes.
-  while (pos >= 0 && path[pos] == Sep)
+  while (pos >= 0 && path[pos] == Sep[0])
     --pos;
   
   if (pos < 0)
-    return path[0] == Sep ? std::string(1, Sep) : std::string(".");
+    return path[0] == Sep[0] ? Sep : ".";
   
   return path.substr(0, pos+1);
 }
diff --git a/libclamav/c++/llvm/lib/System/Unix/Path.inc b/libclamav/c++/llvm/lib/System/Unix/Path.inc
index 33b26f7..a99720c 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Path.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Path.inc
@@ -16,7 +16,6 @@
 //===          is guaranteed to work on *all* UNIX variants.
 //===----------------------------------------------------------------------===//
 
-#include "llvm/ADT/SmallVector.h"
 #include "Unix.h"
 #if HAVE_SYS_STAT_H
 #include <sys/stat.h>
@@ -79,15 +78,15 @@ using namespace sys;
 
 const char sys::PathSeparator = ':';
 
-Path::Path(const std::string& p)
+Path::Path(StringRef p)
   : path(p) {}
 
 Path::Path(const char *StrStart, unsigned StrLen)
   : path(StrStart, StrLen) {}
 
 Path&
-Path::operator=(const std::string &that) {
-  path = that;
+Path::operator=(StringRef that) {
+  path.assign(that.data(), that.size());
   return *this;
 }
 
@@ -378,11 +377,11 @@ Path Path::GetMainExecutable(const char *argv0, void *MainAddr) {
 }
 
 
-std::string Path::getDirname() const {
-  return getDirnameCharSep(path, '/');
+StringRef Path::getDirname() const {
+  return getDirnameCharSep(path, "/");
 }
 
-std::string
+StringRef
 Path::getBasename() const {
   // Find the last slash
   std::string::size_type slash = path.rfind('/');
@@ -393,12 +392,12 @@ Path::getBasename() const {
 
   std::string::size_type dot = path.rfind('.');
   if (dot == std::string::npos || dot < slash)
-    return path.substr(slash);
+    return StringRef(path).substr(slash);
   else
-    return path.substr(slash, dot - slash);
+    return StringRef(path).substr(slash, dot - slash);
 }
 
-std::string
+StringRef
 Path::getSuffix() const {
   // Find the last slash
   std::string::size_type slash = path.rfind('/');
@@ -409,26 +408,24 @@ Path::getSuffix() const {
 
   std::string::size_type dot = path.rfind('.');
   if (dot == std::string::npos || dot < slash)
-    return std::string();
+    return StringRef("");
   else
-    return path.substr(dot + 1);
+    return StringRef(path).substr(dot + 1);
 }
 
-bool Path::getMagicNumber(std::string& Magic, unsigned len) const {
+bool Path::getMagicNumber(std::string &Magic, unsigned len) const {
   assert(len < 1024 && "Request for magic string too long");
-  SmallVector<char, 128> Buf;
-  Buf.resize(1 + len);
-  char* buf = Buf.data();
+  char Buf[1025];
   int fd = ::open(path.c_str(), O_RDONLY);
   if (fd < 0)
     return false;
-  ssize_t bytes_read = ::read(fd, buf, len);
+  ssize_t bytes_read = ::read(fd, Buf, len);
   ::close(fd);
   if (ssize_t(len) != bytes_read) {
     Magic.clear();
     return false;
   }
-  Magic.assign(buf,len);
+  Magic.assign(Buf, len);
   return true;
 }
 
@@ -481,7 +478,7 @@ Path::canExecute() const {
   return true;
 }
 
-std::string
+StringRef
 Path::getLast() const {
   // Find the last slash
   size_t pos = path.rfind('/');
@@ -495,12 +492,12 @@ Path::getLast() const {
     // Find the second to last slash
     size_t pos2 = path.rfind('/', pos-1);
     if (pos2 == std::string::npos)
-      return path.substr(0,pos);
+      return StringRef(path).substr(0,pos);
     else
-      return path.substr(pos2+1,pos-pos2-1);
+      return StringRef(path).substr(pos2+1,pos-pos2-1);
   }
   // Return everything after the last slash
-  return path.substr(pos+1);
+  return StringRef(path).substr(pos+1);
 }
 
 const FileStatus *
@@ -592,7 +589,7 @@ Path::getDirectoryContents(std::set<Path>& result, std::string* ErrMsg) const {
 }
 
 bool
-Path::set(const std::string& a_path) {
+Path::set(StringRef a_path) {
   if (a_path.empty())
     return false;
   std::string save(path);
@@ -605,7 +602,7 @@ Path::set(const std::string& a_path) {
 }
 
 bool
-Path::appendComponent(const std::string& name) {
+Path::appendComponent(StringRef name) {
   if (name.empty())
     return false;
   std::string save(path);
@@ -637,7 +634,7 @@ Path::eraseComponent() {
 }
 
 bool
-Path::appendSuffix(const std::string& suffix) {
+Path::appendSuffix(StringRef suffix) {
   std::string save(path);
   path.append(".");
   path.append(suffix);
@@ -861,18 +858,15 @@ Path::makeUnique(bool reuse_current, std::string* ErrMsg) {
 
   // Append an XXXXXX pattern to the end of the file for use with mkstemp,
   // mktemp or our own implementation.
-  SmallVector<char, 128> Buf;
-  Buf.resize(path.size()+8);
-  char *FNBuffer = Buf.data();
-    path.copy(FNBuffer,path.size());
+  std::string Buf(path);
   if (isDirectory())
-    strcpy(FNBuffer+path.size(), "/XXXXXX");
+    Buf += "/XXXXXX";
   else
-    strcpy(FNBuffer+path.size(), "-XXXXXX");
+    Buf += "-XXXXXX";
 
 #if defined(HAVE_MKSTEMP)
   int TempFD;
-  if ((TempFD = mkstemp(FNBuffer)) == -1)
+  if ((TempFD = mkstemp((char*)Buf.c_str())) == -1)
     return MakeErrMsg(ErrMsg, path + ": can't make unique filename");
 
   // We don't need to hold the temp file descriptor... we will trust that no one
@@ -880,21 +874,21 @@ Path::makeUnique(bool reuse_current, std::string* ErrMsg) {
   close(TempFD);
 
   // Save the name
-  path = FNBuffer;
+  path = Buf;
 #elif defined(HAVE_MKTEMP)
   // If we don't have mkstemp, use the old and obsolete mktemp function.
-  if (mktemp(FNBuffer) == 0)
+  if (mktemp(Buf.c_str()) == 0)
     return MakeErrMsg(ErrMsg, path + ": can't make unique filename");
 
   // Save the name
-  path = FNBuffer;
+  path = Buf;
 #else
   // Okay, looks like we have to do it all by our lonesome.
   static unsigned FCounter = 0;
   unsigned offset = path.size() + 1;
-  while ( FCounter < 999999 && exists()) {
-    sprintf(FNBuffer+offset,"%06u",++FCounter);
-    path = FNBuffer;
+  while (FCounter < 999999 && exists()) {
+    sprintf(Buf.data()+offset, "%06u", ++FCounter);
+    path = Buf;
   }
   if (FCounter > 999999)
     return MakeErrMsg(ErrMsg,
diff --git a/libclamav/c++/llvm/lib/System/Unix/Process.inc b/libclamav/c++/llvm/lib/System/Unix/Process.inc
index 911b8c3..cf6a47a 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Process.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Process.inc
@@ -277,7 +277,7 @@ bool Process::ColorNeedsFlush() {
     COLOR(FGBG, "7", BOLD)\
   }
 
-static const char* colorcodes[2][2][8] = {
+static const char colorcodes[2][2][8][10] = {
  { ALLCOLORS("3",""), ALLCOLORS("3","1;") },
  { ALLCOLORS("4",""), ALLCOLORS("4","1;") }
 };
diff --git a/libclamav/c++/llvm/lib/System/Win32/Path.inc b/libclamav/c++/llvm/lib/System/Win32/Path.inc
index 634fbc7..b5f6374 100644
--- a/libclamav/c++/llvm/lib/System/Win32/Path.inc
+++ b/libclamav/c++/llvm/lib/System/Win32/Path.inc
@@ -47,7 +47,7 @@ namespace llvm {
 namespace sys {
 const char PathSeparator = ';';
 
-Path::Path(const std::string& p)
+Path::Path(llvm::StringRef p)
   : path(p) {
   FlipBackSlashes(path);
 }
@@ -58,8 +58,8 @@ Path::Path(const char *StrStart, unsigned StrLen)
 }
 
 Path&
-Path::operator=(const std::string &that) {
-  path = that;
+Path::operator=(StringRef that) {
+  path.assign(that.data(), that.size());
   FlipBackSlashes(path);
   return *this;
 }
@@ -287,11 +287,11 @@ Path::isRootDirectory() const {
   return len > 0 && path[len-1] == '/';
 }
 
-std::string Path::getDirname() const {
-  return getDirnameCharSep(path, '/');
+StringRef Path::getDirname() const {
+  return getDirnameCharSep(path, "/");
 }
 
-std::string
+StringRef
 Path::getBasename() const {
   // Find the last slash
   size_t slash = path.rfind('/');
@@ -302,12 +302,12 @@ Path::getBasename() const {
 
   size_t dot = path.rfind('.');
   if (dot == std::string::npos || dot < slash)
-    return path.substr(slash);
+    return StringRef(path).substr(slash);
   else
-    return path.substr(slash, dot - slash);
+    return StringRef(path).substr(slash, dot - slash);
 }
 
-std::string
+StringRef
 Path::getSuffix() const {
   // Find the last slash
   size_t slash = path.rfind('/');
@@ -318,9 +318,9 @@ Path::getSuffix() const {
 
   size_t dot = path.rfind('.');
   if (dot == std::string::npos || dot < slash)
-    return std::string();
+    return StringRef("");
   else
-    return path.substr(dot + 1);
+    return StringRef(path).substr(dot + 1);
 }
 
 bool
@@ -364,7 +364,7 @@ Path::isRegularFile() const {
   return true;
 }
 
-std::string
+StringRef
 Path::getLast() const {
   // Find the last slash
   size_t pos = path.rfind('/');
@@ -378,7 +378,7 @@ Path::getLast() const {
     return path;
 
   // Return everything after the last slash
-  return path.substr(pos+1);
+  return StringRef(path).substr(pos+1);
 }
 
 const FileStatus *
@@ -490,7 +490,7 @@ Path::getDirectoryContents(std::set<Path>& result, std::string* ErrMsg) const {
 }
 
 bool
-Path::set(const std::string& a_path) {
+Path::set(StringRef a_path) {
   if (a_path.empty())
     return false;
   std::string save(path);
@@ -504,7 +504,7 @@ Path::set(const std::string& a_path) {
 }
 
 bool
-Path::appendComponent(const std::string& name) {
+Path::appendComponent(StringRef name) {
   if (name.empty())
     return false;
   std::string save(path);
@@ -536,7 +536,7 @@ Path::eraseComponent() {
 }
 
 bool
-Path::appendSuffix(const std::string& suffix) {
+Path::appendSuffix(StringRef suffix) {
   std::string save(path);
   path.append(".");
   path.append(suffix);
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
index 1aae369..7cfa097 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
@@ -944,8 +944,6 @@ reMaterialize(MachineBasicBlock &MBB,
               unsigned DestReg, unsigned SubIdx,
               const MachineInstr *Orig,
               const TargetRegisterInfo *TRI) const {
-  DebugLoc dl = Orig->getDebugLoc();
-
   if (SubIdx && TargetRegisterInfo::isPhysicalRegister(DestReg)) {
     DestReg = TRI->getSubReg(DestReg, SubIdx);
     SubIdx = 0;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
index 9b5f79f..7aebdf4 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
@@ -1373,7 +1373,7 @@ emitPrologue(MachineFunction &MF) const {
       // bic r4, r4, MaxAlign
       // mov sp, r4
       // FIXME: It will be better just to find spare register here.
-      BuildMI(MBB, MBBI, dl, TII.get(ARM::tMOVtgpr2gpr), ARM::R4)
+      BuildMI(MBB, MBBI, dl, TII.get(ARM::tMOVgpr2tgpr), ARM::R4)
         .addReg(ARM::SP, RegState::Kill);
       AddDefaultCC(AddDefaultPred(BuildMI(MBB, MBBI, dl,
                                           TII.get(ARM::t2BICri), ARM::R4)
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 655c762..334baae 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -1273,7 +1273,8 @@ ARMTargetLowering::LowerToTLSGeneralDynamicModel(GlobalAddressSDNode *GA,
     LowerCallTo(Chain, (const Type *) Type::getInt32Ty(*DAG.getContext()),
                 false, false, false, false,
                 0, CallingConv::C, false, /*isReturnValueUsed=*/true,
-                DAG.getExternalSymbol("__tls_get_addr", PtrVT), Args, DAG, dl);
+                DAG.getExternalSymbol("__tls_get_addr", PtrVT), Args, DAG, dl,
+                DAG.GetOrdering(Chain.getNode()));
   return CallResult.first;
 }
 
@@ -3147,6 +3148,7 @@ ARMTargetLowering::EmitAtomicBinary(MachineInstr *MI, MachineBasicBlock *BB,
   unsigned ptr = MI->getOperand(1).getReg();
   unsigned incr = MI->getOperand(2).getReg();
   DebugLoc dl = MI->getDebugLoc();
+
   bool isThumb2 = Subtarget->isThumb2();
   unsigned ldrOpc, strOpc;
   switch (Size) {
@@ -3213,6 +3215,9 @@ ARMTargetLowering::EmitAtomicBinary(MachineInstr *MI, MachineBasicBlock *BB,
   //  exitMBB:
   //   ...
   BB = exitMBB;
+
+  F->DeleteMachineInstr(MI);   // The instruction is gone now.
+
   return BB;
 }
 
@@ -4265,7 +4270,7 @@ ARMTargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,
     case 'w':
       if (VT == MVT::f32)
         return std::make_pair(0U, ARM::SPRRegisterClass);
-      if (VT == MVT::f64)
+      if (VT.getSizeInBits() == 64)
         return std::make_pair(0U, ARM::DPRRegisterClass);
       if (VT.getSizeInBits() == 128)
         return std::make_pair(0U, ARM::QPRRegisterClass);
@@ -4302,7 +4307,7 @@ getRegClassForInlineAsmConstraint(const std::string &Constraint,
                                    ARM::S20,ARM::S21,ARM::S22,ARM::S23,
                                    ARM::S24,ARM::S25,ARM::S26,ARM::S27,
                                    ARM::S28,ARM::S29,ARM::S30,ARM::S31, 0);
-    if (VT == MVT::f64)
+    if (VT.getSizeInBits() == 64)
       return make_vector<unsigned>(ARM::D0, ARM::D1, ARM::D2, ARM::D3,
                                    ARM::D4, ARM::D5, ARM::D6, ARM::D7,
                                    ARM::D8, ARM::D9, ARM::D10,ARM::D11,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
index cf0edff..28b2821 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
@@ -146,11 +146,9 @@ def s_cc_out : OptionalDefOperand<OtherVT, (ops CCR), (ops (i32 CPSR))> {
 // ARM Instruction templates.
 //
 
-class InstARM<AddrMode am, SizeFlagVal sz, IndexMode im,
-              Format f, Domain d, string cstr, InstrItinClass itin>
+class InstTemplate<AddrMode am, SizeFlagVal sz, IndexMode im,
+                   Format f, Domain d, string cstr, InstrItinClass itin>
   : Instruction {
-  field bits<32> Inst;
-
   let Namespace = "ARM";
 
   // TSFlagsFields
@@ -179,6 +177,20 @@ class InstARM<AddrMode am, SizeFlagVal sz, IndexMode im,
   let Itinerary = itin;
 }
 
+class Encoding {
+  field bits<32> Inst;
+}
+
+class InstARM<AddrMode am, SizeFlagVal sz, IndexMode im,
+              Format f, Domain d, string cstr, InstrItinClass itin>
+  : InstTemplate<am, sz, im, f, d, cstr, itin>, Encoding;
+
+// This Encoding-less class is used by Thumb1 to specify the encoding bits later
+// on by adding flavors to specific instructions.
+class InstThumb<AddrMode am, SizeFlagVal sz, IndexMode im,
+                Format f, Domain d, string cstr, InstrItinClass itin>
+  : InstTemplate<am, sz, im, f, d, cstr, itin>;
+
 class PseudoInst<dag oops, dag iops, InstrItinClass itin, 
                  string asm, list<dag> pattern>
   : InstARM<AddrModeNone, SizeSpecial, IndexModeNone, Pseudo, GenericDomain, 
@@ -861,7 +873,7 @@ class ARMV6Pat<dag pattern, dag result> : Pat<pattern, result> {
 
 class ThumbI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
              InstrItinClass itin, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
+  : InstThumb<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -876,9 +888,14 @@ class TI<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
 class TIt<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
   : ThumbI<oops, iops, AddrModeNone, Size2Bytes, itin, asm, "$lhs = $dst", pattern>;
 
-// tBL, tBX instructions
-class TIx2<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
-  : ThumbI<oops, iops, AddrModeNone, Size4Bytes, itin, asm, "", pattern>;
+// tBL, tBX 32-bit instructions
+class TIx2<bits<5> opcod1, bits<2> opcod2, bit opcod3,
+    dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
+    : ThumbI<oops, iops, AddrModeNone, Size4Bytes, itin, asm, "", pattern>, Encoding {
+  let Inst{31-27} = opcod1;
+  let Inst{15-14} = opcod2;
+  let Inst{12} = opcod3;
+}
 
 // BR_JT instructions
 class TJTI<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
@@ -887,7 +904,7 @@ class TJTI<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> patter
 // Thumb1 only
 class Thumb1I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
               InstrItinClass itin, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
+  : InstThumb<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -915,7 +932,7 @@ class T1It<dag oops, dag iops, InstrItinClass itin,
 class Thumb1sI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
                InstrItinClass itin,
                string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
+  : InstThumb<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = !con(oops, (ops s_cc_out:$s));
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString = !strconcat(opc, !strconcat("${s}${p}", asm));
@@ -937,7 +954,7 @@ class T1sIt<dag oops, dag iops, InstrItinClass itin,
 class Thumb1pI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
                InstrItinClass itin,
                string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
+  : InstThumb<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString = !strconcat(opc, !strconcat("${p}", asm));
@@ -968,6 +985,50 @@ class T1pIs<dag oops, dag iops,
             InstrItinClass itin, string opc, string asm, list<dag> pattern>
   : Thumb1pI<oops, iops, AddrModeT1_s, Size2Bytes, itin, opc, asm, "", pattern>;
 
+class Encoding16 : Encoding {
+  let Inst{31-16} = 0x0000;
+}
+
+// A6.2 16-bit Thumb instruction encoding
+class T1Encoding<bits<6> opcode> : Encoding16 {
+  let Inst{15-10} = opcode;
+}
+
+// A6.2.1 Shift (immediate), add, subtract, move, and compare encoding.
+class T1General<bits<5> opcode> : Encoding16 {
+  let Inst{15-14} = 0b00;
+  let Inst{13-9} = opcode;
+}
+
+// A6.2.2 Data-processing encoding.
+class T1DataProcessing<bits<4> opcode> : Encoding16 {
+  let Inst{15-10} = 0b010000;
+  let Inst{9-6} = opcode;
+}
+
+// A6.2.3 Special data instructions and branch and exchange encoding.
+class T1Special<bits<4> opcode> : Encoding16 {
+  let Inst{15-10} = 0b010001;
+  let Inst{9-6} = opcode;
+}
+
+// A6.2.4 Load/store single data item encoding.
+class T1LoadStore<bits<4> opA, bits<3> opB> : Encoding16 {
+  let Inst{15-12} = opA;
+  let Inst{11-9} = opB;
+}
+class T1LdSt<bits<3> opB> : T1LoadStore<0b0101, opB>;
+class T1LdSt4Imm<bits<3> opB> : T1LoadStore<0b0110, opB>; // Immediate, 4 bytes
+class T1LdSt1Imm<bits<3> opB> : T1LoadStore<0b0111, opB>; // Immediate, 1 byte
+class T1LdSt2Imm<bits<3> opB> : T1LoadStore<0b1000, opB>; // Immediate, 2 bytes
+class T1LdStSP<bits<3> opB> : T1LoadStore<0b1001, opB>;   // SP relative
+
+// A6.2.5 Miscellaneous 16-bit instructions encoding.
+class T1Misc<bits<7> opcode> : Encoding16 {
+  let Inst{15-12} = 0b1011;
+  let Inst{11-5} = opcode;
+}
+
 // Thumb2I - Thumb2 instruction. Almost all Thumb2 instructions are predicable.
 class Thumb2I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
               InstrItinClass itin,
@@ -1034,9 +1095,18 @@ class T2Iso<dag oops, dag iops, InstrItinClass itin,
 class T2Ipc<dag oops, dag iops, InstrItinClass itin,
             string opc, string asm, list<dag> pattern>
   : Thumb2I<oops, iops, AddrModeT2_pc, Size4Bytes, itin, opc, asm, "", pattern>;
-class T2Ii8s4<dag oops, dag iops, InstrItinClass itin,
+class T2Ii8s4<bit P, bit W, bit load, dag oops, dag iops, InstrItinClass itin,
               string opc, string asm, list<dag> pattern>
-  : Thumb2I<oops, iops, AddrModeT2_i8s4, Size4Bytes, itin, opc, asm, "", pattern>;
+  : Thumb2I<oops, iops, AddrModeT2_i8s4, Size4Bytes, itin, opc, asm, "",
+            pattern> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b00;
+  let Inst{24} = P;
+  let Inst{23} = ?; // The U bit.
+  let Inst{22} = 1;
+  let Inst{21} = W;
+  let Inst{20} = load;
+}
 
 class T2sI<dag oops, dag iops, InstrItinClass itin,
            string opc, string asm, list<dag> pattern>
@@ -1055,8 +1125,9 @@ class T2Ix2<dag oops, dag iops, InstrItinClass itin,
 
 
 // T2Iidxldst - Thumb2 indexed load / store instructions.
-class T2Iidxldst<dag oops, dag iops, AddrMode am, IndexMode im,
-                 InstrItinClass itin,
+class T2Iidxldst<bit signed, bits<2> opcod, bit load, bit pre,
+                 dag oops, dag iops,
+                 AddrMode am, IndexMode im, InstrItinClass itin,
                  string opc, string asm, string cstr, list<dag> pattern>
   : InstARM<am, Size4Bytes, im, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
@@ -1064,6 +1135,16 @@ class T2Iidxldst<dag oops, dag iops, AddrMode am, IndexMode im,
   let AsmString = !strconcat(opc, !strconcat("${p}", asm));
   let Pattern = pattern;
   list<Predicate> Predicates = [IsThumb2];
+  let Inst{31-27} = 0b11111;
+  let Inst{26-25} = 0b00;
+  let Inst{24} = signed;
+  let Inst{23} = 0;
+  let Inst{22-21} = opcod;
+  let Inst{20} = load;
+  let Inst{11} = 1;
+  // (P, W) = (1, 1) Pre-indexed or (0, 1) Post-indexed
+  let Inst{10} = pre; // The P bit.
+  let Inst{8} = 1; // The W bit.
 }
 
 // Tv5Pat - Same as Pat<>, but requires V5T Thumb mode.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
index e14696a..da8b373 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
@@ -1740,7 +1740,7 @@ def LDREXD : AIldrex<0b01, (outs GPR:$dest, GPR:$dest2), (ins GPR:$ptr),
                     []>;
 }
 
-let mayStore = 1 in {
+let mayStore = 1, Constraints = "@earlyclobber $success" in {
 def STREXB : AIstrex<0b10, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
                     NoItinerary,
                     "strexb", "\t$success, $src, [$ptr]",
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
index 9306bdb..34d7d8f 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
@@ -113,7 +113,7 @@ def t_addrmode_s1 : Operand<i32>,
 def t_addrmode_sp : Operand<i32>,
                     ComplexPattern<i32, 2, "SelectThumbAddrModeSP", []> {
   let PrintMethod = "printThumbAddrModeSPOperand";
-  let MIOperandInfo = (ops tGPR:$base, i32imm:$offsimm);
+  let MIOperandInfo = (ops JustSP:$base, i32imm:$offsimm);
 }
 
 //===----------------------------------------------------------------------===//
@@ -136,31 +136,46 @@ PseudoInst<(outs), (ins i32imm:$amt), NoItinerary,
 let isNotDuplicable = 1 in
 def tPICADD : TIt<(outs GPR:$dst), (ins GPR:$lhs, pclabel:$cp), IIC_iALUr,
                  "\n$cp:\n\tadd\t$dst, pc",
-                 [(set GPR:$dst, (ARMpic_add GPR:$lhs, imm:$cp))]>;
+                 [(set GPR:$dst, (ARMpic_add GPR:$lhs, imm:$cp))]>,
+              T1Special<{0,0,?,?}> {
+  let Inst{6-3} = 0b1111; // A8.6.6 Rm = pc
+}
 
 // PC relative add.
 def tADDrPCi : T1I<(outs tGPR:$dst), (ins t_imm_s4:$rhs), IIC_iALUi,
-                  "add\t$dst, pc, $rhs", []>;
+                  "add\t$dst, pc, $rhs", []>,
+               T1Encoding<{1,0,1,0,0,?}>; // A6.2 & A8.6.10
 
 // ADD rd, sp, #imm8
 def tADDrSPi : T1I<(outs tGPR:$dst), (ins GPR:$sp, t_imm_s4:$rhs), IIC_iALUi,
-                  "add\t$dst, $sp, $rhs", []>;
+                  "add\t$dst, $sp, $rhs", []>,
+               T1Encoding<{1,0,1,0,1,?}>; // A6.2 & A8.6.8
 
 // ADD sp, sp, #imm7
 def tADDspi : TIt<(outs GPR:$dst), (ins GPR:$lhs, t_imm_s4:$rhs), IIC_iALUi,
-                  "add\t$dst, $rhs", []>;
+                  "add\t$dst, $rhs", []>,
+              T1Misc<{0,0,0,0,0,?,?}>; // A6.2.5 & A8.6.8
 
 // SUB sp, sp, #imm7
 def tSUBspi : TIt<(outs GPR:$dst), (ins GPR:$lhs, t_imm_s4:$rhs), IIC_iALUi,
-                  "sub\t$dst, $rhs", []>;
+                  "sub\t$dst, $rhs", []>,
+              T1Misc<{0,0,0,0,1,?,?}>; // A6.2.5 & A8.6.215
 
 // ADD rm, sp
 def tADDrSP : TIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                  "add\t$dst, $rhs", []>;
+                  "add\t$dst, $rhs", []>,
+              T1Special<{0,0,?,?}> {
+  let Inst{6-3} = 0b1101; // A8.6.9 Encoding T1
+}
 
 // ADD sp, rm
 def tADDspr : TIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                  "add\t$dst, $rhs", []>;
+                  "add\t$dst, $rhs", []>,
+              T1Special<{0,0,?,?}> {
+  // A8.6.9 Encoding T2
+  let Inst{7} = 1;
+  let Inst{2-0} = 0b101;
+}
 
 // Pseudo instruction that will expand into a tSUBspi + a copy.
 let usesCustomInserter = 1 in { // Expanded after instruction selection.
@@ -180,22 +195,32 @@ def tANDsp : PseudoInst<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs),
 //
 
 let isReturn = 1, isTerminator = 1, isBarrier = 1 in {
-  def tBX_RET : TI<(outs), (ins), IIC_Br, "bx\tlr", [(ARMretflag)]>;
+  def tBX_RET : TI<(outs), (ins), IIC_Br, "bx\tlr", [(ARMretflag)]>,
+                T1Special<{1,1,0,?}> { // A6.2.3 & A8.6.25
+    let Inst{6-3} = 0b1110; // Rm = lr
+  }
   // Alternative return instruction used by vararg functions.
-  def tBX_RET_vararg : TI<(outs), (ins tGPR:$target), IIC_Br, "bx\t$target", []>;
+  def tBX_RET_vararg : TI<(outs), (ins tGPR:$target), IIC_Br, "bx\t$target", []>,
+                       T1Special<{1,1,0,?}>; // A6.2.3 & A8.6.25
 }
 
 // Indirect branches
 let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
   def tBRIND : TI<(outs), (ins GPR:$dst), IIC_Br, "mov\tpc, $dst",
-                  [(brind GPR:$dst)]>;
+                  [(brind GPR:$dst)]>,
+               T1Special<{1,0,?,?}> {
+    // <Rd> = pc
+    let Inst{7} = 1;
+    let Inst{2-0} = 0b111;
+  }
 }
 
 // FIXME: remove when we have a way to marking a MI with these properties.
 let isReturn = 1, isTerminator = 1, isBarrier = 1, mayLoad = 1,
     hasExtraDefRegAllocReq = 1 in
 def tPOP_RET : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
-                   "pop${p}\t$wb", []>;
+                   "pop${p}\t$wb", []>,
+               T1Misc<{1,1,0,?,?,?,?}>;
 
 let isCall = 1,
   Defs = [R0,  R1,  R2,  R3,  R12, LR,
@@ -203,25 +228,29 @@ let isCall = 1,
           D16, D17, D18, D19, D20, D21, D22, D23,
           D24, D25, D26, D27, D28, D29, D30, D31, CPSR, FPSCR] in {
   // Also used for Thumb2
-  def tBL  : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
-                   "bl\t${func:call}",
-                   [(ARMtcall tglobaladdr:$func)]>,
+  def tBL  : TIx2<0b11110, 0b11, 1,
+                  (outs), (ins i32imm:$func, variable_ops), IIC_Br, 
+                  "bl\t${func:call}",
+                  [(ARMtcall tglobaladdr:$func)]>,
              Requires<[IsThumb, IsNotDarwin]>;
 
   // ARMv5T and above, also used for Thumb2
-  def tBLXi : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
-                    "blx\t${func:call}",
-                    [(ARMcall tglobaladdr:$func)]>,
+  def tBLXi : TIx2<0b11110, 0b11, 0,
+                   (outs), (ins i32imm:$func, variable_ops), IIC_Br, 
+                   "blx\t${func:call}",
+                   [(ARMcall tglobaladdr:$func)]>,
               Requires<[IsThumb, HasV5T, IsNotDarwin]>;
 
   // Also used for Thumb2
   def tBLXr : TI<(outs), (ins GPR:$func, variable_ops), IIC_Br, 
                   "blx\t$func",
                   [(ARMtcall GPR:$func)]>,
-              Requires<[IsThumb, HasV5T, IsNotDarwin]>;
+              Requires<[IsThumb, HasV5T, IsNotDarwin]>,
+              T1Special<{1,1,1,?}>; // A6.2.3 & A8.6.24;
 
   // ARMv4T
-  def tBX : TIx2<(outs), (ins tGPR:$func, variable_ops), IIC_Br, 
+  def tBX : TIx2<{?,?,?,?,?}, {?,?}, ?,
+                  (outs), (ins tGPR:$func, variable_ops), IIC_Br, 
                   "mov\tlr, pc\n\tbx\t$func",
                   [(ARMcall_nolink tGPR:$func)]>,
             Requires<[IsThumb1Only, IsNotDarwin]>;
@@ -234,27 +263,31 @@ let isCall = 1,
           D16, D17, D18, D19, D20, D21, D22, D23,
           D24, D25, D26, D27, D28, D29, D30, D31, CPSR, FPSCR] in {
   // Also used for Thumb2
-  def tBLr9 : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
+  def tBLr9 : TIx2<0b11110, 0b11, 1,
+                   (outs), (ins i32imm:$func, variable_ops), IIC_Br, 
                    "bl\t${func:call}",
                    [(ARMtcall tglobaladdr:$func)]>,
               Requires<[IsThumb, IsDarwin]>;
 
   // ARMv5T and above, also used for Thumb2
-  def tBLXi_r9 : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
+  def tBLXi_r9 : TIx2<0b11110, 0b11, 0,
+                      (outs), (ins i32imm:$func, variable_ops), IIC_Br, 
                       "blx\t${func:call}",
                       [(ARMcall tglobaladdr:$func)]>,
                  Requires<[IsThumb, HasV5T, IsDarwin]>;
 
   // Also used for Thumb2
   def tBLXr_r9 : TI<(outs), (ins GPR:$func, variable_ops), IIC_Br, 
-                  "blx\t$func",
-                  [(ARMtcall GPR:$func)]>,
-                 Requires<[IsThumb, HasV5T, IsDarwin]>;
+                    "blx\t$func",
+                    [(ARMtcall GPR:$func)]>,
+                 Requires<[IsThumb, HasV5T, IsDarwin]>,
+                 T1Special<{1,1,1,?}>; // A6.2.3 & A8.6.24
 
   // ARMv4T
-  def tBXr9 : TIx2<(outs), (ins tGPR:$func, variable_ops), IIC_Br, 
-                  "mov\tlr, pc\n\tbx\t$func",
-                  [(ARMcall_nolink tGPR:$func)]>,
+  def tBXr9 : TIx2<{?,?,?,?,?}, {?,?}, ?,
+                   (outs), (ins tGPR:$func, variable_ops), IIC_Br, 
+                   "mov\tlr, pc\n\tbx\t$func",
+                   [(ARMcall_nolink tGPR:$func)]>,
               Requires<[IsThumb1Only, IsDarwin]>;
 }
 
@@ -262,17 +295,22 @@ let isBranch = 1, isTerminator = 1 in {
   let isBarrier = 1 in {
     let isPredicable = 1 in
     def tB   : T1I<(outs), (ins brtarget:$target), IIC_Br,
-                   "b\t$target", [(br bb:$target)]>;
+                   "b\t$target", [(br bb:$target)]>,
+               T1Encoding<{1,1,1,0,0,?}>;
 
   // Far jump
   let Defs = [LR] in
-  def tBfar : TIx2<(outs), (ins brtarget:$target), IIC_Br, 
+  def tBfar : TIx2<0b11110, 0b11, 1, (outs), (ins brtarget:$target), IIC_Br, 
                     "bl\t$target\t@ far jump",[]>;
 
   def tBR_JTr : T1JTI<(outs),
                       (ins tGPR:$target, jtblock_operand:$jt, i32imm:$id),
                       IIC_Br, "mov\tpc, $target\n\t.align\t2\n$jt",
-                      [(ARMbrjt tGPR:$target, tjumptable:$jt, imm:$id)]>;
+                      [(ARMbrjt tGPR:$target, tjumptable:$jt, imm:$id)]>,
+                Encoding16 {
+    let Inst{15-7} = 0b010001101;
+    let Inst{2-0} = 0b111;
+  }
   }
 }
 
@@ -281,15 +319,18 @@ let isBranch = 1, isTerminator = 1 in {
 let isBranch = 1, isTerminator = 1 in
   def tBcc : T1I<(outs), (ins brtarget:$target, pred:$cc), IIC_Br,
                  "b$cc\t$target",
-                 [/*(ARMbrcond bb:$target, imm:$cc)*/]>;
+                 [/*(ARMbrcond bb:$target, imm:$cc)*/]>,
+             T1Encoding<{1,1,0,1,?,?}>;
 
 // Compare and branch on zero / non-zero
 let isBranch = 1, isTerminator = 1 in {
   def tCBZ  : T1I<(outs), (ins tGPR:$cmp, brtarget:$target), IIC_Br,
-                  "cbz\t$cmp, $target", []>;
+                  "cbz\t$cmp, $target", []>,
+              T1Misc<{0,0,?,1,?,?,?}>;
 
   def tCBNZ : T1I<(outs), (ins tGPR:$cmp, brtarget:$target), IIC_Br,
-                  "cbnz\t$cmp, $target", []>;
+                  "cbnz\t$cmp, $target", []>,
+              T1Misc<{1,0,?,1,?,?,?}>;
 }
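
[Editorial note, not part of the patch] The `T1Misc<{0,0,?,1,?,?,?}>` / `T1Misc<{1,0,?,1,?,?,?}>` annotations above correspond to the CBZ/CBNZ encoding in the ARM ARM (A8.6.27): bits 15-12 are 0b1011, bit 11 selects CBZ vs CBNZ, and the `?` bits carry the offset. A minimal Python sketch of that packing, assuming the A8.6.27 field layout (the function name is hypothetical):

```python
# Hypothetical encoder for Thumb CBZ/CBNZ (ARM ARM A8.6.27):
# |1 0 1 1|op|0|i|1| imm5 (5) | Rn (3) |   branch offset = i:imm5:'0'
def encode_cbz(op, rn, byte_offset):
    """op=0 for cbz, op=1 for cbnz; byte_offset is relative to PC+4,
    must be even and in 0..126 (forward-only)."""
    assert rn < 8, "CBZ/CBNZ only address r0-r7"
    assert 0 <= byte_offset <= 126 and byte_offset % 2 == 0
    imm6 = byte_offset >> 1
    i, imm5 = imm6 >> 5, imm6 & 0x1F
    return (0b1011 << 12) | (op << 11) | (i << 9) | (1 << 8) | (imm5 << 3) | rn

print(hex(encode_cbz(0, rn=0, byte_offset=4)))  # cbz r0, <pc+4+4>
```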
 
 //===----------------------------------------------------------------------===//
@@ -299,71 +340,85 @@ let isBranch = 1, isTerminator = 1 in {
 let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def tLDR : T1pI4<(outs tGPR:$dst), (ins t_addrmode_s4:$addr), IIC_iLoadr, 
                "ldr", "\t$dst, $addr",
-               [(set tGPR:$dst, (load t_addrmode_s4:$addr))]>;
+               [(set tGPR:$dst, (load t_addrmode_s4:$addr))]>,
+           T1LdSt<0b100>;
 
 def tLDRB : T1pI1<(outs tGPR:$dst), (ins t_addrmode_s1:$addr), IIC_iLoadr,
                 "ldrb", "\t$dst, $addr",
-                [(set tGPR:$dst, (zextloadi8 t_addrmode_s1:$addr))]>;
+                [(set tGPR:$dst, (zextloadi8 t_addrmode_s1:$addr))]>,
+            T1LdSt<0b110>;
 
 def tLDRH : T1pI2<(outs tGPR:$dst), (ins t_addrmode_s2:$addr), IIC_iLoadr,
                 "ldrh", "\t$dst, $addr",
-                [(set tGPR:$dst, (zextloadi16 t_addrmode_s2:$addr))]>;
+                [(set tGPR:$dst, (zextloadi16 t_addrmode_s2:$addr))]>,
+            T1LdSt<0b101>;
 
 let AddedComplexity = 10 in
 def tLDRSB : T1pI1<(outs tGPR:$dst), (ins t_addrmode_rr:$addr), IIC_iLoadr,
                  "ldrsb", "\t$dst, $addr",
-                 [(set tGPR:$dst, (sextloadi8 t_addrmode_rr:$addr))]>;
+                 [(set tGPR:$dst, (sextloadi8 t_addrmode_rr:$addr))]>,
+             T1LdSt<0b011>;
 
 let AddedComplexity = 10 in
 def tLDRSH : T1pI2<(outs tGPR:$dst), (ins t_addrmode_rr:$addr), IIC_iLoadr,
                  "ldrsh", "\t$dst, $addr",
-                 [(set tGPR:$dst, (sextloadi16 t_addrmode_rr:$addr))]>;
+                 [(set tGPR:$dst, (sextloadi16 t_addrmode_rr:$addr))]>,
+             T1LdSt<0b111>;
 
 let canFoldAsLoad = 1 in
 def tLDRspi : T1pIs<(outs tGPR:$dst), (ins t_addrmode_sp:$addr), IIC_iLoadi,
                   "ldr", "\t$dst, $addr",
-                  [(set tGPR:$dst, (load t_addrmode_sp:$addr))]>;
+                  [(set tGPR:$dst, (load t_addrmode_sp:$addr))]>,
+              T1LdStSP<{1,?,?}>;
 
 // Special instruction for restore. It cannot clobber condition register
 // when it's expanded by eliminateCallFramePseudoInstr().
 let canFoldAsLoad = 1, mayLoad = 1 in
 def tRestore : T1pIs<(outs tGPR:$dst), (ins t_addrmode_sp:$addr), IIC_iLoadi,
-                    "ldr", "\t$dst, $addr", []>;
+                    "ldr", "\t$dst, $addr", []>,
+               T1LdStSP<{1,?,?}>;
 
 // Load tconstpool
 // FIXME: Use ldr.n to work around a Darwin assembler bug.
 let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1  in 
 def tLDRpci : T1pIs<(outs tGPR:$dst), (ins i32imm:$addr), IIC_iLoadi,
                   "ldr", ".n\t$dst, $addr",
-                  [(set tGPR:$dst, (load (ARMWrapper tconstpool:$addr)))]>;
+                  [(set tGPR:$dst, (load (ARMWrapper tconstpool:$addr)))]>,
+              T1Encoding<{0,1,0,0,1,?}>; // A6.2 & A8.6.59
 
 // Special LDR for loads from non-pc-relative constpools.
 let canFoldAsLoad = 1, mayLoad = 1, isReMaterializable = 1,
     mayHaveSideEffects = 1  in
 def tLDRcp  : T1pIs<(outs tGPR:$dst), (ins i32imm:$addr), IIC_iLoadi,
-                  "ldr", "\t$dst, $addr", []>;
+                  "ldr", "\t$dst, $addr", []>,
+              T1LdStSP<{1,?,?}>;
 
 def tSTR : T1pI4<(outs), (ins tGPR:$src, t_addrmode_s4:$addr), IIC_iStorer,
                "str", "\t$src, $addr",
-               [(store tGPR:$src, t_addrmode_s4:$addr)]>;
+               [(store tGPR:$src, t_addrmode_s4:$addr)]>,
+           T1LdSt<0b000>;
 
 def tSTRB : T1pI1<(outs), (ins tGPR:$src, t_addrmode_s1:$addr), IIC_iStorer,
                  "strb", "\t$src, $addr",
-                 [(truncstorei8 tGPR:$src, t_addrmode_s1:$addr)]>;
+                 [(truncstorei8 tGPR:$src, t_addrmode_s1:$addr)]>,
+            T1LdSt<0b010>;
 
 def tSTRH : T1pI2<(outs), (ins tGPR:$src, t_addrmode_s2:$addr), IIC_iStorer,
                  "strh", "\t$src, $addr",
-                 [(truncstorei16 tGPR:$src, t_addrmode_s2:$addr)]>;
+                 [(truncstorei16 tGPR:$src, t_addrmode_s2:$addr)]>,
+            T1LdSt<0b001>;
 
 def tSTRspi : T1pIs<(outs), (ins tGPR:$src, t_addrmode_sp:$addr), IIC_iStorei,
                    "str", "\t$src, $addr",
-                   [(store tGPR:$src, t_addrmode_sp:$addr)]>;
+                   [(store tGPR:$src, t_addrmode_sp:$addr)]>,
+              T1LdStSP<{0,?,?}>;
 
 let mayStore = 1 in {
 // Special instruction for spill. It cannot clobber condition register
 // when it's expanded by eliminateCallFramePseudoInstr().
 def tSpill : T1pIs<(outs), (ins tGPR:$src, t_addrmode_sp:$addr), IIC_iStorei,
-                  "str", "\t$src, $addr", []>;
+                  "str", "\t$src, $addr", []>,
+             T1LdStSP<{0,?,?}>;
 }
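
[Editorial note, not part of the patch] The `T1LdSt<opB>` values added in this hunk (0b100 for `ldr`, 0b110 for `ldrb`, 0b000 for `str`, and so on) are the 3-bit opB field of the Thumb-1 register-offset load/store format (A6.2.4): bits 15-12 are 0b0101, bits 11-9 carry opB, then Rm, Rn, Rt. A small Python sketch of how those fields assemble into a halfword (the encoder name and table are illustrative, not from the patch):

```python
# Hypothetical encoder for the Thumb-1 register-offset load/store format:
# |0 1 0 1| opB (3) | Rm (3) | Rn (3) | Rt (3) |   (16 bits total)
# The opB values mirror the T1LdSt<...> arguments in the hunk above.
OPB = {
    "str": 0b000, "strh": 0b001, "strb": 0b010, "ldrsb": 0b011,
    "ldr": 0b100, "ldrh": 0b101, "ldrb": 0b110, "ldrsh": 0b111,
}

def encode_t1_ldst(mnemonic, rt, rn, rm):
    """Pack opB and the three low-register fields into a 16-bit word."""
    assert rt < 8 and rn < 8 and rm < 8, "Thumb-1 uses r0-r7 here"
    return (0b0101 << 12) | (OPB[mnemonic] << 9) | (rm << 6) | (rn << 3) | rt

print(hex(encode_t1_ldst("ldr", rt=0, rn=1, rm=2)))  # ldr r0, [r1, r2]
```

`ldr r0, [r1, r2]` comes out as 0x5888, the standard Thumb encoding for that form.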
 
 //===----------------------------------------------------------------------===//
@@ -375,21 +430,25 @@ let mayLoad = 1, hasExtraDefRegAllocReq = 1 in
 def tLDM : T1I<(outs),
                (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
                IIC_iLoadm,
-               "ldm${addr:submode}${p}\t$addr, $wb", []>;
+               "ldm${addr:submode}${p}\t$addr, $wb", []>,
+           T1Encoding<{1,1,0,0,1,?}>; // A6.2 & A8.6.53
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
 def tSTM : T1I<(outs),
                (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
                IIC_iStorem,
-               "stm${addr:submode}${p}\t$addr, $wb", []>;
+               "stm${addr:submode}${p}\t$addr, $wb", []>,
+           T1Encoding<{1,1,0,0,0,?}>; // A6.2 & A8.6.189
 
 let mayLoad = 1, Uses = [SP], Defs = [SP], hasExtraDefRegAllocReq = 1 in
 def tPOP : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
-               "pop${p}\t$wb", []>;
+               "pop${p}\t$wb", []>,
+           T1Misc<{1,1,0,?,?,?,?}>;
 
 let mayStore = 1, Uses = [SP], Defs = [SP], hasExtraSrcRegAllocReq = 1 in
 def tPUSH : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
-                "push${p}\t$wb", []>;
+                "push${p}\t$wb", []>,
+            T1Misc<{0,1,0,?,?,?,?}>;
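
[Editorial note, not part of the patch] The `T1Misc` opcodes added for tPOP/tPUSH above ({1,1,0,…} and {0,1,0,…}) fill bits 11-5 of the miscellaneous 16-bit format, matching the PUSH/POP encodings (A8.6.123 / A8.6.122). A short Python sketch of that layout, assuming the ARM ARM field positions (function name hypothetical):

```python
# Hypothetical encoder for Thumb-1 PUSH/POP:
# |1 0 1 1|op|1 0|M| register_list (8) |
#   op=0: push (M adds LR), op=1: pop (M adds PC)
def encode_push_pop(op, regs, m=False):
    """regs: iterable of low-register numbers r0-r7."""
    reglist = 0
    for r in regs:
        assert r < 8, "only r0-r7 fit the 8-bit register list"
        reglist |= 1 << r
    return (0b1011 << 12) | (op << 11) | (0b10 << 9) | (int(m) << 8) | reglist

print(hex(encode_push_pop(0, [4], m=True)))  # push {r4, lr}
print(hex(encode_push_pop(1, [4], m=True)))  # pop  {r4, pc}
```

The two printed words, 0xB510 and 0xBD10, are the familiar Thumb function prologue/epilogue pair.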
 
 //===----------------------------------------------------------------------===//
 //  Arithmetic Instructions.
@@ -399,82 +458,98 @@ def tPUSH : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
 let isCommutable = 1, Uses = [CPSR] in
 def tADC : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                  "adc", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (adde tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (adde tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b0101>;
 
 // Add immediate
 def tADDi3 : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
                    "add", "\t$dst, $lhs, $rhs",
-                   [(set tGPR:$dst, (add tGPR:$lhs, imm0_7:$rhs))]>;
+                   [(set tGPR:$dst, (add tGPR:$lhs, imm0_7:$rhs))]>,
+             T1General<0b01110>;
 
 def tADDi8 : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
                    "add", "\t$dst, $rhs",
-                   [(set tGPR:$dst, (add tGPR:$lhs, imm8_255:$rhs))]>;
+                   [(set tGPR:$dst, (add tGPR:$lhs, imm8_255:$rhs))]>,
+             T1General<{1,1,0,?,?}>;
 
 // Add register
 let isCommutable = 1 in
 def tADDrr : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                    "add", "\t$dst, $lhs, $rhs",
-                   [(set tGPR:$dst, (add tGPR:$lhs, tGPR:$rhs))]>;
+                   [(set tGPR:$dst, (add tGPR:$lhs, tGPR:$rhs))]>,
+             T1General<0b01100>;
 
 let neverHasSideEffects = 1 in
 def tADDhirr : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                     "add", "\t$dst, $rhs", []>;
+                     "add", "\t$dst, $rhs", []>,
+               T1Special<{0,0,?,?}>;
 
 // And register
 let isCommutable = 1 in
 def tAND : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                  "and", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (and tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (and tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b0000>;
 
 // ASR immediate
 def tASRri : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
                   "asr", "\t$dst, $lhs, $rhs",
-                  [(set tGPR:$dst, (sra tGPR:$lhs, (i32 imm:$rhs)))]>;
+                  [(set tGPR:$dst, (sra tGPR:$lhs, (i32 imm:$rhs)))]>,
+             T1General<{0,1,0,?,?}>;
 
 // ASR register
 def tASRrr : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
                    "asr", "\t$dst, $rhs",
-                   [(set tGPR:$dst, (sra tGPR:$lhs, tGPR:$rhs))]>;
+                   [(set tGPR:$dst, (sra tGPR:$lhs, tGPR:$rhs))]>,
+             T1DataProcessing<0b0100>;
 
 // BIC register
 def tBIC : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                  "bic", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (and tGPR:$lhs, (not tGPR:$rhs)))]>;
+                 [(set tGPR:$dst, (and tGPR:$lhs, (not tGPR:$rhs)))]>,
+           T1DataProcessing<0b1110>;
 
 // CMN register
 let Defs = [CPSR] in {
 def tCMN : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
                 "cmn", "\t$lhs, $rhs",
-                [(ARMcmp tGPR:$lhs, (ineg tGPR:$rhs))]>;
-def tCMNZ : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
+                [(ARMcmp tGPR:$lhs, (ineg tGPR:$rhs))]>,
+           T1DataProcessing<0b1011>;
+def tCMNz : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
                  "cmn", "\t$lhs, $rhs",
-                 [(ARMcmpZ tGPR:$lhs, (ineg tGPR:$rhs))]>;
+                 [(ARMcmpZ tGPR:$lhs, (ineg tGPR:$rhs))]>,
+            T1DataProcessing<0b1011>;
 }
 
 // CMP immediate
 let Defs = [CPSR] in {
 def tCMPi8 : T1pI<(outs), (ins tGPR:$lhs, i32imm:$rhs), IIC_iCMPi,
                   "cmp", "\t$lhs, $rhs",
-                  [(ARMcmp tGPR:$lhs, imm0_255:$rhs)]>;
+                  [(ARMcmp tGPR:$lhs, imm0_255:$rhs)]>,
+             T1General<{1,0,1,?,?}>;
 def tCMPzi8 : T1pI<(outs), (ins tGPR:$lhs, i32imm:$rhs), IIC_iCMPi,
                   "cmp", "\t$lhs, $rhs",
-                  [(ARMcmpZ tGPR:$lhs, imm0_255:$rhs)]>;
-
+                  [(ARMcmpZ tGPR:$lhs, imm0_255:$rhs)]>,
+              T1General<{1,0,1,?,?}>;
 }
 
 // CMP register
 let Defs = [CPSR] in {
 def tCMPr : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
                  "cmp", "\t$lhs, $rhs",
-                 [(ARMcmp tGPR:$lhs, tGPR:$rhs)]>;
+                 [(ARMcmp tGPR:$lhs, tGPR:$rhs)]>,
+            T1DataProcessing<0b1010>;
 def tCMPzr : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
                   "cmp", "\t$lhs, $rhs",
-                  [(ARMcmpZ tGPR:$lhs, tGPR:$rhs)]>;
+                  [(ARMcmpZ tGPR:$lhs, tGPR:$rhs)]>,
+             T1DataProcessing<0b1010>;
 
 def tCMPhir : T1pI<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
-                   "cmp", "\t$lhs, $rhs", []>;
+                   "cmp", "\t$lhs, $rhs", []>,
+              T1Special<{0,1,?,?}>;
 def tCMPzhir : T1pI<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
-                    "cmp", "\t$lhs, $rhs", []>;
+                    "cmp", "\t$lhs, $rhs", []>,
+               T1Special<{0,1,?,?}>;
 }
 
 
@@ -482,32 +557,38 @@ def tCMPzhir : T1pI<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
 let isCommutable = 1 in
 def tEOR : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                  "eor", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (xor tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (xor tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b0001>;
 
 // LSL immediate
 def tLSLri : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
                   "lsl", "\t$dst, $lhs, $rhs",
-                  [(set tGPR:$dst, (shl tGPR:$lhs, (i32 imm:$rhs)))]>;
+                  [(set tGPR:$dst, (shl tGPR:$lhs, (i32 imm:$rhs)))]>,
+             T1General<{0,0,0,?,?}>;
 
 // LSL register
 def tLSLrr : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
                    "lsl", "\t$dst, $rhs",
-                   [(set tGPR:$dst, (shl tGPR:$lhs, tGPR:$rhs))]>;
+                   [(set tGPR:$dst, (shl tGPR:$lhs, tGPR:$rhs))]>,
+             T1DataProcessing<0b0010>;
 
 // LSR immediate
 def tLSRri : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
                   "lsr", "\t$dst, $lhs, $rhs",
-                  [(set tGPR:$dst, (srl tGPR:$lhs, (i32 imm:$rhs)))]>;
+                  [(set tGPR:$dst, (srl tGPR:$lhs, (i32 imm:$rhs)))]>,
+             T1General<{0,0,1,?,?}>;
 
 // LSR register
 def tLSRrr : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
                    "lsr", "\t$dst, $rhs",
-                   [(set tGPR:$dst, (srl tGPR:$lhs, tGPR:$rhs))]>;
+                   [(set tGPR:$dst, (srl tGPR:$lhs, tGPR:$rhs))]>,
+             T1DataProcessing<0b0011>;
 
 // move register
 def tMOVi8 : T1sI<(outs tGPR:$dst), (ins i32imm:$src), IIC_iMOVi,
                   "mov", "\t$dst, $src",
-                  [(set tGPR:$dst, imm0_255:$src)]>;
+                  [(set tGPR:$dst, imm0_255:$src)]>,
+             T1General<{1,0,0,?,?}>;
 
 // TODO: A7-73: MOV(2) - mov setting flag.
 
@@ -515,42 +596,52 @@ def tMOVi8 : T1sI<(outs tGPR:$dst), (ins i32imm:$src), IIC_iMOVi,
 let neverHasSideEffects = 1 in {
 // FIXME: Make this predicable.
 def tMOVr       : T1I<(outs tGPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                      "mov\t$dst, $src", []>;
+                      "mov\t$dst, $src", []>,
+                  T1Special<0b1000>;
 let Defs = [CPSR] in
 def tMOVSr      : T1I<(outs tGPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                       "movs\t$dst, $src", []>;
+                       "movs\t$dst, $src", []>, Encoding16 {
+  let Inst{15-6} = 0b0000000000;
+}
 
 // FIXME: Make these predicable.
 def tMOVgpr2tgpr : T1I<(outs tGPR:$dst), (ins GPR:$src), IIC_iMOVr,
-                       "mov\t$dst, $src", []>;
+                       "mov\t$dst, $src", []>,
+                   T1Special<{1,0,0,1}>;
 def tMOVtgpr2gpr : T1I<(outs GPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                       "mov\t$dst, $src", []>;
+                       "mov\t$dst, $src", []>,
+                   T1Special<{1,0,1,0}>;
 def tMOVgpr2gpr  : T1I<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVr,
-                       "mov\t$dst, $src", []>;
+                       "mov\t$dst, $src", []>,
+                   T1Special<{1,0,1,1}>;
 } // neverHasSideEffects
 
 // multiply register
 let isCommutable = 1 in
 def tMUL : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMUL32,
                  "mul", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (mul tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (mul tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b1101>;
 
 // move inverse register
 def tMVN : T1sI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iMOVr,
                 "mvn", "\t$dst, $src",
-                [(set tGPR:$dst, (not tGPR:$src))]>;
+                [(set tGPR:$dst, (not tGPR:$src))]>,
+           T1DataProcessing<0b1111>;
 
 // bitwise or register
 let isCommutable = 1 in
 def tORR : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs),  IIC_iALUr,
                  "orr", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (or tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (or tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b1100>;
 
 // swaps
 def tREV : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                 "rev", "\t$dst, $src",
                 [(set tGPR:$dst, (bswap tGPR:$src))]>,
-                Requires<[IsThumb1Only, HasV6]>;
+                Requires<[IsThumb1Only, HasV6]>,
+           T1Misc<{1,0,1,0,0,0,?}>;
 
 def tREV16 : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                   "rev16", "\t$dst, $src",
@@ -559,7 +650,8 @@ def tREV16 : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                        (or (and (shl tGPR:$src, (i32 8)), 0xFF00),
                            (or (and (srl tGPR:$src, (i32 8)), 0xFF0000),
                                (and (shl tGPR:$src, (i32 8)), 0xFF000000)))))]>,
-                Requires<[IsThumb1Only, HasV6]>;
+                Requires<[IsThumb1Only, HasV6]>,
+             T1Misc<{1,0,1,0,0,1,?}>;
 
 def tREVSH : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                   "revsh", "\t$dst, $src",
@@ -567,37 +659,44 @@ def tREVSH : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                         (sext_inreg
                           (or (srl (and tGPR:$src, 0xFF00), (i32 8)),
                               (shl tGPR:$src, (i32 8))), i16))]>,
-                  Requires<[IsThumb1Only, HasV6]>;
+                  Requires<[IsThumb1Only, HasV6]>,
+             T1Misc<{1,0,1,0,1,1,?}>;
 
 // rotate right register
 def tROR : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
                  "ror", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (rotr tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (rotr tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b0111>;
 
 // negate register
 def tRSB : T1sI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iALUi,
                 "rsb", "\t$dst, $src, #0",
-                [(set tGPR:$dst, (ineg tGPR:$src))]>;
+                [(set tGPR:$dst, (ineg tGPR:$src))]>,
+           T1DataProcessing<0b1001>;
 
 // Subtract with carry register
 let Uses = [CPSR] in
 def tSBC : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                  "sbc", "\t$dst, $rhs",
-                 [(set tGPR:$dst, (sube tGPR:$lhs, tGPR:$rhs))]>;
+                 [(set tGPR:$dst, (sube tGPR:$lhs, tGPR:$rhs))]>,
+           T1DataProcessing<0b0110>;
 
 // Subtract immediate
 def tSUBi3 : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
                   "sub", "\t$dst, $lhs, $rhs",
-                  [(set tGPR:$dst, (add tGPR:$lhs, imm0_7_neg:$rhs))]>;
+                  [(set tGPR:$dst, (add tGPR:$lhs, imm0_7_neg:$rhs))]>,
+             T1General<0b01111>;
 
 def tSUBi8 : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
                    "sub", "\t$dst, $rhs",
-                   [(set tGPR:$dst, (add tGPR:$lhs, imm8_255_neg:$rhs))]>;
+                   [(set tGPR:$dst, (add tGPR:$lhs, imm8_255_neg:$rhs))]>,
+             T1General<{1,1,1,?,?}>;
 
 // subtract register
 def tSUBrr : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
                   "sub", "\t$dst, $lhs, $rhs",
-                  [(set tGPR:$dst, (sub tGPR:$lhs, tGPR:$rhs))]>;
+                  [(set tGPR:$dst, (sub tGPR:$lhs, tGPR:$rhs))]>,
+             T1General<0b01101>;
 
 // TODO: A7-96: STMIA - store multiple.
 
@@ -605,31 +704,36 @@ def tSUBrr : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
 def tSXTB  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                   "sxtb", "\t$dst, $src",
                   [(set tGPR:$dst, (sext_inreg tGPR:$src, i8))]>,
-                  Requires<[IsThumb1Only, HasV6]>;
+                  Requires<[IsThumb1Only, HasV6]>,
+             T1Misc<{0,0,1,0,0,1,?}>;
 
 // sign-extend short
 def tSXTH  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                   "sxth", "\t$dst, $src",
                   [(set tGPR:$dst, (sext_inreg tGPR:$src, i16))]>,
-                  Requires<[IsThumb1Only, HasV6]>;
+                  Requires<[IsThumb1Only, HasV6]>,
+             T1Misc<{0,0,1,0,0,0,?}>;
 
 // test
 let isCommutable = 1, Defs = [CPSR] in
 def tTST  : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
                  "tst", "\t$lhs, $rhs",
-                 [(ARMcmpZ (and tGPR:$lhs, tGPR:$rhs), 0)]>;
+                 [(ARMcmpZ (and tGPR:$lhs, tGPR:$rhs), 0)]>,
+            T1DataProcessing<0b1000>;
 
 // zero-extend byte
 def tUXTB  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                   "uxtb", "\t$dst, $src",
                   [(set tGPR:$dst, (and tGPR:$src, 0xFF))]>,
-                  Requires<[IsThumb1Only, HasV6]>;
+                  Requires<[IsThumb1Only, HasV6]>,
+             T1Misc<{0,0,1,0,1,1,?}>;
 
 // zero-extend short
 def tUXTH  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                   "uxth", "\t$dst, $src",
                   [(set tGPR:$dst, (and tGPR:$src, 0xFFFF))]>,
-                  Requires<[IsThumb1Only, HasV6]>;
+                  Requires<[IsThumb1Only, HasV6]>,
+             T1Misc<{0,0,1,0,1,0,?}>;
 
 
 // Conditional move tMOVCCr - Used to implement the Thumb SELECT_CC DAG operation.
@@ -643,19 +747,23 @@ let usesCustomInserter = 1 in  // Expanded after instruction selection.
 
 // 16-bit movcc in IT blocks for Thumb2.
 def tMOVCCr : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iCMOVr,
-                    "mov", "\t$dst, $rhs", []>;
+                    "mov", "\t$dst, $rhs", []>,
+              T1Special<{1,0,?,?}>;
 
 def tMOVCCi : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iCMOVi,
-                    "mov", "\t$dst, $rhs", []>;
+                    "mov", "\t$dst, $rhs", []>,
+              T1General<{1,0,0,?,?}>;
 
 // tLEApcrel - Load a pc-relative address into a register without offending the
 // assembler.
 def tLEApcrel : T1I<(outs tGPR:$dst), (ins i32imm:$label, pred:$p), IIC_iALUi,
-                    "adr$p\t$dst, #$label", []>;
+                    "adr$p\t$dst, #$label", []>,
+                T1Encoding<{1,0,1,0,0,?}>; // A6.2 & A8.6.10
 
 def tLEApcrelJT : T1I<(outs tGPR:$dst),
                       (ins i32imm:$label, nohash_imm:$id, pred:$p),
-                      IIC_iALUi, "adr$p\t$dst, #${label}_${id}", []>;
+                      IIC_iALUi, "adr$p\t$dst, #${label}_${id}", []>,
+                  T1Encoding<{1,0,1,0,0,?}>; // A6.2 & A8.6.10
 
 //===----------------------------------------------------------------------===//
 // TLS Instructions
@@ -664,9 +772,9 @@ def tLEApcrelJT : T1I<(outs tGPR:$dst),
 // __aeabi_read_tp preserves the registers r1-r3.
 let isCall = 1,
   Defs = [R0, LR] in {
-  def tTPsoft  : TIx2<(outs), (ins), IIC_Br,
-               "bl\t__aeabi_read_tp",
-               [(set R0, ARMthread_pointer)]>;
+  def tTPsoft : TIx2<0b11110, 0b11, 1, (outs), (ins), IIC_Br,
+                     "bl\t__aeabi_read_tp",
+                     [(set R0, ARMthread_pointer)]>;
 }
 
 // SJLJ Exception handling intrinsics
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
index 949ce73..6f20ed4 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
@@ -165,234 +165,465 @@ def t2addrmode_so_reg : Operand<i32>,
 /// T2I_un_irs - Defines a set of (op reg, {so_imm|r|so_reg}) patterns for a
 /// unary operation that produces a value. These are predicable and can be
 /// changed to modify CPSR.
-multiclass T2I_un_irs<string opc, PatFrag opnode, bit Cheap = 0, bit ReMat = 0>{
+multiclass T2I_un_irs<bits<4> opcod, string opc, PatFrag opnode,
+                      bit Cheap = 0, bit ReMat = 0> {
    // shifted imm
    def i : T2sI<(outs GPR:$dst), (ins t2_so_imm:$src), IIC_iMOVi,
                 opc, "\t$dst, $src",
                 [(set GPR:$dst, (opnode t2_so_imm:$src))]> {
      let isAsCheapAsAMove = Cheap;
      let isReMaterializable = ReMat;
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = ?; // The S bit.
+     let Inst{19-16} = 0b1111; // Rn
+     let Inst{15} = 0;
    }
    // register
    def r : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVr,
                opc, ".w\t$dst, $src",
-                [(set GPR:$dst, (opnode GPR:$src))]>;
+                [(set GPR:$dst, (opnode GPR:$src))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = ?; // The S bit.
+     let Inst{19-16} = 0b1111; // Rn
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
+   }
    // shifted register
    def s : T2I<(outs GPR:$dst), (ins t2_so_reg:$src), IIC_iMOVsi,
                opc, ".w\t$dst, $src",
-               [(set GPR:$dst, (opnode t2_so_reg:$src))]>;
+               [(set GPR:$dst, (opnode t2_so_reg:$src))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = ?; // The S bit.
+     let Inst{19-16} = 0b1111; // Rn
+   }
 }
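
[Editorial note, not part of the patch] The `let Inst{hi-lo} = value` assignments introduced in these Thumb-2 multiclasses each pin a fixed bit field of the 32-bit instruction word; TableGen merges them with bits contributed by the parent format classes and the operands. A rough Python model of that merging for the `T2I_un_irs` register variant above (the `set_bits` helper is hypothetical, and 0b0011 as the `opcod` for MVN-class ops is an assumption about this file):

```python
def set_bits(word, hi, lo, value):
    """Model TableGen's `let Inst{hi-lo} = value` on a 32-bit word."""
    width = hi - lo + 1
    assert 0 <= value < (1 << width), "value must fit the field"
    mask = ((1 << width) - 1) << lo
    return (word & ~mask) | (value << lo)

# Fixed fields of the T2I_un_irs "register" variant, as set in the hunk
# above (the '?' S bit and the operand registers are left as zero here).
word = 0
word = set_bits(word, 31, 27, 0b11101)
word = set_bits(word, 26, 25, 0b01)
word = set_bits(word, 24, 21, 0b0011)  # opcod; 0b0011 = MVN-class (assumption)
word = set_bits(word, 19, 16, 0b1111)  # Rn = 0b1111
word = set_bits(word, 14, 12, 0b000)   # imm3
word = set_bits(word, 7, 6, 0b00)      # imm2
word = set_bits(word, 5, 4, 0b00)      # type
print(hex(word))
```

With those assumptions the skeleton comes out as 0xEA6F0000, consistent with the EA6F prefix seen on assembled `mvn.w` instructions.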
 
 /// T2I_bin_irs - Defines a set of (op reg, {so_imm|r|so_reg}) patterns for a
 //  binary operation that produces a value. These are predicable and can be
 /// changed to modify CPSR.
-multiclass T2I_bin_irs<string opc, PatFrag opnode, 
+multiclass T2I_bin_irs<bits<4> opcod, string opc, PatFrag opnode, 
                        bit Commutable = 0, string wide =""> {
    // shifted imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
                  opc, "\t$dst, $lhs, $rhs",
-                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>;
+                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = ?; // The S bit.
+     let Inst{15} = 0;
+   }
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
                  opc, !strconcat(wide, "\t$dst, $lhs, $rhs"),
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
      let isCommutable = Commutable;
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = ?; // The S bit.
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
    }
    // shifted register
    def rs : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
                  opc, !strconcat(wide, "\t$dst, $lhs, $rhs"),
-                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>;
+                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = ?; // The S bit.
+   }
 }
 
 /// T2I_bin_w_irs - Same as T2I_bin_irs except these operations need
 //  the ".w" prefix to indicate that they are wide.
-multiclass T2I_bin_w_irs<string opc, PatFrag opnode, bit Commutable = 0> :
-    T2I_bin_irs<opc, opnode, Commutable, ".w">;
+multiclass T2I_bin_w_irs<bits<4> opcod, string opc, PatFrag opnode,
+                         bit Commutable = 0> :
+    T2I_bin_irs<opcod, opc, opnode, Commutable, ".w">;
 
 /// T2I_rbin_is - Same as T2I_bin_irs except the order of operands are
 /// reversed. It doesn't define the 'rr' form since it's handled by its
 /// T2I_bin_irs counterpart.
-multiclass T2I_rbin_is<string opc, PatFrag opnode> {
+multiclass T2I_rbin_is<bits<4> opcod, string opc, PatFrag opnode> {
    // shifted imm
    def ri : T2I<(outs GPR:$dst), (ins GPR:$rhs, t2_so_imm:$lhs), IIC_iALUi,
                 opc, ".w\t$dst, $rhs, $lhs",
-                [(set GPR:$dst, (opnode t2_so_imm:$lhs, GPR:$rhs))]>;
+                [(set GPR:$dst, (opnode t2_so_imm:$lhs, GPR:$rhs))]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 0; // The S bit.
+     let Inst{15} = 0;
+   }
    // shifted register
    def rs : T2I<(outs GPR:$dst), (ins GPR:$rhs, t2_so_reg:$lhs), IIC_iALUsi,
                 opc, "\t$dst, $rhs, $lhs",
-                [(set GPR:$dst, (opnode t2_so_reg:$lhs, GPR:$rhs))]>;
+                [(set GPR:$dst, (opnode t2_so_reg:$lhs, GPR:$rhs))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 0; // The S bit.
+   }
 }
 
 /// T2I_bin_s_irs - Similar to T2I_bin_irs except it sets the 's' bit so the
 /// instruction modifies the CPSR register.
 let Defs = [CPSR] in {
-multiclass T2I_bin_s_irs<string opc, PatFrag opnode, bit Commutable = 0> {
+multiclass T2I_bin_s_irs<bits<4> opcod, string opc, PatFrag opnode,
+                         bit Commutable = 0> {
    // shifted imm
    def ri : T2I<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
                 !strconcat(opc, "s"), ".w\t$dst, $lhs, $rhs",
-                [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>;
+                [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{15} = 0;
+   }
    // register
    def rr : T2I<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
                 !strconcat(opc, "s"), ".w\t$dst, $lhs, $rhs",
                 [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
      let isCommutable = Commutable;
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
    }
    // shifted register
    def rs : T2I<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
                 !strconcat(opc, "s"), ".w\t$dst, $lhs, $rhs",
-                [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>;
+                [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+   }
 }
 }
 
 /// T2I_bin_ii12rs - Defines a set of (op reg, {so_imm|imm0_4095|r|so_reg})
 /// patterns for a binary operation that produces a value.
-multiclass T2I_bin_ii12rs<string opc, PatFrag opnode, bit Commutable = 0> {
+multiclass T2I_bin_ii12rs<bits<3> op23_21, string opc, PatFrag opnode,
+                          bit Commutable = 0> {
    // shifted imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
                  opc, ".w\t$dst, $lhs, $rhs",
-                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>;
+                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24} = 1;
+     let Inst{23-21} = op23_21;
+     let Inst{20} = 0; // The S bit.
+     let Inst{15} = 0;
+   }
    // 12-bit imm
    def ri12 : T2sI<(outs GPR:$dst), (ins GPR:$lhs, imm0_4095:$rhs), IIC_iALUi,
                    !strconcat(opc, "w"), "\t$dst, $lhs, $rhs",
-                   [(set GPR:$dst, (opnode GPR:$lhs, imm0_4095:$rhs))]>;
+                   [(set GPR:$dst, (opnode GPR:$lhs, imm0_4095:$rhs))]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 1;
+     let Inst{24} = 0;
+     let Inst{23-21} = op23_21;
+     let Inst{20} = 0; // The S bit.
+     let Inst{15} = 0;
+   }
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
                  opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
      let isCommutable = Commutable;
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24} = 1;
+     let Inst{23-21} = op23_21;
+     let Inst{20} = 0; // The S bit.
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
    }
    // shifted register
    def rs : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
                  opc, ".w\t$dst, $lhs, $rhs",
-                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>;
+                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24} = 1;
+     let Inst{23-21} = op23_21;
+     let Inst{20} = 0; // The S bit.
+   }
 }
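[Editorial note, not part of the patch: the fixed bits pinned down by the `let Inst{...}` lines above can be cross-checked mechanically. The sketch below assembles the modified-immediate (`ri`) form of T2I_bin_ii12rs, assuming each `Inst{hi-lo}` range is an inclusive bit field of a 32-bit instruction word; the `op23_21` values come from the `defm t2ADD`/`t2SUB` instantiations later in the patch.]

```python
def set_field(word, hi, lo, value):
    """Place `value` into the inclusive bit range hi..lo of `word`."""
    width = hi - lo + 1
    assert 0 <= value < (1 << width), "value out of range for field"
    mask = ((1 << width) - 1) << lo
    return (word & ~mask) | (value << lo)

def t2_bin_ii12rs_ri_fixed(op23_21):
    """Fixed bits of the shifted-immediate form, per the `let Inst{...}` lines."""
    w = 0
    w = set_field(w, 31, 27, 0b11110)
    w = set_field(w, 25, 25, 0)
    w = set_field(w, 24, 24, 1)
    w = set_field(w, 23, 21, op23_21)
    w = set_field(w, 20, 20, 0)   # the S bit, clear
    w = set_field(w, 15, 15, 0)
    return w

# t2ADD passes op23_21 = 0b000, t2SUB passes 0b101 (see the defm lines below).
print(hex(t2_bin_ii12rs_ri_fixed(0b000)))  # 0xf1000000
print(hex(t2_bin_ii12rs_ri_fixed(0b101)))  # 0xf1a00000
```

The remaining (non-fixed) bits are the register numbers and the encoded immediate, filled in per instance.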
 
 /// T2I_adde_sube_irs - Defines a set of (op reg, {so_imm|r|so_reg}) patterns
 /// for a binary operation that produces a value and uses and defines the
 /// carry bit. It is not predicable.
 let Uses = [CPSR] in {
-multiclass T2I_adde_sube_irs<string opc, PatFrag opnode, bit Commutable = 0> {
+multiclass T2I_adde_sube_irs<bits<4> opcod, string opc, PatFrag opnode,
+                             bit Commutable = 0> {
    // shifted imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
                  opc, "\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>,
-                 Requires<[IsThumb2, CarryDefIsUnused]>;
+                 Requires<[IsThumb2, CarryDefIsUnused]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 0; // The S bit.
+     let Inst{15} = 0;
+   }
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
                  opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]>,
                  Requires<[IsThumb2, CarryDefIsUnused]> {
      let isCommutable = Commutable;
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 0; // The S bit.
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
    }
    // shifted register
    def rs : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
                  opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>,
-                 Requires<[IsThumb2, CarryDefIsUnused]>;
+                 Requires<[IsThumb2, CarryDefIsUnused]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 0; // The S bit.
+   }
    // Carry setting variants
    // shifted imm
    def Sri : T2XI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
                   !strconcat(opc, "s\t$dst, $lhs, $rhs"),
                   [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>,
                   Requires<[IsThumb2, CarryDefIsUsed]> {
-                    let Defs = [CPSR];
-                  }
+     let Defs = [CPSR];
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{15} = 0;
+   }
    // register
    def Srr : T2XI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
                   !strconcat(opc, "s.w\t$dst, $lhs, $rhs"),
                   [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]>,
                   Requires<[IsThumb2, CarryDefIsUsed]> {
-                    let Defs = [CPSR];
-                    let isCommutable = Commutable;
+     let Defs = [CPSR];
+     let isCommutable = Commutable;
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
    }
    // shifted register
    def Srs : T2XI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
                   !strconcat(opc, "s.w\t$dst, $lhs, $rhs"),
                   [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>,
                   Requires<[IsThumb2, CarryDefIsUsed]> {
-                    let Defs = [CPSR];
+     let Defs = [CPSR];
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
    }
 }
 }
 
 /// T2I_rbin_s_is - Same as T2I_rbin_is except sets 's' bit.
 let Defs = [CPSR] in {
-multiclass T2I_rbin_s_is<string opc, PatFrag opnode> {
+multiclass T2I_rbin_s_is<bits<4> opcod, string opc, PatFrag opnode> {
    // shifted imm
    def ri : T2XI<(outs GPR:$dst), (ins GPR:$rhs, t2_so_imm:$lhs, cc_out:$s),
                  IIC_iALUi,
                  !strconcat(opc, "${s}.w\t$dst, $rhs, $lhs"),
-                 [(set GPR:$dst, (opnode t2_so_imm:$lhs, GPR:$rhs))]>;
+                 [(set GPR:$dst, (opnode t2_so_imm:$lhs, GPR:$rhs))]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{15} = 0;
+   }
    // shifted register
    def rs : T2XI<(outs GPR:$dst), (ins GPR:$rhs, t2_so_reg:$lhs, cc_out:$s),
                  IIC_iALUsi,
                  !strconcat(opc, "${s}\t$dst, $rhs, $lhs"),
-                 [(set GPR:$dst, (opnode t2_so_reg:$lhs, GPR:$rhs))]>;
+                 [(set GPR:$dst, (opnode t2_so_reg:$lhs, GPR:$rhs))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+   }
 }
 }
 
 /// T2I_sh_ir - Defines a set of (op reg, {so_imm|r}) patterns for a shift /
 /// rotate operation that produces a value.
-multiclass T2I_sh_ir<string opc, PatFrag opnode> {
+multiclass T2I_sh_ir<bits<2> opcod, string opc, PatFrag opnode> {
    // 5-bit imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
                  opc, ".w\t$dst, $lhs, $rhs",
-                 [(set GPR:$dst, (opnode GPR:$lhs, imm1_31:$rhs))]>;
+                 [(set GPR:$dst, (opnode GPR:$lhs, imm1_31:$rhs))]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-21} = 0b010010;
+     let Inst{19-16} = 0b1111; // Rn
+     let Inst{5-4} = opcod;
+   }
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iMOVsr,
                  opc, ".w\t$dst, $lhs, $rhs",
-                 [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]>;
+                 [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
+     let Inst{31-27} = 0b11111;
+     let Inst{26-23} = 0b0100;
+     let Inst{22-21} = opcod;
+     let Inst{15-12} = 0b1111;
+     let Inst{7-4} = 0b0000;
+   }
 }
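[Editorial note, not part of the patch: the new two-bit `opcod` parameter selects the shift kind in both forms above. Per the ARM ARM Thumb-2 encodings the mapping is LSL=0b00, LSR=0b01, ASR=0b10, ROR=0b11 — an assumption here, since the instantiating `defm` lines are outside this hunk. A sketch of the register (`rr`) form's fixed bits:]

```python
def t2_sh_rr_fixed(opcod):
    """Fixed bits of the register shift form, per the `let Inst{...}` lines."""
    w = 0b11111 << 27        # Inst{31-27}
    w |= 0b0100 << 23        # Inst{26-23}
    w |= opcod << 21         # Inst{22-21}: shift kind (assumed LSL=0b00 ... ROR=0b11)
    w |= 0b1111 << 12        # Inst{15-12}
    # Inst{7-4} = 0b0000 contributes no set bits
    return w

print(hex(t2_sh_rr_fixed(0b00)))  # 0xfa00f000 -- matches the LSL.W register form
```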
 
-/// T2I_cmp_is - Defines a set of (op r, {so_imm|r|so_reg}) cmp / test
+/// T2I_cmp_irs - Defines a set of (op r, {so_imm|r|so_reg}) cmp / test
 /// patterns. Similar to T2I_bin_irs except the instruction does not produce
 /// an explicit result; it only implicitly sets CPSR.
 let Defs = [CPSR] in {
-multiclass T2I_cmp_is<string opc, PatFrag opnode> {
+multiclass T2I_cmp_irs<bits<4> opcod, string opc, PatFrag opnode> {
    // shifted imm
    def ri : T2I<(outs), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iCMPi,
                 opc, ".w\t$lhs, $rhs",
-                [(opnode GPR:$lhs, t2_so_imm:$rhs)]>;
+                [(opnode GPR:$lhs, t2_so_imm:$rhs)]> {
+     let Inst{31-27} = 0b11110;
+     let Inst{25} = 0;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{15} = 0;
+     let Inst{11-8} = 0b1111; // Rd
+   }
    // register
    def rr : T2I<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
                 opc, ".w\t$lhs, $rhs",
-                [(opnode GPR:$lhs, GPR:$rhs)]>;
+                [(opnode GPR:$lhs, GPR:$rhs)]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{14-12} = 0b000; // imm3
+     let Inst{11-8} = 0b1111; // Rd
+     let Inst{7-6} = 0b00; // imm2
+     let Inst{5-4} = 0b00; // type
+   }
    // shifted register
    def rs : T2I<(outs), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iCMPsi,
                 opc, ".w\t$lhs, $rhs",
-                [(opnode GPR:$lhs, t2_so_reg:$rhs)]>;
+                [(opnode GPR:$lhs, t2_so_reg:$rhs)]> {
+     let Inst{31-27} = 0b11101;
+     let Inst{26-25} = 0b01;
+     let Inst{24-21} = opcod;
+     let Inst{20} = 1; // The S bit.
+     let Inst{11-8} = 0b1111; // Rd
+   }
 }
 }
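[Editorial note, not part of the patch: the compare/test forms reuse the data-processing layout but hardwire Rd (Inst{11-8}) to 0b1111 and keep the S bit set, so only CPSR is written. Using opcod = 0b1101 for CMP below is an assumption based on the ARM ARM (CMP is SUBS with the result discarded); the `defm` lines that choose `opcod` are outside this hunk.]

```python
def t2_cmp_ri_fixed(opcod):
    """Fixed bits of the shifted-immediate compare form."""
    w = 0b11110 << 27      # Inst{31-27}
    # Inst{25} = 0, Inst{15} = 0 contribute no set bits
    w |= opcod << 21       # Inst{24-21}
    w |= 1 << 20           # the S bit, always set
    w |= 0b1111 << 8       # Inst{11-8}: Rd = 0b1111 (result discarded)
    return w

print(hex(t2_cmp_ri_fixed(0b1101)))  # 0xf1b00f00 -- matches CMP.W immediate
```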
 
 /// T2I_ld - Defines a set of (op r, {imm12|imm8|so_reg}) load patterns.
-multiclass T2I_ld<string opc, PatFrag opnode> {
+multiclass T2I_ld<bit signed, bits<2> opcod, string opc, PatFrag opnode> {
   def i12 : T2Ii12<(outs GPR:$dst), (ins t2addrmode_imm12:$addr), IIC_iLoadi,
                    opc, ".w\t$dst, $addr",
-                   [(set GPR:$dst, (opnode t2addrmode_imm12:$addr))]>;
+                   [(set GPR:$dst, (opnode t2addrmode_imm12:$addr))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-25} = 0b00;
+    let Inst{24} = signed;
+    let Inst{23} = 1;
+    let Inst{22-21} = opcod;
+    let Inst{20} = 1; // load
+  }
   def i8  : T2Ii8 <(outs GPR:$dst), (ins t2addrmode_imm8:$addr), IIC_iLoadi,
                    opc, "\t$dst, $addr",
-                   [(set GPR:$dst, (opnode t2addrmode_imm8:$addr))]>;
+                   [(set GPR:$dst, (opnode t2addrmode_imm8:$addr))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-25} = 0b00;
+    let Inst{24} = signed;
+    let Inst{23} = 0;
+    let Inst{22-21} = opcod;
+    let Inst{20} = 1; // load
+    let Inst{11} = 1;
+    // Offset: index==TRUE, wback==FALSE
+    let Inst{10} = 1; // The P bit.
+    let Inst{8} = 0; // The W bit.
+  }
   def s   : T2Iso <(outs GPR:$dst), (ins t2addrmode_so_reg:$addr), IIC_iLoadr,
                    opc, ".w\t$dst, $addr",
-                   [(set GPR:$dst, (opnode t2addrmode_so_reg:$addr))]>;
+                   [(set GPR:$dst, (opnode t2addrmode_so_reg:$addr))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-25} = 0b00;
+    let Inst{24} = signed;
+    let Inst{23} = 0;
+    let Inst{22-21} = opcod;
+    let Inst{20} = 1; // load
+    let Inst{11-6} = 0b000000;
+  }
   def pci : T2Ipc <(outs GPR:$dst), (ins i32imm:$addr), IIC_iLoadi,
                    opc, ".w\t$dst, $addr",
                    [(set GPR:$dst, (opnode (ARMWrapper tconstpool:$addr)))]> {
     let isReMaterializable = 1;
+    let Inst{31-27} = 0b11111;
+    let Inst{26-25} = 0b00;
+    let Inst{24} = signed;
+    let Inst{23} = ?; // add = (U == '1')
+    let Inst{22-21} = opcod;
+    let Inst{20} = 1; // load
+    let Inst{19-16} = 0b1111; // Rn
   }
 }
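[Editorial note, not part of the patch: T2I_ld now threads a `signed` bit (Inst{24}) and a 2-bit size `opcod` (Inst{22-21}) through every form. A sketch of the `i12` form's fixed bits, checked against the `defm t2LDR*`/`t2LDRS*` instantiations further down the patch:]

```python
def t2_ld_i12_fixed(signed, opcod):
    """Fixed bits of the imm12 load form, per the `let Inst{...}` lines."""
    w = 0b11111 << 27      # Inst{31-27}
    # Inst{26-25} = 0b00 contributes no set bits
    w |= signed << 24      # sign-extending load
    w |= 1 << 23           # the imm12 addressing form
    w |= opcod << 21       # size: word/half/byte
    w |= 1 << 20           # load (stores clear this bit)
    return w

print(hex(t2_ld_i12_fixed(0, 0b10)))  # 0xf8d00000  (t2LDR)
print(hex(t2_ld_i12_fixed(1, 0b01)))  # 0xf9b00000  (t2LDRSH)
```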
 
 /// T2I_st - Defines a set of (op r, {imm12|imm8|so_reg}) store patterns.
-multiclass T2I_st<string opc, PatFrag opnode> {
+multiclass T2I_st<bits<2> opcod, string opc, PatFrag opnode> {
   def i12 : T2Ii12<(outs), (ins GPR:$src, t2addrmode_imm12:$addr), IIC_iStorei,
                    opc, ".w\t$src, $addr",
-                   [(opnode GPR:$src, t2addrmode_imm12:$addr)]>;
+                   [(opnode GPR:$src, t2addrmode_imm12:$addr)]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0001;
+    let Inst{22-21} = opcod;
+    let Inst{20} = 0; // !load
+  }
   def i8  : T2Ii8 <(outs), (ins GPR:$src, t2addrmode_imm8:$addr), IIC_iStorei,
                    opc, "\t$src, $addr",
-                   [(opnode GPR:$src, t2addrmode_imm8:$addr)]>;
+                   [(opnode GPR:$src, t2addrmode_imm8:$addr)]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0000;
+    let Inst{22-21} = opcod;
+    let Inst{20} = 0; // !load
+    let Inst{11} = 1;
+    // Offset: index==TRUE, wback==FALSE
+    let Inst{10} = 1; // The P bit.
+    let Inst{8} = 0; // The W bit.
+  }
   def s   : T2Iso <(outs), (ins GPR:$src, t2addrmode_so_reg:$addr), IIC_iStorer,
                    opc, ".w\t$src, $addr",
-                   [(opnode GPR:$src, t2addrmode_so_reg:$addr)]>;
+                   [(opnode GPR:$src, t2addrmode_so_reg:$addr)]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0000;
+    let Inst{22-21} = opcod;
+    let Inst{20} = 0; // !load
+    let Inst{11-6} = 0b000000;
+  }
 }
 
 /// T2I_picld - Defines the PIC load pattern.
@@ -410,25 +641,55 @@ class T2I_picst<string opc, PatFrag opnode> :
 
 /// T2I_unary_rrot - A unary operation with two forms: one whose operand is a
 /// register and one whose operand is a register rotated by 8/16/24.
-multiclass T2I_unary_rrot<string opc, PatFrag opnode> {
+multiclass T2I_unary_rrot<bits<3> opcod, string opc, PatFrag opnode> {
   def r     : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
                   opc, ".w\t$dst, $src",
-                 [(set GPR:$dst, (opnode GPR:$src))]>;
+                 [(set GPR:$dst, (opnode GPR:$src))]> {
+     let Inst{31-27} = 0b11111;
+     let Inst{26-23} = 0b0100;
+     let Inst{22-20} = opcod;
+     let Inst{19-16} = 0b1111; // Rn
+     let Inst{15-12} = 0b1111;
+     let Inst{7} = 1;
+     let Inst{5-4} = 0b00; // rotate
+   }
   def r_rot : T2I<(outs GPR:$dst), (ins GPR:$src, i32imm:$rot), IIC_iUNAsi,
                   opc, ".w\t$dst, $src, ror $rot",
-                 [(set GPR:$dst, (opnode (rotr GPR:$src, rot_imm:$rot)))]>;
+                 [(set GPR:$dst, (opnode (rotr GPR:$src, rot_imm:$rot)))]> {
+     let Inst{31-27} = 0b11111;
+     let Inst{26-23} = 0b0100;
+     let Inst{22-20} = opcod;
+     let Inst{19-16} = 0b1111; // Rn
+     let Inst{15-12} = 0b1111;
+     let Inst{7} = 1;
+     let Inst{5-4} = {?,?}; // rotate
+   }
 }
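[Editorial note, not part of the patch: in the `r_rot` form Inst{5-4} is left as {?,?} and filled from the `$rot` operand at encoding time. Since `rot_imm` only admits 8/16/24, the two-bit field presumably holds `rot >> 3` — an assumption, as the operand encoder itself is not part of this patch; the plain `r` form hardwires 0b00 (no rotation).]

```python
# Map each legal rotation amount to the assumed two-bit Inst{5-4} value.
for rot in (8, 16, 24):
    print(rot, rot >> 3)   # 8 -> 1, 16 -> 2, 24 -> 3
```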
 
 /// T2I_bin_rrot - A binary operation with two forms: one whose operand is a
 /// register and one whose operand is a register rotated by 8/16/24.
-multiclass T2I_bin_rrot<string opc, PatFrag opnode> {
+multiclass T2I_bin_rrot<bits<3> opcod, string opc, PatFrag opnode> {
   def rr     : T2I<(outs GPR:$dst), (ins GPR:$LHS, GPR:$RHS), IIC_iALUr,
                   opc, "\t$dst, $LHS, $RHS",
-                  [(set GPR:$dst, (opnode GPR:$LHS, GPR:$RHS))]>;
+                  [(set GPR:$dst, (opnode GPR:$LHS, GPR:$RHS))]> {
+     let Inst{31-27} = 0b11111;
+     let Inst{26-23} = 0b0100;
+     let Inst{22-20} = opcod;
+     let Inst{15-12} = 0b1111;
+     let Inst{7} = 1;
+     let Inst{5-4} = 0b00; // rotate
+   }
   def rr_rot : T2I<(outs GPR:$dst), (ins GPR:$LHS, GPR:$RHS, i32imm:$rot),
                   IIC_iALUsr, opc, "\t$dst, $LHS, $RHS, ror $rot",
                   [(set GPR:$dst, (opnode GPR:$LHS,
-                                          (rotr GPR:$RHS, rot_imm:$rot)))]>;
+                                          (rotr GPR:$RHS, rot_imm:$rot)))]> {
+     let Inst{31-27} = 0b11111;
+     let Inst{26-23} = 0b0100;
+     let Inst{22-20} = opcod;
+     let Inst{15-12} = 0b1111;
+     let Inst{7} = 1;
+     let Inst{5-4} = {?,?}; // rotate
+   }
 }
 
 //===----------------------------------------------------------------------===//
@@ -442,33 +703,89 @@ multiclass T2I_bin_rrot<string opc, PatFrag opnode> {
 // LEApcrel - Load a pc-relative address into a register without offending the
 // assembler.
 def t2LEApcrel : T2XI<(outs GPR:$dst), (ins i32imm:$label, pred:$p), IIC_iALUi,
-                      "adr$p.w\t$dst, #$label", []>;
-
+                      "adr$p.w\t$dst, #$label", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25-24} = 0b10;
+  // Inst{23:21} = '11' (add = FALSE) or '00' (add = TRUE)
+  let Inst{22} = 0;
+  let Inst{20} = 0;
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{15} = 0;
+}
 def t2LEApcrelJT : T2XI<(outs GPR:$dst),
                         (ins i32imm:$label, nohash_imm:$id, pred:$p), IIC_iALUi,
-                        "adr$p.w\t$dst, #${label}_${id}", []>;
+                        "adr$p.w\t$dst, #${label}_${id}", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25-24} = 0b10;
+  // Inst{23:21} = '11' (add = FALSE) or '00' (add = TRUE)
+  let Inst{22} = 0;
+  let Inst{20} = 0;
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{15} = 0;
+}
 
 // ADD r, sp, {so_imm|i12}
 def t2ADDrSPi   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_imm:$imm),
-                        IIC_iALUi, "add", ".w\t$dst, $sp, $imm", []>;
+                        IIC_iALUi, "add", ".w\t$dst, $sp, $imm", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 0;
+  let Inst{24-21} = 0b1000;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1101; // Rn = sp
+  let Inst{15} = 0;
+}
 def t2ADDrSPi12 : T2I<(outs GPR:$dst), (ins GPR:$sp, imm0_4095:$imm), 
-                       IIC_iALUi, "addw", "\t$dst, $sp, $imm", []>;
+                       IIC_iALUi, "addw", "\t$dst, $sp, $imm", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-21} = 0b0000;
+  let Inst{20} = 0; // The S bit.
+  let Inst{19-16} = 0b1101; // Rn = sp
+  let Inst{15} = 0;
+}
 
 // ADD r, sp, so_reg
 def t2ADDrSPs   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
-                        IIC_iALUsi, "add", ".w\t$dst, $sp, $rhs", []>;
+                        IIC_iALUsi, "add", ".w\t$dst, $sp, $rhs", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b1000;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1101; // Rn = sp
+  let Inst{15} = 0;
+}
 
 // SUB r, sp, {so_imm|i12}
 def t2SUBrSPi   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_imm:$imm),
-                        IIC_iALUi, "sub", ".w\t$dst, $sp, $imm", []>;
+                        IIC_iALUi, "sub", ".w\t$dst, $sp, $imm", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 0;
+  let Inst{24-21} = 0b1101;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1101; // Rn = sp
+  let Inst{15} = 0;
+}
 def t2SUBrSPi12 : T2I<(outs GPR:$dst), (ins GPR:$sp, imm0_4095:$imm),
-                       IIC_iALUi, "subw", "\t$dst, $sp, $imm", []>;
+                       IIC_iALUi, "subw", "\t$dst, $sp, $imm", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-21} = 0b0101;
+  let Inst{20} = 0; // The S bit.
+  let Inst{19-16} = 0b1101; // Rn = sp
+  let Inst{15} = 0;
+}
 
 // SUB r, sp, so_reg
 def t2SUBrSPs   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
                        IIC_iALUsi,
-                       "sub", "\t$dst, $sp, $rhs", []>;
-
+                       "sub", "\t$dst, $sp, $rhs", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b1101;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1101; // Rn = sp
+  let Inst{15} = 0;
+}
 
 // Pseudo instruction that will expand into a t2SUBrSPi + a copy.
 let usesCustomInserter = 1 in { // Expanded after instruction selection.
@@ -487,24 +804,26 @@ def t2SUBrSPs_   : PseudoInst<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
 
 // Load
 let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1  in 
-defm t2LDR   : T2I_ld<"ldr",  UnOpFrag<(load node:$Src)>>;
+defm t2LDR   : T2I_ld<0, 0b10, "ldr",  UnOpFrag<(load node:$Src)>>;
 
 // Loads with zero extension
-defm t2LDRH  : T2I_ld<"ldrh", UnOpFrag<(zextloadi16 node:$Src)>>;
-defm t2LDRB  : T2I_ld<"ldrb", UnOpFrag<(zextloadi8  node:$Src)>>;
+defm t2LDRH  : T2I_ld<0, 0b01, "ldrh", UnOpFrag<(zextloadi16 node:$Src)>>;
+defm t2LDRB  : T2I_ld<0, 0b00, "ldrb", UnOpFrag<(zextloadi8  node:$Src)>>;
 
 // Loads with sign extension
-defm t2LDRSH : T2I_ld<"ldrsh", UnOpFrag<(sextloadi16 node:$Src)>>;
-defm t2LDRSB : T2I_ld<"ldrsb", UnOpFrag<(sextloadi8  node:$Src)>>;
+defm t2LDRSH : T2I_ld<1, 0b01, "ldrsh", UnOpFrag<(sextloadi16 node:$Src)>>;
+defm t2LDRSB : T2I_ld<1, 0b00, "ldrsb", UnOpFrag<(sextloadi8  node:$Src)>>;
 
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in {
 // Load doubleword
-def t2LDRDi8  : T2Ii8s4<(outs GPR:$dst1, GPR:$dst2),
+def t2LDRDi8  : T2Ii8s4<1, 0, 1, (outs GPR:$dst1, GPR:$dst2),
                         (ins t2addrmode_imm8s4:$addr),
                         IIC_iLoadi, "ldrd", "\t$dst1, $addr", []>;
-def t2LDRDpci : T2Ii8s4<(outs GPR:$dst1, GPR:$dst2),
+def t2LDRDpci : T2Ii8s4<?, ?, 1, (outs GPR:$dst1, GPR:$dst2),
                         (ins i32imm:$addr), IIC_iLoadi,
-                       "ldrd", "\t$dst1, $addr", []>;
+                       "ldrd", "\t$dst1, $addr", []> {
+  let Inst{19-16} = 0b1111; // Rn
+}
 }
 
 // zextload i1 -> zextload i8
@@ -549,57 +868,57 @@ def : T2Pat<(extloadi16 (ARMWrapper tconstpool:$addr)),
 
 // Indexed loads
 let mayLoad = 1 in {
-def t2LDR_PRE  : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDR_PRE  : T2Iidxldst<0, 0b10, 1, 1, (outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
                             "ldr", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
 
-def t2LDR_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDR_POST : T2Iidxldst<0, 0b10, 1, 0, (outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
                           "ldr", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
-def t2LDRB_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRB_PRE : T2Iidxldst<0, 0b00, 1, 1, (outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
                             "ldrb", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
-def t2LDRB_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRB_POST : T2Iidxldst<0, 0b00, 1, 0, (outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
                          "ldrb", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
-def t2LDRH_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRH_PRE : T2Iidxldst<0, 0b01, 1, 1, (outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
                             "ldrh", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
-def t2LDRH_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRH_POST : T2Iidxldst<0, 0b01, 1, 0, (outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
                          "ldrh", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
-def t2LDRSB_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRSB_PRE : T2Iidxldst<1, 0b00, 1, 1, (outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
                             "ldrsb", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
-def t2LDRSB_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRSB_POST : T2Iidxldst<1, 0b00, 1, 0, (outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
                         "ldrsb", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
-def t2LDRSH_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRSH_PRE : T2Iidxldst<1, 0b01, 1, 1, (outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
                             "ldrsh", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
-def t2LDRSH_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
+def t2LDRSH_POST : T2Iidxldst<1, 0b01, 1, 0, (outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
                         "ldrsh", "\t$dst, [$base], $offset", "$base = $base_wb",
@@ -607,53 +926,53 @@ def t2LDRSH_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
 }
 
 // Store
-defm t2STR   : T2I_st<"str",  BinOpFrag<(store node:$LHS, node:$RHS)>>;
-defm t2STRB  : T2I_st<"strb", BinOpFrag<(truncstorei8 node:$LHS, node:$RHS)>>;
-defm t2STRH  : T2I_st<"strh", BinOpFrag<(truncstorei16 node:$LHS, node:$RHS)>>;
+defm t2STR   : T2I_st<0b10, "str",  BinOpFrag<(store node:$LHS, node:$RHS)>>;
+defm t2STRB  : T2I_st<0b00, "strb",
+                      BinOpFrag<(truncstorei8 node:$LHS, node:$RHS)>>;
+defm t2STRH  : T2I_st<0b01, "strh",
+                      BinOpFrag<(truncstorei16 node:$LHS, node:$RHS)>>;
 
 // Store doubleword
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
-def t2STRDi8 : T2Ii8s4<(outs),
+def t2STRDi8 : T2Ii8s4<1, 0, 0, (outs),
                        (ins GPR:$src1, GPR:$src2, t2addrmode_imm8s4:$addr),
                IIC_iStorer, "strd", "\t$src1, $addr", []>;
 
 // Indexed stores
-def t2STR_PRE  : T2Iidxldst<(outs GPR:$base_wb),
+def t2STR_PRE  : T2Iidxldst<0, 0b10, 0, 1, (outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePre, IIC_iStoreiu,
                          "str", "\t$src, [$base, $offset]!", "$base = $base_wb",
              [(set GPR:$base_wb,
                    (pre_store GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
-def t2STR_POST : T2Iidxldst<(outs GPR:$base_wb),
+def t2STR_POST : T2Iidxldst<0, 0b10, 0, 0, (outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iStoreiu,
                           "str", "\t$src, [$base], $offset", "$base = $base_wb",
              [(set GPR:$base_wb,
                   (post_store GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
-def t2STRH_PRE  : T2Iidxldst<(outs GPR:$base_wb),
+def t2STRH_PRE  : T2Iidxldst<0, 0b01, 0, 1, (outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePre, IIC_iStoreiu,
                         "strh", "\t$src, [$base, $offset]!", "$base = $base_wb",
         [(set GPR:$base_wb,
               (pre_truncsti16 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
-def t2STRH_POST : T2Iidxldst<(outs GPR:$base_wb),
+def t2STRH_POST : T2Iidxldst<0, 0b01, 0, 0, (outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iStoreiu,
                          "strh", "\t$src, [$base], $offset", "$base = $base_wb",
        [(set GPR:$base_wb,
              (post_truncsti16 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
-def t2STRB_PRE  : T2Iidxldst<(outs GPR:$base_wb),
+def t2STRB_PRE  : T2Iidxldst<0, 0b00, 0, 1, (outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePre, IIC_iStoreiu,
                         "strb", "\t$src, [$base, $offset]!", "$base = $base_wb",
          [(set GPR:$base_wb,
                (pre_truncsti8 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
-def t2STRB_POST : T2Iidxldst<(outs GPR:$base_wb),
+def t2STRB_POST : T2Iidxldst<0, 0b00, 0, 0, (outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iStoreiu,
                          "strb", "\t$src, [$base], $offset", "$base = $base_wb",
@@ -670,12 +989,26 @@ def t2STRB_POST : T2Iidxldst<(outs GPR:$base_wb),
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in
 def t2LDM : T2XI<(outs),
                  (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-              IIC_iLoadm, "ldm${addr:submode}${p}${addr:wide}\t$addr, $wb", []>;
+              IIC_iLoadm, "ldm${addr:submode}${p}${addr:wide}\t$addr, $wb", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b00;
+  let Inst{24-23} = {?, ?}; // IA: '01', DB: '10'
+  let Inst{22} = 0;
+  let Inst{21} = ?; // The W bit.
+  let Inst{20} = 1; // Load
+}
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
 def t2STM : T2XI<(outs),
                  (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-             IIC_iStorem, "stm${addr:submode}${p}${addr:wide}\t$addr, $wb", []>;
+             IIC_iStorem, "stm${addr:submode}${p}${addr:wide}\t$addr, $wb", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b00;
+  let Inst{24-23} = {?, ?}; // IA: '01', DB: '10'
+  let Inst{22} = 0;
+  let Inst{21} = ?; // The W bit.
+  let Inst{20} = 0; // Store
+}
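[Editorial note, not part of the patch: t2LDM and t2STM share their fixed bits; Inst{24-23} is left as {?,?} and chosen per submode (IA = 0b01, DB = 0b10, per the comment in the patch), Inst{21} is the writeback (W) bit, and Inst{20} selects load vs. store. A sketch:]

```python
def t2_ldstm_fixed(submode, writeback, load):
    """Fixed bits shared by t2LDM/t2STM, per the `let Inst{...}` lines."""
    w = 0b11101 << 27           # Inst{31-27}
    # Inst{26-25} = 0b00 and Inst{22} = 0 contribute no set bits
    w |= {'IA': 0b01, 'DB': 0b10}[submode] << 23   # Inst{24-23}
    w |= writeback << 21        # the W bit
    w |= load << 20             # 1 = load, 0 = store
    return w

print(hex(t2_ldstm_fixed('IA', 0, 1)))  # 0xe8900000  (LDMIA.W, no writeback)
print(hex(t2_ldstm_fixed('DB', 1, 0)))  # 0xe9200000  (STMDB.W with writeback)
```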
 
 //===----------------------------------------------------------------------===//
 //  Move Instructions.
@@ -683,24 +1016,51 @@ def t2STM : T2XI<(outs),
 
 let neverHasSideEffects = 1 in
 def t2MOVr : T2sI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVr,
-                   "mov", ".w\t$dst, $src", []>;
+                   "mov", ".w\t$dst, $src", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{14-12} = 0b000;
+  let Inst{7-4} = 0b0000;
+}
 
 // AddedComplexity to ensure isel tries t2MOVi before t2MOVi16.
 let isReMaterializable = 1, isAsCheapAsAMove = 1, AddedComplexity = 1 in
 def t2MOVi : T2sI<(outs GPR:$dst), (ins t2_so_imm:$src), IIC_iMOVi,
                    "mov", ".w\t$dst, $src",
-                   [(set GPR:$dst, t2_so_imm:$src)]>;
+                   [(set GPR:$dst, t2_so_imm:$src)]> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 0;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{15} = 0;
+}
 
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def t2MOVi16 : T2I<(outs GPR:$dst), (ins i32imm:$src), IIC_iMOVi,
                    "movw", "\t$dst, $src",
-                   [(set GPR:$dst, imm0_65535:$src)]>;
+                   [(set GPR:$dst, imm0_65535:$src)]> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = 0; // The S bit.
+  let Inst{15} = 0;
+}
 
 let Constraints = "$src = $dst" in
 def t2MOVTi16 : T2I<(outs GPR:$dst), (ins GPR:$src, i32imm:$imm), IIC_iMOVi,
                     "movt", "\t$dst, $imm",
                     [(set GPR:$dst,
-                          (or (and GPR:$src, 0xffff), lo16AllZero:$imm))]>;
+                          (or (and GPR:$src, 0xffff), lo16AllZero:$imm))]> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-21} = 0b0110;
+  let Inst{20} = 0; // The S bit.
+  let Inst{15} = 0;
+}
 
 def : T2Pat<(or GPR:$src, 0xffff0000), (t2MOVTi16 GPR:$src, 0xffff)>;
 
@@ -710,12 +1070,14 @@ def : T2Pat<(or GPR:$src, 0xffff0000), (t2MOVTi16 GPR:$src, 0xffff)>;
 
 // Sign extenders
 
-defm t2SXTB  : T2I_unary_rrot<"sxtb", UnOpFrag<(sext_inreg node:$Src, i8)>>;
-defm t2SXTH  : T2I_unary_rrot<"sxth", UnOpFrag<(sext_inreg node:$Src, i16)>>;
+defm t2SXTB  : T2I_unary_rrot<0b100, "sxtb",
+                              UnOpFrag<(sext_inreg node:$Src, i8)>>;
+defm t2SXTH  : T2I_unary_rrot<0b000, "sxth",
+                              UnOpFrag<(sext_inreg node:$Src, i16)>>;
 
-defm t2SXTAB : T2I_bin_rrot<"sxtab",
+defm t2SXTAB : T2I_bin_rrot<0b100, "sxtab",
                         BinOpFrag<(add node:$LHS, (sext_inreg node:$RHS, i8))>>;
-defm t2SXTAH : T2I_bin_rrot<"sxtah",
+defm t2SXTAH : T2I_bin_rrot<0b000, "sxtah",
                         BinOpFrag<(add node:$LHS, (sext_inreg node:$RHS,i16))>>;
 
 // TODO: SXT(A){B|H}16
@@ -723,18 +1085,21 @@ defm t2SXTAH : T2I_bin_rrot<"sxtah",
 // Zero extenders
 
 let AddedComplexity = 16 in {
-defm t2UXTB   : T2I_unary_rrot<"uxtb"  , UnOpFrag<(and node:$Src, 0x000000FF)>>;
-defm t2UXTH   : T2I_unary_rrot<"uxth"  , UnOpFrag<(and node:$Src, 0x0000FFFF)>>;
-defm t2UXTB16 : T2I_unary_rrot<"uxtb16", UnOpFrag<(and node:$Src, 0x00FF00FF)>>;
+defm t2UXTB   : T2I_unary_rrot<0b101, "uxtb",
+                               UnOpFrag<(and node:$Src, 0x000000FF)>>;
+defm t2UXTH   : T2I_unary_rrot<0b001, "uxth",
+                               UnOpFrag<(and node:$Src, 0x0000FFFF)>>;
+defm t2UXTB16 : T2I_unary_rrot<0b011, "uxtb16",
+                               UnOpFrag<(and node:$Src, 0x00FF00FF)>>;
 
 def : T2Pat<(and (shl GPR:$Src, (i32 8)), 0xFF00FF),
             (t2UXTB16r_rot GPR:$Src, 24)>;
 def : T2Pat<(and (srl GPR:$Src, (i32 8)), 0xFF00FF),
             (t2UXTB16r_rot GPR:$Src, 8)>;
 
-defm t2UXTAB : T2I_bin_rrot<"uxtab",
+defm t2UXTAB : T2I_bin_rrot<0b101, "uxtab",
                            BinOpFrag<(add node:$LHS, (and node:$RHS, 0x00FF))>>;
-defm t2UXTAH : T2I_bin_rrot<"uxtah",
+defm t2UXTAH : T2I_bin_rrot<0b001, "uxtah",
                            BinOpFrag<(add node:$LHS, (and node:$RHS, 0xFFFF))>>;
 }
 
@@ -742,19 +1107,27 @@ defm t2UXTAH : T2I_bin_rrot<"uxtah",
 //  Arithmetic Instructions.
 //
 
-defm t2ADD  : T2I_bin_ii12rs<"add", BinOpFrag<(add  node:$LHS, node:$RHS)>, 1>;
-defm t2SUB  : T2I_bin_ii12rs<"sub", BinOpFrag<(sub  node:$LHS, node:$RHS)>>;
+defm t2ADD  : T2I_bin_ii12rs<0b000, "add",
+                             BinOpFrag<(add  node:$LHS, node:$RHS)>, 1>;
+defm t2SUB  : T2I_bin_ii12rs<0b101, "sub",
+                             BinOpFrag<(sub  node:$LHS, node:$RHS)>>;
 
 // ADD and SUB with 's' bit set. No 12-bit immediate (T4) variants.
-defm t2ADDS : T2I_bin_s_irs <"add",  BinOpFrag<(addc node:$LHS, node:$RHS)>, 1>;
-defm t2SUBS : T2I_bin_s_irs <"sub",  BinOpFrag<(subc node:$LHS, node:$RHS)>>;
+defm t2ADDS : T2I_bin_s_irs <0b1000, "add",
+                             BinOpFrag<(addc node:$LHS, node:$RHS)>, 1>;
+defm t2SUBS : T2I_bin_s_irs <0b1101, "sub",
+                             BinOpFrag<(subc node:$LHS, node:$RHS)>>;
 
-defm t2ADC  : T2I_adde_sube_irs<"adc",BinOpFrag<(adde node:$LHS, node:$RHS)>,1>;
-defm t2SBC  : T2I_adde_sube_irs<"sbc",BinOpFrag<(sube node:$LHS, node:$RHS)>>;
+defm t2ADC  : T2I_adde_sube_irs<0b1010, "adc",
+                                BinOpFrag<(adde node:$LHS, node:$RHS)>, 1>;
+defm t2SBC  : T2I_adde_sube_irs<0b1011, "sbc",
+                                BinOpFrag<(sube node:$LHS, node:$RHS)>>;
 
 // RSB
-defm t2RSB  : T2I_rbin_is   <"rsb", BinOpFrag<(sub  node:$LHS, node:$RHS)>>;
-defm t2RSBS : T2I_rbin_s_is <"rsb", BinOpFrag<(subc node:$LHS, node:$RHS)>>;
+defm t2RSB  : T2I_rbin_is   <0b1110, "rsb",
+                             BinOpFrag<(sub  node:$LHS, node:$RHS)>>;
+defm t2RSBS : T2I_rbin_s_is <0b1110, "rsb",
+                             BinOpFrag<(subc node:$LHS, node:$RHS)>>;
 
 // (sub X, imm) gets canonicalized to (add X, -imm).  Match this form.
 let AddedComplexity = 1 in
@@ -770,54 +1143,103 @@ def : T2Pat<(add       GPR:$src, imm0_4095_neg:$imm),
 //  Shift and rotate Instructions.
 //
 
-defm t2LSL  : T2I_sh_ir<"lsl", BinOpFrag<(shl  node:$LHS, node:$RHS)>>;
-defm t2LSR  : T2I_sh_ir<"lsr", BinOpFrag<(srl  node:$LHS, node:$RHS)>>;
-defm t2ASR  : T2I_sh_ir<"asr", BinOpFrag<(sra  node:$LHS, node:$RHS)>>;
-defm t2ROR  : T2I_sh_ir<"ror", BinOpFrag<(rotr node:$LHS, node:$RHS)>>;
+defm t2LSL  : T2I_sh_ir<0b00, "lsl", BinOpFrag<(shl  node:$LHS, node:$RHS)>>;
+defm t2LSR  : T2I_sh_ir<0b01, "lsr", BinOpFrag<(srl  node:$LHS, node:$RHS)>>;
+defm t2ASR  : T2I_sh_ir<0b10, "asr", BinOpFrag<(sra  node:$LHS, node:$RHS)>>;
+defm t2ROR  : T2I_sh_ir<0b11, "ror", BinOpFrag<(rotr node:$LHS, node:$RHS)>>;
 
 let Uses = [CPSR] in {
 def t2MOVrx : T2sI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVsi,
                    "rrx", "\t$dst, $src",
-                   [(set GPR:$dst, (ARMrrx GPR:$src))]>;
+                   [(set GPR:$dst, (ARMrrx GPR:$src))]> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = ?; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{14-12} = 0b000;
+  let Inst{7-4} = 0b0011;
+}
 }
 
 let Defs = [CPSR] in {
 def t2MOVsrl_flag : T2XI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVsi,
                          "lsrs.w\t$dst, $src, #1",
-                         [(set GPR:$dst, (ARMsrl_flag GPR:$src))]>;
+                         [(set GPR:$dst, (ARMsrl_flag GPR:$src))]> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = 1; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{5-4} = 0b01; // Shift type.
+  // Shift amount = Inst{14-12:7-6} = 1.
+  let Inst{14-12} = 0b000;
+  let Inst{7-6} = 0b01;
+}
 def t2MOVsra_flag : T2XI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVsi,
                          "asrs.w\t$dst, $src, #1",
-                         [(set GPR:$dst, (ARMsra_flag GPR:$src))]>;
+                         [(set GPR:$dst, (ARMsra_flag GPR:$src))]> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = 1; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{5-4} = 0b10; // Shift type.
+  // Shift amount = Inst{14-12:7-6} = 1.
+  let Inst{14-12} = 0b000;
+  let Inst{7-6} = 0b01;
+}
 }
 
 //===----------------------------------------------------------------------===//
 //  Bitwise Instructions.
 //
 
-defm t2AND  : T2I_bin_w_irs<"and", BinOpFrag<(and node:$LHS, node:$RHS)>, 1>;
-defm t2ORR  : T2I_bin_w_irs<"orr", BinOpFrag<(or  node:$LHS, node:$RHS)>, 1>;
-defm t2EOR  : T2I_bin_w_irs<"eor", BinOpFrag<(xor node:$LHS, node:$RHS)>, 1>;
+defm t2AND  : T2I_bin_w_irs<0b0000, "and",
+                            BinOpFrag<(and node:$LHS, node:$RHS)>, 1>;
+defm t2ORR  : T2I_bin_w_irs<0b0010, "orr",
+                            BinOpFrag<(or  node:$LHS, node:$RHS)>, 1>;
+defm t2EOR  : T2I_bin_w_irs<0b0100, "eor",
+                            BinOpFrag<(xor node:$LHS, node:$RHS)>, 1>;
 
-defm t2BIC  : T2I_bin_w_irs<"bic", BinOpFrag<(and node:$LHS, (not node:$RHS))>>;
+defm t2BIC  : T2I_bin_w_irs<0b0001, "bic",
+                            BinOpFrag<(and node:$LHS, (not node:$RHS))>>;
 
 let Constraints = "$src = $dst" in
 def t2BFC : T2I<(outs GPR:$dst), (ins GPR:$src, bf_inv_mask_imm:$imm),
                 IIC_iUNAsi, "bfc", "\t$dst, $imm",
-                [(set GPR:$dst, (and GPR:$src, bf_inv_mask_imm:$imm))]>;
+                [(set GPR:$dst, (and GPR:$src, bf_inv_mask_imm:$imm))]> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-20} = 0b10110;
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{15} = 0;
+}
 
 def t2SBFX : T2I<(outs GPR:$dst), (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
-                 IIC_iALUi, "sbfx", "\t$dst, $src, $lsb, $width", []>;
+                 IIC_iALUi, "sbfx", "\t$dst, $src, $lsb, $width", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-20} = 0b10100;
+  let Inst{15} = 0;
+}
 
 def t2UBFX : T2I<(outs GPR:$dst), (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
-                 IIC_iALUi, "ubfx", "\t$dst, $src, $lsb, $width", []>;
+                 IIC_iALUi, "ubfx", "\t$dst, $src, $lsb, $width", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-20} = 0b11100;
+  let Inst{15} = 0;
+}
 
 // FIXME: A8.6.18  BFI - Bitfield insert (Encoding T1)
 
-defm t2ORN  : T2I_bin_irs<"orn", BinOpFrag<(or  node:$LHS, (not node:$RHS))>>;
+defm t2ORN  : T2I_bin_irs<0b0011, "orn", BinOpFrag<(or  node:$LHS,
+                          (not node:$RHS))>>;
 
 // Prefer this over t2EORri ra, rb, -1 because mvn has a 16-bit version
 let AddedComplexity = 1 in
-defm t2MVN  : T2I_un_irs  <"mvn", UnOpFrag<(not node:$Src)>, 1, 1>;
+defm t2MVN  : T2I_un_irs <0b0011, "mvn", UnOpFrag<(not node:$Src)>, 1, 1>;
 
 
 def : T2Pat<(and     GPR:$src, t2_so_imm_not:$imm),
@@ -837,81 +1259,184 @@ def : T2Pat<(t2_so_imm_not:$src),
 let isCommutable = 1 in
 def t2MUL: T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
                 "mul", "\t$dst, $a, $b",
-                [(set GPR:$dst, (mul GPR:$a, GPR:$b))]>;
+                [(set GPR:$dst, (mul GPR:$a, GPR:$b))]> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0110;
+  let Inst{22-20} = 0b000;
+  let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+  let Inst{7-4} = 0b0000; // Multiply
+}
 
 def t2MLA: T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
 		"mla", "\t$dst, $a, $b, $c",
-		[(set GPR:$dst, (add (mul GPR:$a, GPR:$b), GPR:$c))]>;
+		[(set GPR:$dst, (add (mul GPR:$a, GPR:$b), GPR:$c))]> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0110;
+  let Inst{22-20} = 0b000;
+  let Inst{15-12} = {?, ?, ?, ?}; // Ra
+  let Inst{7-4} = 0b0000; // Multiply
+}
 
 def t2MLS: T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
 		"mls", "\t$dst, $a, $b, $c",
-                [(set GPR:$dst, (sub GPR:$c, (mul GPR:$a, GPR:$b)))]>;
+                [(set GPR:$dst, (sub GPR:$c, (mul GPR:$a, GPR:$b)))]> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0110;
+  let Inst{22-20} = 0b000;
+  let Inst{15-12} = {?, ?, ?, ?}; // Ra
+  let Inst{7-4} = 0b0001; // Multiply and Subtract
+}
 
 // Extra precision multiplies with low / high results
 let neverHasSideEffects = 1 in {
 let isCommutable = 1 in {
 def t2SMULL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMUL64,
-                   "smull", "\t$ldst, $hdst, $a, $b", []>;
+                   "smull", "\t$ldst, $hdst, $a, $b", []> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0111;
+  let Inst{22-20} = 0b000;
+  let Inst{7-4} = 0b0000;
+}
 
 def t2UMULL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMUL64,
-                   "umull", "\t$ldst, $hdst, $a, $b", []>;
+                   "umull", "\t$ldst, $hdst, $a, $b", []> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0111;
+  let Inst{22-20} = 0b010;
+  let Inst{7-4} = 0b0000;
 }
+} // isCommutable
 
 // Multiply + accumulate
 def t2SMLAL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                  "smlal", "\t$ldst, $hdst, $a, $b", []>;
+                  "smlal", "\t$ldst, $hdst, $a, $b", []> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0111;
+  let Inst{22-20} = 0b100;
+  let Inst{7-4} = 0b0000;
+}
 
 def t2UMLAL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                  "umlal", "\t$ldst, $hdst, $a, $b", []>;
+                  "umlal", "\t$ldst, $hdst, $a, $b", []> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0111;
+  let Inst{22-20} = 0b110;
+  let Inst{7-4} = 0b0000;
+}
 
 def t2UMAAL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                  "umaal", "\t$ldst, $hdst, $a, $b", []>;
+                  "umaal", "\t$ldst, $hdst, $a, $b", []> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0111;
+  let Inst{22-20} = 0b110;
+  let Inst{7-4} = 0b0110;
+}
 } // neverHasSideEffects
 
 // Most significant word multiply
 def t2SMMUL : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
                   "smmul", "\t$dst, $a, $b",
-                  [(set GPR:$dst, (mulhs GPR:$a, GPR:$b))]>;
+                  [(set GPR:$dst, (mulhs GPR:$a, GPR:$b))]> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0110;
+  let Inst{22-20} = 0b101;
+  let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+  let Inst{7-4} = 0b0000; // No Rounding (Inst{4} = 0)
+}
 
 def t2SMMLA : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
                   "smmla", "\t$dst, $a, $b, $c",
-                  [(set GPR:$dst, (add (mulhs GPR:$a, GPR:$b), GPR:$c))]>;
+                  [(set GPR:$dst, (add (mulhs GPR:$a, GPR:$b), GPR:$c))]> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0110;
+  let Inst{22-20} = 0b101;
+  let Inst{15-12} = {?, ?, ?, ?}; // Ra
+  let Inst{7-4} = 0b0000; // No Rounding (Inst{4} = 0)
+}
 
 
 def t2SMMLS : T2I <(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
                    "smmls", "\t$dst, $a, $b, $c",
-                   [(set GPR:$dst, (sub GPR:$c, (mulhs GPR:$a, GPR:$b)))]>;
+                   [(set GPR:$dst, (sub GPR:$c, (mulhs GPR:$a, GPR:$b)))]> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-23} = 0b0110;
+  let Inst{22-20} = 0b110;
+  let Inst{15-12} = {?, ?, ?, ?}; // Ra
+  let Inst{7-4} = 0b0000; // No Rounding (Inst{4} = 0)
+}
 
 multiclass T2I_smul<string opc, PatFrag opnode> {
   def BB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
               !strconcat(opc, "bb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sext_inreg GPR:$a, i16),
-                                      (sext_inreg GPR:$b, i16)))]>;
+                                      (sext_inreg GPR:$b, i16)))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b00;
+  }
 
   def BT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
               !strconcat(opc, "bt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sext_inreg GPR:$a, i16),
-                                      (sra GPR:$b, (i32 16))))]>;
+                                      (sra GPR:$b, (i32 16))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b01;
+  }
 
   def TB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
               !strconcat(opc, "tb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sra GPR:$a, (i32 16)),
-                                      (sext_inreg GPR:$b, i16)))]>;
+                                      (sext_inreg GPR:$b, i16)))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b10;
+  }
 
   def TT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
               !strconcat(opc, "tt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sra GPR:$a, (i32 16)),
-                                      (sra GPR:$b, (i32 16))))]>;
+                                      (sra GPR:$b, (i32 16))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b11;
+  }
 
   def WB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL16,
               !strconcat(opc, "wb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (sra (opnode GPR:$a,
-                                    (sext_inreg GPR:$b, i16)), (i32 16)))]>;
+                                    (sext_inreg GPR:$b, i16)), (i32 16)))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b011;
+    let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b00;
+  }
 
   def WT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL16,
               !strconcat(opc, "wt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (sra (opnode GPR:$a,
-                                    (sra GPR:$b, (i32 16))), (i32 16)))]>;
+                                    (sra GPR:$b, (i32 16))), (i32 16)))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b011;
+    let Inst{15-12} = 0b1111; // Ra = 0b1111 (no accumulate)
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b01;
+  }
 }
 
 
@@ -920,32 +1445,74 @@ multiclass T2I_smla<string opc, PatFrag opnode> {
               !strconcat(opc, "bb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc,
                                (opnode (sext_inreg GPR:$a, i16),
-                                       (sext_inreg GPR:$b, i16))))]>;
+                                       (sext_inreg GPR:$b, i16))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = {?, ?, ?, ?}; // Ra
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b00;
+  }
 
   def BT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
              !strconcat(opc, "bt"), "\t$dst, $a, $b, $acc",
              [(set GPR:$dst, (add GPR:$acc, (opnode (sext_inreg GPR:$a, i16),
-                                                    (sra GPR:$b, (i32 16)))))]>;
+                                                    (sra GPR:$b, (i32 16)))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = {?, ?, ?, ?}; // Ra
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b01;
+  }
 
   def TB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
               !strconcat(opc, "tb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
-                                                 (sext_inreg GPR:$b, i16))))]>;
+                                                 (sext_inreg GPR:$b, i16))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = {?, ?, ?, ?}; // Ra
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b10;
+  }
 
   def TT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
               !strconcat(opc, "tt"), "\t$dst, $a, $b, $acc",
              [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
-                                                    (sra GPR:$b, (i32 16)))))]>;
+                                                    (sra GPR:$b, (i32 16)))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b001;
+    let Inst{15-12} = {?, ?, ?, ?}; // Ra
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b11;
+  }
 
   def WB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
               !strconcat(opc, "wb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (sra (opnode GPR:$a,
-                                       (sext_inreg GPR:$b, i16)), (i32 16))))]>;
+                                       (sext_inreg GPR:$b, i16)), (i32 16))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b011;
+    let Inst{15-12} = {?, ?, ?, ?}; // Ra
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b00;
+  }
 
   def WT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
               !strconcat(opc, "wt"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (sra (opnode GPR:$a,
-                                         (sra GPR:$b, (i32 16))), (i32 16))))]>;
+                                         (sra GPR:$b, (i32 16))), (i32 16))))]> {
+    let Inst{31-27} = 0b11111;
+    let Inst{26-23} = 0b0110;
+    let Inst{22-20} = 0b011;
+    let Inst{15-12} = {?, ?, ?, ?}; // Ra
+    let Inst{7-6} = 0b00;
+    let Inst{5-4} = 0b01;
+  }
 }
 
 defm t2SMUL : T2I_smul<"smul", BinOpFrag<(mul node:$LHS, node:$RHS)>>;
@@ -959,24 +1526,33 @@ defm t2SMLA : T2I_smla<"smla", BinOpFrag<(mul node:$LHS, node:$RHS)>>;
 //  Misc. Arithmetic Instructions.
 //
 
-def t2CLZ : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                "clz", "\t$dst, $src",
-                [(set GPR:$dst, (ctlz GPR:$src))]>;
+class T2I_misc<bits<2> op1, bits<2> op2, dag oops, dag iops, InstrItinClass itin,
+              string opc, string asm, list<dag> pattern>
+  : T2I<oops, iops, itin, opc, asm, pattern> {
+  let Inst{31-27} = 0b11111;
+  let Inst{26-22} = 0b01010;
+  let Inst{21-20} = op1;
+  let Inst{15-12} = 0b1111;
+  let Inst{7-6} = 0b10;
+  let Inst{5-4} = op2;
+}
+
+def t2CLZ : T2I_misc<0b11, 0b00, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
+                    "clz", "\t$dst, $src", [(set GPR:$dst, (ctlz GPR:$src))]>;
 
-def t2REV : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                "rev", ".w\t$dst, $src",
-                [(set GPR:$dst, (bswap GPR:$src))]>;
+def t2REV : T2I_misc<0b01, 0b00, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
+                   "rev", ".w\t$dst, $src", [(set GPR:$dst, (bswap GPR:$src))]>;
 
-def t2REV16 : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                "rev16", ".w\t$dst, $src",
+def t2REV16 : T2I_misc<0b01, 0b01, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
+                       "rev16", ".w\t$dst, $src",
                 [(set GPR:$dst,
                     (or (and (srl GPR:$src, (i32 8)), 0xFF),
                         (or (and (shl GPR:$src, (i32 8)), 0xFF00),
                             (or (and (srl GPR:$src, (i32 8)), 0xFF0000),
                                 (and (shl GPR:$src, (i32 8)), 0xFF000000)))))]>;
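The selection pattern attached to t2REV16 above is the or/and/shift expansion of a byte swap within each 16-bit halfword. A Python mirror of that exact expression (assuming 32-bit wraparound on the left shift):

```python
def rev16_pattern(src):
    """Mirror of the t2REV16 selection pattern: swap the two bytes
    inside each 16-bit halfword of a 32-bit value."""
    src &= 0xFFFFFFFF
    shl8 = (src << 8) & 0xFFFFFFFF  # shifts wrap at 32 bits
    srl8 = src >> 8
    return ((srl8 & 0x000000FF) | (shl8 & 0x0000FF00) |
            (srl8 & 0x00FF0000) | (shl8 & 0xFF000000))
```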
 
-def t2REVSH : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                 "revsh", ".w\t$dst, $src",
+def t2REVSH : T2I_misc<0b01, 0b11, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
+                       "revsh", ".w\t$dst, $src",
                  [(set GPR:$dst,
                     (sext_inreg
                       (or (srl (and GPR:$src, 0xFF00), (i32 8)),
@@ -986,7 +1562,13 @@ def t2PKHBT : T2I<(outs GPR:$dst), (ins GPR:$src1, GPR:$src2, i32imm:$shamt),
                   IIC_iALUsi, "pkhbt", "\t$dst, $src1, $src2, LSL $shamt",
                   [(set GPR:$dst, (or (and GPR:$src1, 0xFFFF),
                                       (and (shl GPR:$src2, (i32 imm:$shamt)),
-                                           0xFFFF0000)))]>;
+                                           0xFFFF0000)))]> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-20} = 0b01100;
+  let Inst{5} = 0; // BT form
+  let Inst{4} = 0;
+}
 
 // Alternate cases for PKHBT where identities eliminate some nodes.
 def : T2Pat<(or (and GPR:$src1, 0xFFFF), (and GPR:$src2, 0xFFFF0000)),
@@ -998,7 +1580,13 @@ def t2PKHTB : T2I<(outs GPR:$dst), (ins GPR:$src1, GPR:$src2, i32imm:$shamt),
                   IIC_iALUsi, "pkhtb", "\t$dst, $src1, $src2, ASR $shamt",
                   [(set GPR:$dst, (or (and GPR:$src1, 0xFFFF0000),
                                       (and (sra GPR:$src2, imm16_31:$shamt),
-                                           0xFFFF)))]>;
+                                           0xFFFF)))]> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-20} = 0b01100;
+  let Inst{5} = 1; // TB form
+  let Inst{4} = 0;
+}
 
 // Alternate cases for PKHTB where identities eliminate some nodes.  Note that
 // a shift amount of 0 is *not legal* here, it is PKHBT instead.
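The PKHBT/PKHTB patterns above, and the note that a PKHTB shift amount of 0 is really PKHBT, can be modeled directly (arithmetic shift emulated on 32-bit values; function names are ours):

```python
def asr32(x, n):
    """Arithmetic shift right on a 32-bit value."""
    x &= 0xFFFFFFFF
    if x & 0x80000000:
        x -= 1 << 32          # reinterpret as signed
    return (x >> n) & 0xFFFFFFFF

def pkhbt(src1, src2, shamt=0):
    # Bottom halfword from src1, top halfword from (src2 LSL shamt).
    return (src1 & 0xFFFF) | (((src2 << shamt) & 0xFFFFFFFF) & 0xFFFF0000)

def pkhtb(src1, src2, shamt):
    # Top halfword from src1, bottom halfword from (src2 ASR shamt).
    # A shamt of 0 would duplicate PKHBT, which is why the pattern
    # above restricts it to imm16_31.
    return (src1 & 0xFFFF0000) | (asr32(src2, shamt) & 0xFFFF)
```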
@@ -1012,15 +1600,15 @@ def : T2Pat<(or (and GPR:$src1, 0xFFFF0000),
 //  Comparison Instructions...
 //
 
-defm t2CMP  : T2I_cmp_is<"cmp",
-                         BinOpFrag<(ARMcmp node:$LHS, node:$RHS)>>;
-defm t2CMPz : T2I_cmp_is<"cmp",
-                         BinOpFrag<(ARMcmpZ node:$LHS, node:$RHS)>>;
+defm t2CMP  : T2I_cmp_irs<0b1101, "cmp",
+                          BinOpFrag<(ARMcmp node:$LHS, node:$RHS)>>;
+defm t2CMPz : T2I_cmp_irs<0b1101, "cmp",
+                          BinOpFrag<(ARMcmpZ node:$LHS, node:$RHS)>>;
 
-defm t2CMN  : T2I_cmp_is<"cmn",
-                         BinOpFrag<(ARMcmp node:$LHS,(ineg node:$RHS))>>;
-defm t2CMNz : T2I_cmp_is<"cmn",
-                         BinOpFrag<(ARMcmpZ node:$LHS,(ineg node:$RHS))>>;
+defm t2CMN  : T2I_cmp_irs<0b1000, "cmn",
+                          BinOpFrag<(ARMcmp node:$LHS,(ineg node:$RHS))>>;
+defm t2CMNz : T2I_cmp_irs<0b1000, "cmn",
+                          BinOpFrag<(ARMcmpZ node:$LHS,(ineg node:$RHS))>>;
 
 def : T2Pat<(ARMcmp  GPR:$src, t2_so_imm_neg:$imm),
             (t2CMNri GPR:$src, t2_so_imm_neg:$imm)>;
@@ -1028,10 +1616,10 @@ def : T2Pat<(ARMcmp  GPR:$src, t2_so_imm_neg:$imm),
 def : T2Pat<(ARMcmpZ  GPR:$src, t2_so_imm_neg:$imm),
             (t2CMNri   GPR:$src, t2_so_imm_neg:$imm)>;
 
-defm t2TST  : T2I_cmp_is<"tst",
-                         BinOpFrag<(ARMcmpZ (and node:$LHS, node:$RHS), 0)>>;
-defm t2TEQ  : T2I_cmp_is<"teq",
-                         BinOpFrag<(ARMcmpZ (xor node:$LHS, node:$RHS), 0)>>;
+defm t2TST  : T2I_cmp_irs<0b0000, "tst",
+                          BinOpFrag<(ARMcmpZ (and node:$LHS, node:$RHS), 0)>>;
+defm t2TEQ  : T2I_cmp_irs<0b0100, "teq",
+                          BinOpFrag<(ARMcmpZ (xor node:$LHS, node:$RHS), 0)>>;
 
 // A8.6.27  CBNZ, CBZ - Compare and branch on (non)zero.
 // Short range conditional branch. Looks awesome for loops. Need to figure
@@ -1044,25 +1632,54 @@ defm t2TEQ  : T2I_cmp_is<"teq",
 def t2MOVCCr : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true), IIC_iCMOVr,
                    "mov", ".w\t$dst, $true",
       [/*(set GPR:$dst, (ARMcmov GPR:$false, GPR:$true, imm:$cc, CCR:$ccr))*/]>,
-                RegConstraint<"$false = $dst">;
+                RegConstraint<"$false = $dst"> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = 0; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{14-12} = 0b000;
+  let Inst{7-4} = 0b0000;
+}
 
 def t2MOVCCi : T2I<(outs GPR:$dst), (ins GPR:$false, t2_so_imm:$true),
                    IIC_iCMOVi, "mov", ".w\t$dst, $true",
 [/*(set GPR:$dst, (ARMcmov GPR:$false, t2_so_imm:$true, imm:$cc, CCR:$ccr))*/]>,
-                   RegConstraint<"$false = $dst">;
-
-def t2MOVCClsl : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "lsl", ".w\t$dst, $true, $rhs", []>,
-                   RegConstraint<"$false = $dst">;
-def t2MOVCClsr : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "lsr", ".w\t$dst, $true, $rhs", []>,
-                   RegConstraint<"$false = $dst">;
-def t2MOVCCasr : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "asr", ".w\t$dst, $true, $rhs", []>,
-                   RegConstraint<"$false = $dst">;
-def t2MOVCCror : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "ror", ".w\t$dst, $true, $rhs", []>,
-                   RegConstraint<"$false = $dst">;
+                   RegConstraint<"$false = $dst"> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 0;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = 0; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{15} = 0;
+}
+
+class T2I_movcc_sh<bits<2> opcod, dag oops, dag iops, InstrItinClass itin,
+                   string opc, string asm, list<dag> pattern>
+  : T2I<oops, iops, itin, opc, asm, pattern> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b01;
+  let Inst{24-21} = 0b0010;
+  let Inst{20} = 0; // The S bit.
+  let Inst{19-16} = 0b1111; // Rn
+  let Inst{5-4} = opcod; // Shift type.
+}
+def t2MOVCClsl : T2I_movcc_sh<0b00, (outs GPR:$dst),
+                             (ins GPR:$false, GPR:$true, i32imm:$rhs),
+                             IIC_iCMOVsi, "lsl", ".w\t$dst, $true, $rhs", []>,
+                 RegConstraint<"$false = $dst">;
+def t2MOVCClsr : T2I_movcc_sh<0b01, (outs GPR:$dst),
+                             (ins GPR:$false, GPR:$true, i32imm:$rhs),
+                             IIC_iCMOVsi, "lsr", ".w\t$dst, $true, $rhs", []>,
+                 RegConstraint<"$false = $dst">;
+def t2MOVCCasr : T2I_movcc_sh<0b10, (outs GPR:$dst),
+                             (ins GPR:$false, GPR:$true, i32imm:$rhs),
+                             IIC_iCMOVsi, "asr", ".w\t$dst, $true, $rhs", []>,
+                 RegConstraint<"$false = $dst">;
+def t2MOVCCror : T2I_movcc_sh<0b11, (outs GPR:$dst),
+                             (ins GPR:$false, GPR:$true, i32imm:$rhs),
+                             IIC_iCMOVsi, "ror", ".w\t$dst, $true, $rhs", []>,
+                 RegConstraint<"$false = $dst">;
 
 //===----------------------------------------------------------------------===//
 // Atomic operations intrinsics
@@ -1075,7 +1692,9 @@ def t2Int_MemBarrierV7 : AInoP<(outs), (ins),
                         "dmb", "",
                         [(ARMMemBarrierV7)]>,
                         Requires<[IsThumb2]> {
+  let Inst{31-4} = 0xF3BF8F5;
   // FIXME: add support for options other than a full system DMB
+  let Inst{3-0} = 0b1111;
 }
 
 def t2Int_SyncBarrierV7 : AInoP<(outs), (ins),
@@ -1083,47 +1702,76 @@ def t2Int_SyncBarrierV7 : AInoP<(outs), (ins),
                         "dsb", "",
                         [(ARMSyncBarrierV7)]>,
                         Requires<[IsThumb2]> {
+  let Inst{31-4} = 0xF3BF8F4;
   // FIXME: add support for options other than a full system DSB
+  let Inst{3-0} = 0b1111;
+}
+}
+
+class T2I_ldrex<bits<2> opcod, dag oops, dag iops, AddrMode am, SizeFlagVal sz,
+                InstrItinClass itin, string opc, string asm, string cstr,
+                list<dag> pattern, bits<4> rt2 = 0b1111>
+  : Thumb2I<oops, iops, am, sz, itin, opc, asm, cstr, pattern> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0001101;
+  let Inst{11-8} = rt2;
+  let Inst{7-6} = 0b01;
+  let Inst{5-4} = opcod;
+  let Inst{3-0} = 0b1111;
 }
+class T2I_strex<bits<2> opcod, dag oops, dag iops, AddrMode am, SizeFlagVal sz,
+                InstrItinClass itin, string opc, string asm, string cstr,
+                list<dag> pattern, bits<4> rt2 = 0b1111>
+  : Thumb2I<oops, iops, am, sz, itin, opc, asm, cstr, pattern> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0001100;
+  let Inst{11-8} = rt2;
+  let Inst{7-6} = 0b01;
+  let Inst{5-4} = opcod;
 }
 
 let mayLoad = 1 in {
-def t2LDREXB : Thumb2I<(outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
-                      Size4Bytes, NoItinerary,
-                      "ldrexb", "\t$dest, [$ptr]", "",
-                      []>;
-def t2LDREXH : Thumb2I<(outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
-                      Size4Bytes, NoItinerary,
-                      "ldrexh", "\t$dest, [$ptr]", "",
-                      []>;
+def t2LDREXB : T2I_ldrex<0b00, (outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
+                         Size4Bytes, NoItinerary, "ldrexb", "\t$dest, [$ptr]",
+                         "", []>;
+def t2LDREXH : T2I_ldrex<0b01, (outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
+                         Size4Bytes, NoItinerary, "ldrexh", "\t$dest, [$ptr]",
+                         "", []>;
 def t2LDREX  : Thumb2I<(outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
-                      Size4Bytes, NoItinerary,
-                      "ldrex", "\t$dest, [$ptr]", "",
-                      []>;
-def t2LDREXD : Thumb2I<(outs GPR:$dest, GPR:$dest2), (ins GPR:$ptr),
-                      AddrModeNone, Size4Bytes, NoItinerary,
-                      "ldrexd", "\t$dest, $dest2, [$ptr]", "",
-                      []>;
-}
-
-let mayStore = 1 in {
-def t2STREXB : Thumb2I<(outs GPR:$success), (ins GPR:$src, GPR:$ptr),
-                      AddrModeNone, Size4Bytes, NoItinerary,
-                      "strexb", "\t$success, $src, [$ptr]", "",
-                      []>;
-def t2STREXH : Thumb2I<(outs GPR:$success), (ins GPR:$src, GPR:$ptr),
-                      AddrModeNone, Size4Bytes, NoItinerary,
-                      "strexh", "\t$success, $src, [$ptr]", "",
-                      []>;
+                       Size4Bytes, NoItinerary,
+                       "ldrex", "\t$dest, [$ptr]", "",
+                      []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0000101;
+  let Inst{11-8} = 0b1111;
+  let Inst{7-0} = 0b00000000; // imm8 = 0
+}
+def t2LDREXD : T2I_ldrex<0b11, (outs GPR:$dest, GPR:$dest2), (ins GPR:$ptr),
+                         AddrModeNone, Size4Bytes, NoItinerary,
+                         "ldrexd", "\t$dest, $dest2, [$ptr]", "",
+                         [], {?, ?, ?, ?}>;
+}
+
+let mayStore = 1, Constraints = "@earlyclobber $success" in {
+def t2STREXB : T2I_strex<0b00, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                         AddrModeNone, Size4Bytes, NoItinerary,
+                         "strexb", "\t$success, $src, [$ptr]", "", []>;
+def t2STREXH : T2I_strex<0b01, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                         AddrModeNone, Size4Bytes, NoItinerary,
+                         "strexh", "\t$success, $src, [$ptr]", "", []>;
 def t2STREX  : Thumb2I<(outs GPR:$success), (ins GPR:$src, GPR:$ptr),
-                      AddrModeNone, Size4Bytes, NoItinerary,
-                      "strex", "\t$success, $src, [$ptr]", "",
-                      []>;
-def t2STREXD : Thumb2I<(outs GPR:$success),
-                      (ins GPR:$src, GPR:$src2, GPR:$ptr),
-                      AddrModeNone, Size4Bytes, NoItinerary,
-                      "strexd", "\t$success, $src, $src2, [$ptr]", "",
-                      []>;
+                       AddrModeNone, Size4Bytes, NoItinerary,
+                       "strex", "\t$success, $src, [$ptr]", "",
+                      []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0000100;
+  let Inst{7-0} = 0b00000000; // imm8 = 0
+}
+def t2STREXD : T2I_strex<0b11, (outs GPR:$success),
+                         (ins GPR:$src, GPR:$src2, GPR:$ptr),
+                         AddrModeNone, Size4Bytes, NoItinerary,
+                         "strexd", "\t$success, $src, $src2, [$ptr]", "", [],
+                         {?, ?, ?, ?}>;
 }
 
 //===----------------------------------------------------------------------===//
@@ -1135,7 +1783,11 @@ let isCall = 1,
   Defs = [R0, R12, LR, CPSR] in {
   def t2TPsoft : T2XI<(outs), (ins), IIC_Br,
                      "bl\t__aeabi_read_tp",
-                     [(set R0, ARMthread_pointer)]>;
+                     [(set R0, ARMthread_pointer)]> {
+    let Inst{31-27} = 0b11110;
+    let Inst{15-14} = 0b11;
+    let Inst{12} = 1;
+  }
 }
 
 //===----------------------------------------------------------------------===//
@@ -1183,31 +1835,61 @@ let isReturn = 1, isTerminator = 1, isBarrier = 1, mayLoad = 1,
   def t2LDM_RET : T2XI<(outs),
                     (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
                     IIC_Br, "ldm${addr:submode}${p}${addr:wide}\t$addr, $wb",
-                    []>;
+                    []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-25} = 0b00;
+  let Inst{24-23} = {?, ?}; // IA: '01', DB: '10'
+  let Inst{22} = 0;
+  let Inst{21} = ?; // The W bit.
+  let Inst{20} = 1; // Load
+}
 
 let isBranch = 1, isTerminator = 1, isBarrier = 1 in {
 let isPredicable = 1 in
 def t2B   : T2XI<(outs), (ins brtarget:$target), IIC_Br,
                  "b.w\t$target",
-                 [(br bb:$target)]>;
+                 [(br bb:$target)]> {
+  let Inst{31-27} = 0b11110;
+  let Inst{15-14} = 0b10;
+  let Inst{12} = 1;
+}
 
 let isNotDuplicable = 1, isIndirectBranch = 1 in {
 def t2BR_JT :
     T2JTI<(outs),
           (ins GPR:$target, GPR:$index, jt2block_operand:$jt, i32imm:$id),
            IIC_Br, "mov\tpc, $target\n$jt",
-          [(ARMbr2jt GPR:$target, GPR:$index, tjumptable:$jt, imm:$id)]>;
+          [(ARMbr2jt GPR:$target, GPR:$index, tjumptable:$jt, imm:$id)]> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0100100;
+  let Inst{19-16} = 0b1111;
+  let Inst{14-12} = 0b000;
+  let Inst{11-8} = 0b1111; // Rd = pc
+  let Inst{7-4} = 0b0000;
+}
 
 // FIXME: Add a non-pc based case that can be predicated.
 def t2TBB :
     T2JTI<(outs),
         (ins tb_addrmode:$index, jt2block_operand:$jt, i32imm:$id),
-         IIC_Br, "tbb\t$index\n$jt", []>;
+         IIC_Br, "tbb\t$index\n$jt", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0001101;
+  let Inst{19-16} = 0b1111; // Rn = pc (table follows this instruction)
+  let Inst{15-8} = 0b11110000;
+  let Inst{7-4} = 0b0000; // B form
+}
 
 def t2TBH :
     T2JTI<(outs),
         (ins tb_addrmode:$index, jt2block_operand:$jt, i32imm:$id),
-         IIC_Br, "tbh\t$index\n$jt", []>;
+         IIC_Br, "tbh\t$index\n$jt", []> {
+  let Inst{31-27} = 0b11101;
+  let Inst{26-20} = 0b0001101;
+  let Inst{19-16} = 0b1111; // Rn = pc (table follows this instruction)
+  let Inst{15-8} = 0b11110000;
+  let Inst{7-4} = 0b0001; // H form
+}
 } // isNotDuplicable, isIndirectBranch
 
 } // isBranch, isTerminator, isBarrier
@@ -1217,13 +1899,21 @@ def t2TBH :
 let isBranch = 1, isTerminator = 1 in
 def t2Bcc : T2I<(outs), (ins brtarget:$target), IIC_Br,
                 "b", ".w\t$target",
-                [/*(ARMbrcond bb:$target, imm:$cc)*/]>;
+                [/*(ARMbrcond bb:$target, imm:$cc)*/]> {
+  let Inst{31-27} = 0b11110;
+  let Inst{15-14} = 0b10;
+  let Inst{12} = 0;
+}
 
 
 // IT block
 def t2IT : Thumb2XI<(outs), (ins it_pred:$cc, it_mask:$mask),
                     AddrModeNone, Size2Bytes,  IIC_iALUx,
-                    "it$mask\t$cc", "", []>;
+                    "it$mask\t$cc", "", []> {
+  // 16-bit instruction.
+  let Inst{31-16} = 0x0000;
+  let Inst{15-8} = 0b10111111;
+}
 
 //===----------------------------------------------------------------------===//
 // Non-Instruction Patterns
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp
index aa50cfd..bef5a06 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp
@@ -139,17 +139,11 @@ ARMJITInfo::getLazyResolverFunction(JITCompilerFn F) {
 
 void *ARMJITInfo::emitGlobalValueIndirectSym(const GlobalValue *GV, void *Ptr,
                                              JITCodeEmitter &JCE) {
-  MachineCodeEmitter::BufferState BS;
-  JCE.startGVStub(BS, GV, 4, 4);
-  intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
-  if (!sys::Memory::setRangeWritable((void*)Addr, 4)) {
-    llvm_unreachable("ERROR: Unable to mark indirect symbol writable");
-  }
-  JCE.emitWordLE((intptr_t)Ptr);
-  if (!sys::Memory::setRangeExecutable((void*)Addr, 4)) {
-    llvm_unreachable("ERROR: Unable to mark indirect symbol executable");
-  }
-  void *PtrAddr = JCE.finishGVStub(BS);
+  uint8_t Buffer[4];
+  uint8_t *Cur = Buffer;
+  MachineCodeEmitter::emitWordLEInto(Cur, (intptr_t)Ptr);
+  void *PtrAddr = JCE.allocIndirectGV(
+      GV, Buffer, sizeof(Buffer), /*Alignment=*/4);
   addIndirectSymAddr(Ptr, (intptr_t)PtrAddr);
   return PtrAddr;
 }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
index 22bd80e..b13f98a 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -78,7 +78,7 @@ namespace {
       MachineBasicBlock::iterator MBBI;
       bool Merged;
       MemOpQueueEntry(int o, int p, MachineBasicBlock::iterator i)
-        : Offset(o), Position(p), MBBI(i), Merged(false) {};
+        : Offset(o), Position(p), MBBI(i), Merged(false) {}
     };
     typedef SmallVector<MemOpQueueEntry,8> MemOpQueue;
     typedef MemOpQueue::iterator MemOpQueueIter;
@@ -87,6 +87,20 @@ namespace {
                   int Offset, unsigned Base, bool BaseKill, int Opcode,
                   ARMCC::CondCodes Pred, unsigned PredReg, unsigned Scratch,
                   DebugLoc dl, SmallVector<std::pair<unsigned, bool>, 8> &Regs);
+    void MergeOpsUpdate(MachineBasicBlock &MBB,
+                        MemOpQueue &MemOps,
+                        unsigned memOpsBegin,
+                        unsigned memOpsEnd,
+                        unsigned insertAfter,
+                        int Offset,
+                        unsigned Base,
+                        bool BaseKill,
+                        int Opcode,
+                        ARMCC::CondCodes Pred,
+                        unsigned PredReg,
+                        unsigned Scratch,
+                        DebugLoc dl,
+                        SmallVector<MachineBasicBlock::iterator, 4> &Merges);
     void MergeLDR_STR(MachineBasicBlock &MBB, unsigned SIndex, unsigned Base,
                       int Opcode, unsigned Size,
                       ARMCC::CondCodes Pred, unsigned PredReg,
@@ -248,6 +262,67 @@ ARMLoadStoreOpt::MergeOps(MachineBasicBlock &MBB,
   return true;
 }
 
+// MergeOpsUpdate - call MergeOps and update MemOps and merges accordingly on
+// success.
+void ARMLoadStoreOpt::
+MergeOpsUpdate(MachineBasicBlock &MBB,
+               MemOpQueue &memOps,
+               unsigned memOpsBegin,
+               unsigned memOpsEnd,
+               unsigned insertAfter,
+               int Offset,
+               unsigned Base,
+               bool BaseKill,
+               int Opcode,
+               ARMCC::CondCodes Pred,
+               unsigned PredReg,
+               unsigned Scratch,
+               DebugLoc dl,
+               SmallVector<MachineBasicBlock::iterator, 4> &Merges) {
+  // First calculate which of the registers should be killed by the merged
+  // instruction.
+  SmallVector<std::pair<unsigned, bool>, 8> Regs;
+  const unsigned insertPos = memOps[insertAfter].Position;
+  for (unsigned i = memOpsBegin; i < memOpsEnd; ++i) {
+    const MachineOperand &MO = memOps[i].MBBI->getOperand(0);
+    unsigned Reg = MO.getReg();
+    bool isKill = MO.isKill();
+
+    // If we are inserting the merged operation after an unmerged operation that
+    // uses the same register, make sure to transfer any kill flag.
+    for (unsigned j = memOpsEnd, e = memOps.size(); !isKill && j != e; ++j)
+      if (memOps[j].Position<insertPos) {
+        const MachineOperand &MOJ = memOps[j].MBBI->getOperand(0);
+        if (MOJ.getReg() == Reg && MOJ.isKill())
+          isKill = true;
+      }
+
+    Regs.push_back(std::make_pair(Reg, isKill));
+  }
+
+  // Try to do the merge.
+  MachineBasicBlock::iterator Loc = memOps[insertAfter].MBBI;
+  Loc++;
+  if (!MergeOps(MBB, Loc, Offset, Base, BaseKill, Opcode,
+                Pred, PredReg, Scratch, dl, Regs))
+    return;
+
+  // Merge succeeded, update records.
+  Merges.push_back(prior(Loc));
+  for (unsigned i = memOpsBegin; i < memOpsEnd; ++i) {
+    // Remove kill flags from any unmerged memops that come before insertPos.
+    if (Regs[i-memOpsBegin].second)
+      for (unsigned j = memOpsEnd, e = memOps.size(); j != e; ++j)
+        if (memOps[j].Position<insertPos) {
+          MachineOperand &MOJ = memOps[j].MBBI->getOperand(0);
+          if (MOJ.getReg() == Regs[i-memOpsBegin].first && MOJ.isKill())
+            MOJ.setIsKill(false);
+        }
+    MBB.erase(memOps[i].MBBI);
+    memOps[i].Merged = true;
+  }
+}
+
 /// MergeLDR_STR - Merge a number of load / store instructions into one or more
 /// load / store multiple instructions.
 void
@@ -259,58 +334,42 @@ ARMLoadStoreOpt::MergeLDR_STR(MachineBasicBlock &MBB, unsigned SIndex,
   bool isAM4 = isi32Load(Opcode) || isi32Store(Opcode);
   int Offset = MemOps[SIndex].Offset;
   int SOffset = Offset;
-  unsigned Pos = MemOps[SIndex].Position;
+  unsigned insertAfter = SIndex;
   MachineBasicBlock::iterator Loc = MemOps[SIndex].MBBI;
   DebugLoc dl = Loc->getDebugLoc();
-  unsigned PReg = Loc->getOperand(0).getReg();
-  unsigned PRegNum = ARMRegisterInfo::getRegisterNumbering(PReg);
-  bool isKill = Loc->getOperand(0).isKill();
+  const MachineOperand &PMO = Loc->getOperand(0);
+  unsigned PReg = PMO.getReg();
+  unsigned PRegNum = PMO.isUndef() ? UINT_MAX
+    : ARMRegisterInfo::getRegisterNumbering(PReg);
 
-  SmallVector<std::pair<unsigned,bool>, 8> Regs;
-  Regs.push_back(std::make_pair(PReg, isKill));
   for (unsigned i = SIndex+1, e = MemOps.size(); i != e; ++i) {
     int NewOffset = MemOps[i].Offset;
-    unsigned Reg = MemOps[i].MBBI->getOperand(0).getReg();
-    unsigned RegNum = ARMRegisterInfo::getRegisterNumbering(Reg);
-    isKill = MemOps[i].MBBI->getOperand(0).isKill();
+    const MachineOperand &MO = MemOps[i].MBBI->getOperand(0);
+    unsigned Reg = MO.getReg();
+    unsigned RegNum = MO.isUndef() ? UINT_MAX
+      : ARMRegisterInfo::getRegisterNumbering(Reg);
     // AM4 - register numbers in ascending order.
     // AM5 - consecutive register numbers in ascending order.
     if (NewOffset == Offset + (int)Size &&
         ((isAM4 && RegNum > PRegNum) || RegNum == PRegNum+1)) {
       Offset += Size;
-      Regs.push_back(std::make_pair(Reg, isKill));
       PRegNum = RegNum;
     } else {
       // Can't merge this in. Try merge the earlier ones first.
-      if (MergeOps(MBB, ++Loc, SOffset, Base, false, Opcode, Pred, PredReg,
-                   Scratch, dl, Regs)) {
-        Merges.push_back(prior(Loc));
-        for (unsigned j = SIndex; j < i; ++j) {
-          MBB.erase(MemOps[j].MBBI);
-          MemOps[j].Merged = true;
-        }
-      }
+      MergeOpsUpdate(MBB, MemOps, SIndex, i, insertAfter, SOffset,
+                     Base, false, Opcode, Pred, PredReg, Scratch, dl, Merges);
       MergeLDR_STR(MBB, i, Base, Opcode, Size, Pred, PredReg, Scratch,
                    MemOps, Merges);
       return;
     }
 
-    if (MemOps[i].Position > Pos) {
-      Pos = MemOps[i].Position;
-      Loc = MemOps[i].MBBI;
-    }
+    if (MemOps[i].Position > MemOps[insertAfter].Position)
+      insertAfter = i;
   }
 
   bool BaseKill = Loc->findRegisterUseOperandIdx(Base, true) != -1;
-  if (MergeOps(MBB, ++Loc, SOffset, Base, BaseKill, Opcode, Pred, PredReg,
-               Scratch, dl, Regs)) {
-    Merges.push_back(prior(Loc));
-    for (unsigned i = SIndex, e = MemOps.size(); i != e; ++i) {
-      MBB.erase(MemOps[i].MBBI);
-      MemOps[i].Merged = true;
-    }
-  }
-
+  MergeOpsUpdate(MBB, MemOps, SIndex, MemOps.size(), insertAfter, SOffset,
+                 Base, BaseKill, Opcode, Pred, PredReg, Scratch, dl, Merges);
   return;
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td
index d393e8d..9fbde81 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td
@@ -367,6 +367,19 @@ def QPR_8 : RegisterClass<"ARM", [v16i8, v8i16, v4i32, v2i64, v4f32, v2f64],
 // Condition code registers.
 def CCR : RegisterClass<"ARM", [i32], 32, [CPSR]>;
 
+// Just the stack pointer (for tSTRspi and friends).
+def JustSP : RegisterClass<"ARM", [i32], 32, [SP]> {
+  let MethodProtos = [{
+    iterator allocation_order_end(const MachineFunction &MF) const;
+  }];
+  let MethodBodies = [{
+      JustSPClass::iterator
+      JustSPClass::allocation_order_end(const MachineFunction &MF) const {
+        return allocation_order_begin(MF);
+      }
+  }];
+}
+
 //===----------------------------------------------------------------------===//
 // Subregister Set Definitions... now that we have all of the pieces, define the
 // sub registers for each register.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
index 894f913..ed4667b 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
@@ -15,6 +15,7 @@
 #include "llvm/MC/MCStreamer.h"
 #include "llvm/MC/MCExpr.h"
 #include "llvm/MC/MCInst.h"
+#include "llvm/Support/Compiler.h"
 #include "llvm/Support/SourceMgr.h"
 #include "llvm/Target/TargetRegistry.h"
 #include "llvm/Target/TargetAsmParser.h"
@@ -98,10 +99,6 @@ public:
   virtual bool ParseDirective(AsmToken DirectiveID);
 };
   
-} // end anonymous namespace
-
-namespace {
-
 /// ARMOperand - Instances of this class represent a parsed ARM machine
 /// instruction.
 struct ARMOperand {
@@ -670,7 +667,7 @@ bool ARMAsmParser::ParseDirectiveThumbFunc(SMLoc L) {
   const AsmToken &Tok = getLexer().getTok();
   if (Tok.isNot(AsmToken::Identifier) && Tok.isNot(AsmToken::String))
     return Error(L, "unexpected token in .syntax directive");
-  StringRef SymbolName = getLexer().getTok().getIdentifier();
+  StringRef ATTRIBUTE_UNUSED SymbolName = getLexer().getTok().getIdentifier();
   getLexer().Lex(); // Consume the identifier token.
 
   if (getLexer().isNot(AsmToken::EndOfStatement))
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
index 362bbf1..931d8df 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
@@ -23,6 +23,7 @@
 #include "ARMTargetMachine.h"
 #include "llvm/Constants.h"
 #include "llvm/Module.h"
+#include "llvm/Type.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/CodeGen/DwarfWriter.h"
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
index 66d3b83..e875394 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
@@ -180,7 +180,7 @@ restoreCalleeSavedRegisters(MachineBasicBlock &MBB,
   AddDefaultPred(MIB);
   MIB.addReg(0); // No write back.
 
-  bool NumRegs = 0;
+  bool NumRegs = false;
   for (unsigned i = CSI.size(); i != 0; --i) {
     unsigned Reg = CSI[i-1].getReg();
     if (Reg == ARM::LR) {
@@ -192,7 +192,7 @@ restoreCalleeSavedRegisters(MachineBasicBlock &MBB,
       MI = MBB.erase(MI);
     }
     MIB.addReg(Reg, getDefRegState(true));
-    ++NumRegs;
+    NumRegs = true;
   }
 
   // It's illegal to emit pop instruction without operands.
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h
index 73d30bf..7587b03 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h
@@ -50,7 +50,7 @@ public:
       return isPPC64 ? -8U : -4U;
     
     // SVR4 ABI: First slot in the general register save area.
-    return -4U;
+    return isPPC64 ? -8U : -4U;
   }
   
   /// getLinkageSize - Return the size of the PowerPC ABI linkage area.
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
index 30a7861..8248c94 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
@@ -419,6 +419,9 @@ const char *PPCTargetLowering::getTargetNodeName(unsigned Opcode) const {
   case PPCISD::Hi:              return "PPCISD::Hi";
   case PPCISD::Lo:              return "PPCISD::Lo";
   case PPCISD::TOC_ENTRY:       return "PPCISD::TOC_ENTRY";
+  case PPCISD::TOC_RESTORE:     return "PPCISD::TOC_RESTORE";
+  case PPCISD::LOAD:            return "PPCISD::LOAD";
+  case PPCISD::LOAD_TOC:        return "PPCISD::LOAD_TOC";
   case PPCISD::DYNALLOC:        return "PPCISD::DYNALLOC";
   case PPCISD::GlobalBaseReg:   return "PPCISD::GlobalBaseReg";
   case PPCISD::SRL:             return "PPCISD::SRL";
@@ -1330,7 +1333,7 @@ SDValue PPCTargetLowering::LowerTRAMPOLINE(SDValue Op, SelectionDAG &DAG) {
                 false, false, false, false, 0, CallingConv::C, false,
                 /*isReturnValueUsed=*/true,
                 DAG.getExternalSymbol("__trampoline_setup", PtrVT),
-                Args, DAG, dl);
+                Args, DAG, dl, DAG.GetOrdering(Chain.getNode()));
 
   SDValue Ops[] =
     { CallResult.first, CallResult.second };
@@ -2428,7 +2431,7 @@ unsigned PrepareCall(SelectionDAG &DAG, SDValue &Callee, SDValue &InFlag,
                      SDValue &Chain, DebugLoc dl, int SPDiff, bool isTailCall,
                      SmallVector<std::pair<unsigned, SDValue>, 8> &RegsToPass,
                      SmallVector<SDValue, 8> &Ops, std::vector<EVT> &NodeTys,
-                     bool isSVR4ABI) {
+                     bool isPPC64, bool isSVR4ABI) {
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
   NodeTys.push_back(MVT::Other);   // Returns a chain
   NodeTys.push_back(MVT::Flag);    // Returns a flag for retval copy to use.
@@ -2449,6 +2452,74 @@ unsigned PrepareCall(SelectionDAG &DAG, SDValue &Callee, SDValue &InFlag,
     // Otherwise, this is an indirect call.  We have to use a MTCTR/BCTRL pair
     // to do the call, we can't use PPCISD::CALL.
     SDValue MTCTROps[] = {Chain, Callee, InFlag};
+
+    if (isSVR4ABI && isPPC64) {
+      // Function pointers in the 64-bit SVR4 ABI do not point to the function
+      // entry point, but to the function descriptor (the function entry point
+      // address is part of the function descriptor though).
+      // The function descriptor is a three doubleword structure with the
+      // following fields: function entry point, TOC base address and
+      // environment pointer.
+      // Thus for a call through a function pointer, the following actions need
+      // to be performed:
+      //   1. Save the TOC of the caller in the TOC save area of its stack
+      //      frame (this is done in LowerCall_Darwin()).
+      //   2. Load the address of the function entry point from the function
+      //      descriptor.
+      //   3. Load the TOC of the callee from the function descriptor into r2.
+      //   4. Load the environment pointer from the function descriptor into
+      //      r11.
+      //   5. Branch to the function entry point address.
+      //   6. On return of the callee, the TOC of the caller needs to be
+      //      restored (this is done in FinishCall()).
+      //
+      // All those operations are flagged together to ensure that no other
+      // operations can be scheduled in between. E.g. without flagging the
+      // operations together, a TOC access in the caller could be scheduled
+      // between the load of the callee TOC and the branch to the callee, which
+      // results in the TOC access going through the TOC of the callee instead
+      // of going through the TOC of the caller, which leads to incorrect code.
+
+      // Load the address of the function entry point from the function
+      // descriptor.
+      SDVTList VTs = DAG.getVTList(MVT::i64, MVT::Other, MVT::Flag);
+      SDValue LoadFuncPtr = DAG.getNode(PPCISD::LOAD, dl, VTs, MTCTROps,
+                                        InFlag.getNode() ? 3 : 2);
+      Chain = LoadFuncPtr.getValue(1);
+      InFlag = LoadFuncPtr.getValue(2);
+
+      // Load environment pointer into r11.
+      // Offset of the environment pointer within the function descriptor.
+      SDValue PtrOff = DAG.getIntPtrConstant(16);
+
+      SDValue AddPtr = DAG.getNode(ISD::ADD, dl, MVT::i64, Callee, PtrOff);
+      SDValue LoadEnvPtr = DAG.getNode(PPCISD::LOAD, dl, VTs, Chain, AddPtr,
+                                       InFlag);
+      Chain = LoadEnvPtr.getValue(1);
+      InFlag = LoadEnvPtr.getValue(2);
+
+      SDValue EnvVal = DAG.getCopyToReg(Chain, dl, PPC::X11, LoadEnvPtr,
+                                        InFlag);
+      Chain = EnvVal.getValue(0);
+      InFlag = EnvVal.getValue(1);
+
+      // Load TOC of the callee into r2. We are using a target-specific load
+      // with r2 hard coded, because the result of a target-independent load
+      // would never go directly into r2, since r2 is a reserved register (which
+      // prevents the register allocator from allocating it), resulting in an
+      // additional register being allocated and an unnecessary move instruction
+      // being generated.
+      VTs = DAG.getVTList(MVT::Other, MVT::Flag);
+      SDValue LoadTOCPtr = DAG.getNode(PPCISD::LOAD_TOC, dl, VTs, Chain,
+                                       Callee, InFlag);
+      Chain = LoadTOCPtr.getValue(0);
+      InFlag = LoadTOCPtr.getValue(1);
+
+      MTCTROps[0] = Chain;
+      MTCTROps[1] = LoadFuncPtr;
+      MTCTROps[2] = InFlag;
+    }
+
     Chain = DAG.getNode(PPCISD::MTCTR, dl, NodeTys, MTCTROps,
                         2 + (InFlag.getNode() != 0));
     InFlag = Chain.getValue(1);
@@ -2523,6 +2594,7 @@ PPCTargetLowering::FinishCall(CallingConv::ID CallConv, DebugLoc dl,
   SmallVector<SDValue, 8> Ops;
   unsigned CallOpc = PrepareCall(DAG, Callee, InFlag, Chain, dl, SPDiff,
                                  isTailCall, RegsToPass, Ops, NodeTys,
+                                 PPCSubTarget.isPPC64(),
                                  PPCSubTarget.isSVR4ABI());
 
   // When performing tail call optimization the callee pops its arguments off
@@ -2569,8 +2641,23 @@ PPCTargetLowering::FinishCall(CallingConv::ID CallConv, DebugLoc dl,
   // stack frame. If caller and callee belong to the same module (and have the
   // same TOC), the NOP will remain unchanged.
   if (!isTailCall && PPCSubTarget.isSVR4ABI()&& PPCSubTarget.isPPC64()) {
-    // Insert NOP.
-    InFlag = DAG.getNode(PPCISD::NOP, dl, MVT::Flag, InFlag);
+    SDVTList VTs = DAG.getVTList(MVT::Other, MVT::Flag);
+    if (CallOpc == PPCISD::BCTRL_SVR4) {
+      // This is a call through a function pointer.
+      // Restore the caller TOC from the save area into R2.
+      // See PrepareCall() for more information about calls through function
+      // pointers in the 64-bit SVR4 ABI.
+      // We are using a target-specific load with r2 hard coded, because the
+      // result of a target-independent load would never go directly into r2,
+      // since r2 is a reserved register (which prevents the register allocator
+      // from allocating it), resulting in an additional register being
+      // allocated and an unnecessary move instruction being generated.
+      Chain = DAG.getNode(PPCISD::TOC_RESTORE, dl, VTs, Chain, InFlag);
+      InFlag = Chain.getValue(1);
+    } else {
+      // Otherwise insert NOP.
+      InFlag = DAG.getNode(PPCISD::NOP, dl, MVT::Flag, InFlag);
+    }
   }
 
   Chain = DAG.getCALLSEQ_END(Chain, DAG.getIntPtrConstant(NumBytes, true),
@@ -3123,6 +3210,21 @@ PPCTargetLowering::LowerCall_Darwin(SDValue Chain, SDValue Callee,
     Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other,
                         &MemOpChains[0], MemOpChains.size());
 
+  // Check if this is an indirect call (MTCTR/BCTRL).
+  // See PrepareCall() for more information about calls through function
+  // pointers in the 64-bit SVR4 ABI.
+  if (!isTailCall && isPPC64 && PPCSubTarget.isSVR4ABI() &&
+      !dyn_cast<GlobalAddressSDNode>(Callee) &&
+      !dyn_cast<ExternalSymbolSDNode>(Callee) &&
+      !isBLACompatibleAddress(Callee, DAG)) {
+    // Load r2 into a virtual register and store it to the TOC save area.
+    SDValue Val = DAG.getCopyFromReg(Chain, dl, PPC::X2, MVT::i64);
+    // TOC save area offset.
+    SDValue PtrOff = DAG.getIntPtrConstant(40);
+    SDValue AddPtr = DAG.getNode(ISD::ADD, dl, PtrVT, StackPtr, PtrOff);
+    Chain = DAG.getStore(Val.getValue(1), dl, Val, AddPtr, NULL, 0);
+  }
+
   // Build a sequence of copy-to-reg nodes chained together with token chain
   // and flag operands which copy the outgoing args into the appropriate regs.
   SDValue InFlag;
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
index e45b261..cf81395 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
@@ -61,6 +61,21 @@ namespace llvm {
       
       TOC_ENTRY,
 
+      /// The following three target-specific nodes are used for calls through
+      /// function pointers in the 64-bit SVR4 ABI.
+
+      /// Restore the TOC from the TOC save area of the current stack frame.
+      /// This is basically a hard coded load instruction which additionally
+      /// takes/produces a flag.
+      TOC_RESTORE,
+
+      /// Like a regular LOAD but additionally taking/producing a flag.
+      LOAD,
+
+      /// LOAD into r2 (also taking/producing a flag). Like TOC_RESTORE, this is
+      /// a hard coded load instruction.
+      LOAD_TOC,
+
       /// OPRC, CHAIN = DYNALLOC(CHAIN, NEGSIZE, FRAME_INDEX)
       /// This instruction is lowered in PPCRegisterInfo::eliminateFrameIndex to
       /// compute an allocation on the stack.
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td
index ebdc58b..219efb9 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td
@@ -559,6 +559,14 @@ def LDtoc: DSForm_1<58, 0, (outs G8RC:$rD), (ins tocentry:$disp, G8RC:$reg),
                     "ld $rD, $disp($reg)", LdStLD,
                     [(set G8RC:$rD,
                      (PPCtoc_entry tglobaladdr:$disp, G8RC:$reg))]>, isPPC64;
+let RST = 2, DS = 8 in
+def LDinto_toc: DSForm_1<58, 0, (outs), (ins G8RC:$reg),
+                    "ld 2, 8($reg)", LdStLD,
+                    [(PPCload_toc G8RC:$reg)]>, isPPC64;
+let RST = 2, DS = 40, RA = 1 in
+def LDtoc_restore : DSForm_1<58, 0, (outs), (ins),
+                    "ld 2, 40(1)", LdStLD,
+                    []>, isPPC64;
 def LDX  : XForm_1<31,  21, (outs G8RC:$rD), (ins memrr:$src),
                    "ldx $rD, $src", LdStLD,
                    [(set G8RC:$rD, (load xaddr:$src))]>, isPPC64;
@@ -571,6 +579,13 @@ def LDU  : DSForm_1<58, 1, (outs G8RC:$rD, ptr_rc:$ea_result), (ins memrix:$addr
 
 }
 
+def : Pat<(PPCtoc_restore),
+          (LDtoc_restore)>;
+def : Pat<(PPCload ixaddr:$src),
+          (LD ixaddr:$src)>;
+def : Pat<(PPCload xaddr:$src),
+          (LDX xaddr:$src)>;
+
 let PPC970_Unit = 2 in {
 // Truncating stores.                       
 def STB8 : DForm_1<38, (outs), (ins G8RC:$rS, memri:$src),
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td
index 2b3f80d..8fe151a 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td
@@ -115,6 +115,12 @@ def PPCcall_Darwin : SDNode<"PPCISD::CALL_Darwin", SDT_PPCCall,
 def PPCcall_SVR4  : SDNode<"PPCISD::CALL_SVR4", SDT_PPCCall,
                            [SDNPHasChain, SDNPOptInFlag, SDNPOutFlag]>;
 def PPCnop : SDNode<"PPCISD::NOP", SDT_PPCnop, [SDNPInFlag, SDNPOutFlag]>;
+def PPCload   : SDNode<"PPCISD::LOAD", SDTypeProfile<1, 1, []>,
+                       [SDNPHasChain, SDNPOptInFlag, SDNPOutFlag]>;
+def PPCload_toc : SDNode<"PPCISD::LOAD_TOC", SDTypeProfile<0, 1, []>,
+                          [SDNPHasChain, SDNPInFlag, SDNPOutFlag]>;
+def PPCtoc_restore : SDNode<"PPCISD::TOC_RESTORE", SDTypeProfile<0, 0, []>,
+                            [SDNPHasChain, SDNPInFlag, SDNPOutFlag]>;
 def PPCmtctr      : SDNode<"PPCISD::MTCTR", SDT_PPCCall,
                            [SDNPHasChain, SDNPOptInFlag, SDNPOutFlag]>;
 def PPCbctrl_Darwin  : SDNode<"PPCISD::BCTRL_Darwin", SDTNone,
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp
index c679bcd..be6e51e 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp
@@ -339,7 +339,6 @@ extern "C" void sys_icache_invalidate(const void *Addr, size_t len);
 
 void *PPCJITInfo::emitFunctionStub(const Function* F, void *Fn,
                                    JITCodeEmitter &JCE) {
-  MachineCodeEmitter::BufferState BS;
   // If this is just a call to an external function, emit a branch instead of a
   // call.  The code is the same except for one bit of the last instruction.
   if (Fn != (void*)(intptr_t)PPC32CompilationCallback && 
diff --git a/libclamav/c++/llvm/lib/Target/TargetData.cpp b/libclamav/c++/llvm/lib/Target/TargetData.cpp
index 9434a19..ba3cc9d 100644
--- a/libclamav/c++/llvm/lib/Target/TargetData.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetData.cpp
@@ -321,18 +321,24 @@ class StructLayoutMap : public AbstractTypeUser {
   typedef DenseMap<const StructType*, StructLayout*> LayoutInfoTy;
   LayoutInfoTy LayoutInfo;
 
+  void RemoveEntry(LayoutInfoTy::iterator I, bool WasAbstract) {
+    I->second->~StructLayout();
+    free(I->second);
+    if (WasAbstract)
+      I->first->removeAbstractTypeUser(this);
+    LayoutInfo.erase(I);
+  }
+  
+  
   /// refineAbstractType - The callback method invoked when an abstract type is
   /// resolved to another type.  An object must override this method to update
   /// its internal state to reference NewType instead of OldType.
   ///
   virtual void refineAbstractType(const DerivedType *OldTy,
                                   const Type *) {
-    const StructType *STy = cast<const StructType>(OldTy);
-    LayoutInfoTy::iterator Iter = LayoutInfo.find(STy);
-    Iter->second->~StructLayout();
-    free(Iter->second);
-    LayoutInfo.erase(Iter);
-    OldTy->removeAbstractTypeUser(this);
+    LayoutInfoTy::iterator I = LayoutInfo.find(cast<const StructType>(OldTy));
+    assert(I != LayoutInfo.end() && "Using type but not in map?");
+    RemoveEntry(I, true);
   }
 
   /// typeBecameConcrete - The other case which AbstractTypeUsers must be aware
@@ -341,12 +347,9 @@ class StructLayoutMap : public AbstractTypeUser {
   /// This method notifies ATU's when this occurs for a type.
   ///
   virtual void typeBecameConcrete(const DerivedType *AbsTy) {
-    const StructType *STy = cast<const StructType>(AbsTy);
-    LayoutInfoTy::iterator Iter = LayoutInfo.find(STy);
-    Iter->second->~StructLayout();
-    free(Iter->second);
-    LayoutInfo.erase(Iter);
-    AbsTy->removeAbstractTypeUser(this);
+    LayoutInfoTy::iterator I = LayoutInfo.find(cast<const StructType>(AbsTy));
+    assert(I != LayoutInfo.end() && "Using type but not in map?");
+    RemoveEntry(I, true);
   }
 
 public:
@@ -368,13 +371,7 @@ public:
   void InvalidateEntry(const StructType *Ty) {
     LayoutInfoTy::iterator I = LayoutInfo.find(Ty);
     if (I == LayoutInfo.end()) return;
-
-    I->second->~StructLayout();
-    free(I->second);
-    LayoutInfo.erase(I);
-
-    if (Ty->isAbstract())
-      Ty->removeAbstractTypeUser(this);
+    RemoveEntry(I, Ty->isAbstract());
   }
 
   StructLayout *&operator[](const StructType *STy) {
@@ -424,8 +421,7 @@ const StructLayout *TargetData::getStructLayout(const StructType *Ty) const {
 void TargetData::InvalidateStructLayoutInfo(const StructType *Ty) const {
   if (!LayoutMap) return;  // No cache.
   
-  StructLayoutMap *STM = static_cast<StructLayoutMap*>(LayoutMap);
-  STM->InvalidateEntry(Ty);
+  static_cast<StructLayoutMap*>(LayoutMap)->InvalidateEntry(Ty);
 }
 
 std::string TargetData::getStringRepresentation() const {
diff --git a/libclamav/c++/llvm/lib/Target/TargetMachine.cpp b/libclamav/c++/llvm/lib/Target/TargetMachine.cpp
index fec59b5..46bc9a3 100644
--- a/libclamav/c++/llvm/lib/Target/TargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetMachine.cpp
@@ -46,6 +46,7 @@ namespace llvm {
   bool DisableJumpTables;
   bool StrongPHIElim;
   bool AsmVerbosityDefault(false);
+  bool DisableScheduling;
 }
 
 static cl::opt<bool, true>
@@ -197,6 +198,11 @@ EnableStrongPHIElim(cl::Hidden, "strong-phi-elim",
   cl::desc("Use strong PHI elimination."),
   cl::location(StrongPHIElim),
   cl::init(false));
+static cl::opt<bool, true>
+DisableInstScheduling("disable-scheduling",
+  cl::desc("Disable instruction scheduling"),
+  cl::location(DisableScheduling),
+  cl::init(false));
 
 //---------------------------------------------------------------------------
 // TargetMachine Class
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
index 8ec5b62..c74b97a 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
@@ -45,12 +45,14 @@ void X86ATTInstPrinter::printSSECC(const MCInst *MI, unsigned Op) {
 }
 
 /// print_pcrel_imm - This is used to print an immediate value that ends up
-/// being encoded as a pc-relative value.  These print slightly differently, for
-/// example, a $ is not emitted.
+/// being encoded as a pc-relative value (e.g. for jumps and calls).  These
+/// print slightly differently than normal immediates.  For example, a $ is not
+/// emitted.
 void X86ATTInstPrinter::print_pcrel_imm(const MCInst *MI, unsigned OpNo) {
   const MCOperand &Op = MI->getOperand(OpNo);
   if (Op.isImm())
-    O << Op.getImm();
+    // Print this as a signed 32-bit value.
+    O << (int)Op.getImm();
   else {
     assert(Op.isExpr() && "unknown pcrel immediate operand");
     Op.getExpr()->print(O, &MAI);
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
index 38c0c28..1015b69 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
@@ -355,10 +355,6 @@ void X86MCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
   case X86::LEA64_32r: // Handle 'subreg rewriting' for the lea64_32mem operand.
     lower_lea64_32mem(&OutMI, 1);
     break;
-  case X86::MOV16r0:
-    OutMI.setOpcode(X86::MOV32r0);
-    lower_subreg32(&OutMI, 0);
-    break;
   case X86::MOVZX16rr8:
     OutMI.setOpcode(X86::MOVZX32rr8);
     lower_subreg32(&OutMI, 0);
diff --git a/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt
index 3ad65fb..4186fec 100644
--- a/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt
@@ -3,6 +3,7 @@ set(LLVM_TARGET_DEFINITIONS X86.td)
 tablegen(X86GenRegisterInfo.h.inc -gen-register-desc-header)
 tablegen(X86GenRegisterNames.inc -gen-register-enums)
 tablegen(X86GenRegisterInfo.inc -gen-register-desc)
+tablegen(X86GenDisassemblerTables.inc -gen-disassembler)
 tablegen(X86GenInstrNames.inc -gen-instr-enums)
 tablegen(X86GenInstrInfo.inc -gen-instr-desc)
 tablegen(X86GenAsmWriter.inc -gen-asm-writer)
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt
index b329e89..2a83a9c 100644
--- a/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt
@@ -2,5 +2,6 @@ include_directories( ${CMAKE_CURRENT_BINARY_DIR}/.. ${CMAKE_CURRENT_SOURCE_DIR}/
 
 add_llvm_library(LLVMX86Disassembler
   X86Disassembler.cpp
+  X86DisassemblerDecoder.c
   )
 add_dependencies(LLVMX86Disassembler X86CodeGenTable_gen)
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp
index 2ebbc9b..a316860 100644
--- a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp
@@ -6,18 +6,465 @@
 // License. See LICENSE.TXT for details.
 //
 //===----------------------------------------------------------------------===//
+//
+// This file is part of the X86 Disassembler.
+// It contains code to translate the data produced by the decoder into
+//  MCInsts.
+// Documentation for the disassembler can be found in X86Disassembler.h.
+//
+//===----------------------------------------------------------------------===//
 
+#include "X86Disassembler.h"
+#include "X86DisassemblerDecoder.h"
+
 #include "llvm/MC/MCDisassembler.h"
+#include "llvm/MC/MCInst.h"
 #include "llvm/Target/TargetRegistry.h"
-#include "X86.h"
+#include "llvm/Support/MemoryObject.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/raw_ostream.h"
+
+#include "X86GenRegisterNames.inc"
+
 using namespace llvm;
+using namespace llvm::X86Disassembler;
+
+namespace llvm {  
+  
+// Fill-ins to make the compiler happy.  These constants are never actually
+//   assigned; they are just filler to make an automatically-generated switch
+//   statement work.
+namespace X86 {
+  enum {
+    BX_SI = 500,
+    BX_DI = 501,
+    BP_SI = 502,
+    BP_DI = 503,
+    sib   = 504,
+    sib64 = 505
+  };
+}
+
+extern Target TheX86_32Target, TheX86_64Target;
+
+}
+
+static void translateInstruction(MCInst &target,
+                                 InternalInstruction &source);
+
+X86GenericDisassembler::X86GenericDisassembler(DisassemblerMode mode) :
+    MCDisassembler(),
+    fMode(mode) {
+}
+
+X86GenericDisassembler::~X86GenericDisassembler() {
+}
+
+/// regionReader - a callback function that wraps the readByte method from
+///   MemoryObject.
+///
+/// @param arg      - The generic callback parameter.  In this case, this should
+///                   be a pointer to a MemoryObject.
+/// @param byte     - A pointer to the byte to be read.
+/// @param address  - The address to be read.
+static int regionReader(void* arg, uint8_t* byte, uint64_t address) {
+  MemoryObject* region = static_cast<MemoryObject*>(arg);
+  return region->readByte(address, byte);
+}
+
+/// logger - a callback function that wraps the operator<< method from
+///   raw_ostream.
+///
+/// @param arg      - The generic callback parameter.  This should be a pointer
+///                   to a raw_ostream.
+/// @param log      - A string to be logged.  logger() adds a newline.
+static void logger(void* arg, const char* log) {
+  if (!arg)
+    return;
+  
+  raw_ostream &vStream = *(static_cast<raw_ostream*>(arg));
+  vStream << log << "\n";
+}  
+  
+//
+// Public interface for the disassembler
+//
+
+bool X86GenericDisassembler::getInstruction(MCInst &instr,
+                                            uint64_t &size,
+                                            const MemoryObject &region,
+                                            uint64_t address,
+                                            raw_ostream &vStream) const {
+  InternalInstruction internalInstr;
+  
+  int ret = decodeInstruction(&internalInstr,
+                              regionReader,
+                              (void*)&region,
+                              logger,
+                              (void*)&vStream,
+                              address,
+                              fMode);
+
+  if (ret) {
+    size = internalInstr.readerCursor - address;
+    return false;
+  }
+  else {
+    size = internalInstr.length;
+    translateInstruction(instr, internalInstr);
+    return true;
+  }
+}
+
+//
+// Private code that translates from struct InternalInstructions to MCInsts.
+//
+
+/// translateRegister - Translates an internal register to the appropriate LLVM
+///   register, and appends it as an operand to an MCInst.
+///
+/// @param mcInst     - The MCInst to append to.
+/// @param reg        - The Reg to append.
+static void translateRegister(MCInst &mcInst, Reg reg) {
+#define ENTRY(x) X86::x,
+  uint8_t llvmRegnums[] = {
+    ALL_REGS
+    0
+  };
+#undef ENTRY
+
+  uint8_t llvmRegnum = llvmRegnums[reg];
+  mcInst.addOperand(MCOperand::CreateReg(llvmRegnum));
+}
+
+/// translateImmediate  - Appends an immediate operand to an MCInst.
+///
+/// @param mcInst       - The MCInst to append to.
+/// @param immediate    - The immediate value to append.
+static void translateImmediate(MCInst &mcInst, uint64_t immediate) {
+  mcInst.addOperand(MCOperand::CreateImm(immediate));
+}
+
+/// translateRMRegister - Translates a register stored in the R/M field of the
+///   ModR/M byte to its LLVM equivalent and appends it to an MCInst.
+/// @param mcInst       - The MCInst to append to.
+/// @param insn         - The internal instruction to extract the R/M field
+///                       from.
+static void translateRMRegister(MCInst &mcInst,
+                                InternalInstruction &insn) {
+  assert(insn.eaBase != EA_BASE_sib && insn.eaBase != EA_BASE_sib64 && 
+         "An R/M register operand may not have a SIB byte");
+  
+  switch (insn.eaBase) {
+  case EA_BASE_NONE:
+    llvm_unreachable("EA_BASE_NONE for ModR/M base");
+    break;
+#define ENTRY(x) case EA_BASE_##x:
+  ALL_EA_BASES
+#undef ENTRY
+    llvm_unreachable("An R/M register operand may not have a base; "
+                     "the operand must be a register.");
+    break;
+#define ENTRY(x)                                                        \
+  case EA_REG_##x:                                                    \
+    mcInst.addOperand(MCOperand::CreateReg(X86::x)); break;
+  ALL_REGS
+#undef ENTRY
+  default:
+    llvm_unreachable("Unexpected EA base register");
+  }
+}
+
+/// translateRMMemory - Translates a memory operand stored in the Mod and R/M
+///   fields of an internal instruction (and possibly its SIB byte) to a memory
+///   operand in LLVM's format, and appends it to an MCInst.
+///
+/// @param mcInst       - The MCInst to append to.
+/// @param insn         - The instruction to extract Mod, R/M, and SIB fields
+///                       from.
+/// @param sr           - Whether or not to emit the segment register.  The
+///                       LEA instruction does not expect a segment-register
+///                       operand.
+static void translateRMMemory(MCInst &mcInst,
+                              InternalInstruction &insn,
+                              bool sr) {
+  // Addresses in an MCInst are represented as five operands:
+  //   1. basereg       (register)  The R/M base, or (if there is a SIB) the 
+  //                                SIB base
+  //   2. scaleamount   (immediate) 1, or (if there is a SIB) the specified 
+  //                                scale amount
+  //   3. indexreg      (register)  x86_registerNONE, or (if there is a SIB)
+  //                                the index (which is multiplied by the 
+  //                                scale amount)
+  //   4. displacement  (immediate) 0, or the displacement if there is one
+  //   5. segmentreg    (register)  x86_registerNONE for now, but could be set
+  //                                if we have segment overrides
+  
+  MCOperand baseReg;
+  MCOperand scaleAmount;
+  MCOperand indexReg;
+  MCOperand displacement;
+  MCOperand segmentReg;
+  
+  if (insn.eaBase == EA_BASE_sib || insn.eaBase == EA_BASE_sib64) {
+    if (insn.sibBase != SIB_BASE_NONE) {
+      switch (insn.sibBase) {
+      default:
+        llvm_unreachable("Unexpected sibBase");
+#define ENTRY(x)                                          \
+      case SIB_BASE_##x:                                  \
+        baseReg = MCOperand::CreateReg(X86::x); break;
+      ALL_SIB_BASES
+#undef ENTRY
+      }
+    } else {
+      baseReg = MCOperand::CreateReg(0);
+    }
+    
+    if (insn.sibIndex != SIB_INDEX_NONE) {
+      switch (insn.sibIndex) {
+      default:
+        llvm_unreachable("Unexpected sibIndex");
+#define ENTRY(x)                                          \
+      case SIB_INDEX_##x:                                 \
+        indexReg = MCOperand::CreateReg(X86::x); break;
+      EA_BASES_32BIT
+      EA_BASES_64BIT
+#undef ENTRY
+      }
+    } else {
+      indexReg = MCOperand::CreateReg(0);
+    }
+    
+    scaleAmount = MCOperand::CreateImm(insn.sibScale);
+  } else {
+    switch (insn.eaBase) {
+    case EA_BASE_NONE:
+      assert(insn.eaDisplacement != EA_DISP_NONE && 
+             "EA_BASE_NONE and EA_DISP_NONE for ModR/M base");
+      
+      if (insn.mode == MODE_64BIT)
+        baseReg = MCOperand::CreateReg(X86::RIP); // Section 2.2.1.6
+      else
+        baseReg = MCOperand::CreateReg(0);
+      
+      indexReg = MCOperand::CreateReg(0);
+      break;
+    case EA_BASE_BX_SI:
+      baseReg = MCOperand::CreateReg(X86::BX);
+      indexReg = MCOperand::CreateReg(X86::SI);
+      break;
+    case EA_BASE_BX_DI:
+      baseReg = MCOperand::CreateReg(X86::BX);
+      indexReg = MCOperand::CreateReg(X86::DI);
+      break;
+    case EA_BASE_BP_SI:
+      baseReg = MCOperand::CreateReg(X86::BP);
+      indexReg = MCOperand::CreateReg(X86::SI);
+      break;
+    case EA_BASE_BP_DI:
+      baseReg = MCOperand::CreateReg(X86::BP);
+      indexReg = MCOperand::CreateReg(X86::DI);
+      break;
+    default:
+      indexReg = MCOperand::CreateReg(0);
+      switch (insn.eaBase) {
+      default:
+        llvm_unreachable("Unexpected eaBase");
+        break;
+        // Here, we will use the fill-ins defined above.  However,
+        //   BX_SI, BX_DI, BP_SI, and BP_DI are all handled above and
+        //   sib and sib64 were handled in the top-level if, so they're only
+        //   placeholders to keep the compiler happy.
+#define ENTRY(x)                                        \
+      case EA_BASE_##x:                                 \
+        baseReg = MCOperand::CreateReg(X86::x); break; 
+      ALL_EA_BASES
+#undef ENTRY
+#define ENTRY(x) case EA_REG_##x:
+      ALL_REGS
+#undef ENTRY
+        llvm_unreachable("An R/M memory operand may not be a register; "
+                         "the base field must be a base.");
+        break;
+      }
+    }
+    
+    scaleAmount = MCOperand::CreateImm(1);
+  }
+  
+  displacement = MCOperand::CreateImm(insn.displacement);
+  
+  static const uint8_t segmentRegnums[SEG_OVERRIDE_max] = {
+    0,        // SEG_OVERRIDE_NONE
+    X86::CS,
+    X86::SS,
+    X86::DS,
+    X86::ES,
+    X86::FS,
+    X86::GS
+  };
+  
+  segmentReg = MCOperand::CreateReg(segmentRegnums[insn.segmentOverride]);
+  
+  mcInst.addOperand(baseReg);
+  mcInst.addOperand(scaleAmount);
+  mcInst.addOperand(indexReg);
+  mcInst.addOperand(displacement);
+  
+  if (sr)
+    mcInst.addOperand(segmentReg);
+}
+
+/// translateRM - Translates an operand stored in the R/M (and possibly SIB)
+///   byte of an instruction to LLVM form, and appends it to an MCInst.
+///
+/// @param mcInst       - The MCInst to append to.
+/// @param operand      - The operand, as stored in the descriptor table.
+/// @param insn         - The instruction to extract Mod, R/M, and SIB fields
+///                       from.
+static void translateRM(MCInst &mcInst,
+                        OperandSpecifier &operand,
+                        InternalInstruction &insn) {
+  switch (operand.type) {
+  default:
+    llvm_unreachable("Unexpected type for a R/M operand");
+  case TYPE_R8:
+  case TYPE_R16:
+  case TYPE_R32:
+  case TYPE_R64:
+  case TYPE_Rv:
+  case TYPE_MM:
+  case TYPE_MM32:
+  case TYPE_MM64:
+  case TYPE_XMM:
+  case TYPE_XMM32:
+  case TYPE_XMM64:
+  case TYPE_XMM128:
+  case TYPE_DEBUGREG:
+  case TYPE_CR32:
+  case TYPE_CR64:
+    translateRMRegister(mcInst, insn);
+    break;
+  case TYPE_M:
+  case TYPE_M8:
+  case TYPE_M16:
+  case TYPE_M32:
+  case TYPE_M64:
+  case TYPE_M128:
+  case TYPE_M512:
+  case TYPE_Mv:
+  case TYPE_M32FP:
+  case TYPE_M64FP:
+  case TYPE_M80FP:
+  case TYPE_M16INT:
+  case TYPE_M32INT:
+  case TYPE_M64INT:
+  case TYPE_M1616:
+  case TYPE_M1632:
+  case TYPE_M1664:
+    translateRMMemory(mcInst, insn, true);
+    break;
+  case TYPE_LEA:
+    translateRMMemory(mcInst, insn, false);
+    break;
+  }
+}
+  
+/// translateFPRegister - Translates a stack position on the FPU stack to its
+///   LLVM form, and appends it to an MCInst.
+///
+/// @param mcInst       - The MCInst to append to.
+/// @param stackPos     - The stack position to translate.
+static void translateFPRegister(MCInst &mcInst,
+                                uint8_t stackPos) {
+  assert(stackPos < 8 && "Invalid FP stack position");
+  
+  mcInst.addOperand(MCOperand::CreateReg(X86::ST0 + stackPos));
+}
+
+/// translateOperand - Translates an operand stored in an internal instruction 
+///   to LLVM's format and appends it to an MCInst.
+///
+/// @param mcInst       - The MCInst to append to.
+/// @param operand      - The operand, as stored in the descriptor table.
+/// @param insn         - The internal instruction.
+static void translateOperand(MCInst &mcInst,
+                             OperandSpecifier &operand,
+                             InternalInstruction &insn) {
+  switch (operand.encoding) {
+  default:
+    llvm_unreachable("Unhandled operand encoding during translation");
+  case ENCODING_REG:
+    translateRegister(mcInst, insn.reg);
+    break;
+  case ENCODING_RM:
+    translateRM(mcInst, operand, insn);
+    break;
+  case ENCODING_CB:
+  case ENCODING_CW:
+  case ENCODING_CD:
+  case ENCODING_CP:
+  case ENCODING_CO:
+  case ENCODING_CT:
+    llvm_unreachable("Translation of code offsets isn't supported.");
+  case ENCODING_IB:
+  case ENCODING_IW:
+  case ENCODING_ID:
+  case ENCODING_IO:
+  case ENCODING_Iv:
+  case ENCODING_Ia:
+    translateImmediate(mcInst, 
+                       insn.immediates[insn.numImmediatesTranslated++]);
+    break;
+  case ENCODING_RB:
+  case ENCODING_RW:
+  case ENCODING_RD:
+  case ENCODING_RO:
+    translateRegister(mcInst, insn.opcodeRegister);
+    break;
+  case ENCODING_I:
+    translateFPRegister(mcInst, insn.opcodeModifier);
+    break;
+  case ENCODING_Rv:
+    translateRegister(mcInst, insn.opcodeRegister);
+    break;
+  case ENCODING_DUP:
+    translateOperand(mcInst,
+                     insn.spec->operands[operand.type - TYPE_DUP0],
+                     insn);
+    break;
+  }
+}
+  
+/// translateInstruction - Translates an internal instruction and all its
+///   operands to an MCInst.
+///
+/// @param mcInst       - The MCInst to populate with the instruction's data.
+/// @param insn         - The internal instruction.
+static void translateInstruction(MCInst &mcInst,
+                                 InternalInstruction &insn) {  
+  assert(insn.spec);
+  
+  mcInst.setOpcode(insn.instructionID);
+  
+  int index;
+  
+  insn.numImmediatesTranslated = 0;
+  
+  for (index = 0; index < X86_MAX_OPERANDS; ++index) {
+    if (insn.spec->operands[index].encoding != ENCODING_NONE)                
+      translateOperand(mcInst, insn.spec->operands[index], insn);
+  }
+}
 
 static const MCDisassembler *createX86_32Disassembler(const Target &T) {
-  return 0;
+  return new X86Disassembler::X86_32Disassembler;
 }
 
 static const MCDisassembler *createX86_64Disassembler(const Target &T) {
-  return 0; 
+  return new X86Disassembler::X86_64Disassembler;
 }
 
 extern "C" void LLVMInitializeX86Disassembler() { 
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.h b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.h
new file mode 100644
index 0000000..0e6e0b0
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.h
@@ -0,0 +1,150 @@
+//===- X86Disassembler.h - Disassembler for x86 and x86_64 ------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// The X86 disassembler is a table-driven disassembler for the 16-, 32-, and
+// 64-bit X86 instruction sets.  The main decode sequence for an assembly
+// instruction in this disassembler is:
+//
+// 1. Read the prefix bytes and determine the attributes of the instruction.
+//    These attributes, recorded in enum attributeBits
+//    (X86DisassemblerDecoderCommon.h), form a bitmask.  The table CONTEXTS_SYM
+//    provides a mapping from bitmasks to contexts, which are represented by
+//    enum InstructionContext (ibid.).
+//
+// 2. Read the opcode, and determine what kind of opcode it is.  The
+//    disassembler distinguishes four kinds of opcodes, which are enumerated in
+//    OpcodeType (X86DisassemblerDecoderCommon.h): one-byte (0xnn), two-byte
+//    (0x0f 0xnn), three-byte-38 (0x0f 0x38 0xnn), or three-byte-3a 
+//    (0x0f 0x3a 0xnn).  Mandatory prefixes are treated as part of the context.
+//
+// 3. Depending on the opcode type, look in one of four ClassDecision structures
+//    (X86DisassemblerDecoderCommon.h).  Use the opcode class to determine which
+//    OpcodeDecision (ibid.) to look the opcode up in.  Look up the opcode to get
+//    a ModRMDecision (ibid.).
+//
+// 4. Some instructions, such as escape opcodes or extended opcodes, or even
+//    instructions that have ModRM*Reg / ModRM*Mem forms in LLVM, need the
+//    ModR/M byte to complete decode.  The ModRMDecision's type is an entry from
+//    ModRMDecisionType (X86DisassemblerDecoderCommon.h) that indicates if the
+//    ModR/M byte is required and how to interpret it.
+//
+// 5. After resolving the ModRMDecision, the disassembler has a unique ID
+//    of type InstrUID (X86DisassemblerDecoderCommon.h).  Looking this ID up in
+//    INSTRUCTIONS_SYM yields the name of the instruction and the encodings and
+//    meanings of its operands.
+//
+// 6. For each operand, its encoding is an entry from OperandEncoding
+//    (X86DisassemblerDecoderCommon.h) and its type is an entry from
+//    OperandType (ibid.).  The encoding indicates how to read it from the
+//    instruction; the type indicates how to interpret the value once it has
+//    been read.  For example, a register operand could be stored in the R/M
+//    field of the ModR/M byte, the REG field of the ModR/M byte, or added to
+//    the main opcode.  This is orthogonal to its meaning (a GPR or an XMM
+//    register, for instance).  Given this information, the operands can be
+//    extracted and interpreted.
+//
+// 7. As the last step, the disassembler translates the instruction information
+//    and operands into a format understandable by the client - in this case, an
+//    MCInst for use by the MC infrastructure.
+//
+// The disassembler is broken broadly into two parts: the table emitter that
+// emits the instruction decode tables discussed above during compilation, and
+// the disassembler itself.  The table emitter is documented in more detail in
+// utils/TableGen/X86DisassemblerEmitter.h.
+//
+// X86Disassembler.h contains the public interface for the disassembler,
+//   adhering to the MCDisassembler interface.
+// X86Disassembler.cpp contains the code responsible for step 7, and for
+//   invoking the decoder to execute steps 1-6.
+// X86DisassemblerDecoderCommon.h contains the definitions needed by both the
+//   table emitter and the disassembler.
+// X86DisassemblerDecoder.h contains the public interface of the decoder,
+//   factored out into C for possible use by other projects.
+// X86DisassemblerDecoder.c contains the source code of the decoder, which is
+//   responsible for steps 1-6.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef X86DISASSEMBLER_H
+#define X86DISASSEMBLER_H
+
+#define INSTRUCTION_SPECIFIER_FIELDS  \
+  const char*             name;
+
+#define INSTRUCTION_IDS               \
+  InstrUID*  instructionIDs;
+
+#include "X86DisassemblerDecoderCommon.h"
+
+#undef INSTRUCTION_SPECIFIER_FIELDS
+#undef INSTRUCTION_IDS
+
+#include "llvm/MC/MCDisassembler.h"
+
+struct InternalInstruction;
+
+namespace llvm {
+  
+class MCInst;
+class MemoryObject;
+class raw_ostream;
+  
+namespace X86Disassembler {
+
+/// X86GenericDisassembler - Generic disassembler for all X86 platforms.
+///   All that each platform class should have to do is subclass the
+///   constructor and provide a different disassemblerMode value.
+class X86GenericDisassembler : public MCDisassembler {
+protected:
+  /// Constructor     - Initializes the disassembler.
+  ///
+  /// @param mode     - The X86 architecture mode to decode for.
+  X86GenericDisassembler(DisassemblerMode mode);
+public:
+  ~X86GenericDisassembler();
+
+  /// getInstruction - See MCDisassembler.
+  bool getInstruction(MCInst &instr,
+                      uint64_t &size,
+                      const MemoryObject &region,
+                      uint64_t address,
+                      raw_ostream &vStream) const;
+private:
+  DisassemblerMode              fMode;
+};
+
+/// X86_16Disassembler - 16-bit X86 disassembler.
+class X86_16Disassembler : public X86GenericDisassembler {
+public:
+  X86_16Disassembler() :
+    X86GenericDisassembler(MODE_16BIT) {
+  }
+};  
+
+/// X86_32Disassembler - 32-bit X86 disassembler.
+class X86_32Disassembler : public X86GenericDisassembler {
+public:
+  X86_32Disassembler() :
+    X86GenericDisassembler(MODE_32BIT) {
+  }
+};
+
+/// X86_64Disassembler - 64-bit X86 disassembler.
+class X86_64Disassembler : public X86GenericDisassembler {
+public:
+  X86_64Disassembler() :
+    X86GenericDisassembler(MODE_64BIT) {
+  }
+};
+
+} // namespace X86Disassembler
+  
+} // namespace llvm
+  
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoder.c b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoder.c
new file mode 100644
index 0000000..a0a04ba
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoder.c
@@ -0,0 +1,1365 @@
+/*===- X86DisassemblerDecoder.c - Disassembler decoder -------------*- C -*-==*
+ *
+ *                     The LLVM Compiler Infrastructure
+ *
+ * This file is distributed under the University of Illinois Open Source
+ * License. See LICENSE.TXT for details.
+ *
+ *===----------------------------------------------------------------------===*
+ *
+ * This file is part of the X86 Disassembler.
+ * It contains the implementation of the instruction decoder.
+ * Documentation for the disassembler can be found in X86Disassembler.h.
+ *
+ *===----------------------------------------------------------------------===*/
+
+#include <assert.h>   /* for assert()     */
+#include <stdarg.h>   /* for va_*()       */
+#include <stdio.h>    /* for vsnprintf()  */
+#include <stdlib.h>   /* for exit()       */
+#include <string.h>   /* for memset()     */
+
+#include "X86DisassemblerDecoder.h"
+
+#include "X86GenDisassemblerTables.inc"
+
+#define TRUE  1
+#define FALSE 0
+
+#ifdef __GNUC__
+#define NORETURN __attribute__((noreturn))
+#else
+#define NORETURN
+#endif
+
+#define unreachable(s)                                      \
+  do {                                                      \
+    fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, s);  \
+    exit(-1);                                               \
+  } while (0)
+
+/*
+ * contextForAttrs - Client for the instruction context table.  Takes a set of
+ *   attributes and returns the appropriate decode context.
+ *
+ * @param attrMask  - Attributes, from the enumeration attributeBits.
+ * @return          - The InstructionContext to use when looking up an
+ *                    an instruction with these attributes.
+ */
+static InstructionContext contextForAttrs(uint8_t attrMask) {
+  return CONTEXTS_SYM[attrMask];
+}
+
+/*
+ * modRMRequired - Reads the appropriate instruction table to determine whether
+ *   the ModR/M byte is required to decode a particular instruction.
+ *
+ * @param type        - The opcode type (i.e., how many bytes it has).
+ * @param insnContext - The context for the instruction, as returned by
+ *                      contextForAttrs.
+ * @param opcode      - The last byte of the instruction's opcode, not counting
+ *                      ModR/M extensions and escapes.
+ * @return            - TRUE if the ModR/M byte is required, FALSE otherwise.
+ */
+static int modRMRequired(OpcodeType type,
+                         InstructionContext insnContext,
+                         uint8_t opcode) {
+  const struct ContextDecision* decision = 0;
+  
+  switch (type) {
+  default:
+    unreachable("Unknown opcode type");
+  case ONEBYTE:
+    decision = &ONEBYTE_SYM;
+    break;
+  case TWOBYTE:
+    decision = &TWOBYTE_SYM;
+    break;
+  case THREEBYTE_38:
+    decision = &THREEBYTE38_SYM;
+    break;
+  case THREEBYTE_3A:
+    decision = &THREEBYTE3A_SYM;
+    break;
+  }
+  
+  return decision->opcodeDecisions[insnContext].modRMDecisions[opcode].
+    modrm_type != MODRM_ONEENTRY;
+}
+
+/*
+ * decode - Reads the appropriate instruction table to obtain the unique ID of
+ *   an instruction.
+ *
+ * @param type        - See modRMRequired().
+ * @param insnContext - See modRMRequired().
+ * @param opcode      - See modRMRequired().
+ * @param modRM       - The ModR/M byte if required, or any value if not.
+ */
+static InstrUID decode(OpcodeType type,
+                       InstructionContext insnContext,
+                       uint8_t opcode,
+                       uint8_t modRM) {
+  struct ModRMDecision* dec;
+  
+  switch (type) {
+  default:
+    unreachable("Unknown opcode type");
+  case ONEBYTE:
+    dec = &ONEBYTE_SYM.opcodeDecisions[insnContext].modRMDecisions[opcode];
+    break;
+  case TWOBYTE:
+    dec = &TWOBYTE_SYM.opcodeDecisions[insnContext].modRMDecisions[opcode];
+    break;
+  case THREEBYTE_38:
+    dec = &THREEBYTE38_SYM.opcodeDecisions[insnContext].modRMDecisions[opcode];
+    break;
+  case THREEBYTE_3A:
+    dec = &THREEBYTE3A_SYM.opcodeDecisions[insnContext].modRMDecisions[opcode];
+    break;
+  }
+  
+  switch (dec->modrm_type) {
+  default:
+    unreachable("Corrupt table!  Unknown modrm_type");
+  case MODRM_ONEENTRY:
+    return dec->instructionIDs[0];
+  case MODRM_SPLITRM:
+    if (modFromModRM(modRM) == 0x3)
+      return dec->instructionIDs[1];
+    else
+      return dec->instructionIDs[0];
+  case MODRM_FULL:
+    return dec->instructionIDs[modRM];
+  }
+  
+  return 0;
+}
+
+/*
+ * specifierForUID - Given a UID, returns the name and operand specification for
+ *   that instruction.
+ *
+ * @param uid - The unique ID for the instruction.  This should be returned by
+ *              decode(); specifierForUID will not check bounds.
+ * @return    - A pointer to the specification for that instruction.
+ */
+static struct InstructionSpecifier* specifierForUID(InstrUID uid) {
+  return &INSTRUCTIONS_SYM[uid];
+}
+
+/*
+ * consumeByte - Uses the reader function provided by the user to consume one
+ *   byte from the instruction's memory and advance the cursor.
+ *
+ * @param insn  - The instruction with the reader function to use.  The cursor
+ *                for this instruction is advanced.
+ * @param byte  - A pointer to a pre-allocated memory buffer to be populated
+ *                with the data read.
+ * @return      - 0 if the read was successful; nonzero otherwise.
+ */
+static int consumeByte(struct InternalInstruction* insn, uint8_t* byte) {
+  int ret = insn->reader(insn->readerArg, byte, insn->readerCursor);
+  
+  if (!ret)
+    ++(insn->readerCursor);
+  
+  return ret;
+}
+
+/*
+ * lookAtByte - Like consumeByte, but does not advance the cursor.
+ *
+ * @param insn  - See consumeByte().
+ * @param byte  - See consumeByte().
+ * @return      - See consumeByte().
+ */
+static int lookAtByte(struct InternalInstruction* insn, uint8_t* byte) {
+  return insn->reader(insn->readerArg, byte, insn->readerCursor);
+}
+
+static void unconsumeByte(struct InternalInstruction* insn) {
+  insn->readerCursor--;
+}
+
+#define CONSUME_FUNC(name, type)                                  \
+  static int name(struct InternalInstruction* insn, type* ptr) {  \
+    type combined = 0;                                            \
+    unsigned offset;                                              \
+    for (offset = 0; offset < sizeof(type); ++offset) {           \
+      uint8_t byte;                                               \
+      int ret = insn->reader(insn->readerArg,                     \
+                             &byte,                               \
+                             insn->readerCursor + offset);        \
+      if (ret)                                                    \
+        return ret;                                               \
+      combined = combined | ((type)byte << ((type)offset * 8));   \
+    }                                                             \
+    *ptr = combined;                                              \
+    insn->readerCursor += sizeof(type);                           \
+    return 0;                                                     \
+  }
+
+/*
+ * consume* - Use the reader function provided by the user to consume data
+ *   values of various sizes from the instruction's memory and advance the
+ *   cursor appropriately.  These readers perform endian conversion.
+ *
+ * @param insn    - See consumeByte().
+ * @param ptr     - A pointer to a pre-allocated memory of appropriate size to
+ *                  be populated with the data read.
+ * @return        - See consumeByte().
+ */
+CONSUME_FUNC(consumeInt8, int8_t)
+CONSUME_FUNC(consumeInt16, int16_t)
+CONSUME_FUNC(consumeInt32, int32_t)
+CONSUME_FUNC(consumeUInt16, uint16_t)
+CONSUME_FUNC(consumeUInt32, uint32_t)
+CONSUME_FUNC(consumeUInt64, uint64_t)
+
+/*
+ * dbgprintf - Uses the logging function provided by the user to log a single
+ *   message, typically without a carriage-return.
+ *
+ * @param insn    - The instruction containing the logging function.
+ * @param format  - See printf().
+ * @param ...     - See printf().
+ */
+static void dbgprintf(struct InternalInstruction* insn,
+                      const char* format,
+                      ...) {  
+  char buffer[256];
+  va_list ap;
+  
+  if (!insn->dlog)
+    return;
+    
+  va_start(ap, format);
+  (void)vsnprintf(buffer, sizeof(buffer), format, ap);
+  va_end(ap);
+  
+  insn->dlog(insn->dlogArg, buffer);
+  
+  return;
+}
+
+/*
+ * setPrefixPresent - Marks that a particular prefix is present at a particular
+ *   location.
+ *
+ * @param insn      - The instruction to be marked as having the prefix.
+ * @param prefix    - The prefix that is present.
+ * @param location  - The location where the prefix is located (in the address
+ *                    space of the instruction's reader).
+ */
+static void setPrefixPresent(struct InternalInstruction* insn,
+                             uint8_t prefix,
+                             uint64_t location)
+{
+  insn->prefixPresent[prefix] = 1;
+  insn->prefixLocations[prefix] = location;
+}
+
+/*
+ * isPrefixAtLocation - Queries an instruction to determine whether a prefix is
+ *   present at a given location.
+ *
+ * @param insn      - The instruction to be queried.
+ * @param prefix    - The prefix.
+ * @param location  - The location to query.
+ * @return          - Whether the prefix is at that location.
+ */
+static BOOL isPrefixAtLocation(struct InternalInstruction* insn,
+                               uint8_t prefix,
+                               uint64_t location)
+{
+  if (insn->prefixPresent[prefix] == 1 &&
+     insn->prefixLocations[prefix] == location)
+    return TRUE;
+  else
+    return FALSE;
+}
+
+/*
+ * readPrefixes - Consumes all of an instruction's prefix bytes, and marks the
+ *   instruction as having them.  Also sets the instruction's default operand,
+ *   address, and other relevant data sizes to report operands correctly.
+ *
+ * @param insn  - The instruction whose prefixes are to be read.
+ * @return      - 0 if the instruction could be read until the end of the prefix
+ *                bytes, and no prefixes conflicted; nonzero otherwise.
+ */
+static int readPrefixes(struct InternalInstruction* insn) {
+  BOOL isPrefix = TRUE;
+  BOOL prefixGroups[4] = { FALSE };
+  uint64_t prefixLocation;
+  uint8_t byte;
+  
+  BOOL hasAdSize = FALSE;
+  BOOL hasOpSize = FALSE;
+  
+  dbgprintf(insn, "readPrefixes()");
+    
+  while (isPrefix) {
+    prefixLocation = insn->readerCursor;
+    
+    if (consumeByte(insn, &byte))
+      return -1;
+    
+    switch (byte) {
+    case 0xf0:  /* LOCK */
+    case 0xf2:  /* REPNE/REPNZ */
+    case 0xf3:  /* REP or REPE/REPZ */
+      if (prefixGroups[0])
+        dbgprintf(insn, "Redundant Group 1 prefix");
+      prefixGroups[0] = TRUE;
+      setPrefixPresent(insn, byte, prefixLocation);
+      break;
+    case 0x2e:  /* CS segment override -OR- Branch not taken */
+    case 0x36:  /* SS segment override -OR- Branch taken */
+    case 0x3e:  /* DS segment override */
+    case 0x26:  /* ES segment override */
+    case 0x64:  /* FS segment override */
+    case 0x65:  /* GS segment override */
+      switch (byte) {
+      case 0x2e:
+        insn->segmentOverride = SEG_OVERRIDE_CS;
+        break;
+      case 0x36:
+        insn->segmentOverride = SEG_OVERRIDE_SS;
+        break;
+      case 0x3e:
+        insn->segmentOverride = SEG_OVERRIDE_DS;
+        break;
+      case 0x26:
+        insn->segmentOverride = SEG_OVERRIDE_ES;
+        break;
+      case 0x64:
+        insn->segmentOverride = SEG_OVERRIDE_FS;
+        break;
+      case 0x65:
+        insn->segmentOverride = SEG_OVERRIDE_GS;
+        break;
+      default:
+        unreachable("Unhandled override");
+      }
+      if (prefixGroups[1])
+        dbgprintf(insn, "Redundant Group 2 prefix");
+      prefixGroups[1] = TRUE;
+      setPrefixPresent(insn, byte, prefixLocation);
+      break;
+    case 0x66:  /* Operand-size override */
+      if (prefixGroups[2])
+        dbgprintf(insn, "Redundant Group 3 prefix");
+      prefixGroups[2] = TRUE;
+      hasOpSize = TRUE;
+      setPrefixPresent(insn, byte, prefixLocation);
+      break;
+    case 0x67:  /* Address-size override */
+      if (prefixGroups[3])
+        dbgprintf(insn, "Redundant Group 4 prefix");
+      prefixGroups[3] = TRUE;
+      hasAdSize = TRUE;
+      setPrefixPresent(insn, byte, prefixLocation);
+      break;
+    default:    /* Not a prefix byte */
+      isPrefix = FALSE;
+      break;
+    }
+    
+    if (isPrefix)
+      dbgprintf(insn, "Found prefix 0x%hhx", byte);
+  }
+  
+  if (insn->mode == MODE_64BIT) {
+    if ((byte & 0xf0) == 0x40) {
+      uint8_t opcodeByte;
+      
+      if(lookAtByte(insn, &opcodeByte) || ((opcodeByte & 0xf0) == 0x40)) {
+        dbgprintf(insn, "Redundant REX prefix");
+        return -1;
+      }
+      
+      insn->rexPrefix = byte;
+      insn->necessaryPrefixLocation = insn->readerCursor - 2;
+      
+      dbgprintf(insn, "Found REX prefix 0x%hhx", byte);
+    } else {                
+      unconsumeByte(insn);
+      insn->necessaryPrefixLocation = insn->readerCursor - 1;
+    }
+  } else {
+    unconsumeByte(insn);
+  }
+  
+  if (insn->mode == MODE_16BIT) {
+    insn->registerSize       = (hasOpSize ? 4 : 2);
+    insn->addressSize        = (hasAdSize ? 4 : 2);
+    insn->displacementSize   = (hasAdSize ? 4 : 2);
+    insn->immediateSize      = (hasOpSize ? 4 : 2);
+  } else if (insn->mode == MODE_32BIT) {
+    insn->registerSize       = (hasOpSize ? 2 : 4);
+    insn->addressSize        = (hasAdSize ? 2 : 4);
+    insn->displacementSize   = (hasAdSize ? 2 : 4);
+    insn->immediateSize      = (hasOpSize ? 2 : 4);
+  } else if (insn->mode == MODE_64BIT) {
+    if (insn->rexPrefix && wFromREX(insn->rexPrefix)) {
+      insn->registerSize       = 8;
+      insn->addressSize        = (hasAdSize ? 4 : 8);
+      insn->displacementSize   = 4;
+      insn->immediateSize      = 4;
+    } else if (insn->rexPrefix) {
+      insn->registerSize       = (hasOpSize ? 2 : 4);
+      insn->addressSize        = (hasAdSize ? 4 : 8);
+      insn->displacementSize   = (hasOpSize ? 2 : 4);
+      insn->immediateSize      = (hasOpSize ? 2 : 4);
+    } else {
+      insn->registerSize       = (hasOpSize ? 2 : 4);
+      insn->addressSize        = (hasAdSize ? 4 : 8);
+      insn->displacementSize   = (hasOpSize ? 2 : 4);
+      insn->immediateSize      = (hasOpSize ? 2 : 4);
+    }
+  }
+  
+  return 0;
+}
+
+/*
+ * readOpcode - Reads the opcode (excepting the ModR/M byte in the case of
+ *   extended or escape opcodes).
+ *
+ * @param insn  - The instruction whose opcode is to be read.
+ * @return      - 0 if the opcode could be read successfully; nonzero otherwise.
+ */
+static int readOpcode(struct InternalInstruction* insn) {  
+  /* Determine the length of the primary opcode */
+  
+  uint8_t current;
+  
+  dbgprintf(insn, "readOpcode()");
+  
+  insn->opcodeType = ONEBYTE;
+  if (consumeByte(insn, &current))
+    return -1;
+  
+  if (current == 0x0f) {
+    dbgprintf(insn, "Found a two-byte escape prefix (0x%hhx)", current);
+    
+    insn->twoByteEscape = current;
+    
+    if (consumeByte(insn, &current))
+      return -1;
+    
+    if (current == 0x38) {
+      dbgprintf(insn, "Found a three-byte escape prefix (0x%hhx)", current);
+      
+      insn->threeByteEscape = current;
+      
+      if (consumeByte(insn, &current))
+        return -1;
+      
+      insn->opcodeType = THREEBYTE_38;
+    } else if (current == 0x3a) {
+      dbgprintf(insn, "Found a three-byte escape prefix (0x%hhx)", current);
+      
+      insn->threeByteEscape = current;
+      
+      if (consumeByte(insn, &current))
+        return -1;
+      
+      insn->opcodeType = THREEBYTE_3A;
+    } else {
+      dbgprintf(insn, "Didn't find a three-byte escape prefix");
+      
+      insn->opcodeType = TWOBYTE;
+    }
+  }
+  
+  /*
+   * At this point we have consumed the full opcode.
+   * Anything we consume from here on must be unconsumed.
+   */
+  
+  insn->opcode = current;
+  
+  return 0;
+}
+
+static int readModRM(struct InternalInstruction* insn);
+
+/*
+ * getIDWithAttrMask - Determines the ID of an instruction, consuming
+ *   the ModR/M byte as appropriate for extended and escape opcodes,
+ *   and using a supplied attribute mask.
+ *
+ * @param instructionID - A pointer whose target is filled in with the ID of the
+ *                        instruction.
+ * @param insn          - The instruction whose ID is to be determined.
+ * @param attrMask      - The attribute mask to search.
+ * @return              - 0 if the ModR/M could be read when needed or was not
+ *                        needed; nonzero otherwise.
+ */
+static int getIDWithAttrMask(uint16_t* instructionID,
+                             struct InternalInstruction* insn,
+                             uint8_t attrMask) {
+  BOOL hasModRMExtension;
+  
+  uint8_t instructionClass;
+
+  instructionClass = contextForAttrs(attrMask);
+  
+  hasModRMExtension = modRMRequired(insn->opcodeType,
+                                    instructionClass,
+                                    insn->opcode);
+  
+  if (hasModRMExtension) {
+    readModRM(insn);
+    
+    *instructionID = decode(insn->opcodeType,
+                            instructionClass,
+                            insn->opcode,
+                            insn->modRM);
+  } else {
+    *instructionID = decode(insn->opcodeType,
+                            instructionClass,
+                            insn->opcode,
+                            0);
+  }
+      
+  return 0;
+}
+
+/*
+ * is16BitEquivalent - Determines whether two instruction names refer to
+ * equivalent instructions but one is 16-bit whereas the other is not.
+ *
+ * @param orig  - The instruction that is not 16-bit
+ * @param equiv - The instruction that is 16-bit
+ */
+static BOOL is16BitEquvalent(const char* orig, const char* equiv) {
+  off_t i;
+  
+  for(i = 0;; i++) {
+    if(orig[i] == '\0' && equiv[i] == '\0')
+      return TRUE;
+    if(orig[i] == '\0' || equiv[i] == '\0')
+      return FALSE;
+    if(orig[i] != equiv[i]) {
+      if((orig[i] == 'Q' || orig[i] == 'L') && equiv[i] == 'W')
+        continue;
+      if((orig[i] == '6' || orig[i] == '3') && equiv[i] == '1')
+        continue;
+      if((orig[i] == '4' || orig[i] == '2') && equiv[i] == '6')
+        continue;
+      return FALSE;
+    }
+  }
+}
+
+/*
+ * is64BitEquivalent - Determines whether two instruction names refer to
+ * equivalent instructions but one is 64-bit whereas the other is not.
+ *
+ * @param orig  - The instruction that is not 64-bit
+ * @param equiv - The instruction that is 64-bit
+ */
+static BOOL is64BitEquivalent(const char* orig, const char* equiv) {
+  off_t i;
+  
+  for(i = 0;; i++) {
+    if(orig[i] == '\0' && equiv[i] == '\0')
+      return TRUE;
+    if(orig[i] == '\0' || equiv[i] == '\0')
+      return FALSE;
+    if(orig[i] != equiv[i]) {
+      if((orig[i] == 'W' || orig[i] == 'L') && equiv[i] == 'Q')
+        continue;
+      if((orig[i] == '1' || orig[i] == '3') && equiv[i] == '6')
+        continue;
+      if((orig[i] == '6' || orig[i] == '2') && equiv[i] == '4')
+        continue;
+      return FALSE;
+    }
+  }
+}
+
+
+/*
+ * getID - Determines the ID of an instruction, consuming the ModR/M byte as 
+ *   appropriate for extended and escape opcodes.  Determines the attributes and 
+ *   context for the instruction before doing so.
+ *
+ * @param insn  - The instruction whose ID is to be determined.
+ * @return      - 0 if the ModR/M could be read when needed or was not needed;
+ *                nonzero otherwise.
+ */
+static int getID(struct InternalInstruction* insn) {  
+  uint8_t attrMask;
+  uint16_t instructionID;
+  
+  dbgprintf(insn, "getID()");
+    
+  attrMask = ATTR_NONE;
+  
+  if (insn->mode == MODE_64BIT)
+    attrMask |= ATTR_64BIT;
+  
+  if (insn->rexPrefix & 0x08)
+    attrMask |= ATTR_REXW;
+  
+  if (isPrefixAtLocation(insn, 0x66, insn->necessaryPrefixLocation))
+    attrMask |= ATTR_OPSIZE;
+  else if (isPrefixAtLocation(insn, 0xf3, insn->necessaryPrefixLocation))
+    attrMask |= ATTR_XS;
+  else if (isPrefixAtLocation(insn, 0xf2, insn->necessaryPrefixLocation))
+    attrMask |= ATTR_XD;
+  
+  if(getIDWithAttrMask(&instructionID, insn, attrMask))
+    return -1;
+  
+  /* The following clauses compensate for limitations of the tables. */
+  
+  if ((attrMask & ATTR_XD) && (attrMask & ATTR_REXW)) {
+    /*
+     * Although for SSE instructions it is usually necessary to treat REX.W+F2
+     * as F2 for decode (in the absence of a 64BIT_REXW_XD category) there is
+     * an occasional instruction where F2 is incidental and REX.W is the more
+     * significant.  If the decoded instruction is 32-bit and adding REX.W
+     * instead of F2 changes a 32 to a 64, we adopt the new encoding.
+     */
+    
+    struct InstructionSpecifier* spec;
+    uint16_t instructionIDWithREXw;
+    struct InstructionSpecifier* specWithREXw;
+    
+    spec = specifierForUID(instructionID);
+    
+    if (getIDWithAttrMask(&instructionIDWithREXw,
+                          insn,
+                          attrMask & (~ATTR_XD))) {
+      /*
+       * Decoding with REX.w would yield nothing; give up and return original
+       * decode.
+       */
+      
+      insn->instructionID = instructionID;
+      insn->spec = spec;
+      return 0;
+    }
+    
+    specWithREXw = specifierForUID(instructionIDWithREXw);
+    
+    if (is64BitEquivalent(spec->name, specWithREXw->name)) {
+      insn->instructionID = instructionIDWithREXw;
+      insn->spec = specWithREXw;
+    } else {
+      insn->instructionID = instructionID;
+      insn->spec = spec;
+    }
+    return 0;
+  }
+  
+  if (insn->prefixPresent[0x66] && !(attrMask & ATTR_OPSIZE)) {
+    /*
+     * The instruction tables make no distinction between instructions that
+     * allow OpSize anywhere (e.g., 16-bit operations) and those that need it
+     * in a particular spot (e.g., many MMX operations).  In general we're
+     * conservative, but in the specific case where OpSize is present but not
+     * in the right place we check whether there's a 16-bit operation.
+     */
+    
+    struct InstructionSpecifier* spec;
+    uint16_t instructionIDWithOpsize;
+    struct InstructionSpecifier* specWithOpsize;
+    
+    spec = specifierForUID(instructionID);
+    
+    if (getIDWithAttrMask(&instructionIDWithOpsize,
+                          insn,
+                          attrMask | ATTR_OPSIZE)) {
+      /* 
+       * ModRM required with OpSize but not present; give up and return version
+       * without OpSize set
+       */
+      
+      insn->instructionID = instructionID;
+      insn->spec = spec;
+      return 0;
+    }
+    
+    specWithOpsize = specifierForUID(instructionIDWithOpsize);
+    
+    if (is16BitEquvalent(spec->name, specWithOpsize->name)) {
+      insn->instructionID = instructionIDWithOpsize;
+      insn->spec = specWithOpsize;
+    } else {
+      insn->instructionID = instructionID;
+      insn->spec = spec;
+    }
+    return 0;
+  }
+  
+  insn->instructionID = instructionID;
+  insn->spec = specifierForUID(insn->instructionID);
+  
+  return 0;
+}
+
+/*
+ * readSIB - Consumes the SIB byte to determine addressing information for an
+ *   instruction.
+ *
+ * @param insn  - The instruction whose SIB byte is to be read.
+ * @return      - 0 if the SIB byte was successfully read; nonzero otherwise.
+ */
+static int readSIB(struct InternalInstruction* insn) {
+  SIBIndex sibIndexBase = 0;
+  SIBBase sibBaseBase = 0;
+  uint8_t index, base;
+  
+  dbgprintf(insn, "readSIB()");
+  
+  if (insn->consumedSIB)
+    return 0;
+  
+  insn->consumedSIB = TRUE;
+  
+  switch (insn->addressSize) {
+  case 2:
+    dbgprintf(insn, "SIB-based addressing doesn't work in 16-bit mode");
+    return -1;
+  case 4:
+    sibIndexBase = SIB_INDEX_EAX;
+    sibBaseBase = SIB_BASE_EAX;
+    break;
+  case 8:
+    sibIndexBase = SIB_INDEX_RAX;
+    sibBaseBase = SIB_BASE_RAX;
+    break;
+  }
+
+  if (consumeByte(insn, &insn->sib))
+    return -1;
+  
+  index = indexFromSIB(insn->sib) | (xFromREX(insn->rexPrefix) << 3);
+  
+  switch (index) {
+  case 0x4:
+    insn->sibIndex = SIB_INDEX_NONE;
+    break;
+  default:
+    insn->sibIndex = (SIBIndex)(sibIndexBase + index);
+    if (insn->sibIndex == SIB_INDEX_sib ||
+        insn->sibIndex == SIB_INDEX_sib64)
+      insn->sibIndex = SIB_INDEX_NONE;
+    break;
+  }
+  
+  switch (scaleFromSIB(insn->sib)) {
+  case 0:
+    insn->sibScale = 1;
+    break;
+  case 1:
+    insn->sibScale = 2;
+    break;
+  case 2:
+    insn->sibScale = 4;
+    break;
+  case 3:
+    insn->sibScale = 8;
+    break;
+  }
+  
+  base = baseFromSIB(insn->sib) | (bFromREX(insn->rexPrefix) << 3);
+  
+  switch (base) {
+  case 0x5:
+    switch (modFromModRM(insn->modRM)) {
+    case 0x0:
+      insn->eaDisplacement = EA_DISP_32;
+      insn->sibBase = SIB_BASE_NONE;
+      break;
+    case 0x1:
+      insn->eaDisplacement = EA_DISP_8;
+      insn->sibBase = (insn->addressSize == 4 ? 
+                       SIB_BASE_EBP : SIB_BASE_RBP);
+      break;
+    case 0x2:
+      insn->eaDisplacement = EA_DISP_32;
+      insn->sibBase = (insn->addressSize == 4 ? 
+                       SIB_BASE_EBP : SIB_BASE_RBP);
+      break;
+    case 0x3:
+      unreachable("Cannot have Mod = 0b11 and a SIB byte");
+    }
+    break;
+  default:
+    insn->sibBase = (SIBBase)(sibBaseBase + base);
+    break;
+  }
+  
+  return 0;
+}
+
+/*
+ * readDisplacement - Consumes the displacement of an instruction.
+ *
+ * @param insn  - The instruction whose displacement is to be read.
+ * @return      - 0 if the displacement byte was successfully read; nonzero 
+ *                otherwise.
+ */
+static int readDisplacement(struct InternalInstruction* insn) {  
+  int8_t d8;
+  int16_t d16;
+  int32_t d32;
+  
+  dbgprintf(insn, "readDisplacement()");
+  
+  if (insn->consumedDisplacement)
+    return 0;
+  
+  insn->consumedDisplacement = TRUE;
+  
+  switch (insn->eaDisplacement) {
+  case EA_DISP_NONE:
+    insn->consumedDisplacement = FALSE;
+    break;
+  case EA_DISP_8:
+    if (consumeInt8(insn, &d8))
+      return -1;
+    insn->displacement = d8;
+    break;
+  case EA_DISP_16:
+    if (consumeInt16(insn, &d16))
+      return -1;
+    insn->displacement = d16;
+    break;
+  case EA_DISP_32:
+    if (consumeInt32(insn, &d32))
+      return -1;
+    insn->displacement = d32;
+    break;
+  }
+  
+  insn->consumedDisplacement = TRUE;
+  return 0;
+}
+
+/*
+ * readModRM - Consumes all addressing information (ModR/M byte, SIB byte, and
+ *   displacement) for an instruction and interprets it.
+ *
+ * @param insn  - The instruction whose addressing information is to be read.
+ * @return      - 0 if the information was successfully read; nonzero otherwise.
+ */
+static int readModRM(struct InternalInstruction* insn) {  
+  uint8_t mod, rm, reg;
+  
+  dbgprintf(insn, "readModRM()");
+  
+  if (insn->consumedModRM)
+    return 0;
+  
+  if (consumeByte(insn, &insn->modRM))
+    return -1;
+  insn->consumedModRM = TRUE;
+  
+  mod     = modFromModRM(insn->modRM);
+  rm      = rmFromModRM(insn->modRM);
+  reg     = regFromModRM(insn->modRM);
+  
+  /*
+   * This goes by insn->registerSize to pick the correct register, which messes
+   * up if we're using (say) XMM or 8-bit register operands.  That gets fixed in
+   * fixupReg().
+   */
+  switch (insn->registerSize) {
+  case 2:
+    insn->regBase = MODRM_REG_AX;
+    insn->eaRegBase = EA_REG_AX;
+    break;
+  case 4:
+    insn->regBase = MODRM_REG_EAX;
+    insn->eaRegBase = EA_REG_EAX;
+    break;
+  case 8:
+    insn->regBase = MODRM_REG_RAX;
+    insn->eaRegBase = EA_REG_RAX;
+    break;
+  }
+  
+  reg |= rFromREX(insn->rexPrefix) << 3;
+  rm  |= bFromREX(insn->rexPrefix) << 3;
+  
+  insn->reg = (Reg)(insn->regBase + reg);
+  
+  switch (insn->addressSize) {
+  case 2:
+    insn->eaBaseBase = EA_BASE_BX_SI;
+     
+    switch (mod) {
+    case 0x0:
+      if (rm == 0x6) {
+        insn->eaBase = EA_BASE_NONE;
+        insn->eaDisplacement = EA_DISP_16;
+        if(readDisplacement(insn))
+          return -1;
+      } else {
+        insn->eaBase = (EABase)(insn->eaBaseBase + rm);
+        insn->eaDisplacement = EA_DISP_NONE;
+      }
+      break;
+    case 0x1:
+      insn->eaBase = (EABase)(insn->eaBaseBase + rm);
+      insn->eaDisplacement = EA_DISP_8;
+      if(readDisplacement(insn))
+        return -1;
+      break;
+    case 0x2:
+      insn->eaBase = (EABase)(insn->eaBaseBase + rm);
+      insn->eaDisplacement = EA_DISP_16;
+      if(readDisplacement(insn))
+        return -1;
+      break;
+    case 0x3:
+      insn->eaBase = (EABase)(insn->eaRegBase + rm);
+      if(readDisplacement(insn))
+        return -1;
+      break;
+    }
+    break;
+  case 4:
+  case 8:
+    insn->eaBaseBase = (insn->addressSize == 4 ? EA_BASE_EAX : EA_BASE_RAX);
+    
+    switch (mod) {
+    case 0x0:
+      insn->eaDisplacement = EA_DISP_NONE; /* readSIB may override this */
+      switch (rm) {
+      case 0x4:
+      case 0xc:   /* in case REXW.b is set */
+        insn->eaBase = (insn->addressSize == 4 ? 
+                        EA_BASE_sib : EA_BASE_sib64);
+        if (readSIB(insn))
+          return -1;
+        if(readDisplacement(insn))
+          return -1;
+        break;
+      case 0x5:
+        insn->eaBase = EA_BASE_NONE;
+        insn->eaDisplacement = EA_DISP_32;
+        if(readDisplacement(insn))
+          return -1;
+        break;
+      default:
+        insn->eaBase = (EABase)(insn->eaBaseBase + rm);
+        break;
+      }
+      break;
+    case 0x1:
+    case 0x2:
+      insn->eaDisplacement = (mod == 0x1 ? EA_DISP_8 : EA_DISP_32);
+      switch (rm) {
+      case 0x4:
+      case 0xc:   /* in case REXW.b is set */
+        insn->eaBase = EA_BASE_sib;
+        if (readSIB(insn))
+          return -1;
+        if(readDisplacement(insn))
+          return -1;
+        break;
+      default:
+        insn->eaBase = (EABase)(insn->eaBaseBase + rm);
+        if(readDisplacement(insn))
+          return -1;
+        break;
+      }
+      break;
+    case 0x3:
+      insn->eaDisplacement = EA_DISP_NONE;
+      insn->eaBase = (EABase)(insn->eaRegBase + rm);
+      break;
+    }
+    break;
+  } /* switch (insn->addressSize) */
+  
+  return 0;
+}
+
+#define GENERIC_FIXUP_FUNC(name, base, prefix)            \
+  static uint8_t name(struct InternalInstruction *insn,   \
+                      OperandType type,                   \
+                      uint8_t index,                      \
+                      uint8_t *valid) {                   \
+    *valid = 1;                                           \
+    switch (type) {                                       \
+    default:                                              \
+      unreachable("Unhandled register type");             \
+    case TYPE_Rv:                                         \
+      return base + index;                                \
+    case TYPE_R8:                                         \
+      if(insn->rexPrefix &&                               \
+         index >= 4 && index <= 7) {                      \
+        return prefix##_SPL + (index - 4);                \
+      } else {                                            \
+        return prefix##_AL + index;                       \
+      }                                                   \
+    case TYPE_R16:                                        \
+      return prefix##_AX + index;                         \
+    case TYPE_R32:                                        \
+      return prefix##_EAX + index;                        \
+    case TYPE_R64:                                        \
+      return prefix##_RAX + index;                        \
+    case TYPE_XMM128:                                     \
+    case TYPE_XMM64:                                      \
+    case TYPE_XMM32:                                      \
+    case TYPE_XMM:                                        \
+      return prefix##_XMM0 + index;                       \
+    case TYPE_MM64:                                       \
+    case TYPE_MM32:                                       \
+    case TYPE_MM:                                         \
+      if(index > 7)                                       \
+        *valid = 0;                                       \
+      return prefix##_MM0 + index;                        \
+    case TYPE_SEGMENTREG:                                 \
+      if(index > 5)                                       \
+        *valid = 0;                                       \
+      return prefix##_ES + index;                         \
+    case TYPE_DEBUGREG:                                   \
+      if(index > 7)                                       \
+        *valid = 0;                                       \
+      return prefix##_DR0 + index;                        \
+    case TYPE_CR32:                                       \
+      if(index > 7)                                       \
+        *valid = 0;                                       \
+      return prefix##_ECR0 + index;                       \
+    case TYPE_CR64:                                       \
+      if(index > 8)                                       \
+        *valid = 0;                                       \
+      return prefix##_RCR0 + index;                       \
+    }                                                     \
+  }
+
+/*
+ * fixup*Value - Consults an operand type to determine the meaning of the
+ *   reg or R/M field.  If the operand is an XMM operand, for example, the
+ *   correct register is XMM0 rather than AX, which readModRM() would
+ *   otherwise report.
+ *
+ * @param insn  - The instruction containing the operand.
+ * @param type  - The operand type.
+ * @param index - The existing value of the field as reported by readModRM().
+ * @param valid - The address of a uint8_t.  The target is set to 1 if the
+ *                field is valid for the register class; 0 if not.
+ */
+GENERIC_FIXUP_FUNC(fixupRegValue, insn->regBase,    MODRM_REG)
+GENERIC_FIXUP_FUNC(fixupRMValue,  insn->eaRegBase,  EA_REG)
+
+/*
+ * fixupReg - Consults an operand specifier to determine which of the
+ *   fixup*Value functions to use in correcting readModRM()'s interpretation.
+ *
+ * @param insn  - See fixup*Value().
+ * @param op    - The operand specifier.
+ * @return      - 0 if fixup was successful; -1 if the register returned was
+ *                invalid for its class.
+ */
+static int fixupReg(struct InternalInstruction *insn, 
+                    struct OperandSpecifier *op) {
+  uint8_t valid;
+  
+  dbgprintf(insn, "fixupReg()");
+  
+  switch ((OperandEncoding)op->encoding) {
+  default:
+    unreachable("Expected a REG or R/M encoding in fixupReg");
+  case ENCODING_REG:
+    insn->reg = (Reg)fixupRegValue(insn,
+                                   (OperandType)op->type,
+                                   insn->reg - insn->regBase,
+                                   &valid);
+    if (!valid)
+      return -1;
+    break;
+  case ENCODING_RM:
+    if (insn->eaBase >= insn->eaRegBase) {
+      insn->eaBase = (EABase)fixupRMValue(insn,
+                                          (OperandType)op->type,
+                                          insn->eaBase - insn->eaRegBase,
+                                          &valid);
+      if (!valid)
+        return -1;
+    }
+    break;
+  }
+  
+  return 0;
+}
+
+/*
+ * readOpcodeModifier - Reads an operand from the opcode field of an 
+ *   instruction.  Handles AddRegFrm instructions.
+ *
+ * @param insn - The instruction whose opcode field is to be read.
+ */
+static void readOpcodeModifier(struct InternalInstruction* insn) {
+  dbgprintf(insn, "readOpcodeModifier()");
+  
+  if (insn->consumedOpcodeModifier)
+    return;
+  
+  insn->consumedOpcodeModifier = TRUE;
+  
+  switch(insn->spec->modifierType) {
+  default:
+    unreachable("Unknown modifier type.");
+  case MODIFIER_NONE:
+    unreachable("No modifier but an operand expects one.");
+  case MODIFIER_OPCODE:
+    insn->opcodeModifier = insn->opcode - insn->spec->modifierBase;
+    break;
+  case MODIFIER_MODRM:
+    insn->opcodeModifier = insn->modRM - insn->spec->modifierBase;
+    break;
+  }  
+}
+
+/*
+ * readOpcodeRegister - Reads an operand from the opcode field of an 
+ *   instruction and interprets it appropriately given the operand width.
+ *   Handles AddRegFrm instructions.
+ *
+ * @param insn  - See readOpcodeModifier().
+ * @param size  - The width (in bytes) of the register being specified.
+ *                1 means AL and friends, 2 means AX, 4 means EAX, and 8 means
+ *                RAX.
+ */
+static void readOpcodeRegister(struct InternalInstruction* insn, uint8_t size) {
+  dbgprintf(insn, "readOpcodeRegister()");
+
+  readOpcodeModifier(insn);
+  
+  if (size == 0)
+    size = insn->registerSize;
+  
+  switch (size) {
+  case 1:
+    insn->opcodeRegister = (Reg)(MODRM_REG_AL + ((bFromREX(insn->rexPrefix) << 3) 
+                                                  | insn->opcodeModifier));
+    if(insn->rexPrefix && 
+       insn->opcodeRegister >= MODRM_REG_AL + 0x4 &&
+       insn->opcodeRegister < MODRM_REG_AL + 0x8) {
+      insn->opcodeRegister = (Reg)(MODRM_REG_SPL
+                                   + (insn->opcodeRegister - MODRM_REG_AL - 4));
+    }
+      
+    break;
+  case 2:
+    insn->opcodeRegister = (Reg)(MODRM_REG_AX
+                                 + ((bFromREX(insn->rexPrefix) << 3) 
+                                    | insn->opcodeModifier));
+    break;
+  case 4:
+    insn->opcodeRegister = (Reg)(MODRM_REG_EAX
+                                 + ((bFromREX(insn->rexPrefix) << 3)
+                                    | insn->opcodeModifier));
+    break;
+  case 8:
+    insn->opcodeRegister = (Reg)(MODRM_REG_RAX 
+                                 + ((bFromREX(insn->rexPrefix) << 3) 
+                                    | insn->opcodeModifier));
+    break;
+  }
+}
+
+/*
+ * readImmediate - Consumes an immediate operand from an instruction, given the
+ *   desired operand size.
+ *
+ * @param insn  - The instruction whose operand is to be read.
+ * @param size  - The width (in bytes) of the operand.
+ * @return      - 0 if the immediate was successfully consumed; nonzero
+ *                otherwise.
+ */
+static int readImmediate(struct InternalInstruction* insn, uint8_t size) {
+  uint8_t imm8;
+  uint16_t imm16;
+  uint32_t imm32;
+  uint64_t imm64;
+  
+  dbgprintf(insn, "readImmediate()");
+  
+  if (insn->numImmediatesConsumed == 2)
+    unreachable("Already consumed two immediates");
+  
+  if (size == 0)
+    size = insn->immediateSize;
+  else
+    insn->immediateSize = size;
+  
+  switch (size) {
+  case 1:
+    if (consumeByte(insn, &imm8))
+      return -1;
+    insn->immediates[insn->numImmediatesConsumed] = imm8;
+    break;
+  case 2:
+    if (consumeUInt16(insn, &imm16))
+      return -1;
+    insn->immediates[insn->numImmediatesConsumed] = imm16;
+    break;
+  case 4:
+    if (consumeUInt32(insn, &imm32))
+      return -1;
+    insn->immediates[insn->numImmediatesConsumed] = imm32;
+    break;
+  case 8:
+    if (consumeUInt64(insn, &imm64))
+      return -1;
+    insn->immediates[insn->numImmediatesConsumed] = imm64;
+    break;
+  }
+  
+  insn->numImmediatesConsumed++;
+  
+  return 0;
+}
+
+/*
+ * readOperands - Consults the specifier for an instruction and consumes all
+ *   operands for that instruction, interpreting them as it goes.
+ *
+ * @param insn  - The instruction whose operands are to be read and interpreted.
+ * @return      - 0 if all operands could be read; nonzero otherwise.
+ */
+static int readOperands(struct InternalInstruction* insn) {
+  int index;
+  
+  dbgprintf(insn, "readOperands()");
+  
+  for (index = 0; index < X86_MAX_OPERANDS; ++index) {
+    switch (insn->spec->operands[index].encoding) {
+    case ENCODING_NONE:
+      break;
+    case ENCODING_REG:
+    case ENCODING_RM:
+      if (readModRM(insn))
+        return -1;
+      if (fixupReg(insn, &insn->spec->operands[index]))
+        return -1;
+      break;
+    case ENCODING_CB:
+    case ENCODING_CW:
+    case ENCODING_CD:
+    case ENCODING_CP:
+    case ENCODING_CO:
+    case ENCODING_CT:
+      dbgprintf(insn, "We currently don't handle code-offset encodings");
+      return -1;
+    case ENCODING_IB:
+      if (readImmediate(insn, 1))
+        return -1;
+      break;
+    case ENCODING_IW:
+      if (readImmediate(insn, 2))
+        return -1;
+      break;
+    case ENCODING_ID:
+      if (readImmediate(insn, 4))
+        return -1;
+      break;
+    case ENCODING_IO:
+      if (readImmediate(insn, 8))
+        return -1;
+      break;
+    case ENCODING_Iv:
+      readImmediate(insn, insn->immediateSize);
+      break;
+    case ENCODING_Ia:
+      readImmediate(insn, insn->addressSize);
+      break;
+    case ENCODING_RB:
+      readOpcodeRegister(insn, 1);
+      break;
+    case ENCODING_RW:
+      readOpcodeRegister(insn, 2);
+      break;
+    case ENCODING_RD:
+      readOpcodeRegister(insn, 4);
+      break;
+    case ENCODING_RO:
+      readOpcodeRegister(insn, 8);
+      break;
+    case ENCODING_Rv:
+      readOpcodeRegister(insn, 0);
+      break;
+    case ENCODING_I:
+      readOpcodeModifier(insn);
+      break;
+    case ENCODING_DUP:
+      break;
+    default:
+      dbgprintf(insn, "Encountered an operand with an unknown encoding.");
+      return -1;
+    }
+  }
+  
+  return 0;
+}
+
+/*
+ * decodeInstruction - Reads and interprets a full instruction provided by the
+ *   user.
+ *
+ * @param insn      - A pointer to the instruction to be populated.  Must be 
+ *                    pre-allocated.
+ * @param reader    - The function to be used to read the instruction's bytes.
+ * @param readerArg - A generic argument to be passed to the reader to store
+ *                    any internal state.
+ * @param logger    - If non-NULL, the function to be used to write log messages
+ *                    and warnings.
+ * @param loggerArg - A generic argument to be passed to the logger to store
+ *                    any internal state.
+ * @param startLoc  - The address (in the reader's address space) of the first
+ *                    byte in the instruction.
+ * @param mode      - The mode (real mode, IA-32e, or IA-32e in 64-bit mode) to
+ *                    decode the instruction in.
+ * @return          - 0 if the instruction's memory could be read; nonzero if
+ *                    not.
+ */
+int decodeInstruction(struct InternalInstruction* insn,
+                      byteReader_t reader,
+                      void* readerArg,
+                      dlog_t logger,
+                      void* loggerArg,
+                      uint64_t startLoc,
+                      DisassemblerMode mode) {
+  memset(insn, 0, sizeof(struct InternalInstruction));
+    
+  insn->reader = reader;
+  insn->readerArg = readerArg;
+  insn->dlog = logger;
+  insn->dlogArg = loggerArg;
+  insn->startLocation = startLoc;
+  insn->readerCursor = startLoc;
+  insn->mode = mode;
+  insn->numImmediatesConsumed = 0;
+  
+  if (readPrefixes(insn)       ||
+      readOpcode(insn)         ||
+      getID(insn)              ||
+      insn->instructionID == 0 ||
+      readOperands(insn))
+    return -1;
+  
+  insn->length = insn->readerCursor - insn->startLocation;
+  
+  dbgprintf(insn, "Read from 0x%llx to 0x%llx: length %llu",
+          startLoc, insn->readerCursor, insn->length);
+    
+  if (insn->length > 15)
+    dbgprintf(insn, "Instruction exceeds 15-byte limit");
+  
+  return 0;
+}
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoder.h b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoder.h
new file mode 100644
index 0000000..c03c07a
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoder.h
@@ -0,0 +1,515 @@
+/*===- X86DisassemblerDecoder.h - Disassembler decoder -------------*- C -*-==*
+ *
+ *                     The LLVM Compiler Infrastructure
+ *
+ * This file is distributed under the University of Illinois Open Source
+ * License. See LICENSE.TXT for details.
+ *
+ *===----------------------------------------------------------------------===*
+ *
+ * This file is part of the X86 Disassembler.
+ * It contains the public interface of the instruction decoder.
+ * Documentation for the disassembler can be found in X86Disassembler.h.
+ *
+ *===----------------------------------------------------------------------===*/
+
+#ifndef X86DISASSEMBLERDECODER_H
+#define X86DISASSEMBLERDECODER_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+  
+#define INSTRUCTION_SPECIFIER_FIELDS  \
+  const char*             name;
+
+#define INSTRUCTION_IDS     \
+  InstrUID*  instructionIDs;
+
+#include "X86DisassemblerDecoderCommon.h"
+  
+#undef INSTRUCTION_SPECIFIER_FIELDS
+#undef INSTRUCTION_IDS
+  
+/*
+ * Accessor functions for various fields of an Intel instruction
+ */
+#define modFromModRM(modRM)  ((modRM & 0xc0) >> 6)
+#define regFromModRM(modRM)  ((modRM & 0x38) >> 3)
+#define rmFromModRM(modRM)   (modRM & 0x7)
+#define scaleFromSIB(sib)    ((sib & 0xc0) >> 6)
+#define indexFromSIB(sib)    ((sib & 0x38) >> 3)
+#define baseFromSIB(sib)     (sib & 0x7)
+#define wFromREX(rex)        ((rex & 0x8) >> 3)
+#define rFromREX(rex)        ((rex & 0x4) >> 2)
+#define xFromREX(rex)        ((rex & 0x2) >> 1)
+#define bFromREX(rex)        (rex & 0x1)
+
+/*
+ * These enums represent Intel registers for use by the decoder.
+ */
+
+#define REGS_8BIT     \
+  ENTRY(AL)           \
+  ENTRY(CL)           \
+  ENTRY(DL)           \
+  ENTRY(BL)           \
+  ENTRY(AH)           \
+  ENTRY(CH)           \
+  ENTRY(DH)           \
+  ENTRY(BH)           \
+  ENTRY(R8B)          \
+  ENTRY(R9B)          \
+  ENTRY(R10B)         \
+  ENTRY(R11B)         \
+  ENTRY(R12B)         \
+  ENTRY(R13B)         \
+  ENTRY(R14B)         \
+  ENTRY(R15B)         \
+  ENTRY(SPL)          \
+  ENTRY(BPL)          \
+  ENTRY(SIL)          \
+  ENTRY(DIL)
+
+#define EA_BASES_16BIT  \
+  ENTRY(BX_SI)          \
+  ENTRY(BX_DI)          \
+  ENTRY(BP_SI)          \
+  ENTRY(BP_DI)          \
+  ENTRY(SI)             \
+  ENTRY(DI)             \
+  ENTRY(BP)             \
+  ENTRY(BX)             \
+  ENTRY(R8W)            \
+  ENTRY(R9W)            \
+  ENTRY(R10W)           \
+  ENTRY(R11W)           \
+  ENTRY(R12W)           \
+  ENTRY(R13W)           \
+  ENTRY(R14W)           \
+  ENTRY(R15W)
+
+#define REGS_16BIT    \
+  ENTRY(AX)           \
+  ENTRY(CX)           \
+  ENTRY(DX)           \
+  ENTRY(BX)           \
+  ENTRY(SP)           \
+  ENTRY(BP)           \
+  ENTRY(SI)           \
+  ENTRY(DI)           \
+  ENTRY(R8W)          \
+  ENTRY(R9W)          \
+  ENTRY(R10W)         \
+  ENTRY(R11W)         \
+  ENTRY(R12W)         \
+  ENTRY(R13W)         \
+  ENTRY(R14W)         \
+  ENTRY(R15W)
+
+#define EA_BASES_32BIT  \
+  ENTRY(EAX)            \
+  ENTRY(ECX)            \
+  ENTRY(EDX)            \
+  ENTRY(EBX)            \
+  ENTRY(sib)            \
+  ENTRY(EBP)            \
+  ENTRY(ESI)            \
+  ENTRY(EDI)            \
+  ENTRY(R8D)            \
+  ENTRY(R9D)            \
+  ENTRY(R10D)           \
+  ENTRY(R11D)           \
+  ENTRY(R12D)           \
+  ENTRY(R13D)           \
+  ENTRY(R14D)           \
+  ENTRY(R15D)
+
+#define REGS_32BIT  \
+  ENTRY(EAX)        \
+  ENTRY(ECX)        \
+  ENTRY(EDX)        \
+  ENTRY(EBX)        \
+  ENTRY(ESP)        \
+  ENTRY(EBP)        \
+  ENTRY(ESI)        \
+  ENTRY(EDI)        \
+  ENTRY(R8D)        \
+  ENTRY(R9D)        \
+  ENTRY(R10D)       \
+  ENTRY(R11D)       \
+  ENTRY(R12D)       \
+  ENTRY(R13D)       \
+  ENTRY(R14D)       \
+  ENTRY(R15D)
+
+#define EA_BASES_64BIT  \
+  ENTRY(RAX)            \
+  ENTRY(RCX)            \
+  ENTRY(RDX)            \
+  ENTRY(RBX)            \
+  ENTRY(sib64)          \
+  ENTRY(RBP)            \
+  ENTRY(RSI)            \
+  ENTRY(RDI)            \
+  ENTRY(R8)             \
+  ENTRY(R9)             \
+  ENTRY(R10)            \
+  ENTRY(R11)            \
+  ENTRY(R12)            \
+  ENTRY(R13)            \
+  ENTRY(R14)            \
+  ENTRY(R15)
+
+#define REGS_64BIT  \
+  ENTRY(RAX)        \
+  ENTRY(RCX)        \
+  ENTRY(RDX)        \
+  ENTRY(RBX)        \
+  ENTRY(RSP)        \
+  ENTRY(RBP)        \
+  ENTRY(RSI)        \
+  ENTRY(RDI)        \
+  ENTRY(R8)         \
+  ENTRY(R9)         \
+  ENTRY(R10)        \
+  ENTRY(R11)        \
+  ENTRY(R12)        \
+  ENTRY(R13)        \
+  ENTRY(R14)        \
+  ENTRY(R15)
+
+#define REGS_MMX  \
+  ENTRY(MM0)      \
+  ENTRY(MM1)      \
+  ENTRY(MM2)      \
+  ENTRY(MM3)      \
+  ENTRY(MM4)      \
+  ENTRY(MM5)      \
+  ENTRY(MM6)      \
+  ENTRY(MM7)
+
+#define REGS_XMM  \
+  ENTRY(XMM0)     \
+  ENTRY(XMM1)     \
+  ENTRY(XMM2)     \
+  ENTRY(XMM3)     \
+  ENTRY(XMM4)     \
+  ENTRY(XMM5)     \
+  ENTRY(XMM6)     \
+  ENTRY(XMM7)     \
+  ENTRY(XMM8)     \
+  ENTRY(XMM9)     \
+  ENTRY(XMM10)    \
+  ENTRY(XMM11)    \
+  ENTRY(XMM12)    \
+  ENTRY(XMM13)    \
+  ENTRY(XMM14)    \
+  ENTRY(XMM15)
+  
+#define REGS_SEGMENT \
+  ENTRY(ES)          \
+  ENTRY(CS)          \
+  ENTRY(SS)          \
+  ENTRY(DS)          \
+  ENTRY(FS)          \
+  ENTRY(GS)
+  
+#define REGS_DEBUG  \
+  ENTRY(DR0)        \
+  ENTRY(DR1)        \
+  ENTRY(DR2)        \
+  ENTRY(DR3)        \
+  ENTRY(DR4)        \
+  ENTRY(DR5)        \
+  ENTRY(DR6)        \
+  ENTRY(DR7)
+
+#define REGS_CONTROL_32BIT  \
+  ENTRY(ECR0)               \
+  ENTRY(ECR1)               \
+  ENTRY(ECR2)               \
+  ENTRY(ECR3)               \
+  ENTRY(ECR4)               \
+  ENTRY(ECR5)               \
+  ENTRY(ECR6)               \
+  ENTRY(ECR7)
+
+#define REGS_CONTROL_64BIT  \
+  ENTRY(RCR0)               \
+  ENTRY(RCR1)               \
+  ENTRY(RCR2)               \
+  ENTRY(RCR3)               \
+  ENTRY(RCR4)               \
+  ENTRY(RCR5)               \
+  ENTRY(RCR6)               \
+  ENTRY(RCR7)               \
+  ENTRY(RCR8)
+  
+#define ALL_EA_BASES  \
+  EA_BASES_16BIT      \
+  EA_BASES_32BIT      \
+  EA_BASES_64BIT
+  
+#define ALL_SIB_BASES \
+  REGS_32BIT          \
+  REGS_64BIT
+
+#define ALL_REGS      \
+  REGS_8BIT           \
+  REGS_16BIT          \
+  REGS_32BIT          \
+  REGS_64BIT          \
+  REGS_MMX            \
+  REGS_XMM            \
+  REGS_SEGMENT        \
+  REGS_DEBUG          \
+  REGS_CONTROL_32BIT  \
+  REGS_CONTROL_64BIT  \
+  ENTRY(RIP)
+
+/*
+ * EABase - All possible values of the base field for effective-address 
+ *   computations, a.k.a. the Mod and R/M fields of the ModR/M byte.  We
+ *   distinguish between bases (EA_BASE_*) and registers that just happen to be
+ *   referred to when Mod == 0b11 (EA_REG_*).
+ */
+typedef enum {
+  EA_BASE_NONE,
+#define ENTRY(x) EA_BASE_##x,
+  ALL_EA_BASES
+#undef ENTRY
+#define ENTRY(x) EA_REG_##x,
+  ALL_REGS
+#undef ENTRY
+  EA_max
+} EABase;
+  
+/* 
+ * SIBIndex - All possible values of the SIB index field.
+ *   Borrows entries from ALL_EA_BASES with the special case that
+ *   sib is synonymous with NONE.
+ */
+typedef enum {
+  SIB_INDEX_NONE,
+#define ENTRY(x) SIB_INDEX_##x,
+  ALL_EA_BASES
+#undef ENTRY
+  SIB_INDEX_max
+} SIBIndex;
+  
+/*
+ * SIBBase - All possible values of the SIB base field.
+ */
+typedef enum {
+  SIB_BASE_NONE,
+#define ENTRY(x) SIB_BASE_##x,
+  ALL_SIB_BASES
+#undef ENTRY
+  SIB_BASE_max
+} SIBBase;
+
+/*
+ * EADisplacement - Possible displacement types for effective-address
+ *   computations.
+ */
+typedef enum {
+  EA_DISP_NONE,
+  EA_DISP_8,
+  EA_DISP_16,
+  EA_DISP_32
+} EADisplacement;
+
+/*
+ * Reg - All possible values of the reg field in the ModR/M byte.
+ */
+typedef enum {
+#define ENTRY(x) MODRM_REG_##x,
+  ALL_REGS
+#undef ENTRY
+  MODRM_REG_max
+} Reg;
+  
+/*
+ * SegmentOverride - All possible segment overrides.
+ */
+typedef enum {
+  SEG_OVERRIDE_NONE,
+  SEG_OVERRIDE_CS,
+  SEG_OVERRIDE_SS,
+  SEG_OVERRIDE_DS,
+  SEG_OVERRIDE_ES,
+  SEG_OVERRIDE_FS,
+  SEG_OVERRIDE_GS,
+  SEG_OVERRIDE_max
+} SegmentOverride;
+
+typedef uint8_t BOOL;
+
+/*
+ * byteReader_t - Type for the byte reader that the consumer must provide to
+ *   the decoder.  Reads a single byte from the instruction's address space.
+ * @param arg     - A baton that the consumer can associate with any internal
+ *                  state that it needs.
+ * @param byte    - A pointer to a single byte in memory that should be set to
+ *                  contain the value at address.
+ * @param address - The address in the instruction's address space that should
+ *                  be read from.
+ * @return        - -1 if the byte cannot be read for any reason; 0 otherwise.
+ */
+typedef int (*byteReader_t)(void* arg, uint8_t* byte, uint64_t address);
+
+/*
+ * dlog_t - Type for the logging function that the consumer can provide to
+ *   get debugging output from the decoder.
+ * @param arg     - A baton that the consumer can associate with any internal
+ *                  state that it needs.
+ * @param log     - A string that contains the message.  Will be reused after
+ *                  the logger returns.
+ */
+typedef void (*dlog_t)(void* arg, const char *log);
+
+/*
+ * The x86 internal instruction, which is produced by the decoder.
+ */
+struct InternalInstruction {
+  /* Reader interface (C) */
+  byteReader_t reader;
+  /* Opaque value passed to the reader */
+  void* readerArg;
+  /* The address of the next byte to read via the reader */
+  uint64_t readerCursor;
+
+  /* Logger interface (C) */
+  dlog_t dlog;
+  /* Opaque value passed to the logger */
+  void* dlogArg;
+
+  /* General instruction information */
+  
+  /* The mode to disassemble for (64-bit, protected, real) */
+  DisassemblerMode mode;
+  /* The start of the instruction, usable with the reader */
+  uint64_t startLocation;
+  /* The length of the instruction, in bytes */
+  size_t length;
+  
+  /* Prefix state */
+  
+  /* 1 if the prefix byte corresponding to the entry is present; 0 if not */
+  uint8_t prefixPresent[0x100];
+  /* contains the location (for use with the reader) of the prefix byte */
+  uint64_t prefixLocations[0x100];
+  /* The value of the REX prefix, if present */
+  uint8_t rexPrefix;
+  /* The location of the REX prefix */
+  uint64_t rexLocation;
+  /* The location where a mandatory prefix would have to be (i.e., right before
+     the opcode, or right before the REX prefix if one is present) */
+  uint64_t necessaryPrefixLocation;
+  /* The segment override type */
+  SegmentOverride segmentOverride;
+  
+  /* Sizes of various critical pieces of data */
+  uint8_t registerSize;
+  uint8_t addressSize;
+  uint8_t displacementSize;
+  uint8_t immediateSize;
+  
+  /* opcode state */
+  
+  /* The value of the two-byte escape prefix (usually 0x0f) */
+  uint8_t twoByteEscape;
+  /* The value of the three-byte escape prefix (usually 0x38 or 0x3a) */
+  uint8_t threeByteEscape;
+  /* The last byte of the opcode, not counting any ModR/M extension */
+  uint8_t opcode;
+  /* The ModR/M byte of the instruction, if it is an opcode extension */
+  uint8_t modRMExtension;
+  
+  /* decode state */
+  
+  /* The type of opcode, used for indexing into the array of decode tables */
+  OpcodeType opcodeType;
+  /* The instruction ID, extracted from the decode table */
+  uint16_t instructionID;
+  /* The specifier for the instruction, from the instruction info table */
+  struct InstructionSpecifier* spec;
+  
+  /* state for additional bytes, consumed during operand decode.  Pattern:
+     consumed___ indicates that the byte was already consumed and does not
+     need to be consumed again */
+  
+  /* The ModR/M byte, which contains most register operands and some portion of
+     all memory operands */
+  BOOL                          consumedModRM;
+  uint8_t                       modRM;
+  
+  /* The SIB byte, used for more complex 32- or 64-bit memory operands */
+  BOOL                          consumedSIB;
+  uint8_t                       sib;
+
+  /* The displacement, used for memory operands */
+  BOOL                          consumedDisplacement;
+  int32_t                       displacement;
+  
+  /* Immediates.  There can be two in some cases */
+  uint8_t                       numImmediatesConsumed;
+  uint8_t                       numImmediatesTranslated;
+  uint64_t                      immediates[2];
+  
+  /* A register or immediate operand encoded into the opcode */
+  BOOL                          consumedOpcodeModifier;
+  uint8_t                       opcodeModifier;
+  Reg                           opcodeRegister;
+  
+  /* Portions of the ModR/M byte */
+  
+  /* These fields determine the allowable values for the ModR/M fields, which
+     depend on operand and address widths */
+  EABase                        eaBaseBase;
+  EABase                        eaRegBase;
+  Reg                           regBase;
+
+  /* The Mod and R/M fields can encode a base for an effective address, or a
+     register.  These are separated into two fields here */
+  EABase                        eaBase;
+  EADisplacement                eaDisplacement;
+  /* The reg field always encodes a register */
+  Reg                           reg;
+  
+  /* SIB state */
+  SIBIndex                      sibIndex;
+  uint8_t                       sibScale;
+  SIBBase                       sibBase;
+};
+
+/* decodeInstruction - Decode one instruction and store the decoding results in
+ *   a buffer provided by the consumer.
+ * @param insn      - The buffer to store the instruction in.  Allocated by the
+ *                    consumer.
+ * @param reader    - The byteReader_t for the bytes to be read.
+ * @param readerArg - An argument to pass to the reader for storing context
+ *                    specific to the consumer.  May be NULL.
+ * @param logger    - The dlog_t to be used in printing status messages from the
+ *                    disassembler.  May be NULL.
+ * @param loggerArg - An argument to pass to the logger for storing context
+ *                    specific to the logger.  May be NULL.
+ * @param startLoc  - The address (in the reader's address space) of the first
+ *                    byte in the instruction.
+ * @param mode      - The mode (16-bit, 32-bit, 64-bit) to decode in.
+ * @return          - Nonzero if there was an error during decode, 0 otherwise.
+ */
+int decodeInstruction(struct InternalInstruction* insn,
+                      byteReader_t reader,
+                      void* readerArg,
+                      dlog_t logger,
+                      void* loggerArg,
+                      uint64_t startLoc,
+                      DisassemblerMode mode);
+
+#ifdef __cplusplus 
+}
+#endif
+  
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoderCommon.h b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoderCommon.h
new file mode 100644
index 0000000..c213f89
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86DisassemblerDecoderCommon.h
@@ -0,0 +1,355 @@
+/*===- X86DisassemblerDecoderCommon.h - Disassembler decoder -------*- C -*-==*
+ *
+ *                     The LLVM Compiler Infrastructure
+ *
+ * This file is distributed under the University of Illinois Open Source
+ * License. See LICENSE.TXT for details.
+ *
+ *===----------------------------------------------------------------------===*
+ *
+ * This file is part of the X86 Disassembler.
+ * It contains common definitions used by both the disassembler and the table
+ *  generator.
+ * Documentation for the disassembler can be found in X86Disassembler.h.
+ *
+ *===----------------------------------------------------------------------===*/
+
+/*
+ * This header file provides those definitions that need to be shared between
+ * the decoder and the table generator in a C-friendly manner.
+ */
+
+#ifndef X86DISASSEMBLERDECODERCOMMON_H
+#define X86DISASSEMBLERDECODERCOMMON_H
+
+#include "llvm/System/DataTypes.h"
+
+#define INSTRUCTIONS_SYM  x86DisassemblerInstrSpecifiers
+#define CONTEXTS_SYM      x86DisassemblerContexts
+#define ONEBYTE_SYM       x86DisassemblerOneByteOpcodes
+#define TWOBYTE_SYM       x86DisassemblerTwoByteOpcodes
+#define THREEBYTE38_SYM   x86DisassemblerThreeByte38Opcodes
+#define THREEBYTE3A_SYM   x86DisassemblerThreeByte3AOpcodes
+
+#define INSTRUCTIONS_STR  "x86DisassemblerInstrSpecifiers"
+#define CONTEXTS_STR      "x86DisassemblerContexts"
+#define ONEBYTE_STR       "x86DisassemblerOneByteOpcodes"
+#define TWOBYTE_STR       "x86DisassemblerTwoByteOpcodes"
+#define THREEBYTE38_STR   "x86DisassemblerThreeByte38Opcodes"
+#define THREEBYTE3A_STR   "x86DisassemblerThreeByte3AOpcodes"
+
+/*
+ * Attributes of an instruction that must be known before the opcode can be
+ * processed correctly.  Most of these indicate the presence of particular
+ * prefixes, but ATTR_64BIT is simply an attribute of the decoding context.
+ */
+#define ATTRIBUTE_BITS          \
+  ENUM_ENTRY(ATTR_NONE,   0x00) \
+  ENUM_ENTRY(ATTR_64BIT,  0x01) \
+  ENUM_ENTRY(ATTR_XS,     0x02) \
+  ENUM_ENTRY(ATTR_XD,     0x04) \
+  ENUM_ENTRY(ATTR_REXW,   0x08) \
+  ENUM_ENTRY(ATTR_OPSIZE, 0x10)
+
+#define ENUM_ENTRY(n, v) n = v,
+enum attributeBits {
+  ATTRIBUTE_BITS
+  ATTR_max
+};
+#undef ENUM_ENTRY
+
+/*
+ * Combinations of the above attributes that are relevant to instruction
+ * decode.  Although other combinations are possible, they can be reduced to
+ * these without affecting the ultimately decoded instruction.
+ */
+
+/*           Class name           Rank  Rationale for rank assignment         */
+#define INSTRUCTION_CONTEXTS                                                   \
+  ENUM_ENTRY(IC,                    0,  "says nothing about the instruction")  \
+  ENUM_ENTRY(IC_64BIT,              1,  "says the instruction applies in "     \
+                                        "64-bit mode but no more")             \
+  ENUM_ENTRY(IC_OPSIZE,             3,  "requires an OPSIZE prefix, so "       \
+                                        "operands change width")               \
+  ENUM_ENTRY(IC_XD,                 2,  "may say something about the opcode "  \
+                                        "but not the operands")                \
+  ENUM_ENTRY(IC_XS,                 2,  "may say something about the opcode "  \
+                                        "but not the operands")                \
+  ENUM_ENTRY(IC_64BIT_REXW,         4,  "requires a REX.W prefix, so operands "\
+                                        "change width; overrides IC_OPSIZE")   \
+  ENUM_ENTRY(IC_64BIT_OPSIZE,       3,  "Just as meaningful as IC_OPSIZE")     \
+  ENUM_ENTRY(IC_64BIT_XD,           5,  "XD instructions are SSE; REX.W is "   \
+                                        "secondary")                           \
+  ENUM_ENTRY(IC_64BIT_XS,           5,  "Just as meaningful as IC_64BIT_XD")   \
+  ENUM_ENTRY(IC_64BIT_REXW_XS,      6,  "OPSIZE could mean a different "       \
+                                        "opcode")                              \
+  ENUM_ENTRY(IC_64BIT_REXW_XD,      6,  "Just as meaningful as "               \
+                                        "IC_64BIT_REXW_XS")                    \
+  ENUM_ENTRY(IC_64BIT_REXW_OPSIZE,  7,  "The Dynamic Duo!  Prefer over all "   \
+                                        "else because this changes most "      \
+                                        "operands' meaning")
+
+#define ENUM_ENTRY(n, r, d) n,    
+typedef enum {
+  INSTRUCTION_CONTEXTS
+  IC_max
+} InstructionContext;
+#undef ENUM_ENTRY
+
+/*
+ * Opcode types, which determine which decode table to use, both in the Intel
+ * manual and also for the decoder.
+ */
+typedef enum {
+  ONEBYTE       = 0,
+  TWOBYTE       = 1,
+  THREEBYTE_38  = 2,
+  THREEBYTE_3A  = 3
+} OpcodeType;
+
+/*
+ * The following structs are used for the hierarchical decode table.  After
+ * determining the instruction's class (i.e., which IC_* constant applies to
+ * it), the decoder reads the opcode.  Some instructions require specific
+ * values of the ModR/M byte, so the ModR/M byte indexes into the final table.
+ *
+ * If a ModR/M byte is not required, "required" is left unset, and the values
+ * for each instructionID are identical.
+ */
+ 
+typedef uint16_t InstrUID;
+
+/*
+ * ModRMDecisionType - describes the type of ModR/M decision, allowing the 
+ * consumer to determine the number of entries in it.
+ *
+ * MODRM_ONEENTRY - No matter what the value of the ModR/M byte is, the decoded
+ *                  instruction is the same.
+ * MODRM_SPLITRM  - If the ModR/M byte is between 0x00 and 0xbf, the opcode
+ *                  corresponds to one instruction; otherwise, it corresponds to
+ *                  a different instruction.
+ * MODRM_FULL     - Potentially, each value of the ModR/M byte could correspond
+ *                  to a different instruction.
+ */
+
+#define MODRMTYPES            \
+  ENUM_ENTRY(MODRM_ONEENTRY)  \
+  ENUM_ENTRY(MODRM_SPLITRM)   \
+  ENUM_ENTRY(MODRM_FULL)
+
+#define ENUM_ENTRY(n) n,    
+typedef enum {
+  MODRMTYPES
+  MODRM_max
+} ModRMDecisionType;
+#undef ENUM_ENTRY
+
+/*
+ * ModRMDecision - Specifies whether a ModR/M byte is needed and (if so) which 
+ *  instruction each possible value of the ModR/M byte corresponds to.  Once
+ *  this information is known, we have narrowed down to a single instruction.
+ */
+struct ModRMDecision {
+  uint8_t     modrm_type;
+  
+  /* The macro below must be defined wherever this file is included. */
+  INSTRUCTION_IDS
+};
+
+/*
+ * OpcodeDecision - Specifies which set of ModR/M->instruction tables to look at
+ *   given a particular opcode.
+ */
+struct OpcodeDecision {
+  struct ModRMDecision modRMDecisions[256];
+};
+
+/*
+ * ContextDecision - Specifies which opcode->instruction tables to look at given
+ *   a particular context (set of attributes).  Since there are many possible
+ *   contexts, the decoder first uses CONTEXTS_SYM to determine which context
+ *   applies given a specific set of attributes.  Hence there are only IC_max
+ *   entries in this table, rather than 2^(ATTR_max).
+ */
+struct ContextDecision {
+  struct OpcodeDecision opcodeDecisions[IC_max];
+};
+
+/* 
+ * Physical encodings of instruction operands.
+ */
+
+#define ENCODINGS                                                              \
+  ENUM_ENTRY(ENCODING_NONE,   "")                                              \
+  ENUM_ENTRY(ENCODING_REG,    "Register operand in ModR/M byte.")              \
+  ENUM_ENTRY(ENCODING_RM,     "R/M operand in ModR/M byte.")                   \
+  ENUM_ENTRY(ENCODING_CB,     "1-byte code offset (possible new CS value)")    \
+  ENUM_ENTRY(ENCODING_CW,     "2-byte")                                        \
+  ENUM_ENTRY(ENCODING_CD,     "4-byte")                                        \
+  ENUM_ENTRY(ENCODING_CP,     "6-byte")                                        \
+  ENUM_ENTRY(ENCODING_CO,     "8-byte")                                        \
+  ENUM_ENTRY(ENCODING_CT,     "10-byte")                                       \
+  ENUM_ENTRY(ENCODING_IB,     "1-byte immediate")                              \
+  ENUM_ENTRY(ENCODING_IW,     "2-byte")                                        \
+  ENUM_ENTRY(ENCODING_ID,     "4-byte")                                        \
+  ENUM_ENTRY(ENCODING_IO,     "8-byte")                                        \
+  ENUM_ENTRY(ENCODING_RB,     "(AL..DIL, R8L..R15L) Register code added to "   \
+                              "the opcode byte")                               \
+  ENUM_ENTRY(ENCODING_RW,     "(AX..DI, R8W..R15W)")                           \
+  ENUM_ENTRY(ENCODING_RD,     "(EAX..EDI, R8D..R15D)")                         \
+  ENUM_ENTRY(ENCODING_RO,     "(RAX..RDI, R8..R15)")                           \
+  ENUM_ENTRY(ENCODING_I,      "Position on floating-point stack added to the " \
+                              "opcode byte")                                   \
+                                                                               \
+  ENUM_ENTRY(ENCODING_Iv,     "Immediate of operand size")                     \
+  ENUM_ENTRY(ENCODING_Ia,     "Immediate of address size")                     \
+  ENUM_ENTRY(ENCODING_Rv,     "Register code of operand size added to the "    \
+                              "opcode byte")                                   \
+  ENUM_ENTRY(ENCODING_DUP,    "Duplicate of another operand; ID is encoded "   \
+                              "in type")
+
+#define ENUM_ENTRY(n, d) n,    
+  typedef enum {
+    ENCODINGS
+    ENCODING_max
+  } OperandEncoding;
+#undef ENUM_ENTRY
+
+/* 
+ * Semantic interpretations of instruction operands.
+ */
+
+#define TYPES                                                                  \
+  ENUM_ENTRY(TYPE_NONE,       "")                                              \
+  ENUM_ENTRY(TYPE_REL8,       "1-byte immediate address")                      \
+  ENUM_ENTRY(TYPE_REL16,      "2-byte")                                        \
+  ENUM_ENTRY(TYPE_REL32,      "4-byte")                                        \
+  ENUM_ENTRY(TYPE_REL64,      "8-byte")                                        \
+  ENUM_ENTRY(TYPE_PTR1616,    "2+2-byte segment+offset address")               \
+  ENUM_ENTRY(TYPE_PTR1632,    "2+4-byte")                                      \
+  ENUM_ENTRY(TYPE_PTR1664,    "2+8-byte")                                      \
+  ENUM_ENTRY(TYPE_R8,         "1-byte register operand")                       \
+  ENUM_ENTRY(TYPE_R16,        "2-byte")                                        \
+  ENUM_ENTRY(TYPE_R32,        "4-byte")                                        \
+  ENUM_ENTRY(TYPE_R64,        "8-byte")                                        \
+  ENUM_ENTRY(TYPE_IMM8,       "1-byte immediate operand")                      \
+  ENUM_ENTRY(TYPE_IMM16,      "2-byte")                                        \
+  ENUM_ENTRY(TYPE_IMM32,      "4-byte")                                        \
+  ENUM_ENTRY(TYPE_IMM64,      "8-byte")                                        \
+  ENUM_ENTRY(TYPE_RM8,        "1-byte register or memory operand")             \
+  ENUM_ENTRY(TYPE_RM16,       "2-byte")                                        \
+  ENUM_ENTRY(TYPE_RM32,       "4-byte")                                        \
+  ENUM_ENTRY(TYPE_RM64,       "8-byte")                                        \
+  ENUM_ENTRY(TYPE_M,          "Memory operand")                                \
+  ENUM_ENTRY(TYPE_M8,         "1-byte")                                        \
+  ENUM_ENTRY(TYPE_M16,        "2-byte")                                        \
+  ENUM_ENTRY(TYPE_M32,        "4-byte")                                        \
+  ENUM_ENTRY(TYPE_M64,        "8-byte")                                        \
+  ENUM_ENTRY(TYPE_LEA,        "Effective address")                             \
+  ENUM_ENTRY(TYPE_M128,       "16-byte (SSE/SSE2)")                            \
+  ENUM_ENTRY(TYPE_M1616,      "2+2-byte segment+offset address")               \
+  ENUM_ENTRY(TYPE_M1632,      "2+4-byte")                                      \
+  ENUM_ENTRY(TYPE_M1664,      "2+8-byte")                                      \
+  ENUM_ENTRY(TYPE_M16_32,     "2+4-byte two-part memory operand (LIDT, LGDT)") \
+  ENUM_ENTRY(TYPE_M16_16,     "2+2-byte (BOUND)")                              \
+  ENUM_ENTRY(TYPE_M32_32,     "4+4-byte (BOUND)")                              \
+  ENUM_ENTRY(TYPE_M16_64,     "2+8-byte (LIDT, LGDT)")                         \
+  ENUM_ENTRY(TYPE_MOFFS8,     "1-byte memory offset (relative to segment "     \
+                              "base)")                                         \
+  ENUM_ENTRY(TYPE_MOFFS16,    "2-byte")                                        \
+  ENUM_ENTRY(TYPE_MOFFS32,    "4-byte")                                        \
+  ENUM_ENTRY(TYPE_MOFFS64,    "8-byte")                                        \
+  ENUM_ENTRY(TYPE_SREG,       "Byte with single bit set: 0 = ES, 1 = CS, "     \
+                              "2 = SS, 3 = DS, 4 = FS, 5 = GS")                \
+  ENUM_ENTRY(TYPE_M32FP,      "32-bit IEEE 754 memory floating-point operand") \
+  ENUM_ENTRY(TYPE_M64FP,      "64-bit")                                        \
+  ENUM_ENTRY(TYPE_M80FP,      "80-bit extended")                               \
+  ENUM_ENTRY(TYPE_M16INT,     "2-byte memory integer operand for use in "      \
+                              "floating-point instructions")                   \
+  ENUM_ENTRY(TYPE_M32INT,     "4-byte")                                        \
+  ENUM_ENTRY(TYPE_M64INT,     "8-byte")                                        \
+  ENUM_ENTRY(TYPE_ST,         "Position on the floating-point stack")          \
+  ENUM_ENTRY(TYPE_MM,         "MMX register operand")                          \
+  ENUM_ENTRY(TYPE_MM32,       "4-byte MMX register or memory operand")         \
+  ENUM_ENTRY(TYPE_MM64,       "8-byte")                                        \
+  ENUM_ENTRY(TYPE_XMM,        "XMM register operand")                          \
+  ENUM_ENTRY(TYPE_XMM32,      "4-byte XMM register or memory operand")         \
+  ENUM_ENTRY(TYPE_XMM64,      "8-byte")                                        \
+  ENUM_ENTRY(TYPE_XMM128,     "16-byte")                                       \
+  ENUM_ENTRY(TYPE_XMM0,       "Implicit use of XMM0")                          \
+  ENUM_ENTRY(TYPE_SEGMENTREG, "Segment register operand")                      \
+  ENUM_ENTRY(TYPE_DEBUGREG,   "Debug register operand")                        \
+  ENUM_ENTRY(TYPE_CR32,       "4-byte control register operand")               \
+  ENUM_ENTRY(TYPE_CR64,       "8-byte")                                        \
+                                                                               \
+  ENUM_ENTRY(TYPE_Mv,         "Memory operand of operand size")                \
+  ENUM_ENTRY(TYPE_Rv,         "Register operand of operand size")              \
+  ENUM_ENTRY(TYPE_IMMv,       "Immediate operand of operand size")             \
+  ENUM_ENTRY(TYPE_RELv,       "Immediate address of operand size")             \
+  ENUM_ENTRY(TYPE_DUP0,       "Duplicate of operand 0")                        \
+  ENUM_ENTRY(TYPE_DUP1,       "operand 1")                                     \
+  ENUM_ENTRY(TYPE_DUP2,       "operand 2")                                     \
+  ENUM_ENTRY(TYPE_DUP3,       "operand 3")                                     \
+  ENUM_ENTRY(TYPE_DUP4,       "operand 4")                                     \
+  ENUM_ENTRY(TYPE_M512,       "512-bit FPU/MMX/XMM/MXCSR state")
+
+#define ENUM_ENTRY(n, d) n,    
+typedef enum {
+  TYPES
+  TYPE_max
+} OperandType;
+#undef ENUM_ENTRY
+
+/* 
+ * OperandSpecifier - The specification for how to extract and interpret one
+ *   operand.
+ */
+struct OperandSpecifier {
+  OperandEncoding  encoding;
+  OperandType      type;
+};
+
+/*
+ * Indicates where the opcode modifier (if any) is to be found.  Extended
+ * opcodes with AddRegFrm have the opcode modifier in the ModR/M byte.
+ */
+
+#define MODIFIER_TYPES        \
+  ENUM_ENTRY(MODIFIER_NONE)   \
+  ENUM_ENTRY(MODIFIER_OPCODE) \
+  ENUM_ENTRY(MODIFIER_MODRM)
+
+#define ENUM_ENTRY(n) n,
+typedef enum {
+  MODIFIER_TYPES
+  MODIFIER_max
+} ModifierType;
+#undef ENUM_ENTRY
+
+#define X86_MAX_OPERANDS 5
+
+/*
+ * The specification for how to extract and interpret a full instruction and
+ * its operands.
+ */
+struct InstructionSpecifier {
+  ModifierType modifierType;
+  uint8_t modifierBase;
+  struct OperandSpecifier operands[X86_MAX_OPERANDS];
+  
+  /* The macro below must be defined wherever this file is included. */
+  INSTRUCTION_SPECIFIER_FIELDS
+};
+
+/*
+ * Decoding mode for the Intel disassembler.  16-bit, 32-bit, and 64-bit mode
+ * are supported, and represent real mode, IA-32e, and IA-32e in 64-bit mode,
+ * respectively.
+ */
+typedef enum {
+  MODE_16BIT,
+  MODE_32BIT,
+  MODE_64BIT
+} DisassemblerMode;
+
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/X86/Makefile b/libclamav/c++/llvm/lib/Target/X86/Makefile
index b311a6e..6098dbf 100644
--- a/libclamav/c++/llvm/lib/Target/X86/Makefile
+++ b/libclamav/c++/llvm/lib/Target/X86/Makefile
@@ -15,8 +15,8 @@ BUILT_SOURCES = X86GenRegisterInfo.h.inc X86GenRegisterNames.inc \
                 X86GenRegisterInfo.inc X86GenInstrNames.inc \
                 X86GenInstrInfo.inc X86GenAsmWriter.inc X86GenAsmMatcher.inc \
                 X86GenAsmWriter1.inc X86GenDAGISel.inc  \
-                X86GenFastISel.inc \
-                X86GenCallingConv.inc X86GenSubtarget.inc
+                X86GenDisassemblerTables.inc X86GenFastISel.inc \
+                X86GenCallingConv.inc X86GenSubtarget.inc \
 
 DIRS = AsmPrinter AsmParser Disassembler TargetInfo
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/README.txt b/libclamav/c++/llvm/lib/Target/X86/README.txt
index 9b7aab8..afd9f53 100644
--- a/libclamav/c++/llvm/lib/Target/X86/README.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/README.txt
@@ -123,20 +123,6 @@ when it can invert the result of the compare for free.
 
 //===---------------------------------------------------------------------===//
 
-How about intrinsics? An example is:
-  *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));
-
-compiles to
-	pmuludq (%eax), %xmm0
-	movl 8(%esp), %eax
-	movdqa (%eax), %xmm1
-	pmulhuw %xmm0, %xmm1
-
-The transformation probably requires a X86 specific pass or a DAG combiner
-target specific hook.
-
-//===---------------------------------------------------------------------===//
-
 In many cases, LLVM generates code like this:
 
 _test:
@@ -1762,6 +1748,11 @@ LBB1_1:	## bb1
 	cmpl	$150, %edi
 	jne	LBB1_1	## bb1
 
+The issue is that we hoist the cast of "scaler" to long long outside of the
+loop, so the value comes into the loop as two values, and
+RegsForValue::getCopyFromRegs doesn't know how to put an AssertSext on the
+constructed BUILD_PAIR that represents the cast value.
+
 //===---------------------------------------------------------------------===//
 
 Test instructions can be eliminated by using EFLAGS values from arithmetic
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86.td b/libclamav/c++/llvm/lib/Target/X86/X86.td
index da467fe..a6e1ca3 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86.td
@@ -63,7 +63,7 @@ def FeatureSSE4A   : SubtargetFeature<"sse4a", "HasSSE4A", "true",
 def FeatureAVX     : SubtargetFeature<"avx", "HasAVX", "true",
                                       "Enable AVX instructions">;
 def FeatureFMA3    : SubtargetFeature<"fma3", "HasFMA3", "true",
-                                      "Enable three-operand fused multiple-add">;
+                                     "Enable three-operand fused multiple-add">;
 def FeatureFMA4    : SubtargetFeature<"fma4", "HasFMA4", "true",
                                       "Enable four-operand fused multiple-add">;
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
index a9a78be..cb82383 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
@@ -50,9 +50,6 @@
 #include "llvm/ADT/Statistic.h"
 using namespace llvm;
 
-#include "llvm/Support/CommandLine.h"
-static cl::opt<bool> AvoidDupAddrCompute("x86-avoid-dup-address", cl::Hidden);
-
 STATISTIC(NumLoadMoved, "Number of loads moved below TokenFactor");
 
 //===----------------------------------------------------------------------===//
@@ -1275,28 +1272,7 @@ bool X86DAGToDAGISel::SelectAddr(SDValue Op, SDValue N, SDValue &Base,
                                  SDValue &Scale, SDValue &Index,
                                  SDValue &Disp, SDValue &Segment) {
   X86ISelAddressMode AM;
-  bool Done = false;
-  if (AvoidDupAddrCompute && !N.hasOneUse()) {
-    unsigned Opcode = N.getOpcode();
-    if (Opcode != ISD::Constant && Opcode != ISD::FrameIndex &&
-        Opcode != X86ISD::Wrapper && Opcode != X86ISD::WrapperRIP) {
-      // If we are able to fold N into addressing mode, then we'll allow it even
-      // if N has multiple uses. In general, addressing computation is used as
-      // addresses by all of its uses. But watch out for CopyToReg uses, that
-      // means the address computation is liveout. It will be computed by a LEA
-      // so we want to avoid computing the address twice.
-      for (SDNode::use_iterator UI = N.getNode()->use_begin(),
-             UE = N.getNode()->use_end(); UI != UE; ++UI) {
-        if (UI->getOpcode() == ISD::CopyToReg) {
-          MatchAddressBase(N, AM);
-          Done = true;
-          break;
-        }
-      }
-    }
-  }
-
-  if (!Done && MatchAddress(N, AM))
+  if (MatchAddress(N, AM))
     return false;
 
   EVT VT = N.getValueType();
@@ -1891,27 +1867,28 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
       }
     }
 
-    unsigned LoReg, HiReg;
+    unsigned LoReg, HiReg, ClrReg;
     unsigned ClrOpcode, SExtOpcode;
+    EVT ClrVT = NVT;
     switch (NVT.getSimpleVT().SimpleTy) {
     default: llvm_unreachable("Unsupported VT!");
     case MVT::i8:
-      LoReg = X86::AL;  HiReg = X86::AH;
+      LoReg = X86::AL;  ClrReg = HiReg = X86::AH;
       ClrOpcode  = 0;
       SExtOpcode = X86::CBW;
       break;
     case MVT::i16:
       LoReg = X86::AX;  HiReg = X86::DX;
-      ClrOpcode  = X86::MOV16r0;
+      ClrOpcode  = X86::MOV32r0;  ClrReg = X86::EDX;  ClrVT = MVT::i32;
       SExtOpcode = X86::CWD;
       break;
     case MVT::i32:
-      LoReg = X86::EAX; HiReg = X86::EDX;
+      LoReg = X86::EAX; ClrReg = HiReg = X86::EDX;
       ClrOpcode  = X86::MOV32r0;
       SExtOpcode = X86::CDQ;
       break;
     case MVT::i64:
-      LoReg = X86::RAX; HiReg = X86::RDX;
+      LoReg = X86::RAX; ClrReg = HiReg = X86::RDX;
       ClrOpcode  = ~0U; // NOT USED.
       SExtOpcode = X86::CQO;
       break;
@@ -1966,10 +1943,10 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
                                            MVT::i64, Zero, ClrNode, SubRegNo),
                     0);
         } else {
-          ClrNode = SDValue(CurDAG->getMachineNode(ClrOpcode, dl, NVT), 0);
+          ClrNode = SDValue(CurDAG->getMachineNode(ClrOpcode, dl, ClrVT), 0);
         }
 
-        InFlag = CurDAG->getCopyToReg(CurDAG->getEntryNode(), dl, HiReg,
+        InFlag = CurDAG->getCopyToReg(CurDAG->getEntryNode(), dl, ClrReg,
                                       ClrNode, InFlag).getValue(1);
       }
     }
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
index 0517b56..5f99fae 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -980,6 +980,7 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
   setTargetDAGCombine(ISD::SRL);
   setTargetDAGCombine(ISD::STORE);
   setTargetDAGCombine(ISD::MEMBARRIER);
+  setTargetDAGCombine(ISD::ZERO_EXTEND);
   if (Subtarget->is64Bit())
     setTargetDAGCombine(ISD::MUL);
 
@@ -4583,7 +4584,7 @@ X86TargetLowering::LowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) {
                                                  MVT::v4i32, Vec),
                                      Op.getOperand(1)));
     // Transform it so it match pextrw which produces a 32-bit result.
-    EVT EltVT = (MVT::SimpleValueType)(VT.getSimpleVT().SimpleTy+1);
+    EVT EltVT = MVT::i32;
     SDValue Extract = DAG.getNode(X86ISD::PEXTRW, dl, EltVT,
                                     Op.getOperand(0), Op.getOperand(1));
     SDValue Assert  = DAG.getNode(ISD::AssertZext, dl, EltVT, Extract,
@@ -5752,14 +5753,11 @@ SDValue X86TargetLowering::LowerSETCC(SDValue Op, SelectionDAG &DAG) {
   SDValue Cond = EmitCmp(Op0, Op1, X86CC, DAG);
 
   // Use sbb x, x to materialize carry bit into a GPR.
-  // FIXME: Temporarily disabled since it breaks self-hosting. It's apparently
-  // miscompiling ARMISelDAGToDAG.cpp.
-  if (0 && !isFP && X86CC == X86::COND_B) {
+  if (X86CC == X86::COND_B)
     return DAG.getNode(ISD::AND, dl, MVT::i8,
                        DAG.getNode(X86ISD::SETCC_CARRY, dl, MVT::i8,
                                    DAG.getConstant(X86CC, MVT::i8), Cond),
                        DAG.getConstant(1, MVT::i8));
-  }
 
   return DAG.getNode(X86ISD::SETCC, dl, MVT::i8,
                      DAG.getConstant(X86CC, MVT::i8), Cond);
@@ -6196,7 +6194,8 @@ X86TargetLowering::EmitTargetCodeForMemset(SelectionDAG &DAG, DebugLoc dl,
         LowerCallTo(Chain, Type::getVoidTy(*DAG.getContext()),
                     false, false, false, false,
                     0, CallingConv::C, false, /*isReturnValueUsed=*/false,
-                    DAG.getExternalSymbol(bzeroEntry, IntPtr), Args, DAG, dl);
+                    DAG.getExternalSymbol(bzeroEntry, IntPtr), Args, DAG, dl,
+                    DAG.GetOrdering(Chain.getNode()));
       return CallResult.second;
     }
 
@@ -9349,6 +9348,32 @@ static SDValue PerformMEMBARRIERCombine(SDNode* N, SelectionDAG &DAG) {
   }
 }
 
+static SDValue PerformZExtCombine(SDNode *N, SelectionDAG &DAG) {
+  // (i32 zext (and (i8  x86isd::setcc_carry), 1)) ->
+  //           (and (i32 x86isd::setcc_carry), 1)
+  // This eliminates the zext. This transformation is necessary because
+  // ISD::SETCC is always legalized to i8.
+  DebugLoc dl = N->getDebugLoc();
+  SDValue N0 = N->getOperand(0);
+  EVT VT = N->getValueType(0);
+  if (N0.getOpcode() == ISD::AND &&
+      N0.hasOneUse() &&
+      N0.getOperand(0).hasOneUse()) {
+    SDValue N00 = N0.getOperand(0);
+    if (N00.getOpcode() != X86ISD::SETCC_CARRY)
+      return SDValue();
+    ConstantSDNode *C = dyn_cast<ConstantSDNode>(N0.getOperand(1));
+    if (!C || C->getZExtValue() != 1)
+      return SDValue();
+    return DAG.getNode(ISD::AND, dl, VT,
+                       DAG.getNode(X86ISD::SETCC_CARRY, dl, VT,
+                                   N00.getOperand(0), N00.getOperand(1)),
+                       DAG.getConstant(1, VT));
+  }
+
+  return SDValue();
+}
+
 SDValue X86TargetLowering::PerformDAGCombine(SDNode *N,
                                              DAGCombinerInfo &DCI) const {
   SelectionDAG &DAG = DCI.DAG;
@@ -9368,6 +9393,7 @@ SDValue X86TargetLowering::PerformDAGCombine(SDNode *N,
   case X86ISD::BT:          return PerformBTCombine(N, DAG, DCI);
   case X86ISD::VZEXT_MOVL:  return PerformVZEXT_MOVLCombine(N, DAG);
   case ISD::MEMBARRIER:     return PerformMEMBARRIERCombine(N, DAG);
+  case ISD::ZERO_EXTEND:    return PerformZExtCombine(N, DAG);
   }
 
   return SDValue();
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
index b6a2c05..65fbbda 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
@@ -111,6 +111,9 @@ def ADJCALLSTACKUP64   : I<0, Pseudo, (outs), (ins i32imm:$amt1, i32imm:$amt2),
                           Requires<[In64BitMode]>;
 }
 
+// Interrupt Instructions
+def IRET64 : RI<0xcf, RawFrm, (outs), (ins), "iret{q}", []>;
+
 //===----------------------------------------------------------------------===//
 //  Call Instructions...
 //
@@ -131,20 +134,21 @@ let isCall = 1 in
     // the 32-bit pcrel field that we have.
     def CALL64pcrel32 : Ii32<0xE8, RawFrm,
                           (outs), (ins i64i32imm_pcrel:$dst, variable_ops),
-                          "call\t$dst", []>,
+                          "call{q}\t$dst", []>,
                         Requires<[In64BitMode, NotWin64]>;
     def CALL64r       : I<0xFF, MRM2r, (outs), (ins GR64:$dst, variable_ops),
-                          "call\t{*}$dst", [(X86call GR64:$dst)]>,
+                          "call{q}\t{*}$dst", [(X86call GR64:$dst)]>,
                         Requires<[NotWin64]>;
     def CALL64m       : I<0xFF, MRM2m, (outs), (ins i64mem:$dst, variable_ops),
-                          "call\t{*}$dst", [(X86call (loadi64 addr:$dst))]>,
+                          "call{q}\t{*}$dst", [(X86call (loadi64 addr:$dst))]>,
                         Requires<[NotWin64]>;
                         
     def FARCALL64   : RI<0xFF, MRM3m, (outs), (ins opaque80mem:$dst),
                          "lcall{q}\t{*}$dst", []>;
   }
 
-  // FIXME: We need to teach codegen about single list of call-clobbered registers.
+  // FIXME: We need to teach codegen about single list of call-clobbered 
+  // registers.
 let isCall = 1 in
   // All calls clobber the non-callee saved registers. RSP is marked as
   // a use to prevent stack-pointer assignments that appear immediately
@@ -162,9 +166,10 @@ let isCall = 1 in
     def WINCALL64r       : I<0xFF, MRM2r, (outs), (ins GR64:$dst, variable_ops),
                              "call\t{*}$dst",
                              [(X86call GR64:$dst)]>, Requires<[IsWin64]>;
-    def WINCALL64m       : I<0xFF, MRM2m, (outs), (ins i64mem:$dst, variable_ops),
-                             "call\t{*}$dst",
-                             [(X86call (loadi64 addr:$dst))]>, Requires<[IsWin64]>;
+    def WINCALL64m       : I<0xFF, MRM2m, (outs), 
+                             (ins i64mem:$dst, variable_ops), "call\t{*}$dst",
+                             [(X86call (loadi64 addr:$dst))]>, 
+                           Requires<[IsWin64]>;
   }
 
 
@@ -188,6 +193,8 @@ let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
 
 // Branches
 let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
+  def JMP64pcrel32 : I<0xE9, RawFrm, (outs), (ins brtarget:$dst), 
+                       "jmp{q}\t$dst", []>;
   def JMP64r     : I<0xFF, MRM4r, (outs), (ins GR64:$dst), "jmp{q}\t{*}$dst",
                      [(brind GR64:$dst)]>;
   def JMP64m     : I<0xFF, MRM4m, (outs), (ins i64mem:$dst), "jmp{q}\t{*}$dst",
@@ -210,6 +217,12 @@ def EH_RETURN64   : I<0xC3, RawFrm, (outs), (ins GR64:$addr),
 //===----------------------------------------------------------------------===//
 //  Miscellaneous Instructions...
 //
+
+def POPCNT64rr : RI<0xB8, MRMSrcReg, (outs GR64:$dst), (ins GR64:$src),
+                    "popcnt{q}\t{$src, $dst|$dst, $src}", []>, XS;
+def POPCNT64rm : RI<0xB8, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
+                    "popcnt{q}\t{$src, $dst|$dst, $src}", []>, XS;
+
 let Defs = [RBP,RSP], Uses = [RBP,RSP], mayLoad = 1, neverHasSideEffects = 1 in
 def LEAVE64  : I<0xC9, RawFrm,
                  (outs), (ins), "leave", []>;
@@ -238,9 +251,9 @@ def PUSH64i32  : Ii32<0x68, RawFrm, (outs), (ins i32imm:$imm),
 }
 
 let Defs = [RSP, EFLAGS], Uses = [RSP], mayLoad = 1 in
-def POPFQ    : I<0x9D, RawFrm, (outs), (ins), "popf", []>, REX_W;
+def POPFQ    : I<0x9D, RawFrm, (outs), (ins), "popf{q}", []>, REX_W;
 let Defs = [RSP], Uses = [RSP, EFLAGS], mayStore = 1 in
-def PUSHFQ   : I<0x9C, RawFrm, (outs), (ins), "pushf", []>;
+def PUSHFQ64   : I<0x9C, RawFrm, (outs), (ins), "pushf{q}", []>;
 
 def LEA64_32r : I<0x8D, MRMSrcMem,
                   (outs GR32:$dst), (ins lea64_32mem:$src),
@@ -309,6 +322,9 @@ def MOV64ri32 : RIi32<0xC7, MRM0r, (outs GR64:$dst), (ins i64i32imm:$src),
                       [(set GR64:$dst, i64immSExt32:$src)]>;
 }
 
+def MOV64rr_REV : RI<0x8B, MRMSrcReg, (outs GR64:$dst), (ins GR64:$src),
+                     "mov{q}\t{$src, $dst|$dst, $src}", []>;
+
 let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def MOV64rm : RI<0x8B, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
                  "mov{q}\t{$src, $dst|$dst, $src}",
@@ -321,24 +337,36 @@ def MOV64mi32 : RIi32<0xC7, MRM0m, (outs), (ins i64mem:$dst, i64i32imm:$src),
                       "mov{q}\t{$src, $dst|$dst, $src}",
                       [(store i64immSExt32:$src, addr:$dst)]>;
 
-def MOV64o8a : RIi8<0xA0, RawFrm, (outs), (ins i8imm:$src),
+def MOV64o8a : RIi8<0xA0, RawFrm, (outs), (ins offset8:$src),
                       "mov{q}\t{$src, %rax|%rax, $src}", []>;
-def MOV64o32a : RIi32<0xA1, RawFrm, (outs), (ins i32imm:$src),
+def MOV64o64a : RIi32<0xA1, RawFrm, (outs), (ins offset64:$src),
                        "mov{q}\t{$src, %rax|%rax, $src}", []>;
-def MOV64ao8 : RIi8<0xA2, RawFrm, (outs i8imm:$dst), (ins),
+def MOV64ao8 : RIi8<0xA2, RawFrm, (outs offset8:$dst), (ins),
                        "mov{q}\t{%rax, $dst|$dst, %rax}", []>;
-def MOV64ao32 : RIi32<0xA3, RawFrm, (outs i32imm:$dst), (ins),
+def MOV64ao64 : RIi32<0xA3, RawFrm, (outs offset64:$dst), (ins),
                        "mov{q}\t{%rax, $dst|$dst, %rax}", []>;
 
 // Moves to and from segment registers
 def MOV64rs : RI<0x8C, MRMDestReg, (outs GR64:$dst), (ins SEGMENT_REG:$src),
-                 "mov{w}\t{$src, $dst|$dst, $src}", []>;
+                 "mov{q}\t{$src, $dst|$dst, $src}", []>;
 def MOV64ms : RI<0x8C, MRMDestMem, (outs i64mem:$dst), (ins SEGMENT_REG:$src),
-                 "mov{w}\t{$src, $dst|$dst, $src}", []>;
+                 "mov{q}\t{$src, $dst|$dst, $src}", []>;
 def MOV64sr : RI<0x8E, MRMSrcReg, (outs SEGMENT_REG:$dst), (ins GR64:$src),
-                 "mov{w}\t{$src, $dst|$dst, $src}", []>;
+                 "mov{q}\t{$src, $dst|$dst, $src}", []>;
 def MOV64sm : RI<0x8E, MRMSrcMem, (outs SEGMENT_REG:$dst), (ins i64mem:$src),
-                 "mov{w}\t{$src, $dst|$dst, $src}", []>;
+                 "mov{q}\t{$src, $dst|$dst, $src}", []>;
+
+// Moves to and from debug registers
+def MOV64rd : I<0x21, MRMDestReg, (outs GR64:$dst), (ins DEBUG_REG:$src),
+                "mov{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOV64dr : I<0x23, MRMSrcReg, (outs DEBUG_REG:$dst), (ins GR64:$src),
+                "mov{q}\t{$src, $dst|$dst, $src}", []>, TB;
+
+// Moves to and from control registers
+def MOV64rc : I<0x20, MRMDestReg, (outs GR64:$dst), (ins CONTROL_REG_64:$src),
+                "mov{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOV64cr : I<0x22, MRMSrcReg, (outs CONTROL_REG_64:$dst), (ins GR64:$src),
+                "mov{q}\t{$src, $dst|$dst, $src}", []>, TB;
 
 // Sign/Zero extenders
 
@@ -365,6 +393,16 @@ def MOVSX64rm32: RI<0x63, MRMSrcMem, (outs GR64:$dst), (ins i32mem:$src),
                     "movs{lq|xd}\t{$src, $dst|$dst, $src}",
                     [(set GR64:$dst, (sextloadi64i32 addr:$src))]>;
 
+// movzbq and movzwq encodings for the disassembler
+def MOVZX64rr8_Q : RI<0xB6, MRMSrcReg, (outs GR64:$dst), (ins GR8:$src),
+                       "movz{bq|x}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOVZX64rm8_Q : RI<0xB6, MRMSrcMem, (outs GR64:$dst), (ins i8mem:$src),
+                       "movz{bq|x}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOVZX64rr16_Q : RI<0xB7, MRMSrcReg, (outs GR64:$dst), (ins GR16:$src),
+                       "movz{wq|x}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOVZX64rm16_Q : RI<0xB7, MRMSrcMem, (outs GR64:$dst), (ins i16mem:$src),
+                       "movz{wq|x}\t{$src, $dst|$dst, $src}", []>, TB;
+
 // Use movzbl instead of movzbq when the destination is a register; it's
 // equivalent due to implicit zero-extending, and it has a smaller encoding.
 def MOVZX64rr8 : I<0xB6, MRMSrcReg, (outs GR64:$dst), (ins GR8 :$src),
@@ -430,31 +468,36 @@ let isTwoAddress = 1 in {
 let isConvertibleToThreeAddress = 1 in {
 let isCommutable = 1 in
 // Register-Register Addition
-def ADD64rr    : RI<0x01, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def ADD64rr    : RI<0x01, MRMDestReg, (outs GR64:$dst), 
+                    (ins GR64:$src1, GR64:$src2),
                     "add{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (add GR64:$src1, GR64:$src2)),
                      (implicit EFLAGS)]>;
 
 // Register-Integer Addition
-def ADD64ri8  : RIi8<0x83, MRM0r, (outs GR64:$dst), (ins GR64:$src1, i64i8imm:$src2),
+def ADD64ri8  : RIi8<0x83, MRM0r, (outs GR64:$dst), 
+                     (ins GR64:$src1, i64i8imm:$src2),
                      "add{q}\t{$src2, $dst|$dst, $src2}",
                      [(set GR64:$dst, (add GR64:$src1, i64immSExt8:$src2)),
                       (implicit EFLAGS)]>;
-def ADD64ri32 : RIi32<0x81, MRM0r, (outs GR64:$dst), (ins GR64:$src1, i64i32imm:$src2),
+def ADD64ri32 : RIi32<0x81, MRM0r, (outs GR64:$dst), 
+                      (ins GR64:$src1, i64i32imm:$src2),
                       "add{q}\t{$src2, $dst|$dst, $src2}",
                       [(set GR64:$dst, (add GR64:$src1, i64immSExt32:$src2)),
                        (implicit EFLAGS)]>;
 } // isConvertibleToThreeAddress
 
 // Register-Memory Addition
-def ADD64rm     : RI<0x03, MRMSrcMem, (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
+def ADD64rm     : RI<0x03, MRMSrcMem, (outs GR64:$dst), 
+                     (ins GR64:$src1, i64mem:$src2),
                      "add{q}\t{$src2, $dst|$dst, $src2}",
                      [(set GR64:$dst, (add GR64:$src1, (load addr:$src2))),
                       (implicit EFLAGS)]>;
 
 // Register-Register Addition - Equivalent to the normal rr form (ADD64rr), but
 //   differently encoded.
-def ADD64mrmrr  : RI<0x03, MRMSrcReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def ADD64mrmrr  : RI<0x03, MRMSrcReg, (outs GR64:$dst), 
+                     (ins GR64:$src1, GR64:$src2),
                      "add{l}\t{$src2, $dst|$dst, $src2}", []>;
 
 } // isTwoAddress
@@ -480,18 +523,26 @@ def ADC64i32 : RI<0x15, RawFrm, (outs), (ins i32imm:$src),
 
 let isTwoAddress = 1 in {
 let isCommutable = 1 in
-def ADC64rr  : RI<0x11, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def ADC64rr  : RI<0x11, MRMDestReg, (outs GR64:$dst), 
+                  (ins GR64:$src1, GR64:$src2),
                   "adc{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (adde GR64:$src1, GR64:$src2))]>;
 
-def ADC64rm  : RI<0x13, MRMSrcMem , (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
+def ADC64rr_REV : RI<0x13, MRMSrcReg , (outs GR32:$dst), 
+                     (ins GR64:$src1, GR64:$src2),
+                    "adc{q}\t{$src2, $dst|$dst, $src2}", []>;
+
+def ADC64rm  : RI<0x13, MRMSrcMem , (outs GR64:$dst), 
+                  (ins GR64:$src1, i64mem:$src2),
                   "adc{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (adde GR64:$src1, (load addr:$src2)))]>;
 
-def ADC64ri8 : RIi8<0x83, MRM2r, (outs GR64:$dst), (ins GR64:$src1, i64i8imm:$src2),
+def ADC64ri8 : RIi8<0x83, MRM2r, (outs GR64:$dst), 
+                    (ins GR64:$src1, i64i8imm:$src2),
                     "adc{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (adde GR64:$src1, i64immSExt8:$src2))]>;
-def ADC64ri32 : RIi32<0x81, MRM2r, (outs GR64:$dst), (ins GR64:$src1, i64i32imm:$src2),
+def ADC64ri32 : RIi32<0x81, MRM2r, (outs GR64:$dst), 
+                      (ins GR64:$src1, i64i32imm:$src2),
                       "adc{q}\t{$src2, $dst|$dst, $src2}",
                       [(set GR64:$dst, (adde GR64:$src1, i64immSExt32:$src2))]>;
 } // isTwoAddress
@@ -501,21 +552,29 @@ def ADC64mr  : RI<0x11, MRMDestMem, (outs), (ins i64mem:$dst, GR64:$src2),
                   [(store (adde (load addr:$dst), GR64:$src2), addr:$dst)]>;
 def ADC64mi8 : RIi8<0x83, MRM2m, (outs), (ins i64mem:$dst, i64i8imm :$src2),
                     "adc{q}\t{$src2, $dst|$dst, $src2}",
-                 [(store (adde (load addr:$dst), i64immSExt8:$src2), addr:$dst)]>;
+                 [(store (adde (load addr:$dst), i64immSExt8:$src2), 
+                  addr:$dst)]>;
 def ADC64mi32 : RIi32<0x81, MRM2m, (outs), (ins i64mem:$dst, i64i32imm:$src2),
                       "adc{q}\t{$src2, $dst|$dst, $src2}",
-                 [(store (adde (load addr:$dst), i64immSExt8:$src2), addr:$dst)]>;
+                 [(store (adde (load addr:$dst), i64immSExt32:$src2), 
+                  addr:$dst)]>;
 } // Uses = [EFLAGS]
 
 let isTwoAddress = 1 in {
 // Register-Register Subtraction
-def SUB64rr  : RI<0x29, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def SUB64rr  : RI<0x29, MRMDestReg, (outs GR64:$dst), 
+                  (ins GR64:$src1, GR64:$src2),
                   "sub{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (sub GR64:$src1, GR64:$src2)),
                    (implicit EFLAGS)]>;
 
+def SUB64rr_REV : RI<0x2B, MRMSrcReg, (outs GR64:$dst), 
+                     (ins GR64:$src1, GR64:$src2),
+                     "sub{q}\t{$src2, $dst|$dst, $src2}", []>;
+
 // Register-Memory Subtraction
-def SUB64rm  : RI<0x2B, MRMSrcMem, (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
+def SUB64rm  : RI<0x2B, MRMSrcMem, (outs GR64:$dst), 
+                  (ins GR64:$src1, i64mem:$src2),
                   "sub{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (sub GR64:$src1, (load addr:$src2))),
                    (implicit EFLAGS)]>;
@@ -556,18 +615,26 @@ def SUB64mi32 : RIi32<0x81, MRM5m, (outs), (ins i64mem:$dst, i64i32imm:$src2),
 
 let Uses = [EFLAGS] in {
 let isTwoAddress = 1 in {
-def SBB64rr    : RI<0x19, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def SBB64rr    : RI<0x19, MRMDestReg, (outs GR64:$dst), 
+                    (ins GR64:$src1, GR64:$src2),
                     "sbb{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (sube GR64:$src1, GR64:$src2))]>;
 
-def SBB64rm  : RI<0x1B, MRMSrcMem, (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
+def SBB64rr_REV : RI<0x1B, MRMSrcReg, (outs GR64:$dst), 
+                     (ins GR64:$src1, GR64:$src2),
+                     "sbb{q}\t{$src2, $dst|$dst, $src2}", []>;
+                     
+def SBB64rm  : RI<0x1B, MRMSrcMem, (outs GR64:$dst), 
+                  (ins GR64:$src1, i64mem:$src2),
                   "sbb{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (sube GR64:$src1, (load addr:$src2)))]>;
 
-def SBB64ri8 : RIi8<0x83, MRM3r, (outs GR64:$dst), (ins GR64:$src1, i64i8imm:$src2),
+def SBB64ri8 : RIi8<0x83, MRM3r, (outs GR64:$dst), 
+                    (ins GR64:$src1, i64i8imm:$src2),
                     "sbb{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (sube GR64:$src1, i64immSExt8:$src2))]>;
-def SBB64ri32 : RIi32<0x81, MRM3r, (outs GR64:$dst), (ins GR64:$src1, i64i32imm:$src2),
+def SBB64ri32 : RIi32<0x81, MRM3r, (outs GR64:$dst), 
+                      (ins GR64:$src1, i64i32imm:$src2),
                       "sbb{q}\t{$src2, $dst|$dst, $src2}",
                       [(set GR64:$dst, (sube GR64:$src1, i64immSExt32:$src2))]>;
 } // isTwoAddress
@@ -652,15 +719,19 @@ def IMUL64rmi32 : RIi32<0x69, MRMSrcMem,                   // GR64 = [mem64]*I32
 
 // Unsigned division / remainder
 let Defs = [RAX,RDX,EFLAGS], Uses = [RAX,RDX] in {
-def DIV64r : RI<0xF7, MRM6r, (outs), (ins GR64:$src),        // RDX:RAX/r64 = RAX,RDX
+// RDX:RAX/r64 = RAX,RDX
+def DIV64r : RI<0xF7, MRM6r, (outs), (ins GR64:$src),
                 "div{q}\t$src", []>;
 // Signed division / remainder
-def IDIV64r: RI<0xF7, MRM7r, (outs), (ins GR64:$src),        // RDX:RAX/r64 = RAX,RDX
+// RDX:RAX/r64 = RAX,RDX
+def IDIV64r: RI<0xF7, MRM7r, (outs), (ins GR64:$src),
                 "idiv{q}\t$src", []>;
 let mayLoad = 1 in {
-def DIV64m : RI<0xF7, MRM6m, (outs), (ins i64mem:$src),      // RDX:RAX/[mem64] = RAX,RDX
+// RDX:RAX/[mem64] = RAX,RDX
+def DIV64m : RI<0xF7, MRM6m, (outs), (ins i64mem:$src),
                 "div{q}\t$src", []>;
-def IDIV64m: RI<0xF7, MRM7m, (outs), (ins i64mem:$src),      // RDX:RAX/[mem64] = RAX,RDX
+// RDX:RAX/[mem64] = RAX,RDX
+def IDIV64m: RI<0xF7, MRM7m, (outs), (ins i64mem:$src),
                 "idiv{q}\t$src", []>;
 }
 }
@@ -694,19 +765,23 @@ def DEC64m : RI<0xFF, MRM1m, (outs), (ins i64mem:$dst), "dec{q}\t$dst",
 // In 64-bit mode, single byte INC and DEC cannot be encoded.
 let isTwoAddress = 1, isConvertibleToThreeAddress = 1 in {
 // Can transform into LEA.
-def INC64_16r : I<0xFF, MRM0r, (outs GR16:$dst), (ins GR16:$src), "inc{w}\t$dst",
+def INC64_16r : I<0xFF, MRM0r, (outs GR16:$dst), (ins GR16:$src), 
+                  "inc{w}\t$dst",
                   [(set GR16:$dst, (add GR16:$src, 1)),
                    (implicit EFLAGS)]>,
                 OpSize, Requires<[In64BitMode]>;
-def INC64_32r : I<0xFF, MRM0r, (outs GR32:$dst), (ins GR32:$src), "inc{l}\t$dst",
+def INC64_32r : I<0xFF, MRM0r, (outs GR32:$dst), (ins GR32:$src), 
+                  "inc{l}\t$dst",
                   [(set GR32:$dst, (add GR32:$src, 1)),
                    (implicit EFLAGS)]>,
                 Requires<[In64BitMode]>;
-def DEC64_16r : I<0xFF, MRM1r, (outs GR16:$dst), (ins GR16:$src), "dec{w}\t$dst",
+def DEC64_16r : I<0xFF, MRM1r, (outs GR16:$dst), (ins GR16:$src), 
+                  "dec{w}\t$dst",
                   [(set GR16:$dst, (add GR16:$src, -1)),
                    (implicit EFLAGS)]>,
                 OpSize, Requires<[In64BitMode]>;
-def DEC64_32r : I<0xFF, MRM1r, (outs GR32:$dst), (ins GR32:$src), "dec{l}\t$dst",
+def DEC64_32r : I<0xFF, MRM1r, (outs GR32:$dst), (ins GR32:$src), 
+                  "dec{l}\t$dst",
                   [(set GR32:$dst, (add GR32:$src, -1)),
                    (implicit EFLAGS)]>,
                 Requires<[In64BitMode]>;
@@ -743,13 +818,14 @@ def SHL64rCL : RI<0xD3, MRM4r, (outs GR64:$dst), (ins GR64:$src),
                   "shl{q}\t{%cl, $dst|$dst, %CL}",
                   [(set GR64:$dst, (shl GR64:$src, CL))]>;
 let isConvertibleToThreeAddress = 1 in   // Can transform into LEA.
-def SHL64ri  : RIi8<0xC1, MRM4r, (outs GR64:$dst), (ins GR64:$src1, i8imm:$src2),
+def SHL64ri  : RIi8<0xC1, MRM4r, (outs GR64:$dst), 
+                    (ins GR64:$src1, i8imm:$src2),
                     "shl{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (shl GR64:$src1, (i8 imm:$src2)))]>;
 // NOTE: We don't include patterns for shifts of a register by one, because
 // 'add reg,reg' is cheaper.
 def SHL64r1  : RI<0xD1, MRM4r, (outs GR64:$dst), (ins GR64:$src1),
-                 "shr{q}\t$dst", []>;
+                 "shl{q}\t$dst", []>;
 } // isTwoAddress
 
 let Uses = [CL] in
@@ -792,9 +868,10 @@ let Uses = [CL] in
 def SAR64rCL : RI<0xD3, MRM7r, (outs GR64:$dst), (ins GR64:$src),
                  "sar{q}\t{%cl, $dst|$dst, %CL}",
                  [(set GR64:$dst, (sra GR64:$src, CL))]>;
-def SAR64ri  : RIi8<0xC1, MRM7r, (outs GR64:$dst), (ins GR64:$src1, i8imm:$src2),
-                   "sar{q}\t{$src2, $dst|$dst, $src2}",
-                   [(set GR64:$dst, (sra GR64:$src1, (i8 imm:$src2)))]>;
+def SAR64ri  : RIi8<0xC1, MRM7r, (outs GR64:$dst),
+                    (ins GR64:$src1, i8imm:$src2),
+                    "sar{q}\t{$src2, $dst|$dst, $src2}",
+                    [(set GR64:$dst, (sra GR64:$src1, (i8 imm:$src2)))]>;
 def SAR64r1  : RI<0xD1, MRM7r, (outs GR64:$dst), (ins GR64:$src1),
                  "sar{q}\t$dst",
                  [(set GR64:$dst, (sra GR64:$src1, (i8 1)))]>;
@@ -826,7 +903,8 @@ def RCL64mCL : RI<0xD3, MRM2m, (outs i64mem:$dst), (ins i64mem:$src),
 }
 def RCL64ri : RIi8<0xC1, MRM2r, (outs GR64:$dst), (ins GR64:$src, i8imm:$cnt),
                    "rcl{q}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCL64mi : RIi8<0xC1, MRM2m, (outs i64mem:$dst), (ins i64mem:$src, i8imm:$cnt),
+def RCL64mi : RIi8<0xC1, MRM2m, (outs i64mem:$dst), 
+                   (ins i64mem:$src, i8imm:$cnt),
                    "rcl{q}\t{$cnt, $dst|$dst, $cnt}", []>;
 
 def RCR64r1 : RI<0xD1, MRM3r, (outs GR64:$dst), (ins GR64:$src),
@@ -841,7 +919,8 @@ def RCR64mCL : RI<0xD3, MRM3m, (outs i64mem:$dst), (ins i64mem:$src),
 }
 def RCR64ri : RIi8<0xC1, MRM3r, (outs GR64:$dst), (ins GR64:$src, i8imm:$cnt),
                    "rcr{q}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCR64mi : RIi8<0xC1, MRM3m, (outs i64mem:$dst), (ins i64mem:$src, i8imm:$cnt),
+def RCR64mi : RIi8<0xC1, MRM3m, (outs i64mem:$dst), 
+                   (ins i64mem:$src, i8imm:$cnt),
                    "rcr{q}\t{$cnt, $dst|$dst, $cnt}", []>;
 }
 
@@ -850,7 +929,8 @@ let Uses = [CL] in
 def ROL64rCL : RI<0xD3, MRM0r, (outs GR64:$dst), (ins GR64:$src),
                   "rol{q}\t{%cl, $dst|$dst, %CL}",
                   [(set GR64:$dst, (rotl GR64:$src, CL))]>;
-def ROL64ri  : RIi8<0xC1, MRM0r, (outs GR64:$dst), (ins GR64:$src1, i8imm:$src2),
+def ROL64ri  : RIi8<0xC1, MRM0r, (outs GR64:$dst), 
+                    (ins GR64:$src1, i8imm:$src2),
                     "rol{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (rotl GR64:$src1, (i8 imm:$src2)))]>;
 def ROL64r1  : RI<0xD1, MRM0r, (outs GR64:$dst), (ins GR64:$src1),
@@ -859,9 +939,9 @@ def ROL64r1  : RI<0xD1, MRM0r, (outs GR64:$dst), (ins GR64:$src1),
 } // isTwoAddress
 
 let Uses = [CL] in
-def ROL64mCL :  I<0xD3, MRM0m, (outs), (ins i64mem:$dst),
-                  "rol{q}\t{%cl, $dst|$dst, %CL}",
-                  [(store (rotl (loadi64 addr:$dst), CL), addr:$dst)]>;
+def ROL64mCL :  RI<0xD3, MRM0m, (outs), (ins i64mem:$dst),
+                   "rol{q}\t{%cl, $dst|$dst, %CL}",
+                   [(store (rotl (loadi64 addr:$dst), CL), addr:$dst)]>;
 def ROL64mi  : RIi8<0xC1, MRM0m, (outs), (ins i64mem:$dst, i8imm:$src),
                     "rol{q}\t{$src, $dst|$dst, $src}",
                 [(store (rotl (loadi64 addr:$dst), (i8 imm:$src)), addr:$dst)]>;
@@ -874,7 +954,8 @@ let Uses = [CL] in
 def ROR64rCL : RI<0xD3, MRM1r, (outs GR64:$dst), (ins GR64:$src),
                   "ror{q}\t{%cl, $dst|$dst, %CL}",
                   [(set GR64:$dst, (rotr GR64:$src, CL))]>;
-def ROR64ri  : RIi8<0xC1, MRM1r, (outs GR64:$dst), (ins GR64:$src1, i8imm:$src2),
+def ROR64ri  : RIi8<0xC1, MRM1r, (outs GR64:$dst), 
+                    (ins GR64:$src1, i8imm:$src2),
                     "ror{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (rotr GR64:$src1, (i8 imm:$src2)))]>;
 def ROR64r1  : RI<0xD1, MRM1r, (outs GR64:$dst), (ins GR64:$src1),
@@ -896,23 +977,29 @@ def ROR64m1  : RI<0xD1, MRM1m, (outs), (ins i64mem:$dst),
 // Double shift instructions (generalizations of rotate)
 let isTwoAddress = 1 in {
 let Uses = [CL] in {
-def SHLD64rrCL : RI<0xA5, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def SHLD64rrCL : RI<0xA5, MRMDestReg, (outs GR64:$dst), 
+                    (ins GR64:$src1, GR64:$src2),
                     "shld{q}\t{%cl, $src2, $dst|$dst, $src2, %CL}",
-                    [(set GR64:$dst, (X86shld GR64:$src1, GR64:$src2, CL))]>, TB;
-def SHRD64rrCL : RI<0xAD, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+                    [(set GR64:$dst, (X86shld GR64:$src1, GR64:$src2, CL))]>, 
+                    TB;
+def SHRD64rrCL : RI<0xAD, MRMDestReg, (outs GR64:$dst), 
+                    (ins GR64:$src1, GR64:$src2),
                     "shrd{q}\t{%cl, $src2, $dst|$dst, $src2, %CL}",
-                    [(set GR64:$dst, (X86shrd GR64:$src1, GR64:$src2, CL))]>, TB;
+                    [(set GR64:$dst, (X86shrd GR64:$src1, GR64:$src2, CL))]>, 
+                    TB;
 }
 
 let isCommutable = 1 in {  // FIXME: Update X86InstrInfo::commuteInstruction
 def SHLD64rri8 : RIi8<0xA4, MRMDestReg,
-                      (outs GR64:$dst), (ins GR64:$src1, GR64:$src2, i8imm:$src3),
+                      (outs GR64:$dst), 
+                      (ins GR64:$src1, GR64:$src2, i8imm:$src3),
                       "shld{q}\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                       [(set GR64:$dst, (X86shld GR64:$src1, GR64:$src2,
                                        (i8 imm:$src3)))]>,
                  TB;
 def SHRD64rri8 : RIi8<0xAC, MRMDestReg,
-                      (outs GR64:$dst), (ins GR64:$src1, GR64:$src2, i8imm:$src3),
+                      (outs GR64:$dst), 
+                      (ins GR64:$src1, GR64:$src2, i8imm:$src3),
                       "shrd{q}\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                       [(set GR64:$dst, (X86shrd GR64:$src1, GR64:$src2,
                                        (i8 imm:$src3)))]>,
@@ -965,6 +1052,9 @@ def AND64rr  : RI<0x21, MRMDestReg,
                   "and{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (and GR64:$src1, GR64:$src2)),
                    (implicit EFLAGS)]>;
+def AND64rr_REV : RI<0x23, MRMSrcReg, (outs GR64:$dst), 
+                     (ins GR64:$src1, GR64:$src2),
+                     "and{q}\t{$src2, $dst|$dst, $src2}", []>;
 def AND64rm  : RI<0x23, MRMSrcMem,
                   (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
                   "and{q}\t{$src2, $dst|$dst, $src2}",
@@ -1000,19 +1090,26 @@ def AND64mi32  : RIi32<0x81, MRM4m,
 
 let isTwoAddress = 1 in {
 let isCommutable = 1 in
-def OR64rr   : RI<0x09, MRMDestReg, (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
+def OR64rr   : RI<0x09, MRMDestReg, (outs GR64:$dst), 
+                  (ins GR64:$src1, GR64:$src2),
                   "or{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (or GR64:$src1, GR64:$src2)),
                    (implicit EFLAGS)]>;
-def OR64rm   : RI<0x0B, MRMSrcMem , (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
+def OR64rr_REV : RI<0x0B, MRMSrcReg, (outs GR64:$dst), 
+                    (ins GR64:$src1, GR64:$src2),
+                    "or{q}\t{$src2, $dst|$dst, $src2}", []>;
+def OR64rm   : RI<0x0B, MRMSrcMem , (outs GR64:$dst),
+                  (ins GR64:$src1, i64mem:$src2),
                   "or{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (or GR64:$src1, (load addr:$src2))),
                    (implicit EFLAGS)]>;
-def OR64ri8  : RIi8<0x83, MRM1r, (outs GR64:$dst), (ins GR64:$src1, i64i8imm:$src2),
+def OR64ri8  : RIi8<0x83, MRM1r, (outs GR64:$dst),
+                    (ins GR64:$src1, i64i8imm:$src2),
                     "or{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (or GR64:$src1, i64immSExt8:$src2)),
                      (implicit EFLAGS)]>;
-def OR64ri32 : RIi32<0x81, MRM1r, (outs GR64:$dst), (ins GR64:$src1, i64i32imm:$src2),
+def OR64ri32 : RIi32<0x81, MRM1r, (outs GR64:$dst),
+                     (ins GR64:$src1, i64i32imm:$src2),
                      "or{q}\t{$src2, $dst|$dst, $src2}",
                      [(set GR64:$dst, (or GR64:$src1, i64immSExt32:$src2)),
                       (implicit EFLAGS)]>;
@@ -1036,15 +1133,21 @@ def OR64i32 : RIi32<0x0D, RawFrm, (outs), (ins i32imm:$src),
 
 let isTwoAddress = 1 in {
 let isCommutable = 1 in
-def XOR64rr  : RI<0x31, MRMDestReg,  (outs GR64:$dst), (ins GR64:$src1, GR64:$src2), 
+def XOR64rr  : RI<0x31, MRMDestReg,  (outs GR64:$dst), 
+                  (ins GR64:$src1, GR64:$src2), 
                   "xor{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (xor GR64:$src1, GR64:$src2)),
                    (implicit EFLAGS)]>;
-def XOR64rm  : RI<0x33, MRMSrcMem, (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2), 
+def XOR64rr_REV : RI<0x33, MRMSrcReg, (outs GR64:$dst), 
+                     (ins GR64:$src1, GR64:$src2),
+                     "xor{q}\t{$src2, $dst|$dst, $src2}", []>;
+def XOR64rm  : RI<0x33, MRMSrcMem, (outs GR64:$dst), 
+                  (ins GR64:$src1, i64mem:$src2), 
                   "xor{q}\t{$src2, $dst|$dst, $src2}",
                   [(set GR64:$dst, (xor GR64:$src1, (load addr:$src2))),
                    (implicit EFLAGS)]>;
-def XOR64ri8 : RIi8<0x83, MRM6r,  (outs GR64:$dst), (ins GR64:$src1, i64i8imm:$src2),
+def XOR64ri8 : RIi8<0x83, MRM6r,  (outs GR64:$dst), 
+                    (ins GR64:$src1, i64i8imm:$src2),
                     "xor{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (xor GR64:$src1, i64immSExt8:$src2)),
                      (implicit EFLAGS)]>;
@@ -1148,10 +1251,12 @@ def BT64rr : RI<0xA3, MRMDestReg, (outs), (ins GR64:$src1, GR64:$src2),
 // Unlike with the register+register form, the memory+register form of the
 // bt instruction does not ignore the high bits of the index. From ISel's
 // perspective, this is pretty bizarre. Disable these instructions for now.
-//def BT64mr : RI<0xA3, MRMDestMem, (outs), (ins i64mem:$src1, GR64:$src2),
-//               "bt{q}\t{$src2, $src1|$src1, $src2}",
+def BT64mr : RI<0xA3, MRMDestMem, (outs), (ins i64mem:$src1, GR64:$src2),
+               "bt{q}\t{$src2, $src1|$src1, $src2}",
 //               [(X86bt (loadi64 addr:$src1), GR64:$src2),
-//                (implicit EFLAGS)]>, TB;
+//                (implicit EFLAGS)]
+                []
+                >, TB;
 
 def BT64ri8 : Ii8<0xBA, MRM4r, (outs), (ins GR64:$src1, i64i8imm:$src2),
                 "bt{q}\t{$src2, $src1|$src1, $src2}",
@@ -1164,6 +1269,33 @@ def BT64mi8 : Ii8<0xBA, MRM4m, (outs), (ins i64mem:$src1, i64i8imm:$src2),
                 "bt{q}\t{$src2, $src1|$src1, $src2}",
                 [(X86bt (loadi64 addr:$src1), i64immSExt8:$src2),
                  (implicit EFLAGS)]>, TB;
+
+def BTC64rr : RI<0xBB, MRMDestReg, (outs), (ins GR64:$src1, GR64:$src2),
+                 "btc{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTC64mr : RI<0xBB, MRMDestMem, (outs), (ins i64mem:$src1, GR64:$src2),
+                 "btc{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTC64ri8 : RIi8<0xBA, MRM7r, (outs), (ins GR64:$src1, i64i8imm:$src2),
+                    "btc{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTC64mi8 : RIi8<0xBA, MRM7m, (outs), (ins i64mem:$src1, i64i8imm:$src2),
+                    "btc{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+
+def BTR64rr : RI<0xB3, MRMDestReg, (outs), (ins GR64:$src1, GR64:$src2),
+                 "btr{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTR64mr : RI<0xB3, MRMDestMem, (outs), (ins i64mem:$src1, GR64:$src2),
+                 "btr{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTR64ri8 : RIi8<0xBA, MRM6r, (outs), (ins GR64:$src1, i64i8imm:$src2),
+                    "btr{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTR64mi8 : RIi8<0xBA, MRM6m, (outs), (ins i64mem:$src1, i64i8imm:$src2),
+                    "btr{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+
+def BTS64rr : RI<0xAB, MRMDestReg, (outs), (ins GR64:$src1, GR64:$src2),
+                 "bts{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTS64mr : RI<0xAB, MRMDestMem, (outs), (ins i64mem:$src1, GR64:$src2),
+                 "bts{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTS64ri8 : RIi8<0xBA, MRM5r, (outs), (ins GR64:$src1, i64i8imm:$src2),
+                    "bts{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTS64mi8 : RIi8<0xBA, MRM5m, (outs), (ins i64mem:$src1, i64i8imm:$src2),
+                    "bts{q}\t{$src2, $src1|$src1, $src2}", []>, TB;
 } // Defs = [EFLAGS]
 
 // Conditional moves
@@ -1171,164 +1303,164 @@ let Uses = [EFLAGS], isTwoAddress = 1 in {
 let isCommutable = 1 in {
 def CMOVB64rr : RI<0x42, MRMSrcReg,       // if <u, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovb\t{$src2, $dst|$dst, $src2}",
+                   "cmovb{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                      X86_COND_B, EFLAGS))]>, TB;
 def CMOVAE64rr: RI<0x43, MRMSrcReg,       // if >=u, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovae\t{$src2, $dst|$dst, $src2}",
+                   "cmovae{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                      X86_COND_AE, EFLAGS))]>, TB;
 def CMOVE64rr : RI<0x44, MRMSrcReg,       // if ==, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmove\t{$src2, $dst|$dst, $src2}",
+                   "cmove{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                      X86_COND_E, EFLAGS))]>, TB;
 def CMOVNE64rr: RI<0x45, MRMSrcReg,       // if !=, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovne\t{$src2, $dst|$dst, $src2}",
+                   "cmovne{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_NE, EFLAGS))]>, TB;
 def CMOVBE64rr: RI<0x46, MRMSrcReg,       // if <=u, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovbe\t{$src2, $dst|$dst, $src2}",
+                   "cmovbe{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_BE, EFLAGS))]>, TB;
 def CMOVA64rr : RI<0x47, MRMSrcReg,       // if >u, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmova\t{$src2, $dst|$dst, $src2}",
+                   "cmova{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_A, EFLAGS))]>, TB;
 def CMOVL64rr : RI<0x4C, MRMSrcReg,       // if <s, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovl\t{$src2, $dst|$dst, $src2}",
+                   "cmovl{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_L, EFLAGS))]>, TB;
 def CMOVGE64rr: RI<0x4D, MRMSrcReg,       // if >=s, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovge\t{$src2, $dst|$dst, $src2}",
+                   "cmovge{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_GE, EFLAGS))]>, TB;
 def CMOVLE64rr: RI<0x4E, MRMSrcReg,       // if <=s, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovle\t{$src2, $dst|$dst, $src2}",
+                   "cmovle{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_LE, EFLAGS))]>, TB;
 def CMOVG64rr : RI<0x4F, MRMSrcReg,       // if >s, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovg\t{$src2, $dst|$dst, $src2}",
+                   "cmovg{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_G, EFLAGS))]>, TB;
 def CMOVS64rr : RI<0x48, MRMSrcReg,       // if signed, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovs\t{$src2, $dst|$dst, $src2}",
+                   "cmovs{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_S, EFLAGS))]>, TB;
 def CMOVNS64rr: RI<0x49, MRMSrcReg,       // if !signed, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovns\t{$src2, $dst|$dst, $src2}",
+                   "cmovns{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_NS, EFLAGS))]>, TB;
 def CMOVP64rr : RI<0x4A, MRMSrcReg,       // if parity, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovp\t{$src2, $dst|$dst, $src2}",
+                   "cmovp{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_P, EFLAGS))]>, TB;
 def CMOVNP64rr : RI<0x4B, MRMSrcReg,       // if !parity, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovnp\t{$src2, $dst|$dst, $src2}",
+                   "cmovnp{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                      X86_COND_NP, EFLAGS))]>, TB;
 def CMOVO64rr : RI<0x40, MRMSrcReg,       // if overflow, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovo\t{$src2, $dst|$dst, $src2}",
+                   "cmovo{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                     X86_COND_O, EFLAGS))]>, TB;
 def CMOVNO64rr : RI<0x41, MRMSrcReg,       // if !overflow, GR64 = GR64
                    (outs GR64:$dst), (ins GR64:$src1, GR64:$src2),
-                   "cmovno\t{$src2, $dst|$dst, $src2}",
+                   "cmovno{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (X86cmov GR64:$src1, GR64:$src2,
                                      X86_COND_NO, EFLAGS))]>, TB;
 } // isCommutable = 1
 
 def CMOVB64rm : RI<0x42, MRMSrcMem,       // if <u, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovb\t{$src2, $dst|$dst, $src2}",
+                   "cmovb{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                      X86_COND_B, EFLAGS))]>, TB;
 def CMOVAE64rm: RI<0x43, MRMSrcMem,       // if >=u, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovae\t{$src2, $dst|$dst, $src2}",
+                   "cmovae{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                      X86_COND_AE, EFLAGS))]>, TB;
 def CMOVE64rm : RI<0x44, MRMSrcMem,       // if ==, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmove\t{$src2, $dst|$dst, $src2}",
+                   "cmove{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                      X86_COND_E, EFLAGS))]>, TB;
 def CMOVNE64rm: RI<0x45, MRMSrcMem,       // if !=, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovne\t{$src2, $dst|$dst, $src2}",
+                   "cmovne{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_NE, EFLAGS))]>, TB;
 def CMOVBE64rm: RI<0x46, MRMSrcMem,       // if <=u, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovbe\t{$src2, $dst|$dst, $src2}",
+                   "cmovbe{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_BE, EFLAGS))]>, TB;
 def CMOVA64rm : RI<0x47, MRMSrcMem,       // if >u, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmova\t{$src2, $dst|$dst, $src2}",
+                   "cmova{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_A, EFLAGS))]>, TB;
 def CMOVL64rm : RI<0x4C, MRMSrcMem,       // if <s, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovl\t{$src2, $dst|$dst, $src2}",
+                   "cmovl{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_L, EFLAGS))]>, TB;
 def CMOVGE64rm: RI<0x4D, MRMSrcMem,       // if >=s, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovge\t{$src2, $dst|$dst, $src2}",
+                   "cmovge{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_GE, EFLAGS))]>, TB;
 def CMOVLE64rm: RI<0x4E, MRMSrcMem,       // if <=s, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovle\t{$src2, $dst|$dst, $src2}",
+                   "cmovle{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_LE, EFLAGS))]>, TB;
 def CMOVG64rm : RI<0x4F, MRMSrcMem,       // if >s, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovg\t{$src2, $dst|$dst, $src2}",
+                   "cmovg{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_G, EFLAGS))]>, TB;
 def CMOVS64rm : RI<0x48, MRMSrcMem,       // if signed, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovs\t{$src2, $dst|$dst, $src2}",
+                   "cmovs{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_S, EFLAGS))]>, TB;
 def CMOVNS64rm: RI<0x49, MRMSrcMem,       // if !signed, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovns\t{$src2, $dst|$dst, $src2}",
+                   "cmovns{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_NS, EFLAGS))]>, TB;
 def CMOVP64rm : RI<0x4A, MRMSrcMem,       // if parity, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovp\t{$src2, $dst|$dst, $src2}",
+                   "cmovp{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_P, EFLAGS))]>, TB;
 def CMOVNP64rm : RI<0x4B, MRMSrcMem,       // if !parity, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovnp\t{$src2, $dst|$dst, $src2}",
+                   "cmovnp{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                      X86_COND_NP, EFLAGS))]>, TB;
 def CMOVO64rm : RI<0x40, MRMSrcMem,       // if overflow, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovo\t{$src2, $dst|$dst, $src2}",
+                   "cmovo{q}\t{$src2, $dst|$dst, $src2}",
                    [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                     X86_COND_O, EFLAGS))]>, TB;
 def CMOVNO64rm : RI<0x41, MRMSrcMem,       // if !overflow, GR64 = [mem64]
                    (outs GR64:$dst), (ins GR64:$src1, i64mem:$src2),
-                   "cmovno\t{$src2, $dst|$dst, $src2}",
+                   "cmovno{q}\t{$src2, $dst|$dst, $src2}",
                     [(set GR64:$dst, (X86cmov GR64:$src1, (loadi64 addr:$src2),
                                      X86_COND_NO, EFLAGS))]>, TB;
 } // isTwoAddress
@@ -1337,9 +1469,9 @@ def CMOVNO64rm : RI<0x41, MRMSrcMem,       // if !overflow, GR64 = [mem64]
 let Defs = [EFLAGS], Uses = [EFLAGS], isCodeGenOnly = 1 in
 def SETB_C64r : RI<0x19, MRMInitReg, (outs GR64:$dst), (ins),
                   "sbb{q}\t$dst, $dst",
-                 [(set GR64:$dst, (zext (X86setcc_c X86_COND_B, EFLAGS)))]>;
+                 [(set GR64:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
 
-def : Pat<(i64 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
+def : Pat<(i64 (anyext (i8 (X86setcc_c X86_COND_B, EFLAGS)))),
           (SETB_C64r)>;
 
 //===----------------------------------------------------------------------===//
@@ -1347,11 +1479,16 @@ def : Pat<(i64 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
 //
 
 // f64 -> signed i64
+def CVTSD2SI64rr: RSDI<0x2D, MRMSrcReg, (outs GR64:$dst), (ins FR64:$src),
+                       "cvtsd2si{q}\t{$src, $dst|$dst, $src}", []>;
+def CVTSD2SI64rm: RSDI<0x2D, MRMSrcMem, (outs GR64:$dst), (ins f64mem:$src),
+                       "cvtsd2si{q}\t{$src, $dst|$dst, $src}", []>;
 def Int_CVTSD2SI64rr: RSDI<0x2D, MRMSrcReg, (outs GR64:$dst), (ins VR128:$src),
                            "cvtsd2si{q}\t{$src, $dst|$dst, $src}",
                            [(set GR64:$dst,
                              (int_x86_sse2_cvtsd2si64 VR128:$src))]>;
-def Int_CVTSD2SI64rm: RSDI<0x2D, MRMSrcMem, (outs GR64:$dst), (ins f128mem:$src),
+def Int_CVTSD2SI64rm: RSDI<0x2D, MRMSrcMem, (outs GR64:$dst), 
+                           (ins f128mem:$src),
                            "cvtsd2si{q}\t{$src, $dst|$dst, $src}",
                            [(set GR64:$dst, (int_x86_sse2_cvtsd2si64
                                              (load addr:$src)))]>;
@@ -1365,7 +1502,8 @@ def Int_CVTTSD2SI64rr: RSDI<0x2C, MRMSrcReg, (outs GR64:$dst), (ins VR128:$src),
                             "cvttsd2si{q}\t{$src, $dst|$dst, $src}",
                             [(set GR64:$dst,
                               (int_x86_sse2_cvttsd2si64 VR128:$src))]>;
-def Int_CVTTSD2SI64rm: RSDI<0x2C, MRMSrcMem, (outs GR64:$dst), (ins f128mem:$src),
+def Int_CVTTSD2SI64rm: RSDI<0x2C, MRMSrcMem, (outs GR64:$dst), 
+                            (ins f128mem:$src),
                             "cvttsd2si{q}\t{$src, $dst|$dst, $src}",
                             [(set GR64:$dst,
                               (int_x86_sse2_cvttsd2si64
@@ -1410,7 +1548,8 @@ let isTwoAddress = 1 in {
                                 (int_x86_sse_cvtsi642ss VR128:$src1,
                                  GR64:$src2))]>;
   def Int_CVTSI2SS64rm : RSSI<0x2A, MRMSrcMem,
-                              (outs VR128:$dst), (ins VR128:$src1, i64mem:$src2),
+                              (outs VR128:$dst), 
+                              (ins VR128:$src1, i64mem:$src2),
                               "cvtsi2ss{q}\t{$src2, $dst|$dst, $src2}",
                               [(set VR128:$dst,
                                 (int_x86_sse_cvtsi642ss VR128:$src1,
@@ -1418,6 +1557,10 @@ let isTwoAddress = 1 in {
 }
 
 // f32 -> signed i64
+def CVTSS2SI64rr: RSSI<0x2D, MRMSrcReg, (outs GR64:$dst), (ins FR32:$src),
+                       "cvtss2si{q}\t{$src, $dst|$dst, $src}", []>;
+def CVTSS2SI64rm: RSSI<0x2D, MRMSrcMem, (outs GR64:$dst), (ins f32mem:$src),
+                       "cvtss2si{q}\t{$src, $dst|$dst, $src}", []>;
 def Int_CVTSS2SI64rr: RSSI<0x2D, MRMSrcReg, (outs GR64:$dst), (ins VR128:$src),
                            "cvtss2si{q}\t{$src, $dst|$dst, $src}",
                            [(set GR64:$dst,
@@ -1436,10 +1579,20 @@ def Int_CVTTSS2SI64rr: RSSI<0x2C, MRMSrcReg, (outs GR64:$dst), (ins VR128:$src),
                             "cvttss2si{q}\t{$src, $dst|$dst, $src}",
                             [(set GR64:$dst,
                               (int_x86_sse_cvttss2si64 VR128:$src))]>;
-def Int_CVTTSS2SI64rm: RSSI<0x2C, MRMSrcMem, (outs GR64:$dst), (ins f32mem:$src),
+def Int_CVTTSS2SI64rm: RSSI<0x2C, MRMSrcMem, (outs GR64:$dst),
+                            (ins f32mem:$src),
                             "cvttss2si{q}\t{$src, $dst|$dst, $src}",
                             [(set GR64:$dst,
                               (int_x86_sse_cvttss2si64 (load addr:$src)))]>;
+                              
+// Descriptor-table support instructions
+
+// LLDT is not interpreted specially in 64-bit mode because there is no sign
+//   extension.
+def SLDT64r : RI<0x00, MRM0r, (outs GR64:$dst), (ins),
+                 "sldt{q}\t$dst", []>, TB;
+def SLDT64m : RI<0x00, MRM0m, (outs i16mem:$dst), (ins),
+                 "sldt{q}\t$dst", []>, TB;
 
 //===----------------------------------------------------------------------===//
 // Alias Instructions
@@ -1505,17 +1658,37 @@ def LCMPXCHG64 : RI<0xB1, MRMDestMem, (outs), (ins i64mem:$ptr, GR64:$swap),
 
 let Constraints = "$val = $dst" in {
 let Defs = [EFLAGS] in
-def LXADD64 : RI<0xC1, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$ptr,GR64:$val),
+def LXADD64 : RI<0xC1, MRMSrcMem, (outs GR64:$dst), (ins GR64:$val,i64mem:$ptr),
                "lock\n\t"
                "xadd\t$val, $ptr",
                [(set GR64:$dst, (atomic_load_add_64 addr:$ptr, GR64:$val))]>,
                 TB, LOCK;
 
-def XCHG64rm : RI<0x87, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$ptr,GR64:$val),
-                  "xchg\t$val, $ptr", 
+def XCHG64rm : RI<0x87, MRMSrcMem, (outs GR64:$dst), 
+                  (ins GR64:$val,i64mem:$ptr),
+                  "xchg{q}\t{$val, $ptr|$ptr, $val}", 
                   [(set GR64:$dst, (atomic_swap_64 addr:$ptr, GR64:$val))]>;
+
+def XCHG64rr : RI<0x87, MRMSrcReg, (outs GR64:$dst), (ins GR64:$val,GR64:$src),
+                  "xchg{q}\t{$val, $src|$src, $val}", []>;
 }
 
+def XADD64rr  : RI<0xC1, MRMDestReg, (outs GR64:$dst), (ins GR64:$src),
+                   "xadd{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def XADD64rm  : RI<0xC1, MRMDestMem, (outs), (ins i64mem:$dst, GR64:$src),
+                   "xadd{q}\t{$src, $dst|$dst, $src}", []>, TB;
+                   
+def CMPXCHG64rr  : RI<0xB1, MRMDestReg, (outs GR64:$dst), (ins GR64:$src),
+                      "cmpxchg{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def CMPXCHG64rm  : RI<0xB1, MRMDestMem, (outs), (ins i64mem:$dst, GR64:$src),
+                      "cmpxchg{q}\t{$src, $dst|$dst, $src}", []>, TB;
+                      
+def CMPXCHG16B : RI<0xC7, MRM1m, (outs), (ins i128mem:$dst),
+                    "cmpxchg16b\t$dst", []>, TB;
+
+def XCHG64ar : RI<0x90, AddRegFrm, (outs), (ins GR64:$src),
+                  "xchg{q}\t{$src, %rax|%rax, $src}", []>;
+
 // Optimized codegen when the non-memory output is not used.
 let Defs = [EFLAGS] in {
 // FIXME: Use normal add / sub instructions and add lock prefix dynamically.
@@ -1585,6 +1758,36 @@ def LAR64rm : RI<0x02, MRMSrcMem, (outs GR64:$dst), (ins i16mem:$src),
 def LAR64rr : RI<0x02, MRMSrcReg, (outs GR64:$dst), (ins GR32:$src),
                  "lar{q}\t{$src, $dst|$dst, $src}", []>, TB;
                  
+def LSL64rm : RI<0x03, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
+                 "lsl{q}\t{$src, $dst|$dst, $src}", []>, TB; 
+def LSL64rr : RI<0x03, MRMSrcReg, (outs GR64:$dst), (ins GR64:$src),
+                 "lsl{q}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def SWPGS : I<0x01, RawFrm, (outs), (ins), "swpgs", []>, TB;
+
+def PUSHFS64 : I<0xa0, RawFrm, (outs), (ins),
+                 "push{q}\t%fs", []>, TB;
+def PUSHGS64 : I<0xa8, RawFrm, (outs), (ins),
+                 "push{q}\t%gs", []>, TB;
+
+def POPFS64 : I<0xa1, RawFrm, (outs), (ins),
+                "pop{q}\t%fs", []>, TB;
+def POPGS64 : I<0xa9, RawFrm, (outs), (ins),
+                "pop{q}\t%gs", []>, TB;
+                 
+def LSS64rm : RI<0xb2, MRMSrcMem, (outs GR64:$dst), (ins opaque80mem:$src),
+                 "lss{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def LFS64rm : RI<0xb4, MRMSrcMem, (outs GR64:$dst), (ins opaque80mem:$src),
+                 "lfs{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def LGS64rm : RI<0xb5, MRMSrcMem, (outs GR64:$dst), (ins opaque80mem:$src),
+                 "lgs{q}\t{$src, $dst|$dst, $src}", []>, TB;
+
+// Specialized register support
+
+// no m form encodable; use SMSW16m
+def SMSW64r : RI<0x01, MRM4r, (outs GR64:$dst), (ins), 
+                 "smsw{q}\t$dst", []>, TB;
+
 // String manipulation instructions
 
 def LODSQ : RI<0xAD, RawFrm, (outs), (ins), "lodsq", []>;
@@ -1722,9 +1925,9 @@ def : Pat<(X86cmov (loadi64 addr:$src1), GR64:$src2, X86_COND_NO, EFLAGS),
 def : Pat<(zextloadi64i1 addr:$src), (MOVZX64rm8 addr:$src)>;
 
 // extload
-// When extloading from 16-bit and smaller memory locations into 64-bit registers,
-// use zero-extending loads so that the entire 64-bit register is defined, avoiding
-// partial-register updates.
+// When extloading from 16-bit and smaller memory locations into 64-bit 
+// registers, use zero-extending loads so that the entire 64-bit register is 
+// defined, avoiding partial-register updates.
 def : Pat<(extloadi64i1 addr:$src),  (MOVZX64rm8  addr:$src)>;
 def : Pat<(extloadi64i8 addr:$src),  (MOVZX64rm8  addr:$src)>;
 def : Pat<(extloadi64i16 addr:$src), (MOVZX64rm16 addr:$src)>;
@@ -1995,7 +2198,8 @@ def : Pat<(parallel (store (X86add_flag (loadi64 addr:$dst), i64immSExt8:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (ADD64mi8 addr:$dst, i64immSExt8:$src2)>;
-def : Pat<(parallel (store (X86add_flag (loadi64 addr:$dst), i64immSExt32:$src2),
+def : Pat<(parallel (store (X86add_flag (loadi64 addr:$dst), 
+                                        i64immSExt32:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (ADD64mi32 addr:$dst, i64immSExt32:$src2)>;
@@ -2025,11 +2229,13 @@ def : Pat<(parallel (store (X86sub_flag (loadi64 addr:$dst), GR64:$src2),
           (SUB64mr addr:$dst, GR64:$src2)>;
 
 // Memory-Integer Subtraction with EFLAGS result
-def : Pat<(parallel (store (X86sub_flag (loadi64 addr:$dst), i64immSExt8:$src2),
+def : Pat<(parallel (store (X86sub_flag (loadi64 addr:$dst), 
+                                        i64immSExt8:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (SUB64mi8 addr:$dst, i64immSExt8:$src2)>;
-def : Pat<(parallel (store (X86sub_flag (loadi64 addr:$dst), i64immSExt32:$src2),
+def : Pat<(parallel (store (X86sub_flag (loadi64 addr:$dst),
+                                        i64immSExt32:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (SUB64mi32 addr:$dst, i64immSExt32:$src2)>;
@@ -2153,7 +2359,8 @@ def : Pat<(parallel (store (X86xor_flag (loadi64 addr:$dst), i64immSExt8:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (XOR64mi8 addr:$dst, i64immSExt8:$src2)>;
-def : Pat<(parallel (store (X86xor_flag (loadi64 addr:$dst), i64immSExt32:$src2),
+def : Pat<(parallel (store (X86xor_flag (loadi64 addr:$dst), 
+                                        i64immSExt32:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (XOR64mi32 addr:$dst, i64immSExt32:$src2)>;
@@ -2185,7 +2392,8 @@ def : Pat<(parallel (store (X86and_flag (loadi64 addr:$dst), i64immSExt8:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (AND64mi8 addr:$dst, i64immSExt8:$src2)>;
-def : Pat<(parallel (store (X86and_flag (loadi64 addr:$dst), i64immSExt32:$src2),
+def : Pat<(parallel (store (X86and_flag (loadi64 addr:$dst), 
+                                        i64immSExt32:$src2),
                            addr:$dst),
                     (implicit EFLAGS)),
           (AND64mi32 addr:$dst, i64immSExt32:$src2)>;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
index b0b0409..71ec178 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
@@ -195,48 +195,67 @@ def _Fp80 : FpI_<(outs RFP80:$dst), (ins RFP80:$src1, RFP80:$src2), TwoArgFP,
 // These instructions cannot address 80-bit memory.
 multiclass FPBinary<SDNode OpNode, Format fp, string asmstring> {
 // ST(0) = ST(0) + [mem]
-def _Fp32m  : FpIf32<(outs RFP32:$dst), (ins RFP32:$src1, f32mem:$src2), OneArgFPRW,
+def _Fp32m  : FpIf32<(outs RFP32:$dst), 
+                     (ins RFP32:$src1, f32mem:$src2), OneArgFPRW,
                   [(set RFP32:$dst, 
                     (OpNode RFP32:$src1, (loadf32 addr:$src2)))]>;
-def _Fp64m  : FpIf64<(outs RFP64:$dst), (ins RFP64:$src1, f64mem:$src2), OneArgFPRW,
+def _Fp64m  : FpIf64<(outs RFP64:$dst), 
+                     (ins RFP64:$src1, f64mem:$src2), OneArgFPRW,
                   [(set RFP64:$dst, 
                     (OpNode RFP64:$src1, (loadf64 addr:$src2)))]>;
-def _Fp64m32: FpIf64<(outs RFP64:$dst), (ins RFP64:$src1, f32mem:$src2), OneArgFPRW,
+def _Fp64m32: FpIf64<(outs RFP64:$dst), 
+                     (ins RFP64:$src1, f32mem:$src2), OneArgFPRW,
                   [(set RFP64:$dst, 
                     (OpNode RFP64:$src1, (f64 (extloadf32 addr:$src2))))]>;
-def _Fp80m32: FpI_<(outs RFP80:$dst), (ins RFP80:$src1, f32mem:$src2), OneArgFPRW,
+def _Fp80m32: FpI_<(outs RFP80:$dst), 
+                   (ins RFP80:$src1, f32mem:$src2), OneArgFPRW,
                   [(set RFP80:$dst, 
                     (OpNode RFP80:$src1, (f80 (extloadf32 addr:$src2))))]>;
-def _Fp80m64: FpI_<(outs RFP80:$dst), (ins RFP80:$src1, f64mem:$src2), OneArgFPRW,
+def _Fp80m64: FpI_<(outs RFP80:$dst), 
+                   (ins RFP80:$src1, f64mem:$src2), OneArgFPRW,
                   [(set RFP80:$dst, 
                     (OpNode RFP80:$src1, (f80 (extloadf64 addr:$src2))))]>;
 def _F32m  : FPI<0xD8, fp, (outs), (ins f32mem:$src), 
-                 !strconcat("f", !strconcat(asmstring, "{s}\t$src"))> { let mayLoad = 1; }
+                 !strconcat("f", !strconcat(asmstring, "{s}\t$src"))> { 
+  let mayLoad = 1; 
+}
 def _F64m  : FPI<0xDC, fp, (outs), (ins f64mem:$src), 
-                 !strconcat("f", !strconcat(asmstring, "{l}\t$src"))> { let mayLoad = 1; }
+                 !strconcat("f", !strconcat(asmstring, "{l}\t$src"))> { 
+  let mayLoad = 1; 
+}
 // ST(0) = ST(0) + [memint]
-def _FpI16m32 : FpIf32<(outs RFP32:$dst), (ins RFP32:$src1, i16mem:$src2), OneArgFPRW,
+def _FpI16m32 : FpIf32<(outs RFP32:$dst), (ins RFP32:$src1, i16mem:$src2), 
+                       OneArgFPRW,
                     [(set RFP32:$dst, (OpNode RFP32:$src1,
                                        (X86fild addr:$src2, i16)))]>;
-def _FpI32m32 : FpIf32<(outs RFP32:$dst), (ins RFP32:$src1, i32mem:$src2), OneArgFPRW,
+def _FpI32m32 : FpIf32<(outs RFP32:$dst), (ins RFP32:$src1, i32mem:$src2), 
+                       OneArgFPRW,
                     [(set RFP32:$dst, (OpNode RFP32:$src1,
                                        (X86fild addr:$src2, i32)))]>;
-def _FpI16m64 : FpIf64<(outs RFP64:$dst), (ins RFP64:$src1, i16mem:$src2), OneArgFPRW,
+def _FpI16m64 : FpIf64<(outs RFP64:$dst), (ins RFP64:$src1, i16mem:$src2), 
+                       OneArgFPRW,
                     [(set RFP64:$dst, (OpNode RFP64:$src1,
                                        (X86fild addr:$src2, i16)))]>;
-def _FpI32m64 : FpIf64<(outs RFP64:$dst), (ins RFP64:$src1, i32mem:$src2), OneArgFPRW,
+def _FpI32m64 : FpIf64<(outs RFP64:$dst), (ins RFP64:$src1, i32mem:$src2), 
+                       OneArgFPRW,
                     [(set RFP64:$dst, (OpNode RFP64:$src1,
                                        (X86fild addr:$src2, i32)))]>;
-def _FpI16m80 : FpI_<(outs RFP80:$dst), (ins RFP80:$src1, i16mem:$src2), OneArgFPRW,
+def _FpI16m80 : FpI_<(outs RFP80:$dst), (ins RFP80:$src1, i16mem:$src2), 
+                       OneArgFPRW,
                     [(set RFP80:$dst, (OpNode RFP80:$src1,
                                        (X86fild addr:$src2, i16)))]>;
-def _FpI32m80 : FpI_<(outs RFP80:$dst), (ins RFP80:$src1, i32mem:$src2), OneArgFPRW,
+def _FpI32m80 : FpI_<(outs RFP80:$dst), (ins RFP80:$src1, i32mem:$src2), 
+                       OneArgFPRW,
                     [(set RFP80:$dst, (OpNode RFP80:$src1,
                                        (X86fild addr:$src2, i32)))]>;
 def _FI16m  : FPI<0xDE, fp, (outs), (ins i16mem:$src), 
-                  !strconcat("fi", !strconcat(asmstring, "{s}\t$src"))> { let mayLoad = 1; }
+                  !strconcat("fi", !strconcat(asmstring, "{s}\t$src"))> { 
+  let mayLoad = 1; 
+}
 def _FI32m  : FPI<0xDA, fp, (outs), (ins i32mem:$src), 
-                  !strconcat("fi", !strconcat(asmstring, "{l}\t$src"))> { let mayLoad = 1; }
+                  !strconcat("fi", !strconcat(asmstring, "{l}\t$src"))> { 
+  let mayLoad = 1; 
+}
 }
 
 defm ADD : FPBinary_rr<fadd>;
@@ -279,6 +298,9 @@ def DIV_FST0r   : FPST0rInst <0xF0, "fdiv\t$op">;
 def DIVR_FrST0  : FPrST0Inst <0xF0, "fdiv{|r}\t{%st(0), $op|$op, %ST(0)}">;
 def DIVR_FPrST0 : FPrST0PInst<0xF0, "fdiv{|r}p\t$op">;
 
+def COM_FST0r   : FPST0rInst <0xD0, "fcom\t$op">;
+def COMP_FST0r  : FPST0rInst <0xD8, "fcomp\t$op">;
+
 // Unary operations.
 multiclass FPUnary<SDNode OpNode, bits<8> opcode, string asmstring> {
 def _Fp32  : FpIf32<(outs RFP32:$dst), (ins RFP32:$src), OneArgFPRW,
@@ -305,22 +327,22 @@ def TST_F  : FPI<0xE4, RawFrm, (outs), (ins), "ftst">, D9;
 
 // Versions of FP instructions that take a single memory operand.  Added for the
 //   disassembler; remove as they are included with patterns elsewhere.
-def FCOM32m  : FPI<0xD8, MRM2m, (outs), (ins f32mem:$src), "fcom\t$src">;
-def FCOMP32m : FPI<0xD8, MRM3m, (outs), (ins f32mem:$src), "fcomp\t$src">;
+def FCOM32m  : FPI<0xD8, MRM2m, (outs), (ins f32mem:$src), "fcom{l}\t$src">;
+def FCOMP32m : FPI<0xD8, MRM3m, (outs), (ins f32mem:$src), "fcomp{l}\t$src">;
 
 def FLDENVm  : FPI<0xD9, MRM4m, (outs), (ins f32mem:$src), "fldenv\t$src">;
-def FSTENVm  : FPI<0xD9, MRM6m, (outs f32mem:$dst), (ins), "fstenv\t$dst">;
+def FSTENVm  : FPI<0xD9, MRM6m, (outs f32mem:$dst), (ins), "fnstenv\t$dst">;
 
 def FICOM32m : FPI<0xDA, MRM2m, (outs), (ins i32mem:$src), "ficom{l}\t$src">;
 def FICOMP32m: FPI<0xDA, MRM3m, (outs), (ins i32mem:$src), "ficomp{l}\t$src">;
 
-def FCOM64m  : FPI<0xDC, MRM2m, (outs), (ins f64mem:$src), "fcom\t$src">;
-def FCOMP64m : FPI<0xDC, MRM3m, (outs), (ins f64mem:$src), "fcomp\t$src">;
+def FCOM64m  : FPI<0xDC, MRM2m, (outs), (ins f64mem:$src), "fcom{ll}\t$src">;
+def FCOMP64m : FPI<0xDC, MRM3m, (outs), (ins f64mem:$src), "fcomp{ll}\t$src">;
 
 def FISTTP32m: FPI<0xDD, MRM1m, (outs i32mem:$dst), (ins), "fisttp{l}\t$dst">;
 def FRSTORm  : FPI<0xDD, MRM4m, (outs f32mem:$dst), (ins), "frstor\t$dst">;
-def FSAVEm   : FPI<0xDD, MRM6m, (outs f32mem:$dst), (ins), "fsave\t$dst">;
-def FSTSWm   : FPI<0xDD, MRM7m, (outs f32mem:$dst), (ins), "fstsw\t$dst">;
+def FSAVEm   : FPI<0xDD, MRM6m, (outs f32mem:$dst), (ins), "fnsave\t$dst">;
+def FNSTSWm  : FPI<0xDD, MRM7m, (outs f32mem:$dst), (ins), "fnstsw\t$dst">;
 
 def FICOM16m : FPI<0xDE, MRM2m, (outs), (ins i16mem:$src), "ficom{w}\t$src">;
 def FICOMP16m: FPI<0xDE, MRM3m, (outs), (ins i16mem:$src), "ficomp{w}\t$src">;
@@ -493,7 +515,8 @@ def ISTT_Fp64m80 : FpI_<(outs), (ins i64mem:$op, RFP80:$src), OneArgFP,
 let mayStore = 1 in {
 def ISTT_FP16m : FPI<0xDF, MRM1m, (outs), (ins i16mem:$dst), "fisttp{s}\t$dst">;
 def ISTT_FP32m : FPI<0xDB, MRM1m, (outs), (ins i32mem:$dst), "fisttp{l}\t$dst">;
-def ISTT_FP64m : FPI<0xDD, MRM1m, (outs), (ins i64mem:$dst), "fisttp{ll}\t$dst">;
+def ISTT_FP64m : FPI<0xDD, MRM1m, (outs), (ins i64mem:$dst), 
+  "fisttp{ll}\t$dst">;
 }
 
 // FP Stack manipulation instructions.
@@ -561,10 +584,15 @@ def UCOM_FIPr  : FPI<0xE8, AddRegFrm,     // CC = cmp ST(0) with ST(i), pop
                     "fucomip\t{$reg, %st(0)|%ST(0), $reg}">, DF;
 }
 
+def COM_FIr : FPI<0xF0, AddRegFrm, (outs), (ins RST:$reg),
+                  "fcomi\t{$reg, %st(0)|%ST(0), $reg}">, DB;
+def COM_FIPr : FPI<0xF0, AddRegFrm, (outs), (ins RST:$reg),
+                   "fcomip\t{$reg, %st(0)|%ST(0), $reg}">, DF;
+
 // Floating point flag ops.
 let Defs = [AX] in
 def FNSTSW8r  : I<0xE0, RawFrm,                  // AX = fp flags
-                  (outs), (ins), "fnstsw", []>, DF;
+                  (outs), (ins), "fnstsw %ax", []>, DF;
 
 def FNSTCW16m : I<0xD9, MRM7m,                   // [mem16] = X87 control world
                   (outs), (ins i16mem:$dst), "fnstcw\t$dst",
@@ -574,6 +602,44 @@ let mayLoad = 1 in
 def FLDCW16m  : I<0xD9, MRM5m,                   // X87 control world = [mem16]
                   (outs), (ins i16mem:$dst), "fldcw\t$dst", []>;
 
+// Register free
+
+def FFREE : FPI<0xC0, AddRegFrm, (outs), (ins RST:$reg),
+                "ffree\t$reg">, DD;
+
+// Clear exceptions
+
+def FNCLEX : I<0xE2, RawFrm, (outs), (ins), "fnclex", []>, DB;
+
+// Operandless floating-point instructions for the disassembler
+
+def FNOP : I<0xD0, RawFrm, (outs), (ins), "fnop", []>, D9;
+def FXAM : I<0xE5, RawFrm, (outs), (ins), "fxam", []>, D9;
+def FLDL2T : I<0xE9, RawFrm, (outs), (ins), "fldl2t", []>, D9;
+def FLDL2E : I<0xEA, RawFrm, (outs), (ins), "fldl2e", []>, D9;
+def FLDPI : I<0xEB, RawFrm, (outs), (ins), "fldpi", []>, D9;
+def FLDLG2 : I<0xEC, RawFrm, (outs), (ins), "fldlg2", []>, D9;
+def FLDLN2 : I<0xED, RawFrm, (outs), (ins), "fldln2", []>, D9;
+def F2XM1 : I<0xF0, RawFrm, (outs), (ins), "f2xm1", []>, D9;
+def FYL2X : I<0xF1, RawFrm, (outs), (ins), "fyl2x", []>, D9;
+def FPTAN : I<0xF2, RawFrm, (outs), (ins), "fptan", []>, D9;
+def FPATAN : I<0xF3, RawFrm, (outs), (ins), "fpatan", []>, D9;
+def FXTRACT : I<0xF4, RawFrm, (outs), (ins), "fxtract", []>, D9;
+def FPREM1 : I<0xF5, RawFrm, (outs), (ins), "fprem1", []>, D9;
+def FDECSTP : I<0xF6, RawFrm, (outs), (ins), "fdecstp", []>, D9;
+def FINCSTP : I<0xF7, RawFrm, (outs), (ins), "fincstp", []>, D9;
+def FPREM : I<0xF8, RawFrm, (outs), (ins), "fprem", []>, D9;
+def FYL2XP1 : I<0xF9, RawFrm, (outs), (ins), "fyl2xp1", []>, D9;
+def FSINCOS : I<0xFB, RawFrm, (outs), (ins), "fsincos", []>, D9;
+def FRNDINT : I<0xFC, RawFrm, (outs), (ins), "frndint", []>, D9;
+def FSCALE : I<0xFD, RawFrm, (outs), (ins), "fscale", []>, D9;
+def FCOMPP : I<0xD9, RawFrm, (outs), (ins), "fcompp", []>, DE;
+
+def FXSAVE : I<0xAE, MRM0m, (outs opaque512mem:$dst), (ins),
+               "fxsave\t$dst", []>, TB;
+def FXRSTOR : I<0xAE, MRM1m, (outs), (ins opaque512mem:$src),
+                "fxrstor\t$src", []>, TB;
+
 //===----------------------------------------------------------------------===//
 // Non-Instruction Patterns
 //===----------------------------------------------------------------------===//
@@ -585,11 +651,15 @@ def : Pat<(X86fld addr:$src, f80), (LD_Fp80m addr:$src)>;
 
 // Required for CALL which return f32 / f64 / f80 values.
 def : Pat<(X86fst RFP32:$src, addr:$op, f32), (ST_Fp32m addr:$op, RFP32:$src)>;
-def : Pat<(X86fst RFP64:$src, addr:$op, f32), (ST_Fp64m32 addr:$op, RFP64:$src)>;
+def : Pat<(X86fst RFP64:$src, addr:$op, f32), (ST_Fp64m32 addr:$op, 
+                                                          RFP64:$src)>;
 def : Pat<(X86fst RFP64:$src, addr:$op, f64), (ST_Fp64m addr:$op, RFP64:$src)>;
-def : Pat<(X86fst RFP80:$src, addr:$op, f32), (ST_Fp80m32 addr:$op, RFP80:$src)>;
-def : Pat<(X86fst RFP80:$src, addr:$op, f64), (ST_Fp80m64 addr:$op, RFP80:$src)>;
-def : Pat<(X86fst RFP80:$src, addr:$op, f80), (ST_FpP80m addr:$op, RFP80:$src)>;
+def : Pat<(X86fst RFP80:$src, addr:$op, f32), (ST_Fp80m32 addr:$op, 
+                                                          RFP80:$src)>;
+def : Pat<(X86fst RFP80:$src, addr:$op, f64), (ST_Fp80m64 addr:$op, 
+                                                          RFP80:$src)>;
+def : Pat<(X86fst RFP80:$src, addr:$op, f80), (ST_FpP80m addr:$op,
+                                                         RFP80:$src)>;
 
 // Floating point constant -0.0 and -1.0
 def : Pat<(f32 fpimmneg0), (CHS_Fp32 (LD_Fp032))>, Requires<[FPStackf32]>;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
index 2f14bb0..a799f16 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
@@ -115,17 +115,20 @@ class I<bits<8> o, Format f, dag outs, dag ins, string asm, list<dag> pattern>
   let Pattern = pattern;
   let CodeSize = 3;
 }
-class Ii8 <bits<8> o, Format f, dag outs, dag ins, string asm, list<dag> pattern>
+class Ii8 <bits<8> o, Format f, dag outs, dag ins, string asm, 
+           list<dag> pattern>
   : X86Inst<o, f, Imm8 , outs, ins, asm> {
   let Pattern = pattern;
   let CodeSize = 3;
 }
-class Ii16<bits<8> o, Format f, dag outs, dag ins, string asm, list<dag> pattern>
+class Ii16<bits<8> o, Format f, dag outs, dag ins, string asm, 
+           list<dag> pattern>
   : X86Inst<o, f, Imm16, outs, ins, asm> {
   let Pattern = pattern;
   let CodeSize = 3;
 }
-class Ii32<bits<8> o, Format f, dag outs, dag ins, string asm, list<dag> pattern>
+class Ii32<bits<8> o, Format f, dag outs, dag ins, string asm, 
+           list<dag> pattern>
   : X86Inst<o, f, Imm32, outs, ins, asm> {
   let Pattern = pattern;
   let CodeSize = 3;
@@ -169,7 +172,8 @@ class Iseg32 <bits<8> o, Format f, dag outs, dag ins, string asm,
 
 class SSI<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, XS, Requires<[HasSSE1]>;
-class SSIi8<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class SSIi8<bits<8> o, Format F, dag outs, dag ins, string asm, 
+            list<dag> pattern>
       : Ii8<o, F, outs, ins, asm, pattern>, XS, Requires<[HasSSE1]>;
 class PSI<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TB, Requires<[HasSSE1]>;
@@ -205,9 +209,11 @@ class PDIi8<bits<8> o, Format F, dag outs, dag ins, string asm,
 //   S3SI  - SSE3 instructions with XS prefix.
 //   S3DI  - SSE3 instructions with XD prefix.
 
-class S3SI<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class S3SI<bits<8> o, Format F, dag outs, dag ins, string asm, 
+           list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, XS, Requires<[HasSSE3]>;
-class S3DI<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class S3DI<bits<8> o, Format F, dag outs, dag ins, string asm, 
+           list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, XD, Requires<[HasSSE3]>;
 class S3I<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TB, OpSize, Requires<[HasSSE3]>;
@@ -255,7 +261,7 @@ class SS42FI<bits<8> o, Format F, dag outs, dag ins, string asm,
       
 //   SS42AI = SSE 4.2 instructions with TA prefix
 class SS42AI<bits<8> o, Format F, dag outs, dag ins, string asm,
-	     list<dag> pattern>
+             list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TA, Requires<[HasSSE42]>;
 
 // X86-64 Instruction templates...
@@ -297,17 +303,24 @@ class RPDI<bits<8> o, Format F, dag outs, dag ins, string asm,
 // MMXIi8 - MMX instructions with ImmT == Imm8 and TB prefix.
 // MMXID  - MMX instructions with XD prefix.
 // MMXIS  - MMX instructions with XS prefix.
-class MMXI<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMXI<bits<8> o, Format F, dag outs, dag ins, string asm, 
+           list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TB, Requires<[HasMMX]>;
-class MMXI64<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMXI64<bits<8> o, Format F, dag outs, dag ins, string asm, 
+             list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TB, Requires<[HasMMX,In64BitMode]>;
-class MMXRI<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMXRI<bits<8> o, Format F, dag outs, dag ins, string asm, 
+            list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TB, REX_W, Requires<[HasMMX]>;
-class MMX2I<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMX2I<bits<8> o, Format F, dag outs, dag ins, string asm, 
+            list<dag> pattern>
       : I<o, F, outs, ins, asm, pattern>, TB, OpSize, Requires<[HasMMX]>;
-class MMXIi8<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMXIi8<bits<8> o, Format F, dag outs, dag ins, string asm, 
+             list<dag> pattern>
       : Ii8<o, F, outs, ins, asm, pattern>, TB, Requires<[HasMMX]>;
-class MMXID<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMXID<bits<8> o, Format F, dag outs, dag ins, string asm, 
+            list<dag> pattern>
       : Ii8<o, F, outs, ins, asm, pattern>, XD, Requires<[HasMMX]>;
-class MMXIS<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
+class MMXIS<bits<8> o, Format F, dag outs, dag ins, string asm, 
+            list<dag> pattern>
       : Ii8<o, F, outs, ins, asm, pattern>, XS, Requires<[HasMMX]>;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
index 1947d35..e555cd1 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -1018,13 +1018,11 @@ void X86InstrInfo::reMaterialize(MachineBasicBlock &MBB,
   switch (Opc) {
   default: break;
   case X86::MOV8r0:
-  case X86::MOV16r0:
   case X86::MOV32r0: {
     if (!isSafeToClobberEFLAGS(MBB, I)) {
       switch (Opc) {
       default: break;
       case X86::MOV8r0:  Opc = X86::MOV8ri;  break;
-      case X86::MOV16r0: Opc = X86::MOV16ri; break;
       case X86::MOV32r0: Opc = X86::MOV32ri; break;
       }
       Clone = false;
@@ -1880,7 +1878,7 @@ bool X86InstrInfo::copyRegToReg(MachineBasicBlock &MBB,
     if (SrcReg != X86::EFLAGS)
       return false;
     if (DestRC == &X86::GR64RegClass || DestRC == &X86::GR64_NOSPRegClass) {
-      BuildMI(MBB, MI, DL, get(X86::PUSHFQ));
+      BuildMI(MBB, MI, DL, get(X86::PUSHFQ64));
       BuildMI(MBB, MI, DL, get(X86::POP64r), DestReg);
       return true;
     } else if (DestRC == &X86::GR32RegClass ||
@@ -2292,9 +2290,7 @@ X86InstrInfo::foldMemoryOperandImpl(MachineFunction &MF,
     OpcodeTablePtr = &RegOp2MemOpTable2Addr;
     isTwoAddrFold = true;
   } else if (i == 0) { // If operand 0
-    if (MI->getOpcode() == X86::MOV16r0)
-      NewMI = MakeM0Inst(*this, X86::MOV16mi, MOs, MI);
-    else if (MI->getOpcode() == X86::MOV32r0)
+    if (MI->getOpcode() == X86::MOV32r0)
       NewMI = MakeM0Inst(*this, X86::MOV32mi, MOs, MI);
     else if (MI->getOpcode() == X86::MOV8r0)
       NewMI = MakeM0Inst(*this, X86::MOV8mi, MOs, MI);
@@ -2370,6 +2366,23 @@ MachineInstr* X86InstrInfo::foldMemoryOperandImpl(MachineFunction &MF,
   // Check switch flag 
   if (NoFusing) return NULL;
 
+  if (!MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize))
+    switch (MI->getOpcode()) {
+    case X86::CVTSD2SSrr:
+    case X86::Int_CVTSD2SSrr:
+    case X86::CVTSS2SDrr:
+    case X86::Int_CVTSS2SDrr:
+    case X86::RCPSSr:
+    case X86::RCPSSr_Int:
+    case X86::ROUNDSDr_Int:
+    case X86::ROUNDSSr_Int:
+    case X86::RSQRTSSr:
+    case X86::RSQRTSSr_Int:
+    case X86::SQRTSSr:
+    case X86::SQRTSSr_Int:
+      return 0;
+    }
+
   const MachineFrameInfo *MFI = MF.getFrameInfo();
   unsigned Size = MFI->getObjectSize(FrameIndex);
   unsigned Alignment = MFI->getObjectAlignment(FrameIndex);
@@ -2405,6 +2418,23 @@ MachineInstr* X86InstrInfo::foldMemoryOperandImpl(MachineFunction &MF,
   // Check switch flag 
   if (NoFusing) return NULL;
 
+  if (!MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize))
+    switch (MI->getOpcode()) {
+    case X86::CVTSD2SSrr:
+    case X86::Int_CVTSD2SSrr:
+    case X86::CVTSS2SDrr:
+    case X86::Int_CVTSS2SDrr:
+    case X86::RCPSSr:
+    case X86::RCPSSr_Int:
+    case X86::ROUNDSDr_Int:
+    case X86::ROUNDSSr_Int:
+    case X86::RSQRTSSr:
+    case X86::RSQRTSSr_Int:
+    case X86::SQRTSSr:
+    case X86::SQRTSSr_Int:
+      return 0;
+    }
+
   // Determine the alignment of the load.
   unsigned Alignment = 0;
   if (LoadMI->hasOneMemOperand())
@@ -2529,7 +2559,6 @@ bool X86InstrInfo::canFoldMemoryOperand(const MachineInstr *MI,
   } else if (OpNum == 0) { // If operand 0
     switch (Opc) {
     case X86::MOV8r0:
-    case X86::MOV16r0:
     case X86::MOV32r0:
       return true;
     default: break;
@@ -2558,7 +2587,6 @@ bool X86InstrInfo::unfoldMemoryOperand(MachineFunction &MF, MachineInstr *MI,
     MemOp2RegOpTable.find((unsigned*)MI->getOpcode());
   if (I == MemOp2RegOpTable.end())
     return false;
-  DebugLoc dl = MI->getDebugLoc();
   unsigned Opc = I->second.first;
   unsigned Index = I->second.second & 0xf;
   bool FoldedLoad = I->second.second & (1 << 4);
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
index 3cc1853..4d922a5 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
@@ -1,4 +1,4 @@
-//===- X86InstrInfo.td - Describe the X86 Instruction Set --*- tablegen -*-===//
+
 // 
 //                     The LLVM Compiler Infrastructure
 //
@@ -41,6 +41,9 @@ def SDTX86BrCond  : SDTypeProfile<0, 3,
 def SDTX86SetCC   : SDTypeProfile<1, 2,
                                   [SDTCisVT<0, i8>,
                                    SDTCisVT<1, i8>, SDTCisVT<2, i32>]>;
+def SDTX86SetCC_C : SDTypeProfile<1, 2,
+                                  [SDTCisInt<0>,
+                                   SDTCisVT<1, i8>, SDTCisVT<2, i32>]>;
 
 def SDTX86cas : SDTypeProfile<0, 3, [SDTCisPtrTy<0>, SDTCisInt<1>, 
                                      SDTCisVT<2, i8>]>;
@@ -87,7 +90,7 @@ def X86cmov    : SDNode<"X86ISD::CMOV",     SDTX86Cmov>;
 def X86brcond  : SDNode<"X86ISD::BRCOND",   SDTX86BrCond,
                         [SDNPHasChain]>;
 def X86setcc   : SDNode<"X86ISD::SETCC",    SDTX86SetCC>;
-def X86setcc_c : SDNode<"X86ISD::SETCC_CARRY", SDTX86SetCC>;
+def X86setcc_c : SDNode<"X86ISD::SETCC_CARRY", SDTX86SetCC_C>;
 
 def X86cas : SDNode<"X86ISD::LCMPXCHG_DAG", SDTX86cas,
                         [SDNPHasChain, SDNPInFlag, SDNPOutFlag, SDNPMayStore,
@@ -196,6 +199,12 @@ class X86MemOperand<string printMethod> : Operand<iPTR> {
 def opaque32mem : X86MemOperand<"printopaquemem">;
 def opaque48mem : X86MemOperand<"printopaquemem">;
 def opaque80mem : X86MemOperand<"printopaquemem">;
+def opaque512mem : X86MemOperand<"printopaquemem">;
+
+def offset8 : Operand<i64>  { let PrintMethod = "print_pcrel_imm"; }
+def offset16 : Operand<i64> { let PrintMethod = "print_pcrel_imm"; }
+def offset32 : Operand<i64> { let PrintMethod = "print_pcrel_imm"; }
+def offset64 : Operand<i64> { let PrintMethod = "print_pcrel_imm"; }
 
 def i8mem   : X86MemOperand<"printi8mem">;
 def i16mem  : X86MemOperand<"printi16mem">;
@@ -289,6 +298,7 @@ def FarData      : Predicate<"TM.getCodeModel() != CodeModel::Small &&"
 def NearData     : Predicate<"TM.getCodeModel() == CodeModel::Small ||"
                              "TM.getCodeModel() == CodeModel::Kernel">;
 def IsStatic     : Predicate<"TM.getRelocationModel() == Reloc::Static">;
+def OptForSize   : Predicate<"OptForSize">;
 def OptForSpeed  : Predicate<"!OptForSize">;
 def FastBTMem    : Predicate<"!Subtarget->isBTMemSlow()">;
 def CallImmAddr  : Predicate<"Subtarget->IsLegalToCallImmediateAddr(TM)">;
@@ -351,7 +361,8 @@ def loadi16 : PatFrag<(ops node:$ptr), (i16 (unindexedload node:$ptr)), [{
   return false;
 }]>;
 
-def loadi16_anyext : PatFrag<(ops node:$ptr), (i32 (unindexedload node:$ptr)), [{
+def loadi16_anyext : PatFrag<(ops node:$ptr), (i32 (unindexedload node:$ptr)),
+[{
   LoadSDNode *LD = cast<LoadSDNode>(N);
   if (const Value *Src = LD->getSrcValue())
     if (const PointerType *PT = dyn_cast<PointerType>(Src->getType()))
@@ -539,13 +550,17 @@ def VASTART_SAVE_XMM_REGS : I<0, Pseudo,
 // Nop
 let neverHasSideEffects = 1 in {
   def NOOP : I<0x90, RawFrm, (outs), (ins), "nop", []>;
+  def NOOPW : I<0x1f, MRM0m, (outs), (ins i16mem:$zero),
+                "nop{w}\t$zero", []>, TB, OpSize;
   def NOOPL : I<0x1f, MRM0m, (outs), (ins i32mem:$zero),
-                "nopl\t$zero", []>, TB;
+                "nop{l}\t$zero", []>, TB;
 }
 
 // Trap
 def INT3 : I<0xcc, RawFrm, (outs), (ins), "int\t3", []>;
 def INT : I<0xcd, RawFrm, (outs), (ins i8imm:$trap), "int\t$trap", []>;
+def IRET16 : I<0xcf, RawFrm, (outs), (ins), "iret{w}", []>, OpSize;
+def IRET32 : I<0xcf, RawFrm, (outs), (ins), "iret{l}", []>;
 
 // PIC base construction.  This expands to code that looks like this:
 //     call  $next_inst
@@ -709,12 +724,14 @@ def ENTER : I<0xC8, RawFrm, (outs), (ins i16imm:$len, i8imm:$lvl),
 // Tail call stuff.
 
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-def TCRETURNdi : I<0, Pseudo, (outs), (ins i32imm:$dst, i32imm:$offset, variable_ops),
+def TCRETURNdi : I<0, Pseudo, (outs), 
+                   (ins i32imm:$dst, i32imm:$offset, variable_ops),
                  "#TC_RETURN $dst $offset",
                  []>;
 
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-def TCRETURNri : I<0, Pseudo, (outs), (ins GR32:$dst, i32imm:$offset, variable_ops),
+def TCRETURNri : I<0, Pseudo, (outs), 
+                   (ins GR32:$dst, i32imm:$offset, variable_ops),
                  "#TC_RETURN $dst $offset",
                  []>;
 
@@ -722,7 +739,8 @@ let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
   def TAILJMPd : IBr<0xE9, (ins i32imm_pcrel:$dst), "jmp\t$dst  # TAILCALL",
                  []>;
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-  def TAILJMPr : I<0xFF, MRM4r, (outs), (ins GR32:$dst), "jmp{l}\t{*}$dst  # TAILCALL",
+  def TAILJMPr : I<0xFF, MRM4r, (outs), (ins GR32:$dst), 
+                   "jmp{l}\t{*}$dst  # TAILCALL",
                  []>;     
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
   def TAILJMPm : I<0xFF, MRM4m, (outs), (ins i32mem:$dst),
@@ -735,6 +753,15 @@ let Defs = [EBP, ESP], Uses = [EBP, ESP], mayLoad = 1, neverHasSideEffects=1 in
 def LEAVE    : I<0xC9, RawFrm,
                  (outs), (ins), "leave", []>;
 
+def POPCNT16rr : I<0xB8, MRMSrcReg, (outs GR16:$dst), (ins GR16:$src),
+                   "popcnt{w}\t{$src, $dst|$dst, $src}", []>, OpSize, XS;
+def POPCNT16rm : I<0xB8, MRMSrcMem, (outs GR16:$dst), (ins i16mem:$src),
+                   "popcnt{w}\t{$src, $dst|$dst, $src}", []>, OpSize, XS;
+def POPCNT32rr : I<0xB8, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
+                   "popcnt{l}\t{$src, $dst|$dst, $src}", []>, XS;
+def POPCNT32rm : I<0xB8, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
+                   "popcnt{l}\t{$src, $dst|$dst, $src}", []>, XS;
+
 let Defs = [ESP], Uses = [ESP], neverHasSideEffects=1 in {
 let mayLoad = 1 in {
 def POP16r  : I<0x58, AddRegFrm, (outs GR16:$reg), (ins), "pop{w}\t$reg", []>,
@@ -770,10 +797,14 @@ def PUSH32i32  : Ii32<0x68, RawFrm, (outs), (ins i32imm:$imm),
                       "push{l}\t$imm", []>;
 }
 
-let Defs = [ESP, EFLAGS], Uses = [ESP], mayLoad = 1, neverHasSideEffects=1 in
-def POPFD    : I<0x9D, RawFrm, (outs), (ins), "popf", []>;
-let Defs = [ESP], Uses = [ESP, EFLAGS], mayStore = 1, neverHasSideEffects=1 in
-def PUSHFD   : I<0x9C, RawFrm, (outs), (ins), "pushf", []>;
+let Defs = [ESP, EFLAGS], Uses = [ESP], mayLoad = 1, neverHasSideEffects=1 in {
+def POPF     : I<0x9D, RawFrm, (outs), (ins), "popf{w}", []>, OpSize;
+def POPFD    : I<0x9D, RawFrm, (outs), (ins), "popf{l}", []>;
+}
+let Defs = [ESP], Uses = [ESP, EFLAGS], mayStore = 1, neverHasSideEffects=1 in {
+def PUSHF    : I<0x9C, RawFrm, (outs), (ins), "pushf{w}", []>, OpSize;
+def PUSHFD   : I<0x9C, RawFrm, (outs), (ins), "pushf{l}", []>;
+}
 
 let isTwoAddress = 1 in                               // GR32 = bswap GR32
   def BSWAP32r : I<0xC8, AddRegFrm,
@@ -915,6 +946,13 @@ let Uses = [EAX] in
 def OUT32ir : Ii8<0xE7, RawFrm, (outs), (ins i16i8imm:$port),
                    "out{l}\t{%eax, $port|$port, %EAX}", []>;
 
+def IN8  : I<0x6C, RawFrm, (outs), (ins),
+             "ins{b}", []>;
+def IN16 : I<0x6D, RawFrm, (outs), (ins),
+             "ins{w}", []>,  OpSize;
+def IN32 : I<0x6D, RawFrm, (outs), (ins),
+             "ins{l}", []>;
+
 //===----------------------------------------------------------------------===//
 //  Move Instructions...
 //
@@ -947,18 +985,18 @@ def MOV32mi : Ii32<0xC7, MRM0m, (outs), (ins i32mem:$dst, i32imm:$src),
                    "mov{l}\t{$src, $dst|$dst, $src}",
                    [(store (i32 imm:$src), addr:$dst)]>;
 
-def MOV8o8a : Ii8 <0xA0, RawFrm, (outs), (ins i8imm:$src),
+def MOV8o8a : Ii8 <0xA0, RawFrm, (outs), (ins offset8:$src),
                    "mov{b}\t{$src, %al|%al, $src}", []>;
-def MOV16o16a : Ii16 <0xA1, RawFrm, (outs), (ins i16imm:$src),
+def MOV16o16a : Ii16 <0xA1, RawFrm, (outs), (ins offset16:$src),
                       "mov{w}\t{$src, %ax|%ax, $src}", []>, OpSize;
-def MOV32o32a : Ii32 <0xA1, RawFrm, (outs), (ins i32imm:$src),
+def MOV32o32a : Ii32 <0xA1, RawFrm, (outs), (ins offset32:$src),
                       "mov{l}\t{$src, %eax|%eax, $src}", []>;
 
-def MOV8ao8 : Ii8 <0xA2, RawFrm, (outs i8imm:$dst), (ins),
+def MOV8ao8 : Ii8 <0xA2, RawFrm, (outs offset8:$dst), (ins),
                    "mov{b}\t{%al, $dst|$dst, %al}", []>;
-def MOV16ao16 : Ii16 <0xA3, RawFrm, (outs i16imm:$dst), (ins),
+def MOV16ao16 : Ii16 <0xA3, RawFrm, (outs offset16:$dst), (ins),
                       "mov{w}\t{%ax, $dst|$dst, %ax}", []>, OpSize;
-def MOV32ao32 : Ii32 <0xA3, RawFrm, (outs i32imm:$dst), (ins),
+def MOV32ao32 : Ii32 <0xA3, RawFrm, (outs offset32:$dst), (ins),
                       "mov{l}\t{%eax, $dst|$dst, %eax}", []>;
 
 // Moves to and from segment registers
@@ -971,6 +1009,13 @@ def MOV16sr : I<0x8E, MRMSrcReg, (outs SEGMENT_REG:$dst), (ins GR16:$src),
 def MOV16sm : I<0x8E, MRMSrcMem, (outs SEGMENT_REG:$dst), (ins i16mem:$src),
                 "mov{w}\t{$src, $dst|$dst, $src}", []>;
 
+def MOV8rr_REV : I<0x8A, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src),
+                   "mov{b}\t{$src, $dst|$dst, $src}", []>;
+def MOV16rr_REV : I<0x8B, MRMSrcReg, (outs GR16:$dst), (ins GR16:$src),
+                    "mov{w}\t{$src, $dst|$dst, $src}", []>, OpSize;
+def MOV32rr_REV : I<0x8B, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
+                    "mov{l}\t{$src, $dst|$dst, $src}", []>;
+
 let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in {
 def MOV8rm  : I<0x8A, MRMSrcMem, (outs GR8 :$dst), (ins i8mem :$src),
                 "mov{b}\t{$src, $dst|$dst, $src}",
@@ -1010,6 +1055,18 @@ def MOV8rm_NOREX : I<0x8A, MRMSrcMem,
                      (outs GR8_NOREX:$dst), (ins i8mem_NOREX:$src),
                      "mov{b}\t{$src, $dst|$dst, $src}  # NOREX", []>;
 
+// Moves to and from debug registers
+def MOV32rd : I<0x21, MRMDestReg, (outs GR32:$dst), (ins DEBUG_REG:$src),
+                "mov{l}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOV32dr : I<0x23, MRMSrcReg, (outs DEBUG_REG:$dst), (ins GR32:$src),
+                "mov{l}\t{$src, $dst|$dst, $src}", []>, TB;
+                
+// Moves to and from control registers
+def MOV32rc : I<0x20, MRMDestReg, (outs GR32:$dst), (ins CONTROL_REG_32:$src),
+                "mov{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def MOV32cr : I<0x22, MRMSrcReg, (outs CONTROL_REG_32:$dst), (ins GR32:$src),
+                "mov{q}\t{$src, $dst|$dst, $src}", []>, TB;
+
 //===----------------------------------------------------------------------===//
 //  Fixed-Register Multiplication and Division Instructions...
 //
@@ -1071,7 +1128,7 @@ def IMUL8m  : I<0xF6, MRM5m, (outs), (ins i8mem :$src),
 let Defs = [AX,DX,EFLAGS], Uses = [AX] in
 def IMUL16m : I<0xF7, MRM5m, (outs), (ins i16mem:$src),
                 "imul{w}\t$src", []>, OpSize; // AX,DX = AX*[mem16]
-let Defs = [EAX,EDX], Uses = [EAX] in
+let Defs = [EAX,EDX,EFLAGS], Uses = [EAX] in
 def IMUL32m : I<0xF7, MRM5m, (outs), (ins i32mem:$src),
                 "imul{l}\t$src", []>;  // EAX,EDX = EAX*[mem32]
 }
@@ -1079,45 +1136,47 @@ def IMUL32m : I<0xF7, MRM5m, (outs), (ins i32mem:$src),
 
 // unsigned division/remainder
 let Defs = [AL,AH,EFLAGS], Uses = [AX] in
-def DIV8r  : I<0xF6, MRM6r, (outs),  (ins GR8:$src),          // AX/r8 = AL,AH
+def DIV8r  : I<0xF6, MRM6r, (outs),  (ins GR8:$src),    // AX/r8 = AL,AH
                "div{b}\t$src", []>;
 let Defs = [AX,DX,EFLAGS], Uses = [AX,DX] in
-def DIV16r : I<0xF7, MRM6r, (outs),  (ins GR16:$src),         // DX:AX/r16 = AX,DX
+def DIV16r : I<0xF7, MRM6r, (outs),  (ins GR16:$src),   // DX:AX/r16 = AX,DX
                "div{w}\t$src", []>, OpSize;
 let Defs = [EAX,EDX,EFLAGS], Uses = [EAX,EDX] in
-def DIV32r : I<0xF7, MRM6r, (outs),  (ins GR32:$src),         // EDX:EAX/r32 = EAX,EDX
+def DIV32r : I<0xF7, MRM6r, (outs),  (ins GR32:$src),   // EDX:EAX/r32 = EAX,EDX
                "div{l}\t$src", []>;
 let mayLoad = 1 in {
 let Defs = [AL,AH,EFLAGS], Uses = [AX] in
-def DIV8m  : I<0xF6, MRM6m, (outs), (ins i8mem:$src),       // AX/[mem8] = AL,AH
+def DIV8m  : I<0xF6, MRM6m, (outs), (ins i8mem:$src),   // AX/[mem8] = AL,AH
                "div{b}\t$src", []>;
 let Defs = [AX,DX,EFLAGS], Uses = [AX,DX] in
-def DIV16m : I<0xF7, MRM6m, (outs), (ins i16mem:$src),      // DX:AX/[mem16] = AX,DX
+def DIV16m : I<0xF7, MRM6m, (outs), (ins i16mem:$src),  // DX:AX/[mem16] = AX,DX
                "div{w}\t$src", []>, OpSize;
 let Defs = [EAX,EDX,EFLAGS], Uses = [EAX,EDX] in
-def DIV32m : I<0xF7, MRM6m, (outs), (ins i32mem:$src),      // EDX:EAX/[mem32] = EAX,EDX
+                                                    // EDX:EAX/[mem32] = EAX,EDX
+def DIV32m : I<0xF7, MRM6m, (outs), (ins i32mem:$src),
                "div{l}\t$src", []>;
 }
 
 // Signed division/remainder.
 let Defs = [AL,AH,EFLAGS], Uses = [AX] in
-def IDIV8r : I<0xF6, MRM7r, (outs),  (ins GR8:$src),          // AX/r8 = AL,AH
+def IDIV8r : I<0xF6, MRM7r, (outs),  (ins GR8:$src),    // AX/r8 = AL,AH
                "idiv{b}\t$src", []>;
 let Defs = [AX,DX,EFLAGS], Uses = [AX,DX] in
-def IDIV16r: I<0xF7, MRM7r, (outs),  (ins GR16:$src),         // DX:AX/r16 = AX,DX
+def IDIV16r: I<0xF7, MRM7r, (outs),  (ins GR16:$src),   // DX:AX/r16 = AX,DX
                "idiv{w}\t$src", []>, OpSize;
 let Defs = [EAX,EDX,EFLAGS], Uses = [EAX,EDX] in
-def IDIV32r: I<0xF7, MRM7r, (outs),  (ins GR32:$src),         // EDX:EAX/r32 = EAX,EDX
+def IDIV32r: I<0xF7, MRM7r, (outs),  (ins GR32:$src),   // EDX:EAX/r32 = EAX,EDX
                "idiv{l}\t$src", []>;
 let mayLoad = 1, mayLoad = 1 in {
 let Defs = [AL,AH,EFLAGS], Uses = [AX] in
-def IDIV8m : I<0xF6, MRM7m, (outs), (ins i8mem:$src),      // AX/[mem8] = AL,AH
+def IDIV8m : I<0xF6, MRM7m, (outs), (ins i8mem:$src),   // AX/[mem8] = AL,AH
                "idiv{b}\t$src", []>;
 let Defs = [AX,DX,EFLAGS], Uses = [AX,DX] in
-def IDIV16m: I<0xF7, MRM7m, (outs), (ins i16mem:$src),     // DX:AX/[mem16] = AX,DX
+def IDIV16m: I<0xF7, MRM7m, (outs), (ins i16mem:$src),  // DX:AX/[mem16] = AX,DX
                "idiv{w}\t$src", []>, OpSize;
 let Defs = [EAX,EDX,EFLAGS], Uses = [EAX,EDX] in
-def IDIV32m: I<0xF7, MRM7m, (outs), (ins i32mem:$src),     // EDX:EAX/[mem32] = EAX,EDX
+def IDIV32m: I<0xF7, MRM7m, (outs), (ins i32mem:$src), 
+                                                    // EDX:EAX/[mem32] = EAX,EDX
                "idiv{l}\t$src", []>;
 }
 
@@ -1145,193 +1204,193 @@ def CMOV_GR8 : I<0, Pseudo,
 let isCommutable = 1 in {
 def CMOVB16rr : I<0x42, MRMSrcReg,       // if <u, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovb\t{$src2, $dst|$dst, $src2}",
+                  "cmovb{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_B, EFLAGS))]>,
                   TB, OpSize;
 def CMOVB32rr : I<0x42, MRMSrcReg,       // if <u, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovb\t{$src2, $dst|$dst, $src2}",
+                  "cmovb{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_B, EFLAGS))]>,
                    TB;
 def CMOVAE16rr: I<0x43, MRMSrcReg,       // if >=u, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovae\t{$src2, $dst|$dst, $src2}",
+                  "cmovae{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_AE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVAE32rr: I<0x43, MRMSrcReg,       // if >=u, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovae\t{$src2, $dst|$dst, $src2}",
+                  "cmovae{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_AE, EFLAGS))]>,
                    TB;
 def CMOVE16rr : I<0x44, MRMSrcReg,       // if ==, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmove\t{$src2, $dst|$dst, $src2}",
+                  "cmove{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_E, EFLAGS))]>,
                    TB, OpSize;
 def CMOVE32rr : I<0x44, MRMSrcReg,       // if ==, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmove\t{$src2, $dst|$dst, $src2}",
+                  "cmove{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_E, EFLAGS))]>,
                    TB;
 def CMOVNE16rr: I<0x45, MRMSrcReg,       // if !=, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovne\t{$src2, $dst|$dst, $src2}",
+                  "cmovne{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_NE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVNE32rr: I<0x45, MRMSrcReg,       // if !=, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovne\t{$src2, $dst|$dst, $src2}",
+                  "cmovne{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_NE, EFLAGS))]>,
                    TB;
 def CMOVBE16rr: I<0x46, MRMSrcReg,       // if <=u, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovbe\t{$src2, $dst|$dst, $src2}",
+                  "cmovbe{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_BE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVBE32rr: I<0x46, MRMSrcReg,       // if <=u, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovbe\t{$src2, $dst|$dst, $src2}",
+                  "cmovbe{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_BE, EFLAGS))]>,
                    TB;
 def CMOVA16rr : I<0x47, MRMSrcReg,       // if >u, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmova\t{$src2, $dst|$dst, $src2}",
+                  "cmova{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_A, EFLAGS))]>,
                    TB, OpSize;
 def CMOVA32rr : I<0x47, MRMSrcReg,       // if >u, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmova\t{$src2, $dst|$dst, $src2}",
+                  "cmova{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_A, EFLAGS))]>,
                    TB;
 def CMOVL16rr : I<0x4C, MRMSrcReg,       // if <s, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovl\t{$src2, $dst|$dst, $src2}",
+                  "cmovl{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_L, EFLAGS))]>,
                    TB, OpSize;
 def CMOVL32rr : I<0x4C, MRMSrcReg,       // if <s, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovl\t{$src2, $dst|$dst, $src2}",
+                  "cmovl{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_L, EFLAGS))]>,
                    TB;
 def CMOVGE16rr: I<0x4D, MRMSrcReg,       // if >=s, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovge\t{$src2, $dst|$dst, $src2}",
+                  "cmovge{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_GE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVGE32rr: I<0x4D, MRMSrcReg,       // if >=s, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovge\t{$src2, $dst|$dst, $src2}",
+                  "cmovge{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_GE, EFLAGS))]>,
                    TB;
 def CMOVLE16rr: I<0x4E, MRMSrcReg,       // if <=s, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovle\t{$src2, $dst|$dst, $src2}",
+                  "cmovle{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_LE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVLE32rr: I<0x4E, MRMSrcReg,       // if <=s, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovle\t{$src2, $dst|$dst, $src2}",
+                  "cmovle{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_LE, EFLAGS))]>,
                    TB;
 def CMOVG16rr : I<0x4F, MRMSrcReg,       // if >s, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovg\t{$src2, $dst|$dst, $src2}",
+                  "cmovg{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_G, EFLAGS))]>,
                    TB, OpSize;
 def CMOVG32rr : I<0x4F, MRMSrcReg,       // if >s, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovg\t{$src2, $dst|$dst, $src2}",
+                  "cmovg{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_G, EFLAGS))]>,
                    TB;
 def CMOVS16rr : I<0x48, MRMSrcReg,       // if signed, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovs\t{$src2, $dst|$dst, $src2}",
+                  "cmovs{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_S, EFLAGS))]>,
                   TB, OpSize;
 def CMOVS32rr : I<0x48, MRMSrcReg,       // if signed, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovs\t{$src2, $dst|$dst, $src2}",
+                  "cmovs{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_S, EFLAGS))]>,
                   TB;
 def CMOVNS16rr: I<0x49, MRMSrcReg,       // if !signed, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovns\t{$src2, $dst|$dst, $src2}",
+                  "cmovns{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_NS, EFLAGS))]>,
                   TB, OpSize;
 def CMOVNS32rr: I<0x49, MRMSrcReg,       // if !signed, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovns\t{$src2, $dst|$dst, $src2}",
+                  "cmovns{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_NS, EFLAGS))]>,
                   TB;
 def CMOVP16rr : I<0x4A, MRMSrcReg,       // if parity, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovp\t{$src2, $dst|$dst, $src2}",
+                  "cmovp{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_P, EFLAGS))]>,
                   TB, OpSize;
 def CMOVP32rr : I<0x4A, MRMSrcReg,       // if parity, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovp\t{$src2, $dst|$dst, $src2}",
+                  "cmovp{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_P, EFLAGS))]>,
                   TB;
 def CMOVNP16rr : I<0x4B, MRMSrcReg,       // if !parity, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovnp\t{$src2, $dst|$dst, $src2}",
+                  "cmovnp{w}\t{$src2, $dst|$dst, $src2}",
                    [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                     X86_COND_NP, EFLAGS))]>,
                   TB, OpSize;
 def CMOVNP32rr : I<0x4B, MRMSrcReg,       // if !parity, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovnp\t{$src2, $dst|$dst, $src2}",
+                  "cmovnp{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                     X86_COND_NP, EFLAGS))]>,
                   TB;
 def CMOVO16rr : I<0x40, MRMSrcReg,       // if overflow, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovo\t{$src2, $dst|$dst, $src2}",
+                  "cmovo{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                    X86_COND_O, EFLAGS))]>,
                   TB, OpSize;
 def CMOVO32rr : I<0x40, MRMSrcReg,       // if overflow, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovo\t{$src2, $dst|$dst, $src2}",
+                  "cmovo{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                    X86_COND_O, EFLAGS))]>,
                   TB;
 def CMOVNO16rr : I<0x41, MRMSrcReg,       // if !overflow, GR16 = GR16
                   (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
-                  "cmovno\t{$src2, $dst|$dst, $src2}",
+                  "cmovno{w}\t{$src2, $dst|$dst, $src2}",
                    [(set GR16:$dst, (X86cmov GR16:$src1, GR16:$src2,
                                     X86_COND_NO, EFLAGS))]>,
                   TB, OpSize;
 def CMOVNO32rr : I<0x41, MRMSrcReg,       // if !overflow, GR32 = GR32
                   (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
-                  "cmovno\t{$src2, $dst|$dst, $src2}",
+                  "cmovno{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                     X86_COND_NO, EFLAGS))]>,
                   TB;
@@ -1339,193 +1398,193 @@ def CMOVNO32rr : I<0x41, MRMSrcReg,       // if !overflow, GR32 = GR32
 
 def CMOVB16rm : I<0x42, MRMSrcMem,       // if <u, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovb\t{$src2, $dst|$dst, $src2}",
+                  "cmovb{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_B, EFLAGS))]>,
                   TB, OpSize;
 def CMOVB32rm : I<0x42, MRMSrcMem,       // if <u, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovb\t{$src2, $dst|$dst, $src2}",
+                  "cmovb{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_B, EFLAGS))]>,
                    TB;
 def CMOVAE16rm: I<0x43, MRMSrcMem,       // if >=u, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovae\t{$src2, $dst|$dst, $src2}",
+                  "cmovae{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_AE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVAE32rm: I<0x43, MRMSrcMem,       // if >=u, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovae\t{$src2, $dst|$dst, $src2}",
+                  "cmovae{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_AE, EFLAGS))]>,
                    TB;
 def CMOVE16rm : I<0x44, MRMSrcMem,       // if ==, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmove\t{$src2, $dst|$dst, $src2}",
+                  "cmove{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_E, EFLAGS))]>,
                    TB, OpSize;
 def CMOVE32rm : I<0x44, MRMSrcMem,       // if ==, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmove\t{$src2, $dst|$dst, $src2}",
+                  "cmove{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_E, EFLAGS))]>,
                    TB;
 def CMOVNE16rm: I<0x45, MRMSrcMem,       // if !=, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovne\t{$src2, $dst|$dst, $src2}",
+                  "cmovne{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_NE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVNE32rm: I<0x45, MRMSrcMem,       // if !=, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovne\t{$src2, $dst|$dst, $src2}",
+                  "cmovne{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_NE, EFLAGS))]>,
                    TB;
 def CMOVBE16rm: I<0x46, MRMSrcMem,       // if <=u, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovbe\t{$src2, $dst|$dst, $src2}",
+                  "cmovbe{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_BE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVBE32rm: I<0x46, MRMSrcMem,       // if <=u, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovbe\t{$src2, $dst|$dst, $src2}",
+                  "cmovbe{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_BE, EFLAGS))]>,
                    TB;
 def CMOVA16rm : I<0x47, MRMSrcMem,       // if >u, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmova\t{$src2, $dst|$dst, $src2}",
+                  "cmova{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_A, EFLAGS))]>,
                    TB, OpSize;
 def CMOVA32rm : I<0x47, MRMSrcMem,       // if >u, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmova\t{$src2, $dst|$dst, $src2}",
+                  "cmova{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_A, EFLAGS))]>,
                    TB;
 def CMOVL16rm : I<0x4C, MRMSrcMem,       // if <s, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovl\t{$src2, $dst|$dst, $src2}",
+                  "cmovl{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_L, EFLAGS))]>,
                    TB, OpSize;
 def CMOVL32rm : I<0x4C, MRMSrcMem,       // if <s, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovl\t{$src2, $dst|$dst, $src2}",
+                  "cmovl{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_L, EFLAGS))]>,
                    TB;
 def CMOVGE16rm: I<0x4D, MRMSrcMem,       // if >=s, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovge\t{$src2, $dst|$dst, $src2}",
+                  "cmovge{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_GE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVGE32rm: I<0x4D, MRMSrcMem,       // if >=s, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovge\t{$src2, $dst|$dst, $src2}",
+                  "cmovge{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_GE, EFLAGS))]>,
                    TB;
 def CMOVLE16rm: I<0x4E, MRMSrcMem,       // if <=s, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovle\t{$src2, $dst|$dst, $src2}",
+                  "cmovle{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_LE, EFLAGS))]>,
                    TB, OpSize;
 def CMOVLE32rm: I<0x4E, MRMSrcMem,       // if <=s, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovle\t{$src2, $dst|$dst, $src2}",
+                  "cmovle{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_LE, EFLAGS))]>,
                    TB;
 def CMOVG16rm : I<0x4F, MRMSrcMem,       // if >s, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovg\t{$src2, $dst|$dst, $src2}",
+                  "cmovg{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_G, EFLAGS))]>,
                    TB, OpSize;
 def CMOVG32rm : I<0x4F, MRMSrcMem,       // if >s, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovg\t{$src2, $dst|$dst, $src2}",
+                  "cmovg{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_G, EFLAGS))]>,
                    TB;
 def CMOVS16rm : I<0x48, MRMSrcMem,       // if signed, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovs\t{$src2, $dst|$dst, $src2}",
+                  "cmovs{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_S, EFLAGS))]>,
                   TB, OpSize;
 def CMOVS32rm : I<0x48, MRMSrcMem,       // if signed, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovs\t{$src2, $dst|$dst, $src2}",
+                  "cmovs{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_S, EFLAGS))]>,
                   TB;
 def CMOVNS16rm: I<0x49, MRMSrcMem,       // if !signed, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovns\t{$src2, $dst|$dst, $src2}",
+                  "cmovns{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_NS, EFLAGS))]>,
                   TB, OpSize;
 def CMOVNS32rm: I<0x49, MRMSrcMem,       // if !signed, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovns\t{$src2, $dst|$dst, $src2}",
+                  "cmovns{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_NS, EFLAGS))]>,
                   TB;
 def CMOVP16rm : I<0x4A, MRMSrcMem,       // if parity, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovp\t{$src2, $dst|$dst, $src2}",
+                  "cmovp{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_P, EFLAGS))]>,
                   TB, OpSize;
 def CMOVP32rm : I<0x4A, MRMSrcMem,       // if parity, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovp\t{$src2, $dst|$dst, $src2}",
+                  "cmovp{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_P, EFLAGS))]>,
                   TB;
 def CMOVNP16rm : I<0x4B, MRMSrcMem,       // if !parity, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovnp\t{$src2, $dst|$dst, $src2}",
+                  "cmovnp{w}\t{$src2, $dst|$dst, $src2}",
                    [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                     X86_COND_NP, EFLAGS))]>,
                   TB, OpSize;
 def CMOVNP32rm : I<0x4B, MRMSrcMem,       // if !parity, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovnp\t{$src2, $dst|$dst, $src2}",
+                  "cmovnp{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                     X86_COND_NP, EFLAGS))]>,
                   TB;
 def CMOVO16rm : I<0x40, MRMSrcMem,       // if overflow, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovo\t{$src2, $dst|$dst, $src2}",
+                  "cmovo{w}\t{$src2, $dst|$dst, $src2}",
                   [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                    X86_COND_O, EFLAGS))]>,
                   TB, OpSize;
 def CMOVO32rm : I<0x40, MRMSrcMem,       // if overflow, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovo\t{$src2, $dst|$dst, $src2}",
+                  "cmovo{l}\t{$src2, $dst|$dst, $src2}",
                   [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                    X86_COND_O, EFLAGS))]>,
                   TB;
 def CMOVNO16rm : I<0x41, MRMSrcMem,       // if !overflow, GR16 = [mem16]
                   (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
-                  "cmovno\t{$src2, $dst|$dst, $src2}",
+                  "cmovno{w}\t{$src2, $dst|$dst, $src2}",
                    [(set GR16:$dst, (X86cmov GR16:$src1, (loadi16 addr:$src2),
                                     X86_COND_NO, EFLAGS))]>,
                   TB, OpSize;
 def CMOVNO32rm : I<0x41, MRMSrcMem,       // if !overflow, GR32 = [mem32]
                   (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
-                  "cmovno\t{$src2, $dst|$dst, $src2}",
+                  "cmovno{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (X86cmov GR32:$src1, (loadi32 addr:$src2),
                                     X86_COND_NO, EFLAGS))]>,
                   TB;
@@ -1583,11 +1642,13 @@ def INC8r  : I<0xFE, MRM0r, (outs GR8 :$dst), (ins GR8 :$src), "inc{b}\t$dst",
                [(set GR8:$dst, (add GR8:$src, 1)),
                 (implicit EFLAGS)]>;
 let isConvertibleToThreeAddress = 1, CodeSize = 1 in {  // Can xform into LEA.
-def INC16r : I<0x40, AddRegFrm, (outs GR16:$dst), (ins GR16:$src), "inc{w}\t$dst",
+def INC16r : I<0x40, AddRegFrm, (outs GR16:$dst), (ins GR16:$src), 
+               "inc{w}\t$dst",
                [(set GR16:$dst, (add GR16:$src, 1)),
                 (implicit EFLAGS)]>,
              OpSize, Requires<[In32BitMode]>;
-def INC32r : I<0x40, AddRegFrm, (outs GR32:$dst), (ins GR32:$src), "inc{l}\t$dst",
+def INC32r : I<0x40, AddRegFrm, (outs GR32:$dst), (ins GR32:$src), 
+               "inc{l}\t$dst",
                [(set GR32:$dst, (add GR32:$src, 1)),
                 (implicit EFLAGS)]>, Requires<[In32BitMode]>;
 }
@@ -1610,11 +1671,13 @@ def DEC8r  : I<0xFE, MRM1r, (outs GR8 :$dst), (ins GR8 :$src), "dec{b}\t$dst",
                [(set GR8:$dst, (add GR8:$src, -1)),
                 (implicit EFLAGS)]>;
 let isConvertibleToThreeAddress = 1, CodeSize = 1 in {   // Can xform into LEA.
-def DEC16r : I<0x48, AddRegFrm, (outs GR16:$dst), (ins GR16:$src), "dec{w}\t$dst",
+def DEC16r : I<0x48, AddRegFrm, (outs GR16:$dst), (ins GR16:$src), 
+               "dec{w}\t$dst",
                [(set GR16:$dst, (add GR16:$src, -1)),
                 (implicit EFLAGS)]>,
              OpSize, Requires<[In32BitMode]>;
-def DEC32r : I<0x48, AddRegFrm, (outs GR32:$dst), (ins GR32:$src), "dec{l}\t$dst",
+def DEC32r : I<0x48, AddRegFrm, (outs GR32:$dst), (ins GR32:$src), 
+               "dec{l}\t$dst",
                [(set GR32:$dst, (add GR32:$src, -1)),
                 (implicit EFLAGS)]>, Requires<[In32BitMode]>;
 }
@@ -1654,6 +1717,17 @@ def AND32rr  : I<0x21, MRMDestReg,
                   (implicit EFLAGS)]>;
 }
 
+// AND instructions with the destination register in REG and the source register
+//   in R/M.  Included for the disassembler.
+def AND8rr_REV : I<0x22, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src1, GR8:$src2),
+                  "and{b}\t{$src2, $dst|$dst, $src2}", []>;
+def AND16rr_REV : I<0x23, MRMSrcReg, (outs GR16:$dst), 
+                    (ins GR16:$src1, GR16:$src2),
+                   "and{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize;
+def AND32rr_REV : I<0x23, MRMSrcReg, (outs GR32:$dst), 
+                    (ins GR32:$src1, GR32:$src2),
+                   "and{l}\t{$src2, $dst|$dst, $src2}", []>;
+
 def AND8rm   : I<0x22, MRMSrcMem, 
                  (outs GR8 :$dst), (ins GR8 :$src1, i8mem :$src2),
                  "and{b}\t{$src2, $dst|$dst, $src2}",
@@ -1753,50 +1827,73 @@ let isTwoAddress = 0 in {
 
 
 let isCommutable = 1 in {   // X = OR Y, Z   --> X = OR Z, Y
-def OR8rr    : I<0x08, MRMDestReg, (outs GR8 :$dst), (ins GR8 :$src1, GR8 :$src2),
+def OR8rr    : I<0x08, MRMDestReg, (outs GR8 :$dst), 
+                 (ins GR8 :$src1, GR8 :$src2),
                  "or{b}\t{$src2, $dst|$dst, $src2}",
                  [(set GR8:$dst, (or GR8:$src1, GR8:$src2)),
                   (implicit EFLAGS)]>;
-def OR16rr   : I<0x09, MRMDestReg, (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
+def OR16rr   : I<0x09, MRMDestReg, (outs GR16:$dst), 
+                 (ins GR16:$src1, GR16:$src2),
                  "or{w}\t{$src2, $dst|$dst, $src2}",
                  [(set GR16:$dst, (or GR16:$src1, GR16:$src2)),
                   (implicit EFLAGS)]>, OpSize;
-def OR32rr   : I<0x09, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
+def OR32rr   : I<0x09, MRMDestReg, (outs GR32:$dst), 
+                 (ins GR32:$src1, GR32:$src2),
                  "or{l}\t{$src2, $dst|$dst, $src2}",
                  [(set GR32:$dst, (or GR32:$src1, GR32:$src2)),
                   (implicit EFLAGS)]>;
 }
-def OR8rm    : I<0x0A, MRMSrcMem , (outs GR8 :$dst), (ins GR8 :$src1, i8mem :$src2),
+
+// OR instructions with the destination register in REG and the source register
+//   in R/M.  Included for the disassembler.
+def OR8rr_REV : I<0x0A, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src1, GR8:$src2),
+                  "or{b}\t{$src2, $dst|$dst, $src2}", []>;
+def OR16rr_REV : I<0x0B, MRMSrcReg, (outs GR16:$dst),
+                   (ins GR16:$src1, GR16:$src2),
+                   "or{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize;
+def OR32rr_REV : I<0x0B, MRMSrcReg, (outs GR32:$dst), 
+                   (ins GR32:$src1, GR32:$src2),
+                   "or{l}\t{$src2, $dst|$dst, $src2}", []>;
+                  
+def OR8rm    : I<0x0A, MRMSrcMem , (outs GR8 :$dst), 
+                 (ins GR8 :$src1, i8mem :$src2),
                  "or{b}\t{$src2, $dst|$dst, $src2}",
                 [(set GR8:$dst, (or GR8:$src1, (load addr:$src2))),
                  (implicit EFLAGS)]>;
-def OR16rm   : I<0x0B, MRMSrcMem , (outs GR16:$dst), (ins GR16:$src1, i16mem:$src2),
+def OR16rm   : I<0x0B, MRMSrcMem , (outs GR16:$dst), 
+                 (ins GR16:$src1, i16mem:$src2),
                  "or{w}\t{$src2, $dst|$dst, $src2}",
                 [(set GR16:$dst, (or GR16:$src1, (load addr:$src2))),
                  (implicit EFLAGS)]>, OpSize;
-def OR32rm   : I<0x0B, MRMSrcMem , (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
+def OR32rm   : I<0x0B, MRMSrcMem , (outs GR32:$dst), 
+                 (ins GR32:$src1, i32mem:$src2),
                  "or{l}\t{$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (or GR32:$src1, (load addr:$src2))),
                  (implicit EFLAGS)]>;
 
-def OR8ri    : Ii8 <0x80, MRM1r, (outs GR8 :$dst), (ins GR8 :$src1, i8imm:$src2),
+def OR8ri    : Ii8 <0x80, MRM1r, (outs GR8 :$dst), 
+                    (ins GR8 :$src1, i8imm:$src2),
                     "or{b}\t{$src2, $dst|$dst, $src2}",
                     [(set GR8:$dst, (or GR8:$src1, imm:$src2)),
                      (implicit EFLAGS)]>;
-def OR16ri   : Ii16<0x81, MRM1r, (outs GR16:$dst), (ins GR16:$src1, i16imm:$src2),
+def OR16ri   : Ii16<0x81, MRM1r, (outs GR16:$dst), 
+                    (ins GR16:$src1, i16imm:$src2),
                     "or{w}\t{$src2, $dst|$dst, $src2}", 
                     [(set GR16:$dst, (or GR16:$src1, imm:$src2)),
                      (implicit EFLAGS)]>, OpSize;
-def OR32ri   : Ii32<0x81, MRM1r, (outs GR32:$dst), (ins GR32:$src1, i32imm:$src2),
+def OR32ri   : Ii32<0x81, MRM1r, (outs GR32:$dst), 
+                    (ins GR32:$src1, i32imm:$src2),
                     "or{l}\t{$src2, $dst|$dst, $src2}",
                     [(set GR32:$dst, (or GR32:$src1, imm:$src2)),
                      (implicit EFLAGS)]>;
 
-def OR16ri8  : Ii8<0x83, MRM1r, (outs GR16:$dst), (ins GR16:$src1, i16i8imm:$src2),
+def OR16ri8  : Ii8<0x83, MRM1r, (outs GR16:$dst), 
+                   (ins GR16:$src1, i16i8imm:$src2),
                    "or{w}\t{$src2, $dst|$dst, $src2}",
                    [(set GR16:$dst, (or GR16:$src1, i16immSExt8:$src2)),
                     (implicit EFLAGS)]>, OpSize;
-def OR32ri8  : Ii8<0x83, MRM1r, (outs GR32:$dst), (ins GR32:$src1, i32i8imm:$src2),
+def OR32ri8  : Ii8<0x83, MRM1r, (outs GR32:$dst), 
+                   (ins GR32:$src1, i32i8imm:$src2),
                    "or{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (or GR32:$src1, i32immSExt8:$src2)),
                     (implicit EFLAGS)]>;
@@ -1863,6 +1960,17 @@ let isCommutable = 1 in { // X = XOR Y, Z --> X = XOR Z, Y
                     (implicit EFLAGS)]>;
 } // isCommutable = 1
 
+// XOR instructions with the destination register in REG and the source register
+//   in R/M.  Included for the disassembler.
+def XOR8rr_REV : I<0x32, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src1, GR8:$src2),
+                  "xor{b}\t{$src2, $dst|$dst, $src2}", []>;
+def XOR16rr_REV : I<0x33, MRMSrcReg, (outs GR16:$dst), 
+                    (ins GR16:$src1, GR16:$src2),
+                   "xor{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize;
+def XOR32rr_REV : I<0x33, MRMSrcReg, (outs GR32:$dst), 
+                    (ins GR32:$src1, GR32:$src2),
+                   "xor{l}\t{$src2, $dst|$dst, $src2}", []>;
+
 def XOR8rm   : I<0x32, MRMSrcMem , 
                  (outs GR8 :$dst), (ins GR8:$src1, i8mem :$src2), 
                  "xor{b}\t{$src2, $dst|$dst, $src2}",
@@ -2202,7 +2310,8 @@ def RCL16mCL : I<0xD3, MRM2m, (outs i16mem:$dst), (ins i16mem:$src),
 }
 def RCL16ri : Ii8<0xC1, MRM2r, (outs GR16:$dst), (ins GR16:$src, i8imm:$cnt),
                   "rcl{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
-def RCL16mi : Ii8<0xC1, MRM2m, (outs i16mem:$dst), (ins i16mem:$src, i8imm:$cnt),
+def RCL16mi : Ii8<0xC1, MRM2m, (outs i16mem:$dst), 
+                  (ins i16mem:$src, i8imm:$cnt),
                   "rcl{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
 
 def RCL32r1 : I<0xD1, MRM2r, (outs GR32:$dst), (ins GR32:$src),
@@ -2217,7 +2326,8 @@ def RCL32mCL : I<0xD3, MRM2m, (outs i32mem:$dst), (ins i32mem:$src),
 }
 def RCL32ri : Ii8<0xC1, MRM2r, (outs GR32:$dst), (ins GR32:$src, i8imm:$cnt),
                   "rcl{l}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCL32mi : Ii8<0xC1, MRM2m, (outs i32mem:$dst), (ins i32mem:$src, i8imm:$cnt),
+def RCL32mi : Ii8<0xC1, MRM2m, (outs i32mem:$dst), 
+                  (ins i32mem:$src, i8imm:$cnt),
                   "rcl{l}\t{$cnt, $dst|$dst, $cnt}", []>;
                   
 def RCR8r1 : I<0xD0, MRM3r, (outs GR8:$dst), (ins GR8:$src),
@@ -2247,7 +2357,8 @@ def RCR16mCL : I<0xD3, MRM3m, (outs i16mem:$dst), (ins i16mem:$src),
 }
 def RCR16ri : Ii8<0xC1, MRM3r, (outs GR16:$dst), (ins GR16:$src, i8imm:$cnt),
                   "rcr{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
-def RCR16mi : Ii8<0xC1, MRM3m, (outs i16mem:$dst), (ins i16mem:$src, i8imm:$cnt),
+def RCR16mi : Ii8<0xC1, MRM3m, (outs i16mem:$dst), 
+                  (ins i16mem:$src, i8imm:$cnt),
                   "rcr{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
 
 def RCR32r1 : I<0xD1, MRM3r, (outs GR32:$dst), (ins GR32:$src),
@@ -2262,7 +2373,8 @@ def RCR32mCL : I<0xD3, MRM3m, (outs i32mem:$dst), (ins i32mem:$src),
 }
 def RCR32ri : Ii8<0xC1, MRM3r, (outs GR32:$dst), (ins GR32:$src, i8imm:$cnt),
                   "rcr{l}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCR32mi : Ii8<0xC1, MRM3m, (outs i32mem:$dst), (ins i32mem:$src, i8imm:$cnt),
+def RCR32mi : Ii8<0xC1, MRM3m, (outs i32mem:$dst), 
+                  (ins i32mem:$src, i8imm:$cnt),
                   "rcr{l}\t{$cnt, $dst|$dst, $cnt}", []>;
 
 // FIXME: provide shorter instructions when imm8 == 1
@@ -2283,7 +2395,8 @@ def ROL8ri   : Ii8<0xC0, MRM0r, (outs GR8 :$dst), (ins GR8 :$src1, i8imm:$src2),
                    [(set GR8:$dst, (rotl GR8:$src1, (i8 imm:$src2)))]>;
 def ROL16ri  : Ii8<0xC1, MRM0r, (outs GR16:$dst), (ins GR16:$src1, i8imm:$src2),
                    "rol{w}\t{$src2, $dst|$dst, $src2}",
-                   [(set GR16:$dst, (rotl GR16:$src1, (i8 imm:$src2)))]>, OpSize;
+                   [(set GR16:$dst, (rotl GR16:$src1, (i8 imm:$src2)))]>, 
+                   OpSize;
 def ROL32ri  : Ii8<0xC1, MRM0r, (outs GR32:$dst), (ins GR32:$src1, i8imm:$src2),
                    "rol{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (rotl GR32:$src1, (i8 imm:$src2)))]>;
@@ -2352,7 +2465,8 @@ def ROR8ri   : Ii8<0xC0, MRM1r, (outs GR8 :$dst), (ins GR8 :$src1, i8imm:$src2),
                    [(set GR8:$dst, (rotr GR8:$src1, (i8 imm:$src2)))]>;
 def ROR16ri  : Ii8<0xC1, MRM1r, (outs GR16:$dst), (ins GR16:$src1, i8imm:$src2),
                    "ror{w}\t{$src2, $dst|$dst, $src2}",
-                   [(set GR16:$dst, (rotr GR16:$src1, (i8 imm:$src2)))]>, OpSize;
+                   [(set GR16:$dst, (rotr GR16:$src1, (i8 imm:$src2)))]>, 
+                   OpSize;
 def ROR32ri  : Ii8<0xC1, MRM1r, (outs GR32:$dst), (ins GR32:$src1, i8imm:$src2),
                    "ror{l}\t{$src2, $dst|$dst, $src2}",
                    [(set GR32:$dst, (rotr GR32:$src1, (i8 imm:$src2)))]>;
@@ -2408,17 +2522,21 @@ let isTwoAddress = 0 in {
 
 // Double shift instructions (generalizations of rotate)
 let Uses = [CL] in {
-def SHLD32rrCL : I<0xA5, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
+def SHLD32rrCL : I<0xA5, MRMDestReg, (outs GR32:$dst), 
+                   (ins GR32:$src1, GR32:$src2),
                    "shld{l}\t{%cl, $src2, $dst|$dst, $src2, CL}",
                    [(set GR32:$dst, (X86shld GR32:$src1, GR32:$src2, CL))]>, TB;
-def SHRD32rrCL : I<0xAD, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
+def SHRD32rrCL : I<0xAD, MRMDestReg, (outs GR32:$dst),
+                   (ins GR32:$src1, GR32:$src2),
                    "shrd{l}\t{%cl, $src2, $dst|$dst, $src2, CL}",
                    [(set GR32:$dst, (X86shrd GR32:$src1, GR32:$src2, CL))]>, TB;
-def SHLD16rrCL : I<0xA5, MRMDestReg, (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
+def SHLD16rrCL : I<0xA5, MRMDestReg, (outs GR16:$dst), 
+                   (ins GR16:$src1, GR16:$src2),
                    "shld{w}\t{%cl, $src2, $dst|$dst, $src2, CL}",
                    [(set GR16:$dst, (X86shld GR16:$src1, GR16:$src2, CL))]>,
                    TB, OpSize;
-def SHRD16rrCL : I<0xAD, MRMDestReg, (outs GR16:$dst), (ins GR16:$src1, GR16:$src2),
+def SHRD16rrCL : I<0xAD, MRMDestReg, (outs GR16:$dst), 
+                   (ins GR16:$src1, GR16:$src2),
                    "shrd{w}\t{%cl, $src2, $dst|$dst, $src2, CL}",
                    [(set GR16:$dst, (X86shrd GR16:$src1, GR16:$src2, CL))]>,
                    TB, OpSize;
@@ -2426,25 +2544,29 @@ def SHRD16rrCL : I<0xAD, MRMDestReg, (outs GR16:$dst), (ins GR16:$src1, GR16:$sr
 
 let isCommutable = 1 in {  // These instructions commute to each other.
 def SHLD32rri8 : Ii8<0xA4, MRMDestReg,
-                     (outs GR32:$dst), (ins GR32:$src1, GR32:$src2, i8imm:$src3),
+                     (outs GR32:$dst), 
+                     (ins GR32:$src1, GR32:$src2, i8imm:$src3),
                      "shld{l}\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                      [(set GR32:$dst, (X86shld GR32:$src1, GR32:$src2,
                                       (i8 imm:$src3)))]>,
                  TB;
 def SHRD32rri8 : Ii8<0xAC, MRMDestReg,
-                     (outs GR32:$dst), (ins GR32:$src1, GR32:$src2, i8imm:$src3),
+                     (outs GR32:$dst), 
+                     (ins GR32:$src1, GR32:$src2, i8imm:$src3),
                      "shrd{l}\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                      [(set GR32:$dst, (X86shrd GR32:$src1, GR32:$src2,
                                       (i8 imm:$src3)))]>,
                  TB;
 def SHLD16rri8 : Ii8<0xA4, MRMDestReg,
-                     (outs GR16:$dst), (ins GR16:$src1, GR16:$src2, i8imm:$src3),
+                     (outs GR16:$dst), 
+                     (ins GR16:$src1, GR16:$src2, i8imm:$src3),
                      "shld{w}\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                      [(set GR16:$dst, (X86shld GR16:$src1, GR16:$src2,
                                       (i8 imm:$src3)))]>,
                      TB, OpSize;
 def SHRD16rri8 : Ii8<0xAC, MRMDestReg,
-                     (outs GR16:$dst), (ins GR16:$src1, GR16:$src2, i8imm:$src3),
+                     (outs GR16:$dst), 
+                     (ins GR16:$src1, GR16:$src2, i8imm:$src3),
                      "shrd{w}\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                      [(set GR16:$dst, (X86shrd GR16:$src1, GR16:$src2,
                                       (i8 imm:$src3)))]>,
@@ -2642,6 +2764,16 @@ def ADC32rr  : I<0x11, MRMDestReg, (outs GR32:$dst),
                  "adc{l}\t{$src2, $dst|$dst, $src2}",
                  [(set GR32:$dst, (adde GR32:$src1, GR32:$src2))]>;
 }
+
+def ADC8rr_REV : I<0x12, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src1, GR8:$src2),
+                 "adc{b}\t{$src2, $dst|$dst, $src2}", []>;
+def ADC16rr_REV : I<0x13, MRMSrcReg, (outs GR16:$dst), 
+                    (ins GR16:$src1, GR16:$src2),
+                    "adc{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize;
+def ADC32rr_REV : I<0x13, MRMSrcReg, (outs GR32:$dst), 
+                    (ins GR32:$src1, GR32:$src2),
+                    "adc{l}\t{$src2, $dst|$dst, $src2}", []>;
+
 def ADC8rm   : I<0x12, MRMSrcMem , (outs GR8:$dst), 
                                    (ins GR8:$src1, i8mem:$src2),
                  "adc{b}\t{$src2, $dst|$dst, $src2}",
@@ -2728,6 +2860,15 @@ def SUB32rr : I<0x29, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1,GR32:$src2),
                 [(set GR32:$dst, (sub GR32:$src1, GR32:$src2)),
                  (implicit EFLAGS)]>;
 
+def SUB8rr_REV : I<0x2A, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src1, GR8:$src2),
+                   "sub{b}\t{$src2, $dst|$dst, $src2}", []>;
+def SUB16rr_REV : I<0x2B, MRMSrcReg, (outs GR16:$dst), 
+                    (ins GR16:$src1, GR16:$src2),
+                    "sub{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize;
+def SUB32rr_REV : I<0x2B, MRMSrcReg, (outs GR32:$dst), 
+                    (ins GR32:$src1, GR32:$src2),
+                    "sub{l}\t{$src2, $dst|$dst, $src2}", []>;
+
 // Register-Memory Subtraction
 def SUB8rm  : I<0x2A, MRMSrcMem, (outs GR8 :$dst),
                                  (ins GR8 :$src1, i8mem :$src2),
@@ -2869,6 +3010,16 @@ let isTwoAddress = 0 in {
   def SBB32i32 : Ii32<0x1D, RawFrm, (outs), (ins i32imm:$src),
                       "sbb{l}\t{$src, %eax|%eax, $src}", []>;
 }
+
+def SBB8rr_REV : I<0x1A, MRMSrcReg, (outs GR8:$dst), (ins GR8:$src1, GR8:$src2),
+                   "sbb{b}\t{$src2, $dst|$dst, $src2}", []>;
+def SBB16rr_REV : I<0x1B, MRMSrcReg, (outs GR16:$dst), 
+                    (ins GR16:$src1, GR16:$src2),
+                    "sbb{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize;
+def SBB32rr_REV : I<0x1B, MRMSrcReg, (outs GR32:$dst), 
+                    (ins GR32:$src1, GR32:$src2),
+                    "sbb{l}\t{$src2, $dst|$dst, $src2}", []>;
+
 def SBB8rm   : I<0x1A, MRMSrcMem, (outs GR8:$dst), (ins GR8:$src1, i8mem:$src2),
                     "sbb{b}\t{$src2, $dst|$dst, $src2}",
                     [(set GR8:$dst, (sube GR8:$src1, (load addr:$src2)))]>;
@@ -2923,7 +3074,8 @@ def IMUL16rm : I<0xAF, MRMSrcMem, (outs GR16:$dst),
                  "imul{w}\t{$src2, $dst|$dst, $src2}",
                  [(set GR16:$dst, (mul GR16:$src1, (load addr:$src2))),
                   (implicit EFLAGS)]>, TB, OpSize;
-def IMUL32rm : I<0xAF, MRMSrcMem, (outs GR32:$dst), (ins GR32:$src1, i32mem:$src2),
+def IMUL32rm : I<0xAF, MRMSrcMem, (outs GR32:$dst), 
+                 (ins GR32:$src1, i32mem:$src2),
                  "imul{l}\t{$src2, $dst|$dst, $src2}",
                  [(set GR32:$dst, (mul GR32:$src1, (load addr:$src2))),
                   (implicit EFLAGS)]>, TB;
@@ -2955,12 +3107,12 @@ def IMUL32rri8 : Ii8<0x6B, MRMSrcReg,                       // GR32 = GR32*I8
                       (implicit EFLAGS)]>;
 
 // Memory-Integer Signed Integer Multiply
-def IMUL16rmi  : Ii16<0x69, MRMSrcMem,                      // GR16 = [mem16]*I16
+def IMUL16rmi  : Ii16<0x69, MRMSrcMem,                     // GR16 = [mem16]*I16
                       (outs GR16:$dst), (ins i16mem:$src1, i16imm:$src2),
                       "imul{w}\t{$src2, $src1, $dst|$dst, $src1, $src2}",
                       [(set GR16:$dst, (mul (load addr:$src1), imm:$src2)),
                        (implicit EFLAGS)]>, OpSize;
-def IMUL32rmi  : Ii32<0x69, MRMSrcMem,                      // GR32 = [mem32]*I32
+def IMUL32rmi  : Ii32<0x69, MRMSrcMem,                     // GR32 = [mem32]*I32
                       (outs GR32:$dst), (ins i32mem:$src1, i32imm:$src2),
                       "imul{l}\t{$src2, $src1, $dst|$dst, $src1, $src2}",
                       [(set GR32:$dst, (mul (load addr:$src1), imm:$src2)),
@@ -3068,11 +3220,11 @@ def SETB_C8r : I<0x18, MRMInitReg, (outs GR8:$dst), (ins),
                  [(set GR8:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
 def SETB_C16r : I<0x19, MRMInitReg, (outs GR16:$dst), (ins),
                   "sbb{w}\t$dst, $dst",
-                 [(set GR16:$dst, (zext (X86setcc_c X86_COND_B, EFLAGS)))]>,
+                 [(set GR16:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>,
                 OpSize;
 def SETB_C32r : I<0x19, MRMInitReg, (outs GR32:$dst), (ins),
                   "sbb{l}\t$dst, $dst",
-                 [(set GR32:$dst, (zext (X86setcc_c X86_COND_B, EFLAGS)))]>;
+                 [(set GR32:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
 } // isCodeGenOnly
 
 def SETEr    : I<0x94, MRM0r, 
@@ -3371,15 +3523,21 @@ def BT32rr : I<0xA3, MRMDestReg, (outs), (ins GR32:$src1, GR32:$src2),
 
 // Unlike with the register+register form, the memory+register form of the
 // bt instruction does not ignore the high bits of the index. From ISel's
-// perspective, this is pretty bizarre. Disable these instructions for now.
-//def BT16mr : I<0xA3, MRMDestMem, (outs), (ins i16mem:$src1, GR16:$src2),
-//               "bt{w}\t{$src2, $src1|$src1, $src2}",
+// perspective, this is pretty bizarre. Make these instructions disassembly
+// only for now.
+
+def BT16mr : I<0xA3, MRMDestMem, (outs), (ins i16mem:$src1, GR16:$src2),
+               "bt{w}\t{$src2, $src1|$src1, $src2}", 
 //               [(X86bt (loadi16 addr:$src1), GR16:$src2),
-//                (implicit EFLAGS)]>, OpSize, TB, Requires<[FastBTMem]>;
-//def BT32mr : I<0xA3, MRMDestMem, (outs), (ins i32mem:$src1, GR32:$src2),
-//               "bt{l}\t{$src2, $src1|$src1, $src2}",
+//                (implicit EFLAGS)]
+               []
+               >, OpSize, TB, Requires<[FastBTMem]>;
+def BT32mr : I<0xA3, MRMDestMem, (outs), (ins i32mem:$src1, GR32:$src2),
+               "bt{l}\t{$src2, $src1|$src1, $src2}", 
 //               [(X86bt (loadi32 addr:$src1), GR32:$src2),
-//                (implicit EFLAGS)]>, TB, Requires<[FastBTMem]>;
+//                (implicit EFLAGS)]
+               []
+               >, TB, Requires<[FastBTMem]>;
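The comment above is the key point: in the memory form, `bt` treats the whole bit index as a displacement into a bit string rather than masking it to the operand width, so the high bits of the index change which byte is addressed. A hedged Python sketch of the difference (a model for illustration, not patch content):

```python
def bt_mem(memory: bytearray, base: int, index: int) -> int:
    """Memory form: the index is not truncated; it selects a byte at
    base + index // 8, so large indices reach past the addressed word."""
    byte = base + (index >> 3)
    return (memory[byte] >> (index & 7)) & 1

def bt_reg(value: int, index: int, width: int = 16) -> int:
    """Register form: the index wraps modulo the operand width."""
    return (value >> (index % width)) & 1
```

From ISel's perspective a load-plus-`bt` fold would therefore be wrong unless the index is known to be in range, which is why the patterns stay commented out.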
 
 def BT16ri8 : Ii8<0xBA, MRM4r, (outs), (ins GR16:$src1, i16i8imm:$src2),
                 "bt{w}\t{$src2, $src1|$src1, $src2}",
@@ -3400,12 +3558,67 @@ def BT32mi8 : Ii8<0xBA, MRM4m, (outs), (ins i32mem:$src1, i32i8imm:$src2),
                 "bt{l}\t{$src2, $src1|$src1, $src2}",
                 [(X86bt (loadi32 addr:$src1), i32immSExt8:$src2),
                  (implicit EFLAGS)]>, TB;
+
+def BTC16rr : I<0xBB, MRMDestReg, (outs), (ins GR16:$src1, GR16:$src2),
+                "btc{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTC32rr : I<0xBB, MRMDestReg, (outs), (ins GR32:$src1, GR32:$src2),
+                "btc{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTC16mr : I<0xBB, MRMDestMem, (outs), (ins i16mem:$src1, GR16:$src2),
+                "btc{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTC32mr : I<0xBB, MRMDestMem, (outs), (ins i32mem:$src1, GR32:$src2),
+                "btc{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTC16ri8 : Ii8<0xBA, MRM7r, (outs), (ins GR16:$src1, i16i8imm:$src2),
+                    "btc{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTC32ri8 : Ii8<0xBA, MRM7r, (outs), (ins GR32:$src1, i32i8imm:$src2),
+                    "btc{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTC16mi8 : Ii8<0xBA, MRM7m, (outs), (ins i16mem:$src1, i16i8imm:$src2),
+                    "btc{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTC32mi8 : Ii8<0xBA, MRM7m, (outs), (ins i32mem:$src1, i32i8imm:$src2),
+                    "btc{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+
+def BTR16rr : I<0xB3, MRMDestReg, (outs), (ins GR16:$src1, GR16:$src2),
+                "btr{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTR32rr : I<0xB3, MRMDestReg, (outs), (ins GR32:$src1, GR32:$src2),
+                "btr{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTR16mr : I<0xB3, MRMDestMem, (outs), (ins i16mem:$src1, GR16:$src2),
+                "btr{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTR32mr : I<0xB3, MRMDestMem, (outs), (ins i32mem:$src1, GR32:$src2),
+                "btr{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTR16ri8 : Ii8<0xBA, MRM6r, (outs), (ins GR16:$src1, i16i8imm:$src2),
+                    "btr{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTR32ri8 : Ii8<0xBA, MRM6r, (outs), (ins GR32:$src1, i32i8imm:$src2),
+                    "btr{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTR16mi8 : Ii8<0xBA, MRM6m, (outs), (ins i16mem:$src1, i16i8imm:$src2),
+                    "btr{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTR32mi8 : Ii8<0xBA, MRM6m, (outs), (ins i32mem:$src1, i32i8imm:$src2),
+                    "btr{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+
+def BTS16rr : I<0xAB, MRMDestReg, (outs), (ins GR16:$src1, GR16:$src2),
+                "bts{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTS32rr : I<0xAB, MRMDestReg, (outs), (ins GR32:$src1, GR32:$src2),
+                "bts{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTS16mr : I<0xAB, MRMDestMem, (outs), (ins i16mem:$src1, GR16:$src2),
+                "bts{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTS32mr : I<0xAB, MRMDestMem, (outs), (ins i32mem:$src1, GR32:$src2),
+                "bts{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTS16ri8 : Ii8<0xBA, MRM5r, (outs), (ins GR16:$src1, i16i8imm:$src2),
+                    "bts{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTS32ri8 : Ii8<0xBA, MRM5r, (outs), (ins GR32:$src1, i32i8imm:$src2),
+                    "bts{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
+def BTS16mi8 : Ii8<0xBA, MRM5m, (outs), (ins i16mem:$src1, i16i8imm:$src2),
+                    "bts{w}\t{$src2, $src1|$src1, $src2}", []>, OpSize, TB;
+def BTS32mi8 : Ii8<0xBA, MRM5m, (outs), (ins i32mem:$src1, i32i8imm:$src2),
+                    "bts{l}\t{$src2, $src1|$src1, $src2}", []>, TB;
 } // Defs = [EFLAGS]
 
 // Sign/Zero extenders
 // Use movsbl instead of movsbw; we don't care about the high 16 bits
 // of the register here. This has a smaller encoding and avoids a
-// partial-register update.
+// partial-register update.  Actual movsbw included for the disassembler.
+def MOVSX16rr8W : I<0xBE, MRMSrcReg, (outs GR16:$dst), (ins GR8:$src),
+                    "movs{bw|x}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def MOVSX16rm8W : I<0xBE, MRMSrcMem, (outs GR16:$dst), (ins i8mem:$src),
+                    "movs{bw|x}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
 def MOVSX16rr8 : I<0xBE, MRMSrcReg, (outs GR16:$dst), (ins GR8 :$src),
                    "", [(set GR16:$dst, (sext GR8:$src))]>, TB;
 def MOVSX16rm8 : I<0xBE, MRMSrcMem, (outs GR16:$dst), (ins i8mem :$src),
@@ -3425,7 +3638,11 @@ def MOVSX32rm16: I<0xBF, MRMSrcMem, (outs GR32:$dst), (ins i16mem:$src),
 
 // Use movzbl instead of movzbw; we don't care about the high 16 bits
 // of the register here. This has a smaller encoding and avoids a
-// partial-register update.
+// partial-register update.  Actual movzbw included for the disassembler.
+def MOVZX16rr8W : I<0xB6, MRMSrcReg, (outs GR16:$dst), (ins GR8:$src),
+                    "movz{bw|x}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def MOVZX16rm8W : I<0xB6, MRMSrcMem, (outs GR16:$dst), (ins i8mem:$src),
+                    "movz{bw|x}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;  
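The `W` variants added above exist only so the disassembler can round-trip the real 16-bit encodings; codegen keeps preferring the 32-bit forms for the reasons the comments give. The extension semantics themselves, sketched in Python for reference:

```python
def movsx_b_w(b: int) -> int:
    """Sign-extend an 8-bit value to 16 bits (movsbw semantics)."""
    return (b - 0x100 if b & 0x80 else b) & 0xFFFF

def movzx_b_w(b: int) -> int:
    """Zero-extend an 8-bit value to 16 bits (movzbw semantics)."""
    return b & 0xFF
```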
 def MOVZX16rr8 : I<0xB6, MRMSrcReg, (outs GR16:$dst), (ins GR8 :$src),
                    "", [(set GR16:$dst, (zext GR8:$src))]>, TB;
 def MOVZX16rm8 : I<0xB6, MRMSrcMem, (outs GR16:$dst), (ins i8mem :$src),
@@ -3483,15 +3700,18 @@ let Defs = [EFLAGS], isReMaterializable = 1, isAsCheapAsAMove = 1,
 def MOV8r0   : I<0x30, MRMInitReg, (outs GR8 :$dst), (ins),
                  "xor{b}\t$dst, $dst",
                  [(set GR8:$dst, 0)]>;
-// Use xorl instead of xorw since we don't care about the high 16 bits,
-// it's smaller, and it avoids a partial-register update.
-def MOV16r0  : I<0x31, MRMInitReg, (outs GR16:$dst), (ins),
-                 "", [(set GR16:$dst, 0)]>;
-def MOV32r0  : I<0x31, MRMInitReg,  (outs GR32:$dst), (ins),
+                 
+def MOV32r0  : I<0x31, MRMInitReg, (outs GR32:$dst), (ins),
                  "xor{l}\t$dst, $dst",
                  [(set GR32:$dst, 0)]>;
 }
 
+// Use xorl instead of xorw since we don't care about the high 16 bits,
+// it's smaller, and it avoids a partial-register update.
+let AddedComplexity = 1 in
+def : Pat<(i16 0),
+          (EXTRACT_SUBREG (MOV32r0), x86_subreg_16bit)>;
+
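The replacement pattern above materializes an i16 zero by zeroing the full 32-bit register and extracting the low 16-bit subregister. Assuming the standard x86 encoding (`31 /r` for `xor r/m32, r32`, with the `0x66` operand-size prefix for the 16-bit form), the 32-bit idiom is one byte shorter and avoids the partial-register update:

```python
# Hypothetical byte sequences for the two zeroing idioms (standard x86):
XOR32_EAX_EAX = bytes([0x31, 0xC0])        # xorl %eax, %eax (2 bytes)
XOR16_AX_AX   = bytes([0x66, 0x31, 0xC0])  # xorw %ax, %ax   (0x66 prefix, 3 bytes)
```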
 //===----------------------------------------------------------------------===//
 // Thread Local Storage Instructions
 //
@@ -3538,18 +3758,32 @@ def EH_RETURN   : I<0xC3, RawFrm, (outs), (ins GR32:$addr),
 // Atomic swap. These are just normal xchg instructions. But since a memory
 // operand is referenced, the atomicity is ensured.
 let Constraints = "$val = $dst" in {
-def XCHG32rm : I<0x87, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$ptr, GR32:$val),
+def XCHG32rm : I<0x87, MRMSrcMem, (outs GR32:$dst), 
+                 (ins GR32:$val, i32mem:$ptr),
                "xchg{l}\t{$val, $ptr|$ptr, $val}", 
                [(set GR32:$dst, (atomic_swap_32 addr:$ptr, GR32:$val))]>;
-def XCHG16rm : I<0x87, MRMSrcMem, (outs GR16:$dst), (ins i16mem:$ptr, GR16:$val),
+def XCHG16rm : I<0x87, MRMSrcMem, (outs GR16:$dst), 
+                 (ins GR16:$val, i16mem:$ptr),
                "xchg{w}\t{$val, $ptr|$ptr, $val}", 
                [(set GR16:$dst, (atomic_swap_16 addr:$ptr, GR16:$val))]>, 
                 OpSize;
-def XCHG8rm  : I<0x86, MRMSrcMem, (outs GR8:$dst), (ins i8mem:$ptr, GR8:$val),
+def XCHG8rm  : I<0x86, MRMSrcMem, (outs GR8:$dst), (ins GR8:$val, i8mem:$ptr),
                "xchg{b}\t{$val, $ptr|$ptr, $val}", 
                [(set GR8:$dst, (atomic_swap_8 addr:$ptr, GR8:$val))]>;
+
+def XCHG32rr : I<0x87, MRMSrcReg, (outs GR32:$dst), (ins GR32:$val, GR32:$src),
+                 "xchg{l}\t{$val, $src|$src, $val}", []>;
+def XCHG16rr : I<0x87, MRMSrcReg, (outs GR16:$dst), (ins GR16:$val, GR16:$src),
+                 "xchg{w}\t{$val, $src|$src, $val}", []>, OpSize;
+def XCHG8rr : I<0x86, MRMSrcReg, (outs GR8:$dst), (ins GR8:$val, GR8:$src),
+                "xchg{b}\t{$val, $src|$src, $val}", []>;
 }
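As the comment above notes, `xchg` with a memory operand is implicitly locked: the load and the store happen as one indivisible step, with no `lock` prefix required. A loose Python analogy of that atomic-swap semantics (a model only, using a lock to stand in for the hardware guarantee):

```python
import threading

_bus_lock = threading.Lock()  # stands in for the implicit hardware lock

def xchg_mem(memory: dict, addr: str, val: int) -> int:
    """Atomically swap val into memory[addr], returning the old value."""
    with _bus_lock:
        old = memory[addr]
        memory[addr] = val
        return old
```

The new register-register and accumulator forms below carry no pattern because they have no atomicity to model; they exist for the assembler and disassembler.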
 
+def XCHG16ar : I<0x90, AddRegFrm, (outs), (ins GR16:$src),
+                  "xchg{w}\t{$src, %ax|%ax, $src}", []>, OpSize;
+def XCHG32ar : I<0x90, AddRegFrm, (outs), (ins GR32:$src),
+                  "xchg{l}\t{$src, %eax|%eax, $src}", []>;
+
 // Atomic compare and swap.
 let Defs = [EAX, EFLAGS], Uses = [EAX] in {
 def LCMPXCHG32 : I<0xB1, MRMDestMem, (outs), (ins i32mem:$ptr, GR32:$swap),
@@ -3579,23 +3813,54 @@ def LCMPXCHG8 : I<0xB0, MRMDestMem, (outs), (ins i8mem:$ptr, GR8:$swap),
 
 // Atomic exchange and add
 let Constraints = "$val = $dst", Defs = [EFLAGS] in {
-def LXADD32 : I<0xC1, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$ptr, GR32:$val),
+def LXADD32 : I<0xC1, MRMSrcMem, (outs GR32:$dst), (ins GR32:$val, i32mem:$ptr),
                "lock\n\t"
                "xadd{l}\t{$val, $ptr|$ptr, $val}",
                [(set GR32:$dst, (atomic_load_add_32 addr:$ptr, GR32:$val))]>,
                 TB, LOCK;
-def LXADD16 : I<0xC1, MRMSrcMem, (outs GR16:$dst), (ins i16mem:$ptr, GR16:$val),
+def LXADD16 : I<0xC1, MRMSrcMem, (outs GR16:$dst), (ins GR16:$val, i16mem:$ptr),
                "lock\n\t"
                "xadd{w}\t{$val, $ptr|$ptr, $val}",
                [(set GR16:$dst, (atomic_load_add_16 addr:$ptr, GR16:$val))]>,
                 TB, OpSize, LOCK;
-def LXADD8  : I<0xC0, MRMSrcMem, (outs GR8:$dst), (ins i8mem:$ptr, GR8:$val),
+def LXADD8  : I<0xC0, MRMSrcMem, (outs GR8:$dst), (ins GR8:$val, i8mem:$ptr),
                "lock\n\t"
                "xadd{b}\t{$val, $ptr|$ptr, $val}",
                [(set GR8:$dst, (atomic_load_add_8 addr:$ptr, GR8:$val))]>,
                 TB, LOCK;
 }
 
+def XADD8rr : I<0xC0, MRMDestReg, (outs GR8:$dst), (ins GR8:$src),
+                "xadd{b}\t{$src, $dst|$dst, $src}", []>, TB;
+def XADD16rr : I<0xC1, MRMDestReg, (outs GR16:$dst), (ins GR16:$src),
+                 "xadd{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def XADD32rr  : I<0xC1, MRMDestReg, (outs GR32:$dst), (ins GR32:$src),
+                 "xadd{l}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def XADD8rm   : I<0xC0, MRMDestMem, (outs), (ins i8mem:$dst, GR8:$src),
+                 "xadd{b}\t{$src, $dst|$dst, $src}", []>, TB;
+def XADD16rm  : I<0xC1, MRMDestMem, (outs), (ins i16mem:$dst, GR16:$src),
+                 "xadd{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def XADD32rm  : I<0xC1, MRMDestMem, (outs), (ins i32mem:$dst, GR32:$src),
+                 "xadd{l}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def CMPXCHG8rr : I<0xB0, MRMDestReg, (outs GR8:$dst), (ins GR8:$src),
+                   "cmpxchg{b}\t{$src, $dst|$dst, $src}", []>, TB;
+def CMPXCHG16rr : I<0xB1, MRMDestReg, (outs GR16:$dst), (ins GR16:$src),
+                    "cmpxchg{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def CMPXCHG32rr  : I<0xB1, MRMDestReg, (outs GR32:$dst), (ins GR32:$src),
+                     "cmpxchg{l}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def CMPXCHG8rm   : I<0xB0, MRMDestMem, (outs), (ins i8mem:$dst, GR8:$src),
+                     "cmpxchg{b}\t{$src, $dst|$dst, $src}", []>, TB;
+def CMPXCHG16rm  : I<0xB1, MRMDestMem, (outs), (ins i16mem:$dst, GR16:$src),
+                     "cmpxchg{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def CMPXCHG32rm  : I<0xB1, MRMDestMem, (outs), (ins i32mem:$dst, GR32:$src),
+                     "cmpxchg{l}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def CMPXCHG8B : I<0xC7, MRM1m, (outs), (ins i64mem:$dst),
+                  "cmpxchg8b\t$dst", []>, TB;
+
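The XADD and CMPXCHG forms added above are pattern-less shells for the disassembler, but their semantics (which the LXADD/LCMPXCHG patterns earlier do model) are easy to state. Hedged Python sketches:

```python
def xadd(memory: dict, addr: str, src: int) -> int:
    """xadd: destination += src; the source register gets the old value."""
    old = memory[addr]
    memory[addr] = (old + src) & 0xFFFFFFFF
    return old

def cmpxchg(memory: dict, addr: str, eax: int, new: int):
    """cmpxchg: if dest == EAX, store new and set ZF;
    otherwise ZF is clear and EAX receives the current value."""
    cur = memory[addr]
    if cur == eax:
        memory[addr] = new
        return True, eax    # ZF set, EAX unchanged
    return False, cur       # ZF clear, EAX := current destination
```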
 // Optimized codegen when the non-memory output is not used.
 // FIXME: Use normal add / sub instructions and add lock prefix dynamically.
 let Defs = [EFLAGS] in {
@@ -3652,7 +3917,7 @@ def LOCK_SUB16mi  : Ii16<0x81, MRM5m, (outs), (ins i16mem:$dst, i16imm:$src2),
 def LOCK_SUB32mi  : Ii32<0x81, MRM5m, (outs), (ins i32mem:$dst, i32imm:$src2), 
                     "lock\n\t"
                      "sub{l}\t{$src2, $dst|$dst, $src2}", []>, LOCK;
-def LOCK_SUB16mi8 : Ii8<0x83, MRM5m, (outs), (ins i16mem:$dst, i16i8imm :$src2), 
+def LOCK_SUB16mi8 : Ii8<0x83, MRM5m, (outs), (ins i16mem:$dst, i16i8imm :$src2),
                     "lock\n\t"
                      "sub{w}\t{$src2, $dst|$dst, $src2}", []>, OpSize, LOCK;
 def LOCK_SUB32mi8 : Ii8<0x83, MRM5m, (outs), (ins i32mem:$dst, i32i8imm :$src2),
@@ -3777,12 +4042,193 @@ def LAR32rm : I<0x02, MRMSrcMem, (outs GR32:$dst), (ins i16mem:$src),
                 "lar{l}\t{$src, $dst|$dst, $src}", []>, TB;
 def LAR32rr : I<0x02, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
                 "lar{l}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def LSL16rm : I<0x03, MRMSrcMem, (outs GR16:$dst), (ins i16mem:$src),
+                "lsl{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize; 
+def LSL16rr : I<0x03, MRMSrcReg, (outs GR16:$dst), (ins GR16:$src),
+                "lsl{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def LSL32rm : I<0x03, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
+                "lsl{l}\t{$src, $dst|$dst, $src}", []>, TB; 
+def LSL32rr : I<0x03, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
+                "lsl{l}\t{$src, $dst|$dst, $src}", []>, TB;
+                
+def INVLPG : I<0x01, RawFrm, (outs), (ins), "invlpg", []>, TB;
+
+def STRr : I<0x00, MRM1r, (outs GR16:$dst), (ins),
+             "str{w}\t{$dst}", []>, TB;
+def STRm : I<0x00, MRM1m, (outs i16mem:$dst), (ins),
+             "str{w}\t{$dst}", []>, TB;
+def LTRr : I<0x00, MRM3r, (outs), (ins GR16:$src),
+             "ltr{w}\t{$src}", []>, TB;
+def LTRm : I<0x00, MRM3m, (outs), (ins i16mem:$src),
+             "ltr{w}\t{$src}", []>, TB;
+             
+def PUSHFS16 : I<0xa0, RawFrm, (outs), (ins),
+                 "push{w}\t%fs", []>, OpSize, TB;
+def PUSHFS32 : I<0xa0, RawFrm, (outs), (ins),
+                 "push{l}\t%fs", []>, TB;
+def PUSHGS16 : I<0xa8, RawFrm, (outs), (ins),
+                 "push{w}\t%gs", []>, OpSize, TB;
+def PUSHGS32 : I<0xa8, RawFrm, (outs), (ins),
+                 "push{l}\t%gs", []>, TB;
+
+def POPFS16 : I<0xa1, RawFrm, (outs), (ins),
+                "pop{w}\t%fs", []>, OpSize, TB;
+def POPFS32 : I<0xa1, RawFrm, (outs), (ins),
+                "pop{l}\t%fs", []>, TB;
+def POPGS16 : I<0xa9, RawFrm, (outs), (ins),
+                "pop{w}\t%gs", []>, OpSize, TB;
+def POPGS32 : I<0xa9, RawFrm, (outs), (ins),
+                "pop{l}\t%gs", []>, TB;
+
+def LDS16rm : I<0xc5, MRMSrcMem, (outs GR16:$dst), (ins opaque32mem:$src),
+                "lds{w}\t{$src, $dst|$dst, $src}", []>, OpSize;
+def LDS32rm : I<0xc5, MRMSrcMem, (outs GR32:$dst), (ins opaque48mem:$src),
+                "lds{l}\t{$src, $dst|$dst, $src}", []>;
+def LSS16rm : I<0xb2, MRMSrcMem, (outs GR16:$dst), (ins opaque32mem:$src),
+                "lss{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def LSS32rm : I<0xb2, MRMSrcMem, (outs GR32:$dst), (ins opaque48mem:$src),
+                "lss{l}\t{$src, $dst|$dst, $src}", []>, TB;
+def LES16rm : I<0xc4, MRMSrcMem, (outs GR16:$dst), (ins opaque32mem:$src),
+                "les{w}\t{$src, $dst|$dst, $src}", []>, OpSize;
+def LES32rm : I<0xc4, MRMSrcMem, (outs GR32:$dst), (ins opaque48mem:$src),
+                "les{l}\t{$src, $dst|$dst, $src}", []>;
+def LFS16rm : I<0xb4, MRMSrcMem, (outs GR16:$dst), (ins opaque32mem:$src),
+                "lfs{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def LFS32rm : I<0xb4, MRMSrcMem, (outs GR32:$dst), (ins opaque48mem:$src),
+                "lfs{l}\t{$src, $dst|$dst, $src}", []>, TB;
+def LGS16rm : I<0xb5, MRMSrcMem, (outs GR16:$dst), (ins opaque32mem:$src),
+                "lgs{w}\t{$src, $dst|$dst, $src}", []>, TB, OpSize;
+def LGS32rm : I<0xb5, MRMSrcMem, (outs GR32:$dst), (ins opaque48mem:$src),
+                "lgs{l}\t{$src, $dst|$dst, $src}", []>, TB;
+
+def VERRr : I<0x00, MRM4r, (outs), (ins GR16:$seg),
+              "verr\t$seg", []>, TB;
+def VERRm : I<0x00, MRM4m, (outs), (ins i16mem:$seg),
+              "verr\t$seg", []>, TB;
+def VERWr : I<0x00, MRM5r, (outs), (ins GR16:$seg),
+              "verw\t$seg", []>, TB;
+def VERWm : I<0x00, MRM5m, (outs), (ins i16mem:$seg),
+              "verw\t$seg", []>, TB;
+
+// Descriptor-table support instructions
+
+def SGDTm : I<0x01, MRM0m, (outs opaque48mem:$dst), (ins),
+              "sgdt\t$dst", []>, TB;
+def SIDTm : I<0x01, MRM1m, (outs opaque48mem:$dst), (ins),
+              "sidt\t$dst", []>, TB;
+def SLDT16r : I<0x00, MRM0r, (outs GR16:$dst), (ins),
+                "sldt{w}\t$dst", []>, TB;
+def SLDT16m : I<0x00, MRM0m, (outs i16mem:$dst), (ins),
+                "sldt{w}\t$dst", []>, TB;
+def LGDTm : I<0x01, MRM2m, (outs), (ins opaque48mem:$src),
+              "lgdt\t$src", []>, TB;
+def LIDTm : I<0x01, MRM3m, (outs), (ins opaque48mem:$src),
+              "lidt\t$src", []>, TB;
+def LLDT16r : I<0x00, MRM2r, (outs), (ins GR16:$src),
+                "lldt{w}\t$src", []>, TB;
+def LLDT16m : I<0x00, MRM2m, (outs), (ins i16mem:$src),
+                "lldt{w}\t$src", []>, TB;
                 
 // String manipulation instructions
 
 def LODSB : I<0xAC, RawFrm, (outs), (ins), "lodsb", []>;
 def LODSW : I<0xAD, RawFrm, (outs), (ins), "lodsw", []>, OpSize;
-def LODSD : I<0xAD, RawFrm, (outs), (ins), "lodsd", []>;
+def LODSD : I<0xAD, RawFrm, (outs), (ins), "lods{l|d}", []>;
+
+def OUTSB : I<0x6E, RawFrm, (outs), (ins), "outsb", []>;
+def OUTSW : I<0x6F, RawFrm, (outs), (ins), "outsw", []>, OpSize;
+def OUTSD : I<0x6F, RawFrm, (outs), (ins), "outs{l|d}", []>;
+
+// CPU flow control instructions
+
+def HLT : I<0xF4, RawFrm, (outs), (ins), "hlt", []>;
+def RSM : I<0xAA, RawFrm, (outs), (ins), "rsm", []>, TB;
+
+// FPU control instructions
+
+def FNINIT : I<0xE3, RawFrm, (outs), (ins), "fninit", []>, DB;
+
+// Flag instructions
+
+def CLC : I<0xF8, RawFrm, (outs), (ins), "clc", []>;
+def STC : I<0xF9, RawFrm, (outs), (ins), "stc", []>;
+def CLI : I<0xFA, RawFrm, (outs), (ins), "cli", []>;
+def STI : I<0xFB, RawFrm, (outs), (ins), "sti", []>;
+def CLD : I<0xFC, RawFrm, (outs), (ins), "cld", []>;
+def STD : I<0xFD, RawFrm, (outs), (ins), "std", []>;
+def CMC : I<0xF5, RawFrm, (outs), (ins), "cmc", []>;
+
+def CLTS : I<0x06, RawFrm, (outs), (ins), "clts", []>, TB;
+
+// Table lookup instructions
+
+def XLAT : I<0xD7, RawFrm, (outs), (ins), "xlatb", []>;
+
+// Specialized register support
+
+def WRMSR : I<0x30, RawFrm, (outs), (ins), "wrmsr", []>, TB;
+def RDMSR : I<0x32, RawFrm, (outs), (ins), "rdmsr", []>, TB;
+def RDPMC : I<0x33, RawFrm, (outs), (ins), "rdpmc", []>, TB;
+
+def SMSW16r : I<0x01, MRM4r, (outs GR16:$dst), (ins), 
+                "smsw{w}\t$dst", []>, OpSize, TB;
+def SMSW32r : I<0x01, MRM4r, (outs GR32:$dst), (ins), 
+                "smsw{l}\t$dst", []>, TB;
+// For memory operands, there is only a 16-bit form
+def SMSW16m : I<0x01, MRM4m, (outs i16mem:$dst), (ins),
+                "smsw{w}\t$dst", []>, TB;
+
+def LMSW16r : I<0x01, MRM6r, (outs), (ins GR16:$src),
+                "lmsw{w}\t$src", []>, TB;
+def LMSW16m : I<0x01, MRM6m, (outs), (ins i16mem:$src),
+                "lmsw{w}\t$src", []>, TB;
+                
+def CPUID : I<0xA2, RawFrm, (outs), (ins), "cpuid", []>, TB;
+
+// Cache instructions
+
+def INVD : I<0x08, RawFrm, (outs), (ins), "invd", []>, TB;
+def WBINVD : I<0x09, RawFrm, (outs), (ins), "wbinvd", []>, TB;
+
+// VMX instructions
+
+// 66 0F 38 80
+def INVEPT : I<0x38, RawFrm, (outs), (ins), "invept", []>, OpSize, TB;
+// 66 0F 38 81
+def INVVPID : I<0x38, RawFrm, (outs), (ins), "invvpid", []>, OpSize, TB;
+// 0F 01 C1
+def VMCALL : I<0x01, RawFrm, (outs), (ins), "vmcall", []>, TB;
+def VMCLEARm : I<0xC7, MRM6m, (outs), (ins i64mem:$vmcs),
+  "vmclear\t$vmcs", []>, OpSize, TB;
+// 0F 01 C2
+def VMLAUNCH : I<0x01, RawFrm, (outs), (ins), "vmlaunch", []>, TB;
+// 0F 01 C3
+def VMRESUME : I<0x01, RawFrm, (outs), (ins), "vmresume", []>, TB;
+def VMPTRLDm : I<0xC7, MRM6m, (outs), (ins i64mem:$vmcs),
+  "vmptrld\t$vmcs", []>, TB;
+def VMPTRSTm : I<0xC7, MRM7m, (outs i64mem:$vmcs), (ins),
+  "vmptrst\t$vmcs", []>, TB;
+def VMREAD64rm : I<0x78, MRMDestMem, (outs i64mem:$dst), (ins GR64:$src),
+  "vmread{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMREAD64rr : I<0x78, MRMDestReg, (outs GR64:$dst), (ins GR64:$src),
+  "vmread{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMREAD32rm : I<0x78, MRMDestMem, (outs i32mem:$dst), (ins GR32:$src),
+  "vmread{l}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMREAD32rr : I<0x78, MRMDestReg, (outs GR32:$dst), (ins GR32:$src),
+  "vmread{l}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMWRITE64rm : I<0x79, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
+  "vmwrite{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMWRITE64rr : I<0x79, MRMSrcReg, (outs GR64:$dst), (ins GR64:$src),
+  "vmwrite{q}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMWRITE32rm : I<0x79, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
+  "vmwrite{l}\t{$src, $dst|$dst, $src}", []>, TB;
+def VMWRITE32rr : I<0x79, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
+  "vmwrite{l}\t{$src, $dst|$dst, $src}", []>, TB;
+// 0F 01 C4
+def VMXOFF : I<0x01, RawFrm, (outs), (ins), "vmxoff", []>, OpSize;
+def VMXON : I<0xC7, MRM6m, (outs), (ins i64mem:$vmxon),
+  "vmxon\t{$vmxon}", []>, XD;
 
 //===----------------------------------------------------------------------===//
 // Non-Instruction Patterns
@@ -4028,15 +4474,18 @@ def : Pat<(srl_su GR16:$src, (i8 8)),
             x86_subreg_16bit)>,
       Requires<[In32BitMode]>;
 def : Pat<(i32 (zext (srl_su GR16:$src, (i8 8)))),
-          (MOVZX32rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, 
+                                                             GR16_ABCD)),
                                       x86_subreg_8bit_hi))>,
       Requires<[In32BitMode]>;
 def : Pat<(i32 (anyext (srl_su GR16:$src, (i8 8)))),
-          (MOVZX32rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, 
+                                                             GR16_ABCD)),
                                       x86_subreg_8bit_hi))>,
       Requires<[In32BitMode]>;
 def : Pat<(and (srl_su GR32:$src, (i8 8)), (i32 255)),
-          (MOVZX32rr8 (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, GR32_ABCD)),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, 
+                                                             GR32_ABCD)),
                                       x86_subreg_8bit_hi))>,
       Requires<[In32BitMode]>;
 
@@ -4185,10 +4634,10 @@ def : Pat<(store (shld (loadi16 addr:$dst), (i8 imm:$amt1),
                        GR16:$src2, (i8 imm:$amt2)), addr:$dst),
           (SHLD16mri8 addr:$dst, GR16:$src2, (i8 imm:$amt1))>;
 
-// (anyext (setcc_carry)) -> (zext (setcc_carry))
-def : Pat<(i16 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
+// (anyext (setcc_carry)) -> (setcc_carry)
+def : Pat<(i16 (anyext (i8 (X86setcc_c X86_COND_B, EFLAGS)))),
           (SETB_C16r)>;
-def : Pat<(i32 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
+def : Pat<(i32 (anyext (i8 (X86setcc_c X86_COND_B, EFLAGS)))),
           (SETB_C32r)>;
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
index 500785b..fc40c9a 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
@@ -72,13 +72,13 @@ let Constraints = "$src1 = $dst" in {
   multiclass MMXI_binop_rm<bits<8> opc, string OpcodeStr, SDNode OpNode,
                            ValueType OpVT, bit Commutable = 0> {
     def rr : MMXI<opc, MRMSrcReg, (outs VR64:$dst),
-				  (ins VR64:$src1, VR64:$src2),
+                  (ins VR64:$src1, VR64:$src2),
                   !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                   [(set VR64:$dst, (OpVT (OpNode VR64:$src1, VR64:$src2)))]> {
       let isCommutable = Commutable;
     }
     def rm : MMXI<opc, MRMSrcMem, (outs VR64:$dst),
-				  (ins VR64:$src1, i64mem:$src2),
+                  (ins VR64:$src1, i64mem:$src2),
                   !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                   [(set VR64:$dst, (OpVT (OpNode VR64:$src1,
                                          (bitconvert
@@ -88,13 +88,13 @@ let Constraints = "$src1 = $dst" in {
   multiclass MMXI_binop_rm_int<bits<8> opc, string OpcodeStr, Intrinsic IntId,
                                bit Commutable = 0> {
     def rr : MMXI<opc, MRMSrcReg, (outs VR64:$dst),
-				  (ins VR64:$src1, VR64:$src2),
+                 (ins VR64:$src1, VR64:$src2),
                  !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                  [(set VR64:$dst, (IntId VR64:$src1, VR64:$src2))]> {
       let isCommutable = Commutable;
     }
     def rm : MMXI<opc, MRMSrcMem, (outs VR64:$dst),
-				  (ins VR64:$src1, i64mem:$src2),
+                 (ins VR64:$src1, i64mem:$src2),
                  !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                  [(set VR64:$dst, (IntId VR64:$src1,
                                    (bitconvert (load_mmx addr:$src2))))]>;
@@ -144,9 +144,9 @@ let Constraints = "$src1 = $dst" in {
 //===----------------------------------------------------------------------===//
 
 def MMX_EMMS  : MMXI<0x77, RawFrm, (outs), (ins), "emms",
-						  [(int_x86_mmx_emms)]>;
+                     [(int_x86_mmx_emms)]>;
 def MMX_FEMMS : MMXI<0x0E, RawFrm, (outs), (ins), "femms",
-						  [(int_x86_mmx_femms)]>;
+                     [(int_x86_mmx_femms)]>;
 
 //===----------------------------------------------------------------------===//
 // MMX Scalar Instructions
@@ -155,16 +155,21 @@ def MMX_FEMMS : MMXI<0x0E, RawFrm, (outs), (ins), "femms",
 // Data Transfer Instructions
 def MMX_MOVD64rr : MMXI<0x6E, MRMSrcReg, (outs VR64:$dst), (ins GR32:$src),
                         "movd\t{$src, $dst|$dst, $src}",
-                        [(set VR64:$dst,
-		   	  (v2i32 (scalar_to_vector GR32:$src)))]>;
+                        [(set VR64:$dst, 
+                         (v2i32 (scalar_to_vector GR32:$src)))]>;
 let canFoldAsLoad = 1, isReMaterializable = 1 in
 def MMX_MOVD64rm : MMXI<0x6E, MRMSrcMem, (outs VR64:$dst), (ins i32mem:$src),
                         "movd\t{$src, $dst|$dst, $src}",
               [(set VR64:$dst,
-		(v2i32 (scalar_to_vector (loadi32 addr:$src))))]>;
+               (v2i32 (scalar_to_vector (loadi32 addr:$src))))]>;
 let mayStore = 1 in
 def MMX_MOVD64mr : MMXI<0x7E, MRMDestMem, (outs), (ins i32mem:$dst, VR64:$src),
                         "movd\t{$src, $dst|$dst, $src}", []>;
+def MMX_MOVD64grr : MMXI<0x7E, MRMDestReg, (outs), (ins GR32:$dst, VR64:$src),
+                        "movd\t{$src, $dst|$dst, $src}", []>;
+def MMX_MOVQ64gmr : MMXRI<0x7E, MRMDestMem, (outs), 
+                         (ins i64mem:$dst, VR64:$src),
+                         "movq\t{$src, $dst|$dst, $src}", []>;
 
 let neverHasSideEffects = 1 in
 def MMX_MOVD64to64rr : MMXRI<0x6E, MRMSrcReg, (outs VR64:$dst), (ins GR64:$src),
@@ -181,7 +186,7 @@ def MMX_MOVD64from64rr : MMXRI<0x7E, MRMDestReg,
 def MMX_MOVD64rrv164 : MMXI<0x6E, MRMSrcReg, (outs VR64:$dst), (ins GR64:$src),
                             "movd\t{$src, $dst|$dst, $src}",
                             [(set VR64:$dst,
-			      (v1i64 (scalar_to_vector GR64:$src)))]>;
+                             (v1i64 (scalar_to_vector GR64:$src)))]>;
 
 let neverHasSideEffects = 1 in
 def MMX_MOVQ64rr : MMXI<0x6F, MRMSrcReg, (outs VR64:$dst), (ins VR64:$src),
@@ -223,7 +228,7 @@ def MMX_MOVZDI2PDIrr : MMXI<0x6E, MRMSrcReg, (outs VR64:$dst), (ins GR32:$src),
                     (v2i32 (X86vzmovl (v2i32 (scalar_to_vector GR32:$src)))))]>;
 let AddedComplexity = 20 in
 def MMX_MOVZDI2PDIrm : MMXI<0x6E, MRMSrcMem, (outs VR64:$dst),
-					     (ins i32mem:$src),
+                           (ins i32mem:$src),
                              "movd\t{$src, $dst|$dst, $src}",
           [(set VR64:$dst,
                 (v2i32 (X86vzmovl (v2i32
@@ -432,21 +437,21 @@ def MMX_CVTPD2PIrr  : MMX2I<0x2D, MRMSrcReg, (outs VR64:$dst), (ins VR128:$src),
                             "cvtpd2pi\t{$src, $dst|$dst, $src}", []>;
 let mayLoad = 1 in
 def MMX_CVTPD2PIrm  : MMX2I<0x2D, MRMSrcMem, (outs VR64:$dst),
-					     (ins f128mem:$src),
+                            (ins f128mem:$src),
                             "cvtpd2pi\t{$src, $dst|$dst, $src}", []>;
 
 def MMX_CVTPI2PDrr  : MMX2I<0x2A, MRMSrcReg, (outs VR128:$dst), (ins VR64:$src),
                             "cvtpi2pd\t{$src, $dst|$dst, $src}", []>;
 let mayLoad = 1 in
 def MMX_CVTPI2PDrm  : MMX2I<0x2A, MRMSrcMem, (outs VR128:$dst),
-	  				     (ins i64mem:$src),
+                            (ins i64mem:$src),
                             "cvtpi2pd\t{$src, $dst|$dst, $src}", []>;
 
 def MMX_CVTPI2PSrr  : MMXI<0x2A, MRMSrcReg, (outs VR128:$dst), (ins VR64:$src),
                            "cvtpi2ps\t{$src, $dst|$dst, $src}", []>;
 let mayLoad = 1 in
 def MMX_CVTPI2PSrm  : MMXI<0x2A, MRMSrcMem, (outs VR128:$dst),
-					    (ins i64mem:$src),
+                           (ins i64mem:$src),
                            "cvtpi2ps\t{$src, $dst|$dst, $src}", []>;
 
 def MMX_CVTPS2PIrr  : MMXI<0x2D, MRMSrcReg, (outs VR64:$dst), (ins VR128:$src),
@@ -459,7 +464,7 @@ def MMX_CVTTPD2PIrr : MMX2I<0x2C, MRMSrcReg, (outs VR64:$dst), (ins VR128:$src),
                             "cvttpd2pi\t{$src, $dst|$dst, $src}", []>;
 let mayLoad = 1 in
 def MMX_CVTTPD2PIrm : MMX2I<0x2C, MRMSrcMem, (outs VR64:$dst),
-					     (ins f128mem:$src),
+                            (ins f128mem:$src),
                             "cvttpd2pi\t{$src, $dst|$dst, $src}", []>;
 
 def MMX_CVTTPS2PIrr : MMXI<0x2C, MRMSrcReg, (outs VR64:$dst), (ins VR128:$src),
@@ -481,14 +486,14 @@ def MMX_PEXTRWri  : MMXIi8<0xC5, MRMSrcReg,
                                              (iPTR imm:$src2)))]>;
 let Constraints = "$src1 = $dst" in {
   def MMX_PINSRWrri : MMXIi8<0xC4, MRMSrcReg,
-                      (outs VR64:$dst), (ins VR64:$src1, GR32:$src2,
-					     i16i8imm:$src3),
+                      (outs VR64:$dst), 
+                      (ins VR64:$src1, GR32:$src2,i16i8imm:$src3),
                       "pinsrw\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                       [(set VR64:$dst, (v4i16 (MMX_X86pinsrw (v4i16 VR64:$src1),
                                                GR32:$src2,(iPTR imm:$src3))))]>;
   def MMX_PINSRWrmi : MMXIi8<0xC4, MRMSrcMem,
-                     (outs VR64:$dst), (ins VR64:$src1, i16mem:$src2,
-					    i16i8imm:$src3),
+                     (outs VR64:$dst),
+                     (ins VR64:$src1, i16mem:$src2, i16i8imm:$src3),
                      "pinsrw\t{$src3, $src2, $dst|$dst, $src2, $src3}",
                      [(set VR64:$dst,
                        (v4i16 (MMX_X86pinsrw (v4i16 VR64:$src1),
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
index 62841f8..b26e508 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
@@ -70,7 +70,7 @@ def X86pcmpgtd : SDNode<"X86ISD::PCMPGTD", SDTIntBinOp>;
 def X86pcmpgtq : SDNode<"X86ISD::PCMPGTQ", SDTIntBinOp>;
 
 def SDTX86CmpPTest : SDTypeProfile<0, 2, [SDTCisVT<0, v4f32>,
-					  SDTCisVT<1, v4f32>]>;
+                                          SDTCisVT<1, v4f32>]>;
 def X86ptest   : SDNode<"X86ISD::PTEST", SDTX86CmpPTest>;
 
 //===----------------------------------------------------------------------===//
@@ -116,12 +116,18 @@ def alignedload : PatFrag<(ops node:$ptr), (load node:$ptr), [{
   return cast<LoadSDNode>(N)->getAlignment() >= 16;
 }]>;
 
-def alignedloadfsf32 : PatFrag<(ops node:$ptr), (f32   (alignedload node:$ptr))>;
-def alignedloadfsf64 : PatFrag<(ops node:$ptr), (f64   (alignedload node:$ptr))>;
-def alignedloadv4f32 : PatFrag<(ops node:$ptr), (v4f32 (alignedload node:$ptr))>;
-def alignedloadv2f64 : PatFrag<(ops node:$ptr), (v2f64 (alignedload node:$ptr))>;
-def alignedloadv4i32 : PatFrag<(ops node:$ptr), (v4i32 (alignedload node:$ptr))>;
-def alignedloadv2i64 : PatFrag<(ops node:$ptr), (v2i64 (alignedload node:$ptr))>;
+def alignedloadfsf32 : PatFrag<(ops node:$ptr), 
+                               (f32 (alignedload node:$ptr))>;
+def alignedloadfsf64 : PatFrag<(ops node:$ptr), 
+                               (f64 (alignedload node:$ptr))>;
+def alignedloadv4f32 : PatFrag<(ops node:$ptr), 
+                               (v4f32 (alignedload node:$ptr))>;
+def alignedloadv2f64 : PatFrag<(ops node:$ptr), 
+                               (v2f64 (alignedload node:$ptr))>;
+def alignedloadv4i32 : PatFrag<(ops node:$ptr), 
+                               (v4i32 (alignedload node:$ptr))>;
+def alignedloadv2i64 : PatFrag<(ops node:$ptr), 
+                               (v2i64 (alignedload node:$ptr))>;
 
 // Like 'load', but uses special alignment checks suitable for use in
 // memory operands in most SSE instructions, which are required to
@@ -363,6 +369,11 @@ def CVTSI2SSrm  : SSI<0x2A, MRMSrcMem, (outs FR32:$dst), (ins i32mem:$src),
                       [(set FR32:$dst, (sint_to_fp (loadi32 addr:$src)))]>;
 
 // Match intrinsics which expect XMM operand(s).
+def CVTSS2SIrr: SSI<0x2D, MRMSrcReg, (outs GR32:$dst), (ins FR32:$src),
+                    "cvtss2si{l}\t{$src, $dst|$dst, $src}", []>;
+def CVTSS2SIrm: SSI<0x2D, MRMSrcMem, (outs GR32:$dst), (ins f32mem:$src),
+                    "cvtss2si{l}\t{$src, $dst|$dst, $src}", []>;
+
 def Int_CVTSS2SIrr : SSI<0x2D, MRMSrcReg, (outs GR32:$dst), (ins VR128:$src),
                          "cvtss2si\t{$src, $dst|$dst, $src}",
                          [(set GR32:$dst, (int_x86_sse_cvtss2si VR128:$src))]>;
@@ -441,19 +452,26 @@ def UCOMISSrm: PSI<0x2E, MRMSrcMem, (outs), (ins FR32:$src1, f32mem:$src2),
                    "ucomiss\t{$src2, $src1|$src1, $src2}",
                    [(X86cmp FR32:$src1, (loadf32 addr:$src2)),
                     (implicit EFLAGS)]>;
+                    
+def COMISSrr: PSI<0x2F, MRMSrcReg, (outs), (ins VR128:$src1, VR128:$src2),
+                  "comiss\t{$src2, $src1|$src1, $src2}", []>;
+def COMISSrm: PSI<0x2F, MRMSrcMem, (outs), (ins VR128:$src1, f128mem:$src2),
+                  "comiss\t{$src2, $src1|$src1, $src2}", []>;
+                  
 } // Defs = [EFLAGS]
 
 // Aliases to match intrinsics which expect XMM operand(s).
 let Constraints = "$src1 = $dst" in {
   def Int_CMPSSrr : SSIi8<0xC2, MRMSrcReg,
-                        (outs VR128:$dst), (ins VR128:$src1, VR128:$src,
-					        SSECC:$cc),
+                        (outs VR128:$dst), 
+                        (ins VR128:$src1, VR128:$src, SSECC:$cc),
                         "cmp${cc}ss\t{$src, $dst|$dst, $src}",
-                        [(set VR128:$dst, (int_x86_sse_cmp_ss VR128:$src1,
-                                           	VR128:$src, imm:$cc))]>;
+                        [(set VR128:$dst, (int_x86_sse_cmp_ss 
+                                             VR128:$src1,
+                                             VR128:$src, imm:$cc))]>;
   def Int_CMPSSrm : SSIi8<0xC2, MRMSrcMem,
-                        (outs VR128:$dst), (ins VR128:$src1, f32mem:$src,
-						SSECC:$cc),
+                        (outs VR128:$dst), 
+                        (ins VR128:$src1, f32mem:$src, SSECC:$cc),
                         "cmp${cc}ss\t{$src, $dst|$dst, $src}",
                         [(set VR128:$dst, (int_x86_sse_cmp_ss VR128:$src1,
                                            (load addr:$src), imm:$cc))]>;
@@ -806,9 +824,10 @@ multiclass sse1_fp_unop_rm<bits<8> opc, string OpcodeStr,
   }
 
   // Scalar operation, mem.
-  def SSm : SSI<opc, MRMSrcMem, (outs FR32:$dst), (ins f32mem:$src),
+  def SSm : I<opc, MRMSrcMem, (outs FR32:$dst), (ins f32mem:$src),
                 !strconcat(OpcodeStr, "ss\t{$src, $dst|$dst, $src}"),
-                [(set FR32:$dst, (OpNode (load addr:$src)))]>;
+                [(set FR32:$dst, (OpNode (load addr:$src)))]>, XS,
+            Requires<[HasSSE1, OptForSize]>;
 
   // Vector operation, reg.
   def PSr : PSI<opc, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
@@ -1098,9 +1117,10 @@ def CVTTSD2SIrm : SDI<0x2C, MRMSrcMem, (outs GR32:$dst), (ins f64mem:$src),
 def CVTSD2SSrr  : SDI<0x5A, MRMSrcReg, (outs FR32:$dst), (ins FR64:$src),
                       "cvtsd2ss\t{$src, $dst|$dst, $src}",
                       [(set FR32:$dst, (fround FR64:$src))]>;
-def CVTSD2SSrm  : SDI<0x5A, MRMSrcMem, (outs FR32:$dst), (ins f64mem:$src),
+def CVTSD2SSrm  : I<0x5A, MRMSrcMem, (outs FR32:$dst), (ins f64mem:$src),
                       "cvtsd2ss\t{$src, $dst|$dst, $src}",
-                      [(set FR32:$dst, (fround (loadf64 addr:$src)))]>;
+                      [(set FR32:$dst, (fround (loadf64 addr:$src)))]>, XD,
+                  Requires<[HasSSE2, OptForSize]>;
 def CVTSI2SDrr  : SDI<0x2A, MRMSrcReg, (outs FR64:$dst), (ins GR32:$src),
                       "cvtsi2sd\t{$src, $dst|$dst, $src}",
                       [(set FR64:$dst, (sint_to_fp GR32:$src))]>;
@@ -1137,7 +1157,10 @@ def CVTSS2SDrr : I<0x5A, MRMSrcReg, (outs FR64:$dst), (ins FR32:$src),
 def CVTSS2SDrm : I<0x5A, MRMSrcMem, (outs FR64:$dst), (ins f32mem:$src),
                    "cvtss2sd\t{$src, $dst|$dst, $src}",
                    [(set FR64:$dst, (extloadf32 addr:$src))]>, XS,
-                 Requires<[HasSSE2]>;
+                 Requires<[HasSSE2, OptForSize]>;
+
+def : Pat<(extloadf32 addr:$src),
+          (CVTSS2SDrr (MOVSSrm addr:$src))>, Requires<[HasSSE2, OptForSpeed]>;
 
 // Match intrinsics which expect XMM operand(s).
 def Int_CVTSD2SIrr : SDI<0x2D, MRMSrcReg, (outs GR32:$dst), (ins VR128:$src),
@@ -1205,14 +1228,14 @@ def UCOMISDrm: PDI<0x2E, MRMSrcMem, (outs), (ins FR64:$src1, f64mem:$src2),
 // Aliases to match intrinsics which expect XMM operand(s).
 let Constraints = "$src1 = $dst" in {
   def Int_CMPSDrr : SDIi8<0xC2, MRMSrcReg,
-                        (outs VR128:$dst), (ins VR128:$src1, VR128:$src,
-						SSECC:$cc),
+                        (outs VR128:$dst), 
+                        (ins VR128:$src1, VR128:$src, SSECC:$cc),
                         "cmp${cc}sd\t{$src, $dst|$dst, $src}",
                         [(set VR128:$dst, (int_x86_sse2_cmp_sd VR128:$src1,
                                            VR128:$src, imm:$cc))]>;
   def Int_CMPSDrm : SDIi8<0xC2, MRMSrcMem,
-                        (outs VR128:$dst), (ins VR128:$src1, f64mem:$src,
-						SSECC:$cc),
+                        (outs VR128:$dst), 
+                        (ins VR128:$src1, f64mem:$src, SSECC:$cc),
                         "cmp${cc}sd\t{$src, $dst|$dst, $src}",
                         [(set VR128:$dst, (int_x86_sse2_cmp_sd VR128:$src1,
                                            (load addr:$src), imm:$cc))]>;
@@ -1542,9 +1565,15 @@ def Int_CVTPS2DQrm : PDI<0x5B, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
                          [(set VR128:$dst, (int_x86_sse2_cvtps2dq
                                             (memop addr:$src)))]>;
 // SSE2 packed instructions with XS prefix
+def CVTTPS2DQrr : SSI<0x5B, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
+                      "cvttps2dq\t{$src, $dst|$dst, $src}", []>;
+def CVTTPS2DQrm : SSI<0x5B, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
+                      "cvttps2dq\t{$src, $dst|$dst, $src}", []>;
+
 def Int_CVTTPS2DQrr : I<0x5B, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
                         "cvttps2dq\t{$src, $dst|$dst, $src}",
-                        [(set VR128:$dst, (int_x86_sse2_cvttps2dq VR128:$src))]>,
+                        [(set VR128:$dst, 
+                              (int_x86_sse2_cvttps2dq VR128:$src))]>,
                       XS, Requires<[HasSSE2]>;
 def Int_CVTTPS2DQrm : I<0x5B, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
                         "cvttps2dq\t{$src, $dst|$dst, $src}",
@@ -1572,6 +1601,11 @@ def Int_CVTTPD2DQrm : PDI<0xE6, MRMSrcMem, (outs VR128:$dst),(ins f128mem:$src),
                                              (memop addr:$src)))]>;
 
 // SSE2 instructions without OpSize prefix
+def CVTPS2PDrr : I<0x5A, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
+                       "cvtps2pd\t{$src, $dst|$dst, $src}", []>, TB;
+def CVTPS2PDrm : I<0x5A, MRMSrcMem, (outs VR128:$dst), (ins f64mem:$src),
+                       "cvtps2pd\t{$src, $dst|$dst, $src}", []>, TB;
+
 def Int_CVTPS2PDrr : I<0x5A, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
                        "cvtps2pd\t{$src, $dst|$dst, $src}",
                        [(set VR128:$dst, (int_x86_sse2_cvtps2pd VR128:$src))]>,
@@ -1582,6 +1616,12 @@ def Int_CVTPS2PDrm : I<0x5A, MRMSrcMem, (outs VR128:$dst), (ins f64mem:$src),
                                           (load addr:$src)))]>,
                      TB, Requires<[HasSSE2]>;
 
+def CVTPD2PSrr : PDI<0x5A, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
+                     "cvtpd2ps\t{$src, $dst|$dst, $src}", []>;
+def CVTPD2PSrm : PDI<0x5A, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
+                     "cvtpd2ps\t{$src, $dst|$dst, $src}", []>;
+
+
 def Int_CVTPD2PSrr : PDI<0x5A, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
                          "cvtpd2ps\t{$src, $dst|$dst, $src}",
                         [(set VR128:$dst, (int_x86_sse2_cvtpd2ps VR128:$src))]>;
@@ -1856,31 +1896,34 @@ let Constraints = "$src1 = $dst" in {
 
 multiclass PDI_binop_rm_int<bits<8> opc, string OpcodeStr, Intrinsic IntId,
                             bit Commutable = 0> {
-  def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src1, VR128:$src2),
+  def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst), 
+                               (ins VR128:$src1, VR128:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (IntId VR128:$src1, VR128:$src2))]> {
     let isCommutable = Commutable;
   }
-  def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst), (ins VR128:$src1, i128mem:$src2),
+  def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst), 
+                               (ins VR128:$src1, i128mem:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (IntId VR128:$src1,
-                                        (bitconvert (memopv2i64 addr:$src2))))]>;
+                                        (bitconvert (memopv2i64 
+                                                     addr:$src2))))]>;
 }
 
 multiclass PDI_binop_rmi_int<bits<8> opc, bits<8> opc2, Format ImmForm,
                              string OpcodeStr,
                              Intrinsic IntId, Intrinsic IntId2> {
-  def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src1,
-						       VR128:$src2),
+  def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst), 
+                               (ins VR128:$src1, VR128:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (IntId VR128:$src1, VR128:$src2))]>;
-  def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst), (ins VR128:$src1,
-						       i128mem:$src2),
+  def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst),
+                               (ins VR128:$src1, i128mem:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (IntId VR128:$src1,
                                       (bitconvert (memopv2i64 addr:$src2))))]>;
-  def ri : PDIi8<opc2, ImmForm, (outs VR128:$dst), (ins VR128:$src1,
-							i32i8imm:$src2),
+  def ri : PDIi8<opc2, ImmForm, (outs VR128:$dst), 
+                                (ins VR128:$src1, i32i8imm:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (IntId2 VR128:$src1, (i32 imm:$src2)))]>;
 }
@@ -1888,14 +1931,14 @@ multiclass PDI_binop_rmi_int<bits<8> opc, bits<8> opc2, Format ImmForm,
 /// PDI_binop_rm - Simple SSE2 binary operator.
 multiclass PDI_binop_rm<bits<8> opc, string OpcodeStr, SDNode OpNode,
                         ValueType OpVT, bit Commutable = 0> {
-  def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src1,
-						       VR128:$src2),
+  def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst), 
+                               (ins VR128:$src1, VR128:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (OpVT (OpNode VR128:$src1, VR128:$src2)))]> {
     let isCommutable = Commutable;
   }
-  def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst), (ins VR128:$src1,
-						       i128mem:$src2),
+  def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst), 
+                               (ins VR128:$src1, i128mem:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (OpVT (OpNode VR128:$src1,
                                      (bitconvert (memopv2i64 addr:$src2)))))]>;
@@ -1909,16 +1952,16 @@ multiclass PDI_binop_rm<bits<8> opc, string OpcodeStr, SDNode OpNode,
 multiclass PDI_binop_rm_v2i64<bits<8> opc, string OpcodeStr, SDNode OpNode,
                               bit Commutable = 0> {
   def rr : PDI<opc, MRMSrcReg, (outs VR128:$dst),
-			       (ins VR128:$src1, VR128:$src2),
+               (ins VR128:$src1, VR128:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (v2i64 (OpNode VR128:$src1, VR128:$src2)))]> {
     let isCommutable = Commutable;
   }
   def rm : PDI<opc, MRMSrcMem, (outs VR128:$dst),
-			       (ins VR128:$src1, i128mem:$src2),
+               (ins VR128:$src1, i128mem:$src2),
                !strconcat(OpcodeStr, "\t{$src2, $dst|$dst, $src2}"),
                [(set VR128:$dst, (OpNode VR128:$src1,
-					 (memopv2i64 addr:$src2)))]>;
+               (memopv2i64 addr:$src2)))]>;
 }
 
 } // Constraints = "$src1 = $dst"
@@ -2455,6 +2498,13 @@ def : Pat<(v2i64 (X86vzmovl (bc_v2i64 (loadv4i32 addr:$src)))),
             (MOVZPQILo2PQIrm addr:$src)>;
 }
 
+// Instructions for the disassembler
+// xr = XMM register
+// xm = mem64
+
+def MOVQxrxr : I<0x7E, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
+                 "movq\t{$src, $dst|$dst, $src}", []>, XS;
+
 //===---------------------------------------------------------------------===//
 // SSE3 Instructions
 //===---------------------------------------------------------------------===//
@@ -3175,13 +3225,14 @@ multiclass sse41_fp_unop_rm<bits<8> opcps, bits<8> opcpd,
                     OpSize;
 
   // Vector intrinsic operation, mem
-  def PSm_Int : SS4AIi8<opcps, MRMSrcMem,
+  def PSm_Int : Ii8<opcps, MRMSrcMem,
                     (outs VR128:$dst), (ins f128mem:$src1, i32i8imm:$src2),
                     !strconcat(OpcodeStr,
                     "ps\t{$src2, $src1, $dst|$dst, $src1, $src2}"),
                     [(set VR128:$dst,
                           (V4F32Int (memopv4f32 addr:$src1),imm:$src2))]>,
-                    OpSize;
+                    TA, OpSize,
+                Requires<[HasSSE41]>;
 
   // Vector intrinsic operation, reg
   def PDr_Int : SS4AIi8<opcpd, MRMSrcReg,
@@ -3661,7 +3712,7 @@ let Constraints = "$src1 = $dst" in {
                     "\t{$src3, $src2, $dst|$dst, $src2, $src3}"),
                    [(set VR128:$dst,
                      (X86insrtps VR128:$src1, VR128:$src2, imm:$src3))]>,
-		OpSize;
+      OpSize;
     def rm : SS4AIi8<opc, MRMSrcMem, (outs VR128:$dst),
                    (ins VR128:$src1, f32mem:$src2, i32i8imm:$src3),
                    !strconcat(OpcodeStr,
@@ -3786,76 +3837,63 @@ let Constraints = "$src1 = $dst" in {
 // String/text processing instructions.
 let Defs = [EFLAGS], usesCustomInserter = 1 in {
 def PCMPISTRM128REG : SS42AI<0, Pseudo, (outs VR128:$dst),
-			(ins VR128:$src1, VR128:$src2, i8imm:$src3),
-		    "#PCMPISTRM128rr PSEUDO!",
-		    [(set VR128:$dst,
-			(int_x86_sse42_pcmpistrm128 VR128:$src1, VR128:$src2,
-						    imm:$src3))]>, OpSize;
+  (ins VR128:$src1, VR128:$src2, i8imm:$src3),
+  "#PCMPISTRM128rr PSEUDO!",
+  [(set VR128:$dst, (int_x86_sse42_pcmpistrm128 VR128:$src1, VR128:$src2,
+                                                imm:$src3))]>, OpSize;
 def PCMPISTRM128MEM : SS42AI<0, Pseudo, (outs VR128:$dst),
-			(ins VR128:$src1, i128mem:$src2, i8imm:$src3),
-		    "#PCMPISTRM128rm PSEUDO!",
-		    [(set VR128:$dst,
-			(int_x86_sse42_pcmpistrm128 VR128:$src1,
-						    (load addr:$src2),
-						    imm:$src3))]>, OpSize;
+  (ins VR128:$src1, i128mem:$src2, i8imm:$src3),
+  "#PCMPISTRM128rm PSEUDO!",
+  [(set VR128:$dst, (int_x86_sse42_pcmpistrm128 VR128:$src1, (load addr:$src2),
+                                                imm:$src3))]>, OpSize;
 }
 
 let Defs = [XMM0, EFLAGS] in {
 def PCMPISTRM128rr : SS42AI<0x62, MRMSrcReg, (outs),
-			    (ins VR128:$src1, VR128:$src2, i8imm:$src3),
-		     "pcmpistrm\t{$src3, $src2, $src1|$src1, $src2, $src3}",
-		     []>, OpSize;
+  (ins VR128:$src1, VR128:$src2, i8imm:$src3),
+   "pcmpistrm\t{$src3, $src2, $src1|$src1, $src2, $src3}", []>, OpSize;
 def PCMPISTRM128rm : SS42AI<0x62, MRMSrcMem, (outs),
-			    (ins VR128:$src1, i128mem:$src2, i8imm:$src3),
-		     "pcmpistrm\t{$src3, $src2, $src1|$src1, $src2, $src3}",
-		     []>, OpSize;
+  (ins VR128:$src1, i128mem:$src2, i8imm:$src3),
+  "pcmpistrm\t{$src3, $src2, $src1|$src1, $src2, $src3}", []>, OpSize;
 }
 
-let Defs = [EFLAGS], Uses = [EAX, EDX],
-	usesCustomInserter = 1 in {
+let Defs = [EFLAGS], Uses = [EAX, EDX], usesCustomInserter = 1 in {
 def PCMPESTRM128REG : SS42AI<0, Pseudo, (outs VR128:$dst),
-			(ins VR128:$src1, VR128:$src3, i8imm:$src5),
-		    "#PCMPESTRM128rr PSEUDO!",
-		    [(set VR128:$dst,
-			(int_x86_sse42_pcmpestrm128 VR128:$src1, EAX,
-						    VR128:$src3,
-						    EDX, imm:$src5))]>, OpSize;
+  (ins VR128:$src1, VR128:$src3, i8imm:$src5),
+  "#PCMPESTRM128rr PSEUDO!",
+  [(set VR128:$dst, 
+        (int_x86_sse42_pcmpestrm128 
+         VR128:$src1, EAX, VR128:$src3, EDX, imm:$src5))]>, OpSize;
+
 def PCMPESTRM128MEM : SS42AI<0, Pseudo, (outs VR128:$dst),
-			(ins VR128:$src1, i128mem:$src3, i8imm:$src5),
-		    "#PCMPESTRM128rm PSEUDO!",
-		    [(set VR128:$dst,
-			(int_x86_sse42_pcmpestrm128 VR128:$src1, EAX,
-						    (load addr:$src3),
-						    EDX, imm:$src5))]>, OpSize;
+  (ins VR128:$src1, i128mem:$src3, i8imm:$src5),
+  "#PCMPESTRM128rm PSEUDO!",
+  [(set VR128:$dst, (int_x86_sse42_pcmpestrm128 
+                     VR128:$src1, EAX, (load addr:$src3), EDX, imm:$src5))]>, 
+  OpSize;
 }
 
 let Defs = [XMM0, EFLAGS], Uses = [EAX, EDX] in {
 def PCMPESTRM128rr : SS42AI<0x60, MRMSrcReg, (outs),
-			    (ins VR128:$src1, VR128:$src3, i8imm:$src5),
-		     "pcmpestrm\t{$src5, $src3, $src1|$src1, $src3, $src5}",
-		     []>, OpSize;
+  (ins VR128:$src1, VR128:$src3, i8imm:$src5),
+  "pcmpestrm\t{$src5, $src3, $src1|$src1, $src3, $src5}", []>, OpSize;
 def PCMPESTRM128rm : SS42AI<0x60, MRMSrcMem, (outs),
-			    (ins VR128:$src1, i128mem:$src3, i8imm:$src5),
-		     "pcmpestrm\t{$src5, $src3, $src1|$src1, $src3, $src5}",
-		     []>, OpSize;
+  (ins VR128:$src1, i128mem:$src3, i8imm:$src5),
+  "pcmpestrm\t{$src5, $src3, $src1|$src1, $src3, $src5}", []>, OpSize;
 }
 
 let Defs = [ECX, EFLAGS] in {
   multiclass SS42AI_pcmpistri<Intrinsic IntId128> {
-    def rr : SS42AI<0x63, MRMSrcReg, (outs),
-		(ins VR128:$src1, VR128:$src2, i8imm:$src3),
-		"pcmpistri\t{$src3, $src2, $src1|$src1, $src2, $src3}",
-		[(set ECX,
-		   (IntId128 VR128:$src1, VR128:$src2, imm:$src3)),
-	         (implicit EFLAGS)]>,
-		OpSize;
+    def rr : SS42AI<0x63, MRMSrcReg, (outs), 
+      (ins VR128:$src1, VR128:$src2, i8imm:$src3),
+      "pcmpistri\t{$src3, $src2, $src1|$src1, $src2, $src3}",
+      [(set ECX, (IntId128 VR128:$src1, VR128:$src2, imm:$src3)),
+       (implicit EFLAGS)]>, OpSize;
     def rm : SS42AI<0x63, MRMSrcMem, (outs),
-		(ins VR128:$src1, i128mem:$src2, i8imm:$src3),
-		"pcmpistri\t{$src3, $src2, $src1|$src1, $src2, $src3}",
-		[(set ECX,
-		  (IntId128 VR128:$src1, (load addr:$src2), imm:$src3)),
-		 (implicit EFLAGS)]>,
-		OpSize;
+      (ins VR128:$src1, i128mem:$src2, i8imm:$src3),
+      "pcmpistri\t{$src3, $src2, $src1|$src1, $src2, $src3}",
+      [(set ECX, (IntId128 VR128:$src1, (load addr:$src2), imm:$src3)),
+       (implicit EFLAGS)]>, OpSize;
   }
 }
 
@@ -3870,20 +3908,16 @@ let Defs = [ECX, EFLAGS] in {
 let Uses = [EAX, EDX] in {
   multiclass SS42AI_pcmpestri<Intrinsic IntId128> {
     def rr : SS42AI<0x61, MRMSrcReg, (outs),
-		(ins VR128:$src1, VR128:$src3, i8imm:$src5),
-		"pcmpestri\t{$src5, $src3, $src1|$src1, $src3, $src5}",
-		[(set ECX,
-		   (IntId128 VR128:$src1, EAX, VR128:$src3, EDX, imm:$src5)),
-	         (implicit EFLAGS)]>,
-		OpSize;
+      (ins VR128:$src1, VR128:$src3, i8imm:$src5),
+      "pcmpestri\t{$src5, $src3, $src1|$src1, $src3, $src5}",
+      [(set ECX, (IntId128 VR128:$src1, EAX, VR128:$src3, EDX, imm:$src5)),
+       (implicit EFLAGS)]>, OpSize;
     def rm : SS42AI<0x61, MRMSrcMem, (outs),
-		(ins VR128:$src1, i128mem:$src3, i8imm:$src5),
-		"pcmpestri\t{$src5, $src3, $src1|$src1, $src3, $src5}",
-		[(set ECX,
-		  (IntId128 VR128:$src1, EAX, (load addr:$src3),
-		    EDX, imm:$src5)),
-		 (implicit EFLAGS)]>,
-		OpSize;
+      (ins VR128:$src1, i128mem:$src3, i8imm:$src5),
+       "pcmpestri\t{$src5, $src3, $src1|$src1, $src3, $src5}",
+       [(set ECX, 
+             (IntId128 VR128:$src1, EAX, (load addr:$src3), EDX, imm:$src5)),
+        (implicit EFLAGS)]>, OpSize;
   }
 }
 }
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
index ce06f0f..c69cc83 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
@@ -426,16 +426,19 @@ X86JITInfo::X86JITInfo(X86TargetMachine &tm) : TM(tm) {
 
 void *X86JITInfo::emitGlobalValueIndirectSym(const GlobalValue* GV, void *ptr,
                                              JITCodeEmitter &JCE) {
-  MachineCodeEmitter::BufferState BS;
 #if defined (X86_64_JIT)
-  JCE.startGVStub(BS, GV, 8, 8);
-  JCE.emitWordLE((unsigned)(intptr_t)ptr);
-  JCE.emitWordLE((unsigned)(((intptr_t)ptr) >> 32));
+  const unsigned Alignment = 8;
+  uint8_t Buffer[8];
+  uint8_t *Cur = Buffer;
+  MachineCodeEmitter::emitWordLEInto(Cur, (unsigned)(intptr_t)ptr);
+  MachineCodeEmitter::emitWordLEInto(Cur, (unsigned)(((intptr_t)ptr) >> 32));
 #else
-  JCE.startGVStub(BS, GV, 4, 4);
-  JCE.emitWordLE((intptr_t)ptr);
+  const unsigned Alignment = 4;
+  uint8_t Buffer[4];
+  uint8_t *Cur = Buffer;
+  MachineCodeEmitter::emitWordLEInto(Cur, (intptr_t)ptr);
 #endif
-  return JCE.finishGVStub(BS);
+  return JCE.allocIndirectGV(GV, Buffer, sizeof(Buffer), Alignment);
 }
 
 TargetJITInfo::StubLayout X86JITInfo::getStubLayout() {
@@ -451,7 +454,6 @@ TargetJITInfo::StubLayout X86JITInfo::getStubLayout() {
 
 void *X86JITInfo::emitFunctionStub(const Function* F, void *Target,
                                    JITCodeEmitter &JCE) {
-  MachineCodeEmitter::BufferState BS;
   // Note, we cast to intptr_t here to silence a -pedantic warning that 
   // complains about casting a function pointer to a normal pointer.
 #if defined (X86_32_JIT) && !defined (_MSC_VER)
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
index 7bf074d..6db0cc3 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
@@ -195,6 +195,36 @@ let Namespace = "X86" in {
   def ES : Register<"es">;
   def FS : Register<"fs">;
   def GS : Register<"gs">;
+  
+  // Debug registers
+  def DR0 : Register<"dr0">;
+  def DR1 : Register<"dr1">;
+  def DR2 : Register<"dr2">;
+  def DR3 : Register<"dr3">;
+  def DR4 : Register<"dr4">;
+  def DR5 : Register<"dr5">;
+  def DR6 : Register<"dr6">;
+  def DR7 : Register<"dr7">;
+  
+  // Condition registers
+  def ECR0 : Register<"ecr0">;
+  def ECR1 : Register<"ecr1">;
+  def ECR2 : Register<"ecr2">;
+  def ECR3 : Register<"ecr3">;
+  def ECR4 : Register<"ecr4">;
+  def ECR5 : Register<"ecr5">;
+  def ECR6 : Register<"ecr6">;
+  def ECR7 : Register<"ecr7">;
+
+  def RCR0 : Register<"rcr0">;
+  def RCR1 : Register<"rcr1">;
+  def RCR2 : Register<"rcr2">;
+  def RCR3 : Register<"rcr3">;
+  def RCR4 : Register<"rcr4">;
+  def RCR5 : Register<"rcr5">;
+  def RCR6 : Register<"rcr6">;
+  def RCR7 : Register<"rcr7">;
+  def RCR8 : Register<"rcr8">; 
 }
 
 
@@ -446,6 +476,22 @@ def GR64 : RegisterClass<"X86", [i64], 64,
 def SEGMENT_REG : RegisterClass<"X86", [i16], 16, [CS, DS, SS, ES, FS, GS]> {
 }
 
+// Debug registers.
+def DEBUG_REG : RegisterClass<"X86", [i32], 32, 
+                              [DR0, DR1, DR2, DR3, DR4, DR5, DR6, DR7]> {
+}
+
+// Control registers.
+def CONTROL_REG_32 : RegisterClass<"X86", [i32], 32,
+                                   [ECR0, ECR1, ECR2, ECR3, ECR4, ECR5, ECR6,
+                                    ECR7]> {
+}
+
+def CONTROL_REG_64 : RegisterClass<"X86", [i64], 64,
+                                   [RCR0, RCR1, RCR2, RCR3, RCR4, RCR5, RCR6,
+                                    RCR7, RCR8]> {
+}
+
 // GR8_ABCD_L, GR8_ABCD_H, GR16_ABCD, GR32_ABCD, GR64_ABCD - Subclasses of
 // GR8, GR16, GR32, and GR64 which contain just the "a" "b", "c", and "d"
 // registers. On x86-32, GR16_ABCD and GR32_ABCD are classes for registers
@@ -661,7 +707,8 @@ def GR64_NOREX_NOSP : RegisterClass<"X86", [i64], 64,
   }];
   let MethodBodies = [{
     GR64_NOREX_NOSPClass::iterator
-    GR64_NOREX_NOSPClass::allocation_order_end(const MachineFunction &MF) const {
+    GR64_NOREX_NOSPClass::allocation_order_end(const MachineFunction &MF) const
+  {
       const TargetMachine &TM = MF.getTarget();
       const TargetRegisterInfo *RI = TM.getRegisterInfo();
       // Does the function dedicate RBP to being a frame ptr?
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
index fb457dd..ef6dbaf 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
@@ -77,7 +77,7 @@ protected:
 
   /// IsBTMemSlow - True if BT (bit test) of memory instructions are slow.
   bool IsBTMemSlow;
-  
+
   /// DarwinVers - Nonzero if this is a darwin platform: the numeric
   /// version of the platform, e.g. 8 = 10.4 (Tiger), 9 = 10.5 (Leopard), etc.
   unsigned char DarwinVers; // Is any darwin-x86 platform.
@@ -169,8 +169,11 @@ public:
       p = "e-p:64:64-s:64-f64:64:64-i64:64:64-f80:128:128-n8:16:32:64";
     else if (isTargetDarwin())
       p = "e-p:32:32-f64:32:64-i64:32:64-f80:128:128-n8:16:32";
+    else if (isTargetCygMing() || isTargetWindows())
+      p = "e-p:32:32-f64:64:64-i64:64:64-f80:128:128-n8:16:32";
     else
       p = "e-p:32:32-f64:32:64-i64:32:64-f80:32:32-n8:16:32";
+
     return std::string(p);
   }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
index 0152121..962f0f7 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
@@ -91,10 +91,6 @@ X86TargetMachine::X86TargetMachine(const Target &T, const std::string &TT,
   assert(getRelocationModel() != Reloc::Default &&
          "Relocation mode not picked");
 
-  // If no code model is picked, default to small.
-  if (getCodeModel() == CodeModel::Default)
-    setCodeModel(CodeModel::Small);
-      
   // ELF and X86-64 don't have a distinct DynamicNoPIC model.  DynamicNoPIC
   // is defined as a model for code which may be used in static or dynamic
   // executables but not necessarily a shared library. On X86-32 we just
@@ -184,10 +180,6 @@ bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
     Subtarget.setPICStyle(PICStyles::None);
   }
   
-  // 64-bit JIT places everything in the same buffer except external functions.
-  if (Subtarget.is64Bit())
-      setCodeModel(CodeModel::Large);
-
   PM.add(createX86CodeEmitterPass(*this, MCE));
 
   return false;
@@ -204,9 +196,6 @@ bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
     Subtarget.setPICStyle(PICStyles::None);
   }
   
-  // 64-bit JIT places everything in the same buffer except external functions.
-  if (Subtarget.is64Bit())
-      setCodeModel(CodeModel::Large);
 
   PM.add(createX86JITCodeEmitterPass(*this, JCE));
 
@@ -240,3 +229,23 @@ bool X86TargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
   PM.add(createX86ObjectCodeEmitterPass(*this, OCE));
   return false;
 }
+
+void X86TargetMachine::setCodeModelForStatic() {
+
+    if (getCodeModel() != CodeModel::Default) return;
+
+    // For static codegen, if we're not already set, use Small codegen.
+    setCodeModel(CodeModel::Small);
+}
+
+
+void X86TargetMachine::setCodeModelForJIT() {
+
+  if (getCodeModel() != CodeModel::Default) return;
+
+  // 64-bit JIT places everything in the same buffer except external functions.
+  if (Subtarget.is64Bit())
+    setCodeModel(CodeModel::Large);
+  else
+    setCodeModel(CodeModel::Small);
+}
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h
index b538408..6183e91 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h
@@ -38,6 +38,11 @@ class X86TargetMachine : public LLVMTargetMachine {
   X86ELFWriterInfo  ELFWriterInfo;
   Reloc::Model      DefRelocModel; // Reloc model before it's overridden.
 
+private:
+  // We have specific defaults for X86.
+  virtual void setCodeModelForJIT();
+  virtual void setCodeModelForStatic();
+  
 public:
   X86TargetMachine(const Target &T, const std::string &TT, 
                    const std::string &FS, bool is64Bit);
diff --git a/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp b/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp
index 91534a7..eac4e17 100644
--- a/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp
@@ -56,7 +56,7 @@ namespace {
     // We don't modify the program, so we preserve all analyses
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesAll();
-    };
+    }
   };
 }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
index e4c4ae5..372616c 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
@@ -48,7 +48,7 @@ namespace {
     /// TLI - Keep a pointer of a TargetLowering to consult for determining
     /// transformation profitability.
     const TargetLowering *TLI;
-    ProfileInfo *PI;
+    ProfileInfo *PFI;
 
     /// BackEdges - Keep a set of all the loop back edges.
     ///
@@ -99,7 +99,7 @@ void CodeGenPrepare::findLoopBackEdges(const Function &F) {
 bool CodeGenPrepare::runOnFunction(Function &F) {
   bool EverMadeChange = false;
 
-  PI = getAnalysisIfAvailable<ProfileInfo>();
+  PFI = getAnalysisIfAvailable<ProfileInfo>();
   // First pass, eliminate blocks that contain only PHI nodes and an
   // unconditional branch.
   EverMadeChange |= EliminateMostlyEmptyBlocks(F);
@@ -288,9 +288,9 @@ void CodeGenPrepare::EliminateMostlyEmptyBlock(BasicBlock *BB) {
   // The PHIs are now updated, change everything that refers to BB to use
   // DestBB and remove BB.
   BB->replaceAllUsesWith(DestBB);
-  if (PI) {
-    PI->replaceAllUses(BB, DestBB);
-    PI->removeEdge(ProfileInfo::getEdge(BB, DestBB));
+  if (PFI) {
+    PFI->replaceAllUses(BB, DestBB);
+    PFI->removeEdge(ProfileInfo::getEdge(BB, DestBB));
   }
   BB->eraseFromParent();
 
@@ -368,9 +368,9 @@ static void SplitEdgeNicely(TerminatorInst *TI, unsigned SuccNum,
 
       // If we found a workable predecessor, change TI to branch to Succ.
       if (FoundMatch) {
-        ProfileInfo *PI = P->getAnalysisIfAvailable<ProfileInfo>();
-        if (PI)
-          PI->splitEdge(TIBB, Dest, Pred);
+        ProfileInfo *PFI = P->getAnalysisIfAvailable<ProfileInfo>();
+        if (PFI)
+          PFI->splitEdge(TIBB, Dest, Pred);
         Dest->removePredecessor(TIBB);
         TI->setSuccessor(SuccNum, Pred);
         return;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
index 222792b..249194d 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -48,7 +48,6 @@
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/Transforms/Utils/SSAUpdater.h"
-#include <cstdio>
 using namespace llvm;
 
 STATISTIC(NumGVNInstr,  "Number of instructions deleted");
@@ -733,13 +732,13 @@ static RegisterPass<GVN> X("gvn",
                            "Global Value Numbering");
 
 void GVN::dump(DenseMap<uint32_t, Value*>& d) {
-  printf("{\n");
+  errs() << "{\n";
   for (DenseMap<uint32_t, Value*>::iterator I = d.begin(),
        E = d.end(); I != E; ++I) {
-      printf("%d\n", I->first);
+      errs() << I->first << "\n";
       I->second->dump();
   }
-  printf("}\n");
+  errs() << "}\n";
 }
 
 static bool isSafeReplacement(PHINode* p, Instruction *inst) {
@@ -1278,6 +1277,32 @@ struct AvailableValueInBlock {
     assert(!isSimpleValue() && "Wrong accessor");
     return cast<MemIntrinsic>(Val.getPointer());
   }
+  
+  /// MaterializeAdjustedValue - Emit code into this block to adjust the value
+  /// defined here to the specified type.  This handles various coercion cases.
+  Value *MaterializeAdjustedValue(const Type *LoadTy,
+                                  const TargetData *TD) const {
+    Value *Res;
+    if (isSimpleValue()) {
+      Res = getSimpleValue();
+      if (Res->getType() != LoadTy) {
+        assert(TD && "Need target data to handle type mismatch case");
+        Res = GetStoreValueForLoad(Res, Offset, LoadTy, BB->getTerminator(),
+                                   *TD);
+        
+        DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\nOffset: " << Offset << "  "
+                     << *getSimpleValue() << '\n'
+                     << *Res << '\n' << "\n\n\n");
+      }
+    } else {
+      Res = GetMemInstValueForLoad(getMemIntrinValue(), Offset,
+                                   LoadTy, BB->getTerminator(), *TD);
+      DEBUG(errs() << "GVN COERCED NONLOCAL MEM INTRIN:\nOffset: " << Offset
+                   << "  " << *getMemIntrinValue() << '\n'
+                   << *Res << '\n' << "\n\n\n");
+    }
+    return Res;
+  }
 };
 
 /// ConstructSSAForLoadSet - Given a set of loads specified by ValuesPerBlock,
@@ -1286,7 +1311,15 @@ struct AvailableValueInBlock {
 static Value *ConstructSSAForLoadSet(LoadInst *LI, 
                          SmallVectorImpl<AvailableValueInBlock> &ValuesPerBlock,
                                      const TargetData *TD,
+                                     const DominatorTree &DT,
                                      AliasAnalysis *AA) {
+  // Check for the fully redundant, dominating load case.  In this case, we can
+  // just use the dominating value directly.
+  if (ValuesPerBlock.size() == 1 && 
+      DT.properlyDominates(ValuesPerBlock[0].BB, LI->getParent()))
+    return ValuesPerBlock[0].MaterializeAdjustedValue(LI->getType(), TD);
+
+  // Otherwise, we have to construct SSA form.
   SmallVector<PHINode*, 8> NewPHIs;
   SSAUpdater SSAUpdate(&NewPHIs);
   SSAUpdate.Initialize(LI);
@@ -1300,28 +1333,7 @@ static Value *ConstructSSAForLoadSet(LoadInst *LI,
     if (SSAUpdate.HasValueForBlock(BB))
       continue;
 
-    unsigned Offset = AV.Offset;
-
-    Value *AvailableVal;
-    if (AV.isSimpleValue()) {
-      AvailableVal = AV.getSimpleValue();
-      if (AvailableVal->getType() != LoadTy) {
-        assert(TD && "Need target data to handle type mismatch case");
-        AvailableVal = GetStoreValueForLoad(AvailableVal, Offset, LoadTy,
-                                            BB->getTerminator(), *TD);
-        
-        DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\nOffset: " << Offset << "  "
-              << *AV.getSimpleValue() << '\n'
-              << *AvailableVal << '\n' << "\n\n\n");
-      }
-    } else {
-      AvailableVal = GetMemInstValueForLoad(AV.getMemIntrinValue(), Offset,
-                                            LoadTy, BB->getTerminator(), *TD);
-      DEBUG(errs() << "GVN COERCED NONLOCAL MEM INTRIN:\nOffset: " << Offset
-            << "  " << *AV.getMemIntrinValue() << '\n'
-            << *AvailableVal << '\n' << "\n\n\n");
-    }
-    SSAUpdate.AddAvailableValue(BB, AvailableVal);
+    SSAUpdate.AddAvailableValue(BB, AV.MaterializeAdjustedValue(LoadTy, TD));
   }
   
   // Perform PHI construction.
@@ -1346,7 +1358,7 @@ static bool isLifetimeStart(Instruction *Inst) {
 bool GVN::processNonLocalLoad(LoadInst *LI,
                               SmallVectorImpl<Instruction*> &toErase) {
   // Find the non-local dependencies of the load.
-  SmallVector<NonLocalDepEntry, 64> Deps;
+  SmallVector<NonLocalDepResult, 64> Deps;
   MD->getNonLocalPointerDependency(LI->getOperand(0), true, LI->getParent(),
                                    Deps);
   //DEBUG(errs() << "INVESTIGATING NONLOCAL LOAD: "
@@ -1490,7 +1502,7 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
     DEBUG(errs() << "GVN REMOVING NONLOCAL LOAD: " << *LI << '\n');
     
     // Perform PHI construction.
-    Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD,
+    Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD, *DT,
                                       VN.getAliasAnalysis());
     LI->replaceAllUsesWith(V);
 
@@ -1679,7 +1691,7 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   ValuesPerBlock.push_back(AvailableValueInBlock::get(UnavailablePred,NewLoad));
 
   // Perform PHI construction.
-  Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD,
+  Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD, *DT,
                                     VN.getAliasAnalysis());
   LI->replaceAllUsesWith(V);
   if (isa<PHINode>(V))
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
index 2912421..3aa4fd3 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
@@ -258,7 +258,7 @@ void IndVarSimplify::RewriteLoopExitValues(Loop *L,
 
         // Check that InVal is defined in the loop.
         Instruction *Inst = cast<Instruction>(InVal);
-        if (!L->contains(Inst->getParent()))
+        if (!L->contains(Inst))
           continue;
 
         // Okay, this instruction has a user outside of the current loop
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
index b9c536f..72dc26e 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
@@ -75,6 +75,15 @@ STATISTIC(NumDeadInst , "Number of dead inst eliminated");
 STATISTIC(NumDeadStore, "Number of dead stores eliminated");
 STATISTIC(NumSunkInst , "Number of instructions sunk");
 
+/// SelectPatternFlavor - We can match a variety of different patterns for
+/// select operations.
+enum SelectPatternFlavor {
+  SPF_UNKNOWN = 0,
+  SPF_SMIN, SPF_UMIN,
+  SPF_SMAX, SPF_UMAX
+  //SPF_ABS - TODO.
+};
+
 namespace {
   /// InstCombineWorklist - This is the worklist management logic for
   /// InstCombine.
@@ -257,7 +266,8 @@ namespace {
                                                 ConstantInt *RHS);
     Instruction *FoldICmpDivCst(ICmpInst &ICI, BinaryOperator *DivI,
                                 ConstantInt *DivRHS);
-
+    Instruction *FoldICmpAddOpCst(ICmpInst &ICI, Value *X, ConstantInt *CI,
+                                  ICmpInst::Predicate Pred, Value *TheAdd);
     Instruction *FoldGEPICmp(GEPOperator *GEPLHS, Value *RHS,
                              ICmpInst::Predicate Cond, Instruction &I);
     Instruction *FoldShiftByConstant(Value *Op0, ConstantInt *Op1,
@@ -280,6 +290,9 @@ namespace {
     Instruction *FoldSelectOpOp(SelectInst &SI, Instruction *TI,
                                 Instruction *FI);
     Instruction *FoldSelectIntoOp(SelectInst &SI, Value*, Value*);
+    Instruction *FoldSPFofSPF(Instruction *Inner, SelectPatternFlavor SPF1,
+                              Value *A, Value *B, Instruction &Outer,
+                              SelectPatternFlavor SPF2, Value *C);
     Instruction *visitSelectInst(SelectInst &SI);
     Instruction *visitSelectInstWithICmp(SelectInst &SI, ICmpInst *ICI);
     Instruction *visitCallInst(CallInst &CI);
@@ -648,6 +661,57 @@ static inline Value *dyn_castFNegVal(Value *V) {
   return 0;
 }
 
+/// MatchSelectPattern - Pattern match integer [SU]MIN, [SU]MAX, and ABS idioms,
+/// returning the kind and providing the out parameter results if we
+/// successfully match.
+static SelectPatternFlavor
+MatchSelectPattern(Value *V, Value *&LHS, Value *&RHS) {
+  SelectInst *SI = dyn_cast<SelectInst>(V);
+  if (SI == 0) return SPF_UNKNOWN;
+  
+  ICmpInst *ICI = dyn_cast<ICmpInst>(SI->getCondition());
+  if (ICI == 0) return SPF_UNKNOWN;
+  
+  LHS = ICI->getOperand(0);
+  RHS = ICI->getOperand(1);
+  
+  // (icmp X, Y) ? X : Y 
+  if (SI->getTrueValue() == ICI->getOperand(0) &&
+      SI->getFalseValue() == ICI->getOperand(1)) {
+    switch (ICI->getPredicate()) {
+    default: return SPF_UNKNOWN; // Equality.
+    case ICmpInst::ICMP_UGT:
+    case ICmpInst::ICMP_UGE: return SPF_UMAX;
+    case ICmpInst::ICMP_SGT:
+    case ICmpInst::ICMP_SGE: return SPF_SMAX;
+    case ICmpInst::ICMP_ULT:
+    case ICmpInst::ICMP_ULE: return SPF_UMIN;
+    case ICmpInst::ICMP_SLT:
+    case ICmpInst::ICMP_SLE: return SPF_SMIN;
+    }
+  }
+  
+  // (icmp X, Y) ? Y : X 
+  if (SI->getTrueValue() == ICI->getOperand(1) &&
+      SI->getFalseValue() == ICI->getOperand(0)) {
+    switch (ICI->getPredicate()) {
+      default: return SPF_UNKNOWN; // Equality.
+      case ICmpInst::ICMP_UGT:
+      case ICmpInst::ICMP_UGE: return SPF_UMIN;
+      case ICmpInst::ICMP_SGT:
+      case ICmpInst::ICMP_SGE: return SPF_SMIN;
+      case ICmpInst::ICMP_ULT:
+      case ICmpInst::ICMP_ULE: return SPF_UMAX;
+      case ICmpInst::ICMP_SLT:
+      case ICmpInst::ICMP_SLE: return SPF_SMAX;
+    }
+  }
+  
+  // TODO: (X > 4) ? X : 5   -->  (X >= 5) ? X : 5  -->  MAX(X, 5)
+  
+  return SPF_UNKNOWN;
+}
+
 /// isFreeToInvert - Return true if the specified value is free to invert (apply
 /// ~ to).  This happens in cases where the ~ can be eliminated.
 static inline bool isFreeToInvert(Value *V) {
@@ -732,12 +796,12 @@ static bool MultiplyOverflows(ConstantInt *C1, ConstantInt *C2, bool sign) {
 
   APInt MulExt = LHSExt * RHSExt;
 
-  if (sign) {
-    APInt Min = APInt::getSignedMinValue(W).sext(W * 2);
-    APInt Max = APInt::getSignedMaxValue(W).sext(W * 2);
-    return MulExt.slt(Min) || MulExt.sgt(Max);
-  } else 
+  if (!sign)
     return MulExt.ugt(APInt::getLowBitsSet(W * 2, W));
+  
+  APInt Min = APInt::getSignedMinValue(W).sext(W * 2);
+  APInt Max = APInt::getSignedMaxValue(W).sext(W * 2);
+  return MulExt.slt(Min) || MulExt.sgt(Max);
 }
 
 
@@ -2736,9 +2800,13 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
   if (Op0 == Op1)                        // sub X, X  -> 0
     return ReplaceInstUsesWith(I, Constant::getNullValue(I.getType()));
 
-  // If this is a 'B = x-(-A)', change to B = x+A.
-  if (Value *V = dyn_castNegVal(Op1))
-    return BinaryOperator::CreateAdd(Op0, V);
+  // If this is a 'B = x-(-A)', change to B = x+A.  This preserves NSW/NUW.
+  if (Value *V = dyn_castNegVal(Op1)) {
+    BinaryOperator *Res = BinaryOperator::CreateAdd(Op0, V);
+    Res->setHasNoSignedWrap(I.hasNoSignedWrap());
+    Res->setHasNoUnsignedWrap(I.hasNoUnsignedWrap());
+    return Res;
+  }
 
   if (isa<UndefValue>(Op0))
     return ReplaceInstUsesWith(I, Op0);    // undef - X -> undef
@@ -6356,24 +6424,26 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
         // comparison into the select arms, which will cause one to be
         // constant folded and the select turned into a bitwise or.
         Value *Op1 = 0, *Op2 = 0;
-        if (LHSI->hasOneUse()) {
-          if (Constant *C = dyn_cast<Constant>(LHSI->getOperand(1))) {
-            // Fold the known value into the constant operand.
-            Op1 = ConstantExpr::getICmp(I.getPredicate(), C, RHSC);
-            // Insert a new ICmp of the other select operand.
-            Op2 = Builder->CreateICmp(I.getPredicate(), LHSI->getOperand(2),
-                                      RHSC, I.getName());
-          } else if (Constant *C = dyn_cast<Constant>(LHSI->getOperand(2))) {
-            // Fold the known value into the constant operand.
-            Op2 = ConstantExpr::getICmp(I.getPredicate(), C, RHSC);
-            // Insert a new ICmp of the other select operand.
+        if (Constant *C = dyn_cast<Constant>(LHSI->getOperand(1)))
+          Op1 = ConstantExpr::getICmp(I.getPredicate(), C, RHSC);
+        if (Constant *C = dyn_cast<Constant>(LHSI->getOperand(2)))
+          Op2 = ConstantExpr::getICmp(I.getPredicate(), C, RHSC);
+
+        // We only want to perform this transformation if it will not lead to
+        // additional code. This is true if either both sides of the select
+        // fold to a constant (in which case the icmp is replaced with a select
+        // which will usually simplify) or this is the only user of the
+        // select (in which case we are trading a select+icmp for a simpler
+        // select+icmp).
+        if ((Op1 && Op2) || (LHSI->hasOneUse() && (Op1 || Op2))) {
+          if (!Op1)
             Op1 = Builder->CreateICmp(I.getPredicate(), LHSI->getOperand(1),
                                       RHSC, I.getName());
-          }
-        }
-
-        if (Op1)
+          if (!Op2)
+            Op2 = Builder->CreateICmp(I.getPredicate(), LHSI->getOperand(2),
+                                      RHSC, I.getName());
           return SelectInst::Create(LHSI->getOperand(0), Op1, Op2);
+        }
         break;
       }
       case Instruction::Call:
@@ -6452,7 +6522,7 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
     //   if (X) ...
     // For generality, we handle any zero-extension of any operand comparison
     // with a constant or another cast from the same type.
-    if (isa<ConstantInt>(Op1) || isa<CastInst>(Op1))
+    if (isa<Constant>(Op1) || isa<CastInst>(Op1))
       if (Instruction *R = visitICmpInstWithCastAndCast(I))
         return R;
   }
@@ -6598,9 +6668,112 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
       }
     }
   }
+  
+  {
+    Value *X; ConstantInt *Cst;
+    // icmp X+Cst, X
+    if (match(Op0, m_Add(m_Value(X), m_ConstantInt(Cst))) && Op1 == X)
+      return FoldICmpAddOpCst(I, X, Cst, I.getPredicate(), Op0);
+
+    // icmp X, X+Cst
+    if (match(Op1, m_Add(m_Value(X), m_ConstantInt(Cst))) && Op0 == X)
+      return FoldICmpAddOpCst(I, X, Cst, I.getSwappedPredicate(), Op1);
+  }
   return Changed ? &I : 0;
 }
 
+/// FoldICmpAddOpCst - Fold "icmp pred (X+CI), X".
+Instruction *InstCombiner::FoldICmpAddOpCst(ICmpInst &ICI,
+                                            Value *X, ConstantInt *CI,
+                                            ICmpInst::Predicate Pred,
+                                            Value *TheAdd) {
+  // If we have X+0, exit early (simplifying logic below) and let it get folded
+  // elsewhere.   icmp X+0, X  -> icmp X, X
+  if (CI->isZero()) {
+    bool isTrue = ICmpInst::isTrueWhenEqual(Pred);
+    return ReplaceInstUsesWith(ICI, ConstantInt::get(ICI.getType(), isTrue));
+  }
+  
+  // (X+4) == X -> false.
+  if (Pred == ICmpInst::ICMP_EQ)
+    return ReplaceInstUsesWith(ICI, ConstantInt::getFalse(X->getContext()));
+
+  // (X+4) != X -> true.
+  if (Pred == ICmpInst::ICMP_NE)
+    return ReplaceInstUsesWith(ICI, ConstantInt::getTrue(X->getContext()));
+
+  // If this is an instruction (as opposed to constantexpr) get NUW/NSW info.
+  bool isNUW = false, isNSW = false;
+  if (BinaryOperator *Add = dyn_cast<BinaryOperator>(TheAdd)) {
+    isNUW = Add->hasNoUnsignedWrap();
+    isNSW = Add->hasNoSignedWrap();
+  }      
+  
+  // From this point on, we know that (X+C <= X) --> (X+C < X) because C != 0,
+  // so the values can never be equal.  Similarly for all other "or equals"
+  // operators.
+  
+  // (X+1) <u X        --> X >u (MAXUINT-1)        --> X != 255
+  // (X+2) <u X        --> X >u (MAXUINT-2)        --> X > 253
+  // (X+MAXUINT) <u X  --> X >u (MAXUINT-MAXUINT)  --> X != 0
+  if (Pred == ICmpInst::ICMP_ULT || Pred == ICmpInst::ICMP_ULE) {
+    // If this is an NUW add, then this is always false.
+    if (isNUW)
+      return ReplaceInstUsesWith(ICI, ConstantInt::getFalse(X->getContext())); 
+    
+    Value *R = ConstantExpr::getSub(ConstantInt::get(CI->getType(), -1ULL), CI);
+    return new ICmpInst(ICmpInst::ICMP_UGT, X, R);
+  }
+  
+  // (X+1) >u X        --> X <u (0-1)        --> X != 255
+  // (X+2) >u X        --> X <u (0-2)        --> X <u 254
+  // (X+MAXUINT) >u X  --> X <u (0-MAXUINT)  --> X <u 1  --> X == 0
+  if (Pred == ICmpInst::ICMP_UGT || Pred == ICmpInst::ICMP_UGE) {
+    // If this is an NUW add, then this is always true.
+    if (isNUW)
+      return ReplaceInstUsesWith(ICI, ConstantInt::getTrue(X->getContext())); 
+    return new ICmpInst(ICmpInst::ICMP_ULT, X, ConstantExpr::getNeg(CI));
+  }
+  
+  unsigned BitWidth = CI->getType()->getPrimitiveSizeInBits();
+  ConstantInt *SMax = ConstantInt::get(X->getContext(),
+                                       APInt::getSignedMaxValue(BitWidth));
+
+  // (X+ 1) <s X       --> X >s (MAXSINT-1)          --> X == 127
+  // (X+ 2) <s X       --> X >s (MAXSINT-2)          --> X >s 125
+  // (X+MAXSINT) <s X  --> X >s (MAXSINT-MAXSINT)    --> X >s 0
+  // (X+MINSINT) <s X  --> X >s (MAXSINT-MINSINT)    --> X >s -1
+  // (X+ -2) <s X      --> X >s (MAXSINT- -2)        --> X >s 126
+  // (X+ -1) <s X      --> X >s (MAXSINT- -1)        --> X != 127
+  if (Pred == ICmpInst::ICMP_SLT || Pred == ICmpInst::ICMP_SLE) {
+    // If this is an NSW add, then we have two cases: if the constant is
+    // positive, then this is always false, if negative, this is always true.
+    if (isNSW) {
+      bool isTrue = CI->getValue().isNegative();
+      return ReplaceInstUsesWith(ICI, ConstantInt::get(ICI.getType(), isTrue));
+    }
+    
+    return new ICmpInst(ICmpInst::ICMP_SGT, X, ConstantExpr::getSub(SMax, CI));
+  }
+  
+  // (X+ 1) >s X       --> X <s (MAXSINT-(1-1))       --> X != 127
+  // (X+ 2) >s X       --> X <s (MAXSINT-(2-1))       --> X <s 126
+  // (X+MAXSINT) >s X  --> X <s (MAXSINT-(MAXSINT-1)) --> X <s 1
+  // (X+MINSINT) >s X  --> X <s (MAXSINT-(MINSINT-1)) --> X <s -2
+  // (X+ -2) >s X      --> X <s (MAXSINT-(-2-1))      --> X <s -126
+  // (X+ -1) >s X      --> X <s (MAXSINT-(-1-1))      --> X == -128
+  
+  // If this is an NSW add, then we have two cases: if the constant is
+  // positive, then this is always true, if negative, this is always false.
+  if (isNSW) {
+    bool isTrue = !CI->getValue().isNegative();
+    return ReplaceInstUsesWith(ICI, ConstantInt::get(ICI.getType(), isTrue));
+  }
+  
+  assert(Pred == ICmpInst::ICMP_SGT || Pred == ICmpInst::ICMP_SGE);
+  Constant *C = ConstantInt::get(X->getContext(), CI->getValue()-1);
+  return new ICmpInst(ICmpInst::ICMP_SLT, X, ConstantExpr::getSub(SMax, C));
+}
 
 /// FoldICmpDivCst - Fold "icmp pred, ([su]div X, DivRHS), CmpRHS" where DivRHS
 /// and CmpRHS are both known to be integer constants.
@@ -7075,8 +7248,7 @@ Instruction *InstCombiner::visitICmpInstWithInstAndIntCst(ICmpInst &ICI,
     break;
 
   case Instruction::Add:
-    // Fold: icmp pred (add, X, C1), C2
-
+    // Fold: icmp pred (add X, C1), C2
     if (!ICI.isEquality()) {
       ConstantInt *LHSC = dyn_cast<ConstantInt>(LHSI->getOperand(1));
       if (!LHSC) break;
@@ -7299,19 +7471,17 @@ Instruction *InstCombiner::visitICmpInstWithCastAndCast(ICmpInst &ICI) {
 
   // If the re-extended constant didn't change...
   if (Res2 == CI) {
-    // Make sure that sign of the Cmp and the sign of the Cast are the same.
-    // For example, we might have:
-    //    %A = sext i16 %X to i32
-    //    %B = icmp ugt i32 %A, 1330
-    // It is incorrect to transform this into 
-    //    %B = icmp ugt i16 %X, 1330
-    // because %A may have negative value. 
-    //
-    // However, we allow this when the compare is EQ/NE, because they are
-    // signless.
-    if (isSignedExt == isSignedCmp || ICI.isEquality())
+    // Deal with equality cases early.
+    if (ICI.isEquality())
       return new ICmpInst(ICI.getPredicate(), LHSCIOp, Res1);
-    return 0;
+
+    // A signed comparison of sign extended values simplifies into a
+    // signed comparison.
+    if (isSignedExt && isSignedCmp)
+      return new ICmpInst(ICI.getPredicate(), LHSCIOp, Res1);
+
+    // The other three cases all fold into an unsigned comparison.
+    return new ICmpInst(ICI.getUnsignedPredicate(), LHSCIOp, Res1);
   }
 
   // The re-extended constant changed so the constant cannot be represented 
@@ -9372,9 +9542,6 @@ Instruction *InstCombiner::visitSelectInstWithICmp(SelectInst &SI,
       return ReplaceInstUsesWith(SI, TrueVal);
     /// NOTE: if we wanted to, this is where to detect integer MIN/MAX
   }
-
-  /// NOTE: if we wanted to, this is where to detect integer ABS
-
   return Changed ? &SI : 0;
 }
 
@@ -9416,6 +9583,35 @@ static bool CanSelectOperandBeMappingIntoPredBlock(const Value *V,
   return false;
 }
 
+/// FoldSPFofSPF - We have an SPF (e.g. a min or max) of an SPF of the form:
+///   SPF2(SPF1(A, B), C) 
+Instruction *InstCombiner::FoldSPFofSPF(Instruction *Inner,
+                                        SelectPatternFlavor SPF1,
+                                        Value *A, Value *B,
+                                        Instruction &Outer,
+                                        SelectPatternFlavor SPF2, Value *C) {
+  if (C == A || C == B) {
+    // MAX(MAX(A, B), B) -> MAX(A, B)
+    // MIN(MIN(a, b), a) -> MIN(a, b)
+    if (SPF1 == SPF2)
+      return ReplaceInstUsesWith(Outer, Inner);
+    
+    // MAX(MIN(a, b), a) -> a
+    // MIN(MAX(a, b), a) -> a
+    if ((SPF1 == SPF_SMIN && SPF2 == SPF_SMAX) ||
+        (SPF1 == SPF_SMAX && SPF2 == SPF_SMIN) ||
+        (SPF1 == SPF_UMIN && SPF2 == SPF_UMAX) ||
+        (SPF1 == SPF_UMAX && SPF2 == SPF_UMIN))
+      return ReplaceInstUsesWith(Outer, C);
+  }
+  
+  // TODO: MIN(MIN(A, 23), 97)
+  return 0;
+}
+
+
+
+
 Instruction *InstCombiner::visitSelectInst(SelectInst &SI) {
   Value *CondVal = SI.getCondition();
   Value *TrueVal = SI.getTrueValue();
@@ -9622,9 +9818,28 @@ Instruction *InstCombiner::visitSelectInst(SelectInst &SI) {
 
   // See if we can fold the select into one of our operands.
   if (SI.getType()->isInteger()) {
-    Instruction *FoldI = FoldSelectIntoOp(SI, TrueVal, FalseVal);
-    if (FoldI)
+    if (Instruction *FoldI = FoldSelectIntoOp(SI, TrueVal, FalseVal))
       return FoldI;
+    
+    // MAX(MAX(a, b), a) -> MAX(a, b)
+    // MIN(MIN(a, b), a) -> MIN(a, b)
+    // MAX(MIN(a, b), a) -> a
+    // MIN(MAX(a, b), a) -> a
+    Value *LHS, *RHS, *LHS2, *RHS2;
+    if (SelectPatternFlavor SPF = MatchSelectPattern(&SI, LHS, RHS)) {
+      if (SelectPatternFlavor SPF2 = MatchSelectPattern(LHS, LHS2, RHS2))
+        if (Instruction *R = FoldSPFofSPF(cast<Instruction>(LHS),SPF2,LHS2,RHS2, 
+                                          SI, SPF, RHS))
+          return R;
+      if (SelectPatternFlavor SPF2 = MatchSelectPattern(RHS, LHS2, RHS2))
+        if (Instruction *R = FoldSPFofSPF(cast<Instruction>(RHS),SPF2,LHS2,RHS2,
+                                          SI, SPF, LHS))
+          return R;
+    }
+
+    // TODO.
+    // ABS(-X) -> ABS(X)
+    // ABS(ABS(X)) -> ABS(X)
   }
 
   // See if we can fold the select into a phi node if the condition is a select.
@@ -9896,9 +10111,11 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
                         Intrinsic::getDeclaration(M, MemCpyID, Tys, 1));
           Changed = true;
         }
+    }
 
+    if (MemTransferInst *MTI = dyn_cast<MemTransferInst>(MI)) {
       // memmove(x,x,size) -> noop.
-      if (MMI->getSource() == MMI->getDest())
+      if (MTI->getSource() == MTI->getDest())
         return EraseInstFromFunction(CI);
     }
 
@@ -11232,6 +11449,23 @@ Instruction *InstCombiner::SliceUpIllegalIntegerPHI(PHINode &FirstPhi) {
   for (unsigned PHIId = 0; PHIId != PHIsToSlice.size(); ++PHIId) {
     PHINode *PN = PHIsToSlice[PHIId];
     
+    // Scan the input list of the PHI.  If any input is an invoke, and if the
+    // input is defined in the predecessor, then we won't be able to split the
+    // critical edge required to insert a truncate.  Because of this, we have to
+    // bail out.
+    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+      InvokeInst *II = dyn_cast<InvokeInst>(PN->getIncomingValue(i));
+      if (II == 0) continue;
+      if (II->getParent() != PN->getIncomingBlock(i))
+        continue;
+     
+      // If we have an invoke, and it's directly in the predecessor, then we have
+      // a critical edge where we need to put the truncate.  Since we can't
+      // split the edge in instcombine, we have to bail out.
+      return 0;
+    }
+      
+    
     for (Value::use_iterator UI = PN->use_begin(), E = PN->use_end();
          UI != E; ++UI) {
       Instruction *User = cast<Instruction>(*UI);
@@ -11314,7 +11548,9 @@ Instruction *InstCombiner::SliceUpIllegalIntegerPHI(PHINode &FirstPhi) {
           PredVal = EltPHI;
           EltPHI->addIncoming(PredVal, Pred);
           continue;
-        } else if (PHINode *InPHI = dyn_cast<PHINode>(PN)) {
+        }
+        
+        if (PHINode *InPHI = dyn_cast<PHINode>(PN)) {
           // If the incoming value was a PHI, and if it was one of the PHIs we
           // already rewrote it, just use the lowered value.
           if (Value *Res = ExtractedVals[LoweredPHIRecord(InPHI, Offset, Ty)]) {
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
index d58b9c9..7e6cf79 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
@@ -29,6 +29,7 @@
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
+#include "llvm/Support/ValueHandle.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
index 42a8fdc..99f3ae0 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
@@ -433,7 +433,7 @@ bool LICM::isNotUsedInLoop(Instruction &I) {
         if (PN->getIncomingValue(i) == &I)
           if (CurLoop->contains(PN->getIncomingBlock(i)))
             return false;
-    } else if (CurLoop->contains(User->getParent())) {
+    } else if (CurLoop->contains(User)) {
       return false;
     }
   }
@@ -831,7 +831,7 @@ void LICM::FindPromotableValuesInLoop(
          UI != UE; ++UI) {
       // Ignore instructions not in this loop.
       Instruction *Use = dyn_cast<Instruction>(*UI);
-      if (!Use || !CurLoop->contains(Use->getParent()))
+      if (!Use || !CurLoop->contains(Use))
         continue;
 
       if (!isa<LoadInst>(Use) && !isa<StoreInst>(Use)) {
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp
index 8b6a233..1d9dd68 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp
@@ -288,7 +288,7 @@ bool LoopIndexSplit::runOnLoop(Loop *IncomingLoop, LPPassManager &LPM_Ref) {
 // isUsedOutsideLoop - Returns true iff V is used outside the loop L.
 static bool isUsedOutsideLoop(Value *V, Loop *L) {
   for(Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E; ++UI)
-    if (!L->contains(cast<Instruction>(*UI)->getParent()))
+    if (!L->contains(cast<Instruction>(*UI)))
       return true;
   return false;
 }
@@ -842,7 +842,7 @@ void LoopIndexSplit::updatePHINodes(BasicBlock *ExitBB, BasicBlock *Latch,
       for (Value::use_iterator UI = PHV->use_begin(), E = PHV->use_end(); 
            UI != E; ++UI) 
         if (PHINode *U = dyn_cast<PHINode>(*UI)) 
-          if (LP->contains(U->getParent())) {
+          if (LP->contains(U)) {
             NewV = U;
             break;
           }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
index 85cc712..85f7368 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
@@ -144,7 +144,7 @@ namespace {
     /// StrengthReduceIVUsersOfStride - Strength reduce all of the users of a
     /// single stride of IV.  All of the users may have different starting
     /// values, and this may not be the only stride.
-    void StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
+    void StrengthReduceIVUsersOfStride(const SCEV *Stride,
                                       IVUsersOfOneStride &Uses,
                                       Loop *L);
     void StrengthReduceIVUsers(Loop *L);
@@ -157,14 +157,14 @@ namespace {
     bool FindIVUserForCond(ICmpInst *Cond, IVStrideUse *&CondUse,
                            const SCEV* &CondStride);
     bool RequiresTypeConversion(const Type *Ty, const Type *NewTy);
-    const SCEV *CheckForIVReuse(bool, bool, bool, const SCEV *const&,
+    const SCEV *CheckForIVReuse(bool, bool, bool, const SCEV *,
                              IVExpr&, const Type*,
                              const std::vector<BasedUser>& UsersToProcess);
     bool ValidScale(bool, int64_t,
                     const std::vector<BasedUser>& UsersToProcess);
     bool ValidOffset(bool, int64_t, int64_t,
                      const std::vector<BasedUser>& UsersToProcess);
-    const SCEV *CollectIVUsers(const SCEV *const &Stride,
+    const SCEV *CollectIVUsers(const SCEV *Stride,
                               IVUsersOfOneStride &Uses,
                               Loop *L,
                               bool &AllUsesAreAddresses,
@@ -212,8 +212,6 @@ Pass *llvm::createLoopStrengthReducePass(const TargetLowering *TLI) {
 /// specified set are trivially dead, delete them and see if this makes any of
 /// their operands subsequently dead.
 void LoopStrengthReduce::DeleteTriviallyDeadInstructions() {
-  if (DeadInsts.empty()) return;
-
   while (!DeadInsts.empty()) {
     Instruction *I = dyn_cast_or_null<Instruction>(DeadInsts.pop_back_val());
 
@@ -232,44 +230,6 @@ void LoopStrengthReduce::DeleteTriviallyDeadInstructions() {
   }
 }
 
-/// containsAddRecFromDifferentLoop - Determine whether expression S involves a
-/// subexpression that is an AddRec from a loop other than L.  An outer loop
-/// of L is OK, but not an inner loop nor a disjoint loop.
-static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
-  // This is very common, put it first.
-  if (isa<SCEVConstant>(S))
-    return false;
-  if (const SCEVCommutativeExpr *AE = dyn_cast<SCEVCommutativeExpr>(S)) {
-    for (unsigned int i=0; i< AE->getNumOperands(); i++)
-      if (containsAddRecFromDifferentLoop(AE->getOperand(i), L))
-        return true;
-    return false;
-  }
-  if (const SCEVAddRecExpr *AE = dyn_cast<SCEVAddRecExpr>(S)) {
-    if (const Loop *newLoop = AE->getLoop()) {
-      if (newLoop == L)
-        return false;
-      // if newLoop is an outer loop of L, this is OK.
-      if (newLoop->contains(L->getHeader()))
-        return false;
-    }
-    return true;
-  }
-  if (const SCEVUDivExpr *DE = dyn_cast<SCEVUDivExpr>(S))
-    return containsAddRecFromDifferentLoop(DE->getLHS(), L) ||
-           containsAddRecFromDifferentLoop(DE->getRHS(), L);
-#if 0
-  // SCEVSDivExpr has been backed out temporarily, but will be back; we'll
-  // need this when it is.
-  if (const SCEVSDivExpr *DE = dyn_cast<SCEVSDivExpr>(S))
-    return containsAddRecFromDifferentLoop(DE->getLHS(), L) ||
-           containsAddRecFromDifferentLoop(DE->getRHS(), L);
-#endif
-  if (const SCEVCastExpr *CE = dyn_cast<SCEVCastExpr>(S))
-    return containsAddRecFromDifferentLoop(CE->getOperand(), L);
-  return false;
-}
-
 /// isAddressUse - Returns true if the specified instruction is using the
 /// specified value as an address.
 static bool isAddressUse(Instruction *Inst, Value *OperandVal) {
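The removed `containsAddRecFromDifferentLoop` helper walked a SCEV expression tree recursively, asking whether any subexpression was an AddRec from a loop other than `L`. The general recursive-containment pattern it used can be sketched over a toy expression tree (hypothetical node types, and simplified: the real helper also accepted outer loops of `L` as OK):

```cpp
#include <vector>

struct Expr {
  int LoopId = -1;            // >= 0 marks an AddRec-like node tied to a loop
  std::vector<Expr *> Ops;    // operands, as in a SCEV commutative expression
};

// True if E or any subexpression is tied to a loop other than L.
static bool containsForeignRec(const Expr *E, int L) {
  if (E->LoopId >= 0 && E->LoopId != L)
    return true;
  for (const Expr *Op : E->Ops)
    if (containsForeignRec(Op, L))
      return true;
  return false;
}
```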
@@ -362,13 +322,13 @@ namespace {
     // Once we rewrite the code to insert the new IVs we want, update the
     // operands of Inst to use the new expression 'NewBase', with 'Imm' added
     // to it.
-    void RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
+    void RewriteInstructionToUseNewBase(const SCEV *NewBase,
                                         Instruction *InsertPt,
                                        SCEVExpander &Rewriter, Loop *L, Pass *P,
                                         SmallVectorImpl<WeakVH> &DeadInsts,
                                         ScalarEvolution *SE);
 
-    Value *InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
+    Value *InsertCodeForBaseAtPosition(const SCEV *NewBase,
                                        const Type *Ty,
                                        SCEVExpander &Rewriter,
                                        Instruction *IP,
@@ -378,12 +338,12 @@ namespace {
 }
 
 void BasedUser::dump() const {
-  errs() << " Base=" << *Base;
-  errs() << " Imm=" << *Imm;
-  errs() << "   Inst: " << *Inst;
+  dbgs() << " Base=" << *Base;
+  dbgs() << " Imm=" << *Imm;
+  dbgs() << "   Inst: " << *Inst;
 }
 
-Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
+Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *NewBase,
                                               const Type *Ty,
                                               SCEVExpander &Rewriter,
                                               Instruction *IP,
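Throughout this patch, debug output moves from `errs()` to `dbgs()`. The point of `dbgs()` is indirection: all `DEBUG(...)` printing funnels through one accessor whose destination LLVM can manage (in release builds it degenerates to `errs()`). A minimal model of that idea, not LLVM's actual implementation:

```cpp
#include <sstream>
#include <string>

// Single owned buffer standing in for LLVM's managed debug stream.
static std::ostringstream &debugBuffer() {
  static std::ostringstream Buffer;
  return Buffer;
}

// All debug printing goes through this accessor, so the destination can be
// swapped (stderr, a file, a circular buffer) without touching call sites.
static std::ostream &dbgs() { return debugBuffer(); }
```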
@@ -407,7 +367,7 @@ Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
 // value of NewBase in the case that it's a different instruction from
 // the PHI that NewBase is computed from, or null otherwise.
 //
-void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
+void BasedUser::RewriteInstructionToUseNewBase(const SCEV *NewBase,
                                                Instruction *NewBasePt,
                                       SCEVExpander &Rewriter, Loop *L, Pass *P,
                                       SmallVectorImpl<WeakVH> &DeadInsts,
@@ -428,7 +388,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
     // If this is a use outside the loop (which means after, since it is based
     // on a loop indvar) we use the post-incremented value, so that we don't
     // artificially make the preinc value live out the bottom of the loop.
-    if (!isUseOfPostIncrementedValue && L->contains(Inst->getParent())) {
+    if (!isUseOfPostIncrementedValue && L->contains(Inst)) {
       if (NewBasePt && isa<PHINode>(OperandValToReplace)) {
         InsertPt = NewBasePt;
         ++InsertPt;
@@ -444,9 +404,9 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
     // Replace the use of the operand Value with the new Phi we just created.
     Inst->replaceUsesOfWith(OperandValToReplace, NewVal);
 
-    DEBUG(errs() << "      Replacing with ");
-    DEBUG(WriteAsOperand(errs(), NewVal, /*PrintType=*/false));
-    DEBUG(errs() << ", which has value " << *NewBase << " plus IMM "
+    DEBUG(dbgs() << "      Replacing with ");
+    DEBUG(WriteAsOperand(dbgs(), NewVal, /*PrintType=*/false));
+    DEBUG(dbgs() << ", which has value " << *NewBase << " plus IMM "
                  << *Imm << "\n");
     return;
   }
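Many hunks here shorten `L->contains(Inst->getParent())` to `L->contains(Inst)`, relying on a `Loop::contains(const Instruction *)` convenience overload that forwards to the basic-block version. A hypothetical miniature of that overload pattern (not LLVM's real classes):

```cpp
#include <set>

struct Block {};

struct Instr {
  Block *Parent;
  Block *getParent() const { return Parent; }
};

struct MiniLoop {
  std::set<const Block *> Blocks;
  bool contains(const Block *BB) const { return Blocks.count(BB) != 0; }
  // Convenience overload: a loop contains an instruction iff it contains
  // the instruction's parent block -- exactly what the call sites now use.
  bool contains(const Instr *I) const { return contains(I->getParent()); }
};
```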
@@ -469,7 +429,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
       // that case(?).
       Instruction *OldLoc = dyn_cast<Instruction>(OperandValToReplace);
       BasicBlock *PHIPred = PN->getIncomingBlock(i);
-      if (L->contains(OldLoc->getParent())) {
+      if (L->contains(OldLoc)) {
         // If this is a critical edge, split the edge so that we do not insert
         // the code on all predecessor/successor paths.  We do this unless this
         // is the canonical backedge for this loop, as this can make some
@@ -486,7 +446,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
           // is outside of the loop, and PredTI is in the loop, we want to
           // move the block to be immediately before the PHI block, not
           // immediately after PredTI.
-          if (L->contains(PHIPred) && !L->contains(PN->getParent()))
+          if (L->contains(PHIPred) && !L->contains(PN))
             NewBB->moveBefore(PN->getParent());
 
           // Splitting the edge can reduce the number of PHI entries we have.
@@ -498,15 +458,15 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
       Value *&Code = InsertedCode[PHIPred];
       if (!Code) {
         // Insert the code into the end of the predecessor block.
-        Instruction *InsertPt = (L->contains(OldLoc->getParent())) ?
+        Instruction *InsertPt = (L->contains(OldLoc)) ?
                                 PHIPred->getTerminator() :
                                 OldLoc->getParent()->getTerminator();
         Code = InsertCodeForBaseAtPosition(NewBase, PN->getType(),
                                            Rewriter, InsertPt, SE);
 
-        DEBUG(errs() << "      Changing PHI use to ");
-        DEBUG(WriteAsOperand(errs(), Code, /*PrintType=*/false));
-        DEBUG(errs() << ", which has value " << *NewBase << " plus IMM "
+        DEBUG(dbgs() << "      Changing PHI use to ");
+        DEBUG(WriteAsOperand(dbgs(), Code, /*PrintType=*/false));
+        DEBUG(dbgs() << ", which has value " << *NewBase << " plus IMM "
                      << *Imm << "\n");
       }
 
@@ -523,7 +483,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
 
 /// fitsInAddressMode - Return true if V can be subsumed within an addressing
 /// mode, and does not need to be put in a register first.
-static bool fitsInAddressMode(const SCEV *const &V, const Type *AccessTy,
+static bool fitsInAddressMode(const SCEV *V, const Type *AccessTy,
                              const TargetLowering *TLI, bool HasBaseReg) {
   if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(V)) {
     int64_t VC = SC->getValue()->getSExtValue();
@@ -737,7 +697,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
     // it is clearly shared across all the IV's.  If the use is outside the loop
     // (which means after it) we don't want to factor anything *into* the loop,
     // so just use 0 as the base.
-    if (L->contains(Uses[0].Inst->getParent()))
+    if (L->contains(Uses[0].Inst))
       std::swap(Result, Uses[0].Base);
     return Result;
   }
@@ -762,7 +722,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
     // after the loop to affect base computation of values *inside* the loop,
     // because we can always add their offsets to the result IV after the loop
     // is done, ensuring we get good code inside the loop.
-    if (!L->contains(Uses[i].Inst->getParent()))
+    if (!L->contains(Uses[i].Inst))
       continue;
     NumUsesInsideLoop++;
 
@@ -818,7 +778,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
     // and a Result in the same instruction (for example because it would
     // require too many registers).  Check this.
     for (unsigned i=0; i<NumUses; ++i) {
-      if (!L->contains(Uses[i].Inst->getParent()))
+      if (!L->contains(Uses[i].Inst))
         continue;
       // We know this is an addressing mode use; if there are any uses that
       // are not, FreeResult would be Zero.
@@ -854,7 +814,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
     // the final IV value coming into those uses does.  Instead of trying to
     // remove the pieces of the common base, which might not be there,
     // subtract off the base to compensate for this.
-    if (!L->contains(Uses[i].Inst->getParent())) {
+    if (!L->contains(Uses[i].Inst)) {
       Uses[i].Base = SE->getMinusSCEV(Uses[i].Base, Result);
       continue;
     }
@@ -975,7 +935,7 @@ bool LoopStrengthReduce::RequiresTypeConversion(const Type *Ty1,
 const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
                                 bool AllUsesAreAddresses,
                                 bool AllUsesAreOutsideLoop,
-                                const SCEV *const &Stride,
+                                const SCEV *Stride,
                                 IVExpr &IV, const Type *Ty,
                                 const std::vector<BasedUser>& UsersToProcess) {
   if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Stride)) {
@@ -1088,7 +1048,7 @@ static bool PartitionByIsUseOfPostIncrementedValue(const BasedUser &Val) {
 
 /// isNonConstantNegative - Return true if the specified scev is negated, but
 /// not a constant.
-static bool isNonConstantNegative(const SCEV *const &Expr) {
+static bool isNonConstantNegative(const SCEV *Expr) {
   const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(Expr);
   if (!Mul) return false;
 
@@ -1105,7 +1065,7 @@ static bool isNonConstantNegative(const SCEV *const &Expr) {
 /// base of the strided accesses, as well as the old information from Uses. We
 /// progressively move information from the Base field to the Imm field, until
 /// we eventually have the full access expression to rewrite the use.
-const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
+const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *Stride,
                                               IVUsersOfOneStride &Uses,
                                               Loop *L,
                                               bool &AllUsesAreAddresses,
@@ -1149,7 +1109,7 @@ const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
     // If the user is not in the current loop, this means it is using the exit
     // value of the IV.  Do not put anything in the base, make sure it's all in
     // the immediate field to allow as much factoring as possible.
-    if (!L->contains(UsersToProcess[i].Inst->getParent())) {
+    if (!L->contains(UsersToProcess[i].Inst)) {
       UsersToProcess[i].Imm = SE->getAddExpr(UsersToProcess[i].Imm,
                                              UsersToProcess[i].Base);
       UsersToProcess[i].Base =
@@ -1361,7 +1321,7 @@ LoopStrengthReduce::PrepareToStrengthReduceFully(
                                         const SCEV *CommonExprs,
                                         const Loop *L,
                                         SCEVExpander &PreheaderRewriter) {
-  DEBUG(errs() << "  Fully reducing all users\n");
+  DEBUG(dbgs() << "  Fully reducing all users\n");
 
   // Rewrite the UsersToProcess records, creating a separate PHI for each
   // unique Base value.
@@ -1393,7 +1353,7 @@ static Instruction *FindIVIncInsertPt(std::vector<BasedUser> &UsersToProcess,
                                       const Loop *L) {
   if (UsersToProcess.size() == 1 &&
       UsersToProcess[0].isUseOfPostIncrementedValue &&
-      L->contains(UsersToProcess[0].Inst->getParent()))
+      L->contains(UsersToProcess[0].Inst))
     return UsersToProcess[0].Inst;
   return L->getLoopLatch()->getTerminator();
 }
@@ -1410,7 +1370,7 @@ LoopStrengthReduce::PrepareToStrengthReduceWithNewPhi(
                                          Instruction *IVIncInsertPt,
                                          const Loop *L,
                                          SCEVExpander &PreheaderRewriter) {
-  DEBUG(errs() << "  Inserting new PHI:\n");
+  DEBUG(dbgs() << "  Inserting new PHI:\n");
 
   PHINode *Phi = InsertAffinePhi(SE->getUnknown(CommonBaseV),
                                  Stride, IVIncInsertPt, L,
@@ -1423,9 +1383,9 @@ LoopStrengthReduce::PrepareToStrengthReduceWithNewPhi(
   for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i)
     UsersToProcess[i].Phi = Phi;
 
-  DEBUG(errs() << "    IV=");
-  DEBUG(WriteAsOperand(errs(), Phi, /*PrintType=*/false));
-  DEBUG(errs() << "\n");
+  DEBUG(dbgs() << "    IV=");
+  DEBUG(WriteAsOperand(dbgs(), Phi, /*PrintType=*/false));
+  DEBUG(dbgs() << "\n");
 }
 
 /// PrepareToStrengthReduceFromSmallerStride - Prepare for the given users to
@@ -1438,7 +1398,7 @@ LoopStrengthReduce::PrepareToStrengthReduceFromSmallerStride(
                                          Value *CommonBaseV,
                                          const IVExpr &ReuseIV,
                                          Instruction *PreInsertPt) {
-  DEBUG(errs() << "  Rewriting in terms of existing IV of STRIDE "
+  DEBUG(dbgs() << "  Rewriting in terms of existing IV of STRIDE "
                << *ReuseIV.Stride << " and BASE " << *ReuseIV.Base << "\n");
 
   // All the users will share the reused IV.
@@ -1482,7 +1442,7 @@ static bool IsImmFoldedIntoAddrMode(GlobalValue *GV, int64_t Offset,
 /// stride of IV.  All of the users may have different starting values, and this
 /// may not be the only stride.
 void
-LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
+LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *Stride,
                                                   IVUsersOfOneStride &Uses,
                                                   Loop *L) {
   // If all the users are moved to another stride, then there is nothing to do.
@@ -1547,7 +1507,7 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
                                          UsersToProcess, TLI);
 
       if (DoSink) {
-        DEBUG(errs() << "  Sinking " << *Imm << " back down into uses\n");
+        DEBUG(dbgs() << "  Sinking " << *Imm << " back down into uses\n");
         for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i)
           UsersToProcess[i].Imm = SE->getAddExpr(UsersToProcess[i].Imm, Imm);
         CommonExprs = NewCommon;
@@ -1559,7 +1519,7 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
 
   // Now that we know what we need to do, insert the PHI node itself.
   //
-  DEBUG(errs() << "LSR: Examining IVs of TYPE " << *ReplacedTy << " of STRIDE "
+  DEBUG(dbgs() << "LSR: Examining IVs of TYPE " << *ReplacedTy << " of STRIDE "
                << *Stride << ":\n"
                << "  Common base: " << *CommonExprs << "\n");
 
@@ -1623,10 +1583,10 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
     if (!Base->isZero()) {
       BaseV = PreheaderRewriter.expandCodeFor(Base, 0, PreInsertPt);
 
-      DEBUG(errs() << "  INSERTING code for BASE = " << *Base << ":");
+      DEBUG(dbgs() << "  INSERTING code for BASE = " << *Base << ":");
       if (BaseV->hasName())
-        DEBUG(errs() << " Result value name = %" << BaseV->getName());
-      DEBUG(errs() << "\n");
+        DEBUG(dbgs() << " Result value name = %" << BaseV->getName());
+      DEBUG(dbgs() << "\n");
 
       // If BaseV is a non-zero constant, make sure that it gets inserted into
       // the preheader, instead of being forward substituted into the uses.  We
@@ -1647,15 +1607,15 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
       // FIXME: Use emitted users to emit other users.
       BasedUser &User = UsersToProcess.back();
 
-      DEBUG(errs() << "    Examining ");
+      DEBUG(dbgs() << "    Examining ");
       if (User.isUseOfPostIncrementedValue)
-        DEBUG(errs() << "postinc");
+        DEBUG(dbgs() << "postinc");
       else
-        DEBUG(errs() << "preinc");
-      DEBUG(errs() << " use ");
-      DEBUG(WriteAsOperand(errs(), UsersToProcess.back().OperandValToReplace,
+        DEBUG(dbgs() << "preinc");
+      DEBUG(dbgs() << " use ");
+      DEBUG(WriteAsOperand(dbgs(), UsersToProcess.back().OperandValToReplace,
                            /*PrintType=*/false));
-      DEBUG(errs() << " in Inst: " << *User.Inst);
+      DEBUG(dbgs() << " in Inst: " << *User.Inst);
 
       // If this instruction wants to use the post-incremented value, move it
       // after the post-inc and use its value instead of the PHI.
@@ -1666,7 +1626,7 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
         // loop to ensure it is dominated by the increment. In case it's the
         // only use of the iv, the increment instruction is already before the
         // use.
-        if (L->contains(User.Inst->getParent()) && User.Inst != IVIncInsertPt)
+        if (L->contains(User.Inst) && User.Inst != IVIncInsertPt)
           User.Inst->moveBefore(IVIncInsertPt);
       }
 
@@ -1728,7 +1688,7 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
         // common base, and are adding it back here.  Use the same expression
         // as before, rather than CommonBaseV, so DAGCombiner will zap it.
         if (!CommonExprs->isZero()) {
-          if (L->contains(User.Inst->getParent()))
+          if (L->contains(User.Inst))
             RewriteExpr = SE->getAddExpr(RewriteExpr,
                                        SE->getUnknown(CommonBaseV));
           else
@@ -1815,7 +1775,7 @@ namespace {
     const ScalarEvolution *SE;
     explicit StrideCompare(const ScalarEvolution *se) : SE(se) {}
 
-    bool operator()(const SCEV *const &LHS, const SCEV *const &RHS) {
+    bool operator()(const SCEV *LHS, const SCEV *RHS) {
       const SCEVConstant *LHSC = dyn_cast<SCEVConstant>(LHS);
       const SCEVConstant *RHSC = dyn_cast<SCEVConstant>(RHS);
       if (LHSC && RHSC) {
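`StrideCompare::operator()` now takes its `const SCEV *` arguments by value, matching how standard-library sort routines invoke a predicate. A self-contained sketch of such a pointer comparator used with `std::stable_sort`, as `StrideOrder` is sorted later in this file (hypothetical `ConstStride` type standing in for `SCEVConstant`):

```cpp
#include <algorithm>
#include <vector>

struct ConstStride { long Value; };  // stand-in for a constant-stride SCEV

// A strict weak ordering over stride pointers; by-value pointer parameters
// are the idiomatic signature for a sort predicate.
struct StrideLess {
  bool operator()(const ConstStride *L, const ConstStride *R) const {
    return L->Value < R->Value;
  }
};
```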
@@ -2069,8 +2029,8 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
     Cond = new ICmpInst(OldCond, Predicate, NewCmpLHS, NewCmpRHS,
                         L->getHeader()->getName() + ".termcond");
 
-    DEBUG(errs() << "    Change compare stride in Inst " << *OldCond);
-    DEBUG(errs() << " to " << *Cond << '\n');
+    DEBUG(dbgs() << "    Change compare stride in Inst " << *OldCond);
+    DEBUG(dbgs() << " to " << *Cond << '\n');
 
     // Remove the old compare instruction. The old indvar is probably dead too.
     DeadInsts.push_back(CondUse->getOperandValToReplace());
@@ -2403,7 +2363,7 @@ static bool isUsedByExitBranch(ICmpInst *Cond, Loop *L) {
 static bool ShouldCountToZero(ICmpInst *Cond, IVStrideUse* &CondUse,
                               ScalarEvolution *SE, Loop *L,
                               const TargetLowering *TLI = 0) {
-  if (!L->contains(Cond->getParent()))
+  if (!L->contains(Cond))
     return false;
 
   if (!isa<SCEVConstant>(CondUse->getOffset()))
@@ -2529,7 +2489,7 @@ void LoopStrengthReduce::OptimizeLoopTermCond(Loop *L) {
     if (!UsePostInc)
       continue;
 
-    DEBUG(errs() << "  Change loop exiting icmp to use postinc iv: "
+    DEBUG(dbgs() << "  Change loop exiting icmp to use postinc iv: "
           << *Cond << '\n');
 
     // It's possible for the setcc instruction to be anywhere in the loop, and
@@ -2608,9 +2568,9 @@ bool LoopStrengthReduce::OptimizeLoopCountIVOfStride(const SCEV* &Stride,
   }
 
   // Replace the increment with a decrement.
-  DEBUG(errs() << "LSR: Examining use ");
-  DEBUG(WriteAsOperand(errs(), CondOp0, /*PrintType=*/false));
-  DEBUG(errs() << " in Inst: " << *Cond << '\n');
+  DEBUG(dbgs() << "LSR: Examining use ");
+  DEBUG(WriteAsOperand(dbgs(), CondOp0, /*PrintType=*/false));
+  DEBUG(dbgs() << " in Inst: " << *Cond << '\n');
   BinaryOperator *Decr =  BinaryOperator::Create(Instruction::Sub,
                          Incr->getOperand(0), Incr->getOperand(1), "tmp", Incr);
   Incr->replaceAllUsesWith(Decr);
@@ -2624,7 +2584,7 @@ bool LoopStrengthReduce::OptimizeLoopCountIVOfStride(const SCEV* &Stride,
   unsigned InBlock = L->contains(PHIExpr->getIncomingBlock(0)) ? 1 : 0;
   Value *StartVal = PHIExpr->getIncomingValue(InBlock);
   Value *EndVal = Cond->getOperand(1);
-  DEBUG(errs() << "    Optimize loop counting iv to count down ["
+  DEBUG(dbgs() << "    Optimize loop counting iv to count down ["
         << *EndVal << " .. " << *StartVal << "]\n");
 
   // FIXME: check for case where both are constant.
@@ -2633,7 +2593,7 @@ bool LoopStrengthReduce::OptimizeLoopCountIVOfStride(const SCEV* &Stride,
                                           EndVal, StartVal, "tmp", PreInsertPt);
   PHIExpr->setIncomingValue(InBlock, NewStartVal);
   Cond->setOperand(1, Zero);
-  DEBUG(errs() << "    New icmp: " << *Cond << "\n");
+  DEBUG(dbgs() << "    New icmp: " << *Cond << "\n");
 
   int64_t SInt = cast<SCEVConstant>(Stride)->getValue()->getSExtValue();
   const SCEV *NewStride = 0;
@@ -2716,9 +2676,9 @@ bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
     return false;
 
   if (!IU->IVUsesByStride.empty()) {
-    DEBUG(errs() << "\nLSR on \"" << L->getHeader()->getParent()->getName()
+    DEBUG(dbgs() << "\nLSR on \"" << L->getHeader()->getParent()->getName()
           << "\" ";
-          L->dump());
+          L->print(dbgs()));
 
     // Sort the StrideOrder so we process larger strides first.
     std::stable_sort(IU->StrideOrder.begin(), IU->StrideOrder.end(),
@@ -2758,8 +2718,7 @@ bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
     IVsByStride.clear();
 
     // Clean up after ourselves
-    if (!DeadInsts.empty())
-      DeleteTriviallyDeadInstructions();
+    DeleteTriviallyDeadInstructions();
   }
 
   // At this point, it is worth checking to see if any recurrence PHIs are also
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
index b7adfdc..0c19133 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
@@ -877,7 +877,7 @@ void LoopUnswitch::RewriteLoopBodyWithConditionConstant(Loop *L, Value *LIC,
     
     for (unsigned i = 0, e = Users.size(); i != e; ++i)
       if (Instruction *U = cast<Instruction>(Users[i])) {
-        if (!L->contains(U->getParent()))
+        if (!L->contains(U))
           continue;
         U->replaceUsesOfWith(LIC, Replacement);
         Worklist.push_back(U);
@@ -888,7 +888,7 @@ void LoopUnswitch::RewriteLoopBodyWithConditionConstant(Loop *L, Value *LIC,
     // can.  This case occurs when we unswitch switch statements.
     for (unsigned i = 0, e = Users.size(); i != e; ++i)
       if (Instruction *U = cast<Instruction>(Users[i])) {
-        if (!L->contains(U->getParent()))
+        if (!L->contains(U))
           continue;
 
         Worklist.push_back(U);
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
index dbc82e1..f91fbda 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
@@ -34,7 +34,6 @@
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Transforms/Utils/SSAUpdater.h"
-#include <cstdio>
 using namespace llvm;
 
 STATISTIC(NumSCCVNInstr,  "Number of instructions deleted by SCCVN");
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
index b040a27..79bb7c5 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
@@ -74,6 +74,10 @@ namespace {
   private:
     TargetData *TD;
     
+    /// DeadInsts - Keep track of instructions we have made dead, so that
+    /// we can remove them after we are done working.
+    SmallVector<Value*, 32> DeadInsts;
+
     /// AllocaInfo - When analyzing uses of an alloca instruction, this captures
     /// information about the uses.  All these fields are initialized to false
     /// and set to true when something is learned.
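The new `DeadInsts` member in SROA applies the deferred-deletion pattern: rather than erasing instructions while still iterating over their users, the rewriter records them and removes them in one pass afterwards (the `DeleteDeadInstructions` method declared below). A container-level sketch of that pattern, with a toy `Node` in place of an LLVM instruction:

```cpp
#include <vector>

struct Node { bool Erased = false; };

// Drain the worklist after rewriting is done, so no deletion happens while
// other code is still walking use lists.
static void deleteDeadInstructions(std::vector<Node *> &DeadInsts) {
  while (!DeadInsts.empty()) {
    Node *N = DeadInsts.back();
    DeadInsts.pop_back();
    if (N && !N->Erased)
      N->Erased = true;  // stand-in for Inst->eraseFromParent()
  }
}
```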
@@ -102,25 +106,29 @@ namespace {
 
     int isSafeAllocaToScalarRepl(AllocaInst *AI);
 
-    void isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
-                               AllocaInfo &Info);
-    void isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
-                          AllocaInfo &Info);
-    void isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
-                                        unsigned OpNo, AllocaInfo &Info);
-    void isSafeUseOfBitCastedAllocation(BitCastInst *User, AllocaInst *AI,
-                                        AllocaInfo &Info);
+    void isSafeForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                             AllocaInfo &Info);
+    void isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t &Offset,
+                   AllocaInfo &Info);
+    void isSafeMemAccess(AllocaInst *AI, uint64_t Offset, uint64_t MemSize,
+                         const Type *MemOpType, bool isStore, AllocaInfo &Info);
+    bool TypeHasComponent(const Type *T, uint64_t Offset, uint64_t Size);
+    uint64_t FindElementAndOffset(const Type *&T, uint64_t &Offset,
+                                  const Type *&IdxTy);
     
     void DoScalarReplacement(AllocaInst *AI, 
                              std::vector<AllocaInst*> &WorkList);
-    void CleanupGEP(GetElementPtrInst *GEP);
-    void CleanupAllocaUsers(AllocaInst *AI);
+    void DeleteDeadInstructions();
+    void CleanupAllocaUsers(Value *V);
     AllocaInst *AddNewAlloca(Function &F, const Type *Ty, AllocaInst *Base);
     
-    void RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
-                                    SmallVector<AllocaInst*, 32> &NewElts);
-    
-    void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
+    void RewriteForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                              SmallVector<AllocaInst*, 32> &NewElts);
+    void RewriteBitCast(BitCastInst *BC, AllocaInst *AI, uint64_t Offset,
+                        SmallVector<AllocaInst*, 32> &NewElts);
+    void RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
+                    SmallVector<AllocaInst*, 32> &NewElts);
+    void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
                                       AllocaInst *AI,
                                       SmallVector<AllocaInst*, 32> &NewElts);
     void RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
@@ -360,399 +368,350 @@ void SROA::DoScalarReplacement(AllocaInst *AI,
     }
   }
 
-  // Now that we have created the alloca instructions that we want to use,
-  // expand the getelementptr instructions to use them.
-  while (!AI->use_empty()) {
-    Instruction *User = cast<Instruction>(AI->use_back());
-    if (BitCastInst *BCInst = dyn_cast<BitCastInst>(User)) {
-      RewriteBitCastUserOfAlloca(BCInst, AI, ElementAllocas);
-      BCInst->eraseFromParent();
-      continue;
-    }
-    
-    // Replace:
-    //   %res = load { i32, i32 }* %alloc
-    // with:
-    //   %load.0 = load i32* %alloc.0
-    //   %insert.0 insertvalue { i32, i32 } zeroinitializer, i32 %load.0, 0 
-    //   %load.1 = load i32* %alloc.1
-    //   %insert = insertvalue { i32, i32 } %insert.0, i32 %load.1, 1 
-    // (Also works for arrays instead of structs)
-    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      Value *Insert = UndefValue::get(LI->getType());
-      for (unsigned i = 0, e = ElementAllocas.size(); i != e; ++i) {
-        Value *Load = new LoadInst(ElementAllocas[i], "load", LI);
-        Insert = InsertValueInst::Create(Insert, Load, i, "insert", LI);
-      }
-      LI->replaceAllUsesWith(Insert);
-      LI->eraseFromParent();
-      continue;
-    }
-
-    // Replace:
-    //   store { i32, i32 } %val, { i32, i32 }* %alloc
-    // with:
-    //   %val.0 = extractvalue { i32, i32 } %val, 0 
-    //   store i32 %val.0, i32* %alloc.0
-    //   %val.1 = extractvalue { i32, i32 } %val, 1 
-    //   store i32 %val.1, i32* %alloc.1
-    // (Also works for arrays instead of structs)
-    if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
-      Value *Val = SI->getOperand(0);
-      for (unsigned i = 0, e = ElementAllocas.size(); i != e; ++i) {
-        Value *Extract = ExtractValueInst::Create(Val, i, Val->getName(), SI);
-        new StoreInst(Extract, ElementAllocas[i], SI);
-      }
-      SI->eraseFromParent();
-      continue;
-    }
-    
-    GetElementPtrInst *GEPI = cast<GetElementPtrInst>(User);
-    // We now know that the GEP is of the form: GEP <ptr>, 0, <cst>
-    unsigned Idx =
-       (unsigned)cast<ConstantInt>(GEPI->getOperand(2))->getZExtValue();
-
-    assert(Idx < ElementAllocas.size() && "Index out of range?");
-    AllocaInst *AllocaToUse = ElementAllocas[Idx];
-
-    Value *RepValue;
-    if (GEPI->getNumOperands() == 3) {
-      // Do not insert a new getelementptr instruction with zero indices, only
-      // to have it optimized out later.
-      RepValue = AllocaToUse;
-    } else {
-      // We are indexing deeply into the structure, so we still need a
-      // getelement ptr instruction to finish the indexing.  This may be
-      // expanded itself once the worklist is rerun.
-      //
-      SmallVector<Value*, 8> NewArgs;
-      NewArgs.push_back(Constant::getNullValue(
-                                           Type::getInt32Ty(AI->getContext())));
-      NewArgs.append(GEPI->op_begin()+3, GEPI->op_end());
-      RepValue = GetElementPtrInst::Create(AllocaToUse, NewArgs.begin(),
-                                           NewArgs.end(), "", GEPI);
-      RepValue->takeName(GEPI);
-    }
-    
-    // If this GEP is to the start of the aggregate, check for memcpys.
-    if (Idx == 0 && GEPI->hasAllZeroIndices())
-      RewriteBitCastUserOfAlloca(GEPI, AI, ElementAllocas);
-
-    // Move all of the users over to the new GEP.
-    GEPI->replaceAllUsesWith(RepValue);
-    // Delete the old GEP
-    GEPI->eraseFromParent();
-  }
+  // Now that we have created the new alloca instructions, rewrite all the
+  // uses of the old alloca.
+  RewriteForScalarRepl(AI, AI, 0, ElementAllocas);
 
-  // Finally, delete the Alloca instruction
+  // Now erase any instructions that were made dead while rewriting the alloca.
+  DeleteDeadInstructions();
   AI->eraseFromParent();
+
   NumReplaced++;
 }
 
-/// isSafeElementUse - Check to see if this use is an allowed use for a
-/// getelementptr instruction of an array aggregate allocation.  isFirstElt
-/// indicates whether Ptr is known to the start of the aggregate.
-void SROA::isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
-                            AllocaInfo &Info) {
-  for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
-       I != E; ++I) {
-    Instruction *User = cast<Instruction>(*I);
-    switch (User->getOpcode()) {
-    case Instruction::Load:  break;
-    case Instruction::Store:
-      // Store is ok if storing INTO the pointer, not storing the pointer
-      if (User->getOperand(0) == Ptr) return MarkUnsafe(Info);
-      break;
-    case Instruction::GetElementPtr: {
-      GetElementPtrInst *GEP = cast<GetElementPtrInst>(User);
-      bool AreAllZeroIndices = isFirstElt;
-      if (GEP->getNumOperands() > 1 &&
-          (!isa<ConstantInt>(GEP->getOperand(1)) ||
-           !cast<ConstantInt>(GEP->getOperand(1))->isZero()))
-        // Using pointer arithmetic to navigate the array.
-        return MarkUnsafe(Info);
-      
-      // Verify that any array subscripts are in range.
-      for (gep_type_iterator GEPIt = gep_type_begin(GEP),
-           E = gep_type_end(GEP); GEPIt != E; ++GEPIt) {
-        // Ignore struct elements, no extra checking needed for these.
-        if (isa<StructType>(*GEPIt))
-          continue;
-
-        // This GEP indexes an array.  Verify that this is an in-range
-        // constant integer. Specifically, consider A[0][i]. We cannot know that
-        // the user isn't doing invalid things like allowing i to index an
-        // out-of-range subscript that accesses A[1].  Because of this, we have
-        // to reject SROA of any accesses into structs where any of the
-        // components are variables. 
-        ConstantInt *IdxVal = dyn_cast<ConstantInt>(GEPIt.getOperand());
-        if (!IdxVal) return MarkUnsafe(Info);
-        
-        // Are all indices still zero?
-        AreAllZeroIndices &= IdxVal->isZero();
-        
-        if (const ArrayType *AT = dyn_cast<ArrayType>(*GEPIt)) {
-          if (IdxVal->getZExtValue() >= AT->getNumElements())
-            return MarkUnsafe(Info);
-        } else if (const VectorType *VT = dyn_cast<VectorType>(*GEPIt)) {
-          if (IdxVal->getZExtValue() >= VT->getNumElements())
-            return MarkUnsafe(Info);
-        }
+/// DeleteDeadInstructions - Erase instructions on the DeadInsts list,
+/// recursively including all their operands that become trivially dead.
+void SROA::DeleteDeadInstructions() {
+  while (!DeadInsts.empty()) {
+    Instruction *I = cast<Instruction>(DeadInsts.pop_back_val());
+
+    for (User::op_iterator OI = I->op_begin(), E = I->op_end(); OI != E; ++OI)
+      if (Instruction *U = dyn_cast<Instruction>(*OI)) {
+        // Zero out the operand and see if it becomes trivially dead.
+        // (But, don't add allocas to the dead instruction list -- they are
+        // already on the worklist and will be deleted separately.)
+        *OI = 0;
+        if (isInstructionTriviallyDead(U) && !isa<AllocaInst>(U))
+          DeadInsts.push_back(U);
       }
-      
-      isSafeElementUse(GEP, AreAllZeroIndices, AI, Info);
-      if (Info.isUnsafe) return;
-      break;
-    }
-    case Instruction::BitCast:
-      if (isFirstElt) {
-        isSafeUseOfBitCastedAllocation(cast<BitCastInst>(User), AI, Info);
-        if (Info.isUnsafe) return;
-        break;
-      }
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      return MarkUnsafe(Info);
-    case Instruction::Call:
-      if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
-        if (isFirstElt) {
-          isSafeMemIntrinsicOnAllocation(MI, AI, I.getOperandNo(), Info);
-          if (Info.isUnsafe) return;
-          break;
-        }
-      }
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      return MarkUnsafe(Info);
-    default:
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      return MarkUnsafe(Info);
-    }
-  }
-  return;  // All users look ok :)
-}
 
-/// AllUsersAreLoads - Return true if all users of this value are loads.
-static bool AllUsersAreLoads(Value *Ptr) {
-  for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
-       I != E; ++I)
-    if (cast<Instruction>(*I)->getOpcode() != Instruction::Load)
-      return false;
-  return true;
-}
-
-/// isSafeUseOfAllocation - Check if this user is an allowed use for an
-/// aggregate allocation.
-void SROA::isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
-                                 AllocaInfo &Info) {
-  if (BitCastInst *C = dyn_cast<BitCastInst>(User))
-    return isSafeUseOfBitCastedAllocation(C, AI, Info);
-
-  if (LoadInst *LI = dyn_cast<LoadInst>(User))
-    if (!LI->isVolatile())
-      return;// Loads (returning a first class aggregrate) are always rewritable
-
-  if (StoreInst *SI = dyn_cast<StoreInst>(User))
-    if (!SI->isVolatile() && SI->getOperand(0) != AI)
-      return;// Store is ok if storing INTO the pointer, not storing the pointer
- 
-  GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User);
-  if (GEPI == 0)
-    return MarkUnsafe(Info);
-
-  gep_type_iterator I = gep_type_begin(GEPI), E = gep_type_end(GEPI);
-
-  // The GEP is not safe to transform if not of the form "GEP <ptr>, 0, <cst>".
-  if (I == E ||
-      I.getOperand() != Constant::getNullValue(I.getOperand()->getType())) {
-    return MarkUnsafe(Info);
+    I->eraseFromParent();
   }
+}
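As an aside, the worklist pattern in DeleteDeadInstructions can be sketched in plain C++. This is a toy model, not LLVM's API: `Inst`, `deleteDead`, and the explicit `useCount` are invented stand-ins for `Instruction`, operand lists, and use-list bookkeeping.

```cpp
#include <cassert>
#include <vector>

// Toy model of the worklist pattern: erasing an instruction drops one use
// from each of its operands, and any operand that just became trivially
// dead is pushed onto the same worklist rather than deleted recursively.
struct Inst {
  std::vector<Inst*> operands;
  int useCount = 0;
  bool erased = false;
};

inline int deleteDead(std::vector<Inst*> worklist) {
  int numErased = 0;
  while (!worklist.empty()) {
    Inst *I = worklist.back();
    worklist.pop_back();
    for (Inst *Op : I->operands)
      if (--Op->useCount == 0)   // operand just became trivially dead
        worklist.push_back(Op);
    I->operands.clear();
    I->erased = true;
    ++numErased;
  }
  return numErased;
}
```

The iterative worklist avoids unbounded recursion on long chains of single-use instructions.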
+    
+/// isSafeForScalarRepl - Check if instruction I is a safe use with regard to
+/// performing scalar replacement of alloca AI.  The results are flagged in
+/// the Info parameter.  Offset indicates the position within AI that is
+/// referenced by this instruction.
+void SROA::isSafeForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                               AllocaInfo &Info) {
+  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI!=E; ++UI) {
+    Instruction *User = cast<Instruction>(*UI);
 
-  ++I;
-  if (I == E) return MarkUnsafe(Info);  // ran out of GEP indices??
-
-  bool IsAllZeroIndices = true;
-  
-  // If the first index is a non-constant index into an array, see if we can
-  // handle it as a special case.
-  if (const ArrayType *AT = dyn_cast<ArrayType>(*I)) {
-    if (!isa<ConstantInt>(I.getOperand())) {
-      IsAllZeroIndices = 0;
-      uint64_t NumElements = AT->getNumElements();
-      
-      // If this is an array index and the index is not constant, we cannot
-      // promote... that is unless the array has exactly one or two elements in
-      // it, in which case we CAN promote it, but we have to canonicalize this
-      // out if this is the only problem.
-      if ((NumElements == 1 || NumElements == 2) &&
-          AllUsersAreLoads(GEPI)) {
+    if (BitCastInst *BC = dyn_cast<BitCastInst>(User)) {
+      isSafeForScalarRepl(BC, AI, Offset, Info);
+    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User)) {
+      uint64_t GEPOffset = Offset;
+      isSafeGEP(GEPI, AI, GEPOffset, Info);
+      if (!Info.isUnsafe)
+        isSafeForScalarRepl(GEPI, AI, GEPOffset, Info);
+    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(UI)) {
+      ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
+      if (Length)
+        isSafeMemAccess(AI, Offset, Length->getZExtValue(), 0,
+                        UI.getOperandNo() == 1, Info);
+      else
+        MarkUnsafe(Info);
+    } else if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
+      if (!LI->isVolatile()) {
+        const Type *LIType = LI->getType();
+        isSafeMemAccess(AI, Offset, TD->getTypeAllocSize(LIType),
+                        LIType, false, Info);
+      } else
+        MarkUnsafe(Info);
+    } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
+      // Store is ok if storing INTO the pointer, not storing the pointer
+      if (!SI->isVolatile() && SI->getOperand(0) != I) {
+        const Type *SIType = SI->getOperand(0)->getType();
+        isSafeMemAccess(AI, Offset, TD->getTypeAllocSize(SIType),
+                        SIType, true, Info);
+      } else
+        MarkUnsafe(Info);
+    } else if (isa<DbgInfoIntrinsic>(UI)) {
+      // If one user is DbgInfoIntrinsic then check if all users are
+      // DbgInfoIntrinsics.
+      if (OnlyUsedByDbgInfoIntrinsics(I)) {
         Info.needsCleanup = true;
-        return;  // Canonicalization required!
+        return;
       }
-      return MarkUnsafe(Info);
+      MarkUnsafe(Info);
+    } else {
+      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
+      MarkUnsafe(Info);
     }
+    if (Info.isUnsafe) return;
   }
- 
+}
+
+/// isSafeGEP - Check if a GEP instruction can be handled for scalar
+/// replacement.  It is safe when all the indices are constant, in-bounds
+/// references, and when the resulting offset corresponds to an element within
+/// the alloca type.  The results are flagged in the Info parameter.  Upon
+/// return, Offset is adjusted as specified by the GEP indices.
+void SROA::isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI,
+                     uint64_t &Offset, AllocaInfo &Info) {
+  gep_type_iterator GEPIt = gep_type_begin(GEPI), E = gep_type_end(GEPI);
+  if (GEPIt == E)
+    return;
+
   // Walk through the GEP type indices, checking the types that this indexes
   // into.
-  for (; I != E; ++I) {
+  for (; GEPIt != E; ++GEPIt) {
     // Ignore struct elements, no extra checking needed for these.
-    if (isa<StructType>(*I))
+    if (isa<StructType>(*GEPIt))
       continue;
-    
-    ConstantInt *IdxVal = dyn_cast<ConstantInt>(I.getOperand());
-    if (!IdxVal) return MarkUnsafe(Info);
 
-    // Are all indices still zero?
-    IsAllZeroIndices &= IdxVal->isZero();
-    
-    if (const ArrayType *AT = dyn_cast<ArrayType>(*I)) {
-      // This GEP indexes an array.  Verify that this is an in-range constant
-      // integer. Specifically, consider A[0][i]. We cannot know that the user
-      // isn't doing invalid things like allowing i to index an out-of-range
-      // subscript that accesses A[1].  Because of this, we have to reject SROA
-      // of any accesses into structs where any of the components are variables.
-      if (IdxVal->getZExtValue() >= AT->getNumElements())
-        return MarkUnsafe(Info);
-    } else if (const VectorType *VT = dyn_cast<VectorType>(*I)) {
-      if (IdxVal->getZExtValue() >= VT->getNumElements())
-        return MarkUnsafe(Info);
+    ConstantInt *IdxVal = dyn_cast<ConstantInt>(GEPIt.getOperand());
+    if (!IdxVal)
+      return MarkUnsafe(Info);
+  }
+
+  // Compute the offset due to this GEP and check if the alloca has a
+  // component element at that offset.
+  SmallVector<Value*, 8> Indices(GEPI->op_begin() + 1, GEPI->op_end());
+  Offset += TD->getIndexedOffset(GEPI->getPointerOperandType(),
+                                 &Indices[0], Indices.size());
+  if (!TypeHasComponent(AI->getAllocatedType(), Offset, 0))
+    MarkUnsafe(Info);
+}
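The `getIndexedOffset` step above folds constant GEP indices into a flat byte offset. A minimal sketch, with a hard-coded layout instead of a real TargetData query: assume a toy type `%T = { i32, [4 x i16] }` with the i32 at byte 0, the array at byte 4, and 2-byte elements with no padding; `gepOffset` and the offsets are hypothetical.

```cpp
#include <cassert>
#include <cstdint>

// Fold constant GEP indices (struct field, then array element) into a
// byte offset for the assumed layout { i32, [4 x i16] }.
inline uint64_t gepOffset(uint64_t fieldIdx, uint64_t arrayIdx) {
  static const uint64_t fieldOffset[] = {0, 4};  // assumed field offsets
  uint64_t off = fieldOffset[fieldIdx];
  if (fieldIdx == 1)
    off += arrayIdx * 2;  // scale by the assumed 2-byte element size
  return off;
}
```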
+
+/// isSafeMemAccess - Check if a load/store/memcpy operates on the entire AI
+/// alloca or has an offset and size that corresponds to a component element
+/// within it.  The offset checked here may have been formed from a GEP with a
+/// pointer bitcasted to a different type.
+void SROA::isSafeMemAccess(AllocaInst *AI, uint64_t Offset, uint64_t MemSize,
+                           const Type *MemOpType, bool isStore,
+                           AllocaInfo &Info) {
+  // Check if this is a load/store of the entire alloca.
+  if (Offset == 0 && MemSize == TD->getTypeAllocSize(AI->getAllocatedType())) {
+    bool UsesAggregateType = (MemOpType == AI->getAllocatedType());
+    // This is safe for MemIntrinsics (where MemOpType is 0), integer types
+    // (which are essentially the same as the MemIntrinsics, especially with
+    // regard to copying padding between elements), or references using the
+    // aggregate type of the alloca.
+    if (!MemOpType || isa<IntegerType>(MemOpType) || UsesAggregateType) {
+      if (!UsesAggregateType) {
+        if (isStore)
+          Info.isMemCpyDst = true;
+        else
+          Info.isMemCpySrc = true;
+      }
+      return;
     }
   }
-  
-  // If there are any non-simple uses of this getelementptr, make sure to reject
-  // them.
-  return isSafeElementUse(GEPI, IsAllZeroIndices, AI, Info);
+  // Check if the offset/size correspond to a component within the alloca type.
+  const Type *T = AI->getAllocatedType();
+  if (TypeHasComponent(T, Offset, MemSize))
+    return;
+
+  return MarkUnsafe(Info);
 }
 
-/// isSafeMemIntrinsicOnAllocation - Check if the specified memory
-/// intrinsic can be promoted by SROA.  At this point, we know that the operand
-/// of the memintrinsic is a pointer to the beginning of the allocation.
-void SROA::isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
-                                          unsigned OpNo, AllocaInfo &Info) {
-  // If not constant length, give up.
-  ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
-  if (!Length) return MarkUnsafe(Info);
-  
-  // If not the whole aggregate, give up.
-  if (Length->getZExtValue() !=
-      TD->getTypeAllocSize(AI->getType()->getElementType()))
-    return MarkUnsafe(Info);
-  
-  // We only know about memcpy/memset/memmove.
-  if (!isa<MemIntrinsic>(MI))
-    return MarkUnsafe(Info);
-  
-  // Otherwise, we can transform it.  Determine whether this is a memcpy/set
-  // into or out of the aggregate.
-  if (OpNo == 1)
-    Info.isMemCpyDst = true;
-  else {
-    assert(OpNo == 2);
-    Info.isMemCpySrc = true;
+/// TypeHasComponent - Return true if T has a component type with the
+/// specified offset and size.  If Size is zero, do not check the size.
+bool SROA::TypeHasComponent(const Type *T, uint64_t Offset, uint64_t Size) {
+  const Type *EltTy;
+  uint64_t EltSize;
+  if (const StructType *ST = dyn_cast<StructType>(T)) {
+    const StructLayout *Layout = TD->getStructLayout(ST);
+    unsigned EltIdx = Layout->getElementContainingOffset(Offset);
+    EltTy = ST->getContainedType(EltIdx);
+    EltSize = TD->getTypeAllocSize(EltTy);
+    Offset -= Layout->getElementOffset(EltIdx);
+  } else if (const ArrayType *AT = dyn_cast<ArrayType>(T)) {
+    EltTy = AT->getElementType();
+    EltSize = TD->getTypeAllocSize(EltTy);
+    if (Offset >= AT->getNumElements() * EltSize)
+      return false;
+    Offset %= EltSize;
+  } else {
+    return false;
   }
+  if (Offset == 0 && (Size == 0 || EltSize == Size))
+    return true;
+  // Check if the component spans multiple elements.
+  if (Offset + Size > EltSize)
+    return false;
+  return TypeHasComponent(EltTy, Offset, Size);
 }
 
-/// isSafeUseOfBitCastedAllocation - Check if all users of this bitcast
-/// from an alloca are safe for SROA of that alloca.
-void SROA::isSafeUseOfBitCastedAllocation(BitCastInst *BC, AllocaInst *AI,
-                                          AllocaInfo &Info) {
-  for (Value::use_iterator UI = BC->use_begin(), E = BC->use_end();
-       UI != E; ++UI) {
-    if (BitCastInst *BCU = dyn_cast<BitCastInst>(UI)) {
-      isSafeUseOfBitCastedAllocation(BCU, AI, Info);
-    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(UI)) {
-      isSafeMemIntrinsicOnAllocation(MI, AI, UI.getOperandNo(), Info);
-    } else if (StoreInst *SI = dyn_cast<StoreInst>(UI)) {
-      if (SI->isVolatile())
-        return MarkUnsafe(Info);
-      
-      // If storing the entire alloca in one chunk through a bitcasted pointer
-      // to integer, we can transform it.  This happens (for example) when you
-      // cast a {i32,i32}* to i64* and store through it.  This is similar to the
-      // memcpy case and occurs in various "byval" cases and emulated memcpys.
-      if (isa<IntegerType>(SI->getOperand(0)->getType()) &&
-          TD->getTypeAllocSize(SI->getOperand(0)->getType()) ==
-          TD->getTypeAllocSize(AI->getType()->getElementType())) {
-        Info.isMemCpyDst = true;
-        continue;
-      }
-      return MarkUnsafe(Info);
-    } else if (LoadInst *LI = dyn_cast<LoadInst>(UI)) {
-      if (LI->isVolatile())
-        return MarkUnsafe(Info);
-
-      // If loading the entire alloca in one chunk through a bitcasted pointer
-      // to integer, we can transform it.  This happens (for example) when you
-      // cast a {i32,i32}* to i64* and load through it.  This is similar to the
-      // memcpy case and occurs in various "byval" cases and emulated memcpys.
-      if (isa<IntegerType>(LI->getType()) &&
-          TD->getTypeAllocSize(LI->getType()) ==
-          TD->getTypeAllocSize(AI->getType()->getElementType())) {
-        Info.isMemCpySrc = true;
-        continue;
+/// RewriteForScalarRepl - Alloca AI is being split into NewElts, so rewrite
+/// the instruction I, which references it, to use the separate elements.
+/// Offset indicates the position within AI that is referenced by this
+/// instruction.
+void SROA::RewriteForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                                SmallVector<AllocaInst*, 32> &NewElts) {
+  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI!=E; ++UI) {
+    Instruction *User = cast<Instruction>(*UI);
+
+    if (BitCastInst *BC = dyn_cast<BitCastInst>(User)) {
+      RewriteBitCast(BC, AI, Offset, NewElts);
+    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User)) {
+      RewriteGEP(GEPI, AI, Offset, NewElts);
+    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
+      ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
+      uint64_t MemSize = Length->getZExtValue();
+      if (Offset == 0 &&
+          MemSize == TD->getTypeAllocSize(AI->getAllocatedType()))
+        RewriteMemIntrinUserOfAlloca(MI, I, AI, NewElts);
+      // Otherwise the intrinsic can only touch a single element and the
+      // address operand will be updated, so nothing else needs to be done.
+    } else if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
+      const Type *LIType = LI->getType();
+      if (LIType == AI->getAllocatedType()) {
+        // Replace:
+        //   %res = load { i32, i32 }* %alloc
+        // with:
+        //   %load.0 = load i32* %alloc.0
+        //   %insert.0 = insertvalue { i32, i32 } undef, i32 %load.0, 0
+        //   %load.1 = load i32* %alloc.1
+        //   %insert = insertvalue { i32, i32 } %insert.0, i32 %load.1, 1
+        // (Also works for arrays instead of structs)
+        Value *Insert = UndefValue::get(LIType);
+        for (unsigned i = 0, e = NewElts.size(); i != e; ++i) {
+          Value *Load = new LoadInst(NewElts[i], "load", LI);
+          Insert = InsertValueInst::Create(Insert, Load, i, "insert", LI);
+        }
+        LI->replaceAllUsesWith(Insert);
+        DeadInsts.push_back(LI);
+      } else if (isa<IntegerType>(LIType) &&
+                 TD->getTypeAllocSize(LIType) ==
+                 TD->getTypeAllocSize(AI->getAllocatedType())) {
+        // If this is a load of the entire alloca to an integer, rewrite it.
+        RewriteLoadUserOfWholeAlloca(LI, AI, NewElts);
       }
-      return MarkUnsafe(Info);
-    } else if (isa<DbgInfoIntrinsic>(UI)) {
-      // If one user is DbgInfoIntrinsic then check if all users are
-      // DbgInfoIntrinsics.
-      if (OnlyUsedByDbgInfoIntrinsics(BC)) {
-        Info.needsCleanup = true;
-        return;
+    } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
+      Value *Val = SI->getOperand(0);
+      const Type *SIType = Val->getType();
+      if (SIType == AI->getAllocatedType()) {
+        // Replace:
+        //   store { i32, i32 } %val, { i32, i32 }* %alloc
+        // with:
+        //   %val.0 = extractvalue { i32, i32 } %val, 0
+        //   store i32 %val.0, i32* %alloc.0
+        //   %val.1 = extractvalue { i32, i32 } %val, 1
+        //   store i32 %val.1, i32* %alloc.1
+        // (Also works for arrays instead of structs)
+        for (unsigned i = 0, e = NewElts.size(); i != e; ++i) {
+          Value *Extract = ExtractValueInst::Create(Val, i, Val->getName(), SI);
+          new StoreInst(Extract, NewElts[i], SI);
+        }
+        DeadInsts.push_back(SI);
+      } else if (isa<IntegerType>(SIType) &&
+                 TD->getTypeAllocSize(SIType) ==
+                 TD->getTypeAllocSize(AI->getAllocatedType())) {
+        // If this is a store of the entire alloca from an integer, rewrite it.
+        RewriteStoreUserOfWholeAlloca(SI, AI, NewElts);
       }
-      else
-        MarkUnsafe(Info);
     }
-    else {
-      return MarkUnsafe(Info);
-    }
-    if (Info.isUnsafe) return;
   }
 }
 
-/// RewriteBitCastUserOfAlloca - BCInst (transitively) bitcasts AI, or indexes
-/// to its first element.  Transform users of the cast to use the new values
-/// instead.
-void SROA::RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
-                                      SmallVector<AllocaInst*, 32> &NewElts) {
-  Value::use_iterator UI = BCInst->use_begin(), UE = BCInst->use_end();
-  while (UI != UE) {
-    Instruction *User = cast<Instruction>(*UI++);
-    if (BitCastInst *BCU = dyn_cast<BitCastInst>(User)) {
-      RewriteBitCastUserOfAlloca(BCU, AI, NewElts);
-      if (BCU->use_empty()) BCU->eraseFromParent();
-      continue;
-    }
+/// RewriteBitCast - Update a bitcast reference to the alloca being replaced
+/// and recursively continue updating all of its uses.
+void SROA::RewriteBitCast(BitCastInst *BC, AllocaInst *AI, uint64_t Offset,
+                          SmallVector<AllocaInst*, 32> &NewElts) {
+  RewriteForScalarRepl(BC, AI, Offset, NewElts);
+  if (BC->getOperand(0) != AI)
+    return;
 
-    if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
-      // This must be memcpy/memmove/memset of the entire aggregate.
-      // Split into one per element.
-      RewriteMemIntrinUserOfAlloca(MI, BCInst, AI, NewElts);
-      continue;
-    }
-      
-    if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
-      // If this is a store of the entire alloca from an integer, rewrite it.
-      RewriteStoreUserOfWholeAlloca(SI, AI, NewElts);
-      continue;
-    }
+  // The bitcast references the original alloca.  Replace its uses with
+  // references to the first new element alloca.
+  Instruction *Val = NewElts[0];
+  if (Val->getType() != BC->getDestTy()) {
+    Val = new BitCastInst(Val, BC->getDestTy(), "", BC);
+    Val->takeName(BC);
+  }
+  BC->replaceAllUsesWith(Val);
+  DeadInsts.push_back(BC);
+}
 
-    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      // If this is a load of the entire alloca to an integer, rewrite it.
-      RewriteLoadUserOfWholeAlloca(LI, AI, NewElts);
-      continue;
-    }
-    
-    // Otherwise it must be some other user of a gep of the first pointer.  Just
-    // leave these alone.
-    continue;
+/// FindElementAndOffset - Return the index of the element containing Offset
+/// within the specified type, which must be either a struct or an array.
+/// Sets T to the type of the element and Offset to the offset within that
+/// element.  IdxTy is set to the type of the index result to be used in a
+/// GEP instruction.
+uint64_t SROA::FindElementAndOffset(const Type *&T, uint64_t &Offset,
+                                    const Type *&IdxTy) {
+  uint64_t Idx = 0;
+  if (const StructType *ST = dyn_cast<StructType>(T)) {
+    const StructLayout *Layout = TD->getStructLayout(ST);
+    Idx = Layout->getElementContainingOffset(Offset);
+    T = ST->getContainedType(Idx);
+    Offset -= Layout->getElementOffset(Idx);
+    IdxTy = Type::getInt32Ty(T->getContext());
+    return Idx;
   }
+  const ArrayType *AT = cast<ArrayType>(T);
+  T = AT->getElementType();
+  uint64_t EltSize = TD->getTypeAllocSize(T);
+  Idx = Offset / EltSize;
+  Offset -= Idx * EltSize;
+  IdxTy = Type::getInt64Ty(T->getContext());
+  return Idx;
+}
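The array case of FindElementAndOffset is just a division with a residual. A miniature version (the name `splitArrayOffset` is invented; `eltSize` stands in for `TD->getTypeAllocSize(EltTy)`):

```cpp
#include <cassert>
#include <cstdint>

// Split a flat byte offset into an element index plus the residual offset
// inside that element, as the array branch above does.
inline uint64_t splitArrayOffset(uint64_t eltSize, uint64_t &offset) {
  uint64_t idx = offset / eltSize;
  offset -= idx * eltSize;  // equivalent to offset %= eltSize
  return idx;
}
```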
+
+/// RewriteGEP - Check if this GEP instruction moves the pointer across
+/// elements of the alloca that are being split apart, and if so, rewrite
+/// the GEP to be relative to the new element.
+void SROA::RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
+                      SmallVector<AllocaInst*, 32> &NewElts) {
+  uint64_t OldOffset = Offset;
+  SmallVector<Value*, 8> Indices(GEPI->op_begin() + 1, GEPI->op_end());
+  Offset += TD->getIndexedOffset(GEPI->getPointerOperandType(),
+                                 &Indices[0], Indices.size());
+
+  RewriteForScalarRepl(GEPI, AI, Offset, NewElts);
+
+  const Type *T = AI->getAllocatedType();
+  const Type *IdxTy;
+  uint64_t OldIdx = FindElementAndOffset(T, OldOffset, IdxTy);
+  if (GEPI->getOperand(0) == AI)
+    OldIdx = ~0ULL; // Force the GEP to be rewritten.
+
+  T = AI->getAllocatedType();
+  uint64_t EltOffset = Offset;
+  uint64_t Idx = FindElementAndOffset(T, EltOffset, IdxTy);
+
+  // If this GEP does not move the pointer across elements of the alloca
+  // being split, then it does not need to be rewritten.
+  if (Idx == OldIdx)
+    return;
+
+  const Type *i32Ty = Type::getInt32Ty(AI->getContext());
+  SmallVector<Value*, 8> NewArgs;
+  NewArgs.push_back(Constant::getNullValue(i32Ty));
+  while (EltOffset != 0) {
+    uint64_t EltIdx = FindElementAndOffset(T, EltOffset, IdxTy);
+    NewArgs.push_back(ConstantInt::get(IdxTy, EltIdx));
+  }
+  Instruction *Val = NewElts[Idx];
+  if (NewArgs.size() > 1) {
+    Val = GetElementPtrInst::CreateInBounds(Val, NewArgs.begin(),
+                                            NewArgs.end(), "", GEPI);
+    Val->takeName(GEPI);
+  }
+  if (Val->getType() != GEPI->getType())
+    Val = new BitCastInst(Val, GEPI->getType(), Val->getNameStr(), GEPI);
+  GEPI->replaceAllUsesWith(Val);
+  DeadInsts.push_back(GEPI);
 }
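RewriteGEP's `while (EltOffset != 0)` loop peels the residual offset into one index per nesting level. A sketch under assumed strides (a made-up nested array `[4 x [3 x i16]]`, so 6-byte rows and 2-byte elements; real code derives each step from FindElementAndOffset, and `rebuildIndices` is an invented name):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Repeatedly peel the residual offset into per-level array indices until
// it reaches zero, mirroring the index-rebuilding loop above.
inline std::vector<uint64_t> rebuildIndices(uint64_t offset) {
  static const uint64_t stride[] = {6, 2};  // assumed byte stride per level
  std::vector<uint64_t> idx;
  for (unsigned level = 0; offset != 0 && level < 2; ++level) {
    idx.push_back(offset / stride[level]);
    offset %= stride[level];
  }
  return idx;
}
```

Offset 8 decomposes into row 1, element 1; offset 6 stops after one index because the residual hits zero, just as the loop above stops early.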
 
 /// RewriteMemIntrinUserOfAlloca - MI is a memcpy/memset/memmove from or to AI.
 /// Rewrite it to copy or set the elements of the scalarized memory.
-void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
+void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
                                         AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
-  
   // If this is a memcpy/memmove, construct the other pointer as the
   // appropriate type.  The "Other" pointer is the pointer that goes to memory
   // that doesn't have anything to do with the alloca that we are promoting. For
@@ -761,28 +720,41 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
   LLVMContext &Context = MI->getContext();
   unsigned MemAlignment = MI->getAlignment();
   if (MemTransferInst *MTI = dyn_cast<MemTransferInst>(MI)) { // memmove/memcopy
-    if (BCInst == MTI->getRawDest())
+    if (Inst == MTI->getRawDest())
       OtherPtr = MTI->getRawSource();
     else {
-      assert(BCInst == MTI->getRawSource());
+      assert(Inst == MTI->getRawSource());
       OtherPtr = MTI->getRawDest();
     }
   }
 
-  // Keep track of the other intrinsic argument, so it can be removed if it
-  // is dead when the intrinsic is replaced.
-  Value *PossiblyDead = OtherPtr;
-  
   // If there is an other pointer, we want to convert it to the same pointer
   // type as AI has, so we can GEP through it safely.
   if (OtherPtr) {
-    // It is likely that OtherPtr is a bitcast, if so, remove it.
-    if (BitCastInst *BC = dyn_cast<BitCastInst>(OtherPtr))
-      OtherPtr = BC->getOperand(0);
-    // All zero GEPs are effectively bitcasts.
-    if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(OtherPtr))
-      if (GEP->hasAllZeroIndices())
-        OtherPtr = GEP->getOperand(0);
+
+    // Remove bitcasts and all-zero GEPs from OtherPtr.  This is an
+    // optimization, but it's also required to detect the corner case where
+    // both pointer operands are referencing the same memory, and where
+    // OtherPtr may be a bitcast or GEP that is currently being rewritten.  (This
+    // function is only called for mem intrinsics that access the whole
+    // aggregate, so non-zero GEPs are not an issue here.)
+    while (1) {
+      if (BitCastInst *BC = dyn_cast<BitCastInst>(OtherPtr)) {
+        OtherPtr = BC->getOperand(0);
+        continue;
+      }
+      if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(OtherPtr)) {
+        // All zero GEPs are effectively bitcasts.
+        if (GEP->hasAllZeroIndices()) {
+          OtherPtr = GEP->getOperand(0);
+          continue;
+        }
+      }
+      break;
+    }
+    // If OtherPtr has already been rewritten, this intrinsic will be dead.
+    if (OtherPtr == NewElts[0])
+      return;
     
     if (ConstantExpr *BCE = dyn_cast<ConstantExpr>(OtherPtr))
       if (BCE->getOpcode() == Instruction::BitCast)
@@ -798,7 +770,7 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
   // Process each element of the aggregate.
   Value *TheFn = MI->getOperand(0);
   const Type *BytePtrTy = MI->getRawDest()->getType();
-  bool SROADest = MI->getRawDest() == BCInst;
+  bool SROADest = MI->getRawDest() == Inst;
   
   Constant *Zero = Constant::getNullValue(Type::getInt32Ty(MI->getContext()));
 
@@ -807,12 +779,15 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
     Value *OtherElt = 0;
     unsigned OtherEltAlign = MemAlignment;
     
-    if (OtherPtr) {
+    if (OtherPtr == AI) {
+      OtherElt = NewElts[i];
+      OtherEltAlign = 0;
+    } else if (OtherPtr) {
       Value *Idx[2] = { Zero,
                       ConstantInt::get(Type::getInt32Ty(MI->getContext()), i) };
-      OtherElt = GetElementPtrInst::Create(OtherPtr, Idx, Idx + 2,
+      OtherElt = GetElementPtrInst::CreateInBounds(OtherPtr, Idx, Idx + 2,
                                            OtherPtr->getNameStr()+"."+Twine(i),
-                                           MI);
+                                                   MI);
       uint64_t EltOffset;
       const PointerType *OtherPtrTy = cast<PointerType>(OtherPtr->getType());
       if (const StructType *ST =
@@ -924,9 +899,7 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
       CallInst::Create(TheFn, Ops, Ops + 4, "", MI);
     }
   }
-  MI->eraseFromParent();
-  if (PossiblyDead)
-    RecursivelyDeleteTriviallyDeadInstructions(PossiblyDead);
+  DeadInsts.push_back(MI);
 }
 
 /// RewriteStoreUserOfWholeAlloca - We found a store of an integer that
@@ -937,15 +910,9 @@ void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
   // Extract each element out of the integer according to its structure offset
   // and store the element value to the individual alloca.
   Value *SrcVal = SI->getOperand(0);
-  const Type *AllocaEltTy = AI->getType()->getElementType();
+  const Type *AllocaEltTy = AI->getAllocatedType();
   uint64_t AllocaSizeBits = TD->getTypeAllocSizeInBits(AllocaEltTy);
   
-  // If this isn't a store of an integer to the whole alloca, it may be a store
-  // to the first element.  Just ignore the store in this case and normal SROA
-  // will handle it.
-  if (!isa<IntegerType>(SrcVal->getType()) ||
-      TD->getTypeAllocSizeInBits(SrcVal->getType()) != AllocaSizeBits)
-    return;
   // Handle tail padding by extending the operand
   if (TD->getTypeSizeInBits(SrcVal->getType()) != AllocaSizeBits)
     SrcVal = new ZExtInst(SrcVal,
@@ -1050,7 +1017,7 @@ void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
     }
   }
   
-  SI->eraseFromParent();
+  DeadInsts.push_back(SI);
 }
 
 /// RewriteLoadUserOfWholeAlloca - We found a load of the entire allocation to
@@ -1059,16 +1026,9 @@ void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   // Extract each element out of the NewElts according to its structure offset
   // and form the result value.
-  const Type *AllocaEltTy = AI->getType()->getElementType();
+  const Type *AllocaEltTy = AI->getAllocatedType();
   uint64_t AllocaSizeBits = TD->getTypeAllocSizeInBits(AllocaEltTy);
   
-  // If this isn't a load of the whole alloca to an integer, it may be a load
-  // of the first element.  Just ignore the load in this case and normal SROA
-  // will handle it.
-  if (!isa<IntegerType>(LI->getType()) ||
-      TD->getTypeAllocSizeInBits(LI->getType()) != AllocaSizeBits)
-    return;
-  
   DEBUG(errs() << "PROMOTING LOAD OF WHOLE ALLOCA: " << *AI << '\n' << *LI
                << '\n');
   
@@ -1139,10 +1099,9 @@ void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
     ResultVal = new TruncInst(ResultVal, LI->getType(), "", LI);
 
   LI->replaceAllUsesWith(ResultVal);
-  LI->eraseFromParent();
+  DeadInsts.push_back(LI);
 }
 
-
 /// HasPadding - Return true if the specified type has any structure or
 /// alignment padding, false otherwise.
 static bool HasPadding(const Type *Ty, const TargetData &TD) {
@@ -1192,14 +1151,10 @@ int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // the users are safe to transform.
   AllocaInfo Info;
   
-  for (Value::use_iterator I = AI->use_begin(), E = AI->use_end();
-       I != E; ++I) {
-    isSafeUseOfAllocation(cast<Instruction>(*I), AI, Info);
-    if (Info.isUnsafe) {
-      DEBUG(errs() << "Cannot transform: " << *AI << "\n  due to user: "
-                   << **I << '\n');
-      return 0;
-    }
+  isSafeForScalarRepl(AI, AI, 0, Info);
+  if (Info.isUnsafe) {
+    DEBUG(errs() << "Cannot transform: " << *AI << '\n');
+    return 0;
   }
   
   // Okay, we know all the users are promotable.  If the aggregate is a memcpy
@@ -1208,88 +1163,28 @@ int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // types, but may actually be used.  In these cases, we refuse to promote the
   // struct.
   if (Info.isMemCpySrc && Info.isMemCpyDst &&
-      HasPadding(AI->getType()->getElementType(), *TD))
+      HasPadding(AI->getAllocatedType(), *TD))
     return 0;
 
   // If we require cleanup, return 1, otherwise return 3.
   return Info.needsCleanup ? 1 : 3;
 }
 
-/// CleanupGEP - GEP is used by an Alloca, which can be promoted after the GEP
-/// is canonicalized here.
-void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
-  gep_type_iterator I = gep_type_begin(GEPI);
-  ++I;
-  
-  const ArrayType *AT = dyn_cast<ArrayType>(*I);
-  if (!AT) 
-    return;
-
-  uint64_t NumElements = AT->getNumElements();
-  
-  if (isa<ConstantInt>(I.getOperand()))
-    return;
-
-  if (NumElements == 1) {
-    GEPI->setOperand(2, 
-                  Constant::getNullValue(Type::getInt32Ty(GEPI->getContext())));
-    return;
-  } 
-    
-  assert(NumElements == 2 && "Unhandled case!");
-  // All users of the GEP must be loads.  At each use of the GEP, insert
-  // two loads of the appropriate indexed GEP and select between them.
-  Value *IsOne = new ICmpInst(GEPI, ICmpInst::ICMP_NE, I.getOperand(), 
-                              Constant::getNullValue(I.getOperand()->getType()),
-                              "isone");
-  // Insert the new GEP instructions, which are properly indexed.
-  SmallVector<Value*, 8> Indices(GEPI->op_begin()+1, GEPI->op_end());
-  Indices[1] = Constant::getNullValue(Type::getInt32Ty(GEPI->getContext()));
-  Value *ZeroIdx = GetElementPtrInst::Create(GEPI->getOperand(0),
-                                             Indices.begin(),
-                                             Indices.end(),
-                                             GEPI->getName()+".0", GEPI);
-  Indices[1] = ConstantInt::get(Type::getInt32Ty(GEPI->getContext()), 1);
-  Value *OneIdx = GetElementPtrInst::Create(GEPI->getOperand(0),
-                                            Indices.begin(),
-                                            Indices.end(),
-                                            GEPI->getName()+".1", GEPI);
-  // Replace all loads of the variable index GEP with loads from both
-  // indexes and a select.
-  while (!GEPI->use_empty()) {
-    LoadInst *LI = cast<LoadInst>(GEPI->use_back());
-    Value *Zero = new LoadInst(ZeroIdx, LI->getName()+".0", LI);
-    Value *One  = new LoadInst(OneIdx , LI->getName()+".1", LI);
-    Value *R = SelectInst::Create(IsOne, One, Zero, LI->getName(), LI);
-    LI->replaceAllUsesWith(R);
-    LI->eraseFromParent();
-  }
-  GEPI->eraseFromParent();
-}
-
-
 /// CleanupAllocaUsers - If SROA reported that it can promote the specified
 /// allocation, but only if cleaned up, perform the cleanups required.
-void SROA::CleanupAllocaUsers(AllocaInst *AI) {
-  // At this point, we know that the end result will be SROA'd and promoted, so
-  // we can insert ugly code if required so long as sroa+mem2reg will clean it
-  // up.
-  for (Value::use_iterator UI = AI->use_begin(), E = AI->use_end();
+void SROA::CleanupAllocaUsers(Value *V) {
+  for (Value::use_iterator UI = V->use_begin(), E = V->use_end();
        UI != E; ) {
     User *U = *UI++;
-    if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(U))
-      CleanupGEP(GEPI);
-    else {
-      Instruction *I = cast<Instruction>(U);
-      SmallVector<DbgInfoIntrinsic *, 2> DbgInUses;
-      if (!isa<StoreInst>(I) && OnlyUsedByDbgInfoIntrinsics(I, &DbgInUses)) {
-        // Safe to remove debug info uses.
-        while (!DbgInUses.empty()) {
-          DbgInfoIntrinsic *DI = DbgInUses.back(); DbgInUses.pop_back();
-          DI->eraseFromParent();
-        }
-        I->eraseFromParent();
+    Instruction *I = cast<Instruction>(U);
+    SmallVector<DbgInfoIntrinsic *, 2> DbgInUses;
+    if (!isa<StoreInst>(I) && OnlyUsedByDbgInfoIntrinsics(I, &DbgInUses)) {
+      // Safe to remove debug info uses.
+      while (!DbgInUses.empty()) {
+        DbgInfoIntrinsic *DI = DbgInUses.back(); DbgInUses.pop_back();
+        DI->eraseFromParent();
       }
+      I->eraseFromParent();
     }
   }
 }
@@ -1395,7 +1290,7 @@ bool SROA::CanConvertToScalar(Value *V, bool &IsNotTrivial, const Type *&VecTy,
       
       // Compute the offset that this GEP adds to the pointer.
       SmallVector<Value*, 8> Indices(GEP->op_begin()+1, GEP->op_end());
-      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getOperand(0)->getType(),
+      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getPointerOperandType(),
                                                 &Indices[0], Indices.size());
       // See if all uses can be converted.
       if (!CanConvertToScalar(GEP, IsNotTrivial, VecTy, SawVec,Offset+GEPOffset,
@@ -1457,7 +1352,7 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
     if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(User)) {
       // Compute the offset that this GEP adds to the pointer.
       SmallVector<Value*, 8> Indices(GEP->op_begin()+1, GEP->op_end());
-      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getOperand(0)->getType(),
+      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getPointerOperandType(),
                                                 &Indices[0], Indices.size());
       ConvertUsesToScalar(GEP, NewAI, Offset+GEPOffset*8);
       GEP->eraseFromParent();
@@ -1478,13 +1373,16 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
     
     if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
       assert(SI->getOperand(0) != Ptr && "Consistency error!");
-      // FIXME: Remove once builder has Twine API.
-      Value *Old = Builder.CreateLoad(NewAI,
-                                      (NewAI->getName()+".in").str().c_str());
+      Instruction *Old = Builder.CreateLoad(NewAI, NewAI->getName()+".in");
       Value *New = ConvertScalar_InsertValue(SI->getOperand(0), Old, Offset,
                                              Builder);
       Builder.CreateStore(New, NewAI);
       SI->eraseFromParent();
+      
+      // If the load we just inserted is now dead, then the inserted store
+      // overwrote the entire thing.
+      if (Old->use_empty())
+        Old->eraseFromParent();
       continue;
     }
     
@@ -1504,13 +1402,16 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
           for (unsigned i = 1; i != NumBytes; ++i)
             APVal |= APVal << 8;
         
-        // FIXME: Remove once builder has Twine API.
-        Value *Old = Builder.CreateLoad(NewAI,
-                                        (NewAI->getName()+".in").str().c_str());
+        Instruction *Old = Builder.CreateLoad(NewAI, NewAI->getName()+".in");
         Value *New = ConvertScalar_InsertValue(
                                     ConstantInt::get(User->getContext(), APVal),
                                                Old, Offset, Builder);
         Builder.CreateStore(New, NewAI);
+        
+        // If the load we just inserted is now dead, then the memset overwrote
+        // the entire thing.
+        if (Old->use_empty())
+          Old->eraseFromParent();        
       }
       MSI->eraseFromParent();
       continue;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
index e905952..a36da78 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
@@ -189,6 +189,77 @@ static bool RemoveUnreachableBlocksFromFn(Function &F) {
   return true;
 }
 
+/// MergeEmptyReturnBlocks - If we have more than one empty return block
+/// (ignoring phi nodes), merge them together to promote recursive block merging.
+static bool MergeEmptyReturnBlocks(Function &F) {
+  bool Changed = false;
+  
+  BasicBlock *RetBlock = 0;
+  
+  // Scan all the blocks in the function, looking for empty return blocks.
+  for (Function::iterator BBI = F.begin(), E = F.end(); BBI != E; ) {
+    BasicBlock &BB = *BBI++;
+    
+    // Only look at return blocks.
+    ReturnInst *Ret = dyn_cast<ReturnInst>(BB.getTerminator());
+    if (Ret == 0) continue;
+    
+    // Only look at the block if it is empty or the only other thing in it is a
+    // single PHI node that is the operand to the return.
+    if (Ret != &BB.front()) {
+      // Check for something else in the block.
+      BasicBlock::iterator I = Ret;
+      --I;
+      if (!isa<PHINode>(I) || I != BB.begin() ||
+          Ret->getNumOperands() == 0 ||
+          Ret->getOperand(0) != I)
+        continue;
+    }
+    
+    // If this is the first returning block, remember it and keep going.
+    if (RetBlock == 0) {
+      RetBlock = &BB;
+      continue;
+    }
+    
+    // Otherwise, we found a duplicate return block.  Merge the two.
+    Changed = true;
+    
+    // The case where the return has no operand, or where the returned
+    // values agree, is trivial.  Note that they can't agree if there are
+    // phis in the blocks.
+    if (Ret->getNumOperands() == 0 ||
+        Ret->getOperand(0) == 
+          cast<ReturnInst>(RetBlock->getTerminator())->getOperand(0)) {
+      BB.replaceAllUsesWith(RetBlock);
+      BB.eraseFromParent();
+      continue;
+    }
+    
+    // If the canonical return block has no PHI node, create one now.
+    PHINode *RetBlockPHI = dyn_cast<PHINode>(RetBlock->begin());
+    if (RetBlockPHI == 0) {
+      Value *InVal = cast<ReturnInst>(RetBlock->begin())->getOperand(0);
+      RetBlockPHI = PHINode::Create(Ret->getOperand(0)->getType(), "merge",
+                                    &RetBlock->front());
+      
+      for (pred_iterator PI = pred_begin(RetBlock), E = pred_end(RetBlock);
+           PI != E; ++PI)
+        RetBlockPHI->addIncoming(InVal, *PI);
+      RetBlock->getTerminator()->setOperand(0, RetBlockPHI);
+    }
+    
+    // Turn BB into a block that just unconditionally branches to the return
+    // block.  This handles the case when the two return blocks have a common
+    // predecessor but return different things.
+    RetBlockPHI->addIncoming(Ret->getOperand(0), &BB);
+    BB.getTerminator()->eraseFromParent();
+    BranchInst::Create(RetBlock, &BB);
+  }
+  
+  return Changed;
+}
+
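The merging logic in MergeEmptyReturnBlocks can be sketched outside LLVM with a toy model (the block names and value types here are invented for illustration, not the pass's real data structures): duplicate return blocks collapse into the first one seen, and differing return values become phi incomings, created lazily just as the pass lazily creates RetBlockPHI.

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Toy analogue of MergeEmptyReturnBlocks: blocks that only return a value
// are folded into the first return block seen; differing values become
// "phi" entries keyed by the block that now branches to the canonical one.
struct Block { std::string name; std::optional<int> retVal; };

struct Merged {
  std::string canonical;           // surviving return block
  std::map<std::string, int> phi;  // branching block -> incoming value
};

Merged mergeReturnBlocks(const std::vector<Block> &blocks) {
  Merged out;
  out.canonical = blocks.front().name;
  std::optional<int> canonVal = blocks.front().retVal;
  for (std::size_t i = 1; i < blocks.size(); ++i) {
    const Block &b = blocks[i];
    // Trivial case: no return operand, or the returned values agree.
    if (b.retVal == canonVal && out.phi.empty())
      continue;
    // Lazily turn the canonical return value into a phi entry.
    if (out.phi.empty() && canonVal)
      out.phi[out.canonical] = *canonVal;
    if (b.retVal)
      out.phi[b.name] = *b.retVal;
  }
  return out;
}
```

This is only the value-merging shape; the real pass additionally rewrites terminators and predecessor lists.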
 /// IterativeSimplifyCFG - Call SimplifyCFG on all the blocks in the function,
 /// iterating until no more changes are made.
 static bool IterativeSimplifyCFG(Function &F) {
@@ -216,6 +287,7 @@ static bool IterativeSimplifyCFG(Function &F) {
 //
 bool CFGSimplifyPass::runOnFunction(Function &F) {
   bool EverChanged = RemoveUnreachableBlocksFromFn(F);
+  EverChanged |= MergeEmptyReturnBlocks(F);
   EverChanged |= IterativeSimplifyCFG(F);
   
   // If neither pass changed anything, we're done.
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
index 6fd884b..3c28ad2 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
@@ -76,6 +76,11 @@ public:
   /// return value has 'intptr_t' type.
   Value *EmitStrLen(Value *Ptr, IRBuilder<> &B);
 
+  /// EmitStrChr - Emit a call to the strchr function to the builder, for the
+  /// specified pointer and character.  Ptr is required to be some pointer type,
+  /// and the return value has 'i8*' type.
+  Value *EmitStrChr(Value *Ptr, char C, IRBuilder<> &B);
+  
   /// EmitMemCpy - Emit a call to the memcpy function to the builder.  This
   /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
   Value *EmitMemCpy(Value *Dst, Value *Src, Value *Len,
@@ -151,6 +156,26 @@ Value *LibCallOptimization::EmitStrLen(Value *Ptr, IRBuilder<> &B) {
   return CI;
 }
 
+/// EmitStrChr - Emit a call to the strchr function to the builder, for the
+/// specified pointer and character.  Ptr is required to be some pointer type,
+/// and the return value has 'i8*' type.
+Value *LibCallOptimization::EmitStrChr(Value *Ptr, char C, IRBuilder<> &B) {
+  Module *M = Caller->getParent();
+  AttributeWithIndex AWI =
+    AttributeWithIndex::get(~0u, Attribute::ReadOnly | Attribute::NoUnwind);
+  
+  const Type *I8Ptr = Type::getInt8PtrTy(*Context);
+  const Type *I32Ty = Type::getInt32Ty(*Context);
+  Constant *StrChr = M->getOrInsertFunction("strchr", AttrListPtr::get(&AWI, 1),
+                                            I8Ptr, I8Ptr, I32Ty, NULL);
+  CallInst *CI = B.CreateCall2(StrChr, CastToCStr(Ptr, B),
+                               ConstantInt::get(I32Ty, C), "strchr");
+  if (const Function *F = dyn_cast<Function>(StrChr->stripPointerCasts()))
+    CI->setCallingConv(F->getCallingConv());
+  return CI;
+}
+
+
 /// EmitMemCpy - Emit a call to the memcpy function to the builder.  This always
 /// expects that the size has type 'intptr_t' and Dst/Src are pointers.
 Value *LibCallOptimization::EmitMemCpy(Value *Dst, Value *Src, Value *Len,
@@ -880,17 +905,16 @@ struct StrLenOpt : public LibCallOptimization {
     if (uint64_t Len = GetStringLength(Src))
       return ConstantInt::get(CI->getType(), Len-1);
 
-    // Handle strlen(p) != 0.
-    if (!IsOnlyUsedInZeroEqualityComparison(CI)) return 0;
-
     // strlen(x) != 0 --> *x != 0
     // strlen(x) == 0 --> *x == 0
-    return B.CreateZExt(B.CreateLoad(Src, "strlenfirst"), CI->getType());
+    if (IsOnlyUsedInZeroEqualityComparison(CI))
+      return B.CreateZExt(B.CreateLoad(Src, "strlenfirst"), CI->getType());
+    return 0;
   }
 };
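The strlen fold above leans on a libc identity: a C string has nonzero length exactly when its first byte is nonzero. A standalone check of that identity (host code, not the IR-level rewrite):

```cpp
#include <cassert>
#include <cstring>

// strlen(x) != 0  <=>  *x != 0: strlen stops at the first NUL, so the
// length is zero iff the first byte is the terminator.
bool lenNonZero(const char *s)   { return std::strlen(s) != 0; }
bool firstNonZero(const char *s) { return *s != '\0'; }
```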
 
 //===---------------------------------------===//
-// 'strto*' Optimizations
+// 'strto*' Optimizations.  This handles strtol, strtod, strtof, strtoul, etc.
 
 struct StrToOpt : public LibCallOptimization {
   virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
@@ -910,6 +934,52 @@ struct StrToOpt : public LibCallOptimization {
   }
 };
 
+//===---------------------------------------===//
+// 'strstr' Optimizations
+
+struct StrStrOpt : public LibCallOptimization {
+  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
+    const FunctionType *FT = Callee->getFunctionType();
+    if (FT->getNumParams() != 2 ||
+        !isa<PointerType>(FT->getParamType(0)) ||
+        !isa<PointerType>(FT->getParamType(1)) ||
+        !isa<PointerType>(FT->getReturnType()))
+      return 0;
+
+    // fold strstr(x, x) -> x.
+    if (CI->getOperand(1) == CI->getOperand(2))
+      return B.CreateBitCast(CI->getOperand(1), CI->getType());
+    
+    // See if either input string is a constant string.
+    std::string SearchStr, ToFindStr;
+    bool HasStr1 = GetConstantStringInfo(CI->getOperand(1), SearchStr);
+    bool HasStr2 = GetConstantStringInfo(CI->getOperand(2), ToFindStr);
+    
+    // fold strstr(x, "") -> x.
+    if (HasStr2 && ToFindStr.empty())
+      return B.CreateBitCast(CI->getOperand(1), CI->getType());
+    
+    // If both strings are known, constant fold it.
+    if (HasStr1 && HasStr2) {
+      std::string::size_type Offset = SearchStr.find(ToFindStr);
+      
+      if (Offset == std::string::npos) // strstr("foo", "bar") -> null
+        return Constant::getNullValue(CI->getType());
+
+      // strstr("abcd", "bc") -> gep((char*)"abcd", 1)
+      Value *Result = CastToCStr(CI->getOperand(1), B);
+      Result = B.CreateConstInBoundsGEP1_64(Result, Offset, "strstr");
+      return B.CreateBitCast(Result, CI->getType());
+    }
+    
+    // fold strstr(x, "y") -> strchr(x, 'y').
+    if (HasStr2 && ToFindStr.size() == 1)
+      return B.CreateBitCast(EmitStrChr(CI->getOperand(1), ToFindStr[0], B),
+                             CI->getType());
+    return 0;
+  }
+};
+  
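Each rewrite in StrStrOpt mirrors an identity that also holds for libc's own strstr, so the folds can be sanity-checked directly on host strings (purely illustrative, independent of the IR transformation):

```cpp
#include <cassert>
#include <cstring>

// The StrStrOpt rewrites, restated as libc identities.
void checkStrStrFolds() {
  const char *s = "abcd";
  assert(std::strstr(s, s) == s);                      // strstr(x, x) -> x
  assert(std::strstr(s, "") == s);                     // strstr(x, "") -> x
  assert(std::strstr(s, "bc") == s + 1);               // constant fold: offset 1
  assert(std::strstr("foo", "bar") == nullptr);        // no match -> null
  assert(std::strstr(s, "c") == std::strchr(s, 'c'));  // 1-char needle -> strchr
}
```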
 
 //===---------------------------------------===//
 // 'memcmp' Optimizations
@@ -941,19 +1011,6 @@ struct MemCmpOpt : public LibCallOptimization {
       return B.CreateSExt(B.CreateSub(LHSV, RHSV, "chardiff"), CI->getType());
     }
 
-    // memcmp(S1,S2,2) != 0 -> (*(short*)LHS ^ *(short*)RHS)  != 0
-    // memcmp(S1,S2,4) != 0 -> (*(int*)LHS ^ *(int*)RHS)  != 0
-    if ((Len == 2 || Len == 4) && IsOnlyUsedInZeroEqualityComparison(CI)) {
-      const Type *PTy = PointerType::getUnqual(Len == 2 ?
-                       Type::getInt16Ty(*Context) : Type::getInt32Ty(*Context));
-      LHS = B.CreateBitCast(LHS, PTy, "tmp");
-      RHS = B.CreateBitCast(RHS, PTy, "tmp");
-      LoadInst *LHSV = B.CreateLoad(LHS, "lhsv");
-      LoadInst *RHSV = B.CreateLoad(RHS, "rhsv");
-      LHSV->setAlignment(1); RHSV->setAlignment(1);  // Unaligned loads.
-      return B.CreateZExt(B.CreateXor(LHSV, RHSV, "shortdiff"), CI->getType());
-    }
-
     // Constant folding: memcmp(x, y, l) -> cnst (all arguments are constant)
     std::string LHSStr, RHSStr;
     if (GetConstantStringInfo(LHS, LHSStr) &&
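The deleted fold rewrote `memcmp(a, b, 2) != 0` as an unaligned 16-bit load of each side followed by an xor. The underlying equivalence is easy to check on the host; `memcpy` stands in here for the alignment-1 loads (a sketch of the identity, not the pass's code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

bool memcmp2NonZero(const void *a, const void *b) {
  return std::memcmp(a, b, 2) != 0;
}

bool xor16NonZero(const void *a, const void *b) {
  std::uint16_t x, y;
  std::memcpy(&x, a, 2);  // plays the role of the unaligned load
  std::memcpy(&y, b, 2);
  return (x ^ y) != 0;    // nonzero xor iff any byte differs
}
```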
@@ -1051,7 +1108,7 @@ struct SizeOpt : public LibCallOptimization {
 
     const Type *Ty = Callee->getFunctionType()->getReturnType();
 
-    if (Const->getZExtValue() < 2)
+    if (Const->getZExtValue() == 0)
       return Constant::getAllOnesValue(Ty);
     else
       return ConstantInt::get(Ty, 0);
@@ -1071,8 +1128,8 @@ struct MemCpyChkOpt : public LibCallOptimization {
     if (FT->getNumParams() != 4 || FT->getReturnType() != FT->getParamType(0) ||
         !isa<PointerType>(FT->getParamType(0)) ||
         !isa<PointerType>(FT->getParamType(1)) ||
-	!isa<IntegerType>(FT->getParamType(3)) ||
-	FT->getParamType(2) != TD->getIntPtrType(*Context))
+        !isa<IntegerType>(FT->getParamType(3)) ||
+        FT->getParamType(2) != TD->getIntPtrType(*Context))
       return 0;
 
     ConstantInt *SizeCI = dyn_cast<ConstantInt>(CI->getOperand(4));
@@ -1099,7 +1156,7 @@ struct MemSetChkOpt : public LibCallOptimization {
     if (FT->getNumParams() != 4 || FT->getReturnType() != FT->getParamType(0) ||
         !isa<PointerType>(FT->getParamType(0)) ||
         !isa<IntegerType>(FT->getParamType(1)) ||
-	!isa<IntegerType>(FT->getParamType(3)) ||
+        !isa<IntegerType>(FT->getParamType(3)) ||
         FT->getParamType(2) != TD->getIntPtrType(*Context))
       return 0;
 
@@ -1129,7 +1186,7 @@ struct MemMoveChkOpt : public LibCallOptimization {
     if (FT->getNumParams() != 4 || FT->getReturnType() != FT->getParamType(0) ||
         !isa<PointerType>(FT->getParamType(0)) ||
         !isa<PointerType>(FT->getParamType(1)) ||
-	!isa<IntegerType>(FT->getParamType(3)) ||
+        !isa<IntegerType>(FT->getParamType(3)) ||
         FT->getParamType(2) != TD->getIntPtrType(*Context))
       return 0;
 
@@ -1675,8 +1732,8 @@ namespace {
     // String and Memory LibCall Optimizations
     StrCatOpt StrCat; StrNCatOpt StrNCat; StrChrOpt StrChr; StrCmpOpt StrCmp;
     StrNCmpOpt StrNCmp; StrCpyOpt StrCpy; StrNCpyOpt StrNCpy; StrLenOpt StrLen;
-    StrToOpt StrTo; MemCmpOpt MemCmp; MemCpyOpt MemCpy; MemMoveOpt MemMove;
-    MemSetOpt MemSet;
+    StrToOpt StrTo; StrStrOpt StrStr;
+    MemCmpOpt MemCmp; MemCpyOpt MemCpy; MemMoveOpt MemMove; MemSetOpt MemSet;
     // Math Library Optimizations
     PowOpt Pow; Exp2Opt Exp2; UnaryDoubleFPOpt UnaryDoubleFP;
     // Integer Optimizations
@@ -1738,6 +1795,7 @@ void SimplifyLibCalls::InitOptimizations() {
   Optimizations["strtoll"] = &StrTo;
   Optimizations["strtold"] = &StrTo;
   Optimizations["strtoull"] = &StrTo;
+  Optimizations["strstr"] = &StrStr;
   Optimizations["memcmp"] = &MemCmp;
   Optimizations["memcpy"] = &MemCpy;
   Optimizations["memmove"] = &MemMove;
@@ -2644,12 +2702,6 @@ bool SimplifyLibCalls::doInitialization(Module &M) {
 //   * strcspn("",a) -> 0
//   * strcspn(s,"") -> strlen(s)
 //
-// strstr: (PR5783)
-//   * strstr(x,x)  -> x
-//   * strstr(x, "") -> x
-//   * strstr(x, "a") -> strchr(x, 'a')
-//   * strstr(s1,s2) -> result   (if s1 and s2 are constant strings)
-//
 // tan, tanf, tanl:
 //   * tan(atan(x)) -> x
 //
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
index ccd97c8..19c7206 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
@@ -309,10 +309,10 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
         if (TIL == DestLoop) {
           // Both in the same loop, the NewBB joins loop.
           DestLoop->addBasicBlockToLoop(NewBB, LI->getBase());
-        } else if (TIL->contains(DestLoop->getHeader())) {
+        } else if (TIL->contains(DestLoop)) {
           // Edge from an outer loop to an inner loop.  Add to the outer loop.
           TIL->addBasicBlockToLoop(NewBB, LI->getBase());
-        } else if (DestLoop->contains(TIL->getHeader())) {
+        } else if (DestLoop->contains(TIL)) {
           // Edge from an inner loop to an outer loop.  Add to the outer loop.
           DestLoop->addBasicBlockToLoop(NewBB, LI->getBase());
         } else {
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
index 690972d..7fcc5f7 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
@@ -109,7 +109,7 @@ X("loopsimplify", "Canonicalize natural loops", true);
 const PassInfo *const llvm::LoopSimplifyID = &X;
 Pass *llvm::createLoopSimplifyPass() { return new LoopSimplify(); }
 
-/// runOnFunction - Run down all loops in the CFG (recursively, but we could do
+/// runOnLoop - Run down all loops in the CFG (recursively, but we could do
 /// it in any convenient order) inserting preheaders...
 ///
 bool LoopSimplify::runOnLoop(Loop *l, LPPassManager &LPM) {
@@ -305,12 +305,6 @@ ReprocessLoop:
     }
   }
 
-  // If there are duplicate phi nodes (for example, from loop rotation),
-  // get rid of them.
-  for (Loop::block_iterator BB = L->block_begin(), E = L->block_end();
-       BB != E; ++BB)
-    EliminateDuplicatePHINodes(*BB);
-
   return Changed;
 }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
index 6232f32..6b2c591 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
@@ -194,7 +194,7 @@ bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM)
     OrigPHINode.push_back(PN);
     if (Instruction *I = 
                 dyn_cast<Instruction>(PN->getIncomingValueForBlock(LatchBlock)))
-      if (L->contains(I->getParent()))
+      if (L->contains(I))
         LastValueMap[I] = I;
   }
 
@@ -222,7 +222,7 @@ bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM)
           PHINode *NewPHI = cast<PHINode>(ValueMap[OrigPHINode[i]]);
           Value *InVal = NewPHI->getIncomingValueForBlock(LatchBlock);
           if (Instruction *InValI = dyn_cast<Instruction>(InVal))
-            if (It > 1 && L->contains(InValI->getParent()))
+            if (It > 1 && L->contains(InValI))
               InVal = LastValueMap[InValI];
           ValueMap[OrigPHINode[i]] = InVal;
           New->getInstList().erase(NewPHI);
@@ -244,7 +244,7 @@ bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM)
              UI != UE;) {
           Instruction *UseInst = cast<Instruction>(*UI);
           ++UI;
-          if (isa<PHINode>(UseInst) && !L->contains(UseInst->getParent())) {
+          if (isa<PHINode>(UseInst) && !L->contains(UseInst)) {
             PHINode *phi = cast<PHINode>(UseInst);
             Value *Incoming = phi->getIncomingValueForBlock(*BB);
             phi->addIncoming(Incoming, New);
@@ -295,7 +295,7 @@ bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM)
       // If this value was defined in the loop, take the value defined by the
       // last iteration of the loop.
       if (Instruction *InValI = dyn_cast<Instruction>(InVal)) {
-        if (L->contains(InValI->getParent()))
+        if (L->contains(InValI))
           InVal = LastValueMap[InVal];
       }
       PN->addIncoming(InVal, LastIterationBB);
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
index ba41bf9..9881b3c 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
@@ -149,7 +149,29 @@ Value *SSAUpdater::GetValueInMiddleOfBlock(BasicBlock *BB) {
   if (SingularValue != 0)
     return SingularValue;
 
-  // Otherwise, we do need a PHI: insert one now.
+  // Otherwise, we do need a PHI: check to see if we already have one available
+  // in this block that produces the right value.
+  if (isa<PHINode>(BB->begin())) {
+    DenseMap<BasicBlock*, Value*> ValueMapping(PredValues.begin(),
+                                               PredValues.end());
+    PHINode *SomePHI;
+    for (BasicBlock::iterator It = BB->begin();
+         (SomePHI = dyn_cast<PHINode>(It)); ++It) {
+      // Scan this phi to see if it is what we need.
+      bool Equal = true;
+      for (unsigned i = 0, e = SomePHI->getNumIncomingValues(); i != e; ++i)
+        if (ValueMapping[SomePHI->getIncomingBlock(i)] !=
+            SomePHI->getIncomingValue(i)) {
+          Equal = false;
+          break;
+        }
+         
+      if (Equal)
+        return SomePHI;
+    }
+  }
+  
+  // Okay, no existing phi matched; insert a new one now.
   PHINode *InsertedPHI = PHINode::Create(PrototypeValue->getType(),
                                          PrototypeValue->getName(),
                                          &BB->front());
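The reuse scan added above can be modeled with plain containers: an existing phi is reusable iff every one of its (incoming block, value) pairs matches the mapping we were about to materialize. A self-contained sketch (type and function names invented for illustration):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A phi as a list of (predecessor, incoming value) pairs; the pass's
// ValueMapping is modeled as a map from predecessor name to value.
using Phi = std::vector<std::pair<std::string, int>>;

bool phiMatches(const Phi &phi, const std::map<std::string, int> &wanted) {
  for (const auto &in : phi) {
    auto it = wanted.find(in.first);
    if (it == wanted.end() || it->second != in.second)
      return false;  // mirrors the 'Equal = false; break;' path
  }
  return true;
}
```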
@@ -198,7 +220,7 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
 
   // Query AvailableVals by doing an insertion of null.
   std::pair<AvailableValsTy::iterator, bool> InsertRes =
-  AvailableVals.insert(std::make_pair(BB, WeakVH()));
+    AvailableVals.insert(std::make_pair(BB, TrackingVH<Value>()));
 
   // Handle the case when the insertion fails because we have already seen BB.
   if (!InsertRes.second) {
@@ -214,8 +236,8 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
     // it.  When we get back to the first instance of the recursion we will fill
     // in the PHI node.
     return InsertRes.first->second =
-    PHINode::Create(PrototypeValue->getType(), PrototypeValue->getName(),
-                    &BB->front());
+      PHINode::Create(PrototypeValue->getType(), PrototypeValue->getName(),
+                      &BB->front());
   }
 
   // Okay, the value isn't in the map and we just inserted a null in the entry
diff --git a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
index c765d96..4ef57fe 100644
--- a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
@@ -681,7 +681,7 @@ void SlotTracker::processFunction() {
   ST_DEBUG("Inserting Instructions:\n");
 
   MetadataContext &TheMetadata = TheFunction->getContext().getMetadata();
-  typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+  typedef SmallVector<std::pair<unsigned, MDNode*>, 2> MDMapTy;
   MDMapTy MDs;
 
   // Add all of the basic blocks and instructions with no names.
@@ -813,10 +813,9 @@ void SlotTracker::CreateFunctionSlot(const Value *V) {
 void SlotTracker::CreateMetadataSlot(const MDNode *N) {
   assert(N && "Can't insert a null Value into SlotTracker!");
 
-  // Don't insert if N contains an instruction.
-  for (unsigned i = 0, e = N->getNumElements(); i != e; ++i)
-    if (N->getElement(i) && isa<Instruction>(N->getElement(i)))
-      return;
+  // Don't insert if N is function-local metadata.
+  if (N->isFunctionLocal())
+    return;
 
   ValueMap::iterator I = mdnMap.find(N);
   if (I != mdnMap.end())
@@ -1232,7 +1231,7 @@ static void WriteAsOperandInternal(raw_ostream &Out, const Value *V,
   }
 
   if (const MDNode *N = dyn_cast<MDNode>(V)) {
-    if (Machine->getMetadataSlot(N) == -1) {
+    if (N->isFunctionLocal()) {
       // Print metadata inline, not via slot reference number.
       Out << "!{";
       for (unsigned mi = 0, me = N->getNumElements(); mi != me; ++mi) {
@@ -2086,7 +2085,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
   // Print Metadata info
   if (!MDNames.empty()) {
     MetadataContext &TheMetadata = I.getContext().getMetadata();
-    typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+    typedef SmallVector<std::pair<unsigned, MDNode*>, 2> MDMapTy;
     MDMapTy MDs;
     TheMetadata.getMDs(&I, MDs);
     for (MDMapTy::const_iterator MI = MDs.begin(), ME = MDs.end(); MI != ME; 
diff --git a/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp b/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
index c7f7f53..16437bc 100644
--- a/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
@@ -35,7 +35,7 @@ LLVMContext &BasicBlock::getContext() const {
 
 // Explicit instantiation of SymbolTableListTraits since some of the methods
 // are not in the public header file...
-template class SymbolTableListTraits<Instruction, BasicBlock>;
+template class llvm::SymbolTableListTraits<Instruction, BasicBlock>;
 
 
 BasicBlock::BasicBlock(LLVMContext &C, const Twine &Name, Function *NewParent,
diff --git a/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp b/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
index 7f713d1..2449739 100644
--- a/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
@@ -1839,14 +1839,16 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
       }
     }
 
-    if (!isa<ConstantExpr>(C1) && isa<ConstantExpr>(C2)) {
+    if ((!isa<ConstantExpr>(C1) && isa<ConstantExpr>(C2)) ||
+        (C1->isNullValue() && !C2->isNullValue())) {
       // If C2 is a constant expr and C1 isn't, flip them around and fold the
       // other way if possible.
+      // Also, if C1 is null and C2 isn't, flip them around.
       switch (pred) {
       case ICmpInst::ICMP_EQ:
       case ICmpInst::ICMP_NE:
         // No change of predicate required.
-        return ConstantFoldCompareInstruction(Context, pred, C2, C1);
+        return ConstantExpr::getICmp(pred, C2, C1);
 
       case ICmpInst::ICMP_ULT:
       case ICmpInst::ICMP_SLT:
@@ -1858,7 +1860,7 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
       case ICmpInst::ICMP_SGE:
         // Change the predicate as necessary to swap the operands.
         pred = ICmpInst::getSwappedPredicate((ICmpInst::Predicate)pred);
-        return ConstantFoldCompareInstruction(Context, pred, C2, C1);
+        return ConstantExpr::getICmp(pred, C2, C1);
 
       default:  // These predicates cannot be flopped around.
         break;
diff --git a/libclamav/c++/llvm/lib/VMCore/Constants.cpp b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
index a62f75b..34fc9a8 100644
--- a/libclamav/c++/llvm/lib/VMCore/Constants.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
@@ -627,6 +627,12 @@ Constant* ConstantVector::get(Constant* const* Vals, unsigned NumVals) {
   return get(std::vector<Constant*>(Vals, Vals+NumVals));
 }
 
+Constant* ConstantExpr::getNSWNeg(Constant* C) {
+  assert(C->getType()->isIntOrIntVector() &&
+         "Cannot NEG a nonintegral value!");
+  return getNSWSub(ConstantFP::getZeroValueForNegation(C->getType()), C);
+}
+
 Constant* ConstantExpr::getNSWAdd(Constant* C1, Constant* C2) {
   return getTy(C1->getType(), Instruction::Add, C1, C2,
                OverflowingBinaryOperator::NoSignedWrap);
@@ -637,6 +643,11 @@ Constant* ConstantExpr::getNSWSub(Constant* C1, Constant* C2) {
                OverflowingBinaryOperator::NoSignedWrap);
 }
 
+Constant* ConstantExpr::getNSWMul(Constant* C1, Constant* C2) {
+  return getTy(C1->getType(), Instruction::Mul, C1, C2,
+               OverflowingBinaryOperator::NoSignedWrap);
+}
+
 Constant* ConstantExpr::getExactSDiv(Constant* C1, Constant* C2) {
   return getTy(C1->getType(), Instruction::SDiv, C1, C2,
                SDivOperator::IsExact);
diff --git a/libclamav/c++/llvm/lib/VMCore/Dominators.cpp b/libclamav/c++/llvm/lib/VMCore/Dominators.cpp
index 26c02e0..3441750 100644
--- a/libclamav/c++/llvm/lib/VMCore/Dominators.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Dominators.cpp
@@ -47,8 +47,8 @@ VerifyDomInfoX("verify-dom-info", cl::location(VerifyDomInfo),
 //
 //===----------------------------------------------------------------------===//
 
-TEMPLATE_INSTANTIATION(class DomTreeNodeBase<BasicBlock>);
-TEMPLATE_INSTANTIATION(class DominatorTreeBase<BasicBlock>);
+TEMPLATE_INSTANTIATION(class llvm::DomTreeNodeBase<BasicBlock>);
+TEMPLATE_INSTANTIATION(class llvm::DominatorTreeBase<BasicBlock>);
 
 char DominatorTree::ID = 0;
 static RegisterPass<DominatorTree>
diff --git a/libclamav/c++/llvm/lib/VMCore/Function.cpp b/libclamav/c++/llvm/lib/VMCore/Function.cpp
index 88e1fe8..767f8a6 100644
--- a/libclamav/c++/llvm/lib/VMCore/Function.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Function.cpp
@@ -29,8 +29,8 @@ using namespace llvm;
 
 // Explicit instantiations of SymbolTableListTraits since some of the methods
 // are not in the public header file...
-template class SymbolTableListTraits<Argument, Function>;
-template class SymbolTableListTraits<BasicBlock, Function>;
+template class llvm::SymbolTableListTraits<Argument, Function>;
+template class llvm::SymbolTableListTraits<BasicBlock, Function>;
 
 //===----------------------------------------------------------------------===//
 // Argument Implementation
diff --git a/libclamav/c++/llvm/lib/VMCore/Instructions.cpp b/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
index b03ee93..97fec39 100644
--- a/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
@@ -1772,6 +1772,18 @@ BinaryOperator *BinaryOperator::CreateNeg(Value *Op, const Twine &Name,
                             Op->getType(), Name, InsertAtEnd);
 }
 
+BinaryOperator *BinaryOperator::CreateNSWNeg(Value *Op, const Twine &Name,
+                                             Instruction *InsertBefore) {
+  Value *zero = ConstantFP::getZeroValueForNegation(Op->getType());
+  return BinaryOperator::CreateNSWSub(zero, Op, Name, InsertBefore);
+}
+
+BinaryOperator *BinaryOperator::CreateNSWNeg(Value *Op, const Twine &Name,
+                                             BasicBlock *InsertAtEnd) {
+  Value *zero = ConstantFP::getZeroValueForNegation(Op->getType());
+  return BinaryOperator::CreateNSWSub(zero, Op, Name, InsertAtEnd);
+}
+
 BinaryOperator *BinaryOperator::CreateFNeg(Value *Op, const Twine &Name,
                                            Instruction *InsertBefore) {
   Value *zero = ConstantFP::getZeroValueForNegation(Op->getType());
diff --git a/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp b/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp
index 3b4a1a3..3e32605 100644
--- a/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/ValueHandle.h"
 #include "LLVMContextImpl.h"
-#include <set>
 
 using namespace llvm;
 
diff --git a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
index 8a2378e..2ea2d5e 100644
--- a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
+++ b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
@@ -27,6 +27,7 @@
 #include "llvm/ADT/APInt.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/FoldingSet.h"
+#include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/StringMap.h"
 #include <vector>
 
@@ -159,6 +160,11 @@ public:
   TypeMap<StructValType, StructType> StructTypes;
   TypeMap<IntegerValType, IntegerType> IntegerTypes;
 
+  // Opaque types are not structurally uniqued, so don't use TypeMap.
+  typedef SmallPtrSet<const OpaqueType*, 8> OpaqueTypesTy;
+  OpaqueTypesTy OpaqueTypes;
+  
+
   /// ValueHandles - This map keeps track of all of the value handles that are
   /// watching a Value*.  The Value::HasValueHandle bit is used to know
   // whether or not a value has an entry in this map.
@@ -201,6 +207,11 @@ public:
         delete I->second;
     }
     MDNodeSet.clear();
+    for (OpaqueTypesTy::iterator I = OpaqueTypes.begin(), E = OpaqueTypes.end();
+        I != E; ++I) {
+      (*I)->AbstractTypeUsers.clear();
+      delete *I;
+    }
   }
 };
 
diff --git a/libclamav/c++/llvm/lib/VMCore/LeaksContext.h b/libclamav/c++/llvm/lib/VMCore/LeaksContext.h
index bd10a47..abff090 100644
--- a/libclamav/c++/llvm/lib/VMCore/LeaksContext.h
+++ b/libclamav/c++/llvm/lib/VMCore/LeaksContext.h
@@ -46,8 +46,9 @@ struct LeakDetectorImpl {
   // immediately, it is added to the CachedValue Value.  If it is
   // immediately removed, no set search need be performed.
   void addGarbage(const T* o) {
+    assert(Ts.count(o) == 0 && "Object already in set!");
     if (Cache) {
-      assert(Ts.count(Cache) == 0 && "Object already in set!");
+      assert(Cache != o && "Object already in set!");
       Ts.insert(Cache);
     }
     Cache = o;
diff --git a/libclamav/c++/llvm/lib/VMCore/Metadata.cpp b/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
index b80b6bf..0a3ddcb 100644
--- a/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
@@ -11,14 +11,15 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "LLVMContextImpl.h"
 #include "llvm/Metadata.h"
+#include "LLVMContextImpl.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
 #include "llvm/Instruction.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/StringMap.h"
 #include "SymbolTableListTraitsImpl.h"
+#include "llvm/Support/ValueHandle.h"
 using namespace llvm;
 
 //===----------------------------------------------------------------------===//
@@ -28,6 +29,10 @@ using namespace llvm;
 //===----------------------------------------------------------------------===//
 // MDString implementation.
 //
+
+MDString::MDString(LLVMContext &C, StringRef S)
+  : MetadataBase(Type::getMetadataTy(C), Value::MDStringVal), Str(S) {}
+
 MDString *MDString::get(LLVMContext &Context, StringRef Str) {
   LLVMContextImpl *pImpl = Context.pImpl;
   StringMapEntry<MDString *> &Entry = 
@@ -47,23 +52,66 @@ MDString *MDString::get(LLVMContext &Context, const char *Str) {
 }
 
 //===----------------------------------------------------------------------===//
+// MDNodeElement implementation.
+//
+
+// Use CallbackVH to hold MDNode elements.
+namespace llvm {
+class MDNodeElement : public CallbackVH {
+  MDNode *Parent;
+public:
+  MDNodeElement() {}
+  MDNodeElement(Value *V, MDNode *P) : CallbackVH(V), Parent(P) {}
+  ~MDNodeElement() {}
+  
+  void set(Value *V, MDNode *P) {
+    setValPtr(V);
+    Parent = P;
+  }
+  
+  virtual void deleted();
+  virtual void allUsesReplacedWith(Value *NV);
+};
+} // end namespace llvm.
+
+
+void MDNodeElement::deleted() {
+  Parent->replaceElement(this, 0);
+}
+
+void MDNodeElement::allUsesReplacedWith(Value *NV) {
+  Parent->replaceElement(this, NV);
+}
+
+
+
+//===----------------------------------------------------------------------===//
 // MDNode implementation.
 //
-MDNode::MDNode(LLVMContext &C, Value *const *Vals, unsigned NumVals)
-  : MetadataBase(Type::getMetadataTy(C), Value::MDNodeVal) {
-  NodeSize = NumVals;
-  Node = new ElementVH[NodeSize];
-  ElementVH *Ptr = Node;
-  for (unsigned i = 0; i != NumVals; ++i) 
-    *Ptr++ = ElementVH(Vals[i], this);
+
+/// ~MDNode - Destroy MDNode.
+MDNode::~MDNode() {
+  LLVMContextImpl *pImpl = getType()->getContext().pImpl;
+  pImpl->MDNodeSet.RemoveNode(this);
+  delete [] Operands;
+  Operands = NULL;
 }
 
-void MDNode::Profile(FoldingSetNodeID &ID) const {
-  for (unsigned i = 0, e = getNumElements(); i != e; ++i)
-    ID.AddPointer(getElement(i));
+MDNode::MDNode(LLVMContext &C, Value *const *Vals, unsigned NumVals,
+               bool isFunctionLocal)
+  : MetadataBase(Type::getMetadataTy(C), Value::MDNodeVal) {
+  NumOperands = NumVals;
+  Operands = new MDNodeElement[NumOperands];
+    
+  for (unsigned i = 0; i != NumVals; ++i) 
+    Operands[i].set(Vals[i], this);
+    
+  if (isFunctionLocal)
+    SubclassData |= FunctionLocalBit;
 }
 
-MDNode *MDNode::get(LLVMContext &Context, Value*const* Vals, unsigned NumVals) {
+MDNode *MDNode::get(LLVMContext &Context, Value*const* Vals, unsigned NumVals,
+                    bool isFunctionLocal) {
   LLVMContextImpl *pImpl = Context.pImpl;
   FoldingSetNodeID ID;
   for (unsigned i = 0; i != NumVals; ++i)
@@ -73,50 +121,41 @@ MDNode *MDNode::get(LLVMContext &Context, Value*const* Vals, unsigned NumVals) {
   MDNode *N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
   if (!N) {
     // InsertPoint will have been set by the FindNodeOrInsertPos call.
-    N = new MDNode(Context, Vals, NumVals);
+    N = new MDNode(Context, Vals, NumVals, isFunctionLocal);
     pImpl->MDNodeSet.InsertNode(N, InsertPoint);
   }
   return N;
 }
 
-/// ~MDNode - Destroy MDNode.
-MDNode::~MDNode() {
-  LLVMContextImpl *pImpl = getType()->getContext().pImpl;
-  pImpl->MDNodeSet.RemoveNode(this);
-  delete [] Node;
-  Node = NULL;
+void MDNode::Profile(FoldingSetNodeID &ID) const {
+  for (unsigned i = 0, e = getNumElements(); i != e; ++i)
+    ID.AddPointer(getElement(i));
 }
 
-// Replace value from this node's element list.
-void MDNode::replaceElement(Value *From, Value *To) {
-  if (From == To || !getType())
-    return;
-  LLVMContext &Context = getType()->getContext();
-  LLVMContextImpl *pImpl = Context.pImpl;
 
-  // Find value. This is a linear search, do something if it consumes 
-  // lot of time. It is possible that to have multiple instances of
-  // From in this MDNode's element list.
-  SmallVector<unsigned, 4> Indexes;
-  unsigned Index = 0;
-  for (unsigned i = 0, e = getNumElements(); i != e; ++i, ++Index) {
-    Value *V = getElement(i);
-    if (V && V == From) 
-      Indexes.push_back(Index);
-  }
+/// getElement - Return specified element.
+Value *MDNode::getElement(unsigned i) const {
+  assert(i < getNumElements() && "Invalid element number!");
+  return Operands[i];
+}
+
 
-  if (Indexes.empty())
+
+// Replace value from this node's element list.
+void MDNode::replaceElement(MDNodeElement *Op, Value *To) {
+  Value *From = *Op;
+  
+  if (From == To)
     return;
 
-  // Remove "this" from the context map. 
+  LLVMContextImpl *pImpl = getType()->getContext().pImpl;
+
+  // Remove "this" from the context map.  FoldingSet doesn't have to reprofile
+  // this node to remove it, so we don't care what state the operands are in.
   pImpl->MDNodeSet.RemoveNode(this);
 
-  // Replace From element(s) in place.
-  for (SmallVector<unsigned, 4>::iterator I = Indexes.begin(), E = Indexes.end(); 
-       I != E; ++I) {
-    unsigned Index = *I;
-    Node[Index] = ElementVH(To, this);
-  }
+  // Update the operand.
+  Op->set(To, this);
 
   // Insert updated "this" into the context's folding node set.
   // If a node with same element list already exist then before inserting 
@@ -130,26 +169,30 @@ void MDNode::replaceElement(Value *From, Value *To) {
   if (N) {
     N->replaceAllUsesWith(this);
     delete N;
-    N = 0;
+    N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
+    assert(N == 0 && "shouldn't be in the map now!"); (void)N;
   }
 
-  N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
-  if (!N) {
-    // InsertPoint will have been set by the FindNodeOrInsertPos call.
-    N = this;
-    pImpl->MDNodeSet.InsertNode(N, InsertPoint);
-  }
+  // InsertPoint will have been set by the FindNodeOrInsertPos call.
+  pImpl->MDNodeSet.InsertNode(this, InsertPoint);
 }
 
 //===----------------------------------------------------------------------===//
 // NamedMDNode implementation.
 //
+static SmallVector<TrackingVH<MetadataBase>, 4> &getNMDOps(void *Operands) {
+  return *(SmallVector<TrackingVH<MetadataBase>, 4>*)Operands;
+}
+
 NamedMDNode::NamedMDNode(LLVMContext &C, const Twine &N,
                          MetadataBase *const *MDs, 
                          unsigned NumMDs, Module *ParentModule)
   : MetadataBase(Type::getMetadataTy(C), Value::NamedMDNodeVal), Parent(0) {
   setName(N);
-
+    
+  Operands = new SmallVector<TrackingVH<MetadataBase>, 4>();
+    
+  SmallVector<TrackingVH<MetadataBase>, 4> &Node = getNMDOps(Operands);
   for (unsigned i = 0; i != NumMDs; ++i)
     Node.push_back(TrackingVH<MetadataBase>(MDs[i]));
 
@@ -160,12 +203,35 @@ NamedMDNode::NamedMDNode(LLVMContext &C, const Twine &N,
 NamedMDNode *NamedMDNode::Create(const NamedMDNode *NMD, Module *M) {
   assert(NMD && "Invalid source NamedMDNode!");
   SmallVector<MetadataBase *, 4> Elems;
+  Elems.reserve(NMD->getNumElements());
+  
   for (unsigned i = 0, e = NMD->getNumElements(); i != e; ++i)
     Elems.push_back(NMD->getElement(i));
   return new NamedMDNode(NMD->getContext(), NMD->getName().data(),
                          Elems.data(), Elems.size(), M);
 }
 
+NamedMDNode::~NamedMDNode() {
+  dropAllReferences();
+  delete &getNMDOps(Operands);
+}
+
+/// getNumElements - Return number of NamedMDNode elements.
+unsigned NamedMDNode::getNumElements() const {
+  return (unsigned)getNMDOps(Operands).size();
+}
+
+/// getElement - Return specified element.
+MetadataBase *NamedMDNode::getElement(unsigned i) const {
+  assert(i < getNumElements() && "Invalid element number!");
+  return getNMDOps(Operands)[i];
+}
+
+/// addElement - Add metadata element.
+void NamedMDNode::addElement(MetadataBase *M) {
+  getNMDOps(Operands).push_back(TrackingVH<MetadataBase>(M));
+}
+
 /// eraseFromParent - Drop all references and remove the node from parent
 /// module.
 void NamedMDNode::eraseFromParent() {
@@ -174,12 +240,9 @@ void NamedMDNode::eraseFromParent() {
 
 /// dropAllReferences - Remove all uses and clear node vector.
 void NamedMDNode::dropAllReferences() {
-  Node.clear();
+  getNMDOps(Operands).clear();
 }
 
-NamedMDNode::~NamedMDNode() {
-  dropAllReferences();
-}
 
 //===----------------------------------------------------------------------===//
 // MetadataContextImpl implementation.
@@ -213,7 +276,8 @@ public:
   MDNode *getMD(unsigned Kind, const Instruction *Inst);
 
   /// getMDs - Get the metadata attached to an Instruction.
-  void getMDs(const Instruction *Inst, SmallVectorImpl<MDPairTy> &MDs) const;
+  void getMDs(const Instruction *Inst,
+              SmallVectorImpl<std::pair<unsigned, MDNode*> > &MDs) const;
 
   /// addMD - Attach the metadata of given kind to an Instruction.
   void addMD(unsigned Kind, MDNode *Node, Instruction *Inst);
@@ -338,7 +402,8 @@ MDNode *MetadataContextImpl::getMD(unsigned MDKind, const Instruction *Inst) {
 
 /// getMDs - Get the metadata attached to an Instruction.
 void MetadataContextImpl::
-getMDs(const Instruction *Inst, SmallVectorImpl<MDPairTy> &MDs) const {
+getMDs(const Instruction *Inst,
+       SmallVectorImpl<std::pair<unsigned, MDNode*> > &MDs) const {
   MDStoreTy::const_iterator I = MetadataStore.find(Inst);
   if (I == MetadataStore.end())
     return;
@@ -433,7 +498,7 @@ MDNode *MetadataContext::getMD(unsigned Kind, const Instruction *Inst) {
 /// getMDs - Get the metadata attached to an Instruction.
 void MetadataContext::
 getMDs(const Instruction *Inst, 
-       SmallVectorImpl<std::pair<unsigned, TrackingVH<MDNode> > > &MDs) const {
+       SmallVectorImpl<std::pair<unsigned, MDNode*> > &MDs) const {
   return pImpl->getMDs(Inst, MDs);
 }
 
diff --git a/libclamav/c++/llvm/lib/VMCore/Module.cpp b/libclamav/c++/llvm/lib/VMCore/Module.cpp
index 3efd3e3..ce281eb 100644
--- a/libclamav/c++/llvm/lib/VMCore/Module.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Module.cpp
@@ -47,9 +47,9 @@ GlobalAlias *ilist_traits<GlobalAlias>::createSentinel() {
 
 // Explicit instantiations of SymbolTableListTraits since some of the methods
 // are not in the public header file.
-template class SymbolTableListTraits<GlobalVariable, Module>;
-template class SymbolTableListTraits<Function, Module>;
-template class SymbolTableListTraits<GlobalAlias, Module>;
+template class llvm::SymbolTableListTraits<GlobalVariable, Module>;
+template class llvm::SymbolTableListTraits<Function, Module>;
+template class llvm::SymbolTableListTraits<GlobalAlias, Module>;
 
 //===----------------------------------------------------------------------===//
 // Primitive Module methods.
diff --git a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
index 52e8a82..d688385 100644
--- a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
@@ -1133,7 +1133,7 @@ bool BBPassManager::runOnFunction(Function &F) {
       removeDeadPasses(BP, I->getName(), ON_BASICBLOCK_MSG);
     }
 
-  return Changed |= doFinalization(F);
+  return doFinalization(F) || Changed;
 }
 
 // Implement doInitialization and doFinalization
@@ -1355,7 +1355,7 @@ bool FPPassManager::runOnModule(Module &M) {
   for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I)
     runOnFunction(*I);
 
-  return Changed |= doFinalization(M);
+  return doFinalization(M) || Changed;
 }
 
 bool FPPassManager::doInitialization(Module &M) {
diff --git a/libclamav/c++/llvm/lib/VMCore/Type.cpp b/libclamav/c++/llvm/lib/VMCore/Type.cpp
index 739c463..fd46aa1 100644
--- a/libclamav/c++/llvm/lib/VMCore/Type.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Type.cpp
@@ -79,6 +79,9 @@ void Type::destroy() const {
     operator delete(const_cast<Type *>(this));
 
     return;
+  } else if (const OpaqueType *opaque_this = dyn_cast<OpaqueType>(this)) {
+    LLVMContextImpl *pImpl = this->getContext().pImpl;
+    pImpl->OpaqueTypes.erase(opaque_this);
   }
 
   // For all the other type subclasses, there is either no contained types or 
@@ -684,9 +687,11 @@ static bool TypesEqual(const Type *Ty, const Type *Ty2,
   }
 }
 
+namespace llvm { // in namespace llvm so findable by ADL
 static bool TypesEqual(const Type *Ty, const Type *Ty2) {
   std::map<const Type *, const Type *> EqTypes;
-  return TypesEqual(Ty, Ty2, EqTypes);
+  return ::TypesEqual(Ty, Ty2, EqTypes);
+}
 }
 
 // AbstractTypeHasCycleThrough - Return true there is a path from CurTy to
@@ -722,8 +727,10 @@ static bool ConcreteTypeHasCycleThrough(const Type *TargetTy, const Type *CurTy,
   return false;
 }
 
-/// TypeHasCycleThroughItself - Return true if the specified type has a cycle
-/// back to itself.
+/// TypeHasCycleThroughItself - Return true if the specified type has
+/// a cycle back to itself.
+
+namespace llvm { // in namespace llvm so it's findable by ADL
 static bool TypeHasCycleThroughItself(const Type *Ty) {
   SmallPtrSet<const Type*, 128> VisitedTypes;
 
@@ -740,6 +747,7 @@ static bool TypeHasCycleThroughItself(const Type *Ty) {
   }
   return false;
 }
+}
 
 //===----------------------------------------------------------------------===//
 // Function Type Factory and Value Class...
@@ -955,6 +963,20 @@ bool PointerType::isValidElementType(const Type *ElemTy) {
 
 
 //===----------------------------------------------------------------------===//
+// Opaque Type Factory...
+//
+
+OpaqueType *OpaqueType::get(LLVMContext &C) {
+  OpaqueType *OT = new OpaqueType(C);           // All opaque types are distinct
+  
+  LLVMContextImpl *pImpl = C.pImpl;
+  pImpl->OpaqueTypes.insert(OT);
+  return OT;
+}
+
+
+
+//===----------------------------------------------------------------------===//
 //                     Derived Type Refinement Functions
 //===----------------------------------------------------------------------===//
 
diff --git a/libclamav/c++/llvm/lib/VMCore/Verifier.cpp b/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
index 7aa86b7..b7e8771 100644
--- a/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
@@ -329,6 +329,8 @@ namespace {
                           int VT, unsigned ArgNo, std::string &Suffix);
     void VerifyIntrinsicPrototype(Intrinsic::ID ID, Function *F,
                                   unsigned RetNum, unsigned ParamNum, ...);
+    void VerifyFunctionLocalMetadata(MDNode *N, Function *F,
+                                     SmallPtrSet<MDNode *, 32> &Visited);
     void VerifyParameterAttrs(Attributes Attrs, const Type *Ty,
                               bool isReturnValue, const Value *V);
     void VerifyFunctionAttrs(const FunctionType *FT, const AttrListPtr &Attrs,
@@ -1526,6 +1528,38 @@ void Verifier::VerifyType(const Type *Ty) {
   }
 }
 
+/// VerifyFunctionLocalMetadata - Verify that the specified MDNode is local to
+/// specified Function.
+void Verifier::VerifyFunctionLocalMetadata(MDNode *N, Function *F,
+                                           SmallPtrSet<MDNode *, 32> &Visited) {
+  assert(N->isFunctionLocal() && "Should only be called on function-local MD");
+
+  // Only visit each node once.
+  if (!Visited.insert(N))
+    return;
+  
+  for (unsigned i = 0, e = N->getNumElements(); i != e; ++i) {
+    Value *V = N->getElement(i);
+    if (!V) continue;
+    
+    Function *ActualF = 0;
+    if (Instruction *I = dyn_cast<Instruction>(V))
+      ActualF = I->getParent()->getParent();
+    else if (BasicBlock *BB = dyn_cast<BasicBlock>(V))
+      ActualF = BB->getParent();
+    else if (Argument *A = dyn_cast<Argument>(V))
+      ActualF = A->getParent();
+    else if (MDNode *MD = dyn_cast<MDNode>(V))
+      if (MD->isFunctionLocal())
+        VerifyFunctionLocalMetadata(MD, F, Visited);
+
+    // If this was an instruction, bb, or argument, verify that it is in the
+    // function that we expect.
+    Assert1(ActualF == 0 || ActualF == F,
+            "function-local metadata used in wrong function", N);
+  }
+}
+
 // Flags used by TableGen to mark intrinsic parameters with the
 // LLVMExtendedElementVectorType and LLVMTruncatedElementVectorType classes.
 static const unsigned ExtendedElementVectorType = 0x40000000;
@@ -1542,6 +1576,15 @@ void Verifier::visitIntrinsicFunctionCall(Intrinsic::ID ID, CallInst &CI) {
 #include "llvm/Intrinsics.gen"
 #undef GET_INTRINSIC_VERIFIER
 
+  // If the intrinsic takes MDNode arguments, verify that they are either global
+  // or are local to *this* function.
+  for (unsigned i = 0, e = CI.getNumOperands(); i != e; ++i)
+    if (MDNode *MD = dyn_cast<MDNode>(CI.getOperand(i))) {
+      if (!MD->isFunctionLocal()) continue;
+      SmallPtrSet<MDNode *, 32> Visited;
+      VerifyFunctionLocalMetadata(MD, CI.getParent()->getParent(), Visited);
+    }
+
   switch (ID) {
   default:
     break;
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll b/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll
index 5ebf2fb..f062772 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll
@@ -1,5 +1,6 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
+; Radar 7449043
 %struct.int32x4_t = type { <4 x i32> }
 
 define arm_apcscc void @t() nounwind {
@@ -11,3 +12,14 @@ entry:
   call void asm sideeffect "vmov.I64 q15, #0\0Avmov.32 d30[0], $1\0Avmov ${0:q}, q15\0A", "=*w,r,~{d31},~{d30}"(%struct.int32x4_t* %tmp, i32 8192) nounwind
   ret void
 }
+
+; Radar 7457110
+%struct.int32x2_t = type { <4 x i32> }
+
+define arm_apcscc void @t2() nounwind {
+entry:
+; CHECK: vmov d30, d0
+; CHECK: vmov.32 r0, d30[0]
+  %asmtmp2 = tail call i32 asm sideeffect "vmov d30, $1\0Avmov.32 $0, d30[0]\0A", "=r,w,~{d30}"(<2 x i32> undef) nounwind
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
index f2fdedf..c4ed166 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s | grep {subfc r3,r5,r4}
 ; RUN: llc < %s | grep {subfze r4,r2}
-; RUN: llc < %s -regalloc=local | grep {subfc r5,r2,r4}
-; RUN: llc < %s -regalloc=local | grep {subfze r2,r3}
+; RUN: llc < %s -regalloc=local | grep {subfc r5,r4,r3}
+; RUN: llc < %s -regalloc=local | grep {subfze r2,r2}
 ; The first argument of subfc must not be the same as any other register.
 
 ; PR1357
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb/2009-12-17-pre-regalloc-taildup.ll b/libclamav/c++/llvm/test/CodeGen/Thumb/2009-12-17-pre-regalloc-taildup.ll
new file mode 100644
index 0000000..3401915
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb/2009-12-17-pre-regalloc-taildup.ll
@@ -0,0 +1,66 @@
+; RUN: llc -O3 -pre-regalloc-taildup < %s | FileCheck %s
+target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32-n32"
+target triple = "thumbv7-apple-darwin10"
+
+; This test should not produce any spills, even when tail duplication creates lots of phi nodes.
+; CHECK-NOT: push
+; CHECK-NOT: pop
+; CHECK: bx lr
+
+ at codetable.2928 = internal constant [5 x i8*] [i8* blockaddress(@interpret_threaded, %RETURN), i8* blockaddress(@interpret_threaded, %INCREMENT), i8* blockaddress(@interpret_threaded, %DECREMENT), i8* blockaddress(@interpret_threaded, %DOUBLE), i8* blockaddress(@interpret_threaded, %SWAPWORD)] ; <[5 x i8*]*> [#uses=5]
+ at llvm.used = appending global [1 x i8*] [i8* bitcast (i32 (i8*)* @interpret_threaded to i8*)], section "llvm.metadata" ; <[1 x i8*]*> [#uses=0]
+
+define arm_apcscc i32 @interpret_threaded(i8* nocapture %opcodes) nounwind readonly optsize {
+entry:
+  %0 = load i8* %opcodes, align 1                 ; <i8> [#uses=1]
+  %1 = zext i8 %0 to i32                          ; <i32> [#uses=1]
+  %2 = getelementptr inbounds [5 x i8*]* @codetable.2928, i32 0, i32 %1 ; <i8**> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb.backedge, %entry
+  %indvar = phi i32 [ %phitmp, %bb.backedge ], [ 1, %entry ] ; <i32> [#uses=2]
+  %gotovar.22.0.in = phi i8** [ %gotovar.22.0.in.be, %bb.backedge ], [ %2, %entry ] ; <i8**> [#uses=1]
+  %result.0 = phi i32 [ %result.0.be, %bb.backedge ], [ 0, %entry ] ; <i32> [#uses=6]
+  %opcodes_addr.0 = getelementptr i8* %opcodes, i32 %indvar ; <i8*> [#uses=4]
+  %gotovar.22.0 = load i8** %gotovar.22.0.in, align 4 ; <i8*> [#uses=1]
+  indirectbr i8* %gotovar.22.0, [label %RETURN, label %INCREMENT, label %DECREMENT, label %DOUBLE, label %SWAPWORD]
+
+RETURN:                                           ; preds = %bb
+  ret i32 %result.0
+
+INCREMENT:                                        ; preds = %bb
+  %3 = add nsw i32 %result.0, 1                   ; <i32> [#uses=1]
+  %4 = load i8* %opcodes_addr.0, align 1          ; <i8> [#uses=1]
+  %5 = zext i8 %4 to i32                          ; <i32> [#uses=1]
+  %6 = getelementptr inbounds [5 x i8*]* @codetable.2928, i32 0, i32 %5 ; <i8**> [#uses=1]
+  br label %bb.backedge
+
+bb.backedge:                                      ; preds = %SWAPWORD, %DOUBLE, %DECREMENT, %INCREMENT
+  %gotovar.22.0.in.be = phi i8** [ %20, %SWAPWORD ], [ %14, %DOUBLE ], [ %10, %DECREMENT ], [ %6, %INCREMENT ] ; <i8**> [#uses=1]
+  %result.0.be = phi i32 [ %17, %SWAPWORD ], [ %11, %DOUBLE ], [ %7, %DECREMENT ], [ %3, %INCREMENT ] ; <i32> [#uses=1]
+  %phitmp = add i32 %indvar, 1                    ; <i32> [#uses=1]
+  br label %bb
+
+DECREMENT:                                        ; preds = %bb
+  %7 = add i32 %result.0, -1                      ; <i32> [#uses=1]
+  %8 = load i8* %opcodes_addr.0, align 1          ; <i8> [#uses=1]
+  %9 = zext i8 %8 to i32                          ; <i32> [#uses=1]
+  %10 = getelementptr inbounds [5 x i8*]* @codetable.2928, i32 0, i32 %9 ; <i8**> [#uses=1]
+  br label %bb.backedge
+
+DOUBLE:                                           ; preds = %bb
+  %11 = shl i32 %result.0, 1                      ; <i32> [#uses=1]
+  %12 = load i8* %opcodes_addr.0, align 1         ; <i8> [#uses=1]
+  %13 = zext i8 %12 to i32                        ; <i32> [#uses=1]
+  %14 = getelementptr inbounds [5 x i8*]* @codetable.2928, i32 0, i32 %13 ; <i8**> [#uses=1]
+  br label %bb.backedge
+
+SWAPWORD:                                         ; preds = %bb
+  %15 = shl i32 %result.0, 16                     ; <i32> [#uses=1]
+  %16 = ashr i32 %result.0, 16                    ; <i32> [#uses=1]
+  %17 = or i32 %15, %16                           ; <i32> [#uses=1]
+  %18 = load i8* %opcodes_addr.0, align 1         ; <i8> [#uses=1]
+  %19 = zext i8 %18 to i32                        ; <i32> [#uses=1]
+  %20 = getelementptr inbounds [5 x i8*]* @codetable.2928, i32 0, i32 %19 ; <i8**> [#uses=1]
+  br label %bb.backedge
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
index da44cde..fe0e506 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
@@ -1,24 +1,35 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+; RUN: llc < %s -march=thumb -mattr=+thumb2 -mtriple=arm-apple-darwin | FileCheck %s -check-prefix=DARWIN
+; RUN: llc < %s -march=thumb -mattr=+thumb2 -mtriple=arm-linux-gnueabi | FileCheck %s -check-prefix=LINUX
 
 define void @test1() {
-; CHECK: test1:
-; CHECK: sub sp, #256
+; DARWIN: test1:
+; DARWIN: sub sp, #256
+; LINUX: test1:
+; LINUX: sub sp, #256
     %tmp = alloca [ 64 x i32 ] , align 4
     ret void
 }
 
 define void @test2() {
-; CHECK: test2:
-; CHECK: sub.w sp, sp, #4160
-; CHECK: sub sp, #8
+; DARWIN: test2:
+; DARWIN: sub.w sp, sp, #4160
+; DARWIN: sub sp, #8
+; LINUX: test2:
+; LINUX: sub.w sp, sp, #4160
+; LINUX: sub sp, #8
     %tmp = alloca [ 4168 x i8 ] , align 4
     ret void
 }
 
 define i32 @test3() {
-; CHECK: test3:
-; CHECK: sub.w sp, sp, #805306368
-; CHECK: sub sp, #20
+; DARWIN: test3:
+; DARWIN: push    {r4, r7, lr}
+; DARWIN: sub.w sp, sp, #805306368
+; DARWIN: sub sp, #20
+; LINUX: test3:
+; LINUX: stmfd   sp!, {r4, r7, r11, lr}
+; LINUX: sub.w sp, sp, #805306368
+; LINUX: sub sp, #16
     %retval = alloca i32, align 4
     %tmp = alloca i32, align 4
     %a = alloca [805306369 x i8], align 16
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
index d84b63a..628b899 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
@@ -5,7 +5,7 @@ define void @bar(i32 %b, i32 %a) nounwind optsize ssp {
 entry:
 ; CHECK:     leal 15(%rsi), %edi
 ; CHECK-NOT: movl
-; CHECK:     call _foo
+; CHECK:     callq _foo
   %0 = add i32 %a, 15                             ; <i32> [#uses=1]
   %1 = zext i32 %0 to i64                         ; <i64> [#uses=1]
   tail call void @foo(i64 %1) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll b/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll
index 6d7b2d4..9208738 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll
@@ -8356,22 +8356,22 @@ entry:
 
 define void @lcallee() nounwind {
 entry:
-	tail call void @x() nounwind
-	tail call void @x() nounwind
-	tail call void @x() nounwind
-	tail call void @x() nounwind
-	tail call void @x() nounwind
-	tail call void @x() nounwind
-	tail call void @x() nounwind
+	call void @x() nounwind
+	call void @x() nounwind
+	call void @x() nounwind
+	call void @x() nounwind
+	call void @x() nounwind
+	call void @x() nounwind
+	call void @x() nounwind
 	ret void
 ; LINUX-64-STATIC: lcallee:
-; LINUX-64-STATIC: call    x
-; LINUX-64-STATIC: call    x
-; LINUX-64-STATIC: call    x
-; LINUX-64-STATIC: call    x
-; LINUX-64-STATIC: call    x
-; LINUX-64-STATIC: call    x
-; LINUX-64-STATIC: call    x
+; LINUX-64-STATIC: callq   x
+; LINUX-64-STATIC: callq   x
+; LINUX-64-STATIC: callq   x
+; LINUX-64-STATIC: callq   x
+; LINUX-64-STATIC: callq   x
+; LINUX-64-STATIC: callq   x
+; LINUX-64-STATIC: callq   x
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: lcallee:
@@ -8400,13 +8400,13 @@ entry:
 
 ; LINUX-64-PIC: lcallee:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	x@PLT
-; LINUX-64-PIC-NEXT: 	call	x@PLT
-; LINUX-64-PIC-NEXT: 	call	x@PLT
-; LINUX-64-PIC-NEXT: 	call	x@PLT
-; LINUX-64-PIC-NEXT: 	call	x@PLT
-; LINUX-64-PIC-NEXT: 	call	x@PLT
-; LINUX-64-PIC-NEXT: 	call	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
+; LINUX-64-PIC-NEXT: 	callq	x@PLT
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -8448,37 +8448,37 @@ entry:
 
 ; DARWIN-64-STATIC: _lcallee:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_x
-; DARWIN-64-STATIC-NEXT: 	call	_x
-; DARWIN-64-STATIC-NEXT: 	call	_x
-; DARWIN-64-STATIC-NEXT: 	call	_x
-; DARWIN-64-STATIC-NEXT: 	call	_x
-; DARWIN-64-STATIC-NEXT: 	call	_x
-; DARWIN-64-STATIC-NEXT: 	call	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
+; DARWIN-64-STATIC-NEXT: 	callq	_x
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _lcallee:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
-; DARWIN-64-DYNAMIC-NEXT: 	call	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_x
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _lcallee:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_x
-; DARWIN-64-PIC-NEXT: 	call	_x
-; DARWIN-64-PIC-NEXT: 	call	_x
-; DARWIN-64-PIC-NEXT: 	call	_x
-; DARWIN-64-PIC-NEXT: 	call	_x
-; DARWIN-64-PIC-NEXT: 	call	_x
-; DARWIN-64-PIC-NEXT: 	call	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
+; DARWIN-64-PIC-NEXT: 	callq	_x
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -8487,22 +8487,22 @@ declare void @x()
 
 define internal void @dcallee() nounwind {
 entry:
-	tail call void @y() nounwind
-	tail call void @y() nounwind
-	tail call void @y() nounwind
-	tail call void @y() nounwind
-	tail call void @y() nounwind
-	tail call void @y() nounwind
-	tail call void @y() nounwind
+	call void @y() nounwind
+	call void @y() nounwind
+	call void @y() nounwind
+	call void @y() nounwind
+	call void @y() nounwind
+	call void @y() nounwind
+	call void @y() nounwind
 	ret void
 ; LINUX-64-STATIC: dcallee:
-; LINUX-64-STATIC: call    y
-; LINUX-64-STATIC: call    y
-; LINUX-64-STATIC: call    y
-; LINUX-64-STATIC: call    y
-; LINUX-64-STATIC: call    y
-; LINUX-64-STATIC: call    y
-; LINUX-64-STATIC: call    y
+; LINUX-64-STATIC: callq   y
+; LINUX-64-STATIC: callq   y
+; LINUX-64-STATIC: callq   y
+; LINUX-64-STATIC: callq   y
+; LINUX-64-STATIC: callq   y
+; LINUX-64-STATIC: callq   y
+; LINUX-64-STATIC: callq   y
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: dcallee:
@@ -8531,13 +8531,13 @@ entry:
 
 ; LINUX-64-PIC: dcallee:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	y@PLT
-; LINUX-64-PIC-NEXT: 	call	y@PLT
-; LINUX-64-PIC-NEXT: 	call	y@PLT
-; LINUX-64-PIC-NEXT: 	call	y@PLT
-; LINUX-64-PIC-NEXT: 	call	y@PLT
-; LINUX-64-PIC-NEXT: 	call	y@PLT
-; LINUX-64-PIC-NEXT: 	call	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
+; LINUX-64-PIC-NEXT: 	callq	y@PLT
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -8579,37 +8579,37 @@ entry:
 
 ; DARWIN-64-STATIC: _dcallee:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_y
-; DARWIN-64-STATIC-NEXT: 	call	_y
-; DARWIN-64-STATIC-NEXT: 	call	_y
-; DARWIN-64-STATIC-NEXT: 	call	_y
-; DARWIN-64-STATIC-NEXT: 	call	_y
-; DARWIN-64-STATIC-NEXT: 	call	_y
-; DARWIN-64-STATIC-NEXT: 	call	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
+; DARWIN-64-STATIC-NEXT: 	callq	_y
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _dcallee:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
-; DARWIN-64-DYNAMIC-NEXT: 	call	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_y
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _dcallee:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_y
-; DARWIN-64-PIC-NEXT: 	call	_y
-; DARWIN-64-PIC-NEXT: 	call	_y
-; DARWIN-64-PIC-NEXT: 	call	_y
-; DARWIN-64-PIC-NEXT: 	call	_y
-; DARWIN-64-PIC-NEXT: 	call	_y
-; DARWIN-64-PIC-NEXT: 	call	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
+; DARWIN-64-PIC-NEXT: 	callq	_y
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -8761,12 +8761,12 @@ entry:
 
 define void @caller() nounwind {
 entry:
-	tail call void @callee() nounwind
-	tail call void @callee() nounwind
+	call void @callee() nounwind
+	call void @callee() nounwind
 	ret void
 ; LINUX-64-STATIC: caller:
-; LINUX-64-STATIC: call    callee
-; LINUX-64-STATIC: call    callee
+; LINUX-64-STATIC: callq   callee
+; LINUX-64-STATIC: callq   callee
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: caller:
@@ -8785,8 +8785,8 @@ entry:
 
 ; LINUX-64-PIC: caller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	callee@PLT
-; LINUX-64-PIC-NEXT: 	call	callee@PLT
+; LINUX-64-PIC-NEXT: 	callq	callee@PLT
+; LINUX-64-PIC-NEXT: 	callq	callee@PLT
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -8813,34 +8813,34 @@ entry:
 
 ; DARWIN-64-STATIC: _caller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_callee
-; DARWIN-64-STATIC-NEXT: 	call	_callee
+; DARWIN-64-STATIC-NEXT: 	callq	_callee
+; DARWIN-64-STATIC-NEXT: 	callq	_callee
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _caller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_callee
-; DARWIN-64-DYNAMIC-NEXT: 	call	_callee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_callee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_callee
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _caller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_callee
-; DARWIN-64-PIC-NEXT: 	call	_callee
+; DARWIN-64-PIC-NEXT: 	callq	_callee
+; DARWIN-64-PIC-NEXT: 	callq	_callee
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
 
 define void @dcaller() nounwind {
 entry:
-	tail call void @dcallee() nounwind
-	tail call void @dcallee() nounwind
+	call void @dcallee() nounwind
+	call void @dcallee() nounwind
 	ret void
 ; LINUX-64-STATIC: dcaller:
-; LINUX-64-STATIC: call    dcallee
-; LINUX-64-STATIC: call    dcallee
+; LINUX-64-STATIC: callq   dcallee
+; LINUX-64-STATIC: callq   dcallee
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: dcaller:
@@ -8859,8 +8859,8 @@ entry:
 
 ; LINUX-64-PIC: dcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	dcallee
-; LINUX-64-PIC-NEXT: 	call	dcallee
+; LINUX-64-PIC-NEXT: 	callq	dcallee
+; LINUX-64-PIC-NEXT: 	callq	dcallee
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -8887,34 +8887,34 @@ entry:
 
 ; DARWIN-64-STATIC: _dcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_dcallee
-; DARWIN-64-STATIC-NEXT: 	call	_dcallee
+; DARWIN-64-STATIC-NEXT: 	callq	_dcallee
+; DARWIN-64-STATIC-NEXT: 	callq	_dcallee
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _dcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_dcallee
-; DARWIN-64-DYNAMIC-NEXT: 	call	_dcallee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_dcallee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_dcallee
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _dcaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_dcallee
-; DARWIN-64-PIC-NEXT: 	call	_dcallee
+; DARWIN-64-PIC-NEXT: 	callq	_dcallee
+; DARWIN-64-PIC-NEXT: 	callq	_dcallee
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
 
 define void @lcaller() nounwind {
 entry:
-	tail call void @lcallee() nounwind
-	tail call void @lcallee() nounwind
+	call void @lcallee() nounwind
+	call void @lcallee() nounwind
 	ret void
 ; LINUX-64-STATIC: lcaller:
-; LINUX-64-STATIC: call    lcallee
-; LINUX-64-STATIC: call    lcallee
+; LINUX-64-STATIC: callq   lcallee
+; LINUX-64-STATIC: callq   lcallee
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: lcaller:
@@ -8933,8 +8933,8 @@ entry:
 
 ; LINUX-64-PIC: lcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	lcallee@PLT
-; LINUX-64-PIC-NEXT: 	call	lcallee@PLT
+; LINUX-64-PIC-NEXT: 	callq	lcallee@PLT
+; LINUX-64-PIC-NEXT: 	callq	lcallee@PLT
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -8961,32 +8961,32 @@ entry:
 
 ; DARWIN-64-STATIC: _lcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_lcallee
-; DARWIN-64-STATIC-NEXT: 	call	_lcallee
+; DARWIN-64-STATIC-NEXT: 	callq	_lcallee
+; DARWIN-64-STATIC-NEXT: 	callq	_lcallee
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _lcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_lcallee
-; DARWIN-64-DYNAMIC-NEXT: 	call	_lcallee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_lcallee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_lcallee
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _lcaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_lcallee
-; DARWIN-64-PIC-NEXT: 	call	_lcallee
+; DARWIN-64-PIC-NEXT: 	callq	_lcallee
+; DARWIN-64-PIC-NEXT: 	callq	_lcallee
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
 
 define void @tailcaller() nounwind {
 entry:
-	tail call void @callee() nounwind
+	call void @callee() nounwind
 	ret void
 ; LINUX-64-STATIC: tailcaller:
-; LINUX-64-STATIC: call    callee
+; LINUX-64-STATIC: callq   callee
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: tailcaller:
@@ -9003,7 +9003,7 @@ entry:
 
 ; LINUX-64-PIC: tailcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	callee@PLT
+; LINUX-64-PIC-NEXT: 	callq	callee@PLT
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9027,29 +9027,29 @@ entry:
 
 ; DARWIN-64-STATIC: _tailcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_callee
+; DARWIN-64-STATIC-NEXT: 	callq	_callee
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _tailcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_callee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_callee
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _tailcaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_callee
+; DARWIN-64-PIC-NEXT: 	callq	_callee
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
 
 define void @dtailcaller() nounwind {
 entry:
-	tail call void @dcallee() nounwind
+	call void @dcallee() nounwind
 	ret void
 ; LINUX-64-STATIC: dtailcaller:
-; LINUX-64-STATIC: call    dcallee
+; LINUX-64-STATIC: callq   dcallee
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: dtailcaller:
@@ -9066,7 +9066,7 @@ entry:
 
 ; LINUX-64-PIC: dtailcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	dcallee
+; LINUX-64-PIC-NEXT: 	callq	dcallee
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9090,29 +9090,29 @@ entry:
 
 ; DARWIN-64-STATIC: _dtailcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_dcallee
+; DARWIN-64-STATIC-NEXT: 	callq	_dcallee
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _dtailcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_dcallee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_dcallee
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _dtailcaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_dcallee
+; DARWIN-64-PIC-NEXT: 	callq	_dcallee
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
 
 define void @ltailcaller() nounwind {
 entry:
-	tail call void @lcallee() nounwind
+	call void @lcallee() nounwind
 	ret void
 ; LINUX-64-STATIC: ltailcaller:
-; LINUX-64-STATIC: call    lcallee
+; LINUX-64-STATIC: callq   lcallee
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: ltailcaller:
@@ -9129,7 +9129,7 @@ entry:
 
 ; LINUX-64-PIC: ltailcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	lcallee@PLT
+; LINUX-64-PIC-NEXT: 	callq	lcallee@PLT
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9153,19 +9153,19 @@ entry:
 
 ; DARWIN-64-STATIC: _ltailcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	_lcallee
+; DARWIN-64-STATIC-NEXT: 	callq	_lcallee
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _ltailcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	_lcallee
+; DARWIN-64-DYNAMIC-NEXT: 	callq	_lcallee
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _ltailcaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	_lcallee
+; DARWIN-64-PIC-NEXT: 	callq	_lcallee
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -9173,13 +9173,13 @@ entry:
 define void @icaller() nounwind {
 entry:
 	%0 = load void ()** @ifunc, align 8
-	tail call void %0() nounwind
+	call void %0() nounwind
 	%1 = load void ()** @ifunc, align 8
-	tail call void %1() nounwind
+	call void %1() nounwind
 	ret void
 ; LINUX-64-STATIC: icaller:
-; LINUX-64-STATIC: call    *ifunc
-; LINUX-64-STATIC: call    *ifunc
+; LINUX-64-STATIC: callq   *ifunc
+; LINUX-64-STATIC: callq   *ifunc
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: icaller:
@@ -9199,8 +9199,8 @@ entry:
 ; LINUX-64-PIC: icaller:
 ; LINUX-64-PIC: 	pushq	%rbx
 ; LINUX-64-PIC-NEXT: 	movq	ifunc@GOTPCREL(%rip), %rbx
-; LINUX-64-PIC-NEXT: 	call	*(%rbx)
-; LINUX-64-PIC-NEXT: 	call	*(%rbx)
+; LINUX-64-PIC-NEXT: 	callq	*(%rbx)
+; LINUX-64-PIC-NEXT: 	callq	*(%rbx)
 ; LINUX-64-PIC-NEXT: 	popq	%rbx
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9237,24 +9237,24 @@ entry:
 ; DARWIN-64-STATIC: _icaller:
 ; DARWIN-64-STATIC: 	pushq	%rbx
 ; DARWIN-64-STATIC-NEXT: 	movq	_ifunc@GOTPCREL(%rip), %rbx
-; DARWIN-64-STATIC-NEXT: 	call	*(%rbx)
-; DARWIN-64-STATIC-NEXT: 	call	*(%rbx)
+; DARWIN-64-STATIC-NEXT: 	callq	*(%rbx)
+; DARWIN-64-STATIC-NEXT: 	callq	*(%rbx)
 ; DARWIN-64-STATIC-NEXT: 	popq	%rbx
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _icaller:
 ; DARWIN-64-DYNAMIC: 	pushq	%rbx
 ; DARWIN-64-DYNAMIC-NEXT: 	movq	_ifunc@GOTPCREL(%rip), %rbx
-; DARWIN-64-DYNAMIC-NEXT: 	call	*(%rbx)
-; DARWIN-64-DYNAMIC-NEXT: 	call	*(%rbx)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*(%rbx)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*(%rbx)
 ; DARWIN-64-DYNAMIC-NEXT: 	popq	%rbx
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _icaller:
 ; DARWIN-64-PIC: 	pushq	%rbx
 ; DARWIN-64-PIC-NEXT: 	movq	_ifunc@GOTPCREL(%rip), %rbx
-; DARWIN-64-PIC-NEXT: 	call	*(%rbx)
-; DARWIN-64-PIC-NEXT: 	call	*(%rbx)
+; DARWIN-64-PIC-NEXT: 	callq	*(%rbx)
+; DARWIN-64-PIC-NEXT: 	callq	*(%rbx)
 ; DARWIN-64-PIC-NEXT: 	popq	%rbx
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -9262,13 +9262,13 @@ entry:
 define void @dicaller() nounwind {
 entry:
 	%0 = load void ()** @difunc, align 8
-	tail call void %0() nounwind
+	call void %0() nounwind
 	%1 = load void ()** @difunc, align 8
-	tail call void %1() nounwind
+	call void %1() nounwind
 	ret void
 ; LINUX-64-STATIC: dicaller:
-; LINUX-64-STATIC: call    *difunc
-; LINUX-64-STATIC: call    *difunc
+; LINUX-64-STATIC: callq   *difunc
+; LINUX-64-STATIC: callq   *difunc
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: dicaller:
@@ -9288,8 +9288,8 @@ entry:
 ; LINUX-64-PIC: dicaller:
 ; LINUX-64-PIC: 	pushq	%rbx
 ; LINUX-64-PIC-NEXT: 	movq	difunc@GOTPCREL(%rip), %rbx
-; LINUX-64-PIC-NEXT: 	call	*(%rbx)
-; LINUX-64-PIC-NEXT: 	call	*(%rbx)
+; LINUX-64-PIC-NEXT: 	callq	*(%rbx)
+; LINUX-64-PIC-NEXT: 	callq	*(%rbx)
 ; LINUX-64-PIC-NEXT: 	popq	%rbx
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9321,22 +9321,22 @@ entry:
 
 ; DARWIN-64-STATIC: _dicaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	*_difunc(%rip)
-; DARWIN-64-STATIC-NEXT: 	call	*_difunc(%rip)
+; DARWIN-64-STATIC-NEXT: 	callq	*_difunc(%rip)
+; DARWIN-64-STATIC-NEXT: 	callq	*_difunc(%rip)
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _dicaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	*_difunc(%rip)
-; DARWIN-64-DYNAMIC-NEXT: 	call	*_difunc(%rip)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*_difunc(%rip)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*_difunc(%rip)
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _dicaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	*_difunc(%rip)
-; DARWIN-64-PIC-NEXT: 	call	*_difunc(%rip)
+; DARWIN-64-PIC-NEXT: 	callq	*_difunc(%rip)
+; DARWIN-64-PIC-NEXT: 	callq	*_difunc(%rip)
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -9344,13 +9344,13 @@ entry:
 define void @licaller() nounwind {
 entry:
 	%0 = load void ()** @lifunc, align 8
-	tail call void %0() nounwind
+	call void %0() nounwind
 	%1 = load void ()** @lifunc, align 8
-	tail call void %1() nounwind
+	call void %1() nounwind
 	ret void
 ; LINUX-64-STATIC: licaller:
-; LINUX-64-STATIC: call    *lifunc
-; LINUX-64-STATIC: call    *lifunc
+; LINUX-64-STATIC: callq   *lifunc
+; LINUX-64-STATIC: callq   *lifunc
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: licaller:
@@ -9369,8 +9369,8 @@ entry:
 
 ; LINUX-64-PIC: licaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	*lifunc(%rip)
-; LINUX-64-PIC-NEXT: 	call	*lifunc(%rip)
+; LINUX-64-PIC-NEXT: 	callq	*lifunc(%rip)
+; LINUX-64-PIC-NEXT: 	callq	*lifunc(%rip)
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9402,22 +9402,22 @@ entry:
 
 ; DARWIN-64-STATIC: _licaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	*_lifunc(%rip)
-; DARWIN-64-STATIC-NEXT: 	call	*_lifunc(%rip)
+; DARWIN-64-STATIC-NEXT: 	callq	*_lifunc(%rip)
+; DARWIN-64-STATIC-NEXT: 	callq	*_lifunc(%rip)
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _licaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	*_lifunc(%rip)
-; DARWIN-64-DYNAMIC-NEXT: 	call	*_lifunc(%rip)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*_lifunc(%rip)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*_lifunc(%rip)
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _licaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	*_lifunc(%rip)
-; DARWIN-64-PIC-NEXT: 	call	*_lifunc(%rip)
+; DARWIN-64-PIC-NEXT: 	callq	*_lifunc(%rip)
+; DARWIN-64-PIC-NEXT: 	callq	*_lifunc(%rip)
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -9425,13 +9425,13 @@ entry:
 define void @itailcaller() nounwind {
 entry:
 	%0 = load void ()** @ifunc, align 8
-	tail call void %0() nounwind
+	call void %0() nounwind
 	%1 = load void ()** @ifunc, align 8
-	tail call void %1() nounwind
+	call void %1() nounwind
 	ret void
 ; LINUX-64-STATIC: itailcaller:
-; LINUX-64-STATIC: call    *ifunc
-; LINUX-64-STATIC: call    *ifunc
+; LINUX-64-STATIC: callq   *ifunc
+; LINUX-64-STATIC: callq   *ifunc
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: itailcaller:
@@ -9451,8 +9451,8 @@ entry:
 ; LINUX-64-PIC: itailcaller:
 ; LINUX-64-PIC: 	pushq	%rbx
 ; LINUX-64-PIC-NEXT: 	movq	ifunc@GOTPCREL(%rip), %rbx
-; LINUX-64-PIC-NEXT: 	call	*(%rbx)
-; LINUX-64-PIC-NEXT: 	call	*(%rbx)
+; LINUX-64-PIC-NEXT: 	callq	*(%rbx)
+; LINUX-64-PIC-NEXT: 	callq	*(%rbx)
 ; LINUX-64-PIC-NEXT: 	popq	%rbx
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9489,24 +9489,24 @@ entry:
 ; DARWIN-64-STATIC: _itailcaller:
 ; DARWIN-64-STATIC: 	pushq	%rbx
 ; DARWIN-64-STATIC-NEXT: 	movq	_ifunc@GOTPCREL(%rip), %rbx
-; DARWIN-64-STATIC-NEXT: 	call	*(%rbx)
-; DARWIN-64-STATIC-NEXT: 	call	*(%rbx)
+; DARWIN-64-STATIC-NEXT: 	callq	*(%rbx)
+; DARWIN-64-STATIC-NEXT: 	callq	*(%rbx)
 ; DARWIN-64-STATIC-NEXT: 	popq	%rbx
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _itailcaller:
 ; DARWIN-64-DYNAMIC: 	pushq	%rbx
 ; DARWIN-64-DYNAMIC-NEXT: 	movq	_ifunc@GOTPCREL(%rip), %rbx
-; DARWIN-64-DYNAMIC-NEXT: 	call	*(%rbx)
-; DARWIN-64-DYNAMIC-NEXT: 	call	*(%rbx)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*(%rbx)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*(%rbx)
 ; DARWIN-64-DYNAMIC-NEXT: 	popq	%rbx
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _itailcaller:
 ; DARWIN-64-PIC: 	pushq	%rbx
 ; DARWIN-64-PIC-NEXT: 	movq	_ifunc@GOTPCREL(%rip), %rbx
-; DARWIN-64-PIC-NEXT: 	call	*(%rbx)
-; DARWIN-64-PIC-NEXT: 	call	*(%rbx)
+; DARWIN-64-PIC-NEXT: 	callq	*(%rbx)
+; DARWIN-64-PIC-NEXT: 	callq	*(%rbx)
 ; DARWIN-64-PIC-NEXT: 	popq	%rbx
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -9514,10 +9514,10 @@ entry:
 define void @ditailcaller() nounwind {
 entry:
 	%0 = load void ()** @difunc, align 8
-	tail call void %0() nounwind
+	call void %0() nounwind
 	ret void
 ; LINUX-64-STATIC: ditailcaller:
-; LINUX-64-STATIC: call    *difunc
+; LINUX-64-STATIC: callq   *difunc
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: ditailcaller:
@@ -9535,7 +9535,7 @@ entry:
 ; LINUX-64-PIC: ditailcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	movq	difunc@GOTPCREL(%rip), %rax
-; LINUX-64-PIC-NEXT: 	call	*(%rax)
+; LINUX-64-PIC-NEXT: 	callq	*(%rax)
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9562,18 +9562,18 @@ entry:
 
 ; DARWIN-64-STATIC: _ditailcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	*_difunc(%rip)
+; DARWIN-64-STATIC-NEXT: 	callq	*_difunc(%rip)
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _ditailcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	*_difunc(%rip)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*_difunc(%rip)
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _ditailcaller:
-; DARWIN-64-PIC: 	call	*_difunc(%rip)
+; DARWIN-64-PIC: 	callq	*_difunc(%rip)
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
@@ -9581,10 +9581,10 @@ entry:
 define void @litailcaller() nounwind {
 entry:
 	%0 = load void ()** @lifunc, align 8
-	tail call void %0() nounwind
+	call void %0() nounwind
 	ret void
 ; LINUX-64-STATIC: litailcaller:
-; LINUX-64-STATIC: call    *lifunc
+; LINUX-64-STATIC: callq   *lifunc
 ; LINUX-64-STATIC: ret
 
 ; LINUX-32-STATIC: litailcaller:
@@ -9601,7 +9601,7 @@ entry:
 
 ; LINUX-64-PIC: litailcaller:
 ; LINUX-64-PIC: 	subq	$8, %rsp
-; LINUX-64-PIC-NEXT: 	call	*lifunc(%rip)
+; LINUX-64-PIC-NEXT: 	callq	*lifunc(%rip)
 ; LINUX-64-PIC-NEXT: 	addq	$8, %rsp
 ; LINUX-64-PIC-NEXT: 	ret
 
@@ -9628,19 +9628,19 @@ entry:
 
 ; DARWIN-64-STATIC: _litailcaller:
 ; DARWIN-64-STATIC: 	subq	$8, %rsp
-; DARWIN-64-STATIC-NEXT: 	call	*_lifunc(%rip)
+; DARWIN-64-STATIC-NEXT: 	callq	*_lifunc(%rip)
 ; DARWIN-64-STATIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-STATIC-NEXT: 	ret
 
 ; DARWIN-64-DYNAMIC: _litailcaller:
 ; DARWIN-64-DYNAMIC: 	subq	$8, %rsp
-; DARWIN-64-DYNAMIC-NEXT: 	call	*_lifunc(%rip)
+; DARWIN-64-DYNAMIC-NEXT: 	callq	*_lifunc(%rip)
 ; DARWIN-64-DYNAMIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-DYNAMIC-NEXT: 	ret
 
 ; DARWIN-64-PIC: _litailcaller:
 ; DARWIN-64-PIC: 	subq	$8, %rsp
-; DARWIN-64-PIC-NEXT: 	call	*_lifunc(%rip)
+; DARWIN-64-PIC-NEXT: 	callq	*_lifunc(%rip)
 ; DARWIN-64-PIC-NEXT: 	addq	$8, %rsp
 ; DARWIN-64-PIC-NEXT: 	ret
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/brcond-srl.ll b/libclamav/c++/llvm/test/CodeGen/X86/brcond-srl.ll
new file mode 100644
index 0000000..12674e9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/brcond-srl.ll
@@ -0,0 +1,29 @@
+; RUN: llc < %s -march=x86 | FileCheck %s
+; rdar://7475489
+
+define i32 @t(i32 %a, i32 %b) nounwind ssp {
+entry:
+; CHECK: t:
+; CHECK: xorb
+; CHECK-NOT: andb
+; CHECK-NOT: shrb
+; CHECK: testb $64
+  %0 = and i32 %a, 16384
+  %1 = icmp ne i32 %0, 0
+  %2 = and i32 %b, 16384
+  %3 = icmp ne i32 %2, 0
+  %4 = xor i1 %1, %3
+  br i1 %4, label %bb1, label %bb
+
+bb:                                               ; preds = %entry
+  %5 = tail call i32 (...)* @foo() nounwind       ; <i32> [#uses=1]
+  ret i32 %5
+
+bb1:                                              ; preds = %entry
+  %6 = tail call i32 (...)* @bar() nounwind       ; <i32> [#uses=1]
+  ret i32 %6
+}
+
+declare i32 @foo(...)
+
+declare i32 @bar(...)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/break-sse-dep.ll b/libclamav/c++/llvm/test/CodeGen/X86/break-sse-dep.ll
new file mode 100644
index 0000000..acc0647
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/break-sse-dep.ll
@@ -0,0 +1,21 @@
+; RUN: llc < %s -march=x86-64 -mattr=+sse2 | FileCheck %s
+
+define double @t1(float* nocapture %x) nounwind readonly ssp {
+entry:
+; CHECK: t1:
+; CHECK: movss (%rdi), %xmm0
+; CHECK: cvtss2sd %xmm0, %xmm0
+
+  %0 = load float* %x, align 4
+  %1 = fpext float %0 to double
+  ret double %1
+}
+
+define float @t2(double* nocapture %x) nounwind readonly ssp optsize {
+entry:
+; CHECK: t2:
+; CHECK: cvtsd2ss (%rdi), %xmm0
+  %0 = load double* %x, align 8
+  %1 = fptrunc double %0 to float
+  ret float %1
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/bss_pagealigned.ll b/libclamav/c++/llvm/test/CodeGen/X86/bss_pagealigned.ll
index 4a1049b..27c5361 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/bss_pagealigned.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/bss_pagealigned.ll
@@ -10,7 +10,7 @@ define void @unxlate_dev_mem_ptr(i64 %phis, i8* %addr) nounwind {
 ; CHECK: movq    $bm_pte, %rdi
 ; CHECK-NEXT: xorl    %esi, %esi
 ; CHECK-NEXT: movl    $4096, %edx
-; CHECK-NEXT: call    memset
+; CHECK-NEXT: callq   memset
   ret void
 }
 @bm_pte = internal global [512 x %struct.kmem_cache_order_objects] zeroinitializer, section ".bss.page_aligned", align 4096
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/cmov.ll b/libclamav/c++/llvm/test/CodeGen/X86/cmov.ll
index f3c9a7a..39d9d1e 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/cmov.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/cmov.ll
@@ -6,7 +6,7 @@ entry:
 ; CHECK: test1:
 ; CHECK: btl
 ; CHECK-NEXT: movl	$12, %eax
-; CHECK-NEXT: cmovae	(%rcx), %eax
+; CHECK-NEXT: cmovael	(%rcx), %eax
 ; CHECK-NEXT: ret
 
 	%0 = lshr i32 %x, %n		; <i32> [#uses=1]
@@ -21,7 +21,7 @@ entry:
 ; CHECK: test2:
 ; CHECK: btl
 ; CHECK-NEXT: movl	$12, %eax
-; CHECK-NEXT: cmovb	(%rcx), %eax
+; CHECK-NEXT: cmovbl	(%rcx), %eax
 ; CHECK-NEXT: ret
 
 	%0 = lshr i32 %x, %n		; <i32> [#uses=1]
@@ -41,7 +41,7 @@ declare void @bar(i64) nounwind
 
 define void @test3(i64 %a, i64 %b, i1 %p) nounwind {
 ; CHECK: test3:
-; CHECK:      cmovne  %edi, %esi
+; CHECK:      cmovnel %edi, %esi
 ; CHECK-NEXT: movl    %esi, %edi
 
   %c = trunc i64 %a to i32
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/live-out-reg-info.ll b/libclamav/c++/llvm/test/CodeGen/X86/live-out-reg-info.ll
index 7132777..8cd9774 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/live-out-reg-info.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/live-out-reg-info.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86-64 | grep {testb	\[$\]1,}
+; RUN: llc < %s -march=x86-64 | grep testb
 
 ; Make sure dagcombine doesn't eliminate the comparison due
 ; to an off-by-one bug with ComputeMaskedBits information.
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll
index ec5236b..a125e54 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll
@@ -10,9 +10,9 @@
 ;      CHECK:   jmp   .LBB1_1
 ; CHECK-NEXT:   align
 ; CHECK-NEXT: .LBB1_2:
-; CHECK-NEXT:   call loop_latch
+; CHECK-NEXT:   callq loop_latch
 ; CHECK-NEXT: .LBB1_1:
-; CHECK-NEXT:   call loop_header
+; CHECK-NEXT:   callq loop_header
 
 define void @simple() nounwind {
 entry:
@@ -40,9 +40,9 @@ done:
 ;      CHECK:   jmp .LBB2_1
 ; CHECK-NEXT:   align
 ; CHECK-NEXT: .LBB2_4:
-; CHECK-NEXT:   call bar99
+; CHECK-NEXT:   callq bar99
 ; CHECK-NEXT: .LBB2_1:
-; CHECK-NEXT:   call body
+; CHECK-NEXT:   callq body
 
 define void @slightly_more_involved() nounwind {
 entry:
@@ -75,18 +75,18 @@ exit:
 ;      CHECK:   jmp .LBB3_1
 ; CHECK-NEXT:   align
 ; CHECK-NEXT: .LBB3_4:
-; CHECK-NEXT:   call bar99
-; CHECK-NEXT:   call get
+; CHECK-NEXT:   callq bar99
+; CHECK-NEXT:   callq get
 ; CHECK-NEXT:   cmpl $2999, %eax
 ; CHECK-NEXT:   jg .LBB3_6
-; CHECK-NEXT:   call block_a_true_func
+; CHECK-NEXT:   callq block_a_true_func
 ; CHECK-NEXT:   jmp .LBB3_7
 ; CHECK-NEXT: .LBB3_6:
-; CHECK-NEXT:   call block_a_false_func
+; CHECK-NEXT:   callq block_a_false_func
 ; CHECK-NEXT: .LBB3_7:
-; CHECK-NEXT:   call block_a_merge_func
+; CHECK-NEXT:   callq block_a_merge_func
 ; CHECK-NEXT: .LBB3_1:
-; CHECK-NEXT:   call body
+; CHECK-NEXT:   callq body
 
 define void @yet_more_involved() nounwind {
 entry:
@@ -134,18 +134,18 @@ exit:
 ;      CHECK:   jmp     .LBB4_1
 ; CHECK-NEXT:   align
 ; CHECK-NEXT: .LBB4_7:
-; CHECK-NEXT:   call    bar100
+; CHECK-NEXT:   callq   bar100
 ; CHECK-NEXT:   jmp     .LBB4_1
 ; CHECK-NEXT: .LBB4_8:
-; CHECK-NEXT:   call    bar101
+; CHECK-NEXT:   callq   bar101
 ; CHECK-NEXT:   jmp     .LBB4_1
 ; CHECK-NEXT: .LBB4_9:
-; CHECK-NEXT:   call    bar102
+; CHECK-NEXT:   callq   bar102
 ; CHECK-NEXT:   jmp     .LBB4_1
 ; CHECK-NEXT: .LBB4_5:
-; CHECK-NEXT:   call    loop_latch
+; CHECK-NEXT:   callq   loop_latch
 ; CHECK-NEXT: .LBB4_1:
-; CHECK-NEXT:   call    loop_header
+; CHECK-NEXT:   callq   loop_header
 
 define void @cfg_islands() nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/memcmp.ll b/libclamav/c++/llvm/test/CodeGen/X86/memcmp.ll
new file mode 100644
index 0000000..b90d2e2
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/memcmp.ll
@@ -0,0 +1,110 @@
+; RUN: llc %s -o - -march=x86-64 | FileCheck %s
+
+; This tests codegen time inlining/optimization of memcmp
+; rdar://6480398
+
+@.str = private constant [23 x i8] c"fooooooooooooooooooooo\00", align 1 ; <[23 x i8]*> [#uses=1]
+
+declare i32 @memcmp(...)
+
+define void @memcmp2(i8* %X, i8* %Y, i32* nocapture %P) nounwind {
+entry:
+  %0 = tail call i32 (...)* @memcmp(i8* %X, i8* %Y, i32 2) nounwind ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  br i1 %1, label %return, label %bb
+
+bb:                                               ; preds = %entry
+  store i32 4, i32* %P, align 4
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: memcmp2:
+; CHECK: movw    (%rsi), %ax
+; CHECK: cmpw    %ax, (%rdi)
+}
+
+define void @memcmp2a(i8* %X, i32* nocapture %P) nounwind {
+entry:
+  %0 = tail call i32 (...)* @memcmp(i8* %X, i8* getelementptr inbounds ([23 x i8]* @.str, i32 0, i32 1), i32 2) nounwind ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  br i1 %1, label %return, label %bb
+
+bb:                                               ; preds = %entry
+  store i32 4, i32* %P, align 4
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: memcmp2a:
+; CHECK: cmpw    $28527, (%rdi)
+}
+
+
+define void @memcmp4(i8* %X, i8* %Y, i32* nocapture %P) nounwind {
+entry:
+  %0 = tail call i32 (...)* @memcmp(i8* %X, i8* %Y, i32 4) nounwind ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  br i1 %1, label %return, label %bb
+
+bb:                                               ; preds = %entry
+  store i32 4, i32* %P, align 4
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: memcmp4:
+; CHECK: movl    (%rsi), %eax
+; CHECK: cmpl    %eax, (%rdi)
+}
+
+define void @memcmp4a(i8* %X, i32* nocapture %P) nounwind {
+entry:
+  %0 = tail call i32 (...)* @memcmp(i8* %X, i8* getelementptr inbounds ([23 x i8]* @.str, i32 0, i32 1), i32 4) nounwind ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  br i1 %1, label %return, label %bb
+
+bb:                                               ; preds = %entry
+  store i32 4, i32* %P, align 4
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: memcmp4a:
+; CHECK: cmpl $1869573999, (%rdi)
+}
+
+define void @memcmp8(i8* %X, i8* %Y, i32* nocapture %P) nounwind {
+entry:
+  %0 = tail call i32 (...)* @memcmp(i8* %X, i8* %Y, i32 8) nounwind ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  br i1 %1, label %return, label %bb
+
+bb:                                               ; preds = %entry
+  store i32 4, i32* %P, align 4
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: memcmp8:
+; CHECK: movq    (%rsi), %rax
+; CHECK: cmpq    %rax, (%rdi)
+}
+
+define void @memcmp8a(i8* %X, i32* nocapture %P) nounwind {
+entry:
+  %0 = tail call i32 (...)* @memcmp(i8* %X, i8* getelementptr inbounds ([23 x i8]* @.str, i32 0, i32 0), i32 8) nounwind ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  br i1 %1, label %return, label %bb
+
+bb:                                               ; preds = %entry
+  store i32 4, i32* %P, align 4
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: memcmp8a:
+; CHECK: movabsq $8029759185026510694, %rax
+; CHECK: cmpq	%rax, (%rdi)
+}
+
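For readers skimming the new memcmp.ll test above: it checks that for small constant sizes the backend replaces the `memcmp` libcall with a single integer load and compare when only the result's equality with zero is used. A minimal C++ sketch of what that expansion amounts to for the size-4 case (function name `memcmp4_eq` is hypothetical, purely illustrative; this is not the actual SelectionDAG code):

```cpp
#include <cstdint>
#include <cstring>

// Illustrative only: the effect of the codegen-time memcmp expansion
// for a constant size of 4 when the caller only tests result == 0,
// as in the @memcmp4 test above.
static int memcmp4_eq(const char *x, const char *y) {
  std::uint32_t a, b;
  std::memcpy(&a, x, 4); // becomes a single 32-bit load (movl)
  std::memcpy(&b, y, 4); // second operand loaded or folded into cmpl
  return a == b;         // one cmpl, no call to memcmp
}
```

The `memcmp2a`/`memcmp4a`/`memcmp8a` variants additionally show the constant-string operand being folded into an immediate (e.g. `cmpl $1869573999, (%rdi)`).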
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll b/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll
index 3f90245..eed3cfc 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll
@@ -10,7 +10,7 @@ target triple = "x86_64-apple-darwin10.0"
 define void @bar() nounwind ssp {
 entry:
   %tmp = load i8** @p                             ; <i8*> [#uses=1]
-  %0 = call i64 @llvm.objectsize.i64(i8* %tmp, i32 0) ; <i64> [#uses=1]
+  %0 = call i64 @llvm.objectsize.i64(i8* %tmp, i1 0) ; <i64> [#uses=1]
   %cmp = icmp ne i64 %0, -1                       ; <i1> [#uses=1]
 ; X64: movq    $-1, %rax
 ; X64: cmpq    $-1, %rax
@@ -19,7 +19,7 @@ entry:
 cond.true:                                        ; preds = %entry
   %tmp1 = load i8** @p                            ; <i8*> [#uses=1]
   %tmp2 = load i8** @p                            ; <i8*> [#uses=1]
-  %1 = call i64 @llvm.objectsize.i64(i8* %tmp2, i32 1) ; <i64> [#uses=1]
+  %1 = call i64 @llvm.objectsize.i64(i8* %tmp2, i1 1) ; <i64> [#uses=1]
   %call = call i8* @__strcpy_chk(i8* %tmp1, i8* getelementptr inbounds ([3 x i8]* @.str, i32 0, i32 0), i64 %1) ssp ; <i8*> [#uses=1]
   br label %cond.end
 
@@ -33,7 +33,7 @@ cond.end:                                         ; preds = %cond.false, %cond.t
   ret void
 }
 
-declare i64 @llvm.objectsize.i64(i8*, i32) nounwind readonly
+declare i64 @llvm.objectsize.i64(i8*, i1) nounwind readonly
 
 declare i8* @__strcpy_chk(i8*, i8*, i64) ssp
 
@@ -47,7 +47,7 @@ entry:
   %tmp = load i8** %__dest.addr                   ; <i8*> [#uses=1]
   %tmp1 = load i8** %__src.addr                   ; <i8*> [#uses=1]
   %tmp2 = load i8** %__dest.addr                  ; <i8*> [#uses=1]
-  %0 = call i64 @llvm.objectsize.i64(i8* %tmp2, i32 1) ; <i64> [#uses=1]
+  %0 = call i64 @llvm.objectsize.i64(i8* %tmp2, i1 1) ; <i64> [#uses=1]
   %call = call i8* @__strcpy_chk(i8* %tmp, i8* %tmp1, i64 %0) ssp ; <i8*> [#uses=1]
   store i8* %call, i8** %retval
   %1 = load i8** %retval                          ; <i8*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll
index 5aaf81b..a34a978 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll
@@ -65,7 +65,7 @@ return:                                           ; preds = %entry
   ret void
 }
 
-; Just like @and, but without the trunc+store. This should use a testl
+; Just like @and, but without the trunc+store. This should use a testb
 ; instead of an andl.
 ; CHECK: test:
 define void @test(float* %A, i32 %IA, i32 %N, i8* %p) nounwind {
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll b/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll
new file mode 100644
index 0000000..e5e2d4b
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll
@@ -0,0 +1,49 @@
+; RUN: llc < %s -march=x86 -mtriple=i386-apple-darwin9 -regalloc=local | FileCheck %s
+
+@.str = private constant [12 x i8] c"x + y = %i\0A\00", align 1 ; <[12 x i8]*> [#uses=1]
+
+define i32 @main() nounwind {
+entry:
+; CHECK: movl 24(%esp), %eax
+; CHECK-NOT: movl
+; CHECK: movl	%eax, 36(%esp)
+; CHECK-NOT: movl
+; CHECK: movl 28(%esp), %ebx
+; CHECK-NOT: movl
+; CHECK: movl	%ebx, 40(%esp)
+; CHECK-NOT: movl
+; CHECK: addl %ebx, %eax
+  %retval = alloca i32                            ; <i32*> [#uses=2]
+  %"%ebx" = alloca i32                            ; <i32*> [#uses=1]
+  %"%eax" = alloca i32                            ; <i32*> [#uses=2]
+  %result = alloca i32                            ; <i32*> [#uses=2]
+  %y = alloca i32                                 ; <i32*> [#uses=2]
+  %x = alloca i32                                 ; <i32*> [#uses=2]
+  %0 = alloca i32                                 ; <i32*> [#uses=2]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  store i32 1, i32* %x, align 4
+  store i32 2, i32* %y, align 4
+  call void asm sideeffect alignstack "# top of block", "~{dirflag},~{fpsr},~{flags},~{edi},~{esi},~{edx},~{ecx},~{eax}"() nounwind
+  %asmtmp = call i32 asm sideeffect alignstack "movl $1, $0", "=={eax},*m,~{dirflag},~{fpsr},~{flags},~{memory}"(i32* %x) nounwind ; <i32> [#uses=1]
+  store i32 %asmtmp, i32* %"%eax"
+  %asmtmp1 = call i32 asm sideeffect alignstack "movl $1, $0", "=={ebx},*m,~{dirflag},~{fpsr},~{flags},~{memory}"(i32* %y) nounwind ; <i32> [#uses=1]
+  store i32 %asmtmp1, i32* %"%ebx"
+  %1 = call i32 asm "", "={bx}"() nounwind        ; <i32> [#uses=1]
+  %2 = call i32 asm "", "={ax}"() nounwind        ; <i32> [#uses=1]
+  %asmtmp2 = call i32 asm sideeffect alignstack "addl $1, $0", "=={eax},{ebx},{eax},~{dirflag},~{fpsr},~{flags},~{memory}"(i32 %1, i32 %2) nounwind ; <i32> [#uses=1]
+  store i32 %asmtmp2, i32* %"%eax"
+  %3 = call i32 asm "", "={ax}"() nounwind        ; <i32> [#uses=1]
+  call void asm sideeffect alignstack "movl $0, $1", "{eax},*m,~{dirflag},~{fpsr},~{flags},~{memory}"(i32 %3, i32* %result) nounwind
+  %4 = load i32* %result, align 4                 ; <i32> [#uses=1]
+  %5 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([12 x i8]* @.str, i32 0, i32 0), i32 %4) nounwind ; <i32> [#uses=0]
+  store i32 0, i32* %0, align 4
+  %6 = load i32* %0, align 4                      ; <i32> [#uses=1]
+  store i32 %6, i32* %retval, align 4
+  br label %return
+
+return:                                           ; preds = %entry
+  %retval3 = load i32* %retval                    ; <i32> [#uses=1]
+  ret i32 %retval3
+}
+
+declare i32 @printf(i8*, ...) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll b/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll
index 822e594..44cafe2 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s -march=x86-64 | FileCheck %s
 ; PR5757
 
-; CHECK: cmovne %rdi, %rsi
+; CHECK: cmovneq %rdi, %rsi
 ; CHECK: movl (%rsi), %eax
 
 %0 = type { i64, i32 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll b/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll
index 42ce4c1..c37e15d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll
@@ -1,5 +1,4 @@
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s
-; XFAIL: *
 ; rdar://7329206
 
 ; Use sbb x, x to materialize carry bit in a GPR. The value is either
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll b/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
index c70c9fa..8c3cae9 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
@@ -274,7 +274,7 @@ declare fastcc %union.tree_node* @default_conversion(%union.tree_node*) nounwind
 ; one ret instruction.
 
 ; CHECK: foo:
-; CHECK:        call func
+; CHECK:        callq func
 ; CHECK-NEXT: .LBB5_2:
 ; CHECK-NEXT:   addq $8, %rsp
 ; CHECK-NEXT:   ret
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
index 4923df2..42f8cdd 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
@@ -1,12 +1,10 @@
-; RUN: llc < %s -march=x86 -tailcallopt | grep TAILCALL | count 4
-define fastcc i32 @tailcallee(i32 %a1, i32 %a2, i32 %a3, i32 %a4) {
-entry:
-	ret i32 %a3
-}
+; RUN: llc < %s -march=x86 -tailcallopt | grep TAILCALL | count 5
+
+declare fastcc i32 @tailcallee(i32 %a1, i32 %a2, i32 %a3, i32 %a4)
 
-define fastcc i32 @tailcaller(i32 %in1, i32 %in2) {
+define fastcc i32 @tailcaller(i32 %in1, i32 %in2) nounwind {
 entry:
-	%tmp11 = tail call fastcc i32 @tailcallee( i32 %in1, i32 %in2, i32 %in1, i32 %in2 )		; <i32> [#uses=1]
+	%tmp11 = tail call fastcc i32 @tailcallee(i32 %in1, i32 %in2, i32 %in1, i32 %in2)
 	ret i32 %tmp11
 }
 
@@ -30,3 +28,10 @@ define fastcc i32 @ret_undef() nounwind {
   %p = tail call fastcc i32 @i32_callee()
   ret i32 undef
 }
+
+declare fastcc void @does_not_return()
+
+define fastcc i32 @noret() nounwind {
+  tail call fastcc void @does_not_return()
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_load-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_load-1.ll
index 2d34b31..8a970bf 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_load-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_load-1.ll
@@ -5,7 +5,7 @@
 
 ; CHECK: movq    compl+128(%rip), %xmm0
 ; CHECK: movaps  %xmm0, (%rsp)
-; CHECK: call    killcommon
+; CHECK: callq   killcommon
 
 @compl = linkonce global [20 x i64] zeroinitializer, align 64 ; <[20 x i64]*> [#uses=1]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-1.ll
index b21918e..46f6d33 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-1.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=x86_64-pc-linux -relocation-model=pic -o %t1
-; RUN: grep {call	f@PLT} %t1
+; RUN: grep {callq	f@PLT} %t1
 
 define void @g() {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll
index 7baa7e5..b6f82e2 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=x86_64-pc-linux -relocation-model=pic -o %t1
-; RUN: grep {call	g@PLT} %t1
+; RUN: grep {callq	g@PLT} %t1
 
 @g = alias weak i32 ()* @f
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-11.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-11.ll
index ef81685..4db331c 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-11.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-11.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=x86_64-pc-linux -relocation-model=pic -o %t1
-; RUN: grep {call	__fixunsxfti@PLT} %t1
+; RUN: grep {callq	__fixunsxfti@PLT} %t1
 
 define i128 @f(x86_fp80 %a) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-2.ll
index a52c564..1ce2de7 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-2.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -mtriple=x86_64-pc-linux -relocation-model=pic -o %t1
-; RUN: grep {call	f} %t1
-; RUN: not grep {call	f@PLT} %t1
+; RUN: grep {callq	f} %t1
+; RUN: not grep {callq	f@PLT} %t1
 
 define void @g() {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-3.ll
index 246c00f..aa3c888 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-3.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -mtriple=x86_64-pc-linux -relocation-model=pic -o %t1
-; RUN: grep {call	f} %t1
-; RUN: not grep {call	f@PLT} %t1
+; RUN: grep {callq	f} %t1
+; RUN: not grep {callq	f@PLT} %t1
 
 define void @g() {
 entry:
diff --git a/libclamav/c++/llvm/test/TableGen/subst2.td b/libclamav/c++/llvm/test/TableGen/subst2.td
new file mode 100644
index 0000000..3366c9d
--- /dev/null
+++ b/libclamav/c++/llvm/test/TableGen/subst2.td
@@ -0,0 +1,15 @@
+// RUN: tblgen %s | FileCheck %s
+// CHECK: No subst
+// CHECK: No foo
+// CHECK: RECURSE foo
+
+class Recurse<string t> {
+  string Text = t;
+}
+
+class Text<string text> : 
+  Recurse<!subst("RECURSE", "RECURSE", !subst("NORECURSE", "foo", text))>;
+
+def Ok1 : Text<"No subst">;
+def Ok2 : Text<"No NORECURSE">;
+def Trouble : Text<"RECURSE NORECURSE">;
diff --git a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
index 4d80a2a..dfe3898 100644
--- a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
+++ b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
@@ -656,10 +656,10 @@ For example, without those definitions the following command wouldn't work::
     $ llvmc hello.cpp
     llvmc: Unknown suffix: cpp
 
-The language map entries should be added only for tools that are
-linked with the root node. Since tools are not allowed to have
-multiple output languages, for nodes "inside" the graph the input and
-output languages should match. This is enforced at compile-time.
+The language map entries are needed only for the tools that are linked from the
+root node. Since a tool can't have multiple output languages, for inner nodes of
+the graph the input and output languages should match. This is enforced at
+compile-time.
 
 Option preprocessor
 ===================
@@ -672,24 +672,34 @@ the driver with both of these options enabled.
 The ``OptionPreprocessor`` feature is reserved specially for these
 occasions. Example (adapted from the built-in Base plugin)::
 
-   def Preprocess : OptionPreprocessor<
-   (case (and (switch_on "O3"), (any_switch_on ["O0", "O1", "O2"])),
-              [(unset_option ["O0", "O1", "O2"]),
-               (warning "Multiple -O options specified, defaulted to -O3.")],
-         (and (switch_on "O2"), (any_switch_on ["O0", "O1"])),
-              (unset_option ["O0", "O1"]),
-         (and (switch_on "O1"), (switch_on "O0")),
-              (unset_option "O0"))
-   >;
 
-Here, ``OptionPreprocessor`` is used to unset all spurious optimization options
-(so that they are not forwarded to the compiler).
+    def Preprocess : OptionPreprocessor<
+    (case (not (any_switch_on ["O0", "O1", "O2", "O3"])),
+               (set_option "O2"),
+          (and (switch_on "O3"), (any_switch_on ["O0", "O1", "O2"])),
+               (unset_option ["O0", "O1", "O2"]),
+          (and (switch_on "O2"), (any_switch_on ["O0", "O1"])),
+               (unset_option ["O0", "O1"]),
+          (and (switch_on "O1"), (switch_on "O0")),
+               (unset_option "O0"))
+    >;
+
+Here, ``OptionPreprocessor`` is used to unset all spurious ``-O`` options so
+that they are not forwarded to the compiler. If no optimization options are
+specified, ``-O2`` is enabled.
 
 ``OptionPreprocessor`` is basically a single big ``case`` expression, which is
 evaluated only once right after the plugin is loaded. The only allowed actions
-in ``OptionPreprocessor`` are ``error``, ``warning`` and a special action
-``unset_option``, which, as the name suggests, unsets a given option. For
-convenience, ``unset_option`` also works on lists.
+in ``OptionPreprocessor`` are ``error``, ``warning``, and two special actions:
+``unset_option`` and ``set_option``. As their names suggest, they can be used to
+set or unset a given option. To set an option with ``set_option``, use the
+two-argument form: ``(set_option "parameter", VALUE)``. Here, ``VALUE`` can be
+either a string, a string list, or a boolean constant.
+
+For convenience, ``set_option`` and ``unset_option`` also work on lists. That
+is, instead of ``[(unset_option "A"), (unset_option "B")]`` you can use
+``(unset_option ["A", "B"])``. Obviously, ``(set_option ["A", "B"])`` is valid
+only if both ``A`` and ``B`` are switches.
 
 
 More advanced topics
diff --git a/libclamav/c++/llvm/tools/llvmc/doc/Makefile b/libclamav/c++/llvm/tools/llvmc/doc/Makefile
index 65e6b9b..ef98767 100644
--- a/libclamav/c++/llvm/tools/llvmc/doc/Makefile
+++ b/libclamav/c++/llvm/tools/llvmc/doc/Makefile
@@ -8,7 +8,13 @@
 ##===----------------------------------------------------------------------===##
 
 LEVEL=../../..
+
+ifneq (,$(strip $(wildcard $(LEVEL)/Makefile.config)))
 include $(LEVEL)/Makefile.config
+else
+CP=cp
+RM=rm
+endif
 
 DOC_DIR=../../../docs
 RST2HTML=rst2html --stylesheet=llvm.css --link-stylesheet
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
index 5e6f6cb..717e95e 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
@@ -19,6 +19,8 @@ def OptionList : OptionList<[
     (help "Stop after b-code generation, do not compile")),
  (switch_option "c",
     (help "Stop after assemble, do not link")),
+ (prefix_option "p",
+    (help "Specify part name")),
  (prefix_list_option "I",
     (help "Add a directory to include path")),
  (prefix_list_option "L",
@@ -33,22 +35,27 @@ def OptionList : OptionList<[
     (help "Generate linker map file with the given name")),
  (prefix_list_option "D",
     (help "Define a macro")),
+ (switch_option "X",
+    (help "Do not invoke mp2hex to create an output hex file.")),
  (switch_option "O0",
     (help "Do not optimize")),
-// (switch_option "O1",
-//    (help "Optimization level 1")),
-// (switch_option "O2",
-//    (help "Optimization level 2. (Default)")),
-// (parameter_option "pre-RA-sched",
-//    (help "Example of an option that is passed to llc")),
- (prefix_list_option "Wa,", (comma_separated),
-    (help "Pass options to native assembler")),
- (prefix_list_option "Wl,", (comma_separated),
-    (help "Pass options to native linker"))
-// (prefix_list_option "Wllc,",
-//    (help "Pass options to llc")),
-// (prefix_list_option "Wo,",
-//    (help "Pass options to llvm-ld"))
+ (switch_option "O1",
+    (help "Optimization Level 1.")),
+ (switch_option "O2",
+    (help "Optimization Level 2.")),
+ (switch_option "O3",
+    (help "Optimization Level 3.")),
+ (switch_option "Od",
+    (help "Perform Debug-safe Optimizations only.")),
+ (switch_option "r",
+    (help "Use resource file for part info"),
+    (really_hidden)),
+ (parameter_option "regalloc",
+    (help "Register allocator to use.(possible values: simple, linearscan, pbqp, local. default = pbqp)")),
+ (prefix_list_option "Wa,",
+    (help "Pass options to assembler (Run 'gpasm -help' for assembler options)")),
+ (prefix_list_option "Wl,",
+    (help "Pass options to linker (Run 'mplink -help' for linker options)"))
 ]>;
 
 // Tools
@@ -58,34 +65,27 @@ class clang_based<string language, string cmd, string ext_E> : Tool<
  (output_suffix "bc"),
  (cmd_line (case
            (switch_on "E"),
-           (case
+           (case 
               (not_empty "o"), !strconcat(cmd, " -E $INFILE -o $OUTFILE"),
               (default), !strconcat(cmd, " -E $INFILE")),
            (default), !strconcat(cmd, " $INFILE -o $OUTFILE"))),
- (actions (case
+ (actions (case 
                 (and (multiple_input_files), (or (switch_on "S"), (switch_on "c"))),
               (error "cannot specify -o with -c or -S with multiple files"),
                 (switch_on "E"), [(stop_compilation), (output_suffix ext_E)],
                 (switch_on "bc"),[(stop_compilation), (output_suffix "bc")],
                 (switch_on "g"), (append_cmd "-g"),
+                (switch_on "O1"), (append_cmd ""),
+                (switch_on "O2"), (append_cmd ""),
+                (switch_on "O3"), (append_cmd ""),
+                (switch_on "Od"), (append_cmd ""),
                 (not_empty "D"), (forward "D"),
-                (not_empty "I"), (forward "I"))),
- (sink)
+                (not_empty "I"), (forward "I"),
+                (switch_on "O0"), (append_cmd "-O0"),
+                (default), (append_cmd "-O1")))
 ]>;
 
-def clang_cc : clang_based<"c", "$CALL(GetBinDir)clang-cc                                                        -I $CALL(GetStdHeadersDir) -triple=pic16-                                       -emit-llvm-bc ", "i">;
-
-//def clang_cc : Tool<[
-// (in_language "c"),
-// (out_language "llvm-bitcode"),
-// (output_suffix "bc"),
-// (cmd_line "$CALL(GetBinDir)clang-cc -I $CALL(GetStdHeadersDir) -triple=pic16- -emit-llvm-bc "),
-// (cmd_line kkkkk
-// (actions (case
-//          (switch_on "g"), (append_cmd "g"),
-//          (not_empty "I"), (forward "I"))),
-// (sink)
-//]>;
+def clang_cc : clang_based<"c", "$CALL(GetBinDir)clang -cc1                                                    -I $CALL(GetStdHeadersDir) -triple=pic16-                                       -emit-llvm-bc ", "i">;
 
 
 // pre-link-and-lto step.
@@ -93,9 +93,14 @@ def llvm_ld : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)llvm-ld -L $CALL(GetStdLibsDir) -disable-gvn -instcombine -disable-inlining                   $INFILE -b $OUTFILE -l std"),
+ (cmd_line "$CALL(GetBinDir)llvm-ld -L $CALL(GetStdLibsDir) -instcombine -disable-licm-promotion $INFILE -b $OUTFILE -l std"),
  (actions (case
-          (switch_on "O0"), (append_cmd "-disable-opt"))),
+          (switch_on "O0"), (append_cmd "-disable-opt"),
+          (switch_on "O1"), (append_cmd "-disable-opt"),
+          (switch_on "O2"), (append_cmd ""), 
+// Whenever O3 is not specified on the command line, default i.e. disable-inlining will always be added.
+          (switch_on "O3"), (append_cmd ""),
+          (default), (append_cmd "-disable-inlining"))),
  (join)
 ]>;
 
@@ -104,7 +109,7 @@ def llvm_ld_optimizer : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)llvm-ld -disable-gvn -instcombine -disable-inlining                   $INFILE -b $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)llvm-ld -instcombine -disable-inlining                   $INFILE -b $OUTFILE"),
  (actions (case
           (switch_on "O0"), (append_cmd "-disable-opt")))
 ]>;
@@ -114,7 +119,7 @@ def pic16passes : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "obc"),
- (cmd_line "$CALL(GetBinDir)opt -pic16cg -pic16overlay $INFILE -f -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)opt -pic16overlay $INFILE -f -o $OUTFILE"),
  (actions (case
           (switch_on "O0"), (append_cmd "-disable-opt")))
 ]>;
@@ -123,21 +128,24 @@ def llc : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "assembler"),
  (output_suffix "s"),
- (cmd_line "$CALL(GetBinDir)llc -march=pic16 -disable-jump-tables -pre-RA-sched=list-burr -regalloc=pbqp -f $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)llc -march=pic16 -disable-jump-tables -pre-RA-sched=list-burr -f $INFILE -o $OUTFILE"),
  (actions (case
-          (switch_on "S"), (stop_compilation)))
-//          (not_empty "Wllc,"), (unpack_values "Wllc,"),
-//         (not_empty "pre-RA-sched"), (forward "pre-RA-sched")))
+          (switch_on "S"), (stop_compilation),
+         (not_empty "regalloc"), (forward "regalloc"),
+         (empty "regalloc"), (append_cmd "-regalloc=pbqp")))
 ]>;
 
 def gpasm : Tool<[
  (in_language "assembler"),
  (out_language "object-code"),
  (output_suffix "o"),
- (cmd_line "$CALL(GetBinDir)gpasm -r decimal -p p16F1937 -I $CALL(GetStdAsmHeadersDir) -C -c -q $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)gpasm -r decimal -I $CALL(GetStdAsmHeadersDir) -C -c -w 2 $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "c"), (stop_compilation),
           (switch_on "g"), (append_cmd "-g"),
+          (switch_on "r"), (append_cmd "-z"),
+          (not_empty "p"), (forward "p"),
+          (empty "p"), (append_cmd "-p 16f1xxx"),
           (not_empty "Wa,"), (forward_value "Wa,")))
 ]>;
 
@@ -145,13 +153,16 @@ def mplink : Tool<[
  (in_language "object-code"),
  (out_language "executable"),
  (output_suffix "cof"),
- (cmd_line "$CALL(GetBinDir)mplink.exe -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) -p 16f1937  intrinsics.lib devices.lib $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)mplink -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) intrinsics.lib stdn.lib $INFILE -o $OUTFILE"),
  (actions (case
           (not_empty "Wl,"), (forward_value "Wl,"),
+          (switch_on "r"), (append_cmd "-e"),
+          (switch_on "X"), (append_cmd "-x"),
           (not_empty "L"), (forward_as "L", "-l"),
           (not_empty "K"), (forward_as "K", "-k"),
           (not_empty "m"), (forward "m"),
-//          (not_empty "l"), [(unpack_values "l"),(append_cmd ".lib")])),
+          (not_empty "p"), [(forward "p"), (append_cmd "-c")],
+          (empty "p"), (append_cmd "-p 16f1xxx -c"),
           (not_empty "k"), (forward_value "k"),
           (not_empty "l"), (forward_value "l"))),
  (join)
@@ -175,13 +186,13 @@ def LanguageMap : LanguageMap<[
 def CompilationGraph : CompilationGraph<[
     Edge<"root", "clang_cc">,
     Edge<"root", "llvm_ld">,
-    OptionalEdge<"root", "llvm_ld_optimizer", (case
+    OptionalEdge<"root", "llvm_ld_optimizer", (case 
                                          (switch_on "S"), (inc_weight),
                                          (switch_on "c"), (inc_weight))>,
     Edge<"root", "gpasm">,
     Edge<"root", "mplink">,
     Edge<"clang_cc", "llvm_ld">,
-    OptionalEdge<"clang_cc", "llvm_ld_optimizer", (case
+    OptionalEdge<"clang_cc", "llvm_ld_optimizer", (case 
                                          (switch_on "S"), (inc_weight),
                                          (switch_on "c"), (inc_weight))>,
     Edge<"llvm_ld", "pic16passes">,
diff --git a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
index 8f928cc..1413593 100644
--- a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
+++ b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
@@ -91,7 +91,9 @@ def OptList : OptionList<[
 // Option preprocessor.
 
 def Preprocess : OptionPreprocessor<
-(case (and (switch_on "O3"), (any_switch_on ["O0", "O1", "O2"])),
+(case (not (any_switch_on ["O0", "O1", "O2", "O3"])),
+           (set_option "O2"),
+      (and (switch_on "O3"), (any_switch_on ["O0", "O1", "O2"])),
            (unset_option ["O0", "O1", "O2"]),
       (and (switch_on "O2"), (any_switch_on ["O0", "O1"])),
            (unset_option ["O0", "O1"]),
diff --git a/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp b/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp
index 92f020b..76cdafc 100644
--- a/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp
@@ -8,10 +8,12 @@
 //===----------------------------------------------------------------------===//
 
 #include <ostream>
+#include <string>
 #include "llvm/Support/raw_ostream.h"
 #include "gtest/gtest.h"
 #include "llvm/ADT/APFloat.h"
 #include "llvm/ADT/SmallString.h"
+#include "llvm/ADT/SmallVector.h"
 
 using namespace llvm;
 
@@ -21,6 +23,13 @@ static double convertToDoubleFromString(const char *Str) {
   return F.convertToDouble();
 }
 
+static std::string convertToString(double d, unsigned Prec, unsigned Pad) {
+  llvm::SmallVector<char, 100> Buffer;
+  llvm::APFloat F(d);
+  F.toString(Buffer, Prec, Pad);
+  return std::string(Buffer.data(), Buffer.size());
+}
+
 namespace {
 
 TEST(APFloatTest, Zero) {
@@ -313,6 +322,19 @@ TEST(APFloatTest, fromHexadecimalString) {
   EXPECT_EQ(2.71828, convertToDoubleFromString("2.71828"));
 }
 
+TEST(APFloatTest, toString) {
+  ASSERT_EQ("10", convertToString(10.0, 6, 3));
+  ASSERT_EQ("1.0E+1", convertToString(10.0, 6, 0));
+  ASSERT_EQ("10100", convertToString(1.01E+4, 5, 2));
+  ASSERT_EQ("1.01E+4", convertToString(1.01E+4, 4, 2));
+  ASSERT_EQ("1.01E+4", convertToString(1.01E+4, 5, 1));
+  ASSERT_EQ("0.0101", convertToString(1.01E-2, 5, 2));
+  ASSERT_EQ("0.0101", convertToString(1.01E-2, 4, 2));
+  ASSERT_EQ("1.01E-2", convertToString(1.01E-2, 5, 1));
+  ASSERT_EQ("0.7853981633974483", convertToString(0.78539816339744830961, 0, 3));
+  ASSERT_EQ("4.940656458412465E-324", convertToString(4.9406564584124654e-324, 0, 3));
+}
+
 #ifdef GTEST_HAS_DEATH_TEST
 TEST(APFloatTest, SemanticsDeath) {
   EXPECT_DEATH(APFloat(APFloat::IEEEsingle, 0.0f).convertToDouble(), "Float semantics are not IEEEdouble");
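The new `convertToString` helper above formats an `APFloat` into a stack-allocated `SmallVector<char>` and copies the exact number of bytes written into a `std::string`. A standalone analogue of that buffer-then-copy pattern, using `snprintf` instead of `APFloat::toString` (this does not reproduce APFloat's precision/padding rules, only the helper's shape):

```cpp
#include <cstdio>
#include <string>

// Format into a local buffer, then build a std::string from exactly
// the bytes written, mirroring the SmallVector-based test helper.
std::string toStringSketch(double d, int prec) {
  char buf[64];
  int n = std::snprintf(buf, sizeof buf, "%.*g", prec, d);
  return std::string(buf, n > 0 ? (size_t)n : 0);
}
```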
diff --git a/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp b/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp
index 3628922..a1884cd 100644
--- a/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp
@@ -13,6 +13,8 @@
 #include <cstdarg>
 using namespace llvm;
 
+namespace std {
+
 std::ostream &operator<<(std::ostream &OS,
                          const std::set<unsigned> &S) {
   OS << "{";
@@ -26,6 +28,8 @@ std::ostream &operator<<(std::ostream &OS,
   return OS;
 }
 
+}
+
 namespace {
 
 class FixedDeltaAlgorithm : public DeltaAlgorithm {
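The hunk above moves the `operator<<` for `std::set<unsigned>` into `namespace std` so that unqualified stream lookups (such as those gtest performs when printing failed expectations) can find it. The underlying mechanism is argument-dependent lookup: an `operator<<` is found without qualification when it lives in a namespace associated with one of its argument types. A minimal demonstration with a hypothetical type:

```cpp
#include <ostream>
#include <sstream>
#include <string>

namespace demo {
struct Point { int x, y; };

// Declared in the same namespace as Point, so argument-dependent
// lookup finds it from any context that streams a Point.
std::ostream &operator<<(std::ostream &os, const Point &p) {
  return os << '(' << p.x << ',' << p.y << ')';
}
}  // namespace demo

std::string printPoint(const demo::Point &p) {
  std::ostringstream os;
  os << p;  // found via ADL, no qualification or using-directive needed
  return os.str();
}
```

Had the operator been left in an anonymous namespace with no associated argument namespace, lookup from inside gtest's printers would not find it, which is what the patch works around.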
diff --git a/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp b/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp
index dfa208a..6507c20 100644
--- a/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp
@@ -13,7 +13,7 @@
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
-namespace {
+namespace llvm {
 
 std::ostream &operator<<(std::ostream &OS, const StringRef &S) {
   OS << S;
@@ -26,6 +26,9 @@ std::ostream &operator<<(std::ostream &OS,
   return OS;
 }
 
+}
+
+namespace {
 TEST(StringRefTest, Construction) {
   EXPECT_EQ("", StringRef());
   EXPECT_EQ("hello", StringRef("hello"));
@@ -198,6 +201,14 @@ TEST(StringRefTest, StartsWith) {
   EXPECT_FALSE(Str.startswith("hi"));
 }
 
+TEST(StringRefTest, EndsWith) {
+  StringRef Str("hello");
+  EXPECT_TRUE(Str.endswith("lo"));
+  EXPECT_FALSE(Str.endswith("helloworld"));
+  EXPECT_FALSE(Str.endswith("worldhello"));
+  EXPECT_FALSE(Str.endswith("so"));
+}
+
 TEST(StringRefTest, Find) {
   StringRef Str("hello");
   EXPECT_EQ(2U, Str.find('l'));
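The new `EndsWith` test above exercises `StringRef::endswith`. The predicate it checks can be sketched on `std::string` as a length guard plus a suffix byte comparison (assumed semantics, inferred from the test cases, not the actual StringRef implementation):

```cpp
#include <cstring>
#include <string>

// True iff the last suffix.size() bytes of s equal suffix.
bool endsWith(const std::string &s, const std::string &suffix) {
  return s.size() >= suffix.size() &&
         std::memcmp(s.data() + (s.size() - suffix.size()),
                     suffix.data(), suffix.size()) == 0;
}
```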
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
index cca3860..56abb1b 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
@@ -12,6 +12,7 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/Assembly/Parser.h"
 #include "llvm/BasicBlock.h"
+#include "llvm/Bitcode/ReaderWriter.h"
 #include "llvm/Constant.h"
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
@@ -24,6 +25,7 @@
 #include "llvm/Module.h"
 #include "llvm/ModuleProvider.h"
 #include "llvm/Support/IRBuilder.h"
+#include "llvm/Support/MemoryBuffer.h"
 #include "llvm/Support/SourceMgr.h"
 #include "llvm/Support/TypeBuilder.h"
 #include "llvm/Target/TargetSelect.h"
@@ -177,6 +179,17 @@ public:
   }
 };
 
+bool LoadAssemblyInto(Module *M, const char *assembly) {
+  SMDiagnostic Error;
+  bool success =
+    NULL != ParseAssemblyString(assembly, M, Error, M->getContext());
+  std::string errMsg;
+  raw_string_ostream os(errMsg);
+  Error.Print("", os);
+  EXPECT_TRUE(success) << os.str();
+  return success;
+}
+
 class JITTest : public testing::Test {
  protected:
   virtual void SetUp() {
@@ -192,12 +205,7 @@ class JITTest : public testing::Test {
   }
 
   void LoadAssembly(const char *assembly) {
-    SMDiagnostic Error;
-    bool success = NULL != ParseAssemblyString(assembly, M, Error, Context);
-    std::string errMsg;
-    raw_string_ostream os(errMsg);
-    Error.Print("", os);
-    ASSERT_TRUE(success) << os.str();
+    LoadAssemblyInto(M, assembly);
   }
 
   LLVMContext Context;
@@ -534,6 +542,41 @@ TEST_F(JITTest, FunctionPointersOutliveTheirCreator) {
 #endif
 }
 
+// ARM doesn't have an implementation of replaceMachineCodeForFunction(), so
+// recompileAndRelinkFunction doesn't work.
+#if !defined(__arm__)
+TEST_F(JITTest, FunctionIsRecompiledAndRelinked) {
+  Function *F = Function::Create(TypeBuilder<int(void), false>::get(Context),
+                                 GlobalValue::ExternalLinkage, "test", M);
+  BasicBlock *Entry = BasicBlock::Create(Context, "entry", F);
+  IRBuilder<> Builder(Entry);
+  Value *Val = ConstantInt::get(TypeBuilder<int, false>::get(Context), 1);
+  Builder.CreateRet(Val);
+
+  TheJIT->DisableLazyCompilation(true);
+  // Compile the function once, and make sure it works.
+  int (*OrigFPtr)() = reinterpret_cast<int(*)()>(
+    (intptr_t)TheJIT->recompileAndRelinkFunction(F));
+  EXPECT_EQ(1, OrigFPtr());
+
+  // Now change the function to return a different value.
+  Entry->eraseFromParent();
+  BasicBlock *NewEntry = BasicBlock::Create(Context, "new_entry", F);
+  Builder.SetInsertPoint(NewEntry);
+  Val = ConstantInt::get(TypeBuilder<int, false>::get(Context), 2);
+  Builder.CreateRet(Val);
+  // Recompile it, which should produce a new function pointer _and_ update the
+  // old one.
+  int (*NewFPtr)() = reinterpret_cast<int(*)()>(
+    (intptr_t)TheJIT->recompileAndRelinkFunction(F));
+
+  EXPECT_EQ(2, NewFPtr())
+    << "The new pointer should call the new version of the function";
+  EXPECT_EQ(2, OrigFPtr())
+    << "The old pointer's target should now jump to the new version";
+}
+#endif  // !defined(__arm__)
+
 }  // anonymous namespace
 // This variable is intentionally defined differently in the statically-compiled
 // program from the IR input to the JIT to assert that the JIT doesn't use its
@@ -560,6 +603,117 @@ TEST_F(JITTest, AvailableExternallyGlobalIsntEmitted) {
   EXPECT_EQ(42, loader()) << "func should return 42 from the external global,"
                           << " not 7 from the IR version.";
 }
+
+}  // anonymous namespace
+// This function is intentionally defined differently in the statically-compiled
+// program from the IR input to the JIT to assert that the JIT doesn't use its
+// definition.
+extern "C" int32_t JITTest_AvailableExternallyFunction() {
+  return 42;
+}
+namespace {
+
+TEST_F(JITTest, AvailableExternallyFunctionIsntCompiled) {
+  TheJIT->DisableLazyCompilation(true);
+  LoadAssembly("define available_externally i32 "
+               "    @JITTest_AvailableExternallyFunction() { "
+               "  ret i32 7 "
+               "} "
+               " "
+               "define i32 @func() { "
+               "  %result = tail call i32 "
+               "    @JITTest_AvailableExternallyFunction() "
+               "  ret i32 %result "
+               "} ");
+  Function *funcIR = M->getFunction("func");
+
+  int32_t (*func)() = reinterpret_cast<int32_t(*)()>(
+    (intptr_t)TheJIT->getPointerToFunction(funcIR));
+  EXPECT_EQ(42, func()) << "func should return 42 from the static version,"
+                        << " not 7 from the IR version.";
+}
+
+// Converts the LLVM assembly to bitcode and returns it in a std::string.  An
+// empty string indicates an error.
+std::string AssembleToBitcode(LLVMContext &Context, const char *Assembly) {
+  Module TempModule("TempModule", Context);
+  if (!LoadAssemblyInto(&TempModule, Assembly)) {
+    return "";
+  }
+
+  std::string Result;
+  raw_string_ostream OS(Result);
+  WriteBitcodeToFile(&TempModule, OS);
+  OS.flush();
+  return Result;
+}
+
+// Returns a newly-created ExecutionEngine that reads the bitcode in 'Bitcode'
+// lazily.  The associated ModuleProvider (owned by the ExecutionEngine) is
+// returned in MP.  Both will be NULL on an error.  Bitcode must live at least
+// as long as the ExecutionEngine.
+ExecutionEngine *getJITFromBitcode(
+  LLVMContext &Context, const std::string &Bitcode, ModuleProvider *&MP) {
+  // c_str() is null-terminated like MemoryBuffer::getMemBuffer requires.
+  MemoryBuffer *BitcodeBuffer =
+    MemoryBuffer::getMemBuffer(Bitcode.c_str(),
+                               Bitcode.c_str() + Bitcode.size(),
+                               "Bitcode for test");
+  std::string errMsg;
+  MP = getBitcodeModuleProvider(BitcodeBuffer, Context, &errMsg);
+  if (MP == NULL) {
+    ADD_FAILURE() << errMsg;
+    delete BitcodeBuffer;
+    return NULL;
+  }
+  ExecutionEngine *TheJIT = EngineBuilder(MP)
+    .setEngineKind(EngineKind::JIT)
+    .setErrorStr(&errMsg)
+    .create();
+  if (TheJIT == NULL) {
+    ADD_FAILURE() << errMsg;
+    delete MP;
+    MP = NULL;
+    return NULL;
+  }
+  return TheJIT;
+}
+
+TEST(LazyLoadedJITTest, EagerCompiledRecursionThroughGhost) {
+  LLVMContext Context;
+  const std::string Bitcode =
+    AssembleToBitcode(Context,
+                      "define i32 @recur1(i32 %a) { "
+                      "  %zero = icmp eq i32 %a, 0 "
+                      "  br i1 %zero, label %done, label %notdone "
+                      "done: "
+                      "  ret i32 3 "
+                      "notdone: "
+                      "  %am1 = sub i32 %a, 1 "
+                      "  %result = call i32 @recur2(i32 %am1) "
+                      "  ret i32 %result "
+                      "} "
+                      " "
+                      "define i32 @recur2(i32 %b) { "
+                      "  %result = call i32 @recur1(i32 %b) "
+                      "  ret i32 %result "
+                      "} ");
+  ASSERT_FALSE(Bitcode.empty()) << "Assembling failed";
+  ModuleProvider *MP;
+  OwningPtr<ExecutionEngine> TheJIT(getJITFromBitcode(Context, Bitcode, MP));
+  ASSERT_TRUE(TheJIT.get()) << "Failed to create JIT.";
+  TheJIT->DisableLazyCompilation(true);
+
+  Module *M = MP->getModule();
+  Function *recur1IR = M->getFunction("recur1");
+  Function *recur2IR = M->getFunction("recur2");
+  EXPECT_TRUE(recur1IR->hasNotBeenReadFromBitcode());
+  EXPECT_TRUE(recur2IR->hasNotBeenReadFromBitcode());
+
+  int32_t (*recur1)(int32_t) = reinterpret_cast<int32_t(*)(int32_t)>(
+    (intptr_t)TheJIT->getPointerToFunction(recur1IR));
+  EXPECT_EQ(3, recur1(4));
+}
 #endif
 // This code is copied from JITEventListenerTest, but it only runs once for all
 // the tests in this directory.  Everything seems fine, but that's strange
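The `FunctionIsRecompiledAndRelinked` test above expects the *old* function pointer to observe the new body after `recompileAndRelinkFunction`. The real JIT achieves this by patching the old machine code with a jump (which is why the test is disabled on ARM, where `replaceMachineCodeForFunction` is unimplemented). A portable sketch of the same observable behaviour uses one level of indirection instead: callers hold a handle to a slot, and relinking rewrites the slot. This is only an analogy, not how the JIT works internally:

```cpp
// Two "compiled versions" of the same logical function.
static int versionOne() { return 1; }
static int versionTwo() { return 2; }

// Callers keep a FunctionSlot; relink() swaps the implementation so
// every existing handle immediately calls the new version.
struct FunctionSlot {
  int (*impl)() = &versionOne;
  int operator()() const { return impl(); }    // call through the slot
  void relink(int (*next)()) { impl = next; }  // "recompileAndRelink"
};
```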
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
index 8de390b..f5abe75 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
@@ -9,7 +9,7 @@
 
 LEVEL = ../../..
 TESTNAME = JIT
-LINK_COMPONENTS := asmparser core support jit native
+LINK_COMPONENTS := asmparser bitreader bitwriter core jit native support
 
 include $(LEVEL)/Makefile.config
 include $(LLVM_SRC_ROOT)/unittests/Makefile.unittest
diff --git a/libclamav/c++/llvm/unittests/Support/LeakDetectorTest.cpp b/libclamav/c++/llvm/unittests/Support/LeakDetectorTest.cpp
new file mode 100644
index 0000000..85ef046
--- /dev/null
+++ b/libclamav/c++/llvm/unittests/Support/LeakDetectorTest.cpp
@@ -0,0 +1,29 @@
+//===- llvm/unittest/LeakDetector/LeakDetector.cpp - LeakDetector tests ---===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "gtest/gtest.h"
+#include "llvm/Support/LeakDetector.h"
+
+using namespace llvm;
+
+namespace {
+
+#ifdef GTEST_HAS_DEATH_TEST
+TEST(LeakDetector, Death1) {
+  LeakDetector::addGarbageObject((void*) 1);
+  LeakDetector::addGarbageObject((void*) 2);
+
+  EXPECT_DEATH(LeakDetector::addGarbageObject((void*) 1),
+               ".*Ts.count\\(o\\) == 0 && \"Object already in set!\"");
+  EXPECT_DEATH(LeakDetector::addGarbageObject((void*) 2),
+               "Cache != o && \"Object already in set!\"");
+}
+#endif
+
+}
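The death tests in the new LeakDetectorTest.cpp above assert that registering the same garbage object twice fails both the cached-pointer check (`Cache != o`) and the set-membership check (`Ts.count(o) == 0`). A minimal sketch of that double-registration invariant, returning `false` instead of asserting so it can be exercised without a death test (the class and its structure are illustrative, not LLVM's LeakDetector):

```cpp
#include <set>

class LeakDetectorSketch {
  std::set<const void *> Objects;
  const void *Cache = nullptr;  // fast path, mirrors the "Cache != o" check
public:
  // Returns false where the real detector would assert
  // "Object already in set!".
  bool addGarbageObject(const void *o) {
    if (o == Cache || Objects.count(o))
      return false;
    if (Cache)
      Objects.insert(Cache);  // demote the cached entry into the set
    Cache = o;
    return true;
  }
};
```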
diff --git a/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp b/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp
new file mode 100644
index 0000000..11b4dff
--- /dev/null
+++ b/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp
@@ -0,0 +1,31 @@
+//===- llvm/unittest/VMCore/DerivedTypesTest.cpp - Types unit tests -------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "gtest/gtest.h"
+#include "../lib/VMCore/LLVMContextImpl.h"
+#include "llvm/Type.h"
+#include "llvm/DerivedTypes.h"
+#include "llvm/LLVMContext.h"
+using namespace llvm;
+
+namespace {
+
+TEST(OpaqueTypeTest, RegisterWithContext) {
+  LLVMContext C;
+  LLVMContextImpl *pImpl = C.pImpl;  
+
+  EXPECT_EQ(0u, pImpl->OpaqueTypes.size());
+  {
+    PATypeHolder Type = OpaqueType::get(C);
+    EXPECT_EQ(1u, pImpl->OpaqueTypes.size());
+  }
+  EXPECT_EQ(0u, pImpl->OpaqueTypes.size());
+}
+
+}  // namespace
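The `RegisterWithContext` test above verifies RAII bookkeeping: while a `PATypeHolder` holds an `OpaqueType`, the context's `OpaqueTypes` set has one entry, and it is empty again once the holder is destroyed. The pattern in miniature (types and names here are hypothetical stand-ins, not LLVM's):

```cpp
#include <set>

struct ContextSketch { std::set<int> OpaqueTypes; };

// Registers on construction, unregisters on destruction.
class TypeHolderSketch {
  ContextSketch &C;
  int Id;
public:
  TypeHolderSketch(ContextSketch &Ctx, int Id) : C(Ctx), Id(Id) {
    C.OpaqueTypes.insert(Id);
  }
  ~TypeHolderSketch() { C.OpaqueTypes.erase(Id); }
};

// Mirrors the test: empty, then one entry inside the scope, then empty.
bool registrationBalances() {
  ContextSketch C;
  if (!C.OpaqueTypes.empty()) return false;
  {
    TypeHolderSketch H(C, 1);
    if (C.OpaqueTypes.size() != 1) return false;
  }
  return C.OpaqueTypes.empty();
}
```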
diff --git a/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt b/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
index daf8676..ce9b66f 100644
--- a/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
@@ -23,6 +23,8 @@ add_executable(tblgen
   TGValueTypes.cpp
   TableGen.cpp
   TableGenBackend.cpp
+  X86DisassemblerTables.cpp
+  X86RecognizableInstr.cpp
   )
 
 target_link_libraries(tblgen LLVMSupport LLVMSystem)
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
index e9f30be..7e6c769 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
@@ -61,11 +61,14 @@ void CodeEmitterGen::reverseBits(std::vector<Record*> &Insts) {
 
 // If the VarBitInit at position 'bit' matches the specified variable then
 // return the variable bit position.  Otherwise return -1.
-int CodeEmitterGen::getVariableBit(const Init *VarVal,
+int CodeEmitterGen::getVariableBit(const std::string &VarName,
             BitsInit *BI, int bit) {
   if (VarBitInit *VBI = dynamic_cast<VarBitInit*>(BI->getBit(bit))) {
     TypedInit *TI = VBI->getVariable();
-    if (TI == VarVal) return VBI->getBitNum();
+    
+    if (VarInit *VI = dynamic_cast<VarInit*>(TI)) {
+      if (VI->getName() == VarName) return VBI->getBitNum();
+    }
   }
   
   return -1;
@@ -159,11 +162,11 @@ void CodeEmitterGen::run(raw_ostream &o) {
       if (!Vals[i].getPrefix() && !Vals[i].getValue()->isComplete()) {
         // Is the operand continuous? If so, we can just mask and OR it in
         // instead of doing it bit-by-bit, saving a lot in runtime cost.
-        const Init *VarVal = Vals[i].getValue();
+        const std::string &VarName = Vals[i].getName();
         bool gotOp = false;
         
         for (int bit = BI->getNumBits()-1; bit >= 0; ) {
-          int varBit = getVariableBit(VarVal, BI, bit);
+          int varBit = getVariableBit(VarName, BI, bit);
           
           if (varBit == -1) {
             --bit;
@@ -173,7 +176,7 @@ void CodeEmitterGen::run(raw_ostream &o) {
             int N = 1;
             
             for (--bit; bit >= 0;) {
-              varBit = getVariableBit(VarVal, BI, bit);
+              varBit = getVariableBit(VarName, BI, bit);
               if (varBit == -1 || varBit != (beginVarBit - N)) break;
               ++N;
               --bit;
@@ -185,7 +188,7 @@ void CodeEmitterGen::run(raw_ostream &o) {
               while (CGI.isFlatOperandNotEmitted(op))
                 ++op;
               
-              Case += "      // op: " + Vals[i].getName() + "\n"
+              Case += "      // op: " + VarName + "\n"
                    +  "      op = getMachineOpValue(MI, MI.getOperand("
                    +  utostr(op++) + "));\n";
               gotOp = true;
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h
index 2dc34ba..f0b3229 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h
@@ -23,7 +23,6 @@ namespace llvm {
 
 class RecordVal;
 class BitsInit;
-struct Init;
 
 class CodeEmitterGen : public TableGenBackend {
   RecordKeeper &Records;
@@ -36,7 +35,7 @@ private:
   void emitMachineOpEmitter(raw_ostream &o, const std::string &Namespace);
   void emitGetValueBit(raw_ostream &o, const std::string &Namespace);
   void reverseBits(std::vector<Record*> &Insts);
-  int getVariableBit(const Init *VarVal, BitsInit *BI, int bit);
+  int getVariableBit(const std::string &VarName, BitsInit *BI, int bit);
 };
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
index fab41c5..cf79365 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
@@ -321,8 +321,7 @@ bool SDTypeConstraint::ApplyTypeConstraint(TreePatternNode *N,
       getOperandNum(x.SDTCisVTSmallerThanOp_Info.OtherOperandNum, N,NumResults);
     
     // It must be integer.
-    bool MadeChange = false;
-    MadeChange |= OtherNode->UpdateNodeType(MVT::iAny, TP);
+    bool MadeChange = OtherNode->UpdateNodeType(MVT::iAny, TP);
     
     // This code only handles nodes that have one type set.  Assert here so
     // that we can change this if we ever need to deal with multiple value
@@ -330,7 +329,7 @@ bool SDTypeConstraint::ApplyTypeConstraint(TreePatternNode *N,
     assert(OtherNode->getExtTypes().size() == 1 && "Node has too many types!");
     if (OtherNode->hasTypeSet() && OtherNode->getTypeNum(0) <= VT)
       OtherNode->UpdateNodeType(MVT::Other, TP);  // Throw an error.
-    return false;
+    return MadeChange;
   }
   case SDTCisOpSmallerThanOp: {
     TreePatternNode *BigOperand =
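The CodeGenDAGPatterns fix above is subtle: the old code computed `MadeChange` but then returned a hard-coded `false`, silently discarding the fact that `UpdateNodeType` changed state. That matters because type constraints are applied in a fixed-point loop that stops when no step reports a change. A toy version of why the return value must be honest (the mark-propagation problem here is illustrative, not LLVM's type inference):

```cpp
#include <utility>
#include <vector>

// One pass of a fixed-point analysis: propagate a mark along edges.
bool applyStep(std::vector<bool> &mark,
               const std::vector<std::pair<int, int>> &edges) {
  bool changed = false;
  for (auto &e : edges)
    if (mark[e.first] && !mark[e.second]) {
      mark[e.second] = true;
      changed = true;  // must be reported, or the driver stops early
    }
  return changed;
}

// The driver loops until a pass reports no change; returns pass count.
int solve(std::vector<bool> &mark,
          const std::vector<std::pair<int, int>> &edges) {
  int iters = 0;
  while (applyStep(mark, edges))
    ++iters;
  return iters;
}
```

If `applyStep` returned `false` unconditionally, the driver would exit after one call regardless of pending work, which is the class of bug the `return MadeChange;` change fixes.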
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
index 8520d9e..c69ce96 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
@@ -18,36 +18,56 @@
 using namespace llvm;
 
 static void ParseConstraint(const std::string &CStr, CodeGenInstruction *I) {
-  // FIXME: Only supports TIED_TO for now.
+  // EARLY_CLOBBER: @earlyclobber $reg
+  std::string::size_type wpos = CStr.find_first_of(" \t");
+  std::string::size_type start = CStr.find_first_not_of(" \t");
+  std::string Tok = CStr.substr(start, wpos - start);
+  if (Tok == "@earlyclobber") {
+    std::string Name = CStr.substr(wpos+1);
+    wpos = Name.find_first_not_of(" \t");
+    if (wpos == std::string::npos)
+      throw "Illegal format for @earlyclobber constraint: '" + CStr + "'";
+    Name = Name.substr(wpos);
+    std::pair<unsigned,unsigned> Op =
+      I->ParseOperandName(Name, false);
+
+    // Build the string for the operand
+    std::string OpConstraint = "(1 << TOI::EARLY_CLOBBER)";
+    if (!I->OperandList[Op.first].Constraints[Op.second].empty())
+      throw "Operand '" + Name + "' cannot have multiple constraints!";
+    I->OperandList[Op.first].Constraints[Op.second] = OpConstraint;
+    return;
+  }
+
+  // Only other constraint is "TIED_TO" for now.
   std::string::size_type pos = CStr.find_first_of('=');
   assert(pos != std::string::npos && "Unrecognized constraint");
-  std::string::size_type start = CStr.find_first_not_of(" \t");
+  start = CStr.find_first_not_of(" \t");
   std::string Name = CStr.substr(start, pos - start);
-  
+
   // TIED_TO: $src1 = $dst
-  std::string::size_type wpos = Name.find_first_of(" \t");
+  wpos = Name.find_first_of(" \t");
   if (wpos == std::string::npos)
     throw "Illegal format for tied-to constraint: '" + CStr + "'";
   std::string DestOpName = Name.substr(0, wpos);
   std::pair<unsigned,unsigned> DestOp = I->ParseOperandName(DestOpName, false);
-  
+
   Name = CStr.substr(pos+1);
   wpos = Name.find_first_not_of(" \t");
   if (wpos == std::string::npos)
     throw "Illegal format for tied-to constraint: '" + CStr + "'";
-  
+
   std::pair<unsigned,unsigned> SrcOp =
   I->ParseOperandName(Name.substr(wpos), false);
   if (SrcOp > DestOp)
     throw "Illegal tied-to operand constraint '" + CStr + "'";
-  
-  
+
+
   unsigned FlatOpNo = I->getFlattenedOperandNumber(SrcOp);
   // Build the string for the operand.
   std::string OpConstraint =
   "((" + utostr(FlatOpNo) + " << 16) | (1 << TOI::TIED_TO))";
-  
-  
+
   if (!I->OperandList[DestOp.first].Constraints[DestOp.second].empty())
     throw "Operand '" + DestOpName + "' cannot have multiple constraints!";
   I->OperandList[DestOp.first].Constraints[DestOp.second] = OpConstraint;
@@ -56,20 +76,20 @@ static void ParseConstraint(const std::string &CStr, CodeGenInstruction *I) {
 static void ParseConstraints(const std::string &CStr, CodeGenInstruction *I) {
   // Make sure the constraints list for each operand is large enough to hold
   // constraint info, even if none is present.
-  for (unsigned i = 0, e = I->OperandList.size(); i != e; ++i) 
+  for (unsigned i = 0, e = I->OperandList.size(); i != e; ++i)
     I->OperandList[i].Constraints.resize(I->OperandList[i].MINumOperands);
-  
+
   if (CStr.empty()) return;
-  
+
   const std::string delims(",");
   std::string::size_type bidx, eidx;
-  
+
   bidx = CStr.find_first_not_of(delims);
   while (bidx != std::string::npos) {
     eidx = CStr.find_first_of(delims, bidx);
     if (eidx == std::string::npos)
       eidx = CStr.length();
-    
+
     ParseConstraint(CStr.substr(bidx, eidx - bidx), I);
     bidx = CStr.find_first_not_of(delims, eidx);
   }
@@ -145,7 +165,7 @@ CodeGenInstruction::CodeGenInstruction(Record *R, const std::string &AsmStr)
     if (Rec->isSubClassOf("Operand")) {
       PrintMethod = Rec->getValueAsString("PrintMethod");
       MIOpInfo = Rec->getValueAsDag("MIOperandInfo");
-      
+
       // Verify that MIOpInfo has an 'ops' root value.
       if (!dynamic_cast<DefInit*>(MIOpInfo->getOperator()) ||
           dynamic_cast<DefInit*>(MIOpInfo->getOperator())
@@ -165,7 +185,7 @@ CodeGenInstruction::CodeGenInstruction(Record *R, const std::string &AsmStr)
     } else if (Rec->getName() == "variable_ops") {
       isVariadic = true;
       continue;
-    } else if (!Rec->isSubClassOf("RegisterClass") && 
+    } else if (!Rec->isSubClassOf("RegisterClass") &&
                Rec->getName() != "ptr_rc" && Rec->getName() != "unknown")
       throw "Unknown operand class '" + Rec->getName() +
             "' in '" + R->getName() + "' instruction!";
@@ -177,15 +197,15 @@ CodeGenInstruction::CodeGenInstruction(Record *R, const std::string &AsmStr)
     if (!OperandNames.insert(DI->getArgName(i)).second)
       throw "In instruction '" + R->getName() + "', operand #" + utostr(i) +
         " has the same name as a previous operand!";
-    
-    OperandList.push_back(OperandInfo(Rec, DI->getArgName(i), PrintMethod, 
+
+    OperandList.push_back(OperandInfo(Rec, DI->getArgName(i), PrintMethod,
                                       MIOperandNo, NumOps, MIOpInfo));
     MIOperandNo += NumOps;
   }
 
   // Parse Constraints.
   ParseConstraints(R->getValueAsString("Constraints"), this);
-  
+
   // For backward compatibility: isTwoAddress means operand 1 is tied to
   // operand 0.
   if (isTwoAddress) {
@@ -194,13 +214,13 @@ CodeGenInstruction::CodeGenInstruction(Record *R, const std::string &AsmStr)
             "already has constraint set!";
     OperandList[1].Constraints[0] = "((0 << 16) | (1 << TOI::TIED_TO))";
   }
-  
+
   // Any operands with unset constraints get 0 as their constraint.
   for (unsigned op = 0, e = OperandList.size(); op != e; ++op)
     for (unsigned j = 0, e = OperandList[op].MINumOperands; j != e; ++j)
       if (OperandList[op].Constraints[j].empty())
         OperandList[op].Constraints[j] = "0";
-  
+
   // Parse the DisableEncoding field.
   std::string DisableEncoding = R->getValueAsString("DisableEncoding");
   while (1) {
@@ -229,15 +249,15 @@ unsigned CodeGenInstruction::getOperandNamed(const std::string &Name) const {
         "' does not have an operand named '$" + Name + "'!";
 }
 
-std::pair<unsigned,unsigned> 
+std::pair<unsigned,unsigned>
 CodeGenInstruction::ParseOperandName(const std::string &Op,
                                      bool AllowWholeOp) {
   if (Op.empty() || Op[0] != '$')
     throw TheDef->getName() + ": Illegal operand name: '" + Op + "'";
-  
+
   std::string OpName = Op.substr(1);
   std::string SubOpName;
-  
+
   // Check to see if this is $foo.bar.
   std::string::size_type DotIdx = OpName.find_first_of(".");
   if (DotIdx != std::string::npos) {
@@ -246,7 +266,7 @@ CodeGenInstruction::ParseOperandName(const std::string &Op,
       throw TheDef->getName() + ": illegal empty suboperand name in '" +Op +"'";
     OpName = OpName.substr(0, DotIdx);
   }
-  
+
   unsigned OpIdx = getOperandNamed(OpName);
 
   if (SubOpName.empty()) {  // If no suboperand name was specified:
@@ -255,16 +275,16 @@ CodeGenInstruction::ParseOperandName(const std::string &Op,
         SubOpName.empty())
       throw TheDef->getName() + ": Illegal to refer to"
             " whole operand part of complex operand '" + Op + "'";
-  
+
     // Otherwise, return the operand.
     return std::make_pair(OpIdx, 0U);
   }
-  
+
   // Find the suboperand number involved.
   DagInit *MIOpInfo = OperandList[OpIdx].MIOperandInfo;
   if (MIOpInfo == 0)
     throw TheDef->getName() + ": unknown suboperand name in '" + Op + "'";
-  
+
   // Find the operand with the right name.
   for (unsigned i = 0, e = MIOpInfo->getNumArgs(); i != e; ++i)
     if (MIOpInfo->getArgName(i) == SubOpName)
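The `ParseConstraint` changes above add a second constraint form: the function now peels the first whitespace-delimited token, handles `@earlyclobber $reg`, and only then falls back to the original `$src = $dst` tied-to parsing. The token-splitting step can be sketched like this (a hedged simplification; the real code also validates formats and throws on errors):

```cpp
#include <string>
#include <utility>

// Split a constraint string into its first whitespace-delimited token
// and the remainder (with surrounding whitespace trimmed).
std::pair<std::string, std::string> splitConstraint(const std::string &CStr) {
  std::string::size_type start = CStr.find_first_not_of(" \t");
  std::string::size_type wpos = CStr.find_first_of(" \t", start);
  std::string Tok = CStr.substr(start, wpos - start);
  std::string Rest;
  if (wpos != std::string::npos) {
    std::string::size_type rstart = CStr.find_first_not_of(" \t", wpos);
    if (rstart != std::string::npos)
      Rest = CStr.substr(rstart);
  }
  return {Tok, Rest};  // e.g. {"@earlyclobber", "$dst"}
}
```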
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
index 66debe2..a901fd0 100644
--- a/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
@@ -1292,8 +1292,8 @@ public:
       // possible and it avoids CSE map recalculation for the node's
       // users, however it's tricky to use in a non-root context.
       //
-      // We also don't use if the pattern replacement is being used to
-      // jettison a chain result, since morphing the node in place
+      // We also don't use SelectNodeTo if the pattern replacement is being
+      // used to jettison a chain result, since morphing the node in place
       // would leave users of the chain dangling.
       //
       if (!isRoot || (InputHasChain && !NodeHasChain)) {
diff --git a/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp
index cc13125..61b9b15 100644
--- a/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp
@@ -10,7 +10,86 @@
 #include "DisassemblerEmitter.h"
 #include "CodeGenTarget.h"
 #include "Record.h"
+#include "X86DisassemblerTables.h"
+#include "X86RecognizableInstr.h"
 using namespace llvm;
+using namespace llvm::X86Disassembler;
+
+/// DisassemblerEmitter - Contains disassembler table emitters for various
+/// architectures.
+
+/// X86 Disassembler Emitter
+///
+/// *** IF YOU'RE HERE TO RESOLVE A "Primary decode conflict", LOOK DOWN NEAR
+///     THE END OF THIS COMMENT!
+///
+/// The X86 disassembler emitter is part of the X86 Disassembler, which is
+/// documented in lib/Target/X86/X86Disassembler.h.
+///
+/// The emitter produces the tables that the disassembler uses to translate
+/// instructions.  The emitter generates the following tables:
+///
+/// - One table (CONTEXTS_SYM) that contains a mapping of attribute masks to
+///   instruction contexts.  Although for each attribute there are cases where
+///   that attribute determines decoding, in the majority of cases decoding is
+///   the same whether or not an attribute is present.  For example, a 64-bit
+///   instruction with an OPSIZE prefix and an XS prefix decodes the same way in
+///   all cases as a 64-bit instruction with only OPSIZE set.  (The XS prefix
+///   may have effects on its execution, but does not change the instruction
+///   returned.)  This allows considerable space savings in other tables.
+/// - Four tables (ONEBYTE_SYM, TWOBYTE_SYM, THREEBYTE38_SYM, and
+///   THREEBYTE3A_SYM) contain the hierarchy that the decoder traverses while
+///   decoding an instruction.  At the lowest level of this hierarchy are
+///   instruction UIDs, 16-bit integers that can be used to uniquely identify
+///   the instruction and correspond exactly to its position in the list of
+///   CodeGenInstructions for the target.
+/// - One table (INSTRUCTIONS_SYM) contains information about the operands of
+///   each instruction and how to decode them.
+///
+/// During table generation, there may be conflicts between instructions that
+/// occupy the same space in the decode tables.  These conflicts are resolved as
+/// follows in setTableFields() (X86DisassemblerTables.cpp)
+///
+/// - If the current context is the native context for one of the instructions
+///   (that is, the attributes specified for it in the LLVM tables specify
+///   precisely the current context), then it has priority.
+/// - If the current context isn't native for either of the instructions, then
+///   the higher-priority context wins (that is, the one that is more specific).
+///   That hierarchy is determined by outranks() (X86DisassemblerTables.cpp)
+/// - If the current context is native for both instructions, then the table
+///   emitter reports a conflict and dies.
+///
+/// *** RESOLUTION FOR "Primary decode conflict"S
+///
+/// If two instructions collide, typically the solution is (in order of
+/// likelihood):
+///
+/// (1) to filter out one of the instructions by editing filter()
+///     (X86RecognizableInstr.cpp).  This is the most common resolution, but
+///     check the Intel manuals first to make sure that (2) and (3) are not the
+///     problem.
+/// (2) to fix the tables (X86.td and its subsidiaries) so the opcodes are
+///     accurate.  Sometimes they are not.
+/// (3) to fix the tables to reflect the actual context (for example, required
+///     prefixes), and possibly to add a new context by editing
+///     lib/Target/X86/X86DisassemblerDecoderCommon.h.  This is unlikely to be
+///     the cause.
+///
+/// DisassemblerEmitter.cpp contains the implementation for the emitter,
+///   which simply pulls out instructions from the CodeGenTarget and pushes them
+///   into X86DisassemblerTables.
+/// X86DisassemblerTables.h contains the interface for the instruction tables,
+///   which manage and emit the structures discussed above.
+/// X86DisassemblerTables.cpp contains the implementation for the instruction
+///   tables.
+/// X86ModRMFilters.h contains filters that can be used to determine which
+///   ModR/M values are valid for a particular instruction.  These are used to
+///   populate ModRMDecisions.
+/// X86RecognizableInstr.h contains the interface for a single instruction,
+///   which knows how to translate itself from a CodeGenInstruction and provide
+///   the information necessary for integration into the tables.
+/// X86RecognizableInstr.cpp contains the implementation for a single
+///   instruction.
 
 void DisassemblerEmitter::run(raw_ostream &OS) {
   CodeGenTarget Target;
@@ -25,6 +104,26 @@ void DisassemblerEmitter::run(raw_ostream &OS) {
      << " *===---------------------------------------------------------------"
      << "-------===*/\n";
 
+  // X86 uses a custom disassembler.
+  if (Target.getName() == "X86") {
+    DisassemblerTables Tables;
+  
+    std::vector<const CodeGenInstruction*> numberedInstructions;
+    Target.getInstructionsByEnumValue(numberedInstructions);
+    
+    for (unsigned i = 0, e = numberedInstructions.size(); i != e; ++i)
+      RecognizableInstr::processInstr(Tables, *numberedInstructions[i], i);
+
+    // FIXME: As long as we are using exceptions, might as well drop this to the
+    // actual conflict site.
+    if (Tables.hasConflicts())
+      throw TGError(Target.getTargetRecord()->getLoc(),
+                    "Primary decode conflict");
+
+    Tables.emit(OS);
+    return;
+  }
+
   throw TGError(Target.getTargetRecord()->getLoc(),
                 "Unable to generate disassembler for this target");
 }
diff --git a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
index 5be9ab7..b685840 100644
--- a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
@@ -17,6 +17,7 @@
 #include "llvm/ADT/IntrusiveRefCntPtr.h"
 #include "llvm/ADT/StringMap.h"
 #include "llvm/ADT/StringSet.h"
+
 #include <algorithm>
 #include <cassert>
 #include <functional>
@@ -26,6 +27,7 @@
 
 using namespace llvm;
 
+namespace {
 
 //===----------------------------------------------------------------------===//
 /// Typedefs
@@ -37,18 +39,16 @@ typedef std::vector<std::string> StrVector;
 /// Constants
 
 // Indentation.
-static const unsigned TabWidth = 4;
-static const unsigned Indent1  = TabWidth*1;
-static const unsigned Indent2  = TabWidth*2;
-static const unsigned Indent3  = TabWidth*3;
+const unsigned TabWidth = 4;
+const unsigned Indent1  = TabWidth*1;
+const unsigned Indent2  = TabWidth*2;
+const unsigned Indent3  = TabWidth*3;
 
 // Default help string.
-static const char * const DefaultHelpString = "NO HELP MESSAGE PROVIDED";
+const char * const DefaultHelpString = "NO HELP MESSAGE PROVIDED";
 
 // Name for the "sink" option.
-static const char * const SinkOptionName = "AutoGeneratedSinkOption";
-
-namespace {
+const char * const SinkOptionName = "AutoGeneratedSinkOption";
 
 //===----------------------------------------------------------------------===//
 /// Helper functions
@@ -86,26 +86,30 @@ const DagInit& InitPtrToDag(const Init* ptr) {
   return val;
 }
 
-const std::string GetOperatorName(const DagInit* D) {
-  return D->getOperator()->getAsString();
+const std::string GetOperatorName(const DagInit& D) {
+  return D.getOperator()->getAsString();
 }
 
-const std::string GetOperatorName(const DagInit& D) {
-  return GetOperatorName(&D);
+/// CheckBooleanConstant - Check that the provided value is a boolean constant.
+void CheckBooleanConstant(const Init* I) {
+  const DefInit& val = dynamic_cast<const DefInit&>(*I);
+  const std::string& str = val.getAsString();
+
+  if (str != "true" && str != "false") {
+    throw "Incorrect boolean value: '" + str +
+      "': must be either 'true' or 'false'";
+  }
 }
 
-// checkNumberOfArguments - Ensure that the number of args in d is
+// CheckNumberOfArguments - Ensure that the number of args in d is
 // greater than or equal to min_arguments, otherwise throw an exception.
-void checkNumberOfArguments (const DagInit* d, unsigned minArgs) {
-  if (!d || d->getNumArgs() < minArgs)
+void CheckNumberOfArguments (const DagInit& d, unsigned minArgs) {
+  if (d.getNumArgs() < minArgs)
     throw GetOperatorName(d) + ": too few arguments!";
 }
-void checkNumberOfArguments (const DagInit& d, unsigned minArgs) {
-  checkNumberOfArguments(&d, minArgs);
-}
 
-// isDagEmpty - is this DAG marked with an empty marker?
-bool isDagEmpty (const DagInit* d) {
+// IsDagEmpty - is this DAG marked with an empty marker?
+bool IsDagEmpty (const DagInit& d) {
   return GetOperatorName(d) == "empty_dag_marker";
 }
 
@@ -132,8 +136,8 @@ std::string EscapeVariableName(const std::string& Var) {
   return ret;
 }
 
-/// oneOf - Does the input string contain this character?
-bool oneOf(const char* lst, char c) {
+/// OneOf - Does the input string contain this character?
+bool OneOf(const char* lst, char c) {
   while (*lst) {
     if (*lst++ == c)
       return true;
@@ -142,7 +146,7 @@ bool oneOf(const char* lst, char c) {
 }
 
 template <class I, class S>
-void checkedIncrement(I& P, I E, S ErrorString) {
+void CheckedIncrement(I& P, I E, S ErrorString) {
   ++P;
   if (P == E)
     throw ErrorString;
@@ -499,17 +503,36 @@ public:
 
 };
 
+template <class Handler, class FunctionObject>
+Handler GetHandler(FunctionObject* Obj, const DagInit& Dag) {
+  const std::string& HandlerName = GetOperatorName(Dag);
+  return Obj->GetHandler(HandlerName);
+}
+
 template <class FunctionObject>
-void InvokeDagInitHandler(FunctionObject* Obj, Init* i) {
-  typedef void (FunctionObject::*Handler) (const DagInit*);
+void InvokeDagInitHandler(FunctionObject* Obj, Init* I) {
+  typedef void (FunctionObject::*Handler) (const DagInit&);
 
-  const DagInit& property = InitPtrToDag(i);
-  const std::string& property_name = GetOperatorName(property);
-  Handler h = Obj->GetHandler(property_name);
+  const DagInit& Dag = InitPtrToDag(I);
+  Handler h = GetHandler<Handler>(Obj, Dag);
 
-  ((Obj)->*(h))(&property);
+  ((Obj)->*(h))(Dag);
 }
 
+template <class FunctionObject>
+void InvokeDagInitHandler(const FunctionObject* const Obj,
+                          const Init* I, unsigned IndentLevel, raw_ostream& O)
+{
+  typedef void (FunctionObject::*Handler)
+    (const DagInit&, unsigned IndentLevel, raw_ostream& O) const;
+
+  const DagInit& Dag = InitPtrToDag(I);
+  Handler h = GetHandler<Handler>(Obj, Dag);
+
+  ((Obj)->*(h))(Dag, IndentLevel, O);
+}
+
+
 template <typename H>
 typename HandlerTable<H>::HandlerMap HandlerTable<H>::Handlers_;
 
@@ -521,7 +544,7 @@ bool HandlerTable<H>::staticMembersInitialized_ = false;
 /// option property list.
 class CollectOptionProperties;
 typedef void (CollectOptionProperties::* CollectOptionPropertiesHandler)
-(const DagInit*);
+(const DagInit&);
 
 class CollectOptionProperties
 : public HandlerTable<CollectOptionPropertiesHandler>
@@ -555,8 +578,8 @@ public:
 
   /// operator() - Just forwards to the corresponding property
   /// handler.
-  void operator() (Init* i) {
-    InvokeDagInitHandler(this, i);
+  void operator() (Init* I) {
+    InvokeDagInitHandler(this, I);
   }
 
 private:
@@ -564,44 +587,44 @@ private:
   /// Option property handlers --
   /// Methods that handle option properties such as (help) or (hidden).
 
-  void onExtern (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onExtern (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     optDesc_.setExtern();
   }
 
-  void onHelp (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    optDesc_.Help = InitPtrToString(d->getArg(0));
+  void onHelp (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    optDesc_.Help = InitPtrToString(d.getArg(0));
   }
 
-  void onHidden (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onHidden (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     optDesc_.setHidden();
   }
 
-  void onReallyHidden (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onReallyHidden (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     optDesc_.setReallyHidden();
   }
 
-  void onCommaSeparated (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onCommaSeparated (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     if (!optDesc_.isList())
       throw "'comma_separated' is valid only on list options!";
     optDesc_.setCommaSeparated();
   }
 
-  void onRequired (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onRequired (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     if (optDesc_.isOneOrMore() || optDesc_.isOptional())
       throw "Only one of (required), (optional) or "
         "(one_or_more) properties is allowed!";
     optDesc_.setRequired();
   }
 
-  void onInit (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    Init* i = d->getArg(0);
+  void onInit (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    Init* i = d.getArg(0);
     const std::string& str = i->getAsString();
 
     bool correct = optDesc_.isParameter() && dynamic_cast<StringInit*>(i);
@@ -613,8 +636,8 @@ private:
     optDesc_.InitVal = i;
   }
 
-  void onOneOrMore (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onOneOrMore (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     if (optDesc_.isRequired() || optDesc_.isOptional())
       throw "Only one of (required), (optional) or "
         "(one_or_more) properties is allowed!";
@@ -624,8 +647,8 @@ private:
     optDesc_.setOneOrMore();
   }
 
-  void onOptional (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onOptional (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     if (optDesc_.isRequired() || optDesc_.isOneOrMore())
       throw "Only one of (required), (optional) or "
         "(one_or_more) properties is allowed!";
@@ -635,9 +658,9 @@ private:
     optDesc_.setOptional();
   }
 
-  void onMultiVal (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    int val = InitPtrToInt(d->getArg(0));
+  void onMultiVal (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    int val = InitPtrToInt(d.getArg(0));
     if (val < 2)
       throw "Error in the 'multi_val' property: "
         "the value must be greater than 1!";
@@ -660,7 +683,7 @@ public:
 
   void operator()(const Init* i) {
     const DagInit& d = InitPtrToDag(i);
-    checkNumberOfArguments(&d, 1);
+    CheckNumberOfArguments(d, 1);
 
     const OptionType::OptionType Type =
       stringToOptionType(GetOperatorName(d));
@@ -669,14 +692,14 @@ public:
     OptionDescription OD(Type, Name);
 
     if (!OD.isExtern())
-      checkNumberOfArguments(&d, 2);
+      CheckNumberOfArguments(d, 2);
 
     if (OD.isAlias()) {
       // Aliases store the aliased option name in the 'Help' field.
       OD.Help = InitPtrToString(d.getArg(1));
     }
     else if (!OD.isExtern()) {
-      processOptionProperties(&d, OD);
+      processOptionProperties(d, OD);
     }
     OptDescs_.InsertDescription(OD);
   }
@@ -684,12 +707,12 @@ public:
 private:
   /// processOptionProperties - Go through the list of option
   /// properties and call a corresponding handler for each.
-  static void processOptionProperties (const DagInit* d, OptionDescription& o) {
-    checkNumberOfArguments(d, 2);
-    DagInit::const_arg_iterator B = d->arg_begin();
+  static void processOptionProperties (const DagInit& d, OptionDescription& o) {
+    CheckNumberOfArguments(d, 2);
+    DagInit::const_arg_iterator B = d.arg_begin();
     // Skip the first argument: it's always the option name.
     ++B;
-    std::for_each(B, d->arg_end(), CollectOptionProperties(o));
+    std::for_each(B, d.arg_end(), CollectOptionProperties(o));
   }
 
 };
@@ -750,7 +773,7 @@ typedef std::vector<IntrusiveRefCntPtr<ToolDescription> > ToolDescriptions;
 
 class CollectToolProperties;
 typedef void (CollectToolProperties::* CollectToolPropertiesHandler)
-(const DagInit*);
+(const DagInit&);
 
 class CollectToolProperties : public HandlerTable<CollectToolPropertiesHandler>
 {
@@ -779,8 +802,8 @@ public:
     }
   }
 
-  void operator() (Init* i) {
-    InvokeDagInitHandler(this, i);
+  void operator() (Init* I) {
+    InvokeDagInitHandler(this, I);
   }
 
 private:
@@ -789,23 +812,23 @@ private:
   /// Functions that extract information about tool properties from
   /// DAG representation.
 
-  void onActions (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    Init* Case = d->getArg(0);
+  void onActions (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    Init* Case = d.getArg(0);
     if (typeid(*Case) != typeid(DagInit) ||
-        GetOperatorName(static_cast<DagInit*>(Case)) != "case")
+        GetOperatorName(static_cast<DagInit&>(*Case)) != "case")
       throw "The argument to (actions) should be a 'case' construct!";
     toolDesc_.Actions = Case;
   }
 
-  void onCmdLine (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    toolDesc_.CmdLine = d->getArg(0);
+  void onCmdLine (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    toolDesc_.CmdLine = d.getArg(0);
   }
 
-  void onInLanguage (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    Init* arg = d->getArg(0);
+  void onInLanguage (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    Init* arg = d.getArg(0);
 
     // Find out the argument's type.
     if (typeid(*arg) == typeid(StringInit)) {
@@ -830,23 +853,23 @@ private:
     }
   }
 
-  void onJoin (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onJoin (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     toolDesc_.setJoin();
   }
 
-  void onOutLanguage (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    toolDesc_.OutLanguage = InitPtrToString(d->getArg(0));
+  void onOutLanguage (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    toolDesc_.OutLanguage = InitPtrToString(d.getArg(0));
   }
 
-  void onOutputSuffix (const DagInit* d) {
-    checkNumberOfArguments(d, 1);
-    toolDesc_.OutputSuffix = InitPtrToString(d->getArg(0));
+  void onOutputSuffix (const DagInit& d) {
+    CheckNumberOfArguments(d, 1);
+    toolDesc_.OutputSuffix = InitPtrToString(d.getArg(0));
   }
 
-  void onSink (const DagInit* d) {
-    checkNumberOfArguments(d, 0);
+  void onSink (const DagInit& d) {
+    CheckNumberOfArguments(d, 0);
     toolDesc_.setSink();
   }
 
@@ -1033,12 +1056,12 @@ void WalkCase(const Init* Case, F1 TestCallback, F2 StatementCallback,
         throw "Case construct handler: no corresponding action "
           "found for the test " + Test.getAsString() + '!';
 
-      TestCallback(&Test, IndentLevel, (i == 1));
+      TestCallback(Test, IndentLevel, (i == 1));
     }
     else
     {
       if (dynamic_cast<DagInit*>(arg)
-          && GetOperatorName(static_cast<DagInit*>(arg)) == "case") {
+          && GetOperatorName(static_cast<DagInit&>(*arg)) == "case") {
         // Nested 'case'.
         WalkCase(arg, TestCallback, StatementCallback, IndentLevel + Indent1);
       }
@@ -1066,7 +1089,7 @@ class ExtractOptionNames {
         ActionName == "switch_on" || ActionName == "parameter_equals" ||
         ActionName == "element_in_list" || ActionName == "not_empty" ||
         ActionName == "empty") {
-      checkNumberOfArguments(&Stmt, 1);
+      CheckNumberOfArguments(Stmt, 1);
       const std::string& Name = InitPtrToString(Stmt.getArg(0));
       OptionNames_.insert(Name);
     }
@@ -1093,8 +1116,8 @@ public:
     }
   }
 
-  void operator()(const DagInit* Test, unsigned, bool) {
-    this->operator()(Test);
+  void operator()(const DagInit& Test, unsigned, bool) {
+    this->operator()(&Test);
   }
   void operator()(const Init* Statement, unsigned) {
     this->operator()(Statement);
@@ -1125,10 +1148,10 @@ void CheckForSuperfluousOptions (const RecordVector& Edges,
   for (RecordVector::const_iterator B = Edges.begin(), E = Edges.end();
        B != E; ++B) {
     const Record* Edge = *B;
-    DagInit* Weight = Edge->getValueAsDag("weight");
+    DagInit& Weight = *Edge->getValueAsDag("weight");
 
-    if (!isDagEmpty(Weight))
-      WalkCase(Weight, ExtractOptionNames(nonSuperfluousOptions), Id());
+    if (!IsDagEmpty(Weight))
+      WalkCase(&Weight, ExtractOptionNames(nonSuperfluousOptions), Id());
   }
 
   // Check that all options in OptDescs belong to the set of
@@ -1284,7 +1307,7 @@ bool EmitCaseTest1Arg(const std::string& TestName,
                       const DagInit& d,
                       const OptionDescriptions& OptDescs,
                       raw_ostream& O) {
-  checkNumberOfArguments(&d, 1);
+  CheckNumberOfArguments(d, 1);
   if (typeid(*d.getArg(0)) == typeid(ListInit))
     return EmitCaseTest1ArgList(TestName, d, OptDescs, O);
   else
@@ -1297,7 +1320,7 @@ bool EmitCaseTest2Args(const std::string& TestName,
                        unsigned IndentLevel,
                        const OptionDescriptions& OptDescs,
                        raw_ostream& O) {
-  checkNumberOfArguments(&d, 2);
+  CheckNumberOfArguments(d, 2);
   const std::string& OptName = InitPtrToString(d.getArg(0));
   const std::string& OptArg = InitPtrToString(d.getArg(1));
 
@@ -1348,7 +1371,7 @@ void EmitLogicalOperationTest(const DagInit& d, const char* LogicOp,
 void EmitLogicalNot(const DagInit& d, unsigned IndentLevel,
                     const OptionDescriptions& OptDescs, raw_ostream& O)
 {
-  checkNumberOfArguments(&d, 1);
+  CheckNumberOfArguments(d, 1);
   const DagInit& InnerTest = InitPtrToDag(d.getArg(0));
   O << "! (";
   EmitCaseTest(InnerTest, IndentLevel, OptDescs, O);
@@ -1390,7 +1413,7 @@ public:
     : EmitElseIf_(EmitElseIf), OptDescs_(OptDescs), O_(O)
   {}
 
-  void operator()(const DagInit* Test, unsigned IndentLevel, bool FirstTest)
+  void operator()(const DagInit& Test, unsigned IndentLevel, bool FirstTest)
   {
     if (GetOperatorName(Test) == "default") {
       O_.indent(IndentLevel) << "else {\n";
@@ -1398,7 +1421,7 @@ public:
     else {
       O_.indent(IndentLevel)
         << ((!FirstTest && EmitElseIf_) ? "else if (" : "if (");
-      EmitCaseTest(*Test, IndentLevel, OptDescs_, O_);
+      EmitCaseTest(Test, IndentLevel, OptDescs_, O_);
       O_ << ") {\n";
     }
   }
@@ -1419,7 +1442,7 @@ public:
 
     // Ignore nested 'case' DAG.
     if (!(dynamic_cast<const DagInit*>(Statement) &&
-          GetOperatorName(static_cast<const DagInit*>(Statement)) == "case")) {
+          GetOperatorName(static_cast<const DagInit&>(*Statement)) == "case")) {
       if (typeid(*Statement) == typeid(ListInit)) {
         const ListInit& DagList = *static_cast<const ListInit*>(Statement);
         for (ListInit::const_iterator B = DagList.begin(), E = DagList.end();
@@ -1452,10 +1475,10 @@ void EmitCaseConstructHandler(const Init* Case, unsigned IndentLevel,
            EmitCaseStatementCallback<F>(Callback, O), IndentLevel);
 }
 
-/// TokenizeCmdline - converts from
+/// TokenizeCmdLine - converts from
 /// "$CALL(HookName, 'Arg1', 'Arg2')/path -arg1 -arg2" to
 /// ["$CALL(", "HookName", "Arg1", "Arg2", ")/path", "-arg1", "-arg2"].
-void TokenizeCmdline(const std::string& CmdLine, StrVector& Out) {
+void TokenizeCmdLine(const std::string& CmdLine, StrVector& Out) {
   const char* Delimiters = " \t\n\v\f\r";
   enum TokenizerState
   { Normal, SpecialCommand, InsideSpecialCommand, InsideQuotationMarks }
@@ -1477,7 +1500,7 @@ void TokenizeCmdline(const std::string& CmdLine, StrVector& Out) {
         cur_st = SpecialCommand;
         break;
       }
-      if (oneOf(Delimiters, cur_ch)) {
+      if (OneOf(Delimiters, cur_ch)) {
         // Skip whitespace
         B = CmdLine.find_first_not_of(Delimiters, B);
         if (B == std::string::npos) {
@@ -1492,7 +1515,7 @@ void TokenizeCmdline(const std::string& CmdLine, StrVector& Out) {
 
 
     case SpecialCommand:
-      if (oneOf(Delimiters, cur_ch)) {
+      if (OneOf(Delimiters, cur_ch)) {
         cur_st = Normal;
         Out.push_back("");
         continue;
@@ -1505,7 +1528,7 @@ void TokenizeCmdline(const std::string& CmdLine, StrVector& Out) {
       break;
 
     case InsideSpecialCommand:
-      if (oneOf(Delimiters, cur_ch)) {
+      if (OneOf(Delimiters, cur_ch)) {
         continue;
       }
       if (cur_ch == '\'') {
@@ -1544,7 +1567,7 @@ SubstituteCall (StrVector::const_iterator Pos,
                 bool IsJoin, raw_ostream& O)
 {
   const char* errorMessage = "Syntax error in $CALL invocation!";
-  checkedIncrement(Pos, End, errorMessage);
+  CheckedIncrement(Pos, End, errorMessage);
   const std::string& CmdName = *Pos;
 
   if (CmdName == ")")
@@ -1556,7 +1579,7 @@ SubstituteCall (StrVector::const_iterator Pos,
 
   bool firstIteration = true;
   while (true) {
-    checkedIncrement(Pos, End, errorMessage);
+    CheckedIncrement(Pos, End, errorMessage);
     const std::string& Arg = *Pos;
     assert(Arg.size() != 0);
 
@@ -1591,7 +1614,7 @@ SubstituteEnv (StrVector::const_iterator Pos,
                StrVector::const_iterator End, raw_ostream& O)
 {
   const char* errorMessage = "Syntax error in $ENV invocation!";
-  checkedIncrement(Pos, End, errorMessage);
+  CheckedIncrement(Pos, End, errorMessage);
   const std::string& EnvName = *Pos;
 
   if (EnvName == ")")
@@ -1601,7 +1624,7 @@ SubstituteEnv (StrVector::const_iterator Pos,
   O << EnvName;
   O << "\"))";
 
-  checkedIncrement(Pos, End, errorMessage);
+  CheckedIncrement(Pos, End, errorMessage);
 
   return Pos;
 }
@@ -1642,7 +1665,7 @@ void EmitCmdLineVecFill(const Init* CmdLine, const std::string& ToolName,
                         bool IsJoin, unsigned IndentLevel,
                         raw_ostream& O) {
   StrVector StrVec;
-  TokenizeCmdline(InitPtrToString(CmdLine), StrVec);
+  TokenizeCmdLine(InitPtrToString(CmdLine), StrVec);
 
   if (StrVec.empty())
     throw "Tool '" + ToolName + "' has empty command line!";
@@ -1786,7 +1809,8 @@ void EmitForwardOptionPropertyHandlingCode (const OptionDescription& D,
 
 /// ActionHandlingCallbackBase - Base class of EmitActionHandlersCallback and
 /// EmitPreprocessOptionsCallback.
-struct ActionHandlingCallbackBase {
+struct ActionHandlingCallbackBase
+{
 
   void onErrorDag(const DagInit& d,
                   unsigned IndentLevel, raw_ostream& O) const
@@ -1801,7 +1825,7 @@ struct ActionHandlingCallbackBase {
   void onWarningDag(const DagInit& d,
                     unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&d, 1);
+    CheckNumberOfArguments(d, 1);
     O.indent(IndentLevel) << "llvm::errs() << \""
                           << InitPtrToString(d.getArg(0)) << "\";\n";
   }
@@ -1810,17 +1834,20 @@ struct ActionHandlingCallbackBase {
 
 /// EmitActionHandlersCallback - Emit code that handles actions. Used by
 /// EmitGenerateActionMethod() as an argument to EmitCaseConstructHandler().
+
 class EmitActionHandlersCallback;
+
 typedef void (EmitActionHandlersCallback::* EmitActionHandlersCallbackHandler)
 (const DagInit&, unsigned, raw_ostream&) const;
 
-class EmitActionHandlersCallback
-: public ActionHandlingCallbackBase,
+class EmitActionHandlersCallback :
+  public ActionHandlingCallbackBase,
   public HandlerTable<EmitActionHandlersCallbackHandler>
 {
-  const OptionDescriptions& OptDescs;
   typedef EmitActionHandlersCallbackHandler Handler;
 
+  const OptionDescriptions& OptDescs;
+
   /// EmitHookInvocation - Common code for hook invocation from actions. Used by
   /// onAppendCmd and onOutputSuffix.
   void EmitHookInvocation(const std::string& Str,
@@ -1828,7 +1855,7 @@ class EmitActionHandlersCallback
                           unsigned IndentLevel, raw_ostream& O) const
   {
     StrVector Out;
-    TokenizeCmdline(Str, Out);
+    TokenizeCmdLine(Str, Out);
 
     for (StrVector::const_iterator B = Out.begin(), E = Out.end();
          B != E; ++B) {
@@ -1848,7 +1875,7 @@ class EmitActionHandlersCallback
   void onAppendCmd (const DagInit& Dag,
                     unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 1);
+    CheckNumberOfArguments(Dag, 1);
     this->EmitHookInvocation(InitPtrToString(Dag.getArg(0)),
                              "vec.push_back(", ");\n", IndentLevel, O);
   }
@@ -1856,7 +1883,7 @@ class EmitActionHandlersCallback
   void onForward (const DagInit& Dag,
                   unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 1);
+    CheckNumberOfArguments(Dag, 1);
     const std::string& Name = InitPtrToString(Dag.getArg(0));
     EmitForwardOptionPropertyHandlingCode(OptDescs.FindOption(Name),
                                           IndentLevel, "", O);
@@ -1865,7 +1892,7 @@ class EmitActionHandlersCallback
   void onForwardAs (const DagInit& Dag,
                     unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 2);
+    CheckNumberOfArguments(Dag, 2);
     const std::string& Name = InitPtrToString(Dag.getArg(0));
     const std::string& NewName = InitPtrToString(Dag.getArg(1));
     EmitForwardOptionPropertyHandlingCode(OptDescs.FindOption(Name),
@@ -1875,7 +1902,7 @@ class EmitActionHandlersCallback
   void onForwardValue (const DagInit& Dag,
                        unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 1);
+    CheckNumberOfArguments(Dag, 1);
     const std::string& Name = InitPtrToString(Dag.getArg(0));
     const OptionDescription& D = OptDescs.FindListOrParameter(Name);
 
@@ -1893,7 +1920,7 @@ class EmitActionHandlersCallback
   void onForwardTransformedValue (const DagInit& Dag,
                                   unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 2);
+    CheckNumberOfArguments(Dag, 2);
     const std::string& Name = InitPtrToString(Dag.getArg(0));
     const std::string& Hook = InitPtrToString(Dag.getArg(1));
     const OptionDescription& D = OptDescs.FindListOrParameter(Name);
@@ -1906,7 +1933,7 @@ class EmitActionHandlersCallback
   void onOutputSuffix (const DagInit& Dag,
                        unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 1);
+    CheckNumberOfArguments(Dag, 1);
     this->EmitHookInvocation(InitPtrToString(Dag.getArg(0)),
                              "output_suffix = ", ";\n", IndentLevel, O);
   }
@@ -1949,20 +1976,16 @@ class EmitActionHandlersCallback
     }
   }
 
-  void operator()(const Init* Statement,
+  void operator()(const Init* I,
                   unsigned IndentLevel, raw_ostream& O) const
   {
-    const DagInit& Dag = InitPtrToDag(Statement);
-    const std::string& ActionName = GetOperatorName(Dag);
-    Handler h = GetHandler(ActionName);
-
-    ((this)->*(h))(Dag, IndentLevel, O);
+    InvokeDagInitHandler(this, I, IndentLevel, O);
   }
 };
 
 bool IsOutFileIndexCheckRequiredStr (const Init* CmdLine) {
   StrVector StrVec;
-  TokenizeCmdline(InitPtrToString(CmdLine), StrVec);
+  TokenizeCmdLine(InitPtrToString(CmdLine), StrVec);
 
   for (StrVector::const_iterator I = StrVec.begin(), E = StrVec.end();
        I != E; ++I) {
@@ -2280,11 +2303,46 @@ void EmitOptionDefinitions (const OptionDescriptions& descs,
 
 /// EmitPreprocessOptionsCallback - Helper function passed to
 /// EmitCaseConstructHandler() by EmitPreprocessOptions().
-class EmitPreprocessOptionsCallback : ActionHandlingCallbackBase {
+
+class EmitPreprocessOptionsCallback;
+
+typedef void
+(EmitPreprocessOptionsCallback::* EmitPreprocessOptionsCallbackHandler)
+(const DagInit&, unsigned, raw_ostream&) const;
+
+class EmitPreprocessOptionsCallback :
+  public ActionHandlingCallbackBase,
+  public HandlerTable<EmitPreprocessOptionsCallbackHandler>
+{
+  typedef EmitPreprocessOptionsCallbackHandler Handler;
+  typedef void
+  (EmitPreprocessOptionsCallback::* HandlerImpl)
+  (const Init*, unsigned, raw_ostream&) const;
+
   const OptionDescriptions& OptDescs_;
 
-  void onUnsetOption(Init* i, unsigned IndentLevel, raw_ostream& O) {
-    const std::string& OptName = InitPtrToString(i);
+  void onListOrDag(const DagInit& d, HandlerImpl h,
+                   unsigned IndentLevel, raw_ostream& O) const
+  {
+    CheckNumberOfArguments(d, 1);
+    const Init* I = d.getArg(0);
+
+    // If I is a list, apply h to each element.
+    if (typeid(*I) == typeid(ListInit)) {
+      const ListInit& L = *static_cast<const ListInit*>(I);
+      for (ListInit::const_iterator B = L.begin(), E = L.end(); B != E; ++B)
+        ((this)->*(h))(*B, IndentLevel, O);
+    }
+    // Otherwise, apply h to I.
+    else {
+      ((this)->*(h))(I, IndentLevel, O);
+    }
+  }
+
+  void onUnsetOptionImpl(const Init* I,
+                         unsigned IndentLevel, raw_ostream& O) const
+  {
+    const std::string& OptName = InitPtrToString(I);
     const OptionDescription& OptDesc = OptDescs_.FindOption(OptName);
 
     if (OptDesc.isSwitch()) {
@@ -2301,45 +2359,93 @@ class EmitPreprocessOptionsCallback : ActionHandlingCallbackBase {
     }
   }
 
-  void processDag(const Init* I, unsigned IndentLevel, raw_ostream& O)
+  void onUnsetOption(const DagInit& d,
+                     unsigned IndentLevel, raw_ostream& O) const
   {
-    const DagInit& d = InitPtrToDag(I);
-    const std::string& OpName = GetOperatorName(d);
+    this->onListOrDag(d, &EmitPreprocessOptionsCallback::onUnsetOptionImpl,
+                      IndentLevel, O);
+  }
+
+  void onSetOptionImpl(const DagInit& d,
+                       unsigned IndentLevel, raw_ostream& O) const {
+    CheckNumberOfArguments(d, 2);
+    const std::string& OptName = InitPtrToString(d.getArg(0));
+    const Init* Value = d.getArg(1);
+    const OptionDescription& OptDesc = OptDescs_.FindOption(OptName);
+
+    if (OptDesc.isList()) {
+      const ListInit& List = InitPtrToList(Value);
 
-    if (OpName == "warning") {
-      this->onWarningDag(d, IndentLevel, O);
+      O.indent(IndentLevel) << OptDesc.GenVariableName() << ".clear();\n";
+      for (ListInit::const_iterator B = List.begin(), E = List.end();
+           B != E; ++B) {
+        O.indent(IndentLevel) << OptDesc.GenVariableName() << ".push_back(\""
+                              << InitPtrToString(*B) << "\");\n";
+      }
     }
-    else if (OpName == "error") {
-      this->onWarningDag(d, IndentLevel, O);
+    else if (OptDesc.isSwitch()) {
+      CheckBooleanConstant(Value);
+      O.indent(IndentLevel) << OptDesc.GenVariableName()
+                            << " = " << Value->getAsString() << ";\n";
     }
-    else if (OpName == "unset_option") {
-      checkNumberOfArguments(&d, 1);
-      Init* I = d.getArg(0);
-      if (typeid(*I) == typeid(ListInit)) {
-        const ListInit& DagList = *static_cast<const ListInit*>(I);
-        for (ListInit::const_iterator B = DagList.begin(), E = DagList.end();
-             B != E; ++B)
-          this->onUnsetOption(*B, IndentLevel, O);
-      }
-      else {
-        this->onUnsetOption(I, IndentLevel, O);
-      }
+    else if (OptDesc.isParameter()) {
+      const std::string& Str = InitPtrToString(Value);
+      O.indent(IndentLevel) << OptDesc.GenVariableName()
+                            << " = \"" << Str << "\";\n";
     }
     else {
-      throw "Unknown operator in the option preprocessor: '" + OpName + "'!"
-        "\nOnly 'warning', 'error' and 'unset_option' are allowed.";
+      throw "Can't apply 'set_option' to alias option -" + OptName + " !";
     }
   }
 
-public:
+  void onSetSwitch(const Init* I,
+                   unsigned IndentLevel, raw_ostream& O) const {
+    const std::string& OptName = InitPtrToString(I);
+    const OptionDescription& OptDesc = OptDescs_.FindOption(OptName);
 
-  void operator()(const Init* I, unsigned IndentLevel, raw_ostream& O) {
-      this->processDag(I, IndentLevel, O);
+    if (OptDesc.isSwitch())
+      O.indent(IndentLevel) << OptDesc.GenVariableName() << " = true;\n";
+    else
+      throw "set_option: -" + OptName + " is not a switch option!";
   }
 
+  void onSetOption(const DagInit& d,
+                   unsigned IndentLevel, raw_ostream& O) const
+  {
+    CheckNumberOfArguments(d, 1);
+
+    // Two arguments: (set_option "parameter", VALUE), where VALUE can be a
+    // boolean, a string or a string list.
+    if (d.getNumArgs() > 1)
+      this->onSetOptionImpl(d, IndentLevel, O);
+    // One argument: (set_option "switch")
+    // or (set_option ["switch1", "switch2", ...])
+    else
+      this->onListOrDag(d, &EmitPreprocessOptionsCallback::onSetSwitch,
+                        IndentLevel, O);
+  }
+
+public:
+
   EmitPreprocessOptionsCallback(const OptionDescriptions& OptDescs)
   : OptDescs_(OptDescs)
-  {}
+  {
+    if (!staticMembersInitialized_) {
+      AddHandler("error", &EmitPreprocessOptionsCallback::onErrorDag);
+      AddHandler("warning", &EmitPreprocessOptionsCallback::onWarningDag);
+      AddHandler("unset_option", &EmitPreprocessOptionsCallback::onUnsetOption);
+      AddHandler("set_option", &EmitPreprocessOptionsCallback::onSetOption);
+
+      staticMembersInitialized_ = true;
+    }
+  }
+
+  void operator()(const Init* I,
+                  unsigned IndentLevel, raw_ostream& O) const
+  {
+    InvokeDagInitHandler(this, I, IndentLevel, O);
+  }
+
 };
 
 /// EmitPreprocessOptions - Emit the PreprocessOptionsLocal() function.
@@ -2407,7 +2513,7 @@ void IncDecWeight (const Init* i, unsigned IndentLevel,
     O.indent(IndentLevel) << "ret -= ";
   }
   else if (OpName == "error") {
-    checkNumberOfArguments(&d, 1);
+    CheckNumberOfArguments(d, 1);
     O.indent(IndentLevel) << "throw std::runtime_error(\""
                           << InitPtrToString(d.getArg(0))
                           << "\");\n";
@@ -2445,7 +2551,7 @@ void EmitEdgeClass (unsigned N, const std::string& Target,
   EmitCaseConstructHandler(Case, Indent2, IncDecWeight, false, OptDescs, O);
 
   O.indent(Indent2) << "return ret;\n";
-  O.indent(Indent1) << "};\n\n};\n\n";
+  O.indent(Indent1) << "}\n\n};\n\n";
 }
 
 /// EmitEdgeClasses - Emit Edge* classes that represent graph edges.
@@ -2457,10 +2563,10 @@ void EmitEdgeClasses (const RecordVector& EdgeVector,
          E = EdgeVector.end(); B != E; ++B) {
     const Record* Edge = *B;
     const std::string& NodeB = Edge->getValueAsString("b");
-    DagInit* Weight = Edge->getValueAsDag("weight");
+    DagInit& Weight = *Edge->getValueAsDag("weight");
 
-    if (!isDagEmpty(Weight))
-      EmitEdgeClass(i, NodeB, Weight, OptDescs, O);
+    if (!IsDagEmpty(Weight))
+      EmitEdgeClass(i, NodeB, &Weight, OptDescs, O);
     ++i;
   }
 }
@@ -2487,11 +2593,11 @@ void EmitPopulateCompilationGraph (const RecordVector& EdgeVector,
     const Record* Edge = *B;
     const std::string& NodeA = Edge->getValueAsString("a");
     const std::string& NodeB = Edge->getValueAsString("b");
-    DagInit* Weight = Edge->getValueAsDag("weight");
+    DagInit& Weight = *Edge->getValueAsDag("weight");
 
     O.indent(Indent1) << "G.insertEdge(\"" << NodeA << "\", ";
 
-    if (isDagEmpty(Weight))
+    if (IsDagEmpty(Weight))
       O << "new SimpleEdge(\"" << NodeB << "\")";
     else
       O << "new Edge" << i << "()";
@@ -2540,7 +2646,7 @@ public:
     const std::string& Name = GetOperatorName(Dag);
 
     if (Name == "forward_transformed_value") {
-      checkNumberOfArguments(Dag, 2);
+      CheckNumberOfArguments(Dag, 2);
       const std::string& OptName = InitPtrToString(Dag.getArg(0));
       const std::string& HookName = InitPtrToString(Dag.getArg(1));
       const OptionDescription& D = OptDescs_.FindOption(OptName);
@@ -2549,14 +2655,14 @@ public:
                                       : HookInfo::ArgHook);
     }
     else if (Name == "append_cmd" || Name == "output_suffix") {
-      checkNumberOfArguments(Dag, 1);
+      CheckNumberOfArguments(Dag, 1);
       this->onCmdLine(InitPtrToString(Dag.getArg(0)));
     }
   }
 
   void onCmdLine(const std::string& Cmd) {
     StrVector cmds;
-    TokenizeCmdline(Cmd, cmds);
+    TokenizeCmdLine(Cmd, cmds);
 
     for (StrVector::const_iterator B = cmds.begin(), E = cmds.end();
          B != E; ++B) {
@@ -2564,7 +2670,7 @@ public:
 
       if (cmd == "$CALL") {
         unsigned NumArgs = 0;
-        checkedIncrement(B, E, "Syntax error in $CALL invocation!");
+        CheckedIncrement(B, E, "Syntax error in $CALL invocation!");
         const std::string& HookName = *B;
 
         if (HookName.at(0) == ')')
diff --git a/libclamav/c++/llvm/utils/TableGen/Record.cpp b/libclamav/c++/llvm/utils/TableGen/Record.cpp
index 53f9014..542735e 100644
--- a/libclamav/c++/llvm/utils/TableGen/Record.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/Record.cpp
@@ -945,11 +945,13 @@ Init *TernOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
         std::string Val = RHSs->getValue();
 
         std::string::size_type found;
+        std::string::size_type idx = 0;
         do {
-          found = Val.find(LHSs->getValue());
+          found = Val.find(LHSs->getValue(), idx);
           if (found != std::string::npos) {
             Val.replace(found, LHSs->getValue().size(), MHSs->getValue());
           }
+          idx = found + MHSs->getValue().size();
         } while (found != std::string::npos);
 
         return new StringInit(Val);
diff --git a/libclamav/c++/llvm/utils/TableGen/X86DisassemblerShared.h b/libclamav/c++/llvm/utils/TableGen/X86DisassemblerShared.h
new file mode 100644
index 0000000..0417e9d
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/X86DisassemblerShared.h
@@ -0,0 +1,38 @@
+//===- X86DisassemblerShared.h - Emitter shared header ----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef X86DISASSEMBLERSHARED_H
+#define X86DISASSEMBLERSHARED_H
+
+#include <string>
+#include <string.h>
+
+#define INSTRUCTION_SPECIFIER_FIELDS       \
+  bool                    filtered;        \
+  InstructionContext      insnContext;     \
+  std::string             name;            \
+                                           \
+  InstructionSpecifier() {                 \
+    filtered = false;                      \
+    insnContext = IC;                      \
+    name = "";                             \
+    modifierType = MODIFIER_NONE;          \
+    modifierBase = 0;                      \
+    memset(operands, 0, sizeof(operands)); \
+  }
+
+#define INSTRUCTION_IDS           \
+  InstrUID   instructionIDs[256];
+
+#include "../../lib/Target/X86/Disassembler/X86DisassemblerDecoderCommon.h"
+
+#undef INSTRUCTION_SPECIFIER_FIELDS
+#undef INSTRUCTION_IDS
+
+#endif
diff --git a/libclamav/c++/llvm/utils/TableGen/X86DisassemblerTables.cpp b/libclamav/c++/llvm/utils/TableGen/X86DisassemblerTables.cpp
new file mode 100644
index 0000000..be07031
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/X86DisassemblerTables.cpp
@@ -0,0 +1,603 @@
+//===- X86DisassemblerTables.cpp - Disassembler tables ----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file is part of the X86 Disassembler Emitter.
+// It contains the implementation of the disassembler tables.
+// Documentation for the disassembler emitter in general can be found in
+//  X86DisassemblerEmitter.h.
+//
+//===----------------------------------------------------------------------===//
+
+#include "X86DisassemblerShared.h"
+#include "X86DisassemblerTables.h"
+
+#include "TableGenBackend.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/Format.h"
+
+using namespace llvm;
+using namespace X86Disassembler;
+  
+/// inheritsFrom - Indicates whether all instructions in one class also belong
+///   to another class.
+///
+/// @param child  - The class that may be the subset
+/// @param parent - The class that may be the superset
+/// @return       - True if child is a subset of parent, false otherwise.
+static inline bool inheritsFrom(InstructionContext child,
+                                InstructionContext parent) {
+  if (child == parent)
+    return true;
+  
+  switch (parent) {
+  case IC:
+    return true;
+  case IC_64BIT:
+    return(inheritsFrom(child, IC_64BIT_REXW)   ||
+           inheritsFrom(child, IC_64BIT_OPSIZE) ||
+           inheritsFrom(child, IC_64BIT_XD)     ||
+           inheritsFrom(child, IC_64BIT_XS));
+  case IC_OPSIZE:
+    return(inheritsFrom(child, IC_64BIT_OPSIZE));
+  case IC_XD:
+    return(inheritsFrom(child, IC_64BIT_XD));
+  case IC_XS:
+    return(inheritsFrom(child, IC_64BIT_XS));
+  case IC_64BIT_REXW:
+    return(inheritsFrom(child, IC_64BIT_REXW_XS) ||
+           inheritsFrom(child, IC_64BIT_REXW_XD) ||
+           inheritsFrom(child, IC_64BIT_REXW_OPSIZE));
+  case IC_64BIT_OPSIZE:
+    return(inheritsFrom(child, IC_64BIT_REXW_OPSIZE));
+  case IC_64BIT_XD:
+    return(inheritsFrom(child, IC_64BIT_REXW_XD));
+  case IC_64BIT_XS:
+    return(inheritsFrom(child, IC_64BIT_REXW_XS));
+  case IC_64BIT_REXW_XD:
+    return false;
+  case IC_64BIT_REXW_XS:
+    return false;
+  case IC_64BIT_REXW_OPSIZE:
+    return false;
+  default:
+    return false;
+  }
+}
+
+/// outranks - Indicates which of two applicable instruction classes should be
+///   preferred when performing decode.  This imposes a total ordering
+///   (ties are resolved toward "lower").
+///
+/// @param upper  - The class that may be preferable
+/// @param lower  - The class that may be less preferable
+/// @return       - True if upper is to be preferred, false otherwise.
+static inline bool outranks(InstructionContext upper, 
+                            InstructionContext lower) {
+  assert(upper < IC_max);
+  assert(lower < IC_max);
+  
+#define ENUM_ENTRY(n, r, d) r,
+  static int ranks[IC_max] = {
+    INSTRUCTION_CONTEXTS
+  };
+#undef ENUM_ENTRY
+  
+  return (ranks[upper] > ranks[lower]);
+}
+
+/// stringForContext - Returns a string containing the name of a particular
+///   InstructionContext, usually for diagnostic purposes.
+///
+/// @param insnContext  - The instruction class to transform to a string.
+/// @return           - A statically-allocated string constant that contains the
+///                     name of the instruction class.
+static inline const char* stringForContext(InstructionContext insnContext) {
+  switch (insnContext) {
+  default:
+    llvm_unreachable("Unhandled instruction class");
+#define ENUM_ENTRY(n, r, d)   case n: return #n; break;
+  INSTRUCTION_CONTEXTS
+#undef ENUM_ENTRY
+  }
+
+  return 0;
+}
+
+/// stringForOperandType - Like stringForContext, but for OperandTypes.
+static inline const char* stringForOperandType(OperandType type) {
+  switch (type) {
+  default:
+    llvm_unreachable("Unhandled type");
+#define ENUM_ENTRY(i, d) case i: return #i;
+  TYPES
+#undef ENUM_ENTRY
+  }
+}
+
+/// stringForOperandEncoding - like stringForContext, but for
+///   OperandEncodings.
+static inline const char* stringForOperandEncoding(OperandEncoding encoding) {
+  switch (encoding) {
+  default:
+    llvm_unreachable("Unhandled encoding");
+#define ENUM_ENTRY(i, d) case i: return #i;
+  ENCODINGS
+#undef ENUM_ENTRY
+  }
+}
+
+void DisassemblerTables::emitOneID(raw_ostream &o,
+                                   uint32_t &i,
+                                   InstrUID id,
+                                   bool addComma) const {
+  if (id)
+    o.indent(i * 2) << format("0x%hx", id);
+  else
+    o.indent(i * 2) << 0;
+  
+  if (addComma)
+    o << ", ";
+  else
+    o << "  ";
+  
+  o << "/* ";
+  o << InstructionSpecifiers[id].name;
+  o << " */";
+  
+  o << "\n";
+}
+
+/// emitEmptyTable - Emits the modRMEmptyTable, which is used as an ID table by
+///   all ModR/M decisions for instructions that are invalid for all possible
+///   ModR/M byte values.
+///
+/// @param o        - The output stream on which to emit the table.
+/// @param i        - The indentation level for that output stream.
+static void emitEmptyTable(raw_ostream &o, uint32_t &i)
+{
+  o.indent(i * 2) << "InstrUID modRMEmptyTable[1] = { 0 };" << "\n";
+  o << "\n";
+}
+
+/// getDecisionType - Determines whether a ModRM decision with 255 entries can
+///   be compacted by eliminating redundant information.
+///
+/// @param decision - The decision to be compacted.
+/// @return         - The most compact representation available for the decision.
+static ModRMDecisionType getDecisionType(ModRMDecision &decision)
+{
+  bool satisfiesOneEntry = true;
+  bool satisfiesSplitRM = true;
+  
+  uint16_t index;
+  
+  for (index = 0; index < 256; ++index) {
+    if (decision.instructionIDs[index] != decision.instructionIDs[0])
+      satisfiesOneEntry = false;
+    
+    if (((index & 0xc0) == 0xc0) &&
+       (decision.instructionIDs[index] != decision.instructionIDs[0xc0]))
+      satisfiesSplitRM = false;
+    
+    if (((index & 0xc0) != 0xc0) &&
+       (decision.instructionIDs[index] != decision.instructionIDs[0x00]))
+      satisfiesSplitRM = false;
+  }
+  
+  if (satisfiesOneEntry)
+    return MODRM_ONEENTRY;
+  
+  if (satisfiesSplitRM)
+    return MODRM_SPLITRM;
+  
+  return MODRM_FULL;
+}
+
+/// stringForDecisionType - Returns a statically-allocated string corresponding
+///   to a particular decision type.
+///
+/// @param dt - The decision type.
+/// @return   - A pointer to the statically-allocated string (e.g., 
+///             "MODRM_ONEENTRY" for MODRM_ONEENTRY).
+static const char* stringForDecisionType(ModRMDecisionType dt)
+{
+#define ENUM_ENTRY(n) case n: return #n;
+  switch (dt) {
+    default:
+      llvm_unreachable("Unknown decision type");  
+    MODRMTYPES
+  };  
+#undef ENUM_ENTRY
+}
+  
+/// stringForModifierType - Returns a statically-allocated string corresponding
+///   to an opcode modifier type.
+///
+/// @param mt - The modifier type.
+/// @return   - A pointer to the statically-allocated string (e.g.,
+///             "MODIFIER_NONE" for MODIFIER_NONE).
+static const char* stringForModifierType(ModifierType mt)
+{
+#define ENUM_ENTRY(n) case n: return #n;
+  switch(mt) {
+    default:
+      llvm_unreachable("Unknown modifier type");
+    MODIFIER_TYPES
+  };
+#undef ENUM_ENTRY
+}
+  
+DisassemblerTables::DisassemblerTables() {
+  unsigned i;
+  
+  for (i = 0; i < 4; i++) {
+    Tables[i] = new ContextDecision;
+    memset(Tables[i], 0, sizeof(ContextDecision));
+  }
+  
+  HasConflicts = false;
+}
+  
+DisassemblerTables::~DisassemblerTables() {
+  unsigned i;
+  
+  for (i = 0; i < 4; i++)
+    delete Tables[i];
+}
+  
+void DisassemblerTables::emitModRMDecision(raw_ostream &o1,
+                                           raw_ostream &o2,
+                                           uint32_t &i1,
+                                           uint32_t &i2,
+                                           ModRMDecision &decision)
+  const {
+  static uint64_t sTableNumber = 0;
+  uint64_t thisTableNumber = sTableNumber;
+  ModRMDecisionType dt = getDecisionType(decision);
+  uint16_t index;
+  
+  if (dt == MODRM_ONEENTRY && decision.instructionIDs[0] == 0)
+  {
+    o2.indent(i2) << "{ /* ModRMDecision */" << "\n";
+    i2++;
+    
+    o2.indent(i2) << stringForDecisionType(dt) << "," << "\n";
+    o2.indent(i2) << "modRMEmptyTable";
+    
+    i2--;
+    o2.indent(i2) << "}";
+    return;
+  }
+    
+  o1.indent(i1) << "InstrUID modRMTable" << thisTableNumber;
+    
+  switch (dt) {
+    default:
+      llvm_unreachable("Unknown decision type");
+    case MODRM_ONEENTRY:
+      o1 << "[1]";
+      break;
+    case MODRM_SPLITRM:
+      o1 << "[2]";
+      break;
+    case MODRM_FULL:
+      o1 << "[256]";
+      break;      
+  }
+
+  o1 << " = {" << "\n";
+  i1++;
+    
+  switch (dt) {
+    default:
+      llvm_unreachable("Unknown decision type");
+    case MODRM_ONEENTRY:
+      emitOneID(o1, i1, decision.instructionIDs[0], false);
+      break;
+    case MODRM_SPLITRM:
+      emitOneID(o1, i1, decision.instructionIDs[0x00], true); // mod = 0b00
+      emitOneID(o1, i1, decision.instructionIDs[0xc0], false); // mod = 0b11
+      break;
+    case MODRM_FULL:
+      for (index = 0; index < 256; ++index)
+        emitOneID(o1, i1, decision.instructionIDs[index], index < 255);
+      break;
+  }
+    
+  i1--;
+  o1.indent(i1) << "};" << "\n";
+  o1 << "\n";
+    
+  o2.indent(i2) << "{ /* struct ModRMDecision */" << "\n";
+  i2++;
+    
+  o2.indent(i2) << stringForDecisionType(dt) << "," << "\n";
+  o2.indent(i2) << "modRMTable" << sTableNumber << "\n";
+    
+  i2--;
+  o2.indent(i2) << "}";
+    
+  ++sTableNumber;
+}
+
+void DisassemblerTables::emitOpcodeDecision(
+  raw_ostream &o1,
+  raw_ostream &o2,
+  uint32_t &i1,
+  uint32_t &i2,
+  OpcodeDecision &decision) const {
+  uint16_t index;
+
+  o2.indent(i2) << "{ /* struct OpcodeDecision */" << "\n";
+  i2++;
+  o2.indent(i2) << "{" << "\n";
+  i2++;
+
+  for (index = 0; index < 256; ++index) {
+    o2.indent(i2);
+
+    o2 << "/* 0x" << format("%02hhx", index) << " */" << "\n";
+
+    emitModRMDecision(o1, o2, i1, i2, decision.modRMDecisions[index]);
+
+    if (index <  255)
+      o2 << ",";
+
+    o2 << "\n";
+  }
+
+  i2--;
+  o2.indent(i2) << "}" << "\n";
+  i2--;
+  o2.indent(i2) << "}" << "\n";
+}
+
+void DisassemblerTables::emitContextDecision(
+  raw_ostream &o1,
+  raw_ostream &o2,
+  uint32_t &i1,
+  uint32_t &i2,
+  ContextDecision &decision,
+  const char* name) const {
+  o2.indent(i2) << "struct ContextDecision " << name << " = {" << "\n";
+  i2++;
+  o2.indent(i2) << "{ /* opcodeDecisions */" << "\n";
+  i2++;
+
+  unsigned index;
+
+  for (index = 0; index < IC_max; ++index) {
+    o2.indent(i2) << "/* ";
+    o2 << stringForContext((InstructionContext)index);
+    o2 << " */";
+    o2 << "\n";
+
+    emitOpcodeDecision(o1, o2, i1, i2, decision.opcodeDecisions[index]);
+
+    if (index + 1 < IC_max)
+      o2 << ", ";
+  }
+
+  i2--;
+  o2.indent(i2) << "}" << "\n";
+  i2--;
+  o2.indent(i2) << "};" << "\n";
+}
+
+void DisassemblerTables::emitInstructionInfo(raw_ostream &o, uint32_t &i) 
+  const {
+  o.indent(i * 2) << "struct InstructionSpecifier ";
+  o << INSTRUCTIONS_STR << "[";
+  o << InstructionSpecifiers.size();
+  o << "] = {" << "\n";
+  
+  i++;
+
+  uint16_t numInstructions = InstructionSpecifiers.size();
+  uint16_t index, operandIndex;
+
+  for (index = 0; index < numInstructions; ++index) {
+    o.indent(i * 2) << "{ /* " << index << " */" << "\n";
+    i++;
+    
+    o.indent(i * 2) << 
+      stringForModifierType(InstructionSpecifiers[index].modifierType);
+    o << "," << "\n";
+    
+    o.indent(i * 2) << "0x";
+    o << format("%02hhx", (uint16_t)InstructionSpecifiers[index].modifierBase);
+    o << "," << "\n";
+
+    o.indent(i * 2) << "{" << "\n";
+    i++;
+
+    for (operandIndex = 0; operandIndex < X86_MAX_OPERANDS; ++operandIndex) {
+      o.indent(i * 2) << "{ ";
+      o << stringForOperandEncoding(InstructionSpecifiers[index]
+                                    .operands[operandIndex]
+                                    .encoding);
+      o << ", ";
+      o << stringForOperandType(InstructionSpecifiers[index]
+                                .operands[operandIndex]
+                                .type);
+      o << " }";
+
+      if (operandIndex < X86_MAX_OPERANDS - 1)
+        o << ",";
+
+      o << "\n";
+    }
+
+    i--;
+    o.indent(i * 2) << "}," << "\n";
+    
+    o.indent(i * 2) << "\"" << InstructionSpecifiers[index].name << "\"";
+    o << "\n";
+
+    i--;
+    o.indent(i * 2) << "}";
+
+    if (index + 1 < numInstructions)
+      o << ",";
+
+    o << "\n";
+  }
+
+  i--;
+  o.indent(i * 2) << "};" << "\n";
+}
+
+void DisassemblerTables::emitContextTable(raw_ostream &o, uint32_t &i) const {
+  uint16_t index;
+
+  o.indent(i * 2) << "InstructionContext ";
+  o << CONTEXTS_STR << "[256] = {" << "\n";
+  i++;
+
+  for (index = 0; index < 256; ++index) {
+    o.indent(i * 2);
+
+    if ((index & ATTR_64BIT) && (index & ATTR_REXW) && (index & ATTR_XS))
+      o << "IC_64BIT_REXW_XS";
+    else if ((index & ATTR_64BIT) && (index & ATTR_REXW) && (index & ATTR_XD))
+      o << "IC_64BIT_REXW_XD";
+    else if ((index & ATTR_64BIT) && (index & ATTR_REXW) && 
+             (index & ATTR_OPSIZE))
+      o << "IC_64BIT_REXW_OPSIZE";
+    else if ((index & ATTR_64BIT) && (index & ATTR_XS))
+      o << "IC_64BIT_XS";
+    else if ((index & ATTR_64BIT) && (index & ATTR_XD))
+      o << "IC_64BIT_XD";
+    else if ((index & ATTR_64BIT) && (index & ATTR_OPSIZE))
+      o << "IC_64BIT_OPSIZE";
+    else if ((index & ATTR_64BIT) && (index & ATTR_REXW))
+      o << "IC_64BIT_REXW";
+    else if ((index & ATTR_64BIT))
+      o << "IC_64BIT";
+    else if (index & ATTR_XS)
+      o << "IC_XS";
+    else if (index & ATTR_XD)
+      o << "IC_XD";
+    else if (index & ATTR_OPSIZE)
+      o << "IC_OPSIZE";
+    else
+      o << "IC";
+
+    if (index < 255)
+      o << ",";
+    else
+      o << " ";
+
+    o << " /* " << index << " */";
+
+    o << "\n";
+  }
+
+  i--;
+  o.indent(i * 2) << "};" << "\n";
+}
+
+void DisassemblerTables::emitContextDecisions(raw_ostream &o1,
+                                            raw_ostream &o2,
+                                            uint32_t &i1,
+                                            uint32_t &i2)
+  const {
+  emitContextDecision(o1, o2, i1, i2, *Tables[0], ONEBYTE_STR);
+  emitContextDecision(o1, o2, i1, i2, *Tables[1], TWOBYTE_STR);
+  emitContextDecision(o1, o2, i1, i2, *Tables[2], THREEBYTE38_STR);
+  emitContextDecision(o1, o2, i1, i2, *Tables[3], THREEBYTE3A_STR);
+}
+
+void DisassemblerTables::emit(raw_ostream &o) const {
+  uint32_t i1 = 0;
+  uint32_t i2 = 0;
+  
+  std::string s1;
+  std::string s2;
+  
+  raw_string_ostream o1(s1);
+  raw_string_ostream o2(s2);
+  
+  emitInstructionInfo(o, i2);
+  o << "\n";
+
+  emitContextTable(o, i2);
+  o << "\n";
+  
+  emitEmptyTable(o1, i1);
+  emitContextDecisions(o1, o2, i1, i2);
+  
+  o << o1.str();
+  o << "\n";
+  o << o2.str();
+  o << "\n";
+  o << "\n";
+}
+
+void DisassemblerTables::setTableFields(ModRMDecision     &decision,
+                                        const ModRMFilter &filter,
+                                        InstrUID          uid,
+                                        uint8_t           opcode) {
+  unsigned index;
+
+  for (index = 0; index < 256; ++index) {
+    if (filter.accepts(index)) {
+      if (decision.instructionIDs[index] == uid)
+        continue;
+
+      if (decision.instructionIDs[index] != 0) {
+        InstructionSpecifier &newInfo =
+          InstructionSpecifiers[uid];
+        InstructionSpecifier &previousInfo =
+          InstructionSpecifiers[decision.instructionIDs[index]];
+        
+        if(newInfo.filtered)
+          continue; // filtered instructions get lowest priority
+        
+        if(previousInfo.name == "NOOP")
+          continue; // special case for XCHG32ar and NOOP
+
+        if (outranks(previousInfo.insnContext, newInfo.insnContext))
+          continue;
+        
+        if (previousInfo.insnContext == newInfo.insnContext &&
+            !previousInfo.filtered) {
+          errs() << "Error: Primary decode conflict: ";
+          errs() << newInfo.name << " would overwrite " << previousInfo.name;
+          errs() << "\n";
+          errs() << "ModRM   " << index << "\n";
+          errs() << "Opcode  " << (uint16_t)opcode << "\n";
+          errs() << "Context " << stringForContext(newInfo.insnContext) << "\n";
+          HasConflicts = true;
+        }
+      }
+
+      decision.instructionIDs[index] = uid;
+    }
+  }
+}
+
+void DisassemblerTables::setTableFields(OpcodeType          type,
+                                        InstructionContext  insnContext,
+                                        uint8_t             opcode,
+                                        const ModRMFilter   &filter,
+                                        InstrUID            uid) {
+  unsigned index;
+  
+  ContextDecision &decision = *Tables[type];
+
+  for (index = 0; index < IC_max; ++index) {
+    if (inheritsFrom((InstructionContext)index, 
+                     InstructionSpecifiers[uid].insnContext))
+      setTableFields(decision.opcodeDecisions[index].modRMDecisions[opcode], 
+                     filter,
+                     uid,
+                     opcode);
+  }
+}
diff --git a/libclamav/c++/llvm/utils/TableGen/X86DisassemblerTables.h b/libclamav/c++/llvm/utils/TableGen/X86DisassemblerTables.h
new file mode 100644
index 0000000..08eba01
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/X86DisassemblerTables.h
@@ -0,0 +1,291 @@
+//===- X86DisassemblerTables.h - Disassembler tables ------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file is part of the X86 Disassembler Emitter.
+// It contains the interface of the disassembler tables.
+// Documentation for the disassembler emitter in general can be found in
+//  X86DisassemblerEmitter.h.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef X86DISASSEMBLERTABLES_H
+#define X86DISASSEMBLERTABLES_H
+
+#include "X86DisassemblerShared.h"
+#include "X86ModRMFilters.h"
+
+#include "llvm/Support/raw_ostream.h"
+
+#include <vector>
+
+namespace llvm {
+
+namespace X86Disassembler {
+
+/// DisassemblerTables - Encapsulates all the decode tables being generated by
+///   the table emitter.  Contains functions to populate the tables as well as
+///   to emit them as hierarchical C structures suitable for consumption by the
+///   runtime.
+class DisassemblerTables {
+private:
+  /// The decoder tables.  There is one for each opcode type:
+  /// [0] one-byte opcodes
+  /// [1] two-byte opcodes of the form 0f __
+  /// [2] three-byte opcodes of the form 0f 38 __
+  /// [3] three-byte opcodes of the form 0f 3a __
+  ContextDecision* Tables[4];
+  
+  /// The instruction information table
+  std::vector<InstructionSpecifier> InstructionSpecifiers;
+  
+  /// True if there are primary decode conflicts in the instruction set
+  bool HasConflicts;
+  
+  /// emitOneID - Emits a table entry for a single instruction entry, at the
+  ///   innermost level of the structure hierarchy.  The entry is printed out
+  ///   in the format "nnnn, /* MNEMONIC */" where nnnn is the ID in decimal,
+  ///   the comma is printed if addComma is true, and the mnemonic is the name
+  ///   of the instruction as listed in the LLVM tables.
+  ///
+  /// @param o        - The output stream to print the entry on.
+  /// @param i        - The indentation level for o.
+  /// @param id       - The unique ID of the instruction to print.
+  /// @param addComma - Whether or not to print a comma after the ID.  True if
+  ///                    additional items will follow.
+  void emitOneID(raw_ostream &o,
+                 uint32_t &i,
+                 InstrUID id,
+                 bool addComma) const;
+  
+  /// emitModRMDecision - Emits a table of entries corresponding to a single
+  ///   ModR/M decision.  Compacts the ModR/M decision if possible.  ModR/M
+  ///   decisions are printed as:
+  ///
+  ///   { /* struct ModRMDecision */
+  ///     TYPE,
+  ///     modRMTablennnn
+  ///   }
+  ///
+  ///   where nnnn is a unique ID for the corresponding table of IDs.
+  ///   TYPE indicates whether the table has one entry that is the same
+  ///   regardless of ModR/M byte, two entries (one for bytes 0x00-0xbf and one
+  ///   for bytes 0xc0-0xff), or 256 entries, one for each possible byte.
+  ///   nnnn is the number of a table for looking up these values.  The tables
+  ///   are written separately so that tables consisting entirely of zeros will
+  ///   not be duplicated.  (These all have the name modRMEmptyTable.)  A table
+  ///   is printed as:
+  ///   
+  ///   InstrUID modRMTablennnn[k] = {
+  ///     nnnn, /* MNEMONIC */
+  ///     ...
+  ///     nnnn /* MNEMONIC */
+  ///   };
+  ///
+  /// @param o1       - The output stream to print the ID table to.
+  /// @param o2       - The output stream to print the decision structure to.
+  /// @param i1       - The indentation level to use with stream o1.
+  /// @param i2       - The indentation level to use with stream o2.
+  /// @param decision - The ModR/M decision to emit.  This decision has 256
+  ///                   entries - emitModRMDecision decides how to compact it.
+  void emitModRMDecision(raw_ostream &o1,
+                         raw_ostream &o2,
+                         uint32_t &i1,
+                         uint32_t &i2,
+                         ModRMDecision &decision) const;
+  
+  /// emitOpcodeDecision - Emits an OpcodeDecision and all its subsidiary ModR/M
+  ///   decisions.  An OpcodeDecision is printed as:
+  ///
+  ///   { /* struct OpcodeDecision */
+  ///     /* 0x00 */
+  ///     { /* struct ModRMDecision */
+  ///       ...
+  ///     }
+  ///     ...
+  ///   }
+  ///
+  ///   where the ModRMDecision structure is printed as described in the
+  ///   documentation for emitModRMDecision().  emitOpcodeDecision() passes on a
+  ///   stream and indent level for the UID tables generated by
+  ///   emitModRMDecision(), but does not use them itself.
+  ///
+  /// @param o1       - The output stream to print the ID tables generated by
+  ///                   emitModRMDecision() to.
+  /// @param o2       - The output stream for the decision structure itself.
+  /// @param i1       - The indent level to use with stream o1.
+  /// @param i2       - The indent level to use with stream o2.
+  /// @param decision - The OpcodeDecision to emit along with its subsidiary
+  ///                    structures.
+  void emitOpcodeDecision(raw_ostream &o1,
+                          raw_ostream &o2,
+                          uint32_t &i1,
+                          uint32_t &i2,
+                          OpcodeDecision &decision) const;
+  
+  /// emitContextDecision - Emits a ContextDecision and all its subsidiary 
+  ///   Opcode and ModRMDecisions.  A ContextDecision is printed as:
+  ///
+  ///   struct ContextDecision NAME = {
+  ///     { /* OpcodeDecisions */
+  ///       /* IC */
+  ///       { /* struct OpcodeDecision */
+  ///         ...
+  ///       },
+  ///       ...
+  ///     }
+  ///   }
+  ///
+  ///   NAME is the name of the ContextDecision (typically one of the four names 
+  ///   ONEBYTE_SYM, TWOBYTE_SYM, THREEBYTE38_SYM, and THREEBYTE3A_SYM from
+  ///   X86DisassemblerDecoderCommon.h).
+  ///   IC is one of the contexts in InstructionContext.  There is an opcode
+  ///   decision for each possible context.
+  ///   The OpcodeDecision structures are printed as described in the
+  ///   documentation for emitOpcodeDecision.
+  ///
+  /// @param o1       - The output stream to print the ID tables generated by
+  ///                   emitModRMDecision() to.
+  /// @param o2       - The output stream to print the decision structure to.
+  /// @param i1       - The indent level to use with stream o1.
+  /// @param i2       - The indent level to use with stream o2.
+  /// @param decision - The ContextDecision to emit along with its subsidiary
+  ///                   structures.
+  /// @param name     - The name for the ContextDecision.
+  void emitContextDecision(raw_ostream &o1,
+                           raw_ostream &o2,
+                           uint32_t &i1,
+                           uint32_t &i2,                           
+                           ContextDecision &decision,
+                           const char* name) const;
+  
+  /// emitInstructionInfo - Prints the instruction specifier table, which has
+  ///   one entry for each instruction, and contains name and operand
+  ///   information.  This table is printed as:
+  ///
+  ///   struct InstructionSpecifier CONTEXTS_SYM[k] = {
+  ///     {
+  ///       /* nnnn */
+  ///       "MNEMONIC",
+  ///       0xnn,
+  ///       {
+  ///         {
+  ///           ENCODING,
+  ///           TYPE
+  ///         },
+  ///         ...
+  ///       }
+  ///     },
+  ///   };
+  ///
+  ///   k is the total number of instructions.
+  ///   nnnn is the ID of the current instruction (0-based).  This table 
+  ///   includes entries for non-instructions like PHINODE.
+  ///   0xnn is the lowest possible opcode for the current instruction, used for
+  ///   AddRegFrm instructions to compute the operand's value.
+  ///   ENCODING and TYPE describe the encoding and type for a single operand.
+  ///
+  /// @param o  - The output stream to which the instruction table should be 
+  ///             written.
+  /// @param i  - The indent level for use with the stream.
+  void emitInstructionInfo(raw_ostream &o, uint32_t &i) const;
+  
+  /// emitContextTable - Prints the table that is used to translate from an
+  ///   instruction attribute mask to an instruction context.  This table is
+  ///   printed as:
+  ///
+  ///   InstructionContext CONTEXTS_STR[256] = {
+  ///     IC, /* 0x00 */
+  ///     ...
+  ///   };
+  ///
+  ///   IC is the context corresponding to the mask 0x00, and there are 256
+  ///   possible masks.
+  ///
+  /// @param o  - The output stream to which the context table should be written.
+  /// @param i  - The indent level for use with the stream.
+  void emitContextTable(raw_ostream &o, uint32_t &i) const;
+  
+  /// emitContextDecisions - Prints all four ContextDecision structures using
+  ///   emitContextDecision().
+  ///
+  /// @param o1 - The output stream to print the ID tables generated by
+  ///             emitModRMDecision() to.
+  /// @param o2 - The output stream to print the decision structures to.
+  /// @param i1 - The indent level to use with stream o1.
+  /// @param i2 - The indent level to use with stream o2.
+  void emitContextDecisions(raw_ostream &o1,
+                            raw_ostream &o2,
+                            uint32_t &i1,
+                            uint32_t &i2) const; 
+
+  /// setTableFields - Uses a ModRMFilter to set the appropriate entries in a
+  ///   ModRMDecision to refer to a particular instruction ID.
+  ///
+  /// @param decision - The ModRMDecision to populate.
+  /// @param filter   - The filter to use in deciding which entries to populate.
+  /// @param uid      - The unique ID to set matching entries to.
+  /// @param opcode   - The opcode of the instruction, for error reporting.
+  void setTableFields(ModRMDecision &decision,
+                      const ModRMFilter &filter,
+                      InstrUID uid,
+                      uint8_t opcode);
+public:
+  /// Constructor - Allocates space for the class decisions and clears them.
+  DisassemblerTables();
+  
+  ~DisassemblerTables();
+  
+  /// emit - Emits the instruction table, context table, and class decisions.
+  ///
+  /// @param o  - The output stream to print the tables to.
+  void emit(raw_ostream &o) const;
+  
+  /// setTableFields - Uses the opcode type, instruction context, opcode, and a
+  ///   ModRMFilter as criteria to set a particular set of entries in the
+  ///   decode tables to point to a specific uid.
+  ///
+  /// @param type         - The opcode type (ONEBYTE, TWOBYTE, etc.)
+  /// @param insnContext  - The context to use (IC, IC_64BIT, etc.)
+  /// @param opcode       - The last byte of the opcode (not counting any escape
+  ///                       or extended opcodes).
+  /// @param filter       - The ModRMFilter that decides which ModR/M byte values
+  ///                       correspond to the desired instruction.
+  /// @param uid          - The unique ID of the instruction.
+  void setTableFields(OpcodeType type,
+                      InstructionContext insnContext,
+                      uint8_t opcode,
+                      const ModRMFilter &filter,
+                      InstrUID uid);  
+  
+  /// specForUID - Returns the instruction specifier for a given unique
+  ///   instruction ID.  Used when resolving collisions.
+  ///
+  /// @param uid  - The unique ID of the instruction.
+  /// @return     - A reference to the instruction specifier. 
+  InstructionSpecifier& specForUID(InstrUID uid) {
+    if (uid >= InstructionSpecifiers.size())
+      InstructionSpecifiers.resize(uid + 1);
+    
+    return InstructionSpecifiers[uid];
+  }
+  
+  /// hasConflicts - Reports whether there were primary decode conflicts
+  ///   from any instructions added to the tables.
+  /// @return  - true if there were; false otherwise.
+  
+  bool hasConflicts() {
+    return HasConflicts;
+  }
+};
+
+} // namespace X86Disassembler
+
+} // namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/utils/TableGen/X86ModRMFilters.h b/libclamav/c++/llvm/utils/TableGen/X86ModRMFilters.h
new file mode 100644
index 0000000..45cb07a
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/X86ModRMFilters.h
@@ -0,0 +1,197 @@
+//===- X86ModRMFilters.h - Disassembler ModR/M filters ----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file is part of the X86 Disassembler Emitter.
+// It contains ModR/M filters that determine which values of the ModR/M byte
+//  are valid for a particular instruction.
+// Documentation for the disassembler emitter in general can be found in
+//  X86DisassemblerEmitter.h.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef X86MODRMFILTERS_H
+#define X86MODRMFILTERS_H
+
+#include "llvm/System/DataTypes.h"
+
+namespace llvm {
+
+namespace X86Disassembler {
+
+/// ModRMFilter - Abstract base class for classes that recognize patterns in
+///   ModR/M bytes.
+class ModRMFilter {
+public:
+  /// Destructor    - Override as necessary.
+  virtual ~ModRMFilter() { }
+
+  /// isDumb        - Indicates whether this filter returns the same value for
+  ///                 any value of the ModR/M byte.
+  ///
+  /// @result       - True if the filter returns the same value for any ModR/M
+  ///                 byte; false if not.
+  virtual bool isDumb() const { return false; }
+  
+  /// accepts       - Indicates whether the filter accepts a particular ModR/M
+  ///                 byte value.
+  ///
+  /// @result       - True if the filter accepts the ModR/M byte; false if not.
+  virtual bool accepts(uint8_t modRM) const = 0;
+};
+
+/// DumbFilter - Accepts any ModR/M byte.  Used for instructions that do not
+///   require a ModR/M byte or instructions where the entire ModR/M byte is used
+///   for operands.
+class DumbFilter : public ModRMFilter {
+public:
+  bool isDumb() const {
+    return true;
+  }
+  
+  bool accepts(uint8_t modRM) const {
+    return true;
+  }
+};
+
+/// ModFilter - Filters based on the mod bits [bits 7-6] of the ModR/M byte.
+///   Some instructions are classified based on whether they are 11 or anything
+///   else.  This filter performs that classification.
+class ModFilter : public ModRMFilter {
+private:
+  bool R;
+public:
+  /// Constructor
+  ///
+  /// @r            - True if the mod bits of the ModR/M byte must be 11; false
+  ///                 otherwise.  The name r derives from the fact that the mod
+  ///                 bits indicate whether the R/M bits [bits 2-0] signify a
+  ///                 register or a memory operand.
+  ModFilter(bool r) :
+    ModRMFilter(),
+    R(r) {
+  }
+    
+  bool accepts(uint8_t modRM) const {
+    return R == ((modRM & 0xc0) == 0xc0);
+  }
+};
+
+/// EscapeFilter - Filters escape opcodes, which are classified in two ways.  If
+///   the ModR/M byte is between 0xc0 and 0xff, then there is one slot for each
+///   possible value.  Otherwise, there is one instruction for each value of the
+///   nnn field [bits 5-3], known elsewhere as the reg field.
+class EscapeFilter : public ModRMFilter {
+private:
+  bool C0_FF;
+  uint8_t NNN_or_ModRM;
+public:
+  /// Constructor
+  ///
+  /// @c0_ff        - True if the ModR/M byte must fall between 0xc0 and 0xff;
+  ///                 false otherwise.
+  /// @nnn_or_modRM - If c0_ff is true, the required value of the entire ModR/M
+  ///                 byte.  If c0_ff is false, the required value of the nnn
+  ///                 field.
+  EscapeFilter(bool c0_ff, uint8_t nnn_or_modRM) :
+    ModRMFilter(),
+    C0_FF(c0_ff),
+    NNN_or_ModRM(nnn_or_modRM) {
+  }
+    
+  bool accepts(uint8_t modRM) const {
+    return (C0_FF && modRM >= 0xc0 && modRM == NNN_or_ModRM) ||
+           (!C0_FF && modRM < 0xc0 && ((modRM & 0x38) >> 3) == NNN_or_ModRM);
+  }
+};
+
+/// AddRegEscapeFilter - Some escape opcodes have one of the register operands
+///   added to the ModR/M byte, meaning that a range of eight ModR/M values
+///   maps to a single instruction.  Such instructions require the ModR/M byte
+///   to fall between 0xc0 and 0xff.
+class AddRegEscapeFilter : public ModRMFilter {
+private:
+  uint8_t ModRM;
+public:
+  /// Constructor
+  ///
+  /// @modRM        - The value of the ModR/M byte when the register operand
+  ///                 refers to the first register in the register set.
+  AddRegEscapeFilter(uint8_t modRM) : ModRM(modRM) {
+  }
+  
+  bool accepts(uint8_t modRM) const {
+    return modRM >= ModRM && modRM < ModRM + 8;
+  }
+};
+
+/// ExtendedFilter - Extended opcodes are classified based on the value of the
+///   mod field [bits 7-6] and the value of the nnn field [bits 5-3]. 
+class ExtendedFilter : public ModRMFilter {
+private:
+  bool R;
+  uint8_t NNN;
+public:
+  /// Constructor
+  ///
+  /// @r            - True if the mod field must be set to 11; false otherwise.
+  ///                 The name is explained at ModFilter.
+  /// @nnn          - The required value of the nnn field.
+  ExtendedFilter(bool r, uint8_t nnn) : 
+    ModRMFilter(),
+    R(r),
+    NNN(nnn) {
+  }
+    
+  bool accepts(uint8_t modRM) const {
+    return (R ? ((modRM & 0xc0) == 0xc0) : ((modRM & 0xc0) != 0xc0)) &&
+           ((modRM & 0x38) >> 3) == NNN;
+  }
+};
+
+/// ExactFilter - The occasional extended opcode (such as VMCALL or MONITOR)
+///   requires the ModR/M byte to have a specific value.
+class ExactFilter : public ModRMFilter
+{
+private:
+  uint8_t ModRM;
+public:
+  /// Constructor
+  ///
+  /// @modRM        - The required value of the full ModR/M byte.
+  ExactFilter(uint8_t modRM) :
+    ModRMFilter(),
+    ModRM(modRM) {
+  }
+    
+  bool accepts(uint8_t modRM) const {
+    return modRM == ModRM;
+  }
+};
+
+} // namespace X86Disassembler
+
+} // namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp b/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp
new file mode 100644
index 0000000..2b6e30d
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp
@@ -0,0 +1,959 @@
+//===- X86RecognizableInstr.cpp - Disassembler instruction spec --*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file is part of the X86 Disassembler Emitter.
+// It contains the implementation of a single recognizable instruction.
+// Documentation for the disassembler emitter in general can be found in
+//  X86DisassemblerEmitter.h.
+//
+//===----------------------------------------------------------------------===//
+
+#include "X86DisassemblerShared.h"
+#include "X86RecognizableInstr.h"
+#include "X86ModRMFilters.h"
+
+#include "llvm/Support/ErrorHandling.h"
+
+#include <string>
+
+using namespace llvm;
+
+// A clone of X86 since we can't depend on something that is generated.
+namespace X86Local {
+  enum {
+    Pseudo      = 0,
+    RawFrm      = 1,
+    AddRegFrm   = 2,
+    MRMDestReg  = 3,
+    MRMDestMem  = 4,
+    MRMSrcReg   = 5,
+    MRMSrcMem   = 6,
+    MRM0r = 16, MRM1r = 17, MRM2r = 18, MRM3r = 19, 
+    MRM4r = 20, MRM5r = 21, MRM6r = 22, MRM7r = 23,
+    MRM0m = 24, MRM1m = 25, MRM2m = 26, MRM3m = 27,
+    MRM4m = 28, MRM5m = 29, MRM6m = 30, MRM7m = 31,
+    MRMInitReg  = 32
+  };
+  
+  enum {
+    TB  = 1,
+    REP = 2,
+    D8 = 3, D9 = 4, DA = 5, DB = 6,
+    DC = 7, DD = 8, DE = 9, DF = 10,
+    XD = 11,  XS = 12,
+    T8 = 13,  TA = 14
+  };
+}
+  
+#define ONE_BYTE_EXTENSION_TABLES \
+  EXTENSION_TABLE(80)             \
+  EXTENSION_TABLE(81)             \
+  EXTENSION_TABLE(82)             \
+  EXTENSION_TABLE(83)             \
+  EXTENSION_TABLE(8f)             \
+  EXTENSION_TABLE(c0)             \
+  EXTENSION_TABLE(c1)             \
+  EXTENSION_TABLE(c6)             \
+  EXTENSION_TABLE(c7)             \
+  EXTENSION_TABLE(d0)             \
+  EXTENSION_TABLE(d1)             \
+  EXTENSION_TABLE(d2)             \
+  EXTENSION_TABLE(d3)             \
+  EXTENSION_TABLE(f6)             \
+  EXTENSION_TABLE(f7)             \
+  EXTENSION_TABLE(fe)             \
+  EXTENSION_TABLE(ff)
+  
+#define TWO_BYTE_EXTENSION_TABLES \
+  EXTENSION_TABLE(00)             \
+  EXTENSION_TABLE(01)             \
+  EXTENSION_TABLE(18)             \
+  EXTENSION_TABLE(71)             \
+  EXTENSION_TABLE(72)             \
+  EXTENSION_TABLE(73)             \
+  EXTENSION_TABLE(ae)             \
+  EXTENSION_TABLE(b9)             \
+  EXTENSION_TABLE(ba)             \
+  EXTENSION_TABLE(c7)
+  
+#define TWO_BYTE_FULL_EXTENSION_TABLES \
+  EXTENSION_TABLE(01)
+  
+
+using namespace X86Disassembler;
+
+/// needsModRMForDecode - Indicates whether a particular instruction requires a
+///   ModR/M byte for the instruction to be properly decoded.  For example, a 
+///   MRMDestReg instruction needs the Mod field in the ModR/M byte to be set to
+///   0b11.
+///
+/// @param form - The form of the instruction.
+/// @return     - true if the form implies that a ModR/M byte is required, false
+///               otherwise.
+static bool needsModRMForDecode(uint8_t form) {
+  return form == X86Local::MRMDestReg ||
+         form == X86Local::MRMDestMem ||
+         form == X86Local::MRMSrcReg  ||
+         form == X86Local::MRMSrcMem  ||
+         (form >= X86Local::MRM0r && form <= X86Local::MRM7r) ||
+         (form >= X86Local::MRM0m && form <= X86Local::MRM7m);
+}
+
+/// isRegFormat - Indicates whether a particular form requires the Mod field of
+///   the ModR/M byte to be 0b11.
+///
+/// @param form - The form of the instruction.
+/// @return     - true if the form implies that Mod must be 0b11, false
+///               otherwise.
+static bool isRegFormat(uint8_t form) {
+  return form == X86Local::MRMDestReg ||
+         form == X86Local::MRMSrcReg  ||
+         (form >= X86Local::MRM0r && form <= X86Local::MRM7r);
+}
+
+/// byteFromBitsInit - Extracts a value at most 8 bits in width from a BitsInit.
+///   Useful for switch statements and the like.
+///
+/// @param init - A reference to the BitsInit to be decoded.
+/// @return     - The field, with the first bit in the BitsInit as the lowest
+///               order bit.
+static uint8_t byteFromBitsInit(BitsInit &init) {
+  int width = init.getNumBits();
+
+  assert(width <= 8 && "Field is too large for uint8_t!");
+
+  int     index;
+  uint8_t mask = 0x01;
+
+  uint8_t ret = 0;
+
+  for (index = 0; index < width; index++) {
+    if (static_cast<BitInit*>(init.getBit(index))->getValue())
+      ret |= mask;
+
+    mask <<= 1;
+  }
+
+  return ret;
+}
+
+/// byteFromRec - Extract a value at most 8 bits in width from a Record given the
+///   name of the field.
+///
+/// @param rec  - The record from which to extract the value.
+/// @param name - The name of the field in the record.
+/// @return     - The field, as translated by byteFromBitsInit().
+static uint8_t byteFromRec(const Record* rec, const std::string &name) {
+  BitsInit* bits = rec->getValueAsBitsInit(name);
+  return byteFromBitsInit(*bits);
+}
+
+RecognizableInstr::RecognizableInstr(DisassemblerTables &tables,
+                                     const CodeGenInstruction &insn,
+                                     InstrUID uid) {
+  UID = uid;
+
+  Rec = insn.TheDef;
+  Name = Rec->getName();
+  Spec = &tables.specForUID(UID);
+  
+  if (!Rec->isSubClassOf("X86Inst")) {
+    ShouldBeEmitted = false;
+    return;
+  }
+  
+  Prefix   = byteFromRec(Rec, "Prefix");
+  Opcode   = byteFromRec(Rec, "Opcode");
+  Form     = byteFromRec(Rec, "FormBits");
+  SegOvr   = byteFromRec(Rec, "SegOvrBits");
+  
+  HasOpSizePrefix  = Rec->getValueAsBit("hasOpSizePrefix");
+  HasREX_WPrefix   = Rec->getValueAsBit("hasREX_WPrefix");
+  HasLockPrefix    = Rec->getValueAsBit("hasLockPrefix");
+  IsCodeGenOnly    = Rec->getValueAsBit("isCodeGenOnly");
+  
+  Name      = Rec->getName();
+  AsmString = Rec->getValueAsString("AsmString");
+  
+  Operands = &insn.OperandList;
+  
+  IsSSE            = HasOpSizePrefix && (Name.find("16") == Name.npos);
+  HasFROperands    = false;
+  
+  ShouldBeEmitted  = true;
+}
+  
+void RecognizableInstr::processInstr(DisassemblerTables &tables,
+                                     const CodeGenInstruction &insn,
+                                     InstrUID uid)
+{
+  RecognizableInstr recogInstr(tables, insn, uid);
+  
+  recogInstr.emitInstructionSpecifier(tables);
+  
+  if (recogInstr.shouldBeEmitted())
+    recogInstr.emitDecodePath(tables);
+}
+
+InstructionContext RecognizableInstr::insnContext() const {
+  InstructionContext insnContext;
+
+  if (Name.find("64") != Name.npos || HasREX_WPrefix) {
+    if (HasREX_WPrefix && HasOpSizePrefix)
+      insnContext = IC_64BIT_REXW_OPSIZE;
+    else if (HasOpSizePrefix)
+      insnContext = IC_64BIT_OPSIZE;
+    else if (HasREX_WPrefix && Prefix == X86Local::XS)
+      insnContext = IC_64BIT_REXW_XS;
+    else if (HasREX_WPrefix && Prefix == X86Local::XD)
+      insnContext = IC_64BIT_REXW_XD;
+    else if (Prefix == X86Local::XD)
+      insnContext = IC_64BIT_XD;
+    else if (Prefix == X86Local::XS)
+      insnContext = IC_64BIT_XS;
+    else if (HasREX_WPrefix)
+      insnContext = IC_64BIT_REXW;
+    else
+      insnContext = IC_64BIT;
+  } else {
+    if (HasOpSizePrefix)
+      insnContext = IC_OPSIZE;
+    else if (Prefix == X86Local::XD)
+      insnContext = IC_XD;
+    else if (Prefix == X86Local::XS)
+      insnContext = IC_XS;
+    else
+      insnContext = IC;
+  }
+
+  return insnContext;
+}
+  
+RecognizableInstr::filter_ret RecognizableInstr::filter() const {
+  // Filter out intrinsics
+  
+  if (!Rec->isSubClassOf("X86Inst"))
+    return FILTER_STRONG;
+  
+  if (Form == X86Local::Pseudo ||
+      IsCodeGenOnly)
+    return FILTER_STRONG;
+  
+  // Filter out instructions with a LOCK prefix;
+  //   prefer forms that do not have the prefix
+  if (HasLockPrefix)
+    return FILTER_WEAK;
+  
+  // Filter out artificial instructions
+
+  if (Name.find("TAILJMP") != Name.npos   ||
+      Name.find("_Int") != Name.npos      ||
+      Name.find("_int") != Name.npos      ||
+      Name.find("Int_") != Name.npos      ||
+      Name.find("_NOREX") != Name.npos    ||
+      Name.find("EH_RETURN") != Name.npos ||
+      Name.find("V_SET") != Name.npos     ||
+      Name.find("LOCK_") != Name.npos     ||
+      Name.find("WIN") != Name.npos)
+    return FILTER_STRONG;
+
+  // Special cases.
+  
+  if (Name.find("PCMPISTRI") != Name.npos && Name != "PCMPISTRI")
+    return FILTER_WEAK;
+  if (Name.find("PCMPESTRI") != Name.npos && Name != "PCMPESTRI")
+    return FILTER_WEAK;
+
+  if (Name.find("MOV") != Name.npos && Name.find("r0") != Name.npos)
+    return FILTER_WEAK;
+  if (Name.find("MOVZ") != Name.npos && Name.find("MOVZX") == Name.npos)
+    return FILTER_WEAK;
+  if (Name.find("Fs") != Name.npos)
+    return FILTER_WEAK;
+  if (Name == "MOVLPDrr"          ||
+      Name == "MOVLPSrr"          ||
+      Name == "PUSHFQ"            ||
+      Name == "BSF16rr"           ||
+      Name == "BSF16rm"           ||
+      Name == "BSR16rr"           ||
+      Name == "BSR16rm"           ||
+      Name == "MOVSX16rm8"        ||
+      Name == "MOVSX16rr8"        ||
+      Name == "MOVZX16rm8"        ||
+      Name == "MOVZX16rr8"        ||
+      Name == "PUSH32i16"         ||
+      Name == "PUSH64i16"         ||
+      Name == "MOVPQI2QImr"       ||
+      Name == "MOVSDmr"           ||
+      Name == "MOVSDrm"           ||
+      Name == "MOVSSmr"           ||
+      Name == "MOVSSrm"           ||
+      Name == "MMX_MOVD64rrv164"  ||
+      Name == "CRC32m16"          ||
+      Name == "MOV64ri64i32"      ||
+      Name == "CRC32r16")
+    return FILTER_WEAK;
+
+  // Filter out instructions with segment override prefixes.
+  // They're too messy to handle now and we'll special case them if needed.
+
+  if (SegOvr)
+    return FILTER_STRONG;
+  
+  // Filter out instructions that can't be printed.
+
+  if (AsmString.size() == 0)
+    return FILTER_STRONG;
+  
+  // Filter out instructions with subreg operands.
+  
+  if (AsmString.find("subreg") != AsmString.npos)
+    return FILTER_STRONG;
+
+  assert(Form != X86Local::MRMInitReg &&
+         "FORMAT_MRMINITREG instruction not skipped");
+  
+  if (HasFROperands && Name.find("MOV") != Name.npos &&
+      ((Name.find("2") != Name.npos && Name.find("32") == Name.npos) ||
+       (Name.find("to") != Name.npos)))
+    return FILTER_WEAK;
+
+  return FILTER_NORMAL;
+}
+  
+void RecognizableInstr::handleOperand(
+  bool optional,
+  unsigned &operandIndex,
+  unsigned &physicalOperandIndex,
+  unsigned &numPhysicalOperands,
+  unsigned *operandMapping,
+  OperandEncoding (*encodingFromString)(const std::string&, bool hasOpSizePrefix)) {
+  if (optional) {
+    if (physicalOperandIndex >= numPhysicalOperands)
+      return;
+  } else {
+    assert(physicalOperandIndex < numPhysicalOperands);
+  }
+  
+  while (operandMapping[operandIndex] != operandIndex) {
+    Spec->operands[operandIndex].encoding = ENCODING_DUP;
+    Spec->operands[operandIndex].type =
+      (OperandType)(TYPE_DUP0 + operandMapping[operandIndex]);
+    ++operandIndex;
+  }
+  
+  const std::string &typeName = (*Operands)[operandIndex].Rec->getName();
+  
+  Spec->operands[operandIndex].encoding = encodingFromString(typeName,
+                                                              HasOpSizePrefix);
+  Spec->operands[operandIndex].type = typeFromString(typeName, 
+                                                      IsSSE,
+                                                      HasREX_WPrefix,
+                                                      HasOpSizePrefix);
+  
+  ++operandIndex;
+  ++physicalOperandIndex;
+}
+
+void RecognizableInstr::emitInstructionSpecifier(DisassemblerTables &tables) {
+  Spec->name       = Name;
+    
+  if (!Rec->isSubClassOf("X86Inst"))
+    return;
+  
+  switch (filter()) {
+  case FILTER_WEAK:
+    Spec->filtered = true;
+    break;
+  case FILTER_STRONG:
+    ShouldBeEmitted = false;
+    return;
+  case FILTER_NORMAL:
+    break;
+  }
+  
+  Spec->insnContext = insnContext();
+    
+  const std::vector<CodeGenInstruction::OperandInfo> &OperandList = *Operands;
+  
+  unsigned operandIndex;
+  unsigned numOperands = OperandList.size();
+  unsigned numPhysicalOperands = 0;
+  
+  // operandMapping maps from operands in OperandList to their originals.
+  // If operandMapping[i] != i, then the entry is a duplicate.
+  unsigned operandMapping[X86_MAX_OPERANDS];
+  
+  bool hasFROperands = false;
+  
+  assert(numOperands < X86_MAX_OPERANDS && "X86_MAX_OPERANDS is not large enough");
+  
+  for (operandIndex = 0; operandIndex < numOperands; ++operandIndex) {
+    if (OperandList[operandIndex].Constraints.size()) {
+      const std::string &constraint = OperandList[operandIndex].Constraints[0];
+      std::string::size_type tiedToPos;
+
+      if ((tiedToPos = constraint.find(" << 16) | (1 << TOI::TIED_TO))")) !=
+         constraint.npos) {
+        tiedToPos--;
+        operandMapping[operandIndex] = constraint[tiedToPos] - '0';
+      } else {
+        ++numPhysicalOperands;
+        operandMapping[operandIndex] = operandIndex;
+      }
+    } else {
+      ++numPhysicalOperands;
+      operandMapping[operandIndex] = operandIndex;
+    }
+
+    const std::string &recName = OperandList[operandIndex].Rec->getName();
+
+    if (recName.find("FR") != recName.npos)
+      hasFROperands = true;
+  }
+  
+  if (hasFROperands && Name.find("MOV") != Name.npos &&
+      ((Name.find("2") != Name.npos && Name.find("32") == Name.npos) ||
+       (Name.find("to") != Name.npos)))
+    ShouldBeEmitted = false;
+  
+  if (!ShouldBeEmitted)
+    return;
+
+#define HANDLE_OPERAND(class)               \
+  handleOperand(false,                      \
+                operandIndex,               \
+                physicalOperandIndex,       \
+                numPhysicalOperands,        \
+                operandMapping,             \
+                class##EncodingFromString);
+  
+#define HANDLE_OPTIONAL(class)              \
+  handleOperand(true,                       \
+                operandIndex,               \
+                physicalOperandIndex,       \
+                numPhysicalOperands,        \
+                operandMapping,             \
+                class##EncodingFromString);
+  
+  // operandIndex should always be < numOperands
+  operandIndex = 0;
+  // physicalOperandIndex should always be < numPhysicalOperands
+  unsigned physicalOperandIndex = 0;
+    
+  switch (Form) {
+  case X86Local::RawFrm:
+    // Operand 1 (optional) is an address or immediate.
+    // Operand 2 (optional) is an immediate.
+    assert(numPhysicalOperands <= 2 && 
+           "Unexpected number of operands for RawFrm");
+    HANDLE_OPTIONAL(relocation)
+    HANDLE_OPTIONAL(immediate)
+    break;
+  case X86Local::AddRegFrm:
+    // Operand 1 is added to the opcode.
+    // Operand 2 (optional) is an address.
+    assert(numPhysicalOperands >= 1 && numPhysicalOperands <= 2 &&
+           "Unexpected number of operands for AddRegFrm");
+    HANDLE_OPERAND(opcodeModifier)
+    HANDLE_OPTIONAL(relocation)
+    break;
+  case X86Local::MRMDestReg:
+    // Operand 1 is a register operand in the R/M field.
+    // Operand 2 is a register operand in the Reg/Opcode field.
+    // Operand 3 (optional) is an immediate.
+    assert(numPhysicalOperands >= 2 && numPhysicalOperands <= 3 &&
+           "Unexpected number of operands for MRMDestRegFrm");
+    HANDLE_OPERAND(rmRegister)
+    HANDLE_OPERAND(roRegister)
+    HANDLE_OPTIONAL(immediate)
+    break;
+  case X86Local::MRMDestMem:
+    // Operand 1 is a memory operand (possibly SIB-extended)
+    // Operand 2 is a register operand in the Reg/Opcode field.
+    // Operand 3 (optional) is an immediate.
+    assert(numPhysicalOperands >= 2 && numPhysicalOperands <= 3 &&
+           "Unexpected number of operands for MRMDestMemFrm");
+    HANDLE_OPERAND(memory)
+    HANDLE_OPERAND(roRegister)
+    HANDLE_OPTIONAL(immediate)
+    break;
+  case X86Local::MRMSrcReg:
+    // Operand 1 is a register operand in the Reg/Opcode field.
+    // Operand 2 is a register operand in the R/M field.
+    // Operand 3 (optional) is an immediate.
+    assert(numPhysicalOperands >= 2 && numPhysicalOperands <= 3 &&
+           "Unexpected number of operands for MRMSrcRegFrm");
+    HANDLE_OPERAND(roRegister)
+    HANDLE_OPERAND(rmRegister)
+    HANDLE_OPTIONAL(immediate)
+    break;
+  case X86Local::MRMSrcMem:
+    // Operand 1 is a register operand in the Reg/Opcode field.
+    // Operand 2 is a memory operand (possibly SIB-extended)
+    // Operand 3 (optional) is an immediate.
+    assert(numPhysicalOperands >= 2 && numPhysicalOperands <= 3 &&
+           "Unexpected number of operands for MRMSrcMemFrm");
+    HANDLE_OPERAND(roRegister)
+    HANDLE_OPERAND(memory)
+    HANDLE_OPTIONAL(immediate)
+    break;
+  case X86Local::MRM0r:
+  case X86Local::MRM1r:
+  case X86Local::MRM2r:
+  case X86Local::MRM3r:
+  case X86Local::MRM4r:
+  case X86Local::MRM5r:
+  case X86Local::MRM6r:
+  case X86Local::MRM7r:
+    // Operand 1 is a register operand in the R/M field.
+    // Operand 2 (optional) is an immediate or relocation.
+    assert(numPhysicalOperands <= 2 &&
+           "Unexpected number of operands for MRMnRFrm");
+    HANDLE_OPTIONAL(rmRegister)
+    HANDLE_OPTIONAL(relocation)
+    break;
+  case X86Local::MRM0m:
+  case X86Local::MRM1m:
+  case X86Local::MRM2m:
+  case X86Local::MRM3m:
+  case X86Local::MRM4m:
+  case X86Local::MRM5m:
+  case X86Local::MRM6m:
+  case X86Local::MRM7m:
+    // Operand 1 is a memory operand (possibly SIB-extended)
+    // Operand 2 (optional) is an immediate or relocation.
+    assert(numPhysicalOperands >= 1 && numPhysicalOperands <= 2 &&
+           "Unexpected number of operands for MRMnMFrm");
+    HANDLE_OPERAND(memory)
+    HANDLE_OPTIONAL(relocation)
+    break;
+  case X86Local::MRMInitReg:
+    // Ignored.
+    break;
+  }
+  
+  #undef HANDLE_OPERAND
+  #undef HANDLE_OPTIONAL
+}
+
+void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
+  // Special cases where the LLVM tables are not complete
+
+#define EXACTCASE(class, name, lastbyte)         \
+  if (Name == name) {                            \
+    tables.setTableFields(class,                 \
+                          insnContext(),         \
+                          Opcode,                \
+                          ExactFilter(lastbyte), \
+                          UID);                  \
+    Spec->modifierBase = Opcode;                 \
+    return;                                      \
+  }
+
+  EXACTCASE(TWOBYTE, "MONITOR",  0xc8)
+  EXACTCASE(TWOBYTE, "MWAIT",    0xc9)
+  EXACTCASE(TWOBYTE, "SWPGS",    0xf8)
+  EXACTCASE(TWOBYTE, "INVEPT",   0x80)
+  EXACTCASE(TWOBYTE, "INVVPID",  0x81)
+  EXACTCASE(TWOBYTE, "VMCALL",   0xc1)
+  EXACTCASE(TWOBYTE, "VMLAUNCH", 0xc2)
+  EXACTCASE(TWOBYTE, "VMRESUME", 0xc3)
+  EXACTCASE(TWOBYTE, "VMXOFF",   0xc4)
+
+  if (Name == "INVLPG") {
+    tables.setTableFields(TWOBYTE,
+                          insnContext(),
+                          Opcode,
+                          ExtendedFilter(false, 7),
+                          UID);
+    Spec->modifierBase = Opcode;
+    return;
+  }
+
+  OpcodeType    opcodeType  = (OpcodeType)-1;
+  
+  ModRMFilter*  filter      = NULL; 
+  uint8_t       opcodeToSet = 0;
+
+  switch (Prefix) {
+  // Extended two-byte opcodes can start with f2 0f, f3 0f, or 0f
+  case X86Local::XD:
+  case X86Local::XS:
+  case X86Local::TB:
+    opcodeType = TWOBYTE;
+
+    switch (Opcode) {
+#define EXTENSION_TABLE(n) case 0x##n:
+    TWO_BYTE_EXTENSION_TABLES
+#undef EXTENSION_TABLE
+      switch (Form) {
+      default:
+        llvm_unreachable("Unhandled two-byte extended opcode");
+      case X86Local::MRM0r:
+      case X86Local::MRM1r:
+      case X86Local::MRM2r:
+      case X86Local::MRM3r:
+      case X86Local::MRM4r:
+      case X86Local::MRM5r:
+      case X86Local::MRM6r:
+      case X86Local::MRM7r:
+        filter = new ExtendedFilter(true, Form - X86Local::MRM0r);
+        break;
+      case X86Local::MRM0m:
+      case X86Local::MRM1m:
+      case X86Local::MRM2m:
+      case X86Local::MRM3m:
+      case X86Local::MRM4m:
+      case X86Local::MRM5m:
+      case X86Local::MRM6m:
+      case X86Local::MRM7m:
+        filter = new ExtendedFilter(false, Form - X86Local::MRM0m);
+        break;
+      } // switch (Form)
+      break;
+    default:
+      if (needsModRMForDecode(Form))
+        filter = new ModFilter(isRegFormat(Form));
+      else
+        filter = new DumbFilter();
+        
+      break;
+    } // switch (opcode)
+    opcodeToSet = Opcode;
+    break;
+  case X86Local::T8:
+    opcodeType = THREEBYTE_38;
+    if (needsModRMForDecode(Form))
+      filter = new ModFilter(isRegFormat(Form));
+    else
+      filter = new DumbFilter();
+    opcodeToSet = Opcode;
+    break;
+  case X86Local::TA:
+    opcodeType = THREEBYTE_3A;
+    if (needsModRMForDecode(Form))
+      filter = new ModFilter(isRegFormat(Form));
+    else
+      filter = new DumbFilter();
+    opcodeToSet = Opcode;
+    break;
+  case X86Local::D8:
+  case X86Local::D9:
+  case X86Local::DA:
+  case X86Local::DB:
+  case X86Local::DC:
+  case X86Local::DD:
+  case X86Local::DE:
+  case X86Local::DF:
+    assert(Opcode >= 0xc0 && "Unexpected opcode for an escape opcode");
+    opcodeType = ONEBYTE;
+    if (Form == X86Local::AddRegFrm) {
+      Spec->modifierType = MODIFIER_MODRM;
+      Spec->modifierBase = Opcode;
+      filter = new AddRegEscapeFilter(Opcode);
+    } else {
+      filter = new EscapeFilter(true, Opcode);
+    }
+    opcodeToSet = 0xd8 + (Prefix - X86Local::D8);
+    break;
+  default:
+    opcodeType = ONEBYTE;
+    switch (Opcode) {
+#define EXTENSION_TABLE(n) case 0x##n:
+    ONE_BYTE_EXTENSION_TABLES
+#undef EXTENSION_TABLE
+      switch (Form) {
+      default:
+        llvm_unreachable("Fell through the cracks of a single-byte "
+                         "extended opcode");
+      case X86Local::MRM0r:
+      case X86Local::MRM1r:
+      case X86Local::MRM2r:
+      case X86Local::MRM3r:
+      case X86Local::MRM4r:
+      case X86Local::MRM5r:
+      case X86Local::MRM6r:
+      case X86Local::MRM7r:
+        filter = new ExtendedFilter(true, Form - X86Local::MRM0r);
+        break;
+      case X86Local::MRM0m:
+      case X86Local::MRM1m:
+      case X86Local::MRM2m:
+      case X86Local::MRM3m:
+      case X86Local::MRM4m:
+      case X86Local::MRM5m:
+      case X86Local::MRM6m:
+      case X86Local::MRM7m:
+        filter = new ExtendedFilter(false, Form - X86Local::MRM0m);
+        break;
+      } // switch (Form)
+      break;
+    case 0xd8:
+    case 0xd9:
+    case 0xda:
+    case 0xdb:
+    case 0xdc:
+    case 0xdd:
+    case 0xde:
+    case 0xdf:
+      filter = new EscapeFilter(false, Form - X86Local::MRM0m);
+      break;
+    default:
+      if (needsModRMForDecode(Form))
+        filter = new ModFilter(isRegFormat(Form));
+      else
+        filter = new DumbFilter();
+      break;
+    } // switch (Opcode)
+    opcodeToSet = Opcode;
+  } // switch (Prefix)
+
+  assert(opcodeType != (OpcodeType)-1 &&
+         "Opcode type not set");
+  assert(filter && "Filter not set");
+
+  if (Form == X86Local::AddRegFrm) {
+    if(Spec->modifierType != MODIFIER_MODRM) {
+      assert(opcodeToSet < 0xf9 &&
+             "Not enough room for all ADDREG_FRM operands");
+    
+      uint8_t currentOpcode;
+
+      for (currentOpcode = opcodeToSet;
+           currentOpcode < opcodeToSet + 8;
+           ++currentOpcode)
+        tables.setTableFields(opcodeType, 
+                              insnContext(), 
+                              currentOpcode, 
+                              *filter, 
+                              UID);
+    
+      Spec->modifierType = MODIFIER_OPCODE;
+      Spec->modifierBase = opcodeToSet;
+    } else {
+      // modifierBase was set where MODIFIER_MODRM was set
+      tables.setTableFields(opcodeType, 
+                            insnContext(), 
+                            opcodeToSet, 
+                            *filter, 
+                            UID);
+    }
+  } else {
+    tables.setTableFields(opcodeType,
+                          insnContext(),
+                          opcodeToSet,
+                          *filter,
+                          UID);
+    
+    Spec->modifierType = MODIFIER_NONE;
+    Spec->modifierBase = opcodeToSet;
+  }
+  
+  delete filter;
+}
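The escape-opcode arithmetic and the AddRegFrm fan-out in emitDecodePath above can be sketched in Python. The constant names below are illustrative stand-ins for the X86Local prefix enum, not the real generated values; this is a sketch of the logic, not the emitter itself.

```python
# Hypothetical stand-ins for the X86Local::D8..DF prefix enumerators.
D8, D9, DA, DB, DC, DD, DE, DF = range(8)

def escape_opcode(prefix):
    """Mirror of 'opcodeToSet = 0xd8 + (Prefix - X86Local::D8)': each x87
    escape prefix maps to one of the eight one-byte opcodes 0xD8..0xDF."""
    return 0xD8 + (prefix - D8)

def addreg_fanout(opcode_to_set):
    """An AddRegFrm instruction encodes its register in the low three bits
    of the opcode, so the emitter registers the same UID under eight
    consecutive opcodes (hence the 'opcodeToSet < 0xf9' assertion)."""
    return [opcode_to_set + i for i in range(8)]
```
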
+
+#define TYPE(str, type) if (s == str) return type;
+OperandType RecognizableInstr::typeFromString(const std::string &s,
+                                              bool isSSE,
+                                              bool hasREX_WPrefix,
+                                              bool hasOpSizePrefix) {
+  if (isSSE) {
+    // For SSE instructions, we ignore the OpSize prefix and force operand 
+    // sizes.
+    TYPE("GR16",              TYPE_R16)
+    TYPE("GR32",              TYPE_R32)
+    TYPE("GR64",              TYPE_R64)
+  }
+  if(hasREX_WPrefix) {
+    // For instructions with a REX_W prefix, a declared 32-bit register encoding
+    // is special.
+    TYPE("GR32",              TYPE_R32)
+  }
+  if(!hasOpSizePrefix) {
+    // For instructions without an OpSize prefix, a declared 16-bit register or
+    // immediate encoding is special.
+    TYPE("GR16",              TYPE_R16)
+    TYPE("i16imm",            TYPE_IMM16)
+  }
+  TYPE("i16mem",              TYPE_Mv)
+  TYPE("i16imm",              TYPE_IMMv)
+  TYPE("i16i8imm",            TYPE_IMMv)
+  TYPE("GR16",                TYPE_Rv)
+  TYPE("i32mem",              TYPE_Mv)
+  TYPE("i32imm",              TYPE_IMMv)
+  TYPE("i32i8imm",            TYPE_IMM32)
+  TYPE("GR32",                TYPE_Rv)
+  TYPE("i64mem",              TYPE_Mv)
+  TYPE("i64i32imm",           TYPE_IMM64)
+  TYPE("i64i8imm",            TYPE_IMM64)
+  TYPE("GR64",                TYPE_R64)
+  TYPE("i8mem",               TYPE_M8)
+  TYPE("i8imm",               TYPE_IMM8)
+  TYPE("GR8",                 TYPE_R8)
+  TYPE("VR128",               TYPE_XMM128)
+  TYPE("f128mem",             TYPE_M128)
+  TYPE("FR64",                TYPE_XMM64)
+  TYPE("f64mem",              TYPE_M64FP)
+  TYPE("FR32",                TYPE_XMM32)
+  TYPE("f32mem",              TYPE_M32FP)
+  TYPE("RST",                 TYPE_ST)
+  TYPE("i128mem",             TYPE_M128)
+  TYPE("i64i32imm_pcrel",     TYPE_REL64)
+  TYPE("i32imm_pcrel",        TYPE_REL32)
+  TYPE("SSECC",               TYPE_IMM8)
+  TYPE("brtarget",            TYPE_RELv)
+  TYPE("brtarget8",           TYPE_REL8)
+  TYPE("f80mem",              TYPE_M80FP)
+  TYPE("lea32mem",            TYPE_LEA)
+  TYPE("lea64_32mem",         TYPE_LEA)
+  TYPE("lea64mem",            TYPE_LEA)
+  TYPE("VR64",                TYPE_MM64)
+  TYPE("i64imm",              TYPE_IMMv)
+  TYPE("opaque32mem",         TYPE_M1616)
+  TYPE("opaque48mem",         TYPE_M1632)
+  TYPE("opaque80mem",         TYPE_M1664)
+  TYPE("opaque512mem",        TYPE_M512)
+  TYPE("SEGMENT_REG",         TYPE_SEGMENTREG)
+  TYPE("DEBUG_REG",           TYPE_DEBUGREG)
+  TYPE("CONTROL_REG_32",      TYPE_CR32)
+  TYPE("CONTROL_REG_64",      TYPE_CR64)
+  TYPE("offset8",             TYPE_MOFFS8)
+  TYPE("offset16",            TYPE_MOFFS16)
+  TYPE("offset32",            TYPE_MOFFS32)
+  TYPE("offset64",            TYPE_MOFFS64)
+  errs() << "Unhandled type string " << s << "\n";
+  llvm_unreachable("Unhandled type string");
+}
+#undef TYPE
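The TYPE(...) if-chain above relies on ordering: context-sensitive rows (SSE, REX.W, no OpSize prefix) are tested before the generic table, so the same operand string can resolve to different types. A minimal Python sketch of that precedence, using a small illustrative subset of the table rather than the full mapping:

```python
def type_from_string(s, is_sse, has_rex_w, has_opsize):
    """Check context-sensitive entries first, as typeFromString does, so
    e.g. 'GR16' is TYPE_R16 for SSE or no-OpSize instructions but the
    generic TYPE_Rv otherwise."""
    if is_sse and s in ("GR16", "GR32", "GR64"):
        return "TYPE_R" + s[2:]          # SSE forces fixed operand sizes
    if has_rex_w and s == "GR32":
        return "TYPE_R32"                # REX.W keeps 32-bit registers 32-bit
    if not has_opsize and s == "GR16":
        return "TYPE_R16"                # no OpSize: 16-bit stays 16-bit
    generic = {"GR16": "TYPE_Rv", "GR32": "TYPE_Rv",
               "GR64": "TYPE_R64", "i8imm": "TYPE_IMM8"}
    return generic[s]
```
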
+
+#define ENCODING(str, encoding) if (s == str) return encoding;
+OperandEncoding RecognizableInstr::immediateEncodingFromString
+  (const std::string &s,
+   bool hasOpSizePrefix) {
+  if(!hasOpSizePrefix) {
+    // For instructions without an OpSize prefix, a declared 16-bit register or
+    // immediate encoding is special.
+    ENCODING("i16imm",        ENCODING_IW)
+  }
+  ENCODING("i32i8imm",        ENCODING_IB)
+  ENCODING("SSECC",           ENCODING_IB)
+  ENCODING("i16imm",          ENCODING_Iv)
+  ENCODING("i16i8imm",        ENCODING_IB)
+  ENCODING("i32imm",          ENCODING_Iv)
+  ENCODING("i64i32imm",       ENCODING_ID)
+  ENCODING("i64i8imm",        ENCODING_IB)
+  ENCODING("i8imm",           ENCODING_IB)
+  errs() << "Unhandled immediate encoding " << s << "\n";
+  llvm_unreachable("Unhandled immediate encoding");
+}
+
+OperandEncoding RecognizableInstr::rmRegisterEncodingFromString
+  (const std::string &s,
+   bool hasOpSizePrefix) {
+  ENCODING("GR16",            ENCODING_RM)
+  ENCODING("GR32",            ENCODING_RM)
+  ENCODING("GR64",            ENCODING_RM)
+  ENCODING("GR8",             ENCODING_RM)
+  ENCODING("VR128",           ENCODING_RM)
+  ENCODING("FR64",            ENCODING_RM)
+  ENCODING("FR32",            ENCODING_RM)
+  ENCODING("VR64",            ENCODING_RM)
+  errs() << "Unhandled R/M register encoding " << s << "\n";
+  llvm_unreachable("Unhandled R/M register encoding");
+}
+
+OperandEncoding RecognizableInstr::roRegisterEncodingFromString
+  (const std::string &s,
+   bool hasOpSizePrefix) {
+  ENCODING("GR16",            ENCODING_REG)
+  ENCODING("GR32",            ENCODING_REG)
+  ENCODING("GR64",            ENCODING_REG)
+  ENCODING("GR8",             ENCODING_REG)
+  ENCODING("VR128",           ENCODING_REG)
+  ENCODING("FR64",            ENCODING_REG)
+  ENCODING("FR32",            ENCODING_REG)
+  ENCODING("VR64",            ENCODING_REG)
+  ENCODING("SEGMENT_REG",     ENCODING_REG)
+  ENCODING("DEBUG_REG",       ENCODING_REG)
+  ENCODING("CONTROL_REG_32",  ENCODING_REG)
+  ENCODING("CONTROL_REG_64",  ENCODING_REG)
+  errs() << "Unhandled reg/opcode register encoding " << s << "\n";
+  llvm_unreachable("Unhandled reg/opcode register encoding");
+}
+
+OperandEncoding RecognizableInstr::memoryEncodingFromString
+  (const std::string &s,
+   bool hasOpSizePrefix) {
+  ENCODING("i16mem",          ENCODING_RM)
+  ENCODING("i32mem",          ENCODING_RM)
+  ENCODING("i64mem",          ENCODING_RM)
+  ENCODING("i8mem",           ENCODING_RM)
+  ENCODING("f128mem",         ENCODING_RM)
+  ENCODING("f64mem",          ENCODING_RM)
+  ENCODING("f32mem",          ENCODING_RM)
+  ENCODING("i128mem",         ENCODING_RM)
+  ENCODING("f80mem",          ENCODING_RM)
+  ENCODING("lea32mem",        ENCODING_RM)
+  ENCODING("lea64_32mem",     ENCODING_RM)
+  ENCODING("lea64mem",        ENCODING_RM)
+  ENCODING("opaque32mem",     ENCODING_RM)
+  ENCODING("opaque48mem",     ENCODING_RM)
+  ENCODING("opaque80mem",     ENCODING_RM)
+  ENCODING("opaque512mem",    ENCODING_RM)
+  errs() << "Unhandled memory encoding " << s << "\n";
+  llvm_unreachable("Unhandled memory encoding");
+}
+
+OperandEncoding RecognizableInstr::relocationEncodingFromString
+  (const std::string &s,
+   bool hasOpSizePrefix) {
+  if(!hasOpSizePrefix) {
+    // For instructions without an OpSize prefix, a declared 16-bit register or
+    // immediate encoding is special.
+    ENCODING("i16imm",        ENCODING_IW)
+  }
+  ENCODING("i16imm",          ENCODING_Iv)
+  ENCODING("i16i8imm",        ENCODING_IB)
+  ENCODING("i32imm",          ENCODING_Iv)
+  ENCODING("i32i8imm",        ENCODING_IB)
+  ENCODING("i64i32imm",       ENCODING_ID)
+  ENCODING("i64i8imm",        ENCODING_IB)
+  ENCODING("i8imm",           ENCODING_IB)
+  ENCODING("i64i32imm_pcrel", ENCODING_ID)
+  ENCODING("i32imm_pcrel",    ENCODING_ID)
+  ENCODING("brtarget",        ENCODING_Iv)
+  ENCODING("brtarget8",       ENCODING_IB)
+  ENCODING("i64imm",          ENCODING_IO)
+  ENCODING("offset8",         ENCODING_Ia)
+  ENCODING("offset16",        ENCODING_Ia)
+  ENCODING("offset32",        ENCODING_Ia)
+  ENCODING("offset64",        ENCODING_Ia)
+  errs() << "Unhandled relocation encoding " << s << "\n";
+  llvm_unreachable("Unhandled relocation encoding");
+}
+
+OperandEncoding RecognizableInstr::opcodeModifierEncodingFromString
+  (const std::string &s,
+   bool hasOpSizePrefix) {
+  ENCODING("RST",             ENCODING_I)
+  ENCODING("GR32",            ENCODING_Rv)
+  ENCODING("GR64",            ENCODING_RO)
+  ENCODING("GR16",            ENCODING_Rv)
+  ENCODING("GR8",             ENCODING_RB)
+  errs() << "Unhandled opcode modifier encoding " << s << "\n";
+  llvm_unreachable("Unhandled opcode modifier encoding");
+}
+#undef ENCODING
diff --git a/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.h b/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.h
new file mode 100644
index 0000000..84374b0
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.h
@@ -0,0 +1,237 @@
+//===- X86RecognizableInstr.h - Disassembler instruction spec ----*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file is part of the X86 Disassembler Emitter.
+// It contains the interface of a single recognizable instruction.
+// Documentation for the disassembler emitter in general can be found in
+//  X86DisassemblerEmitter.h.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef X86RECOGNIZABLEINSTR_H
+#define X86RECOGNIZABLEINSTR_H
+
+#include "X86DisassemblerTables.h"
+
+#include "CodeGenTarget.h"
+#include "Record.h"
+
+#include "llvm/System/DataTypes.h"
+#include "llvm/ADT/SmallVector.h"
+
+namespace llvm {
+
+namespace X86Disassembler {
+
+/// RecognizableInstr - Encapsulates all information required to decode a single
+///   instruction, as extracted from the LLVM instruction tables.  Has methods
+///   to interpret the information available in the LLVM tables, and to emit the
+///   instruction into DisassemblerTables.
+class RecognizableInstr {
+private:
+  /// The opcode of the instruction, as used in an MCInst
+  InstrUID UID;
+  /// The record from the .td files corresponding to this instruction
+  const Record* Rec;
+  /// The prefix field from the record
+  uint8_t Prefix;
+  /// The opcode field from the record; this is the opcode used in the Intel
+  /// encoding and therefore distinct from the UID
+  uint8_t Opcode;
+  /// The form field from the record
+  uint8_t Form;
+  /// The segment override field from the record
+  uint8_t SegOvr;
+  /// The hasOpSizePrefix field from the record
+  bool HasOpSizePrefix;
+  /// The hasREX_WPrefix field from the record
+  bool HasREX_WPrefix;
+  /// The hasLockPrefix field from the record
+  bool HasLockPrefix;
+  /// The isCodeGenOnly field from the record
+  bool IsCodeGenOnly;
+  
+  /// The instruction name as listed in the tables
+  std::string Name;
+  /// The AT&T AsmString for the instruction
+  std::string AsmString;
+  
+  /// Indicates whether the instruction is SSE
+  bool IsSSE;
+  /// Indicates whether the instruction has FR operands - MOVs with FR operands
+  /// are typically ignored
+  bool HasFROperands;
+  /// Indicates whether the instruction should be emitted into the decode
+  /// tables; regardless, it will be emitted into the instruction info table
+  bool ShouldBeEmitted;
+  
+  /// The operands of the instruction, as listed in the CodeGenInstruction.
+  /// They are not one-to-one with operands listed in the MCInst; for example,
+  /// memory operands expand to 5 operands in the MCInst
+  const std::vector<CodeGenInstruction::OperandInfo>* Operands;
+  /// The description of the instruction that is emitted into the instruction
+  /// info table
+  InstructionSpecifier* Spec;
+
+  /// insnContext - Returns the primary context in which the instruction is
+  ///   valid.
+  ///
+  /// @return - The context in which the instruction is valid.
+  InstructionContext insnContext() const;
+  
+  enum filter_ret {
+    FILTER_STRONG,    // instruction has no place in the instruction tables
+    FILTER_WEAK,      // instruction may conflict, and should be eliminated if
+                      // it does
+    FILTER_NORMAL     // instruction should have high priority and generate an
+                      // error if it conflicts with any other FILTER_NORMAL
+                      // instruction
+  };
+  
+  /// filter - Determines whether the instruction should be decodable.  Some 
+  ///   instructions are pure intrinsics and use unencodable operands; many
+  ///   synthetic instructions are duplicates of other instructions; other
+  ///   instructions only differ in the logical way in which they are used, and
+  ///   have the same decoding.  Because these would cause decode conflicts,
+  ///   they must be filtered out.
+  ///
+  /// @return - The degree of filtering to be applied (see filter_ret).
+  filter_ret filter() const;
+  
+  /// typeFromString - Translates an operand type from the string provided in
+  ///   the LLVM tables to an OperandType for use in the operand specifier.
+  ///
+  /// @param s              - The string, as extracted by calling Rec->getName()
+  ///                         on a CodeGenInstruction::OperandInfo.
+  /// @param isSSE          - Indicates whether the instruction is an SSE 
+  ///                         instruction.  For SSE instructions, immediates are 
+  ///                         fixed-size rather than being affected by the
+  ///                         mandatory OpSize prefix.
+  /// @param hasREX_WPrefix - Indicates whether the instruction has a REX.W
+  ///                         prefix.  If it does, 32-bit register operands stay
+  ///                         32-bit regardless of the operand size.
+  /// @param hasOpSizePrefix- Indicates whether the instruction has an OpSize
+  ///                         prefix.  If it does not, then 16-bit register
+  ///                         operands stay 16-bit.
+  /// @return               - The operand's type.
+  static OperandType typeFromString(const std::string& s, 
+                                    bool isSSE,
+                                    bool hasREX_WPrefix,
+                                    bool hasOpSizePrefix);
+  
+  /// immediateEncodingFromString - Translates an immediate encoding from the
+  ///   string provided in the LLVM tables to an OperandEncoding for use in
+  ///   the operand specifier.
+  ///
+  /// @param s                - See typeFromString().
+  /// @param hasOpSizePrefix  - Indicates whether the instruction has an OpSize
+  ///                           prefix.  If it does not, then 16-bit immediate
+  ///                           operands stay 16-bit.
+  /// @return                 - The operand's encoding.
+  static OperandEncoding immediateEncodingFromString(const std::string &s,
+                                                     bool hasOpSizePrefix);
+  
+  /// rmRegisterEncodingFromString - Like immediateEncodingFromString, but
+  ///   handles operands that are in the R/M field of the ModR/M byte.
+  static OperandEncoding rmRegisterEncodingFromString(const std::string &s,
+                                                      bool hasOpSizePrefix);
+  
+  /// roRegisterEncodingFromString - Like immediateEncodingFromString, but
+  ///   handles operands that are in the REG field of the ModR/M byte.
+  static OperandEncoding roRegisterEncodingFromString(const std::string &s,
+                                                      bool hasOpSizePrefix);
+  static OperandEncoding memoryEncodingFromString(const std::string &s,
+                                                  bool hasOpSizePrefix);
+  static OperandEncoding relocationEncodingFromString(const std::string &s,
+                                                      bool hasOpSizePrefix);
+  static OperandEncoding opcodeModifierEncodingFromString(const std::string &s,
+                                                          bool hasOpSizePrefix);
+  
+  /// handleOperand - Converts a single operand from the LLVM table format to
+  ///   the emitted table format, handling any duplicate operands it encounters
+  ///   and then one non-duplicate.
+  ///
+  /// @param optional             - Determines whether to assert that the
+  ///                               operand exists.
+  /// @param operandIndex         - The index into the generated operand table.
+  ///                               Incremented by this function one or more
+  ///                               times to reflect possible duplicate 
+  ///                               operands.
+  /// @param physicalOperandIndex - The index of the current operand into the
+  ///                               set of non-duplicate ('physical') operands.
+  ///                               Incremented by this function once.
+  /// @param numPhysicalOperands  - The number of non-duplicate operands in the
+  ///                               instruction.
+  /// @param operandMapping       - The operand mapping, which has an entry for
+  ///                               each operand that indicates whether it is a
+  ///                               duplicate, and of what.
+  void handleOperand(bool optional,
+                     unsigned &operandIndex,
+                     unsigned &physicalOperandIndex,
+                     unsigned &numPhysicalOperands,
+                     unsigned *operandMapping,
+                     OperandEncoding (*encodingFromString)
+                       (const std::string&,
+                        bool hasOpSizePrefix));
+  
+  /// shouldBeEmitted - Returns the shouldBeEmitted field.  Although filter()
+  ///   filters out many instructions, at various points in decoding we
+  ///   determine that the instruction should not actually be decodable.  In
+  ///   particular, MMX MOV instructions aren't emitted, but they're only
+  ///   identified during operand parsing.
+  ///
+  /// @return - true if at this point we believe the instruction should be
+  ///   emitted; false if not.  This will return false if filter() returns false
+  ///   once emitInstructionSpecifier() has been called.
+  bool shouldBeEmitted() const {
+    return ShouldBeEmitted;
+  }
+  
+  /// emitInstructionSpecifier - Loads the instruction specifier for the current
+  ///   instruction into a DisassemblerTables.
+  ///
+  /// @arg tables - The DisassemblerTables to populate with the specifier for
+  ///               the current instruction.
+  void emitInstructionSpecifier(DisassemblerTables &tables);
+  
+  /// emitDecodePath - Populates the proper fields in the decode tables
+  ///   corresponding to the decode paths for this instruction.
+  ///
+  /// @arg tables - The DisassemblerTables to populate with the decode
+  ///               information for the current instruction.
+  void emitDecodePath(DisassemblerTables &tables) const;
+
+  /// Constructor - Initializes a RecognizableInstr with the appropriate fields
+  ///   from a CodeGenInstruction.
+  ///
+  /// @arg tables - The DisassemblerTables that the specifier will be added to.
+  /// @arg insn   - The CodeGenInstruction to extract information from.
+  /// @arg uid    - The unique ID of the current instruction.
+  RecognizableInstr(DisassemblerTables &tables,
+                    const CodeGenInstruction &insn,
+                    InstrUID uid);
+public:
+  /// processInstr - Accepts a CodeGenInstruction and loads decode information
+  ///   for it into a DisassemblerTables if appropriate.
+  ///
+  /// @arg tables - The DisassemblerTables to be populated with decode
+  ///               information.
+  /// @arg insn   - The CodeGenInstruction to be used as a source for this
+  ///               information.
+  /// @arg uid    - The unique ID of the instruction.
+  static void processInstr(DisassemblerTables &tables,
+                           const CodeGenInstruction &insn,
+                           InstrUID uid);
+};
+  
+} // namespace X86Disassembler
+
+} // namespace llvm
+
+#endif
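The operandMapping contract documented for handleOperand() above (each entry says whether an operand is a duplicate, and of what) can be illustrated with a small Python sketch. This is one plausible reading of that contract, with hypothetical names, not the actual emitter logic:

```python
def handle_operands(operand_encodings, operand_mapping):
    """operand_mapping[i] gives the index of the physical operand that slot
    i duplicates, or i itself when it is not a duplicate.  Duplicates reuse
    the encoding already chosen for their source operand."""
    chosen = {}
    for i, phys in enumerate(operand_mapping):
        if phys == i:
            chosen[i] = operand_encodings[i]  # non-duplicate: pick its encoding
        else:
            chosen[i] = chosen[phys]          # duplicate: copy the earlier choice
    return chosen
```
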
diff --git a/libclamav/c++/llvm/utils/buildit/build_llvm b/libclamav/c++/llvm/utils/buildit/build_llvm
index 4392b27..25f6554 100755
--- a/libclamav/c++/llvm/utils/buildit/build_llvm
+++ b/libclamav/c++/llvm/utils/buildit/build_llvm
@@ -243,7 +243,11 @@ if ! test $? == 0 ; then
 fi 
 
 # Install Version.h
-RC_ProjectSourceSubversion=`printf "%d" $LLVM_SUBMIT_SUBVERSION`
+LLVM_MINOR_VERSION=`echo $LLVM_SUBMIT_SUBVERSION | sed -e 's,0*\([1-9][0-9]*\),\1,'`
+if [ "x$LLVM_MINOR_VERSION" = "x" ]; then
+    LLVM_MINOR_VERSION=0
+fi
+RC_ProjectSourceSubversion=`printf "%d" $LLVM_MINOR_VERSION`
 echo "#define LLVM_VERSION ${RC_ProjectSourceVersion}" > $DEST_DIR$DEST_ROOT/include/llvm/Version.h
 echo "#define LLVM_MINOR_VERSION ${RC_ProjectSourceSubversion}" >> $DEST_DIR$DEST_ROOT/include/llvm/Version.h
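The build_llvm change above strips leading zeros before handing the value to `printf "%d"`, presumably because a value such as "08" would otherwise be read as an invalid octal literal. A Python rendition of the sed expression and its empty-string fallback (the function name is illustrative):

```python
import re

def minor_version(submit_subversion):
    """Mimic sed 's,0*\\([1-9][0-9]*\\),\\1,': drop leading zeros ahead of a
    nonzero digit, then default to 0 when nothing usable remains."""
    stripped = re.sub(r'^0*([1-9][0-9]*)', r'\1', submit_subversion)
    return int(stripped) if stripped else 0
```
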
 
diff --git a/libclamav/c++/llvm/utils/emacs/llvm-mode.el b/libclamav/c++/llvm/utils/emacs/llvm-mode.el
index 55c56da..b1af853 100644
--- a/libclamav/c++/llvm/utils/emacs/llvm-mode.el
+++ b/libclamav/c++/llvm/utils/emacs/llvm-mode.el
@@ -122,7 +122,7 @@
 
 ;; Associate .ll files with llvm-mode
 (setq auto-mode-alist
-   (append '(("\\.ll$" . llvm-mode) ("\\.llx$" . llvm-mode)) auto-mode-alist))
+   (append '(("\\.ll$" . llvm-mode)) auto-mode-alist))
 
 (provide 'llvm-mode)
 ;; end of llvm-mode.el
diff --git a/libclamav/c++/llvm/utils/lit/TestFormats.py b/libclamav/c++/llvm/utils/lit/TestFormats.py
deleted file mode 100644
index 7305c79..0000000
--- a/libclamav/c++/llvm/utils/lit/TestFormats.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import os
-
-import Test
-import TestRunner
-import Util
-
-class GoogleTest(object):
-    def __init__(self, test_sub_dir, test_suffix):
-        self.test_sub_dir = str(test_sub_dir)
-        self.test_suffix = str(test_suffix)
-
-    def getGTestTests(self, path):
-        """getGTestTests(path) - [name]
-        
-        Return the tests available in gtest executable."""
-
-        lines = Util.capture([path, '--gtest_list_tests']).split('\n')
-        nested_tests = []
-        for ln in lines:
-            if not ln.strip():
-                continue
-
-            prefix = ''
-            index = 0
-            while ln[index*2:index*2+2] == '  ':
-                index += 1
-            while len(nested_tests) > index:
-                nested_tests.pop()
-            
-            ln = ln[index*2:]
-            if ln.endswith('.'):
-                nested_tests.append(ln)
-            else:
-                yield ''.join(nested_tests) + ln
-
-    def getTestsInDirectory(self, testSuite, path_in_suite,
-                            litConfig, localConfig):
-        source_path = testSuite.getSourcePath(path_in_suite)
-        for filename in os.listdir(source_path):
-            # Check for the one subdirectory (build directory) tests will be in.
-            if filename != self.test_sub_dir:
-                continue
-
-            filepath = os.path.join(source_path, filename)
-            for subfilename in os.listdir(filepath):
-                if subfilename.endswith(self.test_suffix):
-                    execpath = os.path.join(filepath, subfilename)
-
-                    # Discover the tests in this executable.
-                    for name in self.getGTestTests(execpath):
-                        testPath = path_in_suite + (filename, subfilename, name)
-                        yield Test.Test(testSuite, testPath, localConfig)
-
-    def execute(self, test, litConfig):
-        testPath,testName = os.path.split(test.getSourcePath())
-        while not os.path.exists(testPath):
-            # Handle GTest parametrized and typed tests, whose name includes
-            # some '/'s.
-            testPath, namePrefix = os.path.split(testPath)
-            testName = os.path.join(namePrefix, testName)
-
-        cmd = [testPath, '--gtest_filter=' + testName]
-        out, err, exitCode = TestRunner.executeCommand(cmd)
-            
-        if not exitCode:
-            return Test.PASS,''
-
-        return Test.FAIL, out + err
-
-###
-
-class FileBasedTest(object):
-    def getTestsInDirectory(self, testSuite, path_in_suite,
-                            litConfig, localConfig):
-        source_path = testSuite.getSourcePath(path_in_suite)
-        for filename in os.listdir(source_path):
-            filepath = os.path.join(source_path, filename)
-            if not os.path.isdir(filepath):
-                base,ext = os.path.splitext(filename)
-                if ext in localConfig.suffixes:
-                    yield Test.Test(testSuite, path_in_suite + (filename,),
-                                    localConfig)
-
-class ShTest(FileBasedTest):
-    def __init__(self, execute_external = False):
-        self.execute_external = execute_external
-
-    def execute(self, test, litConfig):
-        return TestRunner.executeShTest(test, litConfig,
-                                        self.execute_external)
-
-class TclTest(FileBasedTest):
-    def execute(self, test, litConfig):
-        return TestRunner.executeTclTest(test, litConfig)
-
-###
-
-import re
-import tempfile
-
-class OneCommandPerFileTest:
-    # FIXME: Refactor into generic test for running some command on a directory
-    # of inputs.
-
-    def __init__(self, command, dir, recursive=False,
-                 pattern=".*", useTempInput=False):
-        if isinstance(command, str):
-            self.command = [command]
-        else:
-            self.command = list(command)
-        self.dir = str(dir)
-        self.recursive = bool(recursive)
-        self.pattern = re.compile(pattern)
-        self.useTempInput = useTempInput
-
-    def getTestsInDirectory(self, testSuite, path_in_suite,
-                            litConfig, localConfig):
-        for dirname,subdirs,filenames in os.walk(self.dir):
-            if not self.recursive:
-                subdirs[:] = []
-
-            subdirs[:] = [d for d in subdirs
-                          if (d != '.svn' and
-                              d not in localConfig.excludes)]
-
-            for filename in filenames:
-                if (not self.pattern.match(filename) or
-                    filename in localConfig.excludes):
-                    continue
-
-                path = os.path.join(dirname,filename)
-                suffix = path[len(self.dir):]
-                if suffix.startswith(os.sep):
-                    suffix = suffix[1:]
-                test = Test.Test(testSuite,
-                                 path_in_suite + tuple(suffix.split(os.sep)),
-                                 localConfig)
-                # FIXME: Hack?
-                test.source_path = path
-                yield test
-
-    def createTempInput(self, tmp, test):
-        abstract
-
-    def execute(self, test, litConfig):
-        if test.config.unsupported:
-            return (Test.UNSUPPORTED, 'Test is unsupported')
-
-        cmd = list(self.command)
-
-        # If using temp input, create a temporary file and hand it to the
-        # subclass.
-        if self.useTempInput:
-            tmp = tempfile.NamedTemporaryFile(suffix='.cpp')
-            self.createTempInput(tmp, test)
-            tmp.flush()
-            cmd.append(tmp.name)
-        else:
-            cmd.append(test.source_path)
-
-        out, err, exitCode = TestRunner.executeCommand(cmd)
-
-        diags = out + err
-        if not exitCode and not diags.strip():
-            return Test.PASS,''
-
-        # Try to include some useful information.
-        report = """Command: %s\n""" % ' '.join(["'%s'" % a
-                                                 for a in cmd])
-        if self.useTempInput:
-            report += """Temporary File: %s\n""" % tmp.name
-            report += "--\n%s--\n""" % open(tmp.name).read()
-        report += """Output:\n--\n%s--""" % diags
-
-        return Test.FAIL, report
-
-class SyntaxCheckTest(OneCommandPerFileTest):
-    def __init__(self, compiler, dir, extra_cxx_args=[], *args, **kwargs):
-        cmd = [compiler, '-x', 'c++', '-fsyntax-only'] + extra_cxx_args
-        OneCommandPerFileTest.__init__(self, cmd, dir,
-                                       useTempInput=1, *args, **kwargs)
-
-    def createTempInput(self, tmp, test):
-        print >>tmp, '#include "%s"' % test.source_path
diff --git a/libclamav/c++/llvm/utils/lit/lit.py b/libclamav/c++/llvm/utils/lit/lit.py
index 293976f..851063b 100755
--- a/libclamav/c++/llvm/utils/lit/lit.py
+++ b/libclamav/c++/llvm/utils/lit/lit.py
@@ -1,576 +1,5 @@
 #!/usr/bin/env python
 
-"""
-lit - LLVM Integrated Tester.
-
-See lit.pod for more information.
-"""
-
-import math, os, platform, random, re, sys, time, threading, traceback
-
-import ProgressBar
-import TestRunner
-import Util
-
-from TestingConfig import TestingConfig
-import LitConfig
-import Test
-
-# Configuration files to look for when discovering test suites. These can be
-# overridden with --config-prefix.
-#
-# FIXME: Rename to 'config.lit', 'site.lit', and 'local.lit' ?
-gConfigName = 'lit.cfg'
-gSiteConfigName = 'lit.site.cfg'
-
-kLocalConfigName = 'lit.local.cfg'
-
-class TestingProgressDisplay:
-    def __init__(self, opts, numTests, progressBar=None):
-        self.opts = opts
-        self.numTests = numTests
-        self.current = None
-        self.lock = threading.Lock()
-        self.progressBar = progressBar
-        self.completed = 0
-
-    def update(self, test):
-        # Avoid locking overhead in quiet mode
-        if self.opts.quiet and not test.result.isFailure:
-            self.completed += 1
-            return
-
-        # Output lock.
-        self.lock.acquire()
-        try:
-            self.handleUpdate(test)
-        finally:
-            self.lock.release()
-
-    def finish(self):
-        if self.progressBar:
-            self.progressBar.clear()
-        elif self.opts.quiet:
-            pass
-        elif self.opts.succinct:
-            sys.stdout.write('\n')
-
-    def handleUpdate(self, test):
-        self.completed += 1
-        if self.progressBar:
-            self.progressBar.update(float(self.completed)/self.numTests,
-                                    test.getFullName())
-
-        if self.opts.succinct and not test.result.isFailure:
-            return
-
-        if self.progressBar:
-            self.progressBar.clear()
-
-        print '%s: %s (%d of %d)' % (test.result.name, test.getFullName(),
-                                     self.completed, self.numTests)
-
-        if test.result.isFailure and self.opts.showOutput:
-            print "%s TEST '%s' FAILED %s" % ('*'*20, test.getFullName(),
-                                              '*'*20)
-            print test.output
-            print "*" * 20
-
-        sys.stdout.flush()
-
-class TestProvider:
-    def __init__(self, tests, maxTime):
-        self.maxTime = maxTime
-        self.iter = iter(tests)
-        self.lock = threading.Lock()
-        self.startTime = time.time()
-
-    def get(self):
-        # Check if we have run out of time.
-        if self.maxTime is not None:
-            if time.time() - self.startTime > self.maxTime:
-                return None
-
-        # Otherwise take the next test.
-        self.lock.acquire()
-        try:
-            item = self.iter.next()
-        except StopIteration:
-            item = None
-        self.lock.release()
-        return item
-
-class Tester(threading.Thread):
-    def __init__(self, litConfig, provider, display):
-        threading.Thread.__init__(self)
-        self.litConfig = litConfig
-        self.provider = provider
-        self.display = display
-
-    def run(self):
-        while 1:
-            item = self.provider.get()
-            if item is None:
-                break
-            self.runTest(item)
-
-    def runTest(self, test):
-        result = None
-        startTime = time.time()
-        try:
-            result, output = test.config.test_format.execute(test,
-                                                             self.litConfig)
-        except KeyboardInterrupt:
-            # This is a sad hack. Unfortunately subprocess goes
-            # bonkers with ctrl-c and we start forking merrily.
-            print '\nCtrl-C detected, goodbye.'
-            os.kill(0,9)
-        except:
-            if self.litConfig.debug:
-                raise
-            result = Test.UNRESOLVED
-            output = 'Exception during script execution:\n'
-            output += traceback.format_exc()
-            output += '\n'
-        elapsed = time.time() - startTime
-
-        test.setResult(result, output, elapsed)
-        self.display.update(test)
-
-def dirContainsTestSuite(path):
-    cfgpath = os.path.join(path, gSiteConfigName)
-    if os.path.exists(cfgpath):
-        return cfgpath
-    cfgpath = os.path.join(path, gConfigName)
-    if os.path.exists(cfgpath):
-        return cfgpath
-
-def getTestSuite(item, litConfig, cache):
-    """getTestSuite(item, litConfig, cache) -> (suite, relative_path)
-
-    Find the test suite containing @arg item.
-
-    @retval (None, ...) - Indicates no test suite contains @arg item.
-    @retval (suite, relative_path) - The suite that @arg item is in, and its
-    relative path inside that suite.
-    """
-    def search1(path):
-        # Check for a site config or a lit config.
-        cfgpath = dirContainsTestSuite(path)
-
-        # If we didn't find a config file, keep looking.
-        if not cfgpath:
-            parent,base = os.path.split(path)
-            if parent == path:
-                return (None, ())
-
-            ts, relative = search(parent)
-            return (ts, relative + (base,))
-
-        # We found a config file, load it.
-        if litConfig.debug:
-            litConfig.note('loading suite config %r' % cfgpath)
-
-        cfg = TestingConfig.frompath(cfgpath, None, litConfig, mustExist = True)
-        source_root = os.path.realpath(cfg.test_source_root or path)
-        exec_root = os.path.realpath(cfg.test_exec_root or path)
-        return Test.TestSuite(cfg.name, source_root, exec_root, cfg), ()
-
-    def search(path):
-        # Check for an already instantiated test suite.
-        res = cache.get(path)
-        if res is None:
-            cache[path] = res = search1(path)
-        return res
-
-    # Canonicalize the path.
-    item = os.path.realpath(item)
-
-    # Skip files and virtual components.
-    components = []
-    while not os.path.isdir(item):
-        parent,base = os.path.split(item)
-        if parent == item:
-            return (None, ())
-        components.append(base)
-        item = parent
-    components.reverse()
-
-    ts, relative = search(item)
-    return ts, tuple(relative + tuple(components))
-
-def getLocalConfig(ts, path_in_suite, litConfig, cache):
-    def search1(path_in_suite):
-        # Get the parent config.
-        if not path_in_suite:
-            parent = ts.config
-        else:
-            parent = search(path_in_suite[:-1])
-
-        # Load the local configuration.
-        source_path = ts.getSourcePath(path_in_suite)
-        cfgpath = os.path.join(source_path, kLocalConfigName)
-        if litConfig.debug:
-            litConfig.note('loading local config %r' % cfgpath)
-        return TestingConfig.frompath(cfgpath, parent, litConfig,
-                                    mustExist = False,
-                                    config = parent.clone(cfgpath))
-
-    def search(path_in_suite):
-        key = (ts, path_in_suite)
-        res = cache.get(key)
-        if res is None:
-            cache[key] = res = search1(path_in_suite)
-        return res
-
-    return search(path_in_suite)
-
-def getTests(path, litConfig, testSuiteCache, localConfigCache):
-    # Find the test suite for this input and its relative path.
-    ts,path_in_suite = getTestSuite(path, litConfig, testSuiteCache)
-    if ts is None:
-        litConfig.warning('unable to find test suite for %r' % path)
-        return (),()
-
-    if litConfig.debug:
-        litConfig.note('resolved input %r to %r::%r' % (path, ts.name,
-                                                        path_in_suite))
-
-    return ts, getTestsInSuite(ts, path_in_suite, litConfig,
-                               testSuiteCache, localConfigCache)
-
-def getTestsInSuite(ts, path_in_suite, litConfig,
-                    testSuiteCache, localConfigCache):
-    # Check that the source path exists (errors here are reported by the
-    # caller).
-    source_path = ts.getSourcePath(path_in_suite)
-    if not os.path.exists(source_path):
-        return
-
-    # Check if the user named a test directly.
-    if not os.path.isdir(source_path):
-        lc = getLocalConfig(ts, path_in_suite[:-1], litConfig, localConfigCache)
-        yield Test.Test(ts, path_in_suite, lc)
-        return
-
-    # Otherwise we have a directory to search for tests, start by getting the
-    # local configuration.
-    lc = getLocalConfig(ts, path_in_suite, litConfig, localConfigCache)
-
-    # Search for tests.
-    for res in lc.test_format.getTestsInDirectory(ts, path_in_suite,
-                                                  litConfig, lc):
-        yield res
-
-    # Search subdirectories.
-    for filename in os.listdir(source_path):
-        # FIXME: This doesn't belong here?
-        if filename in ('Output', '.svn') or filename in lc.excludes:
-            continue
-
-        # Ignore non-directories.
-        file_sourcepath = os.path.join(source_path, filename)
-        if not os.path.isdir(file_sourcepath):
-            continue
-
-        # Check for nested test suites, first in the execpath in case there is a
-        # site configuration and then in the source path.
-        file_execpath = ts.getExecPath(path_in_suite + (filename,))
-        if dirContainsTestSuite(file_execpath):
-            sub_ts, subiter = getTests(file_execpath, litConfig,
-                                       testSuiteCache, localConfigCache)
-        elif dirContainsTestSuite(file_sourcepath):
-            sub_ts, subiter = getTests(file_sourcepath, litConfig,
-                                       testSuiteCache, localConfigCache)
-        else:
-            # Otherwise, continue loading from inside this test suite.
-            subiter = getTestsInSuite(ts, path_in_suite + (filename,),
-                                      litConfig, testSuiteCache,
-                                      localConfigCache)
-            sub_ts = None
-
-        N = 0
-        for res in subiter:
-            N += 1
-            yield res
-        if sub_ts and not N:
-            litConfig.warning('test suite %r contained no tests' % sub_ts.name)
-
-def runTests(numThreads, litConfig, provider, display):
-    # If only using one testing thread, don't use threads at all; this lets us
-    # profile, among other things.
-    if numThreads == 1:
-        t = Tester(litConfig, provider, display)
-        t.run()
-        return
-
-    # Otherwise spin up the testing threads and wait for them to finish.
-    testers = [Tester(litConfig, provider, display)
-               for i in range(numThreads)]
-    for t in testers:
-        t.start()
-    try:
-        for t in testers:
-            t.join()
-    except KeyboardInterrupt:
-        sys.exit(2)
-
-def main():
-    global options
-    from optparse import OptionParser, OptionGroup
-    parser = OptionParser("usage: %prog [options] {file-or-path}")
-
-    parser.add_option("-j", "--threads", dest="numThreads", metavar="N",
-                      help="Number of testing threads",
-                      type=int, action="store", default=None)
-    parser.add_option("", "--config-prefix", dest="configPrefix",
-                      metavar="NAME", help="Prefix for 'lit' config files",
-                      action="store", default=None)
-    parser.add_option("", "--param", dest="userParameters",
-                      metavar="NAME=VAL",
-                      help="Add 'NAME' = 'VAL' to the user defined parameters",
-                      type=str, action="append", default=[])
-
-    group = OptionGroup(parser, "Output Format")
-    # FIXME: I find these names very confusing, although I like the
-    # functionality.
-    group.add_option("-q", "--quiet", dest="quiet",
-                     help="Suppress no error output",
-                     action="store_true", default=False)
-    group.add_option("-s", "--succinct", dest="succinct",
-                     help="Reduce amount of output",
-                     action="store_true", default=False)
-    group.add_option("-v", "--verbose", dest="showOutput",
-                     help="Show all test output",
-                     action="store_true", default=False)
-    group.add_option("", "--no-progress-bar", dest="useProgressBar",
-                     help="Do not use curses based progress bar",
-                     action="store_false", default=True)
-    parser.add_option_group(group)
-
-    group = OptionGroup(parser, "Test Execution")
-    group.add_option("", "--path", dest="path",
-                     help="Additional paths to add to testing environment",
-                     action="append", type=str, default=[])
-    group.add_option("", "--vg", dest="useValgrind",
-                     help="Run tests under valgrind",
-                     action="store_true", default=False)
-    group.add_option("", "--vg-arg", dest="valgrindArgs", metavar="ARG",
-                     help="Specify an extra argument for valgrind",
-                     type=str, action="append", default=[])
-    group.add_option("", "--time-tests", dest="timeTests",
-                     help="Track elapsed wall time for each test",
-                     action="store_true", default=False)
-    group.add_option("", "--no-execute", dest="noExecute",
-                     help="Don't execute any tests (assume PASS)",
-                     action="store_true", default=False)
-    parser.add_option_group(group)
-
-    group = OptionGroup(parser, "Test Selection")
-    group.add_option("", "--max-tests", dest="maxTests", metavar="N",
-                     help="Maximum number of tests to run",
-                     action="store", type=int, default=None)
-    group.add_option("", "--max-time", dest="maxTime", metavar="N",
-                     help="Maximum time to spend testing (in seconds)",
-                     action="store", type=float, default=None)
-    group.add_option("", "--shuffle", dest="shuffle",
-                     help="Run tests in random order",
-                     action="store_true", default=False)
-    parser.add_option_group(group)
-
-    group = OptionGroup(parser, "Debug and Experimental Options")
-    group.add_option("", "--debug", dest="debug",
-                      help="Enable debugging (for 'lit' development)",
-                      action="store_true", default=False)
-    group.add_option("", "--show-suites", dest="showSuites",
-                      help="Show discovered test suites",
-                      action="store_true", default=False)
-    group.add_option("", "--no-tcl-as-sh", dest="useTclAsSh",
-                      help="Don't run Tcl scripts using 'sh'",
-                      action="store_false", default=True)
-    group.add_option("", "--repeat", dest="repeatTests", metavar="N",
-                      help="Repeat tests N times (for timing)",
-                      action="store", default=None, type=int)
-    parser.add_option_group(group)
-
-    (opts, args) = parser.parse_args()
-
-    if not args:
-        parser.error('No inputs specified')
-
-    if opts.configPrefix is not None:
-        global gConfigName, gSiteConfigName
-        gConfigName = '%s.cfg' % opts.configPrefix
-        gSiteConfigName = '%s.site.cfg' % opts.configPrefix
-
-    if opts.numThreads is None:
-        opts.numThreads = Util.detectCPUs()
-
-    inputs = args
-
-    # Create the user defined parameters.
-    userParams = {}
-    for entry in opts.userParameters:
-        if '=' not in entry:
-            name,val = entry,''
-        else:
-            name,val = entry.split('=', 1)
-        userParams[name] = val
-
-    # Create the global config object.
-    litConfig = LitConfig.LitConfig(progname = os.path.basename(sys.argv[0]),
-                                    path = opts.path,
-                                    quiet = opts.quiet,
-                                    useValgrind = opts.useValgrind,
-                                    valgrindArgs = opts.valgrindArgs,
-                                    useTclAsSh = opts.useTclAsSh,
-                                    noExecute = opts.noExecute,
-                                    debug = opts.debug,
-                                    isWindows = (platform.system()=='Windows'),
-                                    params = userParams)
-
-    # Load the tests from the inputs.
-    tests = []
-    testSuiteCache = {}
-    localConfigCache = {}
-    for input in inputs:
-        prev = len(tests)
-        tests.extend(getTests(input, litConfig,
-                              testSuiteCache, localConfigCache)[1])
-        if prev == len(tests):
-            litConfig.warning('input %r contained no tests' % input)
-
-    # If there were any errors during test discovery, exit now.
-    if litConfig.numErrors:
-        print >>sys.stderr, '%d errors, exiting.' % litConfig.numErrors
-        sys.exit(2)
-
-    if opts.showSuites:
-        suitesAndTests = dict([(ts,[])
-                               for ts,_ in testSuiteCache.values()
-                               if ts])
-        for t in tests:
-            suitesAndTests[t.suite].append(t)
-
-        print '-- Test Suites --'
-        suitesAndTests = suitesAndTests.items()
-        suitesAndTests.sort(key = lambda (ts,_): ts.name)
-        for ts,ts_tests in suitesAndTests:
-            print '  %s - %d tests' %(ts.name, len(ts_tests))
-            print '    Source Root: %s' % ts.source_root
-            print '    Exec Root  : %s' % ts.exec_root
-
-    # Select and order the tests.
-    numTotalTests = len(tests)
-    if opts.shuffle:
-        random.shuffle(tests)
-    else:
-        tests.sort(key = lambda t: t.getFullName())
-    if opts.maxTests is not None:
-        tests = tests[:opts.maxTests]
-
-    extra = ''
-    if len(tests) != numTotalTests:
-        extra = ' of %d' % numTotalTests
-    header = '-- Testing: %d%s tests, %d threads --'%(len(tests),extra,
-                                                      opts.numThreads)
-
-    if opts.repeatTests:
-        tests = [t.copyWithIndex(i)
-                 for t in tests
-                 for i in range(opts.repeatTests)]
-
-    progressBar = None
-    if not opts.quiet:
-        if opts.succinct and opts.useProgressBar:
-            try:
-                tc = ProgressBar.TerminalController()
-                progressBar = ProgressBar.ProgressBar(tc, header)
-            except ValueError:
-                print header
-                progressBar = ProgressBar.SimpleProgressBar('Testing: ')
-        else:
-            print header
-
-    # Don't create more threads than tests.
-    opts.numThreads = min(len(tests), opts.numThreads)
-
-    startTime = time.time()
-    display = TestingProgressDisplay(opts, len(tests), progressBar)
-    provider = TestProvider(tests, opts.maxTime)
-    runTests(opts.numThreads, litConfig, provider, display)
-    display.finish()
-
-    if not opts.quiet:
-        print 'Testing Time: %.2fs'%(time.time() - startTime)
-
-    # Update results for any tests which weren't run.
-    for t in tests:
-        if t.result is None:
-            t.setResult(Test.UNRESOLVED, '', 0.0)
-
-    # List test results organized by kind.
-    hasFailures = False
-    byCode = {}
-    for t in tests:
-        if t.result not in byCode:
-            byCode[t.result] = []
-        byCode[t.result].append(t)
-        if t.result.isFailure:
-            hasFailures = True
-
-    # FIXME: Show unresolved and (optionally) unsupported tests.
-    for title,code in (('Unexpected Passing Tests', Test.XPASS),
-                       ('Failing Tests', Test.FAIL)):
-        elts = byCode.get(code)
-        if not elts:
-            continue
-        print '*'*20
-        print '%s (%d):' % (title, len(elts))
-        for t in elts:
-            print '    %s' % t.getFullName()
-        print
-
-    if opts.timeTests:
-        # Collate, in case we repeated tests.
-        times = {}
-        for t in tests:
-            key = t.getFullName()
-            times[key] = times.get(key, 0.) + t.elapsed
-
-        byTime = list(times.items())
-        byTime.sort(key = lambda (name,elapsed): elapsed)
-        if byTime:
-            Util.printHistogram(byTime, title='Tests')
-
-    for name,code in (('Expected Passes    ', Test.PASS),
-                      ('Expected Failures  ', Test.XFAIL),
-                      ('Unsupported Tests  ', Test.UNSUPPORTED),
-                      ('Unresolved Tests   ', Test.UNRESOLVED),
-                      ('Unexpected Passes  ', Test.XPASS),
-                      ('Unexpected Failures', Test.FAIL),):
-        if opts.quiet and not code.isFailure:
-            continue
-        N = len(byCode.get(code,[]))
-        if N:
-            print '  %s: %d' % (name,N)
-
-    # If we encountered any additional errors, exit abnormally.
-    if litConfig.numErrors:
-        print >>sys.stderr, '\n%d error(s), exiting.' % litConfig.numErrors
-        sys.exit(2)
-
-    # Warn about warnings.
-    if litConfig.numWarnings:
-        print >>sys.stderr, '\n%d warning(s) in tests.' % litConfig.numWarnings
-
-    if hasFailures:
-        sys.exit(1)
-    sys.exit(0)
-
 if __name__=='__main__':
-    # Bump the GIL check interval, its more important to get any one thread to a
-    # blocking operation (hopefully exec) than to try and unblock other threads.
-    import sys
-    sys.setcheckinterval(1000)
-    main()
+    import lit
+    lit.main()
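The driver removed above hands tests to worker threads through a small synchronized provider with an optional wall-clock budget. A minimal Python 3 sketch of that `TestProvider` idea (names and the `with`-based locking are modernized, not the deleted code verbatim):

```python
import threading
import time

class TestProvider:
    """Hand out work items to worker threads until done or out of time.

    Workers call get() in a loop and stop when it returns None, either
    because the iterator is exhausted or the time budget has elapsed.
    """
    def __init__(self, tests, max_time=None):
        self.max_time = max_time
        self.start_time = time.time()
        self.iter = iter(tests)
        self.lock = threading.Lock()

    def get(self):
        # Check the global deadline before taking another item.
        if (self.max_time is not None and
                time.time() - self.start_time > self.max_time):
            return None
        # Serialize access to the shared iterator across threads.
        with self.lock:
            return next(self.iter, None)
```

Checking the deadline in `get()` rather than per-thread means one shared clock cleanly stops all workers, at the cost of letting an already-started test run to completion.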
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests.ObjDir/lit.site.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests.ObjDir/lit.site.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests.ObjDir/lit.site.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests.ObjDir/lit.site.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/fsyntax-only.c b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/Clang/fsyntax-only.c
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/Clang/fsyntax-only.c
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/Clang/fsyntax-only.c
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/lit.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/Clang/lit.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/Clang/lit.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/Clang/lit.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/lit.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/lit.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/site.exp b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/site.exp
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/site.exp
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.InTree/test/site.exp
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/Foo/lit.local.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/obj/test/Foo/lit.local.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/Foo/lit.local.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/obj/test/Foo/lit.local.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/ShExternal/lit.local.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/ShExternal/lit.local.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/ShExternal/lit.local.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/ShExternal/lit.local.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/ShInternal/lit.local.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/ShInternal/lit.local.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/ShInternal/lit.local.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/ShInternal/lit.local.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/lit.local.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/TclTest/lit.local.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/lit.local.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/TclTest/lit.local.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/stderr-pipe.ll b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/TclTest/stderr-pipe.ll
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/stderr-pipe.ll
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/TclTest/stderr-pipe.ll
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/tcl-redir-1.ll b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/TclTest/tcl-redir-1.ll
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/tcl-redir-1.ll
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/TclTest/tcl-redir-1.ll
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/fail.c b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/fail.c
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/fail.c
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/fail.c
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/lit.cfg b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/lit.cfg
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/lit.cfg
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/lit.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/pass.c b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/pass.c
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/pass.c
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/pass.c
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/xfail.c b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/xfail.c
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/xfail.c
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/xfail.c
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/xpass.c b/libclamav/c++/llvm/utils/lit/lit/ExampleTests/xpass.c
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ExampleTests/xpass.c
rename to libclamav/c++/llvm/utils/lit/lit/ExampleTests/xpass.c
diff --git a/libclamav/c++/llvm/utils/lit/LitConfig.py b/libclamav/c++/llvm/utils/lit/lit/LitConfig.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/LitConfig.py
rename to libclamav/c++/llvm/utils/lit/lit/LitConfig.py
diff --git a/libclamav/c++/llvm/utils/lit/LitFormats.py b/libclamav/c++/llvm/utils/lit/lit/LitFormats.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/LitFormats.py
rename to libclamav/c++/llvm/utils/lit/lit/LitFormats.py
diff --git a/libclamav/c++/llvm/utils/lit/ProgressBar.py b/libclamav/c++/llvm/utils/lit/lit/ProgressBar.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ProgressBar.py
rename to libclamav/c++/llvm/utils/lit/lit/ProgressBar.py
diff --git a/libclamav/c++/llvm/utils/lit/ShCommands.py b/libclamav/c++/llvm/utils/lit/lit/ShCommands.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ShCommands.py
rename to libclamav/c++/llvm/utils/lit/lit/ShCommands.py
diff --git a/libclamav/c++/llvm/utils/lit/ShUtil.py b/libclamav/c++/llvm/utils/lit/lit/ShUtil.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/ShUtil.py
rename to libclamav/c++/llvm/utils/lit/lit/ShUtil.py
diff --git a/libclamav/c++/llvm/utils/lit/TclUtil.py b/libclamav/c++/llvm/utils/lit/lit/TclUtil.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/TclUtil.py
rename to libclamav/c++/llvm/utils/lit/lit/TclUtil.py
diff --git a/libclamav/c++/llvm/utils/lit/Test.py b/libclamav/c++/llvm/utils/lit/lit/Test.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/Test.py
rename to libclamav/c++/llvm/utils/lit/lit/Test.py
diff --git a/libclamav/c++/llvm/utils/lit/lit/TestFormats.py b/libclamav/c++/llvm/utils/lit/lit/TestFormats.py
new file mode 100644
index 0000000..5dfd54a
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/lit/TestFormats.py
@@ -0,0 +1,189 @@
+import os
+
+import Test
+import TestRunner
+import Util
+
+class GoogleTest(object):
+    def __init__(self, test_sub_dir, test_suffix):
+        self.test_sub_dir = str(test_sub_dir)
+        self.test_suffix = str(test_suffix)
+
+    def getGTestTests(self, path, litConfig):
+        """getGTestTests(path) - [name]
+        
+        Return the tests available in gtest executable."""
+
+        try:
+            lines = Util.capture([path, '--gtest_list_tests']).split('\n')
+        except:
+            litConfig.error("unable to discover google-tests in %r" % path)
+            raise StopIteration
+
+        nested_tests = []
+        for ln in lines:
+            if not ln.strip():
+                continue
+
+            prefix = ''
+            index = 0
+            while ln[index*2:index*2+2] == '  ':
+                index += 1
+            while len(nested_tests) > index:
+                nested_tests.pop()
+            
+            ln = ln[index*2:]
+            if ln.endswith('.'):
+                nested_tests.append(ln)
+            else:
+                yield ''.join(nested_tests) + ln
+
+    def getTestsInDirectory(self, testSuite, path_in_suite,
+                            litConfig, localConfig):
+        source_path = testSuite.getSourcePath(path_in_suite)
+        for filename in os.listdir(source_path):
+            # Check for the one subdirectory (build directory) tests will be in.
+            if filename != self.test_sub_dir:
+                continue
+
+            filepath = os.path.join(source_path, filename)
+            for subfilename in os.listdir(filepath):
+                if subfilename.endswith(self.test_suffix):
+                    execpath = os.path.join(filepath, subfilename)
+
+                    # Discover the tests in this executable.
+                    for name in self.getGTestTests(execpath, litConfig):
+                        testPath = path_in_suite + (filename, subfilename, name)
+                        yield Test.Test(testSuite, testPath, localConfig)
+
+    def execute(self, test, litConfig):
+        testPath,testName = os.path.split(test.getSourcePath())
+        while not os.path.exists(testPath):
+            # Handle GTest parametrized and typed tests, whose name includes
+            # some '/'s.
+            testPath, namePrefix = os.path.split(testPath)
+            testName = os.path.join(namePrefix, testName)
+
+        cmd = [testPath, '--gtest_filter=' + testName]
+        out, err, exitCode = TestRunner.executeCommand(cmd)
+            
+        if not exitCode:
+            return Test.PASS,''
+
+        return Test.FAIL, out + err
+
+###
+
+class FileBasedTest(object):
+    def getTestsInDirectory(self, testSuite, path_in_suite,
+                            litConfig, localConfig):
+        source_path = testSuite.getSourcePath(path_in_suite)
+        for filename in os.listdir(source_path):
+            filepath = os.path.join(source_path, filename)
+            if not os.path.isdir(filepath):
+                base,ext = os.path.splitext(filename)
+                if ext in localConfig.suffixes:
+                    yield Test.Test(testSuite, path_in_suite + (filename,),
+                                    localConfig)
+
+class ShTest(FileBasedTest):
+    def __init__(self, execute_external = False):
+        self.execute_external = execute_external
+
+    def execute(self, test, litConfig):
+        return TestRunner.executeShTest(test, litConfig,
+                                        self.execute_external)
+
+class TclTest(FileBasedTest):
+    def execute(self, test, litConfig):
+        return TestRunner.executeTclTest(test, litConfig)
+
+###
+
+import re
+import tempfile
+
+class OneCommandPerFileTest:
+    # FIXME: Refactor into generic test for running some command on a directory
+    # of inputs.
+
+    def __init__(self, command, dir, recursive=False,
+                 pattern=".*", useTempInput=False):
+        if isinstance(command, str):
+            self.command = [command]
+        else:
+            self.command = list(command)
+        self.dir = str(dir)
+        self.recursive = bool(recursive)
+        self.pattern = re.compile(pattern)
+        self.useTempInput = useTempInput
+
+    def getTestsInDirectory(self, testSuite, path_in_suite,
+                            litConfig, localConfig):
+        for dirname,subdirs,filenames in os.walk(self.dir):
+            if not self.recursive:
+                subdirs[:] = []
+
+            subdirs[:] = [d for d in subdirs
+                          if (d != '.svn' and
+                              d not in localConfig.excludes)]
+
+            for filename in filenames:
+                if (not self.pattern.match(filename) or
+                    filename in localConfig.excludes):
+                    continue
+
+                path = os.path.join(dirname,filename)
+                suffix = path[len(self.dir):]
+                if suffix.startswith(os.sep):
+                    suffix = suffix[1:]
+                test = Test.Test(testSuite,
+                                 path_in_suite + tuple(suffix.split(os.sep)),
+                                 localConfig)
+                # FIXME: Hack?
+                test.source_path = path
+                yield test
+
+    def createTempInput(self, tmp, test):
+        abstract
+
+    def execute(self, test, litConfig):
+        if test.config.unsupported:
+            return (Test.UNSUPPORTED, 'Test is unsupported')
+
+        cmd = list(self.command)
+
+        # If using temp input, create a temporary file and hand it to the
+        # subclass.
+        if self.useTempInput:
+            tmp = tempfile.NamedTemporaryFile(suffix='.cpp')
+            self.createTempInput(tmp, test)
+            tmp.flush()
+            cmd.append(tmp.name)
+        else:
+            cmd.append(test.source_path)
+
+        out, err, exitCode = TestRunner.executeCommand(cmd)
+
+        diags = out + err
+        if not exitCode and not diags.strip():
+            return Test.PASS,''
+
+        # Try to include some useful information.
+        report = """Command: %s\n""" % ' '.join(["'%s'" % a
+                                                 for a in cmd])
+        if self.useTempInput:
+            report += """Temporary File: %s\n""" % tmp.name
+            report += """--\n%s--\n""" % open(tmp.name).read()
+        report += """Output:\n--\n%s--""" % diags
+
+        return Test.FAIL, report
+
+class SyntaxCheckTest(OneCommandPerFileTest):
+    def __init__(self, compiler, dir, extra_cxx_args=[], *args, **kwargs):
+        cmd = [compiler, '-x', 'c++', '-fsyntax-only'] + extra_cxx_args
+        OneCommandPerFileTest.__init__(self, cmd, dir,
+                                       useTempInput=1, *args, **kwargs)
+
+    def createTempInput(self, tmp, test):
+        print >>tmp, '#include "%s"' % test.source_path
diff --git a/libclamav/c++/llvm/utils/lit/TestRunner.py b/libclamav/c++/llvm/utils/lit/lit/TestRunner.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/TestRunner.py
rename to libclamav/c++/llvm/utils/lit/lit/TestRunner.py
diff --git a/libclamav/c++/llvm/utils/lit/TestingConfig.py b/libclamav/c++/llvm/utils/lit/lit/TestingConfig.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/TestingConfig.py
rename to libclamav/c++/llvm/utils/lit/lit/TestingConfig.py
diff --git a/libclamav/c++/llvm/utils/lit/Util.py b/libclamav/c++/llvm/utils/lit/lit/Util.py
similarity index 100%
rename from libclamav/c++/llvm/utils/lit/Util.py
rename to libclamav/c++/llvm/utils/lit/lit/Util.py
diff --git a/libclamav/c++/llvm/utils/lit/lit/__init__.py b/libclamav/c++/llvm/utils/lit/lit/__init__.py
new file mode 100644
index 0000000..0102602
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/lit/__init__.py
@@ -0,0 +1,10 @@
+"""'lit' Testing Tool"""
+
+from lit import main
+
+__author__ = 'Daniel Dunbar'
+__email__ = 'daniel at zuster.org'
+__versioninfo__ = (0, 1, 0)
+__version__ = '.'.join(map(str, __versioninfo__))
+
+__all__ = []
diff --git a/libclamav/c++/llvm/utils/lit/lit/lit.py b/libclamav/c++/llvm/utils/lit/lit/lit.py
new file mode 100755
index 0000000..f1f19c4
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/lit/lit.py
@@ -0,0 +1,579 @@
+#!/usr/bin/env python
+
+"""
+lit - LLVM Integrated Tester.
+
+See lit.pod for more information.
+"""
+
+import math, os, platform, random, re, sys, time, threading, traceback
+
+import ProgressBar
+import TestRunner
+import Util
+
+from TestingConfig import TestingConfig
+import LitConfig
+import Test
+
+# Configuration files to look for when discovering test suites. These can be
+# overridden with --config-prefix.
+#
+# FIXME: Rename to 'config.lit', 'site.lit', and 'local.lit' ?
+gConfigName = 'lit.cfg'
+gSiteConfigName = 'lit.site.cfg'
+
+kLocalConfigName = 'lit.local.cfg'
+
+class TestingProgressDisplay:
+    def __init__(self, opts, numTests, progressBar=None):
+        self.opts = opts
+        self.numTests = numTests
+        self.current = None
+        self.lock = threading.Lock()
+        self.progressBar = progressBar
+        self.completed = 0
+
+    def update(self, test):
+        # Avoid locking overhead in quiet mode
+        if self.opts.quiet and not test.result.isFailure:
+            self.completed += 1
+            return
+
+        # Output lock.
+        self.lock.acquire()
+        try:
+            self.handleUpdate(test)
+        finally:
+            self.lock.release()
+
+    def finish(self):
+        if self.progressBar:
+            self.progressBar.clear()
+        elif self.opts.quiet:
+            pass
+        elif self.opts.succinct:
+            sys.stdout.write('\n')
+
+    def handleUpdate(self, test):
+        self.completed += 1
+        if self.progressBar:
+            self.progressBar.update(float(self.completed)/self.numTests,
+                                    test.getFullName())
+
+        if self.opts.succinct and not test.result.isFailure:
+            return
+
+        if self.progressBar:
+            self.progressBar.clear()
+
+        print '%s: %s (%d of %d)' % (test.result.name, test.getFullName(),
+                                     self.completed, self.numTests)
+
+        if test.result.isFailure and self.opts.showOutput:
+            print "%s TEST '%s' FAILED %s" % ('*'*20, test.getFullName(),
+                                              '*'*20)
+            print test.output
+            print "*" * 20
+
+        sys.stdout.flush()
+
+class TestProvider:
+    def __init__(self, tests, maxTime):
+        self.maxTime = maxTime
+        self.iter = iter(tests)
+        self.lock = threading.Lock()
+        self.startTime = time.time()
+
+    def get(self):
+        # Check if we have run out of time.
+        if self.maxTime is not None:
+            if time.time() - self.startTime > self.maxTime:
+                return None
+
+        # Otherwise take the next test.
+        self.lock.acquire()
+        try:
+            item = self.iter.next()
+        except StopIteration:
+            item = None
+        self.lock.release()
+        return item
+
+class Tester(threading.Thread):
+    def __init__(self, litConfig, provider, display):
+        threading.Thread.__init__(self)
+        self.litConfig = litConfig
+        self.provider = provider
+        self.display = display
+
+    def run(self):
+        while 1:
+            item = self.provider.get()
+            if item is None:
+                break
+            self.runTest(item)
+
+    def runTest(self, test):
+        result = None
+        startTime = time.time()
+        try:
+            result, output = test.config.test_format.execute(test,
+                                                             self.litConfig)
+        except KeyboardInterrupt:
+            # This is a sad hack. Unfortunately subprocess goes
+            # bonkers with ctrl-c and we start forking merrily.
+            print '\nCtrl-C detected, goodbye.'
+            os.kill(0,9)
+        except:
+            if self.litConfig.debug:
+                raise
+            result = Test.UNRESOLVED
+            output = 'Exception during script execution:\n'
+            output += traceback.format_exc()
+            output += '\n'
+        elapsed = time.time() - startTime
+
+        test.setResult(result, output, elapsed)
+        self.display.update(test)
+
+def dirContainsTestSuite(path):
+    cfgpath = os.path.join(path, gSiteConfigName)
+    if os.path.exists(cfgpath):
+        return cfgpath
+    cfgpath = os.path.join(path, gConfigName)
+    if os.path.exists(cfgpath):
+        return cfgpath
+
+def getTestSuite(item, litConfig, cache):
+    """getTestSuite(item, litConfig, cache) -> (suite, relative_path)
+
+    Find the test suite containing @arg item.
+
+    @retval (None, ...) - Indicates no test suite contains @arg item.
+    @retval (suite, relative_path) - The suite that @arg item is in, and its
+    relative path inside that suite.
+    """
+    def search1(path):
+        # Check for a site config or a lit config.
+        cfgpath = dirContainsTestSuite(path)
+
+        # If we didn't find a config file, keep looking.
+        if not cfgpath:
+            parent,base = os.path.split(path)
+            if parent == path:
+                return (None, ())
+
+            ts, relative = search(parent)
+            return (ts, relative + (base,))
+
+        # We found a config file, load it.
+        if litConfig.debug:
+            litConfig.note('loading suite config %r' % cfgpath)
+
+        cfg = TestingConfig.frompath(cfgpath, None, litConfig, mustExist = True)
+        source_root = os.path.realpath(cfg.test_source_root or path)
+        exec_root = os.path.realpath(cfg.test_exec_root or path)
+        return Test.TestSuite(cfg.name, source_root, exec_root, cfg), ()
+
+    def search(path):
+        # Check for an already instantiated test suite.
+        res = cache.get(path)
+        if res is None:
+            cache[path] = res = search1(path)
+        return res
+
+    # Canonicalize the path.
+    item = os.path.realpath(item)
+
+    # Skip files and virtual components.
+    components = []
+    while not os.path.isdir(item):
+        parent,base = os.path.split(item)
+        if parent == item:
+            return (None, ())
+        components.append(base)
+        item = parent
+    components.reverse()
+
+    ts, relative = search(item)
+    return ts, tuple(relative + tuple(components))
+
+def getLocalConfig(ts, path_in_suite, litConfig, cache):
+    def search1(path_in_suite):
+        # Get the parent config.
+        if not path_in_suite:
+            parent = ts.config
+        else:
+            parent = search(path_in_suite[:-1])
+
+        # Load the local configuration.
+        source_path = ts.getSourcePath(path_in_suite)
+        cfgpath = os.path.join(source_path, kLocalConfigName)
+        if litConfig.debug:
+            litConfig.note('loading local config %r' % cfgpath)
+        return TestingConfig.frompath(cfgpath, parent, litConfig,
+                                    mustExist = False,
+                                    config = parent.clone(cfgpath))
+
+    def search(path_in_suite):
+        key = (ts, path_in_suite)
+        res = cache.get(key)
+        if res is None:
+            cache[key] = res = search1(path_in_suite)
+        return res
+
+    return search(path_in_suite)
+
+def getTests(path, litConfig, testSuiteCache, localConfigCache):
+    # Find the test suite for this input and its relative path.
+    ts,path_in_suite = getTestSuite(path, litConfig, testSuiteCache)
+    if ts is None:
+        litConfig.warning('unable to find test suite for %r' % path)
+        return (),()
+
+    if litConfig.debug:
+        litConfig.note('resolved input %r to %r::%r' % (path, ts.name,
+                                                        path_in_suite))
+
+    return ts, getTestsInSuite(ts, path_in_suite, litConfig,
+                               testSuiteCache, localConfigCache)
+
+def getTestsInSuite(ts, path_in_suite, litConfig,
+                    testSuiteCache, localConfigCache):
+    # Check that the source path exists (errors here are reported by the
+    # caller).
+    source_path = ts.getSourcePath(path_in_suite)
+    if not os.path.exists(source_path):
+        return
+
+    # Check if the user named a test directly.
+    if not os.path.isdir(source_path):
+        lc = getLocalConfig(ts, path_in_suite[:-1], litConfig, localConfigCache)
+        yield Test.Test(ts, path_in_suite, lc)
+        return
+
+    # Otherwise we have a directory to search for tests, start by getting the
+    # local configuration.
+    lc = getLocalConfig(ts, path_in_suite, litConfig, localConfigCache)
+
+    # Search for tests.
+    for res in lc.test_format.getTestsInDirectory(ts, path_in_suite,
+                                                  litConfig, lc):
+        yield res
+
+    # Search subdirectories.
+    for filename in os.listdir(source_path):
+        # FIXME: This doesn't belong here?
+        if filename in ('Output', '.svn') or filename in lc.excludes:
+            continue
+
+        # Ignore non-directories.
+        file_sourcepath = os.path.join(source_path, filename)
+        if not os.path.isdir(file_sourcepath):
+            continue
+
+        # Check for nested test suites, first in the execpath in case there is a
+        # site configuration and then in the source path.
+        file_execpath = ts.getExecPath(path_in_suite + (filename,))
+        if dirContainsTestSuite(file_execpath):
+            sub_ts, subiter = getTests(file_execpath, litConfig,
+                                       testSuiteCache, localConfigCache)
+        elif dirContainsTestSuite(file_sourcepath):
+            sub_ts, subiter = getTests(file_sourcepath, litConfig,
+                                       testSuiteCache, localConfigCache)
+        else:
+            # Otherwise, continue loading from inside this test suite.
+            subiter = getTestsInSuite(ts, path_in_suite + (filename,),
+                                      litConfig, testSuiteCache,
+                                      localConfigCache)
+            sub_ts = None
+
+        N = 0
+        for res in subiter:
+            N += 1
+            yield res
+        if sub_ts and not N:
+            litConfig.warning('test suite %r contained no tests' % sub_ts.name)
+
+def runTests(numThreads, litConfig, provider, display):
+    # If only using one testing thread, don't use threads at all; this lets us
+    # profile, among other things.
+    if numThreads == 1:
+        t = Tester(litConfig, provider, display)
+        t.run()
+        return
+
+    # Otherwise spin up the testing threads and wait for them to finish.
+    testers = [Tester(litConfig, provider, display)
+               for i in range(numThreads)]
+    for t in testers:
+        t.start()
+    try:
+        for t in testers:
+            t.join()
+    except KeyboardInterrupt:
+        sys.exit(2)
+
+def main():
+    # Bump the GIL check interval; it's more important to get any one thread to a
+    # blocking operation (hopefully exec) than to try and unblock other threads.
+    #
+    # FIXME: This is a hack.
+    import sys
+    sys.setcheckinterval(1000)
+
+    global options
+    from optparse import OptionParser, OptionGroup
+    parser = OptionParser("usage: %prog [options] {file-or-path}")
+
+    parser.add_option("-j", "--threads", dest="numThreads", metavar="N",
+                      help="Number of testing threads",
+                      type=int, action="store", default=None)
+    parser.add_option("", "--config-prefix", dest="configPrefix",
+                      metavar="NAME", help="Prefix for 'lit' config files",
+                      action="store", default=None)
+    parser.add_option("", "--param", dest="userParameters",
+                      metavar="NAME=VAL",
+                      help="Add 'NAME' = 'VAL' to the user defined parameters",
+                      type=str, action="append", default=[])
+
+    group = OptionGroup(parser, "Output Format")
+    # FIXME: I find these names very confusing, although I like the
+    # functionality.
+    group.add_option("-q", "--quiet", dest="quiet",
+                     help="Suppress no error output",
+                     action="store_true", default=False)
+    group.add_option("-s", "--succinct", dest="succinct",
+                     help="Reduce amount of output",
+                     action="store_true", default=False)
+    group.add_option("-v", "--verbose", dest="showOutput",
+                     help="Show all test output",
+                     action="store_true", default=False)
+    group.add_option("", "--no-progress-bar", dest="useProgressBar",
+                     help="Do not use curses based progress bar",
+                     action="store_false", default=True)
+    parser.add_option_group(group)
+
+    group = OptionGroup(parser, "Test Execution")
+    group.add_option("", "--path", dest="path",
+                     help="Additional paths to add to testing environment",
+                     action="append", type=str, default=[])
+    group.add_option("", "--vg", dest="useValgrind",
+                     help="Run tests under valgrind",
+                     action="store_true", default=False)
+    group.add_option("", "--vg-arg", dest="valgrindArgs", metavar="ARG",
+                     help="Specify an extra argument for valgrind",
+                     type=str, action="append", default=[])
+    group.add_option("", "--time-tests", dest="timeTests",
+                     help="Track elapsed wall time for each test",
+                     action="store_true", default=False)
+    group.add_option("", "--no-execute", dest="noExecute",
+                     help="Don't execute any tests (assume PASS)",
+                     action="store_true", default=False)
+    parser.add_option_group(group)
+
+    group = OptionGroup(parser, "Test Selection")
+    group.add_option("", "--max-tests", dest="maxTests", metavar="N",
+                     help="Maximum number of tests to run",
+                     action="store", type=int, default=None)
+    group.add_option("", "--max-time", dest="maxTime", metavar="N",
+                     help="Maximum time to spend testing (in seconds)",
+                     action="store", type=float, default=None)
+    group.add_option("", "--shuffle", dest="shuffle",
+                     help="Run tests in random order",
+                     action="store_true", default=False)
+    parser.add_option_group(group)
+
+    group = OptionGroup(parser, "Debug and Experimental Options")
+    group.add_option("", "--debug", dest="debug",
+                      help="Enable debugging (for 'lit' development)",
+                      action="store_true", default=False)
+    group.add_option("", "--show-suites", dest="showSuites",
+                      help="Show discovered test suites",
+                      action="store_true", default=False)
+    group.add_option("", "--no-tcl-as-sh", dest="useTclAsSh",
+                      help="Don't run Tcl scripts using 'sh'",
+                      action="store_false", default=True)
+    group.add_option("", "--repeat", dest="repeatTests", metavar="N",
+                      help="Repeat tests N times (for timing)",
+                      action="store", default=None, type=int)
+    parser.add_option_group(group)
+
+    (opts, args) = parser.parse_args()
+
+    if not args:
+        parser.error('No inputs specified')
+
+    if opts.configPrefix is not None:
+        global gConfigName, gSiteConfigName
+        gConfigName = '%s.cfg' % opts.configPrefix
+        gSiteConfigName = '%s.site.cfg' % opts.configPrefix
+
+    if opts.numThreads is None:
+        opts.numThreads = Util.detectCPUs()
+
+    inputs = args
+
+    # Create the user defined parameters.
+    userParams = {}
+    for entry in opts.userParameters:
+        if '=' not in entry:
+            name,val = entry,''
+        else:
+            name,val = entry.split('=', 1)
+        userParams[name] = val
+
+    # Create the global config object.
+    litConfig = LitConfig.LitConfig(progname = os.path.basename(sys.argv[0]),
+                                    path = opts.path,
+                                    quiet = opts.quiet,
+                                    useValgrind = opts.useValgrind,
+                                    valgrindArgs = opts.valgrindArgs,
+                                    useTclAsSh = opts.useTclAsSh,
+                                    noExecute = opts.noExecute,
+                                    debug = opts.debug,
+                                    isWindows = (platform.system()=='Windows'),
+                                    params = userParams)
+
+    # Load the tests from the inputs.
+    tests = []
+    testSuiteCache = {}
+    localConfigCache = {}
+    for input in inputs:
+        prev = len(tests)
+        tests.extend(getTests(input, litConfig,
+                              testSuiteCache, localConfigCache)[1])
+        if prev == len(tests):
+            litConfig.warning('input %r contained no tests' % input)
+
+    # If there were any errors during test discovery, exit now.
+    if litConfig.numErrors:
+        print >>sys.stderr, '%d errors, exiting.' % litConfig.numErrors
+        sys.exit(2)
+
+    if opts.showSuites:
+        suitesAndTests = dict([(ts,[])
+                               for ts,_ in testSuiteCache.values()
+                               if ts])
+        for t in tests:
+            suitesAndTests[t.suite].append(t)
+
+        print '-- Test Suites --'
+        suitesAndTests = suitesAndTests.items()
+        suitesAndTests.sort(key = lambda (ts,_): ts.name)
+        for ts,ts_tests in suitesAndTests:
+            print '  %s - %d tests' %(ts.name, len(ts_tests))
+            print '    Source Root: %s' % ts.source_root
+            print '    Exec Root  : %s' % ts.exec_root
+
+    # Select and order the tests.
+    numTotalTests = len(tests)
+    if opts.shuffle:
+        random.shuffle(tests)
+    else:
+        tests.sort(key = lambda t: t.getFullName())
+    if opts.maxTests is not None:
+        tests = tests[:opts.maxTests]
+
+    extra = ''
+    if len(tests) != numTotalTests:
+        extra = ' of %d' % numTotalTests
+    header = '-- Testing: %d%s tests, %d threads --'%(len(tests),extra,
+                                                      opts.numThreads)
+
+    if opts.repeatTests:
+        tests = [t.copyWithIndex(i)
+                 for t in tests
+                 for i in range(opts.repeatTests)]
+
+    progressBar = None
+    if not opts.quiet:
+        if opts.succinct and opts.useProgressBar:
+            try:
+                tc = ProgressBar.TerminalController()
+                progressBar = ProgressBar.ProgressBar(tc, header)
+            except ValueError:
+                print header
+                progressBar = ProgressBar.SimpleProgressBar('Testing: ')
+        else:
+            print header
+
+    # Don't create more threads than tests.
+    opts.numThreads = min(len(tests), opts.numThreads)
+
+    startTime = time.time()
+    display = TestingProgressDisplay(opts, len(tests), progressBar)
+    provider = TestProvider(tests, opts.maxTime)
+    runTests(opts.numThreads, litConfig, provider, display)
+    display.finish()
+
+    if not opts.quiet:
+        print 'Testing Time: %.2fs'%(time.time() - startTime)
+
+    # Update results for any tests which weren't run.
+    for t in tests:
+        if t.result is None:
+            t.setResult(Test.UNRESOLVED, '', 0.0)
+
+    # List test results organized by kind.
+    hasFailures = False
+    byCode = {}
+    for t in tests:
+        if t.result not in byCode:
+            byCode[t.result] = []
+        byCode[t.result].append(t)
+        if t.result.isFailure:
+            hasFailures = True
+
+    # FIXME: Show unresolved and (optionally) unsupported tests.
+    for title,code in (('Unexpected Passing Tests', Test.XPASS),
+                       ('Failing Tests', Test.FAIL)):
+        elts = byCode.get(code)
+        if not elts:
+            continue
+        print '*'*20
+        print '%s (%d):' % (title, len(elts))
+        for t in elts:
+            print '    %s' % t.getFullName()
+        print
+
+    if opts.timeTests:
+        # Collate, in case we repeated tests.
+        times = {}
+        for t in tests:
+            key = t.getFullName()
+            times[key] = times.get(key, 0.) + t.elapsed
+
+        byTime = list(times.items())
+        byTime.sort(key = lambda (name,elapsed): elapsed)
+        if byTime:
+            Util.printHistogram(byTime, title='Tests')
+
+    for name,code in (('Expected Passes    ', Test.PASS),
+                      ('Expected Failures  ', Test.XFAIL),
+                      ('Unsupported Tests  ', Test.UNSUPPORTED),
+                      ('Unresolved Tests   ', Test.UNRESOLVED),
+                      ('Unexpected Passes  ', Test.XPASS),
+                      ('Unexpected Failures', Test.FAIL),):
+        if opts.quiet and not code.isFailure:
+            continue
+        N = len(byCode.get(code,[]))
+        if N:
+            print '  %s: %d' % (name,N)
+
+    # If we encountered any additional errors, exit abnormally.
+    if litConfig.numErrors:
+        print >>sys.stderr, '\n%d error(s), exiting.' % litConfig.numErrors
+        sys.exit(2)
+
+    # Warn about warnings.
+    if litConfig.numWarnings:
+        print >>sys.stderr, '\n%d warning(s) in tests.' % litConfig.numWarnings
+
+    if hasFailures:
+        sys.exit(1)
+    sys.exit(0)
+
+if __name__=='__main__':
+    main()
diff --git a/libclamav/c++/llvm/utils/lit/setup.py b/libclamav/c++/llvm/utils/lit/setup.py
new file mode 100644
index 0000000..e6ae3d8
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/setup.py
@@ -0,0 +1,69 @@
+import lit
+
+# FIXME: Support distutils?
+from setuptools import setup, find_packages
+setup(
+    name = "Lit",
+    version = lit.__version__,
+
+    author = lit.__author__,
+    author_email = lit.__email__,
+    url = 'http://llvm.org',
+    license = 'BSD',
+
+    description = "A Software Testing Tool",
+    keywords = 'test C++ automatic discovery',
+    long_description = """\
+Lit
++++
+
+About
+=====
+
+Lit is a portable tool for executing LLVM and Clang style test suites,
+summarizing their results, and providing indication of failures. Lit is designed
+to be a lightweight testing tool with as simple a user interface as possible.
+
+
+Features
+========
+
+ * Portable!
+ * Flexible test discovery.
+ * Parallel test execution.
+ * Support for multiple test formats and test suite designs.
+
+
+Documentation
+=============
+
+The official Lit documentation is in the man page, available online in the `LLVM
+Command Guide <http://llvm.org/cmds/lit.html>`_.
+
+
+Source
+======
+
+The Lit source is available as part of LLVM, in the `LLVM SVN repository
+<http://llvm.org/svn/llvm-project/llvm/trunk/utils/lit>`_.
+""",
+
+    classifiers=[
+        'Development Status :: 3 - Alpha',
+        'Environment :: Console',
+        'Intended Audience :: Developers',
+        'License :: OSI Approved :: University of Illinois/NCSA Open Source License',
+        'Natural Language :: English',
+        'Operating System :: OS Independent',
+        'Programming Language :: Python',
+        'Topic :: Software Development :: Testing',
+        ],
+
+    zip_safe = False,
+    packages = find_packages(),
+    entry_points = {
+        'console_scripts': [
+            'lit = lit:main',
+            ],
+        }
+)
diff --git a/libclamav/c++/llvm/utils/llvmdo b/libclamav/c++/llvm/utils/llvmdo
index 26f2183..4a7e05a 100755
--- a/libclamav/c++/llvm/utils/llvmdo
+++ b/libclamav/c++/llvm/utils/llvmdo
@@ -112,7 +112,6 @@ files_to_match="\
   -o -name *.intro \
   -o -name *.l \
   -o -name *.ll \
-  -o -name *.llx \
   -o -name *.lst \
   -o -name *.m4 \
   -o -name *.pod \
diff --git a/libclamav/c++/llvm/utils/unittest/googletest/gtest.cc b/libclamav/c++/llvm/utils/unittest/googletest/gtest.cc
index e46e90a..b5a654f 100644
--- a/libclamav/c++/llvm/utils/unittest/googletest/gtest.cc
+++ b/libclamav/c++/llvm/utils/unittest/googletest/gtest.cc
@@ -532,7 +532,7 @@ TypeId GetTestTypeId() {
 
 // The value of GetTestTypeId() as seen from within the Google Test
 // library.  This is solely for testing GetTestTypeId().
-extern const TypeId kTestTypeIdInGoogleTest = GetTestTypeId();
+const TypeId kTestTypeIdInGoogleTest = GetTestTypeId();
 
 // This predicate-formatter checks that 'results' contains a test part
 // failure of the given type and that the failure message contains the
diff --git a/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/gtest-param-test.h b/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/gtest-param-test.h
index 2d63237..0cf05dc 100644
--- a/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/gtest-param-test.h
+++ b/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/gtest-param-test.h
@@ -155,7 +155,6 @@ INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest, ValuesIn(pets));
 
 #include <gtest/internal/gtest-internal.h>
 #include <gtest/internal/gtest-param-util.h>
-#include <gtest/internal/gtest-param-util-generated.h>
 
 namespace testing {
 
@@ -289,6 +288,12 @@ internal::ParamGenerator<typename Container::value_type> ValuesIn(
   return ValuesIn(container.begin(), container.end());
 }
 
+} // namespace testing
+
+#include <gtest/internal/gtest-param-util-generated.h>
+
+namespace testing {
+
 // Values() allows generating tests from explicitly specified list of
 // parameters.
 //

-- 
Debian repository for ClamAV


